CN112232323A - Face verification method and device, computer equipment and storage medium - Google Patents

Face verification method and device, computer equipment and storage medium

Info

Publication number
CN112232323A
CN112232323A (application CN202011468418.7A)
Authority
CN
China
Prior art keywords
face
image
target
depth
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011468418.7A
Other languages
Chinese (zh)
Other versions
CN112232323B (en)
Inventor
陈鑫 (Chen Xin)
郑东 (Zheng Dong)
赵拯 (Zhao Zheng)
赵五岳 (Zhao Wuyue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yufan Intelligent Technology Co ltd
Original Assignee
Universal Ubiquitous Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universal Ubiquitous Technology Co ltd filed Critical Universal Ubiquitous Technology Co ltd
Priority to CN202011468418.7A
Publication of CN112232323A
Application granted
Publication of CN112232323B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50: Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583: Retrieval characterised by using metadata automatically derived from the content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Library & Information Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face verification method, a face verification device, computer equipment and a storage medium, belonging to the field of face recognition. The method comprises: when it is judged that a face image exists in the shot infrared image, judging whether the target to be verified corresponding to the face image is a living body according to the shot infrared image and depth image; when the target to be verified is judged to be a living body, extracting the face image from the shot RGB image; when the face image is judged to meet a first preset processing condition and a second preset processing condition, extracting face key points from the shot RGB image; carrying out face correction on the RGB image, the depth image and the infrared image according to the face key points, and extracting corrected target face features; and comparing the target face features with sample face features of sample candidates in a face database to obtain the verification result corresponding to the target to be verified. The disclosed processing scheme reduces cost and improves the accuracy of face recognition verification.

Description

Face verification method and device, computer equipment and storage medium
Technical Field
The invention relates to the field of face recognition, in particular to a face verification method, a face verification device, computer equipment and a storage medium.
Background
Face recognition, as an important machine vision technology, plays an important role in the field of artificial intelligence. In many application scenarios (such as door locks, access control and gates), the requirements on device power consumption, attack resistance, recognition accuracy and pass rate are high. At present, two cameras are generally adopted for detection: for example, images shot by an RGB camera are used for face recognition, while images shot by an IR camera are used for living body detection. This makes living body detection rely too heavily on the video images fed back by the IR camera; once the camera is attacked, for example when an attacker replaces the video images collected by the camera with pre-recorded ones, living body detection fails or produces errors, bringing loss to the user, so the security is low.
Disclosure of Invention
Therefore, in order to overcome the above-mentioned shortcomings of the prior art, the present invention provides a method, an apparatus, a computer device and a storage medium for face verification, which can reduce the cost and improve the accuracy of face recognition verification.
In order to achieve the above object, the present invention provides a face verification method, including: when it is judged that a face image exists in the shot infrared image, judging whether the target to be verified corresponding to the face image is a living body according to the shot infrared image and the shot depth image; when the target to be verified is judged to be a living body, extracting the face image from the shot RGB image; when the face image is judged to meet a first preset processing condition and a second preset processing condition, extracting face key points from the shot RGB image; respectively carrying out face correction on the RGB image, the depth image and the infrared image according to the face key points, and extracting corrected target face features; and comparing the target face features with sample face features of sample candidates in a face database to obtain the verification result corresponding to the target to be verified.
In one embodiment, the performing, according to the face key points, face correction on the RGB image, the depth image and the infrared image, and extracting corrected target face features includes: performing face correction on the monochrome images corresponding to the three color channels of the RGB image according to the face key points of the RGB image, and extracting RGB target face features of the RGB image; and the comparing the target face features with sample face features of sample candidates in a face database to obtain the verification result corresponding to the target to be verified includes: comparing the RGB target face features with the RGB sample face features of the sample candidates in the face database one by one to obtain a first comparison score corresponding to the target to be verified; and when the first comparison score of the maximum score is judged to be larger than a first threshold, generating a verification result that the target to be verified belongs to the face database.
In one embodiment, the performing, according to the face key points, face correction on the RGB image, the depth image and the infrared image, and extracting corrected target face features further includes: performing face correction, according to the face key points of the depth image and the infrared image, on the three channel images generated from the depth image and on the infrared image, and extracting corrected infrared-depth target face features; and the comparing the target face features with sample face features of sample candidates in a face database to obtain the verification result corresponding to the target to be verified further includes: when the first comparison score of the maximum score is judged to be not larger than the first threshold, sorting the first comparison scores, and sequentially acquiring, from large to small, the infrared-depth sample face features of the sample candidates corresponding to the first comparison scores; calculating a first face similarity score of the target to be verified and the sample candidate according to the infrared-depth target face features, the infrared-depth sample face features and a preset formula; when the first face similarity score of the maximum score is judged to be larger than a second threshold, generating a verification result that the target to be verified belongs to the face database; and when the first face similarity score of the maximum score is judged to be not larger than the second threshold, generating a verification result that the target to be verified does not belong to the face database.
In one embodiment, the method further comprises: when the face image is judged to meet the first preset processing condition but not the second preset processing condition, extracting the face key points of the depth image and the infrared image from the shot RGB image, performing face correction, according to the face key points, on the three channel images generated from the depth image and on the infrared image, and extracting corrected infrared-depth target face features; comparing the infrared-depth target face features with the infrared-depth sample face features of the sample candidates in the face database one by one to obtain a second comparison score corresponding to the target to be verified; and when the second comparison score of the maximum score is judged to be larger than a third threshold, generating a verification result that the target to be verified belongs to the face database.
In one embodiment, the method further comprises: when the second comparison score with the maximum score is judged to be not larger than a third threshold value, the second comparison score is sorted, and RGB sample face features of sample candidates corresponding to the second comparison score are sequentially obtained according to scores from large to small; calculating a second face similarity score of the target to be verified and the sample candidate according to a preset formula, the RGB target face characteristics and RGB sample face characteristics of the sample candidate of a face database; when the second face similarity score with the maximum score is judged to be larger than a fourth threshold value, generating a verification result that the target to be verified belongs to a face database; and when the second face similarity score with the maximum score is judged to be not larger than a fourth threshold value, generating a verification result that the target to be verified does not belong to a face database.
In one embodiment, the determining whether the target to be verified corresponding to the face image is a living body according to the shot infrared image and the shot depth image includes: converting the depth image into three-channel data; respectively cropping the face areas from the infrared image and from the image of the three-channel data according to the face coordinates in the infrared image, to form 4-channel face image data; inputting the 4-channel face image data into a living body identification network, and calculating a living body score; and when the living body score is larger than a living body judgment threshold, using a face position detection unit to confirm the face and judging that the target to be verified corresponding to the face image data is a living body.
In one embodiment, the converting the depth image into three-channel data includes: converting points on the depth image into three-dimensional points in camera coordinates according to a conversion principle of pixel coordinates and camera coordinates, and forming the three-dimensional points into three channels of an infrared image; calculating the normal direction and offset of each pixel point corresponding to the three-dimensional point; iterating the normal direction according to a preset angle threshold value, and calculating a rotation matrix according to the direction to obtain three-channel initial data; normalizing the three-channel initial data into three-channel data with the value of 0-255.
In one embodiment, before determining whether the target to be verified corresponding to the face image is a living body, the method further includes: judging whether the ratio of the brightness of the face area in the infrared image to the number of effective point clouds of the face area in the depth image is within a preset threshold interval or not; when the human face is judged not to be in the preset threshold interval, calculating the brightness difference value between the brightness of the human face area and the preset threshold interval; calculating an infrared ideal exposure time interval for shooting the infrared image according to the brightness difference value; calculating a depth ideal exposure time interval for shooting the depth image according to the brightness difference value and the average distance information of the face area in the depth image; and adjusting the current exposure time length according to the infrared ideal exposure time length interval and the depth ideal exposure time length interval.
In one embodiment, the adjusting the current exposure duration according to the infrared ideal exposure duration interval and the depth ideal exposure duration interval includes: judging whether an intersection exists between the infrared ideal exposure time interval and the depth ideal exposure time interval; when the intersection exists, setting a central point value of the intersection as a target exposure duration, and adjusting the current exposure duration; and when the intersection does not exist, adjusting the identification distance of the camera, and recalculating the infrared ideal exposure time interval and the depth ideal exposure time interval.
The invention also provides a face verification device, comprising: the living body judging module is used for judging whether a target to be verified corresponding to the face image is a living body or not according to the shot infrared image and the shot depth image when the face image is judged to exist in the shot infrared image; the image extraction module is used for extracting the face image from the shot RGB image when the target to be verified is judged to be a living body; the face key point extraction module is used for extracting face key points from the shot RGB image when the face image is judged to accord with a first preset processing condition and a second preset processing condition; the target feature extraction module is used for carrying out face correction on the RGB image, the depth image and the infrared image according to the face key points and extracting corrected target face features; and the verification module is used for comparing the target face characteristics with sample face characteristics of sample candidates in a face database to obtain a verification result corresponding to the target to be verified.
The invention also provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method when executing the computer program.
The invention also provides a computer-readable storage medium, on which a computer program is stored, characterized in that the computer program realizes the steps of the above-mentioned method when being executed by a processor.
Compared with the prior art, the invention has the advantages that: living body judgment and face recognition are carried out by combining the infrared image and the depth image; IR face detection is used to assist in locating the face position in the RGB (red, green and blue) image; and face correction is applied, according to the face key points, to the RGB image, the three channel images generated from the depth image, and the infrared image, so that IR image information and depth information are combined. This reduces the time consumed by RGB face detection, increases the detection rate of RGB face detection, improves the face recognition accuracy, and realizes high-precision anti-counterfeiting and face recognition functions.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a diagram of an application scenario of a face verification method in one embodiment;
FIG. 2 is a flow diagram illustrating a face verification method according to an embodiment;
FIG. 3 is a flow diagram illustrating the face verification steps in one embodiment;
FIG. 4 is a schematic flow chart showing the living body judgment step in one embodiment;
FIG. 5 is a flowchart illustrating a parameter adjustment step according to another embodiment;
FIG. 6 is a block diagram of an embodiment of a face verification device;
FIG. 7 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that aspects may be practiced without these specific details.
The face verification method provided by the application can be applied to the application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The terminal 102 uploads the shot infrared image (IR image), Depth image (Depth image) and RGB image to the server 104 in real time; the server 104 receives the infrared image, the depth image and the RGB image uploaded by the terminal 102; when the server 104 judges that the face image exists in the shot infrared image, the server 104 judges whether the target to be verified corresponding to the face image is a living body according to the shot infrared image and the depth image; when the server 104 determines that the target to be verified is a living body, the server 104 extracts a face image from the shot RGB image; when the server 104 judges that the face image meets the first preset processing condition and the second preset processing condition, the server 104 extracts face key points from the shot RGB image; the server 104 performs face correction on the RGB image, the depth image and the infrared image according to the face key points, and extracts corrected target face features; the server 104 compares the target face features with sample face features of sample candidates in the face database to obtain a verification result corresponding to the target to be verified. The terminal 102 may be, but not limited to, various devices or intelligent devices carrying a single 3D structured light camera or a TOF camera, and the server 104 may be implemented by an independent server or a server cluster formed by a plurality of servers.
In an embodiment, as shown in fig. 2, a face verification method is provided, which is described by taking the application of the method to the server in fig. 1 as an example, and includes the following steps:
step 202, when the face image exists in the shot infrared image, judging whether the target to be verified corresponding to the face image is a living body according to the shot infrared image and the shot depth image.
The server judges whether a face image exists in an infrared image (IR image) shot by a single 3D structured light or TOF camera. In one embodiment, the method for judging whether a human face image exists in an infrared image shot by a single 3D structured light or TOF camera comprises the following steps: carrying out binarization processing on depth distance information in a depth image corresponding to an infrared image and shot by a single 3D structured light or TOF camera according to a preset distance threshold value to obtain a binarized image, and setting a region lower than the preset distance threshold value as 1 and a region higher than the preset distance threshold value as 0; determining a connected region in the binary image, and calculating the area of the connected region; and when the area is larger than a preset area threshold value, judging that a face image exists in the infrared image shot by the single 3D structured light or the TOF camera. In one embodiment, when it is determined that a face image exists in an infrared image shot by a single 3D structured light or TOF camera and the distance from a target to be verified to a device is within a face recognition distance threshold, a face recognition service is awakened; otherwise, continuing to wait. Therefore, the equipment to which the camera belongs can be always in a low power consumption state.
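A minimal sketch of this wake-up check, assuming a millimetre-valued depth map and illustrative threshold values (the patent does not specify numeric values):

```python
import numpy as np
from scipy import ndimage

def face_present(depth_img, dist_thresh_mm=1200, area_thresh_px=4000):
    """Depth-based wake-up check: binarize by distance, test the largest region."""
    # Regions closer than the preset distance threshold are set to 1, the rest to 0.
    binary = (depth_img > 0) & (depth_img < dist_thresh_mm)
    # Determine connected regions in the binarized image.
    labels, num = ndimage.label(binary)
    if num == 0:
        return False
    # Area of the largest connected region, compared against the area threshold.
    largest = np.bincount(labels.ravel())[1:].max()
    return largest > area_thresh_px
```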
And when the human face image exists in the infrared image, the server judges whether the target to be verified corresponding to the human face image is a living body according to the shot infrared image and the shot depth image.
And step 204, when the target to be verified is judged to be a living body, extracting a face image from the shot RGB image.
When the server judges that the target to be verified is a living body, the server extracts a face image from the captured RGB image. The server identifies a face area in the RGB image, regresses face key points for the face area, connects the face key points to generate a face circumscribed frame, and determines the face image according to the face circumscribed frame. In an embodiment, the server may also map the face coordinates in the IR image to the RGB image according to the mapping relationship to obtain a face region in the RGB image, determine whether to turn on the light supplement lamp according to the brightness of the face region, and simultaneously transmit the target region to the ISP to adjust the imaging quality of the target region under the RGB camera, so that the quality of the target image under the RGB camera is optimal. And the server extracts the face image from the re-acquired RGB image.
And step 206, when the face image is judged to accord with the first preset processing condition and the second preset processing condition, extracting face key points from the shot RGB image.
The server can determine the face size according to the face bounding box, calculate the face angles according to the face key points, and count the RGB pixel distribution to obtain the face blur value. The first preset processing condition is: the face size is larger than or equal to max(minimum face recognition threshold, minimum living body detection face threshold); each face angle (p, y, r) lies between the negative angle threshold and the positive angle threshold; and the blur value is smaller than the blur threshold. The second preset processing condition is: the face brightness value lies between the lowest brightness threshold and the highest brightness threshold. When the server judges that the face image meets the first preset processing condition and the second preset processing condition at the same time, the server extracts face key points from the shot RGB image.
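For concreteness, a sketch of these two checks; the parameter grouping and names are assumptions, and the thresholds are configuration values the text leaves open:

```python
def check_preset_conditions(face_size, angles, blur, brightness, cfg):
    """Illustrative check of the first and second preset processing conditions."""
    p, y, r = angles  # pitch, yaw, roll face angles
    min_face = max(cfg["min_recognition_face"], cfg["min_liveness_face"])
    cond_a = (face_size >= min_face
              and all(cfg["neg_angle"] < a < cfg["pos_angle"] for a in (p, y, r))
              and blur < cfg["blur_threshold"])
    cond_b = cfg["min_brightness"] < brightness < cfg["max_brightness"]
    return cond_a, cond_b
```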
And step 208, respectively carrying out face correction on the RGB image, the depth image and the infrared image according to the face key points, and extracting corrected target face features.
And the server respectively corrects the face of the RGB image, the depth image and the infrared image according to the face key points and extracts the corrected target face features. In one embodiment, the server performs face rectification on the monochromatic color images corresponding to the three color channels of the RGB image according to the face key points of the RGB image, and extracts RGB target face features of the RGB image. The server can carry out face correction on the monochromatic color images corresponding to the three color channels of the RGB images according to the face key points of the RGB images to obtain 3-channel face images, and the server extracts RGB target face features of the RGB images by using an RGB face feature extraction model. The RGB face feature extraction model is obtained by training the RGB face pictures of the sample personnel in the face database by adopting a convolution learning algorithm. The RGB face feature extraction model can be obtained by respectively training the face images of the samples of 3 channels, and can also be obtained by simultaneously training the face images of the samples of 3 channels.
Step 210, comparing the target face characteristics with sample face characteristics of sample candidates in the face database to obtain a verification result corresponding to the target to be verified.
And the server compares the target face characteristics with sample face characteristics of sample candidates in the face database to obtain a verification result corresponding to the target to be verified. In one embodiment, the server may compare the RGB target face features with RGB sample face features of sample candidates in the face database one by one to obtain a first comparison score corresponding to the target to be verified. The server determines whether the first comparison score of the maximum score is greater than a first threshold. And when the server judges that the first comparison score of the maximum score is larger than a first threshold value, the server generates a verification result that the target to be verified belongs to the face database.
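As an illustration of this one-by-one comparison, a minimal sketch follows; cosine similarity over L2-normalized features is an assumption, since the text does not fix the comparison metric:

```python
import numpy as np

def first_comparison(rgb_target_feat, rgb_db_feats, first_threshold):
    """Compare the RGB target feature against every database candidate."""
    t = rgb_target_feat / np.linalg.norm(rgb_target_feat)
    db = rgb_db_feats / np.linalg.norm(rgb_db_feats, axis=1, keepdims=True)
    scores = db @ t                       # first comparison score per candidate
    best = int(np.argmax(scores))
    if scores[best] > first_threshold:    # maximum score exceeds the first threshold
        return best, scores               # verified: target belongs to the database
    return None, scores                   # undecided: fall back to IR-depth re-ranking
```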
According to the face verification method, living body judgment and face recognition are carried out by combining the infrared image and the depth image; IR face detection is used to assist in locating the face position in the RGB image; and face correction is applied, according to the face key points, to the RGB image, the three channel images generated from the depth image, and the infrared image, combining IR image information and depth information. This reduces the time consumed by RGB face detection, increases the detection rate of RGB face detection, improves the face recognition accuracy, and realizes high-precision anti-counterfeiting and face recognition functions.
In another embodiment, as shown in fig. 3, step 208 further comprises:
And step 208-2, performing face correction, according to the face key points of the depth image and the infrared image, on the three channel images generated from the depth image and on the infrared image, and extracting the corrected infrared-depth target face features.
The server can also perform face correction, according to the face key points of the depth image and the infrared image, on the three channel images generated from the depth image and on the infrared image, and extract the corrected infrared-depth target face features. The server can perform face correction, according to the face key points, on the three channel images generated from the Depth image and on the infrared image to obtain a 4-channel face image, and extract the corrected infrared-Depth target face features using an IR_Depth face feature extraction model. The IR_Depth face feature extraction model is obtained by training, with a convolution learning algorithm, on the infrared images and Depth images of the sample personnel in the face database. The IR_Depth face feature extraction model may be obtained by training the 4-channel sample face pictures channel by channel, or by training the 4 channels simultaneously.
Step 210 further comprises:
and step 210-2, when the first comparison score of the maximum score is judged to be not larger than the first threshold value, sequencing the first comparison score, and sequentially acquiring the infrared-depth sample facial features of the sample candidate corresponding to the first comparison score according to the scores from large to small.
And when the server judges that the first comparison score of the maximum score is not larger than the first threshold, the server sorts the first comparison scores, takes the topN scores from large to small, and acquires from the face database the infrared-depth sample face features of the sample candidates corresponding to the topN first comparison scores. The infrared-Depth target face features are the face features extracted by the server, using the IR_Depth face feature extraction model, from the IR face of the target to be verified and the HHA-coded Depth image. The infrared-Depth sample face features are the face features used when the server trained the IR_Depth face feature extraction model, taken from the IR faces of the sample personnel and the HHA-coded Depth images.
And step 210-4, calculating a first face similarity score of the target to be verified and the sample candidate according to the infrared-depth target face characteristics, the infrared-depth sample face characteristics and a preset formula.
And the server calculates the first face similarity score of the target to be verified and the sample candidate according to the infrared-depth target face characteristics, the infrared-depth sample face characteristics and a preset formula. The preset formula is:

$$\mathrm{score}_i = 1 - \bigl(1 - \mathrm{score\_rgb}_i\bigr)\,\bigl(1 - \mathrm{score\_ir\_depth}_i \cdot w_i\bigr)$$

where $i$ denotes the $i$-th sample person in topN, $\mathrm{score\_rgb}_i$ and $\mathrm{score\_ir\_depth}_i$ are the RGB and IR_Depth comparison scores for that person, and $w_i$ is the weight of the score of the IR_Depth face feature extraction model.
Step 210-6, when the first face similarity score of the maximum score is judged to be larger than a second threshold value, generating a verification result that the target to be verified belongs to a face database; and when the first face similarity score of the maximum score is judged to be not larger than a second threshold value, generating a verification result that the target to be verified does not belong to the face database.
The server judges whether the first face similarity score with the maximum score is larger than a second threshold value or not, and when the first face similarity score with the maximum score is judged to be larger than the second threshold value, the server generates a verification result that the target to be verified belongs to the face database; and when the first face similarity score of the maximum score is judged to be not larger than a second threshold value, the server generates a verification result that the target to be verified does not belong to the face database.
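Putting steps 210-2 to 210-6 together, the following sketch re-ranks the topN candidates with the fusion formula above; N, the thresholds, the dot-product similarity, and the use of a single weight w (the patent allows a per-candidate weight w_i) are all assumptions:

```python
import numpy as np

def rerank_top_n(first_scores, ird_target, ird_db_feats, w, n=10, second_threshold=0.8):
    """Re-rank the topN first-comparison candidates with the fused similarity."""
    top = np.argsort(first_scores)[::-1][:n]        # topN candidates, scores descending
    best_idx, best_sim = None, -1.0
    for i in top:
        s_ird = float(ird_db_feats[i] @ ird_target)  # IR-depth similarity (assumed metric)
        sim = 1.0 - (1.0 - first_scores[i]) * (1.0 - s_ird * w)
        if sim > best_sim:
            best_idx, best_sim = int(i), sim
    # First face similarity score of the maximum score vs. the second threshold.
    return best_idx if best_sim > second_threshold else None
```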
In another embodiment, the method further comprises:
when the face image is judged to be in accordance with the first preset processing condition but not in accordance with the second preset processing condition, extracting face key points of the depth image and the infrared image from the shot RGB image;
carrying out face correction, according to the face key points, on the three channel images generated from the depth image and on the infrared image, and extracting the corrected infrared-depth target face features;
comparing the infrared-depth target face features with infrared-depth sample face features of sample candidates in a face database one by one to obtain a second comparison score corresponding to a target to be verified;
and when the second comparison score of the maximum score is judged to be larger than a third threshold value, generating a verification result that the target to be verified belongs to the face database.
And when the server judges that the face image meets the first preset processing condition but does not meet the second preset processing condition, the server extracts the face key points from the shot RGB image.
And the server performs face correction, according to the face key points, on the three channel images generated from the depth image and on the infrared image, and extracts the corrected infrared-depth target face features.
The server compares the infrared-depth target face features with the infrared-depth sample face features of the sample candidates in the face database one by one to obtain a second comparison score corresponding to the target to be verified. The server can perform face correction, according to the face key points, on the three channel images generated from the Depth image and on the infrared image to obtain a 4-channel face image, and extract the corrected infrared-Depth target face features using the IR_Depth face feature extraction model. The IR_Depth face feature extraction model is obtained by training, with a convolution learning algorithm, on the infrared images and Depth images of the sample personnel in the face database. Here, the number of infrared images and depth images of sample personnel in the face database is less than N2, with N2 not larger than 20000. For face recognition, the amount of IR_Depth training data is smaller than that of RGB images; although the accuracy of the IR_Depth face feature extraction model is lower than that of the RGB face feature extraction model, it can still ensure good accuracy on a face library with a smaller amount of data.
And when the second comparison score of the maximum score is judged to be larger than a third threshold value, the server generates a verification result that the target to be verified belongs to the face database.
In another embodiment, the method further comprises:
and when the second comparison score with the maximum score is judged to be not larger than the third threshold value, the server sorts the second comparison scores, sequentially obtains the scores of topN according to the scores from large to small, and obtains the RGB sample face characteristics of the sample candidate corresponding to the second comparison score from the face database. The RGB target face features are face features extracted from RGB images of the target to be verified by the server through an RGB face feature extraction model. The RGB sample face features are face features adopted by a server training RGB face feature extraction model from RGB images of sample personnel.
The server can calculate a second face similarity score of the target to be verified and the sample candidate according to a preset formula, the RGB target face characteristics and the RGB sample face characteristics of the sample candidates in the face database. The preset formula is the same fusion:

$$\mathrm{score}_i = 1 - \bigl(1 - \mathrm{score\_rgb}_i\bigr)\,\bigl(1 - \mathrm{score\_ir\_depth}_i \cdot w_i\bigr)$$

where $i$ denotes the $i$-th sample person in topN and $w_i$ is the weight of the score of the IR_Depth face feature extraction model.
When the second face similarity score of the maximum score is judged to be larger than a fourth threshold, the server generates a verification result that the target to be verified belongs to the face database; and when the second face similarity score of the maximum score is judged to be not larger than the fourth threshold, the server generates a verification result that the target to be verified does not belong to the face database. In this case, although the RGB image does not meet the second preset processing condition, the IR_Depth recognition model has already reduced the screening range from the whole database to N candidates, which greatly reduces the recognition difficulty of the RGB model and ensures the screening accuracy.
In one embodiment, as shown in fig. 4, the determining whether the target to be verified corresponding to the face image is a living body includes:
step 402, converting the depth image into three-channel data.
The server may convert the depth image taken by a single 3D structured light or TOF camera into three channel data. In one embodiment, the server may convert points on the depth image shot by the single 3D structured light or TOF camera into three-dimensional points in the camera coordinates according to the conversion principle of the pixel coordinates and the camera coordinates, and form the three-dimensional points into three channels of the infrared image.
Figure 31545DEST_PATH_IMAGE003
Figure 364437DEST_PATH_IMAGE004
Figure 178810DEST_PATH_IMAGE005
. Wherein,Z c corresponding to the original depth map value, (u,v) Is a coordinate point in the image coordinate system (1)u 0 ,v 0 ) F/dx and f/dy are the physical dimension dx in the 1/x-axis direction and the physical dimension dy in the 1/y-axis direction, respectively, which are the central coordinates of the image coordinate system. The server may also calculate the direction and offset of each pixel point to the normal of the three-dimensional point. The server can set a parameter R to get around the point
Figure 188223DEST_PATH_IMAGE006
The point calculates a direction N (a line connecting the origin to the three-dimensional point) corresponding to the normal of the three-dimensional point and an offset b for each pixel point so that
Figure 455256DEST_PATH_IMAGE007
. And the server iterates the normal direction according to a preset angle threshold value, and calculates a rotation matrix according to the direction to obtain three-channel initial data. The server may give two thresholds of 45 ° and 15 °, make 5 iterations, respectively, and find the set of "parallel edges" and "vertical edges" according to the thresholds, and the optimization equation is as follows:
Figure 591839DEST_PATH_IMAGE008
wherein
Figure 260718DEST_PATH_IMAGE009
expressed as the angle between a and b, the definition of the set of parallel and vertical edges is
Figure 942497DEST_PATH_IMAGE010
. The server normalizes the three-channel initial data into three-channel data with values of 0-255.
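Under the pinhole model above, the back-projection can be sketched as follows; fx = f/dx and fy = f/dy are the focal lengths in pixel units, and consistent depth units are assumed:

```python
import numpy as np

def depth_to_camera_points(depth, fx, fy, u0, v0):
    """Convert each depth pixel (u, v, Zc) into a 3-D point in camera coordinates."""
    v, u = np.indices(depth.shape)          # pixel coordinates for every position
    Zc = depth.astype(np.float64)           # original depth map values
    Xc = (u - u0) * Zc / fx
    Yc = (v - v0) * Zc / fy
    return np.stack([Xc, Yc, Zc], axis=-1)  # H x W x 3: the three channels
```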
And step 404, respectively intercepting the face areas in the infrared image and the image of the three-channel data according to the face coordinates in the infrared image to form face image data of 4 channels.
The server crops, according to the face coordinates in the infrared image, the face areas from the infrared image and from the image of the three-channel data respectively, forming 4-channel face image data. That is, the server crops the face areas from the IR image and from the 3-channel image generated by HHA coding, according to the face coordinates in the IR image, to form the 4-channel face image data.
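A minimal sketch of assembling this 4-channel input; the (x0, y0, x1, y1) box layout is an assumption:

```python
import numpy as np

def make_4_channel_face(ir_img, hha_img, box):
    """Crop the face region from the IR image and the HHA-coded 3-channel image."""
    x0, y0, x1, y1 = box                           # face coordinates from the IR image
    ir_face = ir_img[y0:y1, x0:x1, np.newaxis]     # 1 channel
    hha_face = hha_img[y0:y1, x0:x1, :]            # 3 channels from the coded depth image
    return np.concatenate([ir_face, hha_face], axis=-1)  # H x W x 4
```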
And step 406, inputting the face image data of the 4 channels into a living body identification network, and calculating living body scores.
The server inputs the face image data of 4 channels into the living body identification network, and the living body score is calculated. The living body identification network may include data processing layers such as Conv + bn + prelu, Resnet Blocks, FullyConnected, Softmaxloss, and the like.
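A rough sketch of such a stack in PyTorch; layer counts, widths and the pooling choice are assumptions, and only the named layer types come from the description:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Plain residual block used in the sketch below."""
    def __init__(self, c):
        super().__init__()
        self.conv1, self.bn1 = nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c)
        self.conv2, self.bn2 = nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c)
        self.act = nn.PReLU(c)

    def forward(self, x):
        y = self.act(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return self.act(x + y)

class LivenessNet(nn.Module):
    """Conv+BN+PReLU stem, ResNet blocks, fully connected head (softmax in the loss)."""
    def __init__(self, num_blocks=4, width=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(4, width, 3, stride=2, padding=1),  # 4-channel IR+depth input
            nn.BatchNorm2d(width),
            nn.PReLU(width),
        )
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(num_blocks)])
        self.fc = nn.Linear(width, 2)  # live / spoof logits

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = x.mean(dim=(2, 3))         # global average pooling
        return self.fc(x)

# The living body score is then e.g. torch.softmax(logits, dim=1)[:, 1],
# compared against the living body judgment threshold.
```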
And step 408, when the living body score is larger than the living body judgment threshold, judging that the target to be verified corresponding to the face image data is a living body by adopting the face position detection unit.
When the living body score is larger than the living body judgment threshold value, the server adopts the face position detection unit to detect the face image data, and when the face is detected, the server judges that the target to be verified corresponding to the face image data is the living body.
According to the face verification method, living body judgment is carried out in a mode of combining the IR image and the depth coding image, and the living body judgment accuracy is improved.
In an embodiment, as shown in fig. 5, before determining whether the target to be verified corresponding to the face image is a living body, the method further includes:
step 502, judging whether the ratio of the brightness of the face area in the infrared image to the effective point cloud number of the face area in the depth image is within a preset threshold interval.
The server acquires the brightness of the face area in the infrared image and the ratio of effective point clouds in the face area of the depth image. The server judges whether the brightness lies within the preset IR face-area brightness interval, and whether the ratio lies within the preset interval for the ratio of effective point clouds in the Depth face area. When the brightness is judged to lie within the preset IR face-area brightness interval and the ratio is judged to lie within the preset effective-point-cloud ratio interval of the Depth face area, the server determines that the infrared image and the depth image meet the image processing requirements, and judges whether the target to be verified corresponding to the face image is a living body.
And step 504, when the face area is judged not to be in the preset threshold interval, calculating the brightness difference value between the brightness of the face area and the preset threshold interval.
When the server judges that the brightness of the face area is not within the preset IR face-area brightness interval, or that the ratio of effective point clouds is not within the preset interval for the Depth face area, the server calculates the brightness difference value diff between the brightness of the face area and the preset IR face-area brightness interval.
And step 506, calculating an infrared ideal exposure time interval for shooting the infrared image according to the brightness difference value.
And the server calculates, from the brightness difference value, the infrared ideal exposure duration interval for the single 3D structured light or TOF camera to shoot the infrared image. The server may use an adjustment function f_ir, fitted in advance from exposure duration and face brightness, to calculate the infrared ideal exposure duration interval that makes the IR face-area brightness fall within the preset brightness interval. The server obtains f_ir (together with its tuning parameters) by adjusting the exposure duration according to the light conditions, the distance dis and the face brightness difference value diff: the server can fix the light conditions and the distance dis, continuously adjust the exposure duration, and calculate the face brightness difference value diff before and after each adjustment; it then changes the light conditions and the distance value and tests multiple times to obtain a record list; finally, it fits the adjustment function f_ir from the parameter values in the record list.
And step 508, calculating a depth ideal exposure time interval of the shot depth image according to the brightness difference value and the average distance information of the face area in the depth image.
And the server calculates the depth ideal exposure duration interval for shooting the depth image according to the brightness difference value and the average distance information of the face area in the depth image. The server may use a fitting function f_depth, fitted in advance from light conditions and distance, to calculate the depth ideal exposure duration interval for the single 3D structured light or TOF camera. To obtain f_depth, the server can dynamically adjust the exposure duration under given light conditions and distance dis, recording the minimum and maximum exposure durations for which the Depth face information in the Depth image meets the quality requirement; the server changes the light conditions and the distance value and tests multiple times to obtain a record list; the server then fits f_depth from the recorded parameters in the record list.
And step 510, adjusting the current exposure time of the camera according to the infrared ideal exposure time interval and the depth ideal exposure time interval.
And the server adjusts the current exposure duration of the camera according to the infrared ideal exposure duration interval and the depth ideal exposure duration interval. In one embodiment, the server may first judge whether an intersection exists between the infrared ideal exposure duration interval and the depth ideal exposure duration interval. When it judges that an intersection exists, the server sets the centre-point value of the intersection as the target exposure duration and adjusts the current exposure duration of the single 3D structured light or TOF camera accordingly. When it judges that no intersection exists, the server adjusts the recognition distance of the single 3D structured light or TOF camera according to the average distance information of the face area in the depth image and the preset optimal recognition distance, enlarging or reducing the recognition distance so that the face approaches the preset optimal recognition distance, and then recalculates the infrared ideal exposure duration interval and the depth ideal exposure duration interval.
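The interval logic of step 510 can be sketched as follows (units of the exposure durations are left to the caller):

```python
def target_exposure(ir_interval, depth_interval):
    """Intersect the two ideal exposure-duration intervals; midpoint if they overlap."""
    lo = max(ir_interval[0], depth_interval[0])
    hi = min(ir_interval[1], depth_interval[1])
    if lo <= hi:
        return 0.5 * (lo + hi)  # centre of the intersection: the target exposure duration
    return None                 # no intersection: adjust the recognition distance, recompute
```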
It should be understood that although the various steps in the flow charts of fig. 2-5 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2-5 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternating with other steps or at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided a face authentication apparatus, comprising:
and the living body judging module 602 is configured to, when it is determined that a face image exists in the captured infrared image, judge whether a target to be verified corresponding to the face image is a living body according to the captured infrared image and the depth image.
And an image extraction module 604, configured to extract a face image from the captured RGB image when it is determined that the target to be authenticated is a living body.
And a face key point extracting module 606, configured to extract a face key point from the captured RGB image when it is determined that the face image meets the first preset processing condition and the second preset processing condition.
And the target feature extraction module 608 is configured to perform face correction on the RGB image, the depth image, and the infrared image according to the face key points, and extract a corrected target face feature.
The verification module 610 is configured to compare the target face features with sample face features of sample candidates in the face database to obtain a verification result corresponding to the target to be verified.
In one embodiment, the target feature extraction module comprises:
and the RGB target face feature extraction unit is used for carrying out face correction on the monochromatic color images corresponding to the three color channels of the RGB images according to the face key points of the RGB images and extracting the RGB target face features of the RGB images.
The verification module includes:
and the first comparison unit is used for comparing the RGB target face characteristics with the RGB sample face characteristics of the sample candidate in the face database one by one to obtain a first comparison score corresponding to the target to be verified.
And the verification unit is used for generating a verification result that the target to be verified belongs to the face database when the first comparison score of the maximum score is judged to be larger than the first threshold.
In one embodiment, the target feature extraction module comprises:
and the infrared-depth target face feature extraction unit is used for carrying out face correction on the three channel images and the infrared image generated by the depth image according to the face key points of the depth image and the infrared image and extracting the corrected infrared-depth target face features.
The verification module includes:
and the infrared-depth sample face feature acquisition unit is used for sorting the first comparison scores when the first comparison score of the maximum score is judged to be not larger than the first threshold value, and sequentially acquiring the infrared-depth sample face features of the sample candidate corresponding to the first comparison score according to the scores from large to small.
And the first similarity score calculating unit is used for calculating the first human face similarity scores of the target to be verified and the sample candidate according to the infrared-depth target human face characteristics, the infrared-depth sample human face characteristics and a preset formula.
The verification unit is used for generating a verification result that the target to be verified belongs to the face database when the first face similarity score of the maximum score is judged to be larger than a second threshold value; and when the first face similarity score of the maximum score is judged to be not larger than a second threshold value, generating a verification result that the target to be verified does not belong to the face database.
In some embodiments, the apparatus further comprises:
and the face key point extraction module is used for extracting face key points of the depth image and the infrared image from the shot RGB image when the face image is judged to be in accordance with the first preset processing condition but not in accordance with the second preset processing condition.
And the infrared-depth target face feature extraction module is used for performing face correction, according to the face key points, on the three channel images generated from the depth image and on the infrared image, and extracting the corrected infrared-depth target face features.
And the second comparison module is used for comparing the infrared-depth target face characteristics with the infrared-depth sample face characteristics of the sample candidate in the face database one by one to obtain a second comparison score corresponding to the target to be verified.
And the verification module is used for generating a verification result that the target to be verified belongs to the face database when the second comparison score of the maximum score is judged to be larger than a third threshold value.
In some embodiments, the apparatus further comprises:
The RGB sample face feature acquisition module is used for sorting the second comparison scores when the maximum second comparison score is judged to be not larger than the third threshold, and sequentially acquiring, in descending order of score, the RGB sample face features of the sample candidates corresponding to the second comparison scores.
The second similarity score calculation module is used for calculating the second face similarity scores between the target to be verified and the sample candidates according to a preset formula, the RGB target face features, and the RGB sample face features of the sample candidates in the face database.
The verification module is used for generating a verification result that the target to be verified belongs to the face database when the maximum second face similarity score is judged to be larger than a fourth threshold, and for generating a verification result that the target to be verified does not belong to the face database when the maximum second face similarity score is judged to be not larger than the fourth threshold.
In one embodiment, the living body judgment module includes:
The conversion unit is used for converting the depth image into three-channel data.
The image cropping unit is used for cropping the face regions from the infrared image and from the three-channel image according to the face coordinates in the infrared image, to form 4-channel face image data.
The calculation unit is used for inputting the 4-channel face image data into the living body identification network and calculating a living body score.
The judgment unit is used for judging that the target to be verified corresponding to the face image data is a living body when the living body score is larger than the living body judgment threshold.
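The sketch below assembles the 4-channel input exactly as the units above describe, with `liveness_net` standing in for the living body identification network (any callable mapping an HxWx4 array to a scalar score); the crop convention and the threshold value are illustrative assumptions.

    import numpy as np

    def liveness_check(ir_img, depth_3ch, face_box, liveness_net, live_threshold=0.9):
        # face_box holds the face coordinates detected in the infrared image.
        x0, y0, x1, y1 = face_box
        ir_face = ir_img[y0:y1, x0:x1, np.newaxis]   # 1 channel from the IR image
        depth_face = depth_3ch[y0:y1, x0:x1, :]      # 3 channels from the depth image
        face_4ch = np.concatenate([ir_face, depth_face], axis=-1)
        score = liveness_net(face_4ch)               # living body score
        return score > live_threshold                # True: treated as a living body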
In one embodiment, the living body judgment module includes:
The point conversion unit is used for converting the points on the depth image captured by a single 3D structured-light or TOF camera into three-dimensional points in camera coordinates, according to the conversion principle between pixel coordinates and camera coordinates, and forming the three-dimensional points into a three-channel image aligned with the infrared image.
The three-dimensional point calculation unit is used for calculating the normal direction and offset of the three-dimensional point corresponding to each pixel.
The iteration unit is used for iterating over the normal directions according to a preset angle threshold and calculating a rotation matrix from the direction, to obtain the three-channel initial data.
The normalization unit is used for normalizing the three-channel initial data into three-channel data with values in the range 0-255.
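A rough Python sketch of this conversion is given below, assuming a pinhole camera model (intrinsics fx, fy, cx, cy) for the pixel-to-camera transform. The per-pixel normals here are estimated by finite differences; the patent's angle-threshold iteration and rotation-matrix step are not specified in enough detail to reproduce, so this is an approximation, not the patented procedure.

    import numpy as np

    def depth_to_three_channel(depth, fx, fy, cx, cy):
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project pixel coordinates into camera coordinates.
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1)       # three-dimensional points
        # Normal direction per pixel from neighbouring 3-D point differences.
        du = np.gradient(pts, axis=1)
        dv = np.gradient(pts, axis=0)
        normals = np.cross(du, dv)
        normals /= np.linalg.norm(normals, axis=-1, keepdims=True) + 1e-8
        # Normalize the three channels into the 0-255 range.
        return ((normals + 1.0) * 0.5 * 255.0).astype(np.uint8)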
In one embodiment, the apparatus further comprises:
The interval comparison judgment module is used for judging whether the ratio of the brightness of the face region in the infrared image to the number of valid point-cloud points of the face region in the depth image falls within a preset threshold interval.
The brightness difference calculation module is used for calculating the brightness difference between the brightness of the face region and the preset threshold interval when the ratio is judged to fall outside the preset threshold interval.
The infrared ideal exposure duration calculation module is used for calculating, from the brightness difference, the infrared ideal exposure duration interval for capturing the infrared image.
The depth ideal exposure duration calculation module is used for calculating, from the brightness difference and the average distance information of the face region in the depth image, the depth ideal exposure duration interval for the camera to capture the depth image.
The duration adjustment module is used for adjusting the current exposure duration according to the infrared ideal exposure duration interval and the depth ideal exposure duration interval.
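No formula for the ideal intervals is given in the patent. As a purely hypothetical model, face brightness can be assumed to scale roughly linearly with exposure time, so a target brightness range maps back through the current exposure to an exposure interval; every constant below is illustrative.

    def ideal_exposure_interval(face_brightness, current_exposure_ms,
                                target_range=(80.0, 160.0)):
        # Linear brightness-vs-exposure assumption: doubling the exposure
        # roughly doubles the measured face brightness.
        b = max(face_brightness, 1.0)                # guard against divide-by-zero
        return (current_exposure_ms * target_range[0] / b,
                current_exposure_ms * target_range[1] / b)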
In one embodiment, the apparatus further comprises:
The intersection judgment module is used for judging whether the infrared ideal exposure duration interval and the depth ideal exposure duration interval intersect.
The duration adjustment module is used for setting the center-point value of the intersection as the target exposure duration and adjusting the current exposure duration when an intersection is judged to exist, and for adjusting the recognition distance of the camera and recalculating the infrared ideal exposure duration interval and the depth ideal exposure duration interval when no intersection exists.
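A small sketch of this intersection rule, treating each ideal interval as a (low, high) pair; the millisecond unit carries over from the hypothetical sketch above.

    def adjust_exposure(ir_interval, depth_interval):
        lo = max(ir_interval[0], depth_interval[0])
        hi = min(ir_interval[1], depth_interval[1])
        if lo <= hi:                     # the two intervals intersect
            return 0.5 * (lo + hi)       # center point becomes the target exposure
        return None                      # no intersection: adjust the camera's
                                         # recognition distance and recompute

When `adjust_exposure` returns None, the recognition distance is adjusted and both intervals are recomputed, as the duration adjustment module above specifies.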
For the specific limitations of the face verification apparatus, reference may be made to the limitations of the face verification method above, which are not repeated here. All or some of the modules in the face verification apparatus may be implemented in software, in hardware, or in a combination of the two. The modules may be embedded, in hardware form, in or independently of a processor of the computer device, or stored, in software form, in a memory of the computer device, so that the processor can invoke them and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided; the computer device may be a server, and its internal structure may be as shown in fig. 7. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The database of the computer device is used for storing the sample images of the sample candidates in the face database. The network interface of the computer device is used for communicating with external terminals through a network connection. The computer program, when executed by the processor, implements a face verification method.
Those skilled in the art will appreciate that the architecture shown in fig. 7 is merely a block diagram of some of the structures related to the present disclosure and does not limit the computer devices to which the present disclosure applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
A computer device comprising a memory and one or more processors, the memory having stored therein computer readable instructions which, when executed by the processors, implement the steps of a face verification method as provided in any one of the embodiments of the present application.
One or more non-transitory storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to implement the steps of the face verification method provided in any one of the embodiments of the present application.
The above is only a specific embodiment of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A face verification method, comprising:
when it is judged that a face image exists in a captured infrared image, judging whether a target to be verified corresponding to the face image is a living body according to the captured infrared image and a captured depth image;
when the target to be verified is judged to be a living body, extracting the face image from a captured RGB image;
when the face image is judged to meet a first preset processing condition and a second preset processing condition, extracting face key points from the captured RGB image, wherein the first preset processing condition is that: the face size of the face image is greater than or equal to max(minimum face recognition threshold, minimum living body detection face threshold); the face angles (p, y, r) of the face image lie between the negative angle threshold and the positive angle threshold; and the blur value is smaller than the blur threshold; and the second preset processing condition is that: the face brightness value of the face image lies between the lowest brightness threshold and the highest brightness threshold;
performing face correction on the RGB image, the depth image, and the infrared image respectively according to the face key points, and extracting corrected target face features; and
comparing the target face features with the sample face features of the sample candidates in a face database, to obtain a verification result corresponding to the target to be verified.
2. The method of claim 1, wherein the performing face correction on the RGB image, the depth image, and the infrared image according to the face key points and extracting corrected target face features comprises:
performing face correction on the single-channel images corresponding to the three color channels of the RGB image according to the face key points of the RGB image, and extracting RGB target face features of the RGB image;
and the comparing the target face features with the sample face features of the sample candidates in a face database to obtain a verification result corresponding to the target to be verified comprises:
comparing the RGB target face features one by one with the RGB sample face features of the sample candidates in the face database, to obtain first comparison scores corresponding to the target to be verified; and
when the maximum first comparison score is judged to be larger than a first threshold, generating a verification result that the target to be verified belongs to the face database.
3. The method of claim 2, wherein the performing face correction on the RGB image, the depth image, and the infrared image according to the face key points and extracting corrected target face features further comprises:
performing face correction on the three-channel image generated from the depth image and on the infrared image, according to the face key points of the depth image and the infrared image, and extracting corrected infrared-depth target face features;
and the comparing the target face features with the sample face features of the sample candidates in a face database to obtain a verification result corresponding to the target to be verified further comprises:
when the maximum first comparison score is judged to be not larger than the first threshold, sorting the first comparison scores, and sequentially acquiring, in descending order of score, the infrared-depth sample face features of the sample candidates corresponding to the first comparison scores;
calculating first face similarity scores between the target to be verified and the sample candidates according to the infrared-depth target face features, the infrared-depth sample face features, and a preset formula; and
when the maximum first face similarity score is judged to be larger than a second threshold, generating a verification result that the target to be verified belongs to the face database; and when the maximum first face similarity score is judged to be not larger than the second threshold, generating a verification result that the target to be verified does not belong to the face database.
4. The face verification method of claim 1, further comprising:
when the face image is judged to meet the first preset processing condition but not the second preset processing condition, extracting, from the captured RGB image, face key points for the depth image and the infrared image;
performing face correction on the three-channel image generated from the depth image and on the infrared image according to the face key points, and extracting corrected infrared-depth target face features;
comparing the infrared-depth target face features one by one with the infrared-depth sample face features of the sample candidates in the face database, to obtain second comparison scores corresponding to the target to be verified; and
when the maximum second comparison score is judged to be larger than a third threshold, generating a verification result that the target to be verified belongs to the face database.
5. The face verification method of claim 4, further comprising:
when the maximum second comparison score is judged to be not larger than the third threshold, sorting the second comparison scores, and sequentially acquiring, in descending order of score, the RGB sample face features of the sample candidates corresponding to the second comparison scores;
calculating second face similarity scores between the target to be verified and the sample candidates according to a preset formula, the RGB target face features, and the RGB sample face features of the sample candidates in the face database; and
when the maximum second face similarity score is judged to be larger than a fourth threshold, generating a verification result that the target to be verified belongs to the face database; and when the maximum second face similarity score is judged to be not larger than the fourth threshold, generating a verification result that the target to be verified does not belong to the face database.
6. The face verification method of claim 1, wherein the judging whether the target to be verified corresponding to the face image is a living body according to the captured infrared image and the captured depth image comprises:
converting the depth image into three-channel data;
cropping the face regions from the infrared image and from the three-channel image according to the face coordinates in the infrared image, to form 4-channel face image data;
inputting the 4-channel face image data into a living body identification network, and calculating a living body score; and
when the living body score is larger than a living body judgment threshold, judging, by a face position detection unit, that the target to be verified corresponding to the face image data is a living body.
7. The face verification method of claim 6, wherein the converting the depth image into three-channel data comprises:
converting the points on the depth image into three-dimensional points in camera coordinates according to the conversion principle between pixel coordinates and camera coordinates, and forming the three-dimensional points into a three-channel image aligned with the infrared image;
calculating the normal direction and offset of the three-dimensional point corresponding to each pixel;
iterating over the normal directions according to a preset angle threshold, and calculating a rotation matrix from the direction, to obtain three-channel initial data; and
normalizing the three-channel initial data into three-channel data with values in the range 0-255.
8. The method of claim 1, wherein before the judging whether the target to be verified corresponding to the face image is a living body, the method further comprises:
judging whether the ratio of the brightness of the face region in the infrared image to the number of valid point-cloud points of the face region in the depth image falls within a preset threshold interval;
when the ratio is judged to fall outside the preset threshold interval, calculating the brightness difference between the brightness of the face region and the preset threshold interval;
calculating, from the brightness difference, an infrared ideal exposure duration interval for capturing the infrared image;
calculating, from the brightness difference and the average distance information of the face region in the depth image, a depth ideal exposure duration interval for capturing the depth image; and
adjusting the current exposure duration according to the infrared ideal exposure duration interval and the depth ideal exposure duration interval.
9. The method of claim 8, wherein the adjusting the current exposure duration according to the infrared ideal exposure duration interval and the depth ideal exposure duration interval comprises:
judging whether the infrared ideal exposure duration interval and the depth ideal exposure duration interval intersect; and
when an intersection exists, setting the center-point value of the intersection as the target exposure duration and adjusting the current exposure duration; and when no intersection exists, adjusting the recognition distance of the camera and recalculating the infrared ideal exposure duration interval and the depth ideal exposure duration interval.
10. A face verification apparatus, comprising:
a living body judgment module, used for judging, when it is judged that a face image exists in a captured infrared image, whether a target to be verified corresponding to the face image is a living body according to the captured infrared image and a captured depth image;
an image extraction module, used for extracting the face image from a captured RGB image when the target to be verified is judged to be a living body;
a face key point extraction module, used for extracting face key points from the captured RGB image when the face image is judged to meet a first preset processing condition and a second preset processing condition;
a target feature extraction module, used for performing face correction on the RGB image, the depth image, and the infrared image respectively according to the face key points, and extracting corrected target face features; and
a verification module, used for comparing the target face features with the sample face features of the sample candidates in a face database, to obtain a verification result corresponding to the target to be verified.
11. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
CN202011468418.7A 2020-12-15 2020-12-15 Face verification method and device, computer equipment and storage medium Active CN112232323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011468418.7A CN112232323B (en) 2020-12-15 2020-12-15 Face verification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112232323A true CN112232323A (en) 2021-01-15
CN112232323B CN112232323B (en) 2021-04-16

Family

ID=74124074

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011468418.7A Active CN112232323B (en) 2020-12-15 2020-12-15 Face verification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112232323B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832677A (en) * 2017-10-19 2018-03-23 深圳奥比中光科技有限公司 Face identification method and system based on In vivo detection
US20190230339A1 (en) * 2018-01-22 2019-07-25 Kuan-Yu Lu Image sensor capable of enhancing image recognition and application of the same
CN109670487A (en) * 2019-01-30 2019-04-23 汉王科技股份有限公司 A kind of face identification method, device and electronic equipment
CN110458041A (en) * 2019-07-19 2019-11-15 国网安徽省电力有限公司建设分公司 A kind of face identification method and system based on RGB-D camera
CN111554006A (en) * 2020-04-13 2020-08-18 绍兴埃瓦科技有限公司 Intelligent lock and intelligent unlocking method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926498A (en) * 2021-03-20 2021-06-08 杭州知存智能科技有限公司 In-vivo detection method based on multi-channel fusion and local dynamic generation of depth information
CN112926498B (en) * 2021-03-20 2024-05-24 杭州知存智能科技有限公司 Living body detection method and device based on multichannel fusion and depth information local dynamic generation
CN112883918A (en) * 2021-03-22 2021-06-01 深圳市百富智能新技术有限公司 Face detection method and device, terminal equipment and computer readable storage medium
CN112883918B (en) * 2021-03-22 2024-03-19 深圳市百富智能新技术有限公司 Face detection method, face detection device, terminal equipment and computer readable storage medium
CN113469036A (en) * 2021-06-30 2021-10-01 北京市商汤科技开发有限公司 Living body detection method and apparatus, electronic device, and storage medium
CN113609950A (en) * 2021-07-30 2021-11-05 深圳市芯成像科技有限公司 Living body detection method and system of binocular camera and computer storage medium
CN113673378A (en) * 2021-08-02 2021-11-19 中移(杭州)信息技术有限公司 Face recognition method and device based on binocular camera and storage medium
CN114151942A (en) * 2021-09-14 2022-03-08 海信家电集团股份有限公司 Air conditioner and human face area detection method
CN114120415A (en) * 2021-11-30 2022-03-01 支付宝(杭州)信息技术有限公司 Face detection method and device and electronic equipment

Also Published As

Publication number Publication date
CN112232323B (en) 2021-04-16

Similar Documents

Publication Publication Date Title
CN112232323B (en) Face verification method and device, computer equipment and storage medium
WO2021036436A1 (en) Facial recognition method and apparatus
CN112232324B (en) Face fake-verifying method and device, computer equipment and storage medium
CN109446981B (en) Face living body detection and identity authentication method and device
WO2019192121A1 (en) Dual-channel neural network model training and human face comparison method, and terminal and medium
CN103530599B (en) The detection method and system of a kind of real human face and picture face
US7664315B2 (en) Integrated image processor
WO2021008252A1 (en) Method and apparatus for recognizing position of person in image, computer device and storage medium
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN111091075B (en) Face recognition method and device, electronic equipment and storage medium
CN110084135A (en) Face identification method, device, computer equipment and storage medium
EP2580711A2 (en) Distinguishing live faces from flat surfaces
TWI721786B (en) Face verification method, device, server and readable storage medium
CN110956114A (en) Face living body detection method, device, detection system and storage medium
WO2015131468A1 (en) Method and system for estimating fingerprint pose
WO2022222575A1 (en) Method and system for target recognition
CN111191521B (en) Face living body detection method and device, computer equipment and storage medium
TWI731503B (en) Live facial recognition system and method
KR102137060B1 (en) Face Recognition System and Method for Updating Registration Face Template
WO2022222569A1 (en) Target discrimation method and system
CN111339897A (en) Living body identification method, living body identification device, computer equipment and storage medium
CN109344758B (en) Face recognition method based on improved local binary pattern
KR20120114934A (en) A face recognition system for user authentication of an unmanned receipt system
Sun et al. Multimodal face spoofing detection via RGB-D images
CN115620117A (en) Face information encryption method and system for network access authority authentication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: Room 658, building 1, 1 luting Road, Cangqian street, Hangzhou, Zhejiang 310000

Patentee after: Hangzhou Yufan Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: Room 658, building 1, 1 luting Road, Cangqian street, Hangzhou, Zhejiang 310000

Patentee before: UNIVERSAL UBIQUITOUS TECHNOLOGY Co.,Ltd.

Country or region before: China