CN110287900B - Verification method and verification device - Google Patents

Verification method and verification device

Info

Publication number
CN110287900B
CN110287900B CN201910568579.4A
Authority
CN
China
Prior art date
Legal status
Active
Application number
CN201910568579.4A
Other languages
Chinese (zh)
Other versions
CN110287900A (en)
Inventor
梁鼎
吴立威
李启铭
谢文彪
高航
陈婷婷
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201910568579.4A
Publication of CN110287900A
Application granted
Publication of CN110287900B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive

Abstract

The embodiment of the invention provides a verification method and a verification device, comprising the following steps: acquiring a near infrared image and a depth image of a target object; performing face detection on the near infrared image to obtain face contour information; performing living body detection according to the face contour information and the depth image; performing face verification on the target object according to the near infrared image in response to the living body detection passing; and performing an unlocking operation in response to the face verification passing. The verification method can effectively address face recognition under dim or backlit conditions and the difficulty of defending against dummy attacks and the like.

Description

Verification method and verification device
Technical Field
The invention relates to the technical field of face recognition, in particular to a verification method and a verification device.
Background
A face recognition scheme comprises face detection, face recognition, living body detection, open-closed eye detection and image quality detection technologies. By entering face image information in advance, a mobile phone can quickly recognize the face in front of the screen when the screen is turned on, and perform living body detection, open-closed eye detection and comparison of the face information, thereby achieving unlocking. Conventional face unlocking schemes include ordinary RGB face unlocking, near infrared face unlocking and 3D structured light face unlocking.
The existing 2D mobile phone face unlocking schemes are low in cost, but are easily affected by ambient light and have difficulty defending against paper-cut, photo or high-cost dummy attacks. In a dark environment the face in an RGB photo is too dark to recognize reliably; under strong light the face is easily overexposed, or shadows split the face into bright and dark halves (a 'yin-yang face'), which increases the difficulty of face recognition. In terms of attack prevention, a 2D scheme only obtains a 2D RGB image or a 2D near infrared photo, which can easily be attacked successfully, threatening the security of the user equipment.
Disclosure of Invention
The embodiment of the invention provides a verification method and a verification device, which can effectively address face recognition under dim or backlit conditions and the difficulty of defending against dummy attacks and the like.
The first aspect of the embodiment of the invention discloses a verification method, which comprises the following steps: acquiring a near infrared image and a depth image of a target object; performing face detection on the near infrared image to obtain face contour information; performing living body detection according to the face contour information and the depth image; performing face verification on the target object according to the near infrared image in response to the living body detection passing; and performing an unlocking operation in response to the face verification passing.
Optionally, the performing face detection on the near infrared image to obtain face contour information includes: adjusting the size of the near infrared image to obtain an image pyramid; extracting facial features of the image pyramid and calibrating a frame; and optimizing the frame to obtain the face contour information.
Optionally, after the face detection is performed on the near infrared image to obtain face contour information, the method further includes: performing open-closed eye detection according to the face contour information.
Optionally, the detection of open and closed eyes according to the face contour information includes: inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes; acquiring eye key point coordinates in the face contour information; extracting eye features from the first module according to the eye key point coordinates; comparing the extracted eye features with first pre-stored features to perform eye opening and closing detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when eyes are opened.
Optionally, the deep learning network further includes a second module, where the second module is configured to perform the living body detection according to the face contour information and the depth image, and includes: dividing a first face region image in the depth image according to the face contour information of the near infrared image; inputting a first face region image into the second module, and extracting first face features of the first face region image through the second module; and comparing the first facial features with second pre-stored features to perform living body detection, wherein the second pre-stored features are living body facial feature information pre-stored in the facial feature library.
Optionally, the deep learning network further includes a third module, where the third module is configured to perform face verification on the target object according to the near infrared image, and the performing face verification on the target object includes: dividing a second face region image in the near infrared image according to the face contour information; and inputting a second face region image into the third module, extracting second face features of the second face region image through the third module, comparing the second face features with third pre-stored features, and judging whether the second face features are the same person or not, wherein the third pre-stored features are the face features of the target object pre-stored in the face feature library.
Optionally, the deep learning network is trained, and the training includes: establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module; selecting a training sample, wherein the training sample comprises an open-closed eye detection sample, a living body detection sample and a face verification sample; extracting the first pre-stored feature from the open-closed eye detection sample by the first module, extracting the second pre-stored feature from the living body detection sample by the second module, extracting the third pre-stored feature from the face verification sample by the third module, and storing the first pre-stored feature, the second pre-stored feature and the third pre-stored feature into the face feature library; training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature; and adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
The second aspect of the invention discloses a verification device, which comprises: a first acquisition unit configured to acquire a near infrared image and a depth image of a target object; a face detection unit, configured to perform face detection on the near infrared image to obtain face contour information; a living body detection unit, configured to perform living body detection according to the face contour information and the depth image; a face verification unit, configured to perform face verification on the target object according to the near infrared image in response to the living body detection passing; and an unlocking unit, configured to perform an unlocking operation in response to the face verification passing.
Optionally, in the aspect of performing face detection on the near infrared image to obtain face contour information, the face detection unit is specifically configured to: adjusting the size of the near infrared image to obtain an image pyramid; extracting facial features of the image pyramid and calibrating a frame; and optimizing the frame to obtain the face contour information.
Optionally, in the aspect of open-close eye detection according to the face contour information, the open-close eye detection unit is specifically configured to: inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes; acquiring eye key point coordinates in the face contour information; extracting eye features from the first module according to the eye key point coordinates; comparing the extracted eye features with first pre-stored features to perform eye opening and closing detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when eyes are opened.
Optionally, the deep learning network further includes a second module, where the second module is configured to perform the living body detection, and the living body detection unit is specifically configured to: dividing a first face region image in the depth image according to the face contour information of the near infrared image; inputting a first face region image into the second module, and extracting first face features of the first face region image through the second module; and comparing the first facial features with second pre-stored features to perform living body detection, wherein the second pre-stored features are living body facial feature information pre-stored in the facial feature library.
Optionally, the deep learning network further includes a third module, where the third module is configured to perform face verification, and the face verification unit is specifically configured to: dividing a second face region image in the near infrared image according to the face contour information; inputting a second face region image into the third module, and extracting second face features of the second face region image through the third module; and comparing the second facial features with third pre-stored features to judge whether the second facial features are the same person, wherein the third pre-stored features are the facial features of the target object pre-stored in the facial feature library.
Optionally, the deep learning network is trained, and the verification device further includes a training unit, configured to: establish a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module; select a training sample, wherein the training sample comprises an open-closed eye detection sample, a living body detection sample and a face verification sample; extract the first pre-stored feature from the open-closed eye detection sample by the first module, extract the second pre-stored feature from the living body detection sample by the second module, extract the third pre-stored feature from the face verification sample by the third module, and store the first pre-stored feature, the second pre-stored feature and the third pre-stored feature into the face feature library; train the first module according to the first pre-stored feature, train the second module according to the second pre-stored feature, and train the third module according to the third pre-stored feature; and adjust the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
A third aspect of the invention discloses an electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising means for performing any of the methods of the first aspect.
A fourth aspect of the invention discloses a computer-readable storage medium storing a computer program for execution by a processor to implement the method of any of the first aspects.
In the scheme of the embodiment of the invention, a near infrared image and a depth image of a target object are acquired; face detection is performed on the near infrared image to obtain face contour information; living body detection is performed according to the face contour information and the depth image; face verification is performed on the target object according to the near infrared image in response to the living body detection passing; and an unlocking operation is performed in response to the face verification passing. By performing face detection, living body detection and face verification on the near infrared image and the depth image, the scheme of the invention can effectively address face recognition under dim or backlit conditions and the difficulty of defending against dummy attacks and the like.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a registration flow of a verification method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of an unlocking flow of a verification method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a verification method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a verification apparatus according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, a technical solution of an embodiment of the present invention will be clearly described below with reference to the accompanying drawings in the embodiment of the present invention, and it is apparent that the described embodiment is a part of the embodiment of the present invention, but not all the embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
The terms first, second, third and the like in the description, in the claims and in the drawings, are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The electronic device according to the embodiment of the present application may include various handheld devices, vehicle-mounted devices, wireless headphones, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Station (MS), terminal devices, and the like; the electronic device may be, for example, a smart phone, a tablet computer, a headset case, etc. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices.
In the embodiment of the application, the near infrared image and the depth image are acquired by a near infrared camera and a TOF (Time of Flight) camera respectively. The principle by which the TOF camera acquires the depth image is as follows: the TOF camera emits continuous near infrared pulses toward the target scene, a sensor receives the light pulses reflected by the object, the transmission delay between the emitted light pulses and the reflected light pulses is computed by comparing the two, and the distance between the object and the emitter is obtained from it, finally yielding a depth image. The depth image is an image that takes the distance (depth) from the camera to each point in the scene as the pixel value, and 3D information of the object can be obtained from the depth image.
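To make the delay-to-distance step concrete, the following minimal sketch (an illustration, not taken from the patent; the helper name and array shape are assumptions) converts a per-pixel round-trip delay into a depth map:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def delay_to_depth(delay_s):
    """Convert per-pixel round-trip transmission delay (seconds) to depth (meters)."""
    # Depth is half of the round-trip distance travelled at the speed of light.
    return C * np.asarray(delay_s) / 2.0

# A round-trip delay of ~13.34 ns corresponds to a depth of about 2 m.
depth_map = delay_to_depth(np.full((480, 640), 13.34e-9))
print(depth_map[0, 0])  # ~2.0
```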
Generally, a TOF camera includes an infrared emission unit, an optical lens, an imaging sensor, a control unit, and a core algorithm calculation unit.
The infrared emission unit includes a VCSEL emitter and a diffuser. The VCSEL emits a pulsed square wave at 940 nm, a wavelength of infrared light that is invisible to the human eye and sits at a minimum in the ambient spectrum, thereby avoiding interference from ambient light.
The optical lens is used for converging the reflected light rays and imaging the light rays on the optical sensor.
The imaging sensor is similar to the photosensitive element of an ordinary camera and is configured to receive the reflected light and perform photoelectric conversion on the sensor.
The control unit is the driver IC of the laser emitter. It can drive the laser with high-frequency pulses up to an upper limit of 100 MHz while rejecting various kinds of interference, ensuring that the drive waveform is a square wave with rise and fall times of about 0.2 ns, thereby effectively guaranteeing high-precision depth extraction.
The TOF chip is the core of the TOF camera and can convert acquired image information into a depth map.
The TOF camera can rapidly complete identification and tracking of targets, and the distance information yields richer positional relations among objects, i.e. foreground can be distinguished from background; with further processing, three-dimensional modeling and other applications can be completed.
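As a small illustration of using distance information to separate foreground from background (a simple threshold rule assumed here for the sketch; the patent does not specify the method):

```python
import numpy as np

def split_foreground(depth_m, max_fg_depth=1.0):
    """Return a boolean mask of pixels closer than max_fg_depth meters.

    Zero-depth pixels (no TOF return) are treated as background.
    """
    depth_m = np.asarray(depth_m)
    return (depth_m > 0) & (depth_m < max_fg_depth)

depth = np.array([[0.40, 0.50, 2.5],
                  [0.45, 0.00, 3.0]])
print(split_foreground(depth))
# [[ True  True False]
#  [ True False False]]
```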
For a better understanding of a verification method provided in the embodiments of the present application, a brief description thereof will be provided below. The method mainly comprises two parts of registration and unlocking.
Referring to fig. 1, fig. 1 is a schematic registration flow chart of a verification method provided in an embodiment of the present application, wherein a near infrared image is firstly obtained through a near infrared camera, a depth image is obtained through a TOF camera, then the near infrared image is preprocessed, the preprocessing includes face detection, picture quality detection and eye opening and closing detection, then the near infrared image and the depth image are subjected to image post-processing, and the image post-processing includes living detection, face template storage, face feature extraction and face feature storage.
The above procedure can also be understood as follows: after a face is detected in the near infrared image, problematic near infrared images (face occluded, face outside the frame, face too far off-center, and the like) are filtered out; depth images in which the subject is too close or too far are filtered out; near infrared images in which the eyes are closed or not gazing at the screen are further filtered out; non-living images are filtered out using the near infrared image together with the depth image; the remaining near infrared image is a qualified image, which is saved as a face template, and finally the face features in the face template are extracted and stored in a face feature library.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
The embodiment of the application shows a registration process of the verification method, wherein the registration process is as follows:
near infrared images and depth images of the target object are acquired.
And carrying out face detection on the near infrared image to obtain face contour information.
And detecting the quality of the near infrared image.
The quality detection includes checking factors such as exposure, sharpness, color, noise, stabilization, flash, focus and artifacts; the image passes quality detection only when the detection result for each factor meets a preset threshold (a minimal sketch of such a threshold gate is given after this flow).
Open-closed eye detection is performed according to the face contour information.
And carrying out gaze detection on the near infrared image.
Gaze detection is an optional step. It checks the eye gaze in the five directions of up, down, left, right and front; if gaze detection fails, the near infrared image and the depth image are re-captured.
And performing living body detection according to the face contour information and the depth image.
And detecting a face angle in the near infrared image in response to the living body detection passing.
Detecting the face angle in the near infrared image ensures that the face angle is within a preset range when the face template of the corresponding angle is recorded; for example, during front-view detection, the face must face the screen within the allowable deviation range.
And after confirming that the face angle is in a preset range, extracting the face characteristics in the near infrared image and comparing the face characteristics with the face characteristics recorded in the face characteristic library.
And if the face angle exceeds the angle deviation allowable range, returning to re-enter the depth image and the near infrared image.
If the feature comparison confirms the same person, the near infrared image is stored as a face template.
And if the comparison is not the same person, returning to re-enter the depth image and the near infrared image.
The above steps are repeated 5 times to enter face templates for the front, upward, downward, left and right directions respectively.
At the start of each entry, the user is reminded, in the form of text, voice, or both, to prepare to enter the template along with the related action instruction. The action instructions include: look straight at the screen, turn left, turn right, tilt upward and tilt downward.
During template entry, if no valid face is detected or any check fails within a preset time, the depth image and the near infrared image are re-entered.
In this way, the recorded face templates are guaranteed to be face images of the target object, and subsequent unlocking operations can be performed through face comparison.
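As referenced above, a hedged sketch of a threshold-based quality gate; the metric names, value ranges and thresholds are illustrative assumptions, since the patent only requires that each factor meet a preset threshold:

```python
# Illustrative quality gate: every measured factor must meet its preset
# threshold for the image to pass. Metric names and thresholds are assumptions.
QUALITY_THRESHOLDS = {
    "exposure": 0.6,   # all metrics normalized to 0..1, higher is better
    "sharpness": 0.5,
    "noise": 0.7,      # higher means less noise
}

def passes_quality(metrics: dict) -> bool:
    return all(metrics.get(name, 0.0) >= threshold
               for name, threshold in QUALITY_THRESHOLDS.items())

print(passes_quality({"exposure": 0.8, "sharpness": 0.9, "noise": 0.75}))  # True
print(passes_quality({"exposure": 0.8, "sharpness": 0.3, "noise": 0.75}))  # False
```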
Referring to fig. 2, fig. 2 is a schematic diagram of an unlocking flow of a verification method provided in an embodiment of the present application, wherein a near infrared image is firstly obtained through a near infrared camera, a depth image is obtained through a TOF camera, then the near infrared image is preprocessed, the preprocessing includes face detection, picture quality detection and eye opening and closing detection, then the near infrared image and the depth image are subjected to image post-processing, and the image post-processing includes living detection, face feature extraction and face feature comparison in a face feature library.
The above procedure can also be understood as follows: after the face is detected in the near infrared image, near infrared images with closed eyes are filtered out, non-living images are filtered out through the near infrared image and the depth image, and face features are extracted from the near infrared image and compared with the face features in the face feature library; only after all the steps pass is unlocking returned as successful, otherwise unlocking fails.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Referring to fig. 3, fig. 3 is a flow chart of a verification method according to an embodiment of the present application, including:
s301, acquiring a near infrared image and a depth image of a target object.
The near infrared camera emits near infrared light from an active light source, which is then received by a photosensitive device and imaged; since the near infrared camera is not affected by visible light intensity, it handles the face recognition problem under dark or backlit conditions well. The TOF camera detects depth information based on the time-of-flight ranging principle, with a detection precision of about 1 cm within a range of 100 m; it is not affected by object texture and has the advantages of a higher frame rate, small volume, a high proportion of valid depth information, a simple manufacturing process, clear object edges, and the like.
S302, face detection is carried out on the near infrared image, and face contour information is obtained.
The face contour information comprises face frame coordinates and face key point coordinates, wherein the face key point coordinates comprise eye key point coordinates, eyebrow key point coordinates, nose key point coordinates and lip key point coordinates, and further comprise ear key point coordinates, cheekbone key point coordinates, chin key point coordinates and the like.
The method for detecting the face in the near infrared image may be a feature-invariant-based method, a template matching method, an appearance-based method, a knowledge-based method, or the like, which is not limited in this application. Optionally, the deep learning model MTCNN (Multi-task Cascaded Convolutional Neural Network) is selected for face detection. MTCNN is a cascade of P-Net, R-Net and O-Net; after the image data is processed by these three convolutional neural networks, the face contour information is obtained. After face detection, unqualified images in which the face is occluded, outside the frame, or too far off-center can be effectively filtered out.
And S303, performing living body detection according to the face contour information and the depth image.
Specifically, data samples are selected and a deep learning network is established; the features that most strongly distinguish a living face from a non-living face are extracted as discriminative features, and the network is trained on the data samples to obtain a trained deep learning network. The face frame coordinates in the near infrared image are then aligned to the depth image, the first face region image is segmented from the depth image according to the face frame coordinates and input into the trained deep learning network, and the trained network distinguishes living bodies from non-living bodies.
Living body detection can judge whether the face in the image is a real person. This scheme has strong anti-attack capability and can defend against all 2D attacks (pictures, videos, paper cuts, photos) and most 3D attacks.
And S304, responding to the passing of the living body detection, and carrying out face verification on the target object according to the near infrared image.
Feature extraction and feature comparison are performed on the face region in the near infrared image by the trained deep learning network to carry out face verification. The recognition effect is good in complex environments, and the pass rate can be guaranteed across various scenarios of face angle, complex lighting and distance.
And S305, in response to the pass of the face verification, performing unlocking operation.
When the unlocking operation is executed, the ambient brightness is obtained through the photosensitive sensor, and the screen brightness of the electronic equipment is adjusted according to the ambient brightness.
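As a small illustration of this adjustment (the lux range and linear mapping below are assumptions; the patent only states that screen brightness is adjusted according to the ambient brightness):

```python
def screen_brightness(ambient_lux, lo=10.0, hi=1000.0):
    """Map ambient illuminance (lux) linearly to a screen brightness in 0..1.

    The lux range and the linear mapping are illustrative assumptions.
    """
    t = (ambient_lux - lo) / (hi - lo)
    return min(1.0, max(0.05, t))  # keep a small floor so the screen stays readable

print(screen_brightness(5))    # 0.05 (dark room)
print(screen_brightness(500))  # ~0.49
```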
Step S303 may also be performed after step S304 (see the sketch after this list); that is, the above steps may be:
acquiring a near infrared image and a depth image of a target object;
performing face detection on the near infrared image to obtain face contour information;
performing face verification on the target object according to the near infrared image;
Responding to the passing of the face verification, and performing living body detection according to the face contour information and the depth image;
in response to the living body detection passing, an unlocking operation is performed.
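The two orderings can be sketched as a single pipeline whose checks are swappable; every stage name and signature below is an assumption for illustration, not the patent's implementation:

```python
def unlock(nir, depth, stages, order):
    """Run face detection, then the ordered checks; unlock only if all pass."""
    contour = stages["detect_face"](nir)
    if contour is None:
        return False
    for name in order:  # ["liveness", "verify"] puts S303 first; reversed puts S304 first
        if not stages[name](nir, depth, contour):
            return False
    stages["perform_unlock"]()
    return True

# Demo with trivial stand-in stages:
stages = {
    "detect_face": lambda nir: {"box": (0, 0, 10, 10)},
    "liveness": lambda nir, depth, contour: True,
    "verify": lambda nir, depth, contour: True,
    "perform_unlock": lambda: print("unlocked"),
}
unlock(nir=None, depth=None, stages=stages, order=["liveness", "verify"])
unlock(nir=None, depth=None, stages=stages, order=["verify", "liveness"])
```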
In addition, the embodiment of the application supports unlocking when the mobile phone is not held upright; unlocking is supported over the full 360-degree rotation of the phone within its plane.
It can be seen that in the embodiment of the present application, a near infrared image and a depth image of a target object are acquired; face detection is performed on the near infrared image to obtain face contour information; living body detection is performed according to the face contour information and the depth image; face verification is performed on the target object according to the near infrared image in response to the living body detection passing; and an unlocking operation is performed in response to the face verification passing. By performing face detection, living body detection and face verification on the near infrared image and the depth image, the scheme can effectively address face recognition under dim or backlit conditions and the difficulty of defending against dummy attacks and the like. In addition, the embodiment of the application is developed on a hardware-level security platform and can be applied to scenarios with higher security requirements, such as payment.
Optionally, the performing face detection on the near infrared image to obtain face contour information includes:
and adjusting the size of the near infrared image to obtain an image pyramid.
The near infrared image is scaled to obtain near infrared images of different scales, forming an image pyramid, so as to achieve scale invariance.
And extracting facial features of the image pyramid and calibrating a frame.
Wherein, candidate frames and frame regression vectors are generated simultaneously.
And optimizing the frame to obtain the face contour information.
The optimization includes calibration, merging, fine-tuning and overlap removal. Specifically: the frames are used for regression to calibrate the other candidate frames; highly overlapping candidate frames are merged through non-maximum suppression (NMS); the frame regression vectors are used to fine-tune the candidate frames; and NMS is used again to remove overlapping frames, yielding face frame coordinates and face key point coordinates. These coordinates are attached to the corresponding pixel points in the form of labels; for example, if a pixel point belongs to an eye key point, it carries a label indicating that it is an eye key point with coordinates "* **". If face detection fails, the near infrared image is re-acquired.
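For concreteness, here is a textbook greedy non-maximum suppression routine of the kind referenced above (a standard implementation assumed for illustration, not quoted from the patent):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring boxes, drop boxes whose IoU with a
    kept box exceeds iou_thresh. boxes is (N, 4) as [x1, y1, x2, y2]; returns
    the indices of the kept boxes."""
    boxes, scores = np.asarray(boxes, float), np.asarray(scores, float)
    order = scores.argsort()[::-1]                 # highest score first
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thresh]            # keep only weakly overlapping boxes
    return keep

print(nms([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], [0.9, 0.8, 0.7]))
# [0, 2]: the second box overlaps the first heavily and is suppressed
```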
Therefore, near infrared images without faces can be filtered out through face detection, and the position of the face is located, which facilitates subsequent operations such as open-closed eye detection and face recognition.
Optionally, after the face detection is performed on the near infrared image to obtain face contour information, the method further includes:
and opening and closing eyes are detected according to the face contour information.
And if the open-close eye detection fails, the near infrared image is acquired again. Whether the user is actively authenticated or not can be judged through the open-close eye detection.
Optionally, the detection of open and closed eyes according to the face contour information includes:
the near infrared image is input into a first module of a deep learning network, wherein the first module is used for detecting open eyes and closed eyes.
The deep learning network comprises a first module, a second module and a third module, wherein the first module is used for detecting open and closed eyes.
And acquiring eye key point coordinates in the face contour information.
The names and coordinates of the face key points are marked on the pixel points of the image in the form of labels, so the eye key point coordinates can be quickly obtained through the key point names in the labels.
And extracting the eye feature in the first module according to the eye key point coordinates.
After the eye key point coordinates are obtained, the eye key points can be located, and feature extraction is then performed on them through methods such as SIFT (scale-invariant feature transform), HOG (histogram of oriented gradients) and LBP (local binary patterns); see the sketch after this list.
Comparing the extracted eye features with first pre-stored features to perform eye opening and closing detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when eyes are opened.
The trained deep learning network judges whether the user has both eyes open, i.e. whether the user intends to actively authenticate. If the detection result is that the eyes are closed, or that one eye is open and one is closed, the open-closed eye detection fails and the near infrared image and the depth image are re-acquired.
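As referenced above, a sketch of locating an eye patch from the key point coordinates and extracting a descriptor from it; HOG via scikit-image is chosen here as one of the listed methods, and the patch size is an assumption:

```python
import numpy as np
from skimage.feature import hog

def eye_descriptor(gray_img, eye_xy, half=16):
    """Crop a square patch around an eye key point and compute its HOG descriptor."""
    x, y = int(eye_xy[0]), int(eye_xy[1])
    patch = gray_img[max(0, y - half):y + half, max(0, x - half):x + half]
    return hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

img = np.random.rand(480, 640)              # stand-in near infrared image
feat = eye_descriptor(img, eye_xy=(320, 200))
print(feat.shape)                           # fixed-length eye feature vector
```

The extracted descriptor would then be compared against the first pre-stored feature, e.g. with a similarity test of the kind shown later for face verification.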
Optionally, the deep learning network further includes a second module, where the second module is configured to perform the living body detection according to the face contour information and the depth image, and includes:
dividing a first face region image in the depth image according to the face contour information of the near infrared image;
Inputting a first face region image into the second module, and extracting first face features of the first face region image through the second module;
and comparing the first facial features with second pre-stored features to perform living body detection, wherein the second pre-stored features are living body facial feature information pre-stored in the facial feature library.
The segmenting of the first face region image from the depth image according to the face contour information of the near infrared image includes: aligning the face frame coordinates to the depth image, and segmenting the first face region image from the depth image according to the face frame coordinates. A depth image and an RGB color image can be acquired simultaneously by the TOF camera and registered so that their pixel points correspond one to one; however, object edges in the depth image are jagged and do not align exactly with the object boundaries in the corresponding color image. Mapping the face position in the RGB color image directly onto the depth image would therefore introduce a large deviation, so the face position coordinates in the near infrared image are aligned to the depth image instead.
The extracted first face features are depth information extracted from the depth image and can distinguish a living face image from a non-living face image; if the comparison result of the first face features exceeds a preset threshold, the face is considered a living face. If living body detection fails, the near infrared image is re-acquired.
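A hedged sketch of this depth-based liveness step; the feature-extractor stand-in, the database layout and the similarity threshold are all assumptions:

```python
import numpy as np

def liveness_check(depth_img, face_box, live_feat_db, extract, thresh=0.8):
    """Crop the face region from the depth image using the face box aligned
    from the NIR image, extract depth features with the (second) module, and
    compare them against pre-stored live-face features."""
    x1, y1, x2, y2 = face_box
    face_depth = depth_img[y1:y2, x1:x2]
    feat = extract(face_depth)
    feat = feat / np.linalg.norm(feat)
    sims = live_feat_db @ feat                # cosine similarities (db rows unit-normed)
    return float(sims.max()) >= thresh

extract = lambda d: d.mean(axis=1)[:8]        # trivial stand-in for the second module
db = np.random.rand(5, 8)
db /= np.linalg.norm(db, axis=1, keepdims=True)
print(liveness_check(np.random.rand(480, 640), (100, 100, 108, 120), db, extract))
```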
Optionally, the deep learning network further includes a third module, where the third module is configured to perform face verification on the target object according to the near infrared image, and the performing face verification on the target object includes:
dividing a second face region image in the near infrared image according to the face contour information;
and inputting a second face region image into the third module, extracting second face features of the second face region image through the third module, comparing the second face features with third pre-stored features, and judging whether the second face features are the same person or not, wherein the third pre-stored features are the face features of the target object pre-stored in the face feature library.
The second face region image can be segmented using the face contour information, so that the face region is extracted from the background image and interference from background information is eliminated, which facilitates the subsequent extraction of the second face features; face verification then determines whether the person being authenticated is the target user.
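The comparison step can be sketched as a cosine-similarity test against the enrolled features; the metric and the decision threshold are assumptions, since the patent leaves them to the implementation:

```python
import numpy as np

def is_same_person(feat, enrolled_feats, thresh=0.6):
    """Compare a face feature vector against the target object's enrolled features."""
    feat = feat / np.linalg.norm(feat)
    enrolled = enrolled_feats / np.linalg.norm(enrolled_feats, axis=1, keepdims=True)
    return float((enrolled @ feat).max()) >= thresh

enrolled = np.random.rand(5, 128)        # e.g. the five pose templates from enrollment
probe = enrolled[0] + 0.05 * np.random.rand(128)
print(is_same_person(probe, enrolled))   # True: the probe is close to a template
```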
Optionally, the deep learning network is trained, and the training includes:
and establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module.
The first module is used for detecting open eyes, the second module is used for detecting living bodies, and the third module is used for face verification.
And selecting a training sample, wherein the training sample comprises an open-closed eye detection sample, a living body detection sample and a face verification sample.
The face verification sample is a face template of the target object stored at the registration stage; the open-closed eye detection sample and the living body detection sample are image samples selected from a database, where the database may be COCO, ImageNet, or the like.
Extracting the first pre-stored features from the open-eye and closed-eye detection samples through the first module, extracting the second pre-stored features from the living body detection samples through the second module, extracting the third pre-stored features from the face verification samples through the third module, and storing the first pre-stored features, the second pre-stored features and the third pre-stored features into the face feature library.
Training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature.
The first module, the second module and the third module are trained by the BP (error back-propagation) algorithm, the OLS (orthogonal least squares) learning algorithm, or an RBF network learning algorithm, and the weights of all layers in the deep learning network are adjusted.
And adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
The training results include the loss function value, the accuracy rate and the absolute error; the hyperparameters include the learning rate, the number of iterations and the batch size.
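A compact PyTorch sketch of training one module and adjusting a hyperparameter from the training result; the stand-in architecture, the fake data and the learning-rate decay rule are illustrative assumptions:

```python
import torch
from torch import nn, optim

# Stand-in module and data; the real modules and samples come from the
# face feature library and the training samples described above.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(), nn.Linear(64, 2))
opt = optim.Adam(model.parameters(), lr=1e-3)   # learning rate: a tunable hyperparameter
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 1, 32, 32)                 # fake depth-face patches
y = torch.randint(0, 2, (256,))                 # 1 = live, 0 = non-live

prev_loss = float("inf")
for epoch in range(5):                          # iteration count: another hyperparameter
    opt.zero_grad()
    logits = model(x)
    loss = loss_fn(logits, y)
    loss.backward()
    opt.step()
    acc = (logits.argmax(1) == y).float().mean().item()
    # Adjust a hyperparameter from the training result: halve the learning
    # rate when the loss stops improving (a simple assumed rule).
    if loss.item() > prev_loss * 0.99:
        for group in opt.param_groups:
            group["lr"] *= 0.5
    prev_loss = loss.item()
    print(f"epoch {epoch}: loss={loss.item():.3f} acc={acc:.2f}")
```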
Therefore, the deep learning network obtained by training on the training samples can be used for open-closed eye detection, living body detection and face verification, so that face attacks using photos of various materials, videos, masks and 3D dummies can be effectively prevented.
The embodiment of the application shows a specific scenario of the verification method, in which a user performs face entry and unlocking on an Android mobile phone equipped with a 3D ToF device. First, the face is entered: the acquisition of face images in the up, down, left, right and center directions is completed according to the phone's prompts, yielding the user's face templates, which are stored in the face feature library. Thereafter, each time the user uses the phone, imperceptible unlocking is achieved the moment the screen lights up, with the whole face recognition unlocking process completed within tens of milliseconds.
Therefore, the embodiment of the application can realize unlocking at any angle; even if the electronic equipment is not held upright, unlocking can be achieved by comparing the acquired image with the face images at different angles in the templates.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and includes a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor.
Optionally, when the electronic device is an authentication device, where the device may be an electronic device such as a smart phone, a tablet computer, or an intelligent wearable device, the program includes instructions for executing the following steps:
acquiring a near infrared image and a depth image of a target object;
performing face detection on the near infrared image to obtain face contour information;
performing living body detection according to the face contour information and the depth image;
performing face verification on the target object according to the near infrared image in response to the living body detection passing;
and performing an unlocking operation in response to the face verification passing.
Optionally, in the aspect of performing face detection on the near infrared image to obtain face contour information, the program includes instructions for performing the following steps:
adjusting the size of the near infrared image to obtain an image pyramid;
extracting facial features of the image pyramid and calibrating a frame;
and optimizing the frame to obtain the face contour information.
Optionally, after the face detection is performed on the near infrared image to obtain face contour information, the program includes instructions for executing the following steps:
and opening and closing eyes are detected according to the face contour information.
Optionally, in the aspect of open-eye and closed-eye detection according to the face contour information, the program includes instructions for:
inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes;
acquiring eye key point coordinates in the face contour information;
extracting eye features from the first module according to the eye key point coordinates;
comparing the extracted eye features with first pre-stored features to perform eye opening and closing detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when eyes are opened.
Optionally, the deep learning network further includes a second module, where the second module is configured to perform the living body detection, and the program includes instructions for performing the following steps in terms of performing living body detection according to the face contour information and the depth image:
dividing a first face region image in the depth image according to the face contour information of the near infrared image;
inputting a first face region image into the second module, and extracting first face features of the first face region image through the second module;
and comparing the first facial features with second pre-stored features to perform living body detection, wherein the second pre-stored features are living body facial feature information pre-stored in the facial feature library.
Optionally, the deep learning network further includes a third module, where the third module is configured to perform the face verification, and in the aspect of performing face verification on the target object according to the near infrared image, the program includes instructions for performing the following steps:
dividing a second face region image in the near infrared image according to the face contour information;
And inputting a second face region image into the third module, extracting second face features of the second face region image through the third module, comparing the second face features with third pre-stored features, and judging whether the second face features are the same person or not, wherein the third pre-stored features are the face features of the target object pre-stored in the face feature library.
Optionally, the deep learning network is trained, and in the training aspect, the program includes instructions for:
establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module;
selecting a training sample, wherein the training sample comprises an open-closed eye detection sample, a living body detection sample and a face verification sample;
extracting the first pre-stored feature from the open-eye and closed-eye detection sample by the first module, extracting the second pre-stored feature from the living body detection sample by the second module, extracting the third pre-stored feature from the face verification sample by the third module, and storing the first pre-stored feature, the second pre-stored feature and the third pre-stored feature into the face feature library;
Training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature;
and adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
The foregoing description of the embodiments of the present application has been presented primarily in terms of the implementation of a method. It will be appreciated that, in order to achieve the above-mentioned functions, the terminal includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the application may divide the functional units of the terminal according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
In accordance with the foregoing, referring to fig. 5, fig. 5 is a schematic structural diagram of a verification apparatus 500 according to an embodiment of the present application. The verification device includes a first acquisition unit 501, a face detection unit 502, a living body detection unit 503, a face verification unit 504, and an unlocking unit 505, wherein:
a first acquiring unit 501 configured to acquire a near infrared image and a depth image of a target object;
the face detection unit 502 is configured to perform face detection on the near infrared image to obtain face contour information;
a living body detection unit 503, configured to perform living body detection according to the face contour information and the depth image;
a face verification unit 504, configured to perform face verification on the target object according to the near infrared image in response to the passing of the living body detection;
an unlocking unit 505, configured to perform an unlocking operation in response to the face verification passing.
Optionally, in the aspect of performing face detection on the near infrared image to obtain face contour information, the face detection unit 502 is specifically configured to:
adjusting the size of the near infrared image to obtain an image pyramid;
extracting facial features of the image pyramid and calibrating a frame;
And optimizing the frame to obtain the face contour information.
Optionally, the verification apparatus further includes an open-close eye detection unit 506; after the face detection is performed on the near infrared image to obtain face contour information, the open-close eye detection unit 506 is configured to:
and opening and closing eyes are detected according to the face contour information.
Optionally, in the aspect of open-close eye detection according to the face contour information, the open-close eye detection unit 506 is specifically configured to:
inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes;
acquiring eye key point coordinates in the face contour information;
extracting eye features from the first module according to the eye key point coordinates;
comparing the extracted eye features with first pre-stored features to perform eye opening and closing detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when eyes are opened.
Optionally, the deep learning network further includes a second module, where the second module is configured to perform the living body detection, and the living body detection unit 503 is specifically configured to:
Dividing a first face region image in the depth image according to the face contour information of the near infrared image;
inputting a first face region image into the second module, and extracting first face features of the first face region image through the second module;
and comparing the first facial features with second pre-stored features to perform living body detection, wherein the second pre-stored features are living body facial feature information pre-stored in the facial feature library.
Optionally, the deep learning network further includes a third module, where the third module is configured to perform the face verification, and the face verification unit 504 is specifically configured to:
dividing a second face region image in the near infrared image according to the face contour information;
and inputting a second face region image into the third module, extracting second face features of the second face region image through the third module, comparing the second face features with third pre-stored features, and judging whether the second face features are the same person or not, wherein the third pre-stored features are the face features of the target object pre-stored in the face feature library.
Optionally, the deep learning network is trained, and the verification device further includes a training unit 507, where the training unit 507 is configured to:
establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module;
selecting a training sample, wherein the training sample comprises an open-closed eye detection sample, a living body detection sample and a face verification sample;
extracting the first pre-stored feature from the open-eye and closed-eye detection sample by the first module, extracting the second pre-stored feature from the living body detection sample by the second module, extracting the third pre-stored feature from the face verification sample by the third module, and storing the first pre-stored feature, the second pre-stored feature and the third pre-stored feature into the face feature library;
training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature;
and adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network (a sketch follows).
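At a very high level, the training flow might be plumbed as below. Everything here is an assumed interface (an extract/fit pair per module, the feature library as a dictionary); the disclosure specifies the flow, not the code.

def train_network(modules, samples, feature_library, epochs=10):
    # modules and samples are dicts keyed 'eye', 'liveness', 'verify';
    # feature_library maps the same keys to stored feature lists.
    # Step 1: extract and store the pre-stored features per module.
    for name, module in modules.items():
        feature_library[name] = [module.extract(s) for s in samples[name]]
    # Step 2: train each module against its own stored features.
    for name, module in modules.items():
        for _ in range(epochs):
            module.fit(samples[name], feature_library[name])
    # Step 3: hyperparameter adjustment according to the training result
    # (e.g. learning rates or thresholds) would be applied here and the
    # loop repeated until the trained network is obtained.
    return modules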
The foregoing units may be used to perform the methods described in the foregoing embodiments; detailed descriptions are given in the method embodiments and are not repeated here.
In the embodiment of the present application, a near infrared image and a depth image of a target object are acquired; face detection is performed on the near infrared image to obtain face contour information; living body detection is performed according to the face contour information and the depth image; in response to the living body detection passing, face verification is performed on the target object according to the near infrared image; and in response to the face verification passing, an unlocking operation is executed. With this scheme, performing face detection, living body detection, and face verification on the near infrared image and the depth image effectively addresses face recognition under dim or backlit conditions and strengthens the defense against spoofing such as dummy attacks. The end-to-end flow is sketched below.
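A compact sketch of the whole pipeline, with each stage passed in as a callable; all names are illustrative stand-ins for the units described above, not identifiers from this disclosure.

def unlock_flow(nir_image, depth_image, detect_face, eye_check,
                liveness_check, identity_check, do_unlock):
    contour = detect_face(nir_image)            # face detection on NIR image
    if contour is None:
        return False                            # no face found
    if not eye_check(nir_image, contour):       # open-close eye detection
        return False
    if not liveness_check(depth_image, contour):
        return False                            # spoof rejected via depth
    if not identity_check(nir_image, contour):
        return False                            # not the enrolled target
    do_unlock()                                 # unlocking operation
    return True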
The present application also provides a computer-readable storage medium storing a computer program for electronic data exchange, the computer program causing a computer to execute some or all of the steps of any one of the verification methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program that causes a computer to perform some or all of the steps of any one of the verification methods described in the method embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are expressed as a series of combined actions, but those skilled in the art should understand that the present application is not limited by the described order of actions, as some steps may be performed in another order or simultaneously. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present application. Each embodiment emphasizes particular aspects; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The above embodiments are merely for illustrating the technical solutions of the present application, not for limiting them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features thereof may be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (8)

1. A method of authentication, the method comprising:
acquiring a near infrared image and a depth image of a target object;
performing face detection on the near infrared image to obtain face contour information;
inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes;
acquiring eye key point coordinates in the face contour information;
extracting eye features through the first module according to the eye key point coordinates;
comparing the extracted eye features with first pre-stored features to perform open-close eye detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when the eyes are open;
in a case where the open-close eye detection result is that the eyes are open, performing living body detection according to the face contour information and the depth image, including: segmenting a first face region image from the depth image according to the face contour information of the near infrared image; inputting the first face region image into a second module of the deep learning network, and extracting first face features of the first face region image through the second module; and comparing the first face features with second pre-stored features to perform the living body detection, wherein the second pre-stored features are living body face feature information pre-stored in the face feature library;
in response to the living body detection passing, performing face verification on the target object according to the near infrared image, including: segmenting a second face region image from the near infrared image according to the face contour information; inputting the second face region image into a third module, and extracting second face features of the second face region image through the third module; and comparing the second face features with third pre-stored features to judge whether they belong to the same person, wherein the third pre-stored features are face features of the target object pre-stored in the face feature library;
and in response to the face verification passing, executing an unlocking operation.
2. The method of claim 1, wherein performing face detection on the near infrared image to obtain face contour information comprises:
scaling the near infrared image to a plurality of sizes to obtain an image pyramid;
extracting facial features from the image pyramid and calibrating frames;
and optimizing the frames to obtain the face contour information.
3. The method of claim 1, wherein the deep learning network is trained, the training comprising:
establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module;
selecting a training sample, wherein the training sample comprises an open-close eye detection sample, a living body detection sample and a face verification sample;
extracting the first pre-stored feature from the open-close eye detection sample through the first module, extracting the second pre-stored feature from the living body detection sample through the second module, extracting the third pre-stored feature from the face verification sample through the third module, and storing the first, second, and third pre-stored features in the face feature library;
training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature;
and adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
4. A verification apparatus, characterized in that the verification apparatus comprises:
a first acquisition unit configured to acquire a near infrared image and a depth image of a target object;
a face detection unit, configured to perform face detection on the near infrared image to obtain face contour information;
an open-close eye detection unit, configured to:
inputting the near infrared image into a first module of a deep learning network, wherein the first module is used for detecting open and closed eyes;
acquiring eye key point coordinates in the face contour information;
extracting eye features through the first module according to the eye key point coordinates;
comparing the extracted eye features with first pre-stored features to perform open-close eye detection, wherein the first pre-stored features are eye features pre-stored in a face feature library when the eyes are open;
a living body detection unit, configured to perform living body detection according to the face contour information and the depth image in a case where the open-close eye detection result is that the eyes are open, by: segmenting a first face region image from the depth image according to the face contour information of the near infrared image; inputting the first face region image into a second module of the deep learning network, and extracting first face features of the first face region image through the second module;
and comparing the first face features with second pre-stored features to perform the living body detection, wherein the second pre-stored features are living body face feature information pre-stored in the face feature library;
a face verification unit, configured to perform face verification on the target object according to the near infrared image in response to the living body detection passing, by: segmenting a second face region image from the near infrared image according to the face contour information; inputting the second face region image into a third module, and extracting second face features of the second face region image through the third module; and comparing the second face features with third pre-stored features to judge whether they belong to the same person, wherein the third pre-stored features are face features of the target object pre-stored in the face feature library;
and an unlocking unit, configured to execute an unlocking operation in response to the face verification passing.
5. The authentication device according to claim 4, wherein, in terms of the face detection performed on the near infrared image to obtain face contour information, the face detection unit is specifically configured to:
scaling the near infrared image to a plurality of sizes to obtain an image pyramid;
extracting facial features from the image pyramid and calibrating frames;
and optimizing the frames to obtain the face contour information.
6. The authentication apparatus of claim 4, wherein the deep learning network is trained, the authentication apparatus further comprising a training unit configured to:
establishing a deep learning network, wherein the deep learning network comprises the first module, the second module and the third module;
selecting a training sample, wherein the training sample comprises an open-close eye detection sample, a living body detection sample and a face verification sample;
extracting the first pre-stored feature from the open-close eye detection sample through the first module, extracting the second pre-stored feature from the living body detection sample through the second module, extracting the third pre-stored feature from the face verification sample through the third module, and storing the first, second, and third pre-stored features in the face feature library;
training the first module according to the first pre-stored feature, training the second module according to the second pre-stored feature, and training the third module according to the third pre-stored feature;
and adjusting the hyperparameters of the deep learning network according to the training result to obtain the trained deep learning network.
7. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-3.
8. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any of claims 1-3.
CN201910568579.4A 2019-06-27 2019-06-27 Verification method and verification device Active CN110287900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910568579.4A CN110287900B (en) 2019-06-27 2019-06-27 Verification method and verification device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910568579.4A CN110287900B (en) 2019-06-27 2019-06-27 Verification method and verification device

Publications (2)

Publication Number Publication Date
CN110287900A CN110287900A (en) 2019-09-27
CN110287900B true CN110287900B (en) 2023-08-01

Family

ID=68019333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910568579.4A Active CN110287900B (en) 2019-06-27 2019-06-27 Verification method and verification device

Country Status (1)

Country Link
CN (1) CN110287900B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112861568A (en) * 2019-11-12 2021-05-28 Oppo广东移动通信有限公司 Authentication method and device, electronic equipment and computer readable storage medium
CN110929286A (en) * 2019-11-20 2020-03-27 四川虹美智能科技有限公司 Method for dynamically detecting operation authorization and intelligent equipment
CN111160309B (en) * 2019-12-31 2023-05-16 深圳云天励飞技术有限公司 Image processing method and related equipment
CN113128320B (en) * 2020-01-16 2023-05-16 浙江舜宇智能光学技术有限公司 Human face living body detection method and device based on TOF camera and electronic equipment
CN113313856A (en) * 2020-02-10 2021-08-27 深圳市光鉴科技有限公司 Door lock system with 3D face recognition function and using method
CN111582197A (en) * 2020-05-07 2020-08-25 贵州省邮电规划设计院有限公司 Living body based on near infrared and 3D camera shooting technology and face recognition system
CN113673286B (en) * 2020-05-15 2024-04-16 深圳市光鉴科技有限公司 Depth reconstruction method, system, equipment and medium based on target area
CN113674230B (en) * 2021-08-10 2023-12-19 深圳市捷顺科技实业股份有限公司 Method and device for detecting key points of indoor backlight face

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005081178A1 (en) * 2004-02-17 2005-09-01 Yeda Research & Development Co., Ltd. Method and apparatus for matching portions of input images
JP2006260397A (en) * 2005-03-18 2006-09-28 Konica Minolta Holdings Inc Eye opening degree estimating device
US8031970B2 (en) * 2007-08-27 2011-10-04 Arcsoft, Inc. Method of restoring closed-eye portrait photo
CN100514353C (en) * 2007-11-26 2009-07-15 清华大学 Living body detecting method and system based on human face physiologic moving
CN103440479B (en) * 2013-08-29 2016-12-28 湖北微模式科技发展有限公司 A kind of method and system for detecting living body human face
US20180211096A1 (en) * 2015-06-30 2018-07-26 Beijing Kuangshi Technology Co., Ltd. Living-body detection method and device and computer program product
CN111144293A (en) * 2015-09-25 2020-05-12 北京市商汤科技开发有限公司 Human face identity authentication system with interactive living body detection and method thereof
CN106997452B (en) * 2016-01-26 2020-12-29 北京市商汤科技开发有限公司 Living body verification method and device
CN107609383B (en) * 2017-10-26 2021-01-26 奥比中光科技集团股份有限公司 3D face identity authentication method and device
CN107766840A (en) * 2017-11-09 2018-03-06 杭州有盾网络科技有限公司 A kind of method, apparatus of blink detection, equipment and computer-readable recording medium
CN108491772A (en) * 2018-03-09 2018-09-04 天津港(集团)有限公司 A kind of face recognition algorithms and face identification device
CN108805024B (en) * 2018-04-28 2020-11-24 Oppo广东移动通信有限公司 Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108764069B (en) * 2018-05-10 2022-01-14 北京市商汤科技开发有限公司 Living body detection method and device
CN109034102B (en) * 2018-08-14 2023-06-16 腾讯科技(深圳)有限公司 Face living body detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110287900A (en) 2019-09-27

Similar Documents

Publication Publication Date Title
CN110287900B (en) Verification method and verification device
US10956714B2 (en) Method and apparatus for detecting living body, electronic device, and storage medium
KR102483642B1 (en) Method and apparatus for liveness test
US10691939B2 (en) Systems and methods for performing iris identification and verification using mobile devices
KR102299847B1 (en) Face verifying method and apparatus
US9971920B2 (en) Spoof detection for biometric authentication
KR101495430B1 (en) Quality metrics for biometric authentication
KR101309889B1 (en) Texture features for biometric authentication
CN110462633B (en) Face recognition method and device and electronic equipment
US20160019421A1 (en) Multispectral eye analysis for identity authentication
US20160019420A1 (en) Multispectral eye analysis for identity authentication
US20170091550A1 (en) Multispectral eye analysis for identity authentication
CN105874472A (en) Multi-band biometric camera system having iris color recognition
JP2018508888A (en) System and method for performing fingerprint-based user authentication using an image captured using a mobile device
CN109190522B (en) Living body detection method based on infrared camera
KR20190094352A (en) System and method for performing fingerprint based user authentication using a captured image using a mobile device
CN112487921B (en) Face image preprocessing method and system for living body detection
CN107517340B (en) Camera module and electronic equipment
US20210256244A1 (en) Method for authentication or identification of an individual
CN107844742A (en) Facial image glasses minimizing technology, device and storage medium
EP3872753B1 (en) Wrinkle detection method and terminal device
Roy et al. Iris segmentation using game theory
US11948402B2 (en) Spoof detection using intraocular reflection correspondences
CN113095116B (en) Identity recognition method and related product
KR20210028469A (en) Ai based authentication method using finger vein

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant