CN106997452B - Living body verification method and device - Google Patents

Living body verification method and device


Publication number
CN106997452B
Authority
CN
China
Prior art keywords
eye
face
video
characteristic value
verified
Legal status
Active
Application number
CN201610051911.6A
Other languages
Chinese (zh)
Other versions
CN106997452A (en)
Inventor
吴立威
彭义刚
罗梓鑫
曹旭东
李诚
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201610051911.6A
Publication of CN106997452A
Application granted
Publication of CN106997452B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements

Abstract

The invention provides a living body verification method and device. The method includes: acquiring a face video to be verified; extracting multiple frames of face eye images from the face video to be verified; performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value; performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value; and verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face. The method solves the problem in the related art that, when living body verification is performed by checking blink motion, spoofed blinks simulated by creating local motion at the eyes cannot be ruled out, so the verification result may be incorrect; the influence of such fake blinking on the living body verification result is thereby eliminated.

Description

Living body verification method and device
Technical Field
The invention relates to the field of biometric recognition, and in particular to a living body verification method and device.
Background
Face recognition technology has matured. However, in most applications, besides recognizing the face, living body verification of the face is also required, to prevent an illegal user from spoofing with a paper photo, an electronic-screen photo, and the like.
Many existing living body verification methods work by detecting blink motion. However, these methods are typically designed only to detect the blink itself (e.g., CN101216887A, CN103400122A) and do not consider how to resist spoofing that simulates a blink by creating local motion at the eyes. Optical-flow-based methods (e.g., CN101908140A) detect optical-flow changes in the face image to judge whether the face to be detected is live; but computing optical flow is expensive, and such methods likewise cannot resist blink spoofing via local eye motion.
In short, in the related art, when living body verification is performed by checking blink motion, spoofed blinks produced by creating local motion at the eyes cannot be ruled out, so the verification result can be incorrect; no effective solution has been proposed.
Disclosure of Invention
Therefore, the technical problem the present invention solves is that, in the related art, blink-based living body verification cannot rule out spoofed blinks simulated by creating local motion at the eyes, which makes the verification result incorrect; to this end, a living body verification method and device are provided.
According to an aspect of the present invention, a living body verification method is provided, comprising: acquiring a face video to be verified; extracting multiple frames of face eye images from the face video to be verified; performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value; performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value; and verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face.
Optionally, verifying whether the face in the face video to be verified is a living human face at least according to the first open/closed-eye feature value and the first occluded-eye feature value includes: determining that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs in the face video to be verified and the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded.
Optionally, performing open/closed-eye judgment on the multiple frames of face eye images to obtain the first open/closed-eye feature value includes: inputting the multiple frames of face eye images into a deep neural network for open/closed-eye classification to obtain the first open/closed-eye feature value; the deep neural network for open/closed-eye classification is used to judge whether the multiple frames of face eye images show open eyes or closed eyes.
Optionally, performing occluded-eye judgment on the multiple frames of face eye images to obtain the first occluded-eye feature value includes: inputting the multiple frames of face eye images into a deep neural network for occluded-eye classification to obtain the first occluded-eye feature value; the deep neural network for occluded-eye classification is used to judge whether the multiple frames of face eye images are real human eye images or occluded, disguised eye images.
Optionally, verifying whether the face in the face video to be verified is a living human face at least according to the first open/closed-eye feature value and the first occluded-eye feature value includes: inputting the first open/closed-eye feature value and the first occluded-eye feature value into a video-level blink classifier; the video-level blink classifier is used to verify whether the face in the face video to be verified is a living human face.
Optionally, before the multiple frames of face eye images are input into the deep neural network for open/closed-eye classification, the network is trained, the training including: training the deep neural network for open/closed-eye classification using a plurality of open-eye images and a plurality of closed-eye images.
Optionally, before the multiple frames of face eye images are input into the deep neural network for occluded-eye classification, the network is trained, the training including: training the deep neural network for occluded-eye classification using eye images of a plurality of real human faces and eye images of a plurality of disguised living faces; an eye image of a disguised living face is taken from a disguised face image in which a blink motion is simulated by occluding the eye region.
Optionally, before the first open/closed-eye feature value and the first occluded-eye feature value are input into a video-level blink classifier, the classifier is trained, the training including: acquiring positive video samples containing a real person's normal blink and negative video samples containing none; extracting, through the deep neural network for open/closed-eye classification, second open/closed-eye feature values of the multi-frame images in the positive and negative video samples, and extracting, through the deep neural network for occluded-eye classification, second occluded-eye feature values of those images; and training the video-level blink classifier using the second open/closed-eye feature values and the second occluded-eye feature values.
Optionally, extracting the multiple frames of face eye images from the face video to be verified includes: acquiring a short video from the face video to be verified using a sliding window; and extracting the multiple frames of face eye images from the short video.
Optionally, the method further comprises: calculating a motion speed value of the face image in the face video to be verified; verifying whether the face in the face video to be verified is a living human face then includes: verifying it according to the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed value of the face image.
Optionally, verifying whether the face in the face video to be verified is a living human face according to the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed value of the face image includes: determining that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs, the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded, and the motion speed value of the face image is smaller than a predetermined threshold.
Optionally, calculating the motion speed value of the face image in the face video to be verified includes: acquiring the coordinate information of the face key feature points in two adjacent face frames of the face video to be verified; and calculating the motion speed value of the face image from the coordinate information of the face key feature points.
According to another aspect of the present invention, a living body verification apparatus is also provided, comprising: an acquisition module for acquiring a face video to be verified; an extraction module for extracting multiple frames of face eye images from the face video to be verified; a first feature value acquisition module for performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value; a second feature value acquisition module for performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value; and a verification module for verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face.
Optionally, the verification module is specifically configured to determine that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs in the face video to be verified and the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded.
Optionally, the first feature value acquisition module is specifically configured to input the multiple frames of face eye images into a deep neural network for open/closed-eye classification to obtain the first open/closed-eye feature value; the deep neural network for open/closed-eye classification is used to judge whether the multiple frames of face eye images show open eyes or closed eyes.
Optionally, the second feature value acquisition module is specifically configured to input the multiple frames of face eye images into a deep neural network for occluded-eye classification to obtain the first occluded-eye feature value; the deep neural network for occluded-eye classification is used to judge whether the multiple frames of face eye images are real human eye images or occluded, disguised eye images.
Optionally, the verification module is specifically configured to input the first open/closed-eye feature value and the first occluded-eye feature value into a video-level blink classifier and verify from them whether the face in the face video to be verified is a living human face; the video-level blink classifier is used to verify whether the face in the face video to be verified is a living human face.
Optionally, the apparatus further comprises: a first training module for training the deep neural network for open/closed-eye classification using a plurality of open-eye images and a plurality of closed-eye images.
Optionally, the apparatus further comprises: a second training module for training the deep neural network for occluded-eye classification using eye images of a plurality of real human faces and eye images of a plurality of disguised living faces; an eye image of a disguised living face is taken from a disguised face image in which a blink motion is simulated by occluding the eye region.
Optionally, the apparatus further comprises: a third training module for training the video-level blink classifier before the first open/closed-eye feature value and the first occluded-eye feature value are input into it, wherein the third training module comprises: a first acquisition unit for acquiring positive video samples containing a real person's normal blink and negative video samples containing none; a first extraction unit for extracting, through the deep neural network for open/closed-eye classification, second open/closed-eye feature values of the multi-frame images in the positive and negative video samples, and extracting, through the deep neural network for occluded-eye classification, second occluded-eye feature values of those images; and a training unit for training the video-level blink classifier using the second open/closed-eye feature values and the second occluded-eye feature values.
Optionally, the extraction module comprises: a second acquisition unit for acquiring a short video from the face video to be verified using a sliding window; and a second extraction unit for extracting the multiple frames of face eye images from the short video.
Optionally, the apparatus further comprises: a calculation module for calculating the motion speed value of the face image in the face video to be verified; the verification module is then specifically configured to verify whether the face in the face video to be verified is a living human face according to the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed value of the face image.
Optionally, the verification module is specifically configured to determine that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs, the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded, and the motion speed value of the face image is smaller than a predetermined threshold.
Optionally, the calculation module comprises: a third acquisition unit for acquiring the coordinate information of the face key feature points in two adjacent face frames of the face video to be verified; and a calculation unit for calculating the motion speed value of the face image in the face video to be verified from the coordinate information of the face key feature points.
According to still another aspect of the present invention, a living body verification system is also provided, comprising: a camera device for capturing a face video to be verified; and a processor connected to the camera device, for receiving the face video to be verified and performing the following steps: extracting multiple frames of face eye images from the face video to be verified; performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value; performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value; and verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face.
According to the invention, a face video to be verified is acquired; multiple frames of face eye images are extracted from it; open/closed-eye judgment on those images yields a first open/closed-eye feature value; occluded-eye judgment yields a first occluded-eye feature value; and whether the face in the face video to be verified is a living human face is verified at least according to the two feature values. This solves the problem in the related art that, during blink-based living body verification, spoofed blinks simulated by creating local motion at the eyes cannot be ruled out and corrupt the verification result, thereby eliminating the influence of fake blinking on the living body verification result.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for their description are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of a living body verification method according to embodiment 1 of the present invention;
FIG. 2 is a flowchart of training a deep neural network for open/closed-eye classification according to embodiment 1 of the present invention;
FIG. 3 is a flowchart of training a deep neural network for occluded-eye classification according to embodiment 1 of the present invention;
FIG. 4 is a flowchart of training a video-level blink classifier according to embodiment 1 of the present invention;
FIG. 5 is a flowchart of performing living body verification on a face video to be verified using the trained video-level blink classifier according to embodiment 1 of the present invention;
FIG. 6 is a block diagram of a living body verification apparatus according to embodiment 2 of the present invention;
FIG. 7 is a block diagram of the third training module in the living body verification apparatus according to embodiment 2 of the present invention;
FIG. 8 is a block diagram of the extraction module in the living body verification apparatus according to embodiment 2 of the present invention;
FIG. 9 is a block diagram of a preferred living body verification apparatus according to embodiment 2 of the present invention;
FIG. 10 is a block diagram of the calculation module in the living body verification apparatus according to embodiment 2 of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Example 1
In this embodiment, a living body verification method is provided. Fig. 1 is a flowchart of the living body verification method according to embodiment 1 of the present invention; as shown in fig. 1, the flow includes the following steps:
step S11: and acquiring a face video to be verified.
Step S12: and extracting a plurality of frames of human face eye images in the human face video to be verified.
Step S13: and carrying out open-close eye judgment on the multi-frame human face eye images to obtain a first open-close eye characteristic value.
Step S14: and judging the shielding eyes of the multi-frame human face eye images to obtain a first shielding eye characteristic value.
Step S15: and verifying whether the face corresponding to the face video to be verified is a live human face or not at least according to the first opening and closing eye characteristic value and the first shielding eye characteristic value.
Through the above steps, when detecting whether the face in the face video to be verified is a living face, two factors, the first open/closed-eye feature value and the first occluded-eye feature value, are extracted and used jointly for living body verification. Compared with the related art, which verifies the living body only by detecting blink motion, these steps rule out spoofed blinks simulated by creating local motion at the eyes, eliminating the influence of fake blinking on the living body verification result.
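For orientation, the following is a minimal Python sketch of the flow of steps S11-S15; the callables extract_eye_crop, open_closed_net, occlusion_net, and blink_classifier are illustrative placeholders assumed for this sketch, not names defined by the patent.

```python
# Minimal sketch of steps S11-S15; all four callables are assumptions.
import numpy as np

def verify_liveness(frames, extract_eye_crop, open_closed_net,
                    occlusion_net, blink_classifier) -> bool:
    """Return True if the face video is judged to show a living face."""
    eye_crops = [extract_eye_crop(f) for f in frames]                # S12
    open_scores = np.array([open_closed_net(c) for c in eye_crops])  # S13
    occl_scores = np.array([occlusion_net(c) for c in eye_crops])    # S14
    # S15: joint verification from both per-frame feature sequences
    # (a motion-speed sequence may be appended as well, see below).
    return bool(blink_classifier(np.concatenate([open_scores, occl_scores])))
```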
If the effective frame rate is low, a fast blink can be missed, so sampling only one frame out of every few frames is inadvisable; in an optional embodiment, the multi-frame face eye images are therefore extracted from every frame of the face video to be verified. The first open/closed-eye feature value is used to judge whether the multi-frame face eye images show open or closed eyes; the first occluded-eye feature value is used to judge whether they are real human eye images or occluded, disguised eye images.
Step S13 above obtains the first open/closed-eye feature value. In an optional embodiment it is obtained through a deep neural network for open/closed-eye classification: the multi-frame face eye images are input into that network, which judges whether they show open eyes or closed eyes, and the first open/closed-eye feature value is obtained.
Before the multi-frame face eye images are input into the deep neural network for open/closed-eye classification, the network is trained. It should be noted that this can be done in various ways, exemplified below. In an optional embodiment, a number of open-eye and closed-eye face photos are collected, the eye images in all collected photos are extracted and divided into open-eye and closed-eye images, and these are used to train the deep neural network for open/closed-eye classification, so that it can judge whether an input face video to be verified contains a blink motion.
As a preferred embodiment, the deep neural network for open/closed-eye classification is trained before step S13; as shown in fig. 2, the process includes the following steps:
Step S21: collect a large number of face photos of different people, with open eyes and with closed eyes, under different lighting conditions.
Step S22: extract the eye images from all photos.
Step S23: design a suitable deep neural network model (see, for example, the network structure in the article Gradient-Based Learning Applied to Document Recognition), label all open-eye images as class A and all closed-eye images as class B, use the class A and class B eye images as the input of the designed model, and train the deep neural network for open/closed-eye classification so that it reliably distinguishes open eyes from closed eyes.
Step S14 above obtains the first occluded-eye feature value. In an optional embodiment it is obtained through a deep neural network for occluded-eye classification: the multi-frame face eye images are input into that network, which judges whether they are real human eye images or occluded, disguised eye images, and the first occluded-eye feature value is obtained.
Before the multi-frame face eye images are input into the deep neural network for occluded-eye classification, the network is trained. It should be noted that this can be done in various ways, exemplified below. In an optional embodiment, a number of real face images containing eye opening and closing and a number of disguised face images that simulate blinking by occluding the eye region are collected; the eye images in all collected images are extracted and divided into real-face eye images and disguised-face eye images, and these are used to train the deep neural network for occluded-eye classification, so that it can judge whether an input face video to be verified contains eye-occluding actions.
Since the accuracy of the open/closed-eye classifier drops when the eye region is occluded, an attacker may deceive the system by occlusion (for example, waving a finger over the eye region of a photo at a blinking rhythm), causing the system to mistake this for a real person blinking. To resist this deception, an occlusion classifier is added. As a preferred embodiment, the deep neural network for occluded-eye classification is trained before step S14; as shown in fig. 3, the process includes the following steps:
step S31: and the living human face is disguised by using a paper human face photo or an electronic screen display human face photo. Blinking behavior is simulated using different means including, but not limited to, moving a simulated blink back and forth with a finger over the eyes of a photograph of a person's face, etc.
Step S32: eye images in all photographs were extracted.
Step S33: designing a proper deep neural network model (for example, refer to a network structure in an article of Gradient-Based Learning Applied to Document Recognition), taking eye images of all real human faces as a class C, taking eye images of all disguised living human faces as a class D, taking eye images of the class C and the class D as the input of the designed deep neural network model, and training the deep neural network model for the occlusion eye classification, so that the model can well distinguish the real human eyes from the disguised human eyes (partially) occluded.
Step S15 above produces the living body verification result. In an optional embodiment, the first open/closed-eye feature value and the first occluded-eye feature value are input into a video-level blink classifier, which verifies whether the face in the face video to be verified is a living human face at least according to those two feature values.
As to how the verification is made from the two feature values: in an optional embodiment, when the first open/closed-eye feature value indicates that a blink motion occurs in the face video to be verified and the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded, the face is determined to be a living human face and verification passes. A blink indicated by the first open/closed-eye feature value alone cannot be taken as a real person's blink, because it may be a spoof that simulates a blink by creating local motion at the eyes; the combined rule therefore rules out mistaking a simulated blink for a real one and protects the living body verification result.
Step S15 also involves training the video-level blink classifier, which can be done in various ways, exemplified below. In an optional embodiment, positive video samples containing a real person's normal blink and negative video samples containing none are acquired; second open/closed-eye feature values of the multi-frame images are extracted from the positive and negative samples through the deep neural network for open/closed-eye classification, and second occluded-eye feature values are extracted through the deep neural network for occluded-eye classification; the video-level blink classifier is then trained with these second feature values. The trained classifier can judge whether the face in an input face video to be verified is a real human face.
As a preferred embodiment, the video-level blink classifier is trained as shown in fig. 4, with the following specific steps:
Step S41: generate a large number of positive short-video samples of a real person blinking normally; each sample contains n consecutive face frames and one normal blink of a real person.
Step S42: generate a large number of negative short-video samples without blinking; each contains n consecutive face frames but no normal blink of a real person, e.g. the eyes stay open throughout, the eyes stay closed throughout, or a blink is forged by moving a finger back and forth in front of the eyes in a photo.
Step S43: extract the open/closed-eye feature of the face eyes in each frame of the video: crop the face eyes in each frame and input the eye images into the deep neural network for open/closed-eye classification trained in steps S21-S23 to obtain the open/closed-eye feature score.
Step S44: extract the occluded-eye feature of the face eyes in each frame of the video: crop the face eyes in each frame and input the eye images into the deep neural network for occluded-eye classification trained in steps S31-S33 to obtain the occluded-eye feature score.
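A minimal sketch of steps S43-S44 under the same assumptions as above, taking a softmax probability as the per-frame feature score:

```python
import torch
import torch.nn.functional as F

def frame_scores(eye_crop: torch.Tensor, open_closed_net, occlusion_net):
    """Steps S43-S44: per-frame open/closed-eye and occluded-eye scores
    for one eye crop of assumed shape (1, 24, 24)."""
    with torch.no_grad():
        x = eye_crop.unsqueeze(0)                                # batch of 1
        open_score = F.softmax(open_closed_net(x), dim=1)[0, 0]  # P(class A: open)
        occl_score = F.softmax(occlusion_net(x), dim=1)[0, 0]    # P(class C: real)
    return open_score.item(), occl_score.item()
```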
To make the living body verification result more accurate, in an optional embodiment a motion speed value of the face image in the face video to be verified is used as an additional judgment basis beside the first open/closed-eye feature value and the first occluded-eye feature value. That is, the living body verification method of this embodiment further includes calculating the motion speed of each frame's face image relative to the previous frame, and inputting the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed value of the face image into the video-level blink classifier to verify whether the face in the face video to be verified is a living human face.
As a preferred embodiment, step S45 extracts this relative motion feature and specifically includes the following sub-steps:
Step S45.1: obtain the coordinate information of the face key feature points in the two consecutive face frames using face tracking and face key-point alignment.
Step S45.2: from the key-point coordinates of the two frames obtained in step S45.1, calculate the relative motion magnitude score of the face between the two frames.
These sub-steps compute the motion speed value of the face image in the face video to be verified; it should be noted that this can be done in various ways, exemplified below. In an optional embodiment, the coordinates of the face key feature points in two adjacent face frames of the face video to be verified are obtained, and the motion speed value of the face image is calculated from them.
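One simple way to realize steps S45.1-S45.2 (an illustrative choice, since the patent leaves the exact formula open) is the mean displacement of aligned key points between the two frames:

```python
import numpy as np

def motion_speed(prev_pts: np.ndarray, curr_pts: np.ndarray) -> float:
    """Steps S45.1-S45.2: relative motion score between two adjacent frames,
    here the mean displacement of K aligned key points of shape (K, 2)."""
    return float(np.mean(np.linalg.norm(curr_pts - prev_pts, axis=1)))
```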
Step S46: repeat steps S43-S45 for each frame of every positive and negative short-video sample, computing the open/closed-eye feature score, the occluded-eye feature score, and the relative-motion score of each face frame, and concatenate the scores into the overall feature of the short video, of dimension 3n.
When the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed of the face image are all used as the judgment basis for living body verification, in an optional embodiment the face in the face video to be verified is determined to be a living face, and verification passes, when the first open/closed-eye feature value indicates that a blink motion occurs, the first occluded-eye feature value indicates that no frame of the face eye images is occluded, and the motion speed value of the face image is smaller than a predetermined threshold.
Step S47: compute the features of all positive and negative short-video samples from steps S41 and S42 according to step S46; a video-level blink classifier can then be trained with a common classifier, such as a linear classifier, a support vector machine, or a random forest.
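A hedged sketch of steps S46-S47 using a support vector machine, one of the classifier options named above; the array shapes and the label convention are assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def short_video_feature(open_s, occl_s, speed_s) -> np.ndarray:
    """Step S46: concatenate the three length-n per-frame score sequences
    into a single 3n-dimensional feature vector for the short video."""
    return np.concatenate([open_s, occl_s, speed_s])

def train_blink_classifier(X: np.ndarray, y: np.ndarray) -> SVC:
    """Step S47: X is (num_samples, 3n); y is 1 for real-blink positives,
    0 for negatives. An SVM is one of the classifier options named above."""
    clf = SVC(kernel="linear")
    clf.fit(X, y)
    return clf
```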
This embodiment thus proposes a blink-detection-based living body verification method: train a deep neural network model for open/closed-eye classification and one for occluded-eye classification; use the trained models to extract open/closed-eye features and occluded-eye features, and additionally extract the face motion speed feature; extract from the blink positive and negative samples either two kinds of features (the first open/closed-eye and first occluded-eye feature values) or three kinds (those two plus the motion speed value of the face image) and train a video-level blink classifier; and perform living body verification on the face video to be verified with the trained video-level blink classifier.
In the living body verification method of this embodiment, an eye image is used as the input of a multi-layer deep convolutional neural network composed of convolutional, down-sampling, and nonlinearity layers connected in sequence, with an f-dimensional fully connected layer at the end and the open/closed-eye state as the output; the network is trained on the eye images and open/closed-eye labels of the training set by back-propagation, updating the model parameters on the training data with stochastic gradient descent.
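The training procedure just described (back-propagation with stochastic gradient descent) could look like the following PyTorch sketch; batch size, epoch count, learning rate, and momentum are assumed values:

```python
import torch
import torch.nn as nn

def train_eye_net(model: nn.Module, loader, epochs: int = 10, lr: float = 0.01):
    """Back-propagation with SGD, as the embodiment describes; the
    hyperparameters here are illustrative assumptions."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:    # eye crops and open/closed labels
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()              # back-propagation
            optimizer.step()             # SGD parameter update
    return model
```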
In summary, the living body verification method of this embodiment designs a video-level blink classifier and uses the open/closed-eye, occluded-eye, and speed features together. It can therefore detect a real person's blink effectively while preventing attackers from deceiving the system with photos and similar means.
A blink is accepted as genuinely effective only when the open/closed-eye classifier detects a clear blink motion, the occluded-eye classifier classifies the eyes as those of a real person, and the speed classifier finds no large motion.
Since the face video to be verified may occupy a large space, in an optional embodiment the face eye images are extracted via a sliding window: a short video is taken from the face video to be verified with the sliding window, and the face eye images of each of its frames are extracted. If a real person's blink is detected in any short video, the face in the face video to be verified is a real person's face and verification passes; if no real blink is detected in any of the contained short videos, it is not.
Fig. 5 is a flowchart of performing living body verification on the face video to be verified using the trained video-level blink classifier according to this embodiment; as shown in fig. 5, the flow includes the following steps:
Step S51: slide a window of length n over the face video to be verified; the n frames inside the window form a short video each time.
Step S52: compute the features of the short video according to steps S43-S46.
Step S53: classify the computed features with the video-level blink classifier trained in step S47 to judge whether the short video in the sliding window contains a blink motion.
Step S54: keep moving the sliding window over the face video to be verified; if any short video in the window is judged in step S53 to contain a blink motion, the living body verification of the video passes; otherwise, it does not pass.
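A minimal sketch of the sliding-window verification in steps S51-S54, assuming the feature extraction of step S52 and the classifier of step S47 are available as feature_fn and clf:

```python
import numpy as np

def verify_video(frames, n: int, feature_fn, clf) -> bool:
    """Steps S51-S54: slide a window of n frames over the video; living body
    verification passes if any window is classified as a real blink."""
    for start in range(len(frames) - n + 1):                      # S51
        feat = feature_fn(frames[start:start + n])                # S52: 3n-dim feature
        if clf.predict(np.asarray(feat).reshape(1, -1))[0] == 1:  # S53
            return True                                           # S54: verified live
    return False
```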
Example 2
This embodiment provides a living body verification apparatus, as shown in fig. 6, comprising: an acquisition module 62 for acquiring a face video to be verified; an extraction module 64 for extracting multiple frames of face eye images from the face video to be verified; a first feature value acquisition module 66 for performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value; a second feature value acquisition module 68 for performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value; and a verification module 70 for verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face.
The apparatus solves the problem in the related art that, during blink-based living body verification, spoofed blinks simulated by creating local motion at the eyes cannot be ruled out and make the verification result incorrect, thereby eliminating the influence of fake blinking on the living body verification result.
Optionally, the verification module 70 is specifically configured to determine that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs in the face video to be verified and the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded.
Optionally, the first feature value acquisition module 66 is specifically configured to input the multiple frames of face eye images into a deep neural network for open/closed-eye classification to obtain the first open/closed-eye feature value; the deep neural network for open/closed-eye classification is used to judge whether the multiple frames of face eye images show open eyes or closed eyes.
Optionally, the second feature value acquisition module 68 is specifically configured to input the multiple frames of face eye images into a deep neural network for occluded-eye classification to obtain the first occluded-eye feature value; the deep neural network for occluded-eye classification is used to judge whether the multiple frames of face eye images are real human eye images or occluded, disguised eye images.
Optionally, the verification module 70 is specifically configured to input the first open/closed-eye feature value and the first occluded-eye feature value into a video-level blink classifier and verify from them whether the face in the face video to be verified is a living human face; the video-level blink classifier is used to verify whether the face in the face video to be verified is a living human face.
Optionally, the apparatus further comprises: a first training module 72 for training the deep neural network for open/closed-eye classification using a plurality of open-eye images and a plurality of closed-eye images.
Optionally, the apparatus further comprises: a second training module 74 for training the deep neural network for occluded-eye classification using eye images of a plurality of real human faces and eye images of a plurality of disguised living faces; an eye image of a disguised living face is taken from a disguised face image in which a blink motion is simulated by occluding the eye region.
Optionally, the apparatus further comprises: a third training module 76 for training the video-level blink classifier before the first open/closed-eye feature value and the first occluded-eye feature value are input into it. As shown in fig. 7, the third training module 76 comprises: a first acquisition unit 762 for acquiring positive video samples containing a real person's normal blink and negative video samples containing none; a first extraction unit 764 for extracting, through the deep neural network for open/closed-eye classification, second open/closed-eye feature values of the multi-frame images in the positive and negative video samples, and extracting, through the deep neural network for occluded-eye classification, second occluded-eye feature values of those images; and a training unit 766 for training the video-level blink classifier using the second open/closed-eye feature values and the second occluded-eye feature values.
As shown in fig. 8, the extraction module 64 further comprises: a second acquisition unit 822 for acquiring a short video from the face video to be verified using a sliding window; and a second extraction unit 824 for extracting the multiple frames of face eye images from the short video.
As shown in fig. 9, the apparatus further comprises: a calculation module 92 for calculating the motion speed value of the face image in the face video to be verified; the verification module 70 is then specifically configured to input the first open/closed-eye feature value, the first occluded-eye feature value, and the motion speed value of the face image into the video-level blink classifier and verify whether the face in the face video to be verified is a living human face.
Optionally, the verification module 70 is further configured to determine that the face in the face video to be verified is a living human face, and that verification passes, when the first open/closed-eye feature value indicates that a blink motion occurs, the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded, and the motion speed value of the face image is smaller than a predetermined threshold.
As shown in fig. 10, the calculation module 92 comprises: a third acquisition unit 922 for acquiring the coordinate information of the face key feature points in two adjacent face frames of the face video to be verified; and a calculation unit 924 for calculating the motion speed value of the face image in the face video to be verified from the coordinate information of the face key feature points.
Example 3
This embodiment provides a living body verification system, comprising: a camera device for capturing a face video to be verified; and a processor connected to the camera device, for receiving the face video to be verified and performing the following steps: extracting multiple frames of face eye images from the face video to be verified; performing open/closed-eye judgment on the multi-frame face eye images to obtain a first open/closed-eye feature value; performing occluded-eye judgment on the multi-frame face eye images to obtain a first occluded-eye feature value; and verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face.
In summary, the embodiment of the present invention provides a robust and efficient living body verification method based on blink judgment. Using deep neural network models, it trains not only an efficient open/closed-eye classifier but also a classifier that effectively judges whether human eyes in a paper photo or an electronic-screen photo are being used to fake a blink. Global information about face motion is added, and a short-video-based video-level blink judgment module is trained, making the living body verification robust and efficient. The method suits both active living body verification (the user blinks on prompt) and passive (silent) living body verification (transparent to the user: verification passes as soon as a blink is detected during normal use). The open/closed-eye and occluded-eye classifiers trained with deep neural network models classify open versus closed and occluded versus unoccluded eyes well, and adding the occlusion and speed features to the open/closed-eye feature is markedly effective against various kinds of fake blinking. The deep neural network model has a small footprint and the whole pipeline is computationally cheap, so it can run on a typical mobile smart terminal with good blink-detection performance.
It should be understood that the above examples are only for clarity of illustration and are not intended to limit the embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations or modifications derived therefrom remain within the protection scope of the invention.

Claims (23)

1. A living body verification method, comprising:
acquiring a face video to be verified;
extracting multiple frames of face eye images from the face video to be verified;
performing open/closed-eye judgment on the multiple frames of face eye images to obtain a first open/closed-eye feature value, wherein the first open/closed-eye feature value is used to judge whether a blink motion occurs in the multiple frames of face eye images;
performing occluded-eye judgment on the multiple frames of face eye images to obtain a first occluded-eye feature value, wherein the first occluded-eye feature value is used to judge whether the multiple frames of face eye images are occluded, that is, whether they are real human eye images or occluded, disguised eye images;
verifying, at least according to the first open/closed-eye feature value and the first occluded-eye feature value, whether the face in the face video to be verified is a living human face;
and determining that the face in the face video to be verified is a living human face when the first open/closed-eye feature value indicates that a blink motion occurs in the face video to be verified and the first occluded-eye feature value indicates that none of the multiple frames of face eye images is occluded.
2. The method of claim 1, wherein performing open/closed-eye judgment on the multiple frames of face-eye images to obtain the first open/closed-eye characteristic value comprises:
inputting the multiple frames of face-eye images into a deep neural network for open/closed-eye classification to obtain the first open/closed-eye characteristic value, wherein the deep neural network for open/closed-eye classification is used for judging whether the multiple frames of face-eye images show open eyes or closed eyes.
3. The method of claim 1, wherein performing occluded-eye judgment on the multiple frames of face-eye images to obtain the first occluded-eye characteristic value comprises:
inputting the multiple frames of face-eye images into a deep neural network for occluded-eye classification to obtain the first occluded-eye characteristic value, wherein the deep neural network for occluded-eye classification is used for judging whether the multiple frames of face-eye images are real human-eye images or occluded, disguised human-eye images.
4. The method of claim 1, wherein verifying, at least according to the first open/closed-eye characteristic value and the first occluded-eye characteristic value, whether the face corresponding to the face video to be verified is a living human face comprises:
inputting the first open/closed-eye characteristic value and the first occluded-eye characteristic value into a video-level blink classifier, wherein the video-level blink classifier is used for verifying whether the face corresponding to the face video to be verified is a living human face.
5. The method of claim 2, wherein the deep neural network for open/closed-eye classification is trained before the multiple frames of face-eye images are input into it, the training comprising:
training the deep neural network for open/closed-eye classification using a plurality of open-eye images and a plurality of closed-eye images.
6. The method of claim 3, wherein the deep neural network for occluded-eye classification is trained before the multiple frames of face-eye images are input into it, the training comprising:
training the deep neural network for occluded-eye classification using a plurality of eye images of real human faces and a plurality of eye images of disguised living faces, wherein an eye image of a disguised living face is an image in which a blinking motion is simulated by occluding the eyes of a living face.
7. The method of claim 4, wherein the video-level blink classifier is trained before the first open/closed-eye characteristic value and the first occluded-eye characteristic value are input into it, the training comprising:
acquiring video positive samples that contain normal blinks of real persons and video negative samples that do not;
extracting, via the deep neural network for open/closed-eye classification, second open/closed-eye characteristic values of multiple frames in the video positive and negative samples, and extracting, via the deep neural network for occluded-eye classification, second occluded-eye characteristic values of those frames; and
training the video-level blink classifier using the second open/closed-eye characteristic values and the second occluded-eye characteristic values.
8. The method of claim 1, wherein extracting the multiple frames of face-eye images from the face video to be verified comprises:
acquiring a short video from the face video to be verified using a sliding window; and
extracting the multiple frames of face-eye images from the short video.
9. The method of claim 1, further comprising:
calculating a motion speed value of the face image in the face video to be verified,
wherein verifying, at least according to the first open/closed-eye characteristic value and the first occluded-eye characteristic value, whether the face corresponding to the face video to be verified is a living human face comprises:
verifying whether the face corresponding to the face video to be verified is a living human face according to the first open/closed-eye characteristic value, the first occluded-eye characteristic value, and the motion speed value of the face image.
10. The method of claim 9, wherein verifying whether the face corresponding to the face video to be verified is a living human face according to the first open/closed-eye characteristic value, the first occluded-eye characteristic value, and the motion speed value of the face image comprises:
determining that the face corresponding to the face video to be verified is a living human face when the first open/closed-eye characteristic value indicates that a blinking motion occurs in the face video to be verified, the first occluded-eye characteristic value indicates that no occlusion occurs in the multiple frames of face-eye images, and the motion speed value of the face image is smaller than a predetermined threshold.
11. The method of claim 9 or 10, wherein calculating the motion speed value of the face image in the face video to be verified comprises:
acquiring coordinate information of face key feature points in two adjacent frames of face images in the face video to be verified; and
calculating the motion speed value of the face image in the face video to be verified according to the coordinate information of the face key feature points.
12. A living body verification device, comprising:
an acquisition module configured to acquire a face video to be verified;
an extraction module configured to extract multiple frames of face-eye images from the face video to be verified;
a first characteristic value acquisition module configured to perform open/closed-eye judgment on the multiple frames of face-eye images to obtain a first open/closed-eye characteristic value, wherein the first open/closed-eye characteristic value is used for judging whether a blinking motion occurs in the multiple frames of face-eye images;
a second characteristic value acquisition module configured to perform occluded-eye judgment on the multiple frames of face-eye images to obtain a first occluded-eye characteristic value, wherein the first occluded-eye characteristic value is used for judging whether the multiple frames of face-eye images are occluded, that is, whether the multiple frames of face-eye images are real human-eye images or occluded, disguised human-eye images; and
a verification module configured to verify, at least according to the first open/closed-eye characteristic value and the first occluded-eye characteristic value, whether the face corresponding to the face video to be verified is a living human face,
wherein the verification module determines that the face corresponding to the face video to be verified is a living human face when the first open/closed-eye characteristic value indicates that a blinking motion occurs in the face video to be verified and the first occluded-eye characteristic value indicates that no occlusion occurs in the multiple frames of face-eye images.
13. The device of claim 12, wherein the first characteristic value acquisition module is specifically configured to input the multiple frames of face-eye images into a deep neural network for open/closed-eye classification to obtain the first open/closed-eye characteristic value, the deep neural network for open/closed-eye classification being used for judging whether the multiple frames of face-eye images show open eyes or closed eyes.
14. The device of claim 12, wherein the second characteristic value acquisition module is specifically configured to input the multiple frames of face-eye images into a deep neural network for occluded-eye classification to obtain the first occluded-eye characteristic value, the deep neural network for occluded-eye classification being used for judging whether the multiple frames of face-eye images are real human-eye images or occluded, disguised human-eye images.
15. The device of claim 12, wherein the verification module is specifically configured to input the first open/closed-eye characteristic value and the first occluded-eye characteristic value into a video-level blink classifier to verify whether the face corresponding to the face video to be verified is a living human face, the video-level blink classifier being used for verifying whether the face corresponding to the face video to be verified is a living human face.
16. The device of claim 13, further comprising:
a first training module configured to train the deep neural network for open/closed-eye classification using a plurality of open-eye images and a plurality of closed-eye images.
17. The device of claim 14, further comprising:
a second training module configured to train the deep neural network for occluded-eye classification using a plurality of eye images of real human faces and a plurality of eye images of disguised living faces, wherein an eye image of a disguised living face is an image in which a blinking motion is simulated by occluding the eyes of a living face.
18. The device of claim 15, further comprising:
a third training module configured to train the video-level blink classifier before the first open/closed-eye characteristic value and the first occluded-eye characteristic value are input into it, the third training module comprising:
a first acquisition unit configured to acquire video positive samples that contain normal blinks of real persons and video negative samples that do not;
a first extraction unit configured to extract, via the deep neural network for open/closed-eye classification, second open/closed-eye characteristic values of multiple frames in the video positive and negative samples, and to extract, via the deep neural network for occluded-eye classification, second occluded-eye characteristic values of those frames; and
a training unit configured to train the video-level blink classifier using the second open/closed-eye characteristic values and the second occluded-eye characteristic values.
19. The device of claim 12, wherein the extraction module comprises:
a second acquisition unit configured to acquire a short video from the face video to be verified using a sliding window; and
a second extraction unit configured to extract the multiple frames of face-eye images from the short video.
20. The device of claim 12, further comprising:
a calculation module configured to calculate a motion speed value of the face image in the face video to be verified,
wherein the verification module is specifically configured to verify whether the face corresponding to the face video to be verified is a living human face according to the first open/closed-eye characteristic value, the first occluded-eye characteristic value, and the motion speed value of the face image.
21. The device of claim 20, wherein the verification module is specifically configured to determine that the face corresponding to the face video to be verified is a living human face when the first open/closed-eye characteristic value indicates that a blinking motion occurs in the face video to be verified, the first occluded-eye characteristic value indicates that no occlusion occurs in the multiple frames of face-eye images, and the motion speed value of the face image is smaller than a predetermined threshold.
22. The device of claim 20 or 21, wherein the calculation module comprises:
a third acquisition unit configured to acquire coordinate information of face key feature points in two adjacent frames of face images in the face video to be verified; and
a calculation unit configured to calculate the motion speed value of the face image in the face video to be verified according to the coordinate information of the face key feature points.
23. A living body verification system, comprising:
a camera device configured to capture a face video to be verified; and
a processor connected to the camera device and configured to receive the face video to be verified and to perform the following steps:
extracting multiple frames of face-eye images from the face video to be verified;
performing open/closed-eye judgment on the multiple frames of face-eye images to obtain a first open/closed-eye characteristic value, wherein the first open/closed-eye characteristic value is used for judging whether a blinking motion occurs in the multiple frames of face-eye images;
performing occluded-eye judgment on the multiple frames of face-eye images to obtain a first occluded-eye characteristic value, wherein the first occluded-eye characteristic value is used for judging whether the multiple frames of face-eye images are occluded, that is, whether the multiple frames of face-eye images are real human-eye images or occluded, disguised human-eye images;
verifying, at least according to the first open/closed-eye characteristic value and the first occluded-eye characteristic value, whether the face corresponding to the face video to be verified is a living human face; and
determining that the face corresponding to the face video to be verified is a living human face when the first open/closed-eye characteristic value indicates that a blinking motion occurs in the face video to be verified and the first occluded-eye characteristic value indicates that no occlusion occurs in the multiple frames of face-eye images.
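A few informal sketches of the claimed steps follow; they are illustrative readings, not the patent's implementation. For the training step of claim 7, one plausible realization pools the frame-level second characteristic values into a fixed-length vector per short video and fits a simple classifier; scikit-learn and the pooling statistics below are assumptions of this sketch:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def video_feature(open_scores, occl_scores):
    """Fixed-length summary of one short video's frame-level feature values."""
    return np.array([open_scores.min(), open_scores.mean(), open_scores.max(),
                     occl_scores.mean(), occl_scores.max()])

def train_blink_classifier(positives, negatives):
    """positives/negatives: lists of (open_scores, occl_scores) per video."""
    X = np.array([video_feature(o, c) for o, c in positives + negatives])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    return LogisticRegression().fit(X, y)   # the video-level blink classifier
```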
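The sliding-window acquisition of short videos in claim 8 might, under assumed window and stride sizes, look like:

```python
def sliding_windows(frames, window=16, stride=4):
    """Yield overlapping short videos from the full face video to be verified."""
    last_start = max(1, len(frames) - window + 1)
    for start in range(0, last_start, stride):
        yield frames[start:start + window]
```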
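Finally, the motion speed value of claims 9 to 11 can be read as the mean displacement of face key feature points between adjacent frames; the landmark array shape, frame rate, and threshold below are illustrative assumptions:

```python
import numpy as np

def motion_speed(landmarks: np.ndarray, fps: float = 30.0) -> float:
    """Mean per-second key-point displacement; landmarks has shape
    (num_frames, num_points, 2) from some external face-alignment step."""
    step = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)   # (F-1, P)
    return float(step.mean() * fps)

def passes_speed_check(landmarks, threshold=80.0, fps=30.0) -> bool:
    # Large global face motion suggests a waved photo or screen rather than
    # a live face; the threshold value here is purely illustrative.
    return motion_speed(landmarks, fps) < threshold
```

Gating the blink decision on this check mirrors the determination of claim 10: blink present, no occlusion, and speed below the threshold.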
CN201610051911.6A 2016-01-26 2016-01-26 Living body verification method and device Active CN106997452B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610051911.6A CN106997452B (en) 2016-01-26 2016-01-26 Living body verification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610051911.6A CN106997452B (en) 2016-01-26 2016-01-26 Living body verification method and device

Publications (2)

Publication Number Publication Date
CN106997452A CN106997452A (en) 2017-08-01
CN106997452B true CN106997452B (en) 2020-12-29

Family

ID=59428347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610051911.6A Active CN106997452B (en) 2016-01-26 2016-01-26 Living body verification method and device

Country Status (1)

Country Link
CN (1) CN106997452B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840406B (en) * 2017-11-29 2022-05-17 百度在线网络技术(北京)有限公司 Living body verification method and device and computer equipment
CN108334817A (en) * 2018-01-16 2018-07-27 深圳前海华夏智信数据科技有限公司 Living body faces detection method and system based on three mesh
CN108921117A (en) * 2018-07-11 2018-11-30 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN109635554A (en) * 2018-11-30 2019-04-16 努比亚技术有限公司 A kind of red packet verification method, terminal and computer storage medium
CN111274846B (en) * 2018-12-04 2023-09-19 北京嘀嘀无限科技发展有限公司 Method and system for identifying opening and closing actions
CN109902667A (en) * 2019-04-02 2019-06-18 电子科技大学 Human face in-vivo detection method based on light stream guide features block and convolution GRU
CN111860056B (en) * 2019-04-29 2023-10-20 北京眼神智能科技有限公司 Blink-based living body detection method, blink-based living body detection device, readable storage medium and blink-based living body detection equipment
WO2020252740A1 (en) * 2019-06-20 2020-12-24 深圳市汇顶科技股份有限公司 Convolutional neural network, face anti-spoofing method, processor chip, and electronic device
CN110287900B (en) * 2019-06-27 2023-08-01 深圳市商汤科技有限公司 Verification method and verification device
CN111340014B (en) * 2020-05-22 2020-11-17 支付宝(杭州)信息技术有限公司 Living body detection method, living body detection device, living body detection apparatus, and storage medium
CN117636484A (en) * 2022-08-12 2024-03-01 北京字跳网络技术有限公司 Living body detection method, living body detection device, electronic equipment and storage medium


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2915008B1 (en) * 2007-04-12 2015-04-17 Sagem Defense Securite METHOD FOR DETECTING THE LIVING CHARACTER OF A BODY AREA AND OPTICAL DEVICE FOR CARRYING OUT SAID METHOD
CN101999900B (en) * 2009-08-28 2013-04-17 南京壹进制信息技术有限公司 Living body detecting method and system applied to human face recognition
CN104348778A (en) * 2013-07-25 2015-02-11 信帧电子技术(北京)有限公司 Remote identity authentication system, terminal and method carrying out initial face identification at handset terminal
CN103400122A (en) * 2013-08-20 2013-11-20 江苏慧视软件科技有限公司 Method for recognizing faces of living bodies rapidly

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102004904A (en) * 2010-11-17 2011-04-06 东软集团股份有限公司 Automatic teller machine-based safe monitoring device and method and automatic teller machine
US8856541B1 (en) * 2013-01-10 2014-10-07 Google Inc. Liveness detection
CN103440479A (en) * 2013-08-29 2013-12-11 湖北微模式科技发展有限公司 Method and system for detecting living body human face

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Eyeblink-based Anti-Spoofing in Face Recognition from a Generic Webcamera; Gang Pan et al.; 2007 IEEE 11th International Conference on Computer Vision; 2007-10-21; full text *
Liveness Detection using Gaze Collinearity; Asad Ali et al.; 2012 Third International Conference on Emerging Security Technologies; 2012-10-11; full text *
Research on Liveness Detection Technology in Face Recognition; Sun Lin et al.; Wanfang Database; 2014-11-03; full text *

Also Published As

Publication number Publication date
CN106997452A (en) 2017-08-01

Similar Documents

Publication Publication Date Title
CN106997452B (en) Living body verification method and device
RU2714096C1 (en) Method, equipment and electronic device for detecting a face vitality
CN106557726B (en) Face identity authentication system with silent type living body detection and method thereof
CN108182409B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
CN107451510B (en) Living body detection method and living body detection system
KR102286468B1 (en) Method and system for integrity verification of fake video created through deep learning
US20180034852A1 (en) Anti-spoofing system and methods useful in conjunction therewith
US20090135188A1 (en) Method and system of live detection based on physiological motion on human face
KR101159164B1 (en) Fake video detecting apparatus and method
CN105844203B (en) A kind of human face in-vivo detection method and device
WO2019127262A1 (en) Cloud end-based human face in vivo detection method, electronic device and program product
CN107844748A (en) Auth method, device, storage medium and computer equipment
CN108549854A (en) A kind of human face in-vivo detection method
CN108280418A (en) The deception recognition methods of face image and device
CN106557723A (en) A kind of system for face identity authentication with interactive In vivo detection and its method
CN105631439A (en) Human face image collection method and device
JP2008146539A (en) Face authentication device
Shen et al. Vla: A practical visible light-based attack on face recognition systems in physical world
CN105844206A (en) Identity authentication method and identity authentication device
CN111353404B (en) Face recognition method, device and equipment
CN110223322A (en) Image-recognizing method, device, computer equipment and storage medium
CN107316029A (en) A kind of live body verification method and equipment
CN109815813A (en) Image processing method and Related product
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
CN111860394A (en) Gesture estimation and gesture detection-based action living body recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant