US20200380279A1 - Method and apparatus for liveness detection, electronic device, and storage medium - Google Patents


Info

Publication number
US20200380279A1
Authority
US
United States
Prior art keywords
target image
spoofing
liveness detection
pixel points
determining
Prior art date
Legal status
Abandoned
Application number
US16/998,279
Other languages
English (en)
Inventor
Guowei Yang
Jing Shao
Junjie Yan
Xiaogang Wang
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Assigned to BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD. reassignment BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHAO, Jing, WANG, XIAOGANG, YAN, JUNJIE, Yang, Guowei
Publication of US20200380279A1 publication Critical patent/US20200380279A1/en


Classifications

    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06K 9/00906
    • G06F 17/18 — Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2433 — Single-class perspective, e.g. one-against-all classification; novelty detection; outlier detection
    • G06K 9/00228
    • G06N 3/045 — Neural network architecture; combinations of networks
    • G06T 1/00 — General purpose image data processing
    • G06T 7/11 — Image analysis; segmentation; region-based segmentation
    • G06V 10/758 — Image or video pattern matching involving statistics of pixels or of feature values, e.g. histogram matching
    • G06V 10/764 — Image or video recognition or understanding using classification, e.g. of video objects
    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06V 40/40 — Spoof detection, e.g. liveness detection
    • G06V 40/45 — Detection of the body part being alive
    • G06T 2207/20081 — Indexing scheme for image analysis: training; learning
    • G06T 2207/30201 — Indexing scheme for image analysis: face

Definitions

  • Face recognition technologies are widely applied to scenarios such as face unlocking, face payment, identity authentication, and video surveillance.
  • A face recognition system, however, risks being deceived by spoofing such as printed pictures, videos containing faces, and masks.
  • a liveness detection technology is needed to confirm the authenticity of a face entered into the system, i.e., to determine whether submitted biometric features are from a living individual.
  • The time required for a single liveness detection using a face recognition method based on face movement is long, which reduces the overall efficiency of the face recognition system.
  • Recognition and detection methods based on single image frames usually introduce additional hardware such as a multi-ocular camera or a 3D structured-light device, which increases deployment costs and reduces applicability.
  • Improving the accuracy of liveness detection on a single image frame is therefore an urgent technical problem in this field.
  • the disclosure relates to, but is not limited to, the field of computer vision technologies, and specifically relates to a method and apparatus for liveness detection, an electronic device, and a storage medium.
  • Embodiments of the disclosure provide a method and apparatus for liveness detection, an electronic device, and a storage medium.
  • a first aspect of the embodiments of the disclosure provides a method for liveness detection, including: processing a target image to obtain probabilities of multiple pixel points of the target image to be corresponding to spoofing; determining a predicted face region in the target image; and obtaining, based on the probabilities of the multiple pixel points of the target image to be corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • a second aspect of the embodiments of the disclosure provides an apparatus for liveness detection, including a memory storing processor-executable instructions; and a processor arranged to execute the stored processor-executable instructions to perform operations of: processing a target image to obtain probabilities of multiple pixel points of the target image to be corresponding to spoofing; determining a predicted face region in the target image; and obtaining, based on the probabilities of the multiple pixel points of the target image to be corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • a third aspect of the embodiments of the disclosure provides a non-transitory computer-readable storage medium, having stored thereon computer program instructions that, when executed by a computer, cause the computer to perform the following: processing a target image to obtain probabilities of multiple pixel points of the target image to be corresponding to spoofing; determining a predicted face region in the target image; and obtaining, based on the probabilities of the multiple pixel points of the target image to be corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • FIG. 1 is a schematic flowchart of a method for liveness detection disclosed in embodiments of the disclosure.
  • FIG. 2 is a schematic flowchart of another method for liveness detection disclosed in embodiments of the disclosure.
  • FIG. 3 is a schematic diagram of a processing process of a neural network disclosed in embodiments of the disclosure.
  • FIG. 4 is a schematic structural diagram of an apparatus for liveness detection disclosed in embodiments of the disclosure.
  • FIG. 5 is a schematic structural diagram of an electronic device disclosed in embodiments of the disclosure.
  • The term "A and/or B" can indicate the following three cases: only A exists, both A and B exist, or only B exists.
  • the term “at least one” herein indicates any one of multiple elements or any combination of at least two of multiple elements.
  • including at least one of A, B, or C can indicate including any one or more elements selected from a set consisting of A, B, and C.
  • the terms “first”, “second”, and the like in the description, the claims, and the accompanying drawings in the disclosure are used for distinguishing different objects, rather than describing specific sequences.
  • the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion.
  • a process, a method, a system, a product, or a device including a series of operations or units is not limited to the listed operations or units, but also optionally includes operations or units that are not listed or other operations or units inherent to the process, method, product, or device.
  • An apparatus for liveness detection related in the embodiments of the disclosure is an apparatus capable of performing liveness detection which may be an electronic device, where the electronic device includes a terminal device.
  • the terminal device includes, but is not limited to, portable devices such as a mobile phone, a laptop computer, or a tablet computer having a touch sensitive surface (such as a touch screen display and/or a touch panel).
  • the device is a desktop computer having a touch sensitive surface (such as a touch screen display and/or a touch panel), instead of a portable communication device.
  • a multilayer perceptron including multiple hidden layers is a deep learning structure.
  • low-level features are combined to form a more abstract high-level representation attribute category or feature to discover distributed feature representation of data.
  • Deep learning is a method based on representation learning performed on data in machine learning. Observed values (such as an image) may be represented in a variety of ways, such as a vector of an intensity value of each pixel point, or more abstractly represented as a series of edges, regions of particular shapes, etc. It is easier to learn tasks (for example, face recognition or facial expression recognition) from examples using some specific representation methods.
  • The benefit of deep learning is to replace manual feature engineering with unsupervised or semi-supervised feature learning and efficient hierarchical feature extraction algorithms. Deep learning is a new field in machine learning research, and its motivation is to create a neural network that imitates the mechanism of the human brain in interpreting data such as images, sound, and text, performing analysis and learning by simulating the human brain.
  • a deep machine learning method also includes supervised learning and unsupervised learning. Learning models created under different learning frameworks are much different.
  • A Convolutional Neural Network (CNN) is a machine learning model based on deep supervised learning, which may also be called a network structure model based on deep learning. It falls within the category of feed-forward neural networks that include convolutional computation and have deep structures, and is one of the representative algorithms of deep learning.
  • a Deep Belief Net (DBN) is a machine learning model based on unsupervised learning.
  • FIG. 1 is a schematic flowchart of a method for liveness detection disclosed in the embodiments of the disclosure. As shown in FIG. 1 , the method for liveness detection includes the following operations.
  • Liveness detection is a method for determining true physiological features of an object in some identity verification scenarios.
  • Liveness detection can verify whether the user performing an operation is a real living person by using technologies such as face key point positioning and face tracking, combined with actions such as blinking, opening the mouth, shaking the head, and nodding. Common attack means such as photos, face swapping, masks, occlusion, and images recaptured from screens can thereby be resisted, which helps identify fraudulent behavior and protect users' interests.
  • the method for liveness detection may be applied to various scenarios that need face application.
  • the method for liveness detection may be applied to the security field.
  • a security device in the security field performs face verification for security, whether a currently acquired image is an image acquired from a living person can be determined by the method for liveness detection provided in the embodiments of the disclosure.
  • An access control device in the security field, upon acquiring a face image or receiving a face image from other acquisition devices, performs spoofing verification using the method provided in the embodiments of the disclosure. If the spoofing verification is passed, it determines that the currently acquired image is acquired from a real living person, and performs security verification in combination with other biometric verification technologies such as face verification and/or iris verification. On the one hand, the accuracy of the biometric result is ensured, so as to ensure security in the security field.
  • pixel-level spoofing verification may be performed based on a single image, etc., thereby quickly completing spoofing verification, improving the verification rate, and reducing time delay.
  • On a mobile terminal, in order to ensure the security of payment, payment verification may be performed in combination with biometric features.
  • A mobile terminal and the like may also perform the spoofing verification of the embodiments of the disclosure.
  • the mobile terminal may autonomously perform spoofing verification of the disclosure after acquiring an image, so as to reduce the risk of counterfeiting by spoofing.
  • using the spoofing verification method provided by the embodiments of the disclosure for spoofing verification has the characteristics of fewer images to be acquired and high verification speed.
  • the time duration required for a single detection by such a method for liveness detection based on face movement is relatively long, thus reducing the overall efficiency of a face recognition system.
  • An execution subject of the method for liveness detection may be the apparatus for liveness detection.
  • the method for liveness detection may be performed by a terminal device or a server or other processing devices, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method for liveness detection may be implemented by invoking, by a processor, computer-readable instructions stored in a memory.
  • the embodiments of the disclosure can mainly solve the technical problem of liveness detection for a single image frame.
  • the aforementioned target image may be a single image frame, and may be an image acquired by a camera, such as a photo captured by a camera of a terminal device or a single image frame in a video recorded by a camera of a terminal device. No limitation is made to the acquisition manner of the target image and to specific implementations of examples in the embodiments of the disclosure.
  • the single image frame mentioned in the embodiments of the disclosure is a still picture.
  • An animation effect, such as a TV video, can be formed by consecutive frames.
  • The frame rate is simply the number of picture frames transmitted in one second; it may also be understood as the number of times a graphics processing unit can refresh per second, and is usually expressed in fps (Frames Per Second). Smooth and realistic animation can be obtained with a high frame rate.
  • the target image may be input to a neural network for processing so as to output a probability of each pixel point of the target image to be corresponding to spoofing.
  • the target image may be processed based on a trained convolutional neural network, where the convolutional neural network may be any end-to-end, point-to-point convolutional neural network, and may be an existing semantic segmentation network, including a fully supervised semantic segmentation network.
  • the convolutional neural network may be trained by using sample data having pixel-level labels.
  • the trained convolutional neural network may predict, pixel point by pixel point, probabilities of pixel points in an input single image frame corresponding to spoofing.
  • the sample data includes: a first type of data and a second type of data, where the first type of data is sample data from spoofing, and the second type of data is non-spoofing data from an image captured from a real person.
  • the sample data is image data, in which each pixel is marked with a label, where the label is a pixel-level label.
  • the multiple pixel points may be all or some of pixel points of the target image. No limitation is made thereto in the embodiments of the disclosure.
  • the apparatus for liveness detection in the embodiments of the disclosure may recognize pixel points in the target image and predict the probabilities of multiple pixel points of the target image to be corresponding to spoofing.
  • the target image may be an image including a face.
  • an input to the apparatus for liveness detection may be the target image including a face, and an output may be the probabilities of multiple pixel points of the target image corresponding to spoofing.
  • the probabilities of the multiple pixel points corresponding to spoofing may be in a form of a probability matrix, i.e., a probability matrix of the pixel points of the target image may be obtained.
  • the probability matrix may indicate the probabilities of the multiple pixel points of the target image to be corresponding to spoofing.
  • a predicted face region in the target image is determined.
  • a main face region may be determined by means of a face recognition algorithm after detecting a face in the image and positioning key feature points of the face.
  • the face region may be understood as a region where the face is located in the target image.
  • the predicted face region in the target image may be determined based on a face key point detection algorithm.
  • face key point detection may be performed on the target image to obtain key point prediction information; and then the predicted face region in the target image may be determined based on the key point prediction information.
  • key points of the face in the target image may be obtained by means of face key point detection and a convex hull may be calculated, where the convex hull may be used as a rough face region.
  • In a real vector space V, for a given set X, the intersection S of all convex sets containing X is called the convex hull of X.
  • the convex hull of X may be constructed by a convex combination of all points (X1, . . . , Xn) in X.
  • The convex hull may be understood as a convex polygon formed by connecting the outermost points; it includes all the points in the set and may be represented as a bounded face region in the target image.
  • The convex hull may be computed by any algorithm that takes several points on a plane as input and outputs their convex hull, such as the rotating calipers algorithm, the Graham scan algorithm, or the Jarvis march (gift wrapping) algorithm, or by related algorithms in OpenCV.
  • OpenCV is a cross-platform computer vision library released under the BSD license (open source), and may run on Linux, Windows, Android, and Mac OS operating systems. OpenCV is lightweight and efficient: it is composed of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general algorithms in image processing and computer vision.
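  • The convex-hull computation described above can be sketched in a few lines. The following is a minimal, self-contained monotone-chain implementation (Andrew's algorithm, a close relative of the Graham scan mentioned in the text); the sample key-point coordinates are hypothetical, not values from the disclosure:

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors OA and OB; > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    upper = []
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each list ends with the starting point of the other; drop the duplicates.
    return lower[:-1] + upper[:-1]

# Example: five hypothetical face key points; (1, 1) is interior to the hull.
points = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
hull = convex_hull(points)   # [(0, 0), (2, 0), (2, 2), (0, 2)]
```

  • As the text notes, only a rough face region is needed, so any of the named algorithms (or OpenCV's built-in hull function) would serve equally well here.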
  • the method before performing face key point detection on the target image to obtain the key point prediction information, the method further includes: performing face detection on the target image to obtain a face bounding region in the target image; and performing face key point detection on the target image to obtain the key point prediction information includes: performing face key point detection on the image in the face bounding region to obtain the key point prediction information.
  • face detection may be performed at first (relatively high accuracy is required, but any feasible face detection algorithm is acceptable) to obtain a contour bounding box of the face, i.e., the face bounding region; next, the face bounding region is input for face key point detection to obtain the key point prediction information; and then the predicted face region is determined.
  • face detection may be performed on the target image to obtain the predicted face region in the target image.
  • face detection may be performed based on a face segmentation method to determine the predicted face region in the target image.
  • the accuracy requirement for the face region is not strict in the embodiments of the disclosure; therefore, relevant algorithms that can roughly determine the face region can all be used to determine the predicted face region. No limitation is made thereto in the embodiments of the disclosure.
  • operation 103 may be executed.
  • a liveness detection result of the target image is obtained based on the probabilities of the multiple pixel points of the target image to be corresponding to spoofing and the predicted face region.
  • the authenticity of the face in the target image may be determined by a comprehensive analysis based on the obtained probabilities of the multiple pixel points corresponding to spoofing and the approximate position of the face (the predicted face region) obtained.
  • a probability distribution map may be generated based on the probabilities of the multiple pixel points corresponding to spoofing, where the probability distribution map may be understood as an image that reflects the probabilities of the pixel points of the target image to be corresponding to spoofing, and is intuitive.
  • the probabilities of pixel points in the predicted face region corresponding to spoofing may be determined in combination with the predicted face region, thereby facilitating the determination in liveness detection.
  • the pixel points may be determined according to a preset threshold.
  • At least two pixel points included in the predicted face region may be determined from the multiple pixel points based on position information of the multiple pixel points and the predicted face region; and the liveness detection result of the target image is determined based on the probability of each of the at least two pixel points corresponding to spoofing.
  • the positions of the pixel points in the target image may be determined.
  • The apparatus for liveness detection may determine the position information of each pixel point, and then determine the relative positions of the pixel points and the predicted face region according to that position information, so as to determine the pixel points in the predicted face region, i.e., the at least two pixel points included in the predicted face region, the total number of which may be denoted as P.
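  • Given a convex polygon bounding the predicted face region (vertices listed counter-clockwise), selecting the pixel points inside it — the P points above — reduces to a half-plane test per edge. The function name and parameters below are illustrative assumptions, not names from the disclosure:

```python
import numpy as np

def pixels_in_convex_region(hull, shape):
    """Boolean mask of the pixels inside a convex polygon.

    hull  -- polygon vertices (x, y) in counter-clockwise order
    shape -- (height, width) of the target image
    """
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.ones(shape, dtype=bool)
    n = len(hull)
    for i in range(n):
        (x1, y1), (x2, y2) = hull[i], hull[(i + 1) % n]
        # A pixel is inside iff it lies on the left of every CCW edge.
        mask &= (x2 - x1) * (ys - y1) - (y2 - y1) * (xs - x1) >= 0
    return mask

# Example: a square face region on a 5x5 image covers 4x4 = 16 pixel points.
mask = pixels_in_convex_region([(0, 0), (3, 0), (3, 3), (0, 3)], (5, 5))
P = int(mask.sum())  # P = 16
```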
  • the liveness detection result may be determined based on the probability of each of the at least two pixel points corresponding to spoofing.
  • determining, based on the probability of each of the at least two pixel points corresponding to spoofing, the liveness detection result of the target image includes: determining, based on the probability of each of the at least two pixel points corresponding to spoofing, at least one spoofing pixel point in the at least two pixel points; and determining, based on a proportion of the at least one spoofing pixel point in the at least two pixel points, the liveness detection result of the target image.
  • the probability of each pixel point of the target image to be corresponding to spoofing is obtained, and at least two pixel points included in the predicted face region are determined, it can be determined that at least one spoofing pixel point in the at least two pixel points is determined based on the probability of each of the at least two pixel points corresponding to spoofing, where the spoofing pixel point may be understood as a pixel point determined to correspond to spoofing.
  • the determination of the spoofing pixel point may be based on comparison of the probability with a preset threshold. Generally speaking, the higher the proportion of the spoofing pixel point in the pixel points of the predicted face region, the greater the possibility of the liveness detection indicating spoofing.
  • A preset threshold θ1 may be stored in the apparatus for liveness detection, and the number of pixel points among the at least two pixel points whose probabilities of corresponding to spoofing are greater than the preset threshold θ1 may be obtained, i.e., the number of spoofing pixel points, which may be denoted as Q.
  • a proportion Q/P of the at least one spoofing pixel point in the at least two pixel points may be calculated, and after determining the proportion, the liveness detection result of the target image may be determined.
  • determining, based on the proportion of the at least one spoofing pixel point in the at least two pixel points, the liveness detection result of the target image includes: in response to the proportion being greater than or equal to a first threshold, determining that the liveness detection result of the target image is spoofing.
  • A first threshold θ2 may be set in advance, and the apparatus for liveness detection may store the first threshold θ2 for the pixel-by-pixel determination in liveness detection; that is, whether the face in the target image is spoofing is analyzed by comparing the proportion Q/P with the first threshold θ2. In general, the higher the proportion Q/P, the greater the probability of the detection result being spoofing. If the proportion Q/P is greater than or equal to the first threshold θ2, it is determined that the liveness detection result of the target image is spoofing; if the proportion Q/P is less than the first threshold θ2, it is determined that the liveness detection result of the target image is non-spoofing.
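  • The threshold-based decision rule described above can be sketched as follows. The function name and the default values of θ1 and θ2 are illustrative assumptions; the disclosure does not fix specific threshold values:

```python
import numpy as np

def liveness_decision(prob_map, face_mask, theta1=0.5, theta2=0.5):
    """Decide spoofing vs. non-spoofing from per-pixel spoofing probabilities.

    prob_map  -- M x N matrix of per-pixel spoofing probabilities
    face_mask -- M x N boolean mask of the predicted face region
    """
    region = prob_map[face_mask]      # probabilities of the P face-region pixels
    P = region.size
    Q = int((region > theta1).sum())  # number of spoofing pixel points
    return "spoofing" if Q / P >= theta2 else "non-spoofing"

# Example: 3 of the 4 face-region pixels exceed theta1, so Q/P = 0.75 >= theta2.
probs = np.array([[0.9, 0.8, 0.1],
                  [0.7, 0.2, 0.1],
                  [0.1, 0.1, 0.1]])
mask = np.zeros((3, 3), dtype=bool)
mask[:2, :2] = True                      # face region: the top-left 2x2 block
result = liveness_decision(probs, mask)  # "spoofing"
```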
  • the thresholds used for determination of pixel points in the embodiments of the disclosure may be preset or determined according to actual conditions, and may be modified, added, or deleted. No limitation is made thereto in the embodiments of the disclosure.
  • the liveness detection result of the target image includes whether the face in the target image is non-spoofing or spoofing. After the liveness detection result is obtained, the liveness detection result may be output.
  • the method further includes: displaying at least one spoofing pixel point determined based on the probabilities of the multiple pixel points corresponding to spoofing.
  • the method further includes: outputting information of the at least one spoofing pixel point determined based on the probabilities of the multiple pixel points corresponding to spoofing for displaying.
  • the apparatus for liveness detection may display the liveness detection result, may display the at least one spoofing pixel point, and may also output the information of the at least one spoofing pixel point determined based on the probabilities of the multiple pixel points corresponding to spoofing, where the information may be used for displaying the spoofing pixel point, i.e., the information may also be transmitted to other terminal devices to display the spoofing pixel point.
  • the exact region on which each determination is based in the image may be intuitively seen, so that the detection result has high interpretability.
  • a target image may be processed to obtain probabilities of multiple pixel points of the target image corresponding to spoofing; a predicted face region in the target image is determined; and a liveness detection result of the target image is then obtained based on these probabilities and the predicted face region.
  • No additional hardware facilities such as a multi-ocular camera or 3D structured light are needed, and the accuracy of liveness detection of a single image frame may also be greatly improved when there is only one monocular camera, thereby achieving high adaptability and reducing detection costs.
  • FIG. 2 is a schematic flowchart of another method for liveness detection disclosed in embodiments of the disclosure.
  • FIG. 2 is further optimized based on FIG. 1 .
  • the subject executing the operations of the embodiments of the disclosure may be the aforementioned apparatus for liveness detection.
  • the method for liveness detection includes the following operations.
  • a neural network is used to process a target image to output a probability of each pixel point of the target image corresponding to spoofing.
  • that is, a trained neural network obtains the probability of each pixel point in the target image corresponding to spoofing.
  • the target image with a size of M×N may be obtained, the target image including a face is processed via the neural network, and an M×N-order probability matrix may be output, where elements in the M×N-order probability matrix indicate the probabilities of the pixel points of the target image corresponding to spoofing, and M and N are integers greater than 1.
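This input/output relationship can be sketched minimally as follows. The `mock_network` function, the 4×5 image size, and the sigmoid readout are illustrative assumptions standing in for the actual trained network, which the disclosure does not specify at this level of detail:

```python
import numpy as np

# Hypothetical stand-in for the trained network: it maps an M x N image
# to one real-valued logit per pixel. Here the logits are fabricated
# (seeded random values) purely for illustration.
def mock_network(image):
    rng = np.random.default_rng(0)
    return rng.normal(size=image.shape[:2])

def spoof_probability_map(image):
    """Return an M x N matrix of per-pixel spoofing probabilities."""
    logits = mock_network(image)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> values in (0, 1)

image = np.zeros((4, 5, 3))           # an M=4, N=5 RGB image
probs = spoof_probability_map(image)
assert probs.shape == (4, 5)          # one probability per pixel
assert ((probs > 0) & (probs < 1)).all()
```

The probability matrix has the same spatial dimensions as the input, which is what allows the later per-region analysis.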
  • a length and width of the image size in the embodiments of the disclosure may be in units of pixels.
  • The pixel is the most basic unit of a digital image. Each pixel is a small dot, and dots (pixels) of different colors aggregate into a picture.
  • Image resolution is the imaging size that many terminal devices may select. For example, common image resolutions include 640×480, 1024×768, 1600×1200, and 2048×1536. Of the two numbers in the imaging size, the former is the width of the picture and the latter is the height, and multiplying the two gives the number of pixels in the picture.
  • the embodiments of the disclosure mainly solve the technical problem of liveness detection for a single image frame.
  • the aforementioned target image may be a single image frame, and may be an image acquired by a camera, such as a photo captured by a camera of a terminal device or a single image frame in a video recorded by a camera of a terminal device.
  • the method before processing the target image, the method further includes: obtaining the target image that is acquired by a monocular camera.
  • the single image frame mentioned in the embodiments of the disclosure is a still picture.
  • An animation effect, such as a TV video, may be formed by consecutive frames.
  • the frame rate is simply the number of picture frames transmitted in 1 second; it may also be understood as the number of times a graphics processing unit can refresh per second, and is usually expressed in fps. A high frame rate yields smooth and realistic animation.
  • the target image including a face may be processed based on a trained convolutional neural network, where the convolutional neural network may be any end-to-end, point-to-point convolutional neural network, such as an existing semantic segmentation network, including a fully supervised semantic segmentation network.
  • the convolutional neural network may be trained by using sample data having pixel-level labels, so that the amount of data required for achieving the same accuracy can be reduced by one or two orders of magnitude, compared with existing methods that use data having image-level labels.
  • the trained convolutional neural network may predict, pixel point by pixel point, probabilities of pixel points in an input single image frame corresponding to spoofing.
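Training on pixel-level labels means the loss is accumulated per pixel rather than per image. A minimal sketch of such a per-pixel objective is below; the function name `pixelwise_bce` and the use of binary cross-entropy are assumptions for illustration, as the disclosure does not name a specific loss:

```python
import numpy as np

def pixelwise_bce(probs, labels, eps=1e-7):
    """Mean binary cross-entropy over all pixels.

    probs  : M x N predicted spoofing probabilities
    labels : M x N pixel-level labels (1 = spoofing, 0 = non-spoofing)
    """
    p = np.clip(probs, eps, 1 - eps)  # avoid log(0)
    return float(np.mean(-(labels * np.log(p) + (1 - labels) * np.log(1 - p))))

labels = np.array([[1.0, 0.0], [1.0, 0.0]])
good   = np.array([[0.9, 0.1], [0.9, 0.1]])   # close to the labels
bad    = np.array([[0.1, 0.9], [0.1, 0.9]])   # far from the labels
assert pixelwise_bce(good, labels) < pixelwise_bce(bad, labels)
```

Because every pixel carries a label, each training image contributes M×N supervised targets instead of one, which is consistent with the stated reduction in the amount of data required.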
  • An execution subject of the method for liveness detection may be the apparatus for liveness detection.
  • said method may be performed by a terminal device or a server or other processing devices, where the terminal device may be a UE, a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a PDA, a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method for liveness detection may be implemented by invoking, by a processor, computer-readable instructions stored in a memory. No limitation is made in the embodiments of the disclosure.
  • the apparatus for liveness detection may recognize the image size M ⁇ N of the target image, and process the target image including a face by means of a convolutional neural network to predict the probability of each pixel point of the target image to be corresponding to spoofing, which may be output in the form of a corresponding M ⁇ N-order probability matrix.
  • elements in the M ⁇ N-order probability matrix respectively indicate the probabilities of the pixel points of the target image to be corresponding to spoofing, where M and N are integers greater than 1.
  • a probability distribution map may also be generated based on the convolutional neural network.
  • the probability distribution map may be understood as an image that reflects the probabilities of the pixel points of the target image corresponding to spoofing; it is relatively intuitive and also facilitates the determination in liveness detection.
  • the convolutional neural network may be trained based on a mini-batch stochastic gradient descent algorithm and a learning rate decay strategy, which may also be replaced with an optimization algorithm having a similar effect, so as to ensure that the network model can converge during the training process.
  • Gradient descent is one of the iterative methods that can be used to solve least squares problems (both linear and nonlinear).
  • gradient descent is one of the most commonly used methods.
  • a gradient descent method can be used to iteratively compute, operation by operation, the minimized loss function and the model parameter values.
  • two gradient descent methods are developed based on the basic gradient descent method, namely, a Stochastic Gradient Descent (SGD) method and a Batch Gradient Descent (BGD) method.
  • Mini-Batch Gradient Descent (MBGD) in the embodiments of the disclosure is a compromise between BGD and SGD.
  • the idea thereof is to use “batch_size” samples to update the parameters in each iteration.
  • optimizing the neural network parameters by means of matrix operation on one batch each time is not much slower than on a single piece of data; moreover, the use of one batch each time can greatly reduce the number of iterations required for convergence, and can also make the convergence result closer to the effect of gradient descent.
  • The learning rate determines whether an objective function can converge to a local minimum, and when it does so.
  • a proper learning rate can make the objective function converge to a local minimum in a suitable time duration.
  • adjustable parameters in the learning rate decay strategy include an initial learning rate, set to 0.005 for example, and the power of the decay polynomial, set to 0.9 for example; adjustable parameters in the gradient descent algorithm include a momentum, set to 0.5 for example, and a weight decay parameter, set to 0.001 for example.
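The decay strategy and update rule above can be sketched as follows. The polynomial form of the decay, the function names, and `max_steps` are assumptions; the concrete values (0.005, 0.9, 0.5, 0.001) are the examples given in the text:

```python
import numpy as np

def poly_decay_lr(step, max_steps, init_lr=0.005, power=0.9):
    """Polynomial learning-rate decay: lr = init_lr * (1 - step/max_steps)**power."""
    return init_lr * (1.0 - step / max_steps) ** power

def sgd_momentum_step(w, grad, velocity, lr, momentum=0.5, weight_decay=0.001):
    """One mini-batch SGD update with momentum and weight decay (an L2 term)."""
    grad = grad + weight_decay * w             # weight decay adds w to the gradient
    velocity = momentum * velocity - lr * grad # momentum accumulates past updates
    return w + velocity, velocity

assert poly_decay_lr(0, 1000) == 0.005         # starts at the initial rate
assert poly_decay_lr(1000, 1000) == 0.0        # fully decayed at the last step
assert poly_decay_lr(500, 1000) < 0.005        # monotonically decreasing
```

In practice each `sgd_momentum_step` call would consume the averaged gradient of one mini-batch of `batch_size` samples, matching the MBGD compromise described above.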
  • the parameters may be set and modified according to actual conditions during training and application. No limitation is made to the specific parameter setting in a training process in the embodiments of the disclosure.
  • a predicted face region in the target image is determined.
  • After determining the predicted face region and obtaining the probability of each pixel point of the target image corresponding to spoofing, operation 203 may be executed.
  • At 203, at least two pixel points included in the predicted face region are determined from among the pixel points based on position information of each pixel point and the predicted face region.
  • the positions of the pixel points in the target image may be determined.
  • the apparatus for liveness detection may determine the position information of each pixel point, and then determine relative positions of the pixel points and the predicted face region according to the position information of the pixel points and the predicted face region, so as to further determine pixel points in the predicted face region, i.e., determining at least two pixel points included in the predicted face region, where the number thereof may be denoted as P and may be the total number of pixel points in the predicted face region. Then, operation 204 may be executed.
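Selecting the P pixel points inside the predicted face region amounts to masking the probability matrix by position. The following sketch assumes the predicted face region is given as a simple bounding box, which is an illustrative choice; the disclosure only requires that relative positions be comparable:

```python
import numpy as np

probs = np.array([[0.9, 0.2, 0.1],
                  [0.8, 0.7, 0.1],
                  [0.1, 0.1, 0.1]])       # per-pixel spoofing probabilities

# Hypothetical predicted face region as a box (top, left, bottom, right).
top, left, bottom, right = 0, 0, 2, 2

face_mask = np.zeros(probs.shape, dtype=bool)
face_mask[top:bottom, left:right] = True  # positions inside the face region

face_probs = probs[face_mask]             # probabilities of the P face pixels
P = face_probs.size
assert P == 4                             # the 2 x 2 box contains 4 pixels
```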
  • At 204, at least one spoofing pixel point in the at least two pixel points is determined based on the probability of each of the at least two pixel points corresponding to spoofing.
  • After the probability of each pixel point of the target image corresponding to spoofing is obtained and the at least two pixel points included in the predicted face region are determined, at least one spoofing pixel point in the at least two pixel points may be determined based on the probability of each of the at least two pixel points corresponding to spoofing, where a spoofing pixel point may be understood as a pixel point determined to correspond to spoofing.
  • the determination of the spoofing pixel point may be based on comparison of the probability with a preset threshold.
  • a preset threshold θ1 may be stored in the apparatus for liveness detection, and the number of pixel points in the at least two pixel points whose probabilities corresponding to spoofing are greater than the preset threshold θ1 may be obtained, i.e., the number of spoofing pixel points, which may be denoted as Q.
  • After determining the at least one spoofing pixel point in the at least two pixel points, operation 205 may be executed.
  • a proportion of the at least one spoofing pixel point in the at least two pixel points is determined. Furthermore, after determining the spoofing pixel point, a proportion Q/P of the at least one spoofing pixel point in the at least two pixel points may be calculated, i.e., a proportion of the spoofing pixel point in the predicted face region. After determining the proportion, operation 206 and/or operation 207 may be executed.
  • a first threshold θ2 may be set in advance, and the apparatus for liveness detection may store the first threshold θ2 for the pixel-by-pixel determination in the liveness detection; that is, whether the face in the target image is spoofing is analyzed by determining whether the proportion Q/P exceeds the first threshold θ2.
  • If the proportion Q/P is greater than or equal to the first threshold θ2, the proportion of pixel points determined as spoofing pixel points in the predicted face region is high, and it can be determined that the liveness detection result of the target image is spoofing; the liveness detection result may then be output. If the proportion Q/P is less than the first threshold θ2, the proportion of spoofing pixel points in the predicted face region is low, and operation 207 may be executed, i.e., determining that the liveness detection result of the target image is non-spoofing.
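Operations 204 through 207 can be condensed into one small function. The threshold values `theta1` and `theta2` are illustrative; the text leaves them configurable:

```python
import numpy as np

def liveness_by_proportion(probs, face_mask, theta1=0.5, theta2=0.5):
    """Count spoofing pixels in the face region (probability > theta1)
    and compare their proportion Q/P with the first threshold theta2."""
    face_probs = probs[face_mask]
    P = face_probs.size                   # pixels in the predicted face region
    Q = int((face_probs > theta1).sum())  # spoofing pixels among them
    return "spoofing" if Q / P >= theta2 else "non-spoofing"

probs = np.array([[0.9, 0.8], [0.9, 0.1]])
mask = np.ones((2, 2), dtype=bool)
assert liveness_by_proportion(probs, mask) == "spoofing"        # Q/P = 3/4
assert liveness_by_proportion(probs * 0.1, mask) == "non-spoofing"
```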
  • alarming information may be output or the alarming information may be sent to a preset terminal device to prompt a user that spoofing is detected in a face recognition process, so as to ensure the security of face recognition.
  • the method further includes:
  • averaging processing may be performed on the probabilities of the at least two pixel points corresponding to spoofing to obtain an average probability, i.e., an average probability R of the pixel points in the predicted face region corresponding to spoofing.
  • a target threshold θ3 may be set in advance and stored in the apparatus for liveness detection, and it can then be determined whether the average probability R is greater than the target threshold θ3 so as to perform the determination in the liveness detection. If the average probability R is greater than the target threshold θ3, the probabilities of the pixel points of the face corresponding to spoofing are relatively high, and it can be determined that the liveness detection result of the target image is spoofing; if the average probability R is not greater than the target threshold θ3, those probabilities are relatively low, and it can be determined that the liveness detection result of the target image is non-spoofing.
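This averaging variant can be sketched in a few lines; the threshold value `theta3` is an illustrative assumption:

```python
import numpy as np

def liveness_by_average(probs, face_mask, theta3=0.5):
    """Average the spoofing probabilities of the face-region pixels (R)
    and compare R with the target threshold theta3."""
    R = float(probs[face_mask].mean())
    return "spoofing" if R > theta3 else "non-spoofing"

probs = np.array([[0.9, 0.8], [0.7, 0.2]])
mask = np.ones((2, 2), dtype=bool)
assert liveness_by_average(probs, mask) == "spoofing"       # R = 0.65 > 0.5
assert liveness_by_average(1 - probs, mask) == "non-spoofing"
```

Compared with the Q/P proportion, the average is sensitive to how confident each pixel is, not just to how many pixels cross a per-pixel threshold.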
  • obtaining, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing and the predicted face region, the liveness detection result of the target image may include: determining, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing, a spoofing region of the target image; and determining, based on positions of the spoofing region and the predicted face region, the liveness detection result of the target image.
  • the spoofing region may be understood as a region of the target image where pixel points with relatively high probabilities of corresponding to spoofing are gathered.
  • a second threshold θ4 may be stored in the apparatus for liveness detection; the probabilities of the multiple pixel points corresponding to spoofing may be compared with the second threshold θ4 to determine the region where pixel points having probabilities greater than or equal to the second threshold θ4 are located as the spoofing region.
  • the position of the spoofing region may be compared with that of the predicted face region, and the overlapping condition therebetween may be mainly compared to determine the liveness detection result.
  • an overlapping region between the spoofing region and the predicted face region may be determined based on the positions of the spoofing region and the predicted face region; and the liveness detection result of the target image is determined based on a proportion of the overlapping region in the predicted face region.
  • the overlapping region between the spoofing region and the predicted face region may be determined, and then a proportion n of the overlapping region in the predicted face region may be calculated, where the proportion n may be the ratio of the area of the overlapping region to the area of the predicted face region, and the proportion n may be used to determine the liveness detection result of the target image.
  • the greater the proportion n, the greater the probability of the detection result being spoofing.
  • a third threshold θ5 may be stored in the apparatus for liveness detection, and the proportion n may be compared with the third threshold θ5.
  • If the proportion n is greater than or equal to the third threshold θ5, it can be determined that the liveness detection result of the target image is spoofing; if the proportion n is smaller than the third threshold θ5, it can be determined that the liveness detection result of the target image is non-spoofing.
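The overlap-based variant can be sketched as follows, with the spoofing region and face region both represented as boolean masks. The threshold values `theta4` and `theta5` are illustrative assumptions:

```python
import numpy as np

def liveness_by_overlap(probs, face_mask, theta4=0.5, theta5=0.5):
    """Determine the spoofing region (probabilities >= theta4), intersect it
    with the predicted face region, and compare the proportion n of the
    overlap in the face region with the third threshold theta5."""
    spoof_region = probs >= theta4
    overlap = spoof_region & face_mask
    n = overlap.sum() / face_mask.sum()   # overlap area / face-region area
    return "spoofing" if n >= theta5 else "non-spoofing"

probs = np.array([[0.9, 0.9, 0.1],
                  [0.9, 0.1, 0.1],
                  [0.1, 0.1, 0.1]])
face_mask = np.zeros((3, 3), dtype=bool)
face_mask[0:2, 0:2] = True                # face region: top-left 2 x 2 box
assert liveness_by_overlap(probs, face_mask) == "spoofing"   # n = 3/4
```

Representing both regions as masks makes the overlap a single element-wise AND, so the area ratio n falls out of two sums.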
  • the thresholds used for determination of pixel points in the embodiments of the disclosure may be preset or determined according to actual conditions, and may be modified, added or deleted. No limitation is made thereto in the embodiments of the disclosure.
  • an image A is the target image, and more specifically, is an image including a face. Liveness detection is required in the process of face recognition.
  • Process B represents the use of a trained neural network to perform convolution processing on the input image A in the embodiments of the disclosure, where the white boxes may be understood as multiple feature maps extracted in a feature extraction process in the convolution layers.
  • FIGS. 1 and 2 For the processing process of the neural network, reference may be made to relevant descriptions in FIGS. 1 and 2 , and details are not described herein again.
  • an image C including a predicted face region and a determined probability of each pixel point in the image corresponding to spoofing may be output, i.e., a liveness detection result (spoofing or non-spoofing) may be obtained.
  • the predicted face region shown in the image C is a spoofing region (the light-colored region in the middle of the image C), where the included pixel points determined by the probabilities may be referred to as spoofing pixel points, and dark-colored regions at the corners are roughly determined as the background portion of the image and have little influence on the liveness detection.
  • the exact region in the image on which the determination is based may also be intuitively seen from the output result, so that the liveness detection result is more interpretable.
  • the embodiments of the disclosure may be used as a part of a face recognition system to determine the authenticity of a face input to the system, thereby ensuring the security of the entire face recognition system.
  • the method is applicable to face recognition scenarios such as monitoring systems or attendance checking systems. Compared with a method that directly predicts a probability of whether the face in an image is spoofing, probability analysis based on pixel points improves the accuracy of liveness detection; it is applicable to a monocular camera and to detection in a single image frame, has high adaptability, and reduces costs compared with liveness detection using hardware devices such as a multi-ocular camera or 3D structured light.
  • the use of sample data having pixel-level labels to train the convolutional neural network can reduce the amount of data required for achieving the same accuracy by one or two orders of magnitude, compared with the general use of data having image-level labels.
  • the amount of data required for training is reduced while improving the liveness detection accuracy, thereby increasing the processing efficiency.
  • a neural network is used to process a target image to output a probability of each pixel point of the target image corresponding to spoofing; a predicted face region in the target image is determined; at least two pixel points included in the predicted face region are determined from the pixel points based on position information of each pixel point and the predicted face region; then at least one spoofing pixel point in the at least two pixel points is determined based on the probability of each of the at least two pixel points corresponding to spoofing; next, a proportion of the at least one spoofing pixel point in the at least two pixel points is determined; and it is determined, in response to the proportion being greater than or equal to a first threshold, that the liveness detection result of the target image is spoofing, or it is determined, in response to the proportion being less than the first threshold, that the liveness detection result of the target image is non-spoofing.
  • the apparatus for liveness detection includes hardware structures and/or software modules corresponding to the functions.
  • the disclosure may be implemented by hardware, or a combination of hardware and computer software, in combination with the units and operations of algorithms in the examples described in the embodiments disclosed herein. Whether a particular function is executed by hardware or by computer software driving hardware depends on the particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that the implementation goes beyond the scope of the disclosure.
  • the apparatus for liveness detection may be divided into functional units according to the method examples.
  • the functional units may be divided according to their respective functions, or two or more functions may be integrated into one processing unit.
  • the integrated unit may be implemented in a form of hardware and may also be implemented in a form of a software functional unit. It should be noted that the division of units in the embodiments of the disclosure is merely exemplary, is merely logical function division, and may be implemented in other division modes in actual implementation.
  • FIG. 4 is a schematic structural diagram of an apparatus for liveness detection disclosed in embodiments of the disclosure.
  • the apparatus 300 for liveness detection includes a pixel prediction module 310 , a face detection module 320 , and an analysis module 330 , where the pixel prediction module 310 is configured to process a target image to obtain probabilities of multiple pixel points of the target image corresponding to spoofing; the face detection module 320 is configured to determine a predicted face region in the target image; and the analysis module 330 is configured to obtain, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • the pixel prediction module 310 is configured to input the target image into a convolutional neural network for processing to obtain a probability of each pixel point of the target image corresponding to spoofing.
  • the convolutional neural network is trained based on sample data having pixel-level labels.
  • the analysis module 330 includes a first unit 331 and a second unit 332 , where the first unit 331 is configured to determine, based on position information of the multiple pixel points and the predicted face region, at least two pixel points included in the predicted face region from among the multiple pixel points; and the second unit 332 is configured to determine, based on the probability of each of the at least two pixel points corresponding to spoofing, the liveness detection result of the target image.
  • the second unit 332 is configured to determine, based on the probability of each of the at least two pixel points corresponding to spoofing, at least one spoofing pixel point in the at least two pixel points; and determine, based on a proportion of the at least one spoofing pixel point in the at least two pixel points, the liveness detection result of the target image.
  • the second unit 332 is configured to: determine, in response to the proportion being greater than or equal to a first threshold, that the liveness detection result of the target image is spoofing; and/or determine, in response to the proportion being less than the first threshold, that the liveness detection result of the target image is non-spoofing.
  • the second unit 332 is configured to: perform averaging processing on the probabilities of the at least two pixel points corresponding to spoofing to obtain an average probability; and determine, based on the average probability, the liveness detection result of the target image.
  • the analysis module 330 is configured to: determine, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing, a spoofing region of the target image; and determine, based on positions of the spoofing region and the predicted face region, the liveness detection result of the target image.
  • the analysis module 330 is configured to: determine, based on the positions of the spoofing region and the predicted face region, an overlapping region between the spoofing region and the predicted face region; and determine, based on a proportion of the overlapping region in the predicted face region, the liveness detection result of the target image.
  • the apparatus 300 for liveness detection further includes: a display module 340 , configured to display at least one spoofing pixel point determined based on the probabilities of the multiple pixel points corresponding to spoofing; and/or a transmission module 350 , configured to output information of the at least one spoofing pixel point determined based on the probabilities of the multiple pixel points corresponding to spoofing for displaying.
  • the face detection module 320 is configured to: perform face key point detection on the target image to obtain key point prediction information; and determine, based on the key point prediction information, the predicted face region in the target image.
  • the face detection module 320 is further configured to perform face detection on the target image to obtain a face bounding region in the target image; and the face detection module 320 is configured to perform face key point detection on the image in the face bounding region to obtain the key point prediction information.
  • the face detection module 320 is configured to: perform face detection on the target image to obtain the predicted face region in the target image.
  • the apparatus 300 for liveness detection further includes an image obtaining module 360 configured to obtain the target image that is acquired by a monocular camera.
  • the method for liveness detection in the embodiments in FIGS. 1 and 2 can be implemented by using the apparatus 300 for liveness detection in the embodiments of the disclosure.
  • the apparatus 300 for liveness detection may process a target image to obtain probabilities of multiple pixel points of the target image corresponding to spoofing, determine a predicted face region in the target image, and then obtain, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • No additional hardware facilities such as a multi-ocular camera or 3D structured light are needed, and the accuracy of liveness detection of a single image frame may also be greatly improved when there is only one monocular camera, thereby achieving high adaptability and reducing detection costs.
  • FIG. 5 is a schematic structural diagram of an electronic device disclosed in embodiments of the disclosure.
  • the electronic device 400 includes a processor 401 and a memory 402 , the electronic device 400 may further include a bus 403 , the processor 401 may be connected to the memory 402 by means of the bus 403 , and the bus 403 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, etc.
  • the bus 403 may include an address bus, a data bus, a control bus, etc. For ease of representation, only one thick line is used in FIG. 5 , but it does not mean that there is only one bus or one type of bus.
  • the electronic device 400 may further include an input/output device 404 , which may include a display screen, such as a liquid crystal display screen.
  • the memory 402 is configured to store a computer program; the processor 401 is configured to invoke the computer program stored in the memory 402 to execute some or all of the operations of the method mentioned in the embodiments of FIG. 1 and FIG. 2 above.
  • the electronic device 400 may process a target image to obtain probabilities of multiple pixel points of the target image corresponding to spoofing, determine a predicted face region in the target image, and then obtain, based on the probabilities of the multiple pixel points of the target image corresponding to spoofing and the predicted face region, a liveness detection result of the target image.
  • No additional hardware facilities such as a multi-ocular camera or 3D structured light are needed, and the accuracy of liveness detection of a single image frame may also be greatly improved when there is only one monocular camera, thereby achieving high adaptability and reducing detection costs.
  • Embodiments of the disclosure further provide a computer storage medium, where the computer storage medium is configured to store a computer program, and the computer program enables a computer to execute some or all of the operations of the method for liveness detection described in any one of the foregoing method embodiments.
  • Embodiments of the disclosure provide a computer program product, where the computer program product includes a computer program, the computer program is configured to be executed by a processor, and the processor is configured to execute some or all of the operations of the method for liveness detection described in any one of the foregoing method embodiments.
  • the disclosed apparatus in several embodiments provided in the disclosure may be implemented in other modes.
  • the apparatus embodiments described above are merely exemplary.
  • the division of the units is merely logical function division and may be implemented in other division modes in actual implementation.
  • a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by means of some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be electrical or in other forms.
  • the units (modules) described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located at one position, or may be distributed on a plurality of network units. Some of or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • functional units in the embodiments of the disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
  • when implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable memory. Based on such an understanding, the part of the technical solutions of the disclosure that essentially contributes to the prior art may be embodied in the form of a software product.
  • the computer software product is stored in a memory and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, or the like) to implement all or some of the operations of the method in the embodiments of the disclosure.
  • the foregoing memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk drive, a floppy disk, or an optical disc.
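The claims summarized in the page header refer to determining spoofing results for pixel points of a target image and reaching a liveness decision from them. The sketch below illustrates that general kind of decision rule only; the function name, threshold parameters, and default values are hypothetical and are not taken from the patented method.

```python
import numpy as np

def liveness_decision(prob_map: np.ndarray,
                      pixel_threshold: float = 0.5,
                      area_threshold: float = 0.1) -> bool:
    """Return True if the target image is judged live, False if spoofing.

    prob_map: per-pixel spoofing probabilities in [0, 1], shape (H, W).
    A pixel point counts as a spoofing pixel when its probability exceeds
    pixel_threshold; the image is rejected as spoofing when the fraction
    of such pixels exceeds area_threshold. Both thresholds are
    illustrative placeholders, not values from the patent.
    """
    spoof_pixels = prob_map > pixel_threshold  # boolean mask of spoofing pixel points
    spoof_ratio = spoof_pixels.mean()          # fraction of the image flagged as spoofing
    return bool(spoof_ratio <= area_threshold) # live iff few pixels look spoofed
```

In practice the probability map would come from a segmentation-style neural network; the rule above only shows how per-pixel spoofing results can be aggregated into a single live/spoof decision.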

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Operations Research (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)
US16/998,279 2019-04-01 2020-08-20 Method and apparatus for liveness detection, electronic device, and storage medium Abandoned US20200380279A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201910257350.9A CN111767760A (zh) 2019-04-01 2019-04-01 Method and apparatus for liveness detection, electronic device, and storage medium
CN201910257350.9 2019-04-01
PCT/CN2019/120404 WO2020199611A1 (zh) 2019-04-01 2019-11-22 Method and apparatus for liveness detection, electronic device, and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/120404 Continuation WO2020199611A1 (zh) 2019-04-01 2019-11-22 Method and apparatus for liveness detection, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
US20200380279A1 true US20200380279A1 (en) 2020-12-03

Family

ID=72664509

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/998,279 Abandoned US20200380279A1 (en) 2019-04-01 2020-08-20 Method and apparatus for liveness detection, electronic device, and storage medium

Country Status (7)

Country Link
US (1) US20200380279A1 (ja)
JP (1) JP7165742B2 (ja)
KR (1) KR20200118076A (ja)
CN (1) CN111767760A (ja)
SG (1) SG11202008103YA (ja)
TW (1) TWI754887B (ja)
WO (1) WO2020199611A1 (ja)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200302159A1 (en) * 2017-12-11 2020-09-24 Analog Devices, Inc. Multi-modal far field user interfaces and vision-assisted audio processing
US20210286975A1 (en) * 2020-08-20 2021-09-16 Beijing Baidu Netcom Science And Technology Co., Ltd. Image processing method, electronic device, and storage medium
US20210304437A1 (en) * 2018-08-21 2021-09-30 Siemens Aktiengesellschaft Orientation detection in overhead line insulators
US20210326617A1 (en) * 2020-04-17 2021-10-21 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for spoof detection
US11670069B2 (en) * 2020-02-06 2023-06-06 ID R&D, Inc. System and method for face spoofing attack detection
CN116363762A (zh) * 2022-12-23 2023-06-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Liveness detection method, and training method and apparatus for deep learning model
JP7490141B2 (ja) 2021-01-28 2024-05-24 Tencent Technology (Shenzhen) Co., Ltd. Image detection method, model training method, image detection apparatus, training apparatus, device, and program

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651311A (zh) * 2020-12-15 2021-04-13 Spreadtrum Communications (Tianjin) Co., Ltd. Face recognition method and related device
CN112883902B (zh) * 2021-03-12 2023-01-24 Baidu Online Network Technology (Beijing) Co., Ltd. Video detection method and apparatus, electronic device, and storage medium
CN113705428B (zh) * 2021-08-26 2024-07-19 Beijing SenseTime Technology Development Co., Ltd. Liveness detection method and apparatus, electronic device, and computer-readable storage medium
CN113869906A (zh) * 2021-09-29 2021-12-31 Beijing SenseTime Technology Development Co., Ltd. Face payment method and apparatus, and storage medium
CN113971841A (zh) * 2021-10-28 2022-01-25 Beijing SenseTime Technology Development Co., Ltd. Liveness detection method and apparatus, computer device, and storage medium
CN114550244A (zh) * 2022-02-11 2022-05-27 Alipay (Hangzhou) Information Technology Co., Ltd. Liveness detection method, apparatus, and device
CN114648814A (zh) * 2022-02-25 2022-06-21 Beijing Baidu Netcom Science And Technology Co., Ltd. Face liveness detection method and model training method, apparatus, device, and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040131235A1 (en) * 2002-12-13 2004-07-08 Canon Kabushiki Kaisha Image processing method, apparatus and storage medium
US20160379050A1 (en) * 2015-06-26 2016-12-29 Kabushiki Kaisha Toshiba Method for determining authenticity of a three-dimensional object
US20180276488A1 (en) * 2017-03-27 2018-09-27 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US20180322366A1 (en) * 2017-05-02 2018-11-08 General Electric Company Neural network training image generation system
US20180357501A1 (en) * 2017-06-07 2018-12-13 Alibaba Group Holding Limited Determining user authenticity with face liveness detection
US20190209052A1 (en) * 2016-06-30 2019-07-11 Koninklijke Philips N.V. Method and apparatus for face detection/recognition systems
US20210082136A1 (en) * 2018-12-04 2021-03-18 Yoti Holding Limited Extracting information from images

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1306456C (zh) * 2002-12-13 2007-03-21 Canon Kabushiki Kaisha Image processing method and apparatus
JP4812497B2 (ja) 2006-03-31 2011-11-09 Secom Co., Ltd. Biometric verification system
JP5402026B2 (ja) 2009-01-30 2014-01-29 Nikon Corp. Electronic camera and image processing program
CN105389554B (zh) * 2015-11-06 2019-05-17 Beijing Hanwang Zhiyuan Technology Co., Ltd. Liveness determination method and device based on face recognition
EP3380859A4 (en) 2015-11-29 2019-07-31 Arterys Inc. AUTOMATED SEGMENTATION OF CARDIAC VOLUME
CN107220635A (zh) * 2017-06-21 2017-09-29 Beijing Weifu Anfang Technology Co., Ltd. Face liveness detection method based on multiple spoofing modes
CN108229479B (zh) * 2017-08-01 2019-12-31 Beijing SenseTime Technology Development Co., Ltd. Training method and apparatus for semantic segmentation model, electronic device, and storage medium
CN108280418A (zh) * 2017-12-12 2018-07-13 Beijing Shenxing Technology Co., Ltd. Spoofing recognition method and apparatus for face images
TWI632509B (zh) * 2017-12-29 2018-08-11 Giga-Byte Technology Co., Ltd. Face recognition apparatus and method, method for improving image recognition rate, and computer-readable storage medium
CN108121977A (zh) * 2018-01-08 2018-06-05 Shenzhen Tinno Wireless Technology Co., Ltd. Mobile terminal and liveness face recognition method and system thereof
CN108549854B (zh) * 2018-03-28 2019-04-30 Zhongke Bohong (Beijing) Technology Co., Ltd. Face liveness detection method
CN108537193A (zh) * 2018-04-17 2018-09-14 Xiamen Meitu Zhijia Technology Co., Ltd. Method for recognizing ethnicity among face attributes, and mobile terminal
CN108764330A (zh) * 2018-05-25 2018-11-06 Xidian University SAR image classification method based on superpixel segmentation and convolution-deconvolution network
CN109191424B (zh) * 2018-07-23 2022-04-22 Harbin Institute of Technology (Shenzhen) Breast mass detection and classification system, and computer-readable storage medium
CN109035516A (zh) * 2018-07-25 2018-12-18 Shenzhen Feiruisi Technology Co., Ltd. Method, apparatus, device, and storage medium for controlling smart lock
CN109086718A (zh) * 2018-08-02 2018-12-25 Shenzhen Huafu Information Technology Co., Ltd. Liveness detection method and apparatus, computer device, and storage medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040131235A1 (en) * 2002-12-13 2004-07-08 Canon Kabushiki Kaisha Image processing method, apparatus and storage medium
US20160379050A1 (en) * 2015-06-26 2016-12-29 Kabushiki Kaisha Toshiba Method for determining authenticity of a three-dimensional object
US20190209052A1 (en) * 2016-06-30 2019-07-11 Koninklijke Philips N.V. Method and apparatus for face detection/recognition systems
US20180276488A1 (en) * 2017-03-27 2018-09-27 Samsung Electronics Co., Ltd. Liveness test method and apparatus
US20180322366A1 (en) * 2017-05-02 2018-11-08 General Electric Company Neural network training image generation system
US20180357501A1 (en) * 2017-06-07 2018-12-13 Alibaba Group Holding Limited Determining user authenticity with face liveness detection
US20210082136A1 (en) * 2018-12-04 2021-03-18 Yoti Holding Limited Extracting information from images

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200302159A1 (en) * 2017-12-11 2020-09-24 Analog Devices, Inc. Multi-modal far field user interfaces and vision-assisted audio processing
US11830289B2 (en) * 2017-12-11 2023-11-28 Analog Devices, Inc. Multi-modal far field user interfaces and vision-assisted audio processing
US20210304437A1 (en) * 2018-08-21 2021-09-30 Siemens Aktiengesellschaft Orientation detection in overhead line insulators
US11861480B2 (en) * 2018-08-21 2024-01-02 Siemens Mobility GmbH Orientation detection in overhead line insulators
US11670069B2 (en) * 2020-02-06 2023-06-06 ID R&D, Inc. System and method for face spoofing attack detection
US20210326617A1 (en) * 2020-04-17 2021-10-21 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for spoof detection
US20210286975A1 (en) * 2020-08-20 2021-09-16 Beijing Baidu Netcom Science And Technology Co., Ltd. Image processing method, electronic device, and storage medium
US11741684B2 (en) * 2020-08-20 2023-08-29 Beijing Baidu Netcom Science And Technology Co., Ltd. Image processing method, electronic device and storage medium for performing skin color recognition on a face image
JP7490141B2 (ja) 2021-01-28 2024-05-24 Tencent Technology (Shenzhen) Co., Ltd. Image detection method, model training method, image detection apparatus, training apparatus, device, and program
CN116363762A (zh) * 2022-12-23 2023-06-30 Beijing Baidu Netcom Science And Technology Co., Ltd. Liveness detection method, and training method and apparatus for deep learning model

Also Published As

Publication number Publication date
JP7165742B2 (ja) 2022-11-04
SG11202008103YA (en) 2020-11-27
TWI754887B (zh) 2022-02-11
TW202038191A (zh) 2020-10-16
JP2021520530A (ja) 2021-08-19
CN111767760A (zh) 2020-10-13
WO2020199611A1 (zh) 2020-10-08
KR20200118076A (ko) 2020-10-14

Similar Documents

Publication Publication Date Title
US20200380279A1 (en) Method and apparatus for liveness detection, electronic device, and storage medium
US11170210B2 (en) Gesture identification, control, and neural network training methods and apparatuses, and electronic devices
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
EP3422250B1 (en) Facial verification method and apparatus
US10832069B2 (en) Living body detection method, electronic device and computer readable medium
JP6629513B2 (ja) ライブネス検査方法と装置、及び映像処理方法と装置
US10339402B2 (en) Method and apparatus for liveness detection
TWI686774B (zh) 人臉活體檢測方法和裝置
WO2016172872A1 (zh) 用于验证活体人脸的方法、设备和计算机程序产品
US10318797B2 (en) Image processing apparatus and image processing method
CN110738116B (zh) 活体检测方法及装置和电子设备
CN112733802A (zh) 图像的遮挡检测方法、装置、电子设备及存储介质
CN108875468B (zh) 活体检测方法、活体检测系统以及存储介质
US20220198836A1 (en) Gesture recognition method, electronic device, computer-readable storage medium, and chip
KR102257897B1 (ko) 라이브니스 검사 방법과 장치,및 영상 처리 방법과 장치
CN110287848A (zh) 视频的生成方法及装置
CN106778574A (zh) 用于人脸图像的检测方法和装置
US20230306792A1 (en) Spoof Detection Based on Challenge Response Analysis
KR101961462B1 (ko) 객체 인식 방법 및 장치
US11335128B2 (en) Methods and systems for evaluating a face recognition system using a face mountable device
CN110276313B (zh) 身份认证方法、身份认证装置、介质和计算设备
CN112115811A (zh) 基于隐私保护的图像处理方法、装置和电子设备
CN112580395A (zh) 基于深度信息的3d人脸活体识别方法、系统、设备及介质
CN110363111A (zh) 基于镜头失真原理的人脸活体检测方法、装置及存储介质
CN108875467B (zh) 活体检测的方法、装置及计算机存储介质

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: BEIJING SENSETIME TECHNOLOGY DEVELOPMENT CO., LTD., CHINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YANG, GUOWEI;SHAO, JING;YAN, JUNJIE;AND OTHERS;REEL/FRAME:054044/0661

Effective date: 20200518

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: ADVISORY ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION