WO2019127262A1 - Cloud-based human face liveness detection method, electronic device and program product - Google Patents

Cloud-based human face liveness detection method, electronic device and program product

Info

Publication number
WO2019127262A1
WO2019127262A1 PCT/CN2017/119543 CN2017119543W WO2019127262A1 WO 2019127262 A1 WO2019127262 A1 WO 2019127262A1 CN 2017119543 W CN2017119543 W CN 2017119543W WO 2019127262 A1 WO2019127262 A1 WO 2019127262A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
user
image
distance
human face
Prior art date
Application number
PCT/CN2017/119543
Other languages
English (en)
Chinese (zh)
Inventor
刘兆祥
廉士国
王敏
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司
Priority to CN201780002701.0A (CN108124486A)
Priority to PCT/CN2017/119543 (WO2019127262A1)
Publication of WO2019127262A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G06V40/45 - Detection of the body part being alive

Definitions

  • The present invention relates to the field of face detection technologies, and in particular to a cloud-based face liveness detection method, an electronic device, and a program product.
  • In the related art, face recognition technology can acquire a user's face image directly through a camera, which is convenient and fast, but it also brings information security problems: for example, a face photo or a face video can be used to deceive a face recognition system.
  • The embodiments of the present application provide a cloud-based face liveness detection method, an electronic device, and a program product, which are mainly applied in blind-guidance navigation scenarios.
  • In a first aspect, an embodiment of the present application provides a cloud-based face liveness detection method, including: continuously collecting a plurality of first face images of a user;
  • after determining that each first face image is a live image, identifying whether there is micro-motion in the plurality of consecutive first face images; and if there is micro-motion, confirming that the user's face passes the liveness detection.
  • In a second aspect, an embodiment of the present application provides an electronic device, where the electronic device includes:
  • one or more processors; and a memory coupled to the processor(s) via a communication bus; the processor is configured to execute instructions in the memory, and the memory stores instructions for performing the steps of the method of the first aspect.
  • In a third aspect, an embodiment of the present application provides a computer program product for use in conjunction with an electronic device including a display; the computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein, and the computer program mechanism includes instructions for performing the steps of the method of the first aspect described above.
  • In the embodiments of the present application, a plurality of first face images of the user are continuously collected; after it is determined that each first face image is a live image, it is identified whether there is micro-motion in the plurality of consecutive first face images; if there is micro-motion, it is confirmed that the user's face passes the liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents the behavior of deceiving the face recognition system with a face photo or a face video, realizes the function of distinguishing a real person from a dummy, and ensures information security.
  • FIG. 1 is a schematic flowchart of a cloud-based face liveness detection method according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of key facial feature parts in an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a deep neural network for micro-expression recognition according to an embodiment of the present application;
  • FIG. 4 is a schematic flowchart of another cloud-based face liveness detection method in an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Face recognition is applied more and more widely, but face recognition has a core security problem: face spoofing. For example, a face recognition system can be deceived by a face photo, a face video, or a 3D face mask.
  • To address this, an embodiment of the present application provides a cloud-based face liveness detection method, which continuously collects a plurality of first face images of a user, determines that each first face image is a live image, and then identifies whether there is micro-motion in the plurality of consecutive first face images. If there is micro-motion, it is confirmed that the user's face passes the liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents a face photo or a face video from being used to deceive the face recognition system, realizes the function of distinguishing a real person from a dummy, and ensures information security.
  • The cloud-based face liveness detection method includes the following steps.
  • Spoofing attacks against face recognition usually use printed photos containing a face image, face videos played on a mobile phone screen or a computer screen, or 3D masks. These spoofing tools usually exhibit characteristic differences from a normal live face.
  • Therefore, this solution first constrains the distance between the user (i.e. the face) and the recognition device (e.g. the camera), and then identifies these characteristic differences while the camera and the face are kept at a suitable distance.
  • Step 1: Acquire a second face image of the user.
  • The second face image is an image used for adjusting the user's distance, which is different from the images used for subsequent face recognition.
  • Step 2: Acquire the face area in the second face image.
  • Step 3: Determine the user distance according to the face area.
  • Specifically, the user distance may be determined according to the proportion of the second face image occupied by the face area. Alternatively, the distance between preset facial parts may be extracted from the face area, and the user distance determined according to the ratio of that distance to the width and height of the second face image.
  • Step 4: If the user distance matches the distance requirement, determine that the user meets the distance requirement.
  • Step 5: If the user distance does not match the distance requirement, instruct the user to move so as to meet the distance requirement.
  • For example, a prompt (such as a voice prompt or a text prompt) can be sent to the user to guide the user to adjust his or her position, posture, and the like.
  • After the user moves, Steps 1 to 3 are performed again to determine whether the adjusted distance matches the distance requirement; if it matches, Step 4 is performed, otherwise Step 5 is performed again. This cycle repeats until the user meets the distance requirement.
  • In practice, face detection is performed first to obtain the face area.
  • The distance of the face can then be approximated from the size of the face area and the proportion of the image it occupies: if the proportion is within a suitable range, the face is considered to be within the optimal distance; otherwise, the user is asked to move closer or further away according to the magnitude of the ratio.
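  • For illustration, the following is a minimal sketch of such an area-ratio distance check, assuming OpenCV's bundled Haar face detector and an arbitrarily chosen ratio range (neither of which is specified by this disclosure):

```python
# Minimal sketch: approximate whether the user is at a suitable distance from
# the camera using the proportion of the frame occupied by the detected face.
import cv2

FACE_RATIO_RANGE = (0.05, 0.35)  # assumed "suitable distance" bounds on the area ratio

def check_face_distance(frame_bgr):
    """Return 'ok', 'too_far', 'too_close', or 'no_face' from the face/frame area ratio."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "no_face"
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest face
    ratio = (w * h) / float(frame_bgr.shape[0] * frame_bgr.shape[1])
    if ratio < FACE_RATIO_RANGE[0]:
        return "too_far"      # prompt the user to move closer
    if ratio > FACE_RATIO_RANGE[1]:
        return "too_close"    # prompt the user to move away
    return "ok"
```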
  • Alternatively, the 2D coordinates of facial key points can be obtained, and then the 3D pose (Euler angles) and the 3D translation (Tx, Ty, Tz) of the face relative to the camera can be computed with the solvePnP algorithm; the 3D distance is further obtained from the translation, and it is then judged whether the distance is within the suitable range.
  • At the same time, based on the position and posture detection results described above, the user may be reminded to adjust the posture of the face (roll, pitch, yaw) and its position in the 2D image (left, right, up, down, and so on).
  • Roll is the rotation around the Z axis, also called the roll angle.
  • Pitch is the rotation around the X axis, also called the pitch angle.
  • Yaw is the rotation around the Y axis, also called the yaw angle.
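  • A minimal sketch of this pose-and-distance estimation with OpenCV's solvePnP follows; the generic 3D landmark model, the focal-length guess and the landmark ordering are assumptions for illustration only:

```python
# Minimal sketch: recover the 3D pose (roll/pitch/yaw) and translation of the
# face relative to the camera from 2D facial landmarks using cv2.solvePnP.
import numpy as np
import cv2

# Approximate 3D positions (mm) of a few canonical landmarks in a generic head model.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),        # nose tip
    (0.0, -63.6, -12.5),    # chin
    (-43.3, 32.7, -26.0),   # left eye outer corner
    (43.3, 32.7, -26.0),    # right eye outer corner
    (-28.9, -28.9, -24.1),  # left mouth corner
    (28.9, -28.9, -24.1),   # right mouth corner
], dtype=np.float64)

def estimate_pose(image_points_2d, image_size):
    """image_points_2d: (6, 2) landmarks in the same order as MODEL_POINTS; image_size: (h, w)."""
    h, w = image_size
    focal = w  # crude focal-length guess for an uncalibrated camera
    camera_matrix = np.array([[focal, 0, w / 2.0],
                              [0, focal, h / 2.0],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    rot_mat, _ = cv2.Rodrigues(rvec)
    # Euler angles: pitch around X, yaw around Y, roll around Z.
    sy = np.hypot(rot_mat[0, 0], rot_mat[1, 0])
    pitch = np.degrees(np.arctan2(rot_mat[2, 1], rot_mat[2, 2]))
    yaw = np.degrees(np.arctan2(-rot_mat[2, 0], sy))
    roll = np.degrees(np.arctan2(rot_mat[1, 0], rot_mat[0, 0]))
    distance_mm = float(np.linalg.norm(tvec))  # 3D distance from (Tx, Ty, Tz)
    return {"pitch": pitch, "yaw": yaw, "roll": roll, "distance_mm": distance_mm}
```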
  • Then, the user's face images are collected continuously and multiple times; these are the first face images.
  • The first face images are the basis on which liveness detection is performed on the user's face.
  • If any one of the first face images is determined to be a live image, that first face image is stored in the image sequence.
  • The image sequence here is initially empty. When a first face image is determined to be a live image, it is stored in the image sequence, and the next frame is then checked in the same way; if that next frame is also a live image, it is stored in the image sequence as well. This loop repeats until all the first face images have undergone live-image detection. If a non-live image is found during the detection, the face images in the image sequence are cleared at that point.
  • If any first face image is not a live image, the process is terminated, the image sequence is cleared, and the user's face fails the liveness detection.
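  • The sequence-buffering logic described above can be sketched as follows; collect_live_sequence, is_live_image and required_length are illustrative names, and the single-frame classifier is assumed to exist (see the per-frame classification sketch further below):

```python
# Minimal sketch of the sequence buffering: live frames are appended to the
# sequence; as soon as one frame is classified as non-live, the sequence is
# cleared and the check fails.
def collect_live_sequence(frames, is_live_image, required_length):
    sequence = []
    for frame in frames:
        if is_live_image(frame):
            sequence.append(frame)
            if len(sequence) >= required_length:
                return sequence          # enough consecutive live frames collected
        else:
            sequence.clear()             # non-live frame: discard everything and fail
            return None
    sequence.clear()
    return None                          # not enough frames were supplied
```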
  • Non-live images include, but are not limited to: photos (such as printed photos, photos on a mobile phone screen, photos on a computer screen), videos (such as videos on a mobile phone screen or a computer screen), and face masks (such as 3D face masks).
  • The filtering of a single image can be achieved in step 103.
  • Specifically, the single image is classified and discriminated by a machine learning method, for example a CNN (Convolutional Neural Network).
  • Deep learning is used for the classification and discrimination, for example using the widely adopted ResNet classification network.
  • After training, each first face image is classified and identified using the trained network model and weights: the class with the largest output probability is taken as the prediction, and a threshold can additionally be set for further discrimination, for example requiring that the maximum probability is greater than a set value.
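  • As an illustration of this per-frame classification step, the following is a minimal sketch using a ResNet-18 from torchvision with a probability threshold; the class list, the commented-out checkpoint name, the threshold value and the preprocessing are assumptions, not part of this disclosure:

```python
# Minimal sketch: classify one face crop with a ResNet-style network and accept
# the prediction only if the maximum class probability exceeds a threshold.
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

CLASSES = ["normal_face", "printed_photo", "phone_screen", "computer_screen", "mask_3d"]
PROB_THRESHOLD = 0.8

model = resnet18(num_classes=len(CLASSES))
# model.load_state_dict(torch.load("liveness_resnet18.pth"))  # hypothetical trained weights
model.eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_face(face_bgr):
    """face_bgr: uint8 HxWx3 BGR crop. Returns (label, probability); label is None if uncertain."""
    rgb = face_bgr[:, :, ::-1].copy()               # BGR -> RGB
    x = preprocess(rgb).unsqueeze(0)                # shape (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    p, idx = probs.max(dim=0)
    if p.item() < PROB_THRESHOLD:
        return None, p.item()                       # reject low-confidence predictions
    return CLASSES[idx.item()], p.item()
```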
  • If the classification result is a normal face, step 104 is performed to carry out image-sequence classification and discrimination; if the classification result falls into any other category, the image sequence is cleared and the entire detection process is restarted.
  • If the image-sequence classification also gives a normal face, step 105 is performed.
  • Otherwise, the process is terminated, the image sequence is cleared, and the user's face fails the liveness detection.
  • Through the above, printed photos, mobile phone screens, computer screens and 3D face masks containing a face image or a face video can be recognized; however, when the judgment relies only on this single-image recognition result, misjudgments of the user's face liveness detection may still occur.
  • Therefore, image-sequence classification filtering is additionally performed through step 104.
  • In step 104, the image sequence is input into a deep neural network for classification and discrimination, and the output falls into two categories: normal face and abnormal face.
  • The deep neural network can be based directly on a 3D convolutional neural network, or on a general 2D convolutional neural network such as ResNet; the only difference is that the network input is the stacked sequence of image data, as shown in FIG. 3.
  • A general ResNet classification network takes 1-channel or 3-channel input; after the image sequence is stacked, taking color images as an example, this is equivalent to inputting N*3-channel data.
  • Here N is the length of the image sequence input into the deep neural network, i.e. the number of first face images in that image sequence.
  • As with the single-image network, the 3D convolutional neural network or the 2D convolutional neural network is trained first; after training, the input image sequence is discriminated directly using the trained model and weights. The category with the largest output probability is taken as the result, and a threshold can also be set for further filtering.
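  • One straightforward way to feed such a stacked N*3-channel sequence to an otherwise standard 2D ResNet is to widen its first convolution, as in the minimal sketch below; the sequence length N, the binary class labels and the input size are illustrative assumptions:

```python
# Minimal sketch: stack N color frames along the channel axis and classify the
# result with a 2D ResNet whose first convolution accepts N*3 input channels.
import torch
import torch.nn as nn
from torchvision.models import resnet18

N = 8  # assumed sequence length

def make_sequence_classifier(n_frames=N):
    model = resnet18(num_classes=2)                 # normal face vs. abnormal face
    model.conv1 = nn.Conv2d(n_frames * 3, 64, kernel_size=7, stride=2, padding=3, bias=False)
    return model

def stack_sequence(frames):
    """frames: list of N tensors of shape (3, H, W) -> one tensor of shape (1, N*3, H, W)."""
    return torch.cat(frames, dim=0).unsqueeze(0)

# Usage with dummy data: 8 RGB frames of size 224x224.
model = make_sequence_classifier()
model.eval()
dummy = [torch.rand(3, 224, 224) for _ in range(N)]
with torch.no_grad():
    logits = model(stack_sequence(dummy))           # shape (1, 2)
probs = torch.softmax(logits, dim=1)
```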
  • The image sequence is input directly into the deep neural network for classification and discrimination. If the final output is a normal face, it is determined that there is micro-motion and step 105 is performed to complete the liveness detection; otherwise it is determined that there is no micro-motion, the process is terminated, the image sequence is cleared, the user's face fails the liveness detection, and the entire detection process is restarted.
  • In summary, face-distance detection is performed first to remind the user to keep a suitable distance from the camera for the subsequent liveness detection; then a single face image is collected and classified as a printed photo, a mobile phone screen, a computer screen, a 3D face mask or a normal face, so that abnormal faces are filtered out; finally, the sequence of consecutive images that passed the single-image filtering is classified to determine whether it shows a real person.
  • In this way, a plurality of first face images of the user are continuously collected; after it is determined that each first face image is a live image, it is identified whether there is micro-motion in the plurality of consecutive first face images; if there is micro-motion, it is confirmed that the user's face passes the liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents the behavior of deceiving the face recognition system with a face photo or a face video, realizes the function of distinguishing a real person from a dummy, and ensures information security.
  • the embodiment of the present application further provides an electronic device.
  • the electronic device includes:
  • the storage medium stores instructions for performing the following steps:
  • after determining that each first face image is a live image, identifying whether there is micro-motion in the plurality of consecutive first face images;
  • Before continuously collecting the plurality of first face images, the method further includes:
  • determining that the user meets the distance requirement includes:
  • determining the user distance according to the face area includes:
  • extracting the distance between preset facial parts from the face area, and determining the user distance according to the ratio of that distance to the width and height of the second face image.
  • the method further includes:
  • the user is instructed to move to meet the distance requirement.
  • the method further includes:
  • If any one of the first face images is determined to be a live image, that first face image is stored in the image sequence; if any one of the first face images is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user's face fails the liveness detection.
  • The non-live images include: photos, videos, and face masks.
  • The micro-motion includes slight changes of facial organs, slight changes of facial muscles, and slight movements of the face.
  • the method further includes:
  • Otherwise, the process is terminated, the first face images in the image sequence are cleared, and the user's face fails the liveness detection.
  • In this way, a plurality of first face images of the user are continuously collected; after it is determined that each first face image is a live image, it is identified whether there is micro-motion in the plurality of consecutive first face images; if there is micro-motion, it is confirmed that the user's face passes the liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents the behavior of deceiving the face recognition system with a face photo or a face video, realizes the function of distinguishing a real person from a dummy, and ensures information security.
  • An embodiment of the present application further provides a computer program product for use in conjunction with an electronic device including a display; the computer program product comprises a computer readable storage medium and a computer program mechanism embedded therein, and the computer program mechanism includes instructions for performing the following steps:
  • after determining that each first face image is a live image, identifying whether there is micro-motion in the plurality of consecutive first face images;
  • Before continuously collecting the plurality of first face images, the method further includes:
  • determining that the user meets the distance requirement includes:
  • determining the user distance according to the face area includes:
  • extracting the distance between preset facial parts from the face area, and determining the user distance according to the ratio of that distance to the width and height of the second face image.
  • the method further includes:
  • the user is instructed to move to meet the distance requirement.
  • the method further includes:
  • If any one of the first face images is determined to be a live image, that first face image is stored in the image sequence; if any one of the first face images is determined not to be a live image, the process is terminated, the image sequence is cleared, and the user's face fails the liveness detection.
  • The non-live images include: photos, videos, and face masks.
  • The micro-motion includes slight changes of facial organs, slight changes of facial muscles, and slight movements of the face.
  • the method further includes:
  • Otherwise, the process is terminated, the first face images in the image sequence are cleared, and the user's face fails the liveness detection.
  • In this way, a plurality of first face images of the user are continuously collected; after it is determined that each first face image is a live image, it is identified whether there is micro-motion in the plurality of consecutive first face images; if there is micro-motion, it is confirmed that the user's face passes the liveness detection. Performing liveness detection on the user through both live-image recognition and micro-motion recognition effectively improves the accuracy of face liveness detection, prevents the behavior of deceiving the face recognition system with a face photo or a face video, realizes the function of distinguishing a real person from a dummy, and ensures information security.
  • The embodiments of the present application can be provided as a method, a system, or a computer program product.
  • Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware.
  • Moreover, the application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • These computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising an instruction apparatus,
  • where the instruction apparatus implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
  • and the instructions executed on the computer or other programmable device thereby provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

The invention relates to a cloud-based human face liveness detection method, an electronic device and a program product, applied to the technical field of human face detection. The method comprises: consecutively collecting multiple first face images of a user; after determining that each of the first face images is a live image, recognizing whether there is a micro-action in the multiple consecutive first face images; and, if there is a micro-action, confirming that the liveness detection of the user's face has passed. According to the present invention, multiple first face images of a user are collected consecutively on the basis of a cloud end; whether or not there is a micro-action in the multiple consecutive first face images is recognized after determining that each of the first face images is a live image; and if there is a micro-action, it is confirmed that the liveness detection of the user's face has passed. Liveness detection is performed on the user by means of live-image recognition and micro-action recognition, which effectively improves the accuracy of face liveness detection, prevents the behavior of using a photo or a video of a human face to deceive a face recognition system, realizes the function of distinguishing a real person from a dummy, and guarantees information security.
PCT/CN2017/119543 2017-12-28 2017-12-28 Cloud-based human face liveness detection method, electronic device and program product WO2019127262A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201780002701.0A CN108124486A (zh) 2017-12-28 2017-12-28 Cloud-based face liveness detection method, electronic device and program product
PCT/CN2017/119543 WO2019127262A1 (fr) 2017-12-28 2017-12-28 Cloud-based human face liveness detection method, electronic device and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/119543 WO2019127262A1 (fr) 2017-12-28 2017-12-28 Cloud-based human face liveness detection method, electronic device and program product

Publications (1)

Publication Number Publication Date
WO2019127262A1 true WO2019127262A1 (fr) 2019-07-04

Family

ID=62233594

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119543 WO2019127262A1 (fr) 2017-12-28 2017-12-28 Cloud-based human face liveness detection method, electronic device and program product

Country Status (2)

Country Link
CN (1) CN108124486A (fr)
WO (1) WO2019127262A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259757A (zh) * 2020-01-13 2020-06-09 支付宝实验室(新加坡)有限公司 一种基于图像的活体识别方法、装置及设备
CN111783617A (zh) * 2020-06-29 2020-10-16 中国工商银行股份有限公司 人脸识别数据处理方法及装置
CN112818918A (zh) * 2021-02-24 2021-05-18 浙江大华技术股份有限公司 一种活体检测方法、装置、电子设备及存储介质
CN114863515A (zh) * 2022-04-18 2022-08-05 厦门大学 基于微表情语义的人脸活体检测方法及装置
CN115035579A (zh) * 2022-06-22 2022-09-09 支付宝(杭州)信息技术有限公司 基于人脸交互动作的人机验证方法和系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108124486A (zh) * 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 基于云端的人脸活体检测方法、电子设备和程序产品
CN109255322B (zh) * 2018-09-03 2019-11-19 北京诚志重科海图科技有限公司 一种人脸活体检测方法及装置
CN109684927A (zh) * 2018-11-21 2019-04-26 北京蜂盒科技有限公司 活体检测方法、装置、计算机可读存储介质和电子设备
CN109684924B (zh) * 2018-11-21 2022-01-14 奥比中光科技集团股份有限公司 人脸活体检测方法及设备
CN109784175A (zh) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 基于微表情识别的异常行为人识别方法、设备和存储介质
CN109815944A (zh) * 2019-03-21 2019-05-28 娄奥林 一种针对人工智能对视频面部替换识别的防御方法
CN111931544B (zh) * 2019-05-13 2022-11-15 中国移动通信集团湖北有限公司 活体检测的方法、装置、计算设备及计算机存储介质
CN112997185A (zh) * 2019-09-06 2021-06-18 深圳市汇顶科技股份有限公司 人脸活体检测方法、芯片及电子设备
CN111507286B (zh) * 2020-04-22 2023-05-02 北京爱笔科技有限公司 一种假人检测方法及装置
CN112506204B (zh) * 2020-12-17 2022-12-30 深圳市普渡科技有限公司 机器人遇障处理方法、装置、设备和计算机可读存储介质
CN112990167B (zh) * 2021-05-19 2021-08-10 北京焦点新干线信息技术有限公司 图像处理方法及装置、存储介质及电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102662334A (zh) * 2012-04-18 2012-09-12 深圳市兆波电子技术有限公司 控制用户与电子设备屏幕之间距离的方法及其电子设备
CN104143078B (zh) * 2013-05-09 2016-08-24 腾讯科技(深圳)有限公司 活体人脸识别方法、装置和设备
CN104794464B (zh) * 2015-05-13 2019-06-07 上海依图网络科技有限公司 一种基于相对属性的活体检测方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090135188A1 (en) * 2007-11-26 2009-05-28 Tsinghua University Method and system of live detection based on physiological motion on human face
CN104361326A (zh) * 2014-11-18 2015-02-18 新开普电子股份有限公司 一种判别活体人脸的方法
CN106557726A (zh) * 2015-09-25 2017-04-05 北京市商汤科技开发有限公司 一种带静默式活体检测的人脸身份认证系统及其方法
CN105718925A (zh) * 2016-04-14 2016-06-29 苏州优化智能科技有限公司 基于近红外和面部微表情的真人活体身份验证终端设备
CN107016608A (zh) * 2017-03-30 2017-08-04 广东微模式软件股份有限公司 一种基于身份信息验证的远程开户方法及系统
CN108124486A (zh) * 2017-12-28 2018-06-05 深圳前海达闼云端智能科技有限公司 基于云端的人脸活体检测方法、电子设备和程序产品

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111259757A (zh) * 2020-01-13 2020-06-09 支付宝实验室(新加坡)有限公司 一种基于图像的活体识别方法、装置及设备
CN111259757B (zh) * 2020-01-13 2023-06-20 支付宝实验室(新加坡)有限公司 一种基于图像的活体识别方法、装置及设备
CN111783617A (zh) * 2020-06-29 2020-10-16 中国工商银行股份有限公司 人脸识别数据处理方法及装置
CN111783617B (zh) * 2020-06-29 2024-02-23 中国工商银行股份有限公司 人脸识别数据处理方法及装置
CN112818918A (zh) * 2021-02-24 2021-05-18 浙江大华技术股份有限公司 一种活体检测方法、装置、电子设备及存储介质
CN112818918B (zh) * 2021-02-24 2024-03-26 浙江大华技术股份有限公司 一种活体检测方法、装置、电子设备及存储介质
CN114863515A (zh) * 2022-04-18 2022-08-05 厦门大学 基于微表情语义的人脸活体检测方法及装置
CN115035579A (zh) * 2022-06-22 2022-09-09 支付宝(杭州)信息技术有限公司 基于人脸交互动作的人机验证方法和系统

Also Published As

Publication number Publication date
CN108124486A (zh) 2018-06-05

Similar Documents

Publication Publication Date Title
WO2019127262A1 (fr) Cloud-based human face liveness detection method, electronic device and program product
CN105612533B (zh) 活体检测方法、活体检测系统以及计算机程序产品
KR102596897B1 (ko) 모션벡터 및 특징벡터 기반 위조 얼굴 검출 방법 및 장치
JP7040952B2 (ja) 顔認証方法及び装置
US10275672B2 (en) Method and apparatus for authenticating liveness face, and computer program product thereof
WO2019127365A1 (fr) Procédé de détection de corps vivant de visage, dispositif électronique et produit de programme informatique
US10621454B2 (en) Living body detection method, living body detection system, and computer program product
CN106407914B (zh) 用于检测人脸的方法、装置和远程柜员机系统
CN104361276B (zh) 一种多模态生物特征身份认证方法及系统
CN105989264B (zh) 生物特征活体检测方法及系统
CN106557726B (zh) 一种带静默式活体检测的人脸身份认证系统及其方法
US9985963B2 (en) Method and system for authenticating liveness face, and computer program product thereof
US20180239955A1 (en) Liveness detection
CN109858375B (zh) 活体人脸检测方法、终端及计算机可读存储介质
CN107798279B (zh) 一种人脸活体检测方法及装置
US20240021015A1 (en) System and method for selecting images for facial recognition processing
CN106874830B (zh) 一种基于rgb-d相机和人脸识别的视障人士辅助方法
US20150169943A1 (en) System, method and apparatus for biometric liveness detection
WO2016172923A1 (fr) Procédé de détection de vidéo, système de détection de vidéo, et produit programme d'ordinateur
CN110612530A (zh) 用于选择脸部处理中使用的帧的方法
CN111626240B (zh) 一种人脸图像识别方法、装置、设备及可读存储介质
JP2008090452A (ja) 検出装置、方法およびプログラム
JP7268725B2 (ja) 画像処理装置、画像処理方法、及び画像処理プログラム
CN107480628B (zh) 一种人脸识别方法及装置
CN113642497A (zh) 用于脸部防欺骗的方法、服务器和设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936763

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 18.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17936763

Country of ref document: EP

Kind code of ref document: A1