CN109376608B - Human face living body detection method - Google Patents

Human face living body detection method

Info

Publication number
CN109376608B
CN109376608B (application CN201811127323.1A)
Authority
CN
China
Prior art keywords
frame
detection
point
face
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811127323.1A
Other languages
Chinese (zh)
Other versions
CN109376608A (en)
Inventor
章东平
葛俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Jiliang University
Original Assignee
China Jiliang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Jiliang University
Priority to CN201811127323.1A
Publication of CN109376608A
Application granted
Publication of CN109376608B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands › G06V 40/16 Human faces, e.g. facial parts, sketches or expressions › G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N 3/00 Computing arrangements based on biological models › G06N 3/02 Neural networks › G06N 3/04 Architecture, e.g. interconnection topology › G06N 3/045 Combinations of networks
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS › G06N 3/00 Computing arrangements based on biological models › G06N 3/02 Neural networks › G06N 3/08 Learning methods
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands › G06V 40/16 Human faces, e.g. facial parts, sketches or expressions › G06V 40/172 Classification, e.g. identification
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data › G06V 40/40 Spoof detection, e.g. liveness detection › G06V 40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face liveness detection method, aiming to improve the safety and reliability of face recognition through liveness detection. The method comprises the steps of: (1) performing face detection with the MTCNN method and extracting 13 key points; (2) judging liveness from eye blink actions using the extracted key points; (3) judging liveness by detecting photo borders, video borders and mobile phone borders; (4) combining the blink judgment with the border detection to make the final face liveness decision. The deep learning approach adopted by the invention effectively resists photo and video attacks and greatly improves the accuracy and robustness of liveness detection.

Description

Human face living body detection method
Technical Field
The invention belongs to the field of image processing, and particularly relates to a face liveness detection method.
Background
With the arrival of the big data era and the development of deep learning, face recognition technology has made significant breakthroughs. It is now widely applied in public security, banking and finance, criminal investigation, social media and other fields, and has recently been used to unlock mobile phone screens. However, various spoofing means exploit the weaknesses of face recognition, chiefly face photos and video replays. Improving the safety and reliability of face recognition by using liveness detection to defeat such spoofing is a problem in urgent need of a solution.
Disclosure of Invention
The invention aims to improve the reliability of the liveness detection used in face recognition systems, and provides a face liveness detection method.
The technical scheme adopted by the invention is as follows:
A face liveness detection method comprises the following steps:
Step 1: face detection and key point localization:
Using the open-source MTCNN target detection algorithm (a deep-learning-based detector), train a face detection model that also predicts 13 key point positions; the trained model detects the face box position $(x_f, y_f, w_f, h_f)$ and outputs the 13 key point positions $(x_1, y_1), (x_2, y_2), \ldots, (x_{13}, y_{13})$;
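The patent fixes only the data contract of this step: one face box plus 13 key points per detection. As a minimal illustration, a wrapper might expose it as follows (the MTCNNDetector class and its detect signature are hypothetical, not from the patent):

```python
from typing import List, Optional, Tuple

Box = Tuple[float, float, float, float]   # (x_f, y_f, w_f, h_f)
Point = Tuple[float, float]               # one key point (x_k, y_k)

class MTCNNDetector:
    """Hypothetical wrapper around an MTCNN model trained, per step 1,
    to regress 13 key points instead of the usual 5."""

    def __init__(self, model):
        self.model = model  # the trained cascade from step 1

    def detect(self, frame) -> Optional[Tuple[Box, List[Point]]]:
        # Run the P-Net/R-Net/O-Net cascade on the frame and return the
        # best face box with its 13 key points, or None if no face.
        return self.model(frame)
```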
Step 2: liveness detection based on blink action:
Of the 13 key points obtained in step 1, point 1 is the left corner of the left eye, point 2 the upper eyelid of the left eye, point 3 the right corner of the left eye, point 4 the lower eyelid of the left eye, point 5 the left corner of the right eye, point 6 the upper eyelid of the right eye, point 7 the right corner of the right eye, and point 8 the lower eyelid of the right eye. For the $i$-th frame of the video, compute the eye height $h_i$ and width $w_i$ by Euclidean distance and record their ratio $u_i$; as the eyes open and close, $u_i$ fluctuates above and below a preset threshold $T$. Over a time period $t$, count the total number of eye openings $q$ and eye closings $k$; when $|q + k| \ge 2$ and $|q - k| \le a$ (for example, $a = 3$), a blink action has occurred and the detected face is judged to be live.
Condition for a blink action: at frame $i+m$ the ratio satisfies $u_{i+m} > T > u_{i+m+1}$ and $r$ frames later $u_{i+m+r} < T < u_{i+m+r+1}$, or at frame $i+r$ the ratio satisfies $u_{i+r} < T < u_{i+r+1}$ and $m$ frames later $u_{i+r+m} > T > u_{i+r+m+1}$, where $m$ is the number of frames in the open-to-closed interval and $r$ the number of frames in the closed-to-open interval;
$$u_i = \frac{h_i}{w_i} \tag{1}$$
where $w_i$ and $h_i$ are computed as
$$w_i = \sqrt{(x_{j+2} - x_j)^2 + (y_{j+2} - y_j)^2}, \qquad j = 1 \text{ or } 5 \tag{2}$$
$$h_i = \sqrt{(x_{j+3} - x_{j+1})^2 + (y_{j+3} - y_{j+1})^2}, \qquad j = 1 \text{ or } 5 \tag{3}$$
The threshold $T$ is obtained as follows:
First, measure the eye width and height in $n$ pictures of the data set and compute the mean width $\bar{w}$ and mean height $\bar{h}$:
$$\bar{w} = \frac{1}{n} \sum_{i=1}^{n} w_i \tag{4}$$
$$\bar{h} = \frac{1}{n} \sum_{i=1}^{n} h_i \tag{5}$$
then compute the threshold $T$ from $\bar{w}$ and $\bar{h}$ (equation (6); its formula appears only as an image in the source).
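To make the blink logic of step 2 concrete, the sketch below evaluates equations (1)–(3) on the eye key points and counts threshold crossings of the ratio sequence $u_i$, applying the $|q+k| \ge 2$, $|q-k| \le a$ test; the threshold $T$ from equations (4)–(6) is supplied by the caller. A minimal sketch assuming 0-based key point indices (patent points 1 and 5 become indices 0 and 4); the function names are illustrative:

```python
import math

def eye_ratio(pts, j=0):
    """u = h / w for one eye (eqs. (1)-(3)); pts is the 13-key-point list,
    j = 0 for the left eye or j = 4 for the right eye (0-based)."""
    w = math.dist(pts[j], pts[j + 2])      # corner-to-corner width, eq. (2)
    h = math.dist(pts[j + 1], pts[j + 3])  # eyelid-to-eyelid height, eq. (3)
    return h / w                           # eq. (1)

def blink_detected(u_seq, T, a=3):
    """Count eye closings k and openings q as downward/upward crossings of
    the threshold T, then apply |q + k| >= 2 and |q - k| <= a."""
    q = k = 0
    for prev, curr in zip(u_seq, u_seq[1:]):
        if prev > T > curr:                # ratio drops through T: eye closes
            k += 1
        elif prev < T < curr:              # ratio rises through T: eye opens
            q += 1
    return abs(q + k) >= 2 and abs(q - k) <= a
```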
Step 3: liveness detection based on border detection:
Train a border-detection convolutional neural network with deep learning to detect photo borders, video borders and mobile phone borders. Let the face box detected in the $i$-th video frame be $(x_{fi}, y_{fi}, w_{fi}, h_{fi})$; if the border detection model detects a border, let its position be $(x_{bi}, y_{bi}, w_{bi}, h_{bi})$. If the face box is contained in the detected border, the face is judged non-live; otherwise it is judged live;
The face box is contained in the detected border when:
$$x_{fi} > x_{bi}; \quad y_{fi} > y_{bi} \tag{7}$$
$$x_{fi} + w_{fi} < x_{bi} + w_{bi} \tag{8}$$
$$y_{fi} + h_{fi} < y_{bi} + h_{bi} \tag{9}$$
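Conditions (7)–(9) amount to a strict containment test: the face box's top-left corner must lie inside the border box, and its right and bottom edges must not reach the border's. A direct transcription (variable names illustrative):

```python
def face_inside_border(face, border):
    """True when the face box (x_f, y_f, w_f, h_f) lies strictly inside
    the detected border box (x_b, y_b, w_b, h_b), i.e. conditions (7)-(9)."""
    xf, yf, wf, hf = face
    xb, yb, wb, hb = border
    return (xf > xb and yf > yb            # (7): top-left corner inside
            and xf + wf < xb + wb          # (8): right edge inside
            and yf + hf < yb + hb)         # (9): bottom edge inside
```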
(3a) Training and test data preparation: with a camera, photograph printed pictures, record mobile phone videos being played, and record videos containing such pictures, for a total of $N$ pictures; annotate the border position in each, drawing the annotation box slightly larger than the border itself;
(3b) Data enhancement: augment the $N$ pictures by translating and rotating the images (0°, 90°, 180°, 270°), expanding the data to 16 times its original size (see the augmentation sketch after step (3f) below); split the data set into training, validation and test sets in an 8:1:1 ratio;
(3c) Image preprocessing: scale each image to M × M, convert it to grayscale, and store the processed images as an HDF5 (Hierarchical Data Format) file;
(3d) Network structure: the convolutional neural network comprises 3 convolutional layers, 1 BN layer and 2 fully connected layers, with 4 output nodes; a loss function is selected (its formula appears only as an image in the source; see the network sketch after step (3f) below);
(3e) Training: feed the processed data into the convolutional neural network, set the network input batch size to 64, randomly initialize the connection weights $W$ and biases $b$ of each layer, and set a learning rate $\eta$;
(3f) Testing: input the test set into the trained border detection model to detect whether a border exists; if it does, output the border position $(x_b, y_b, w_b, h_b)$;
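The 16x factor in step (3b) is consistent with combining the four rotations with four translated copies (4 x 4 = 16). A Pillow sketch under that assumption; the translation offsets are illustrative, since the patent does not give them:

```python
from PIL import Image

ANGLES = (0, 90, 180, 270)                     # rotations named in (3b)
SHIFTS = ((0, 0), (10, 0), (0, 10), (10, 10))  # assumed offsets in pixels

def translate(img: Image.Image, dx: int, dy: int) -> Image.Image:
    """Shift the image content by (dx, dy), padding the gap with black."""
    out = Image.new(img.mode, img.size)
    out.paste(img, (dx, dy))
    return out

def augment(img: Image.Image):
    """Yield 16 variants of one picture: 4 rotations x 4 translations.
    Border annotations must be rotated/shifted consistently (omitted here)."""
    for angle in ANGLES:
        rotated = img.rotate(angle, expand=True)
        for dx, dy in SHIFTS:
            yield translate(rotated, dx, dy)
```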
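The patent constrains the network of step (3d) only to 3 convolutional layers, 1 BN layer, 2 fully connected layers and 4 regression outputs, and leaves the loss formula as an image. A PyTorch sketch under those constraints; the channel counts, input size M = 64, MSE loss and learning rate are assumptions:

```python
import torch
import torch.nn as nn

class BorderNet(nn.Module):
    """3 conv layers, 1 BN layer, 2 FC layers, 4 outputs (x_b, y_b, w_b, h_b)."""

    def __init__(self, m: int = 64):            # m = assumed input size M
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # conv 1
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # conv 2
            nn.Conv2d(32, 64, 3, stride=2, padding=1),             # conv 3
            nn.BatchNorm2d(64), nn.ReLU(),                         # the BN layer
        )
        flat = 64 * (m // 8) * (m // 8)          # three stride-2 convs: m/8
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 128), nn.ReLU(),     # FC 1
            nn.Linear(128, 4),                   # FC 2: border box regression
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = BorderNet()
loss_fn = nn.MSELoss()                           # assumed regression loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # eta assumed
```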
Step 4: liveness detection combining border detection and blink judgment:
The methods of step 2 and step 3 are combined. First, run face detection on the video captured by the camera. If a face is detected, start border detection: if a border is detected and it contains the detected face box, the subject is rejected as non-live. If no border is detected, or the detected border does not contain the face, check whether the eyes in the video blink: if they do, the subject is judged live; otherwise non-live.
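The decision flow of step 4 can be written as one short routine. A sketch reusing the helpers from the earlier examples (eye_ratio, blink_detected, face_inside_border), with the two detectors passed in as placeholder callables:

```python
def is_live(frames, detect_face, detect_border, T, a=3):
    """Combined liveness decision of step 4 over a captured video clip.
    detect_face(frame) -> (face_box, key_points) or None;
    detect_border(frame) -> border_box or None."""
    ratios = []
    for frame in frames:
        det = detect_face(frame)
        if det is None:
            continue                           # no face in this frame
        face_box, pts = det
        border = detect_border(frame)
        if border is not None and face_inside_border(face_box, border):
            return False                       # face shown inside a border: spoof
        ratios.append(eye_ratio(pts, 0))       # left-eye ratio u_i (step 2)
    # no enclosing border found: fall back to the blink test of step 2
    return blink_detected(ratios, T, a)
```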
Drawings
The following describes an embodiment of the invention in detail with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of the eye blink judgment in the method of the present invention;
FIG. 2 is a network diagram of the photo, video and mobile phone border detection in the face liveness detection method of the present invention;
FIG. 3 is a flow chart of the liveness detection method of the present invention.
Detailed Description
The invention is implemented according to steps 1 to 4 exactly as set out above in the Disclosure of Invention, with reference to the accompanying drawings: FIG. 1 illustrates the blink judgment of step 2, FIG. 2 shows the border detection network of step 3, and FIG. 3 shows the overall detection flow of step 4.

Claims (3)

1. A face liveness detection method, characterized by comprising the following steps:
Step 1: face detection and key point localization;
Step 2: liveness detection based on blink action;
Step 3: liveness detection based on border detection;
Step 4: liveness detection combining border detection and blink judgment;
The step 1 is specifically as follows:
Using the open-source MTCNN target detection algorithm (a deep-learning-based detector), train a face detection model that also predicts 13 key point positions; the trained model detects the face box position $(x_f, y_f, w_f, h_f)$ and outputs the 13 key point positions $(x_1, y_1), (x_2, y_2), \ldots, (x_{13}, y_{13})$;
The step 2 is specifically as follows:
Of the 13 key points obtained in step 1, point 1 is the left corner of the left eye, point 2 the upper eyelid of the left eye, point 3 the right corner of the left eye, point 4 the lower eyelid of the left eye, point 5 the left corner of the right eye, point 6 the upper eyelid of the right eye, point 7 the right corner of the right eye, and point 8 the lower eyelid of the right eye; for the $i$-th frame of the video, compute the eye height $h_i$ and width $w_i$ by Euclidean distance and record their ratio $u_i$; as the eyes open and close, $u_i$ fluctuates above and below a preset threshold $T$; over a time period $t$, count the total number of eye openings $q$ and eye closings $k$; when $|q + k| \ge 2$ and $|q - k| \le a$ with $a \le 3$, a blink action has occurred and the detected face is live;
Condition for a blink action: at frame $i+m$ the ratio satisfies $u_{i+m} > T > u_{i+m+1}$ and $r$ frames later $u_{i+m+r} < T < u_{i+m+r+1}$, or at frame $i+r$ the ratio satisfies $u_{i+r} < T < u_{i+r+1}$ and $m$ frames later $u_{i+r+m} > T > u_{i+r+m+1}$, where $m$ is the number of frames in the open-to-closed interval and $r$ the number of frames in the closed-to-open interval;
$$u_i = \frac{h_i}{w_i} \tag{1}$$
where $w_i$ and $h_i$ are computed as
$$w_i = \sqrt{(x_{j+2} - x_j)^2 + (y_{j+2} - y_j)^2}, \qquad j = 1 \text{ or } 5 \tag{2}$$
$$h_i = \sqrt{(x_{j+3} - x_{j+1})^2 + (y_{j+3} - y_{j+1})^2}, \qquad j = 1 \text{ or } 5 \tag{3}$$
The threshold $T$ is obtained as follows:
First, measure the eye width and height in $n$ pictures of the data set and compute the mean width $\bar{w}$ and mean height $\bar{h}$:
$$\bar{w} = \frac{1}{n} \sum_{i=1}^{n} w_i \tag{4}$$
$$\bar{h} = \frac{1}{n} \sum_{i=1}^{n} h_i \tag{5}$$
then compute the threshold $T$ from $\bar{w}$ and $\bar{h}$ (equation (6); its formula appears only as an image in the source).
2. The face liveness detection method as recited in claim 1, characterized in that:
The step 3 is specifically as follows:
Train a border-detection convolutional neural network with deep learning to detect photo borders, video borders and mobile phone borders. Let the face box detected in the $i$-th video frame be $(x_{fi}, y_{fi}, w_{fi}, h_{fi})$; if the border detection model detects a border, let its position be $(x_{bi}, y_{bi}, w_{bi}, h_{bi})$. If the face box is contained in the detected border, the face is judged non-live; otherwise it is judged live;
The face box is contained in the detected border when:
$$x_{fi} > x_{bi}; \quad y_{fi} > y_{bi} \tag{7}$$
$$x_{fi} + w_{fi} < x_{bi} + w_{bi} \tag{8}$$
$$y_{fi} + h_{fi} < y_{bi} + h_{bi} \tag{9}$$
(3a) Training and test data preparation: with a camera, photograph printed pictures, record mobile phone videos being played, and record videos containing such pictures, for a total of $N$ pictures; annotate the border position in each, drawing the annotation box slightly larger than the border itself;
(3b) Data enhancement: augment the $N$ pictures by translation and by rotation at 0°, 90°, 180° and 270°, expanding the data to 16 times its original size; split the data set into training, validation and test sets in an 8:1:1 ratio;
(3c) Image preprocessing: scale each image to M × M, convert it to grayscale, and store the processed images as an HDF5 file;
(3d) Network structure: the convolutional neural network comprises 3 convolutional layers, 1 BN layer and 2 fully connected layers, with 4 output nodes; a loss function is selected (its formula appears only as an image in the source);
(3e) Training: feed the processed data into the convolutional neural network, set the network input batch size to 64, randomly initialize the connection weights $W$ and biases $b$ of each layer, and set a learning rate $\eta$;
(3f) Testing: input the test set into the trained border detection model to detect whether a border exists; if it does, output the border position $(x_b, y_b, w_b, h_b)$.
3. The face liveness detection method as recited in claim 2, characterized in that:
The step 4 is specifically as follows:
The methods of step 2 and step 3 are combined: first, run face detection on the video captured by the camera; if a face is detected, start border detection; if a border is detected and it contains the detected face box, the subject is rejected as non-live; if no border is detected, or the detected border does not contain the face, check whether the eyes in the video blink: if they do, the subject is judged live; otherwise non-live.
CN201811127323.1A — priority and filing date 2018-09-26 — Human face living body detection method — Active — granted as CN109376608B (en)

Priority Applications (1)

CN201811127323.1A (granted as CN109376608B) — priority date 2018-09-26 — filing date 2018-09-26 — Human face living body detection method


Publications (2)

Publication Number — Publication Date
CN109376608A (en) — 2019-02-22
CN109376608B (en) — 2021-04-27

Family

ID=65402104

Family Applications (1)

CN201811127323.1A — Human face living body detection method — priority/filing date 2018-09-26 — Active — granted as CN109376608B (en)

Country Status (1)

CN: CN109376608B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110069983A (en) * 2019-03-08 2019-07-30 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on display medium
CN110059624B (en) * 2019-04-18 2021-10-08 北京字节跳动网络技术有限公司 Method and apparatus for detecting living body
CN110399780B (en) * 2019-04-26 2023-09-29 努比亚技术有限公司 Face detection method and device and computer readable storage medium
CN110349310B (en) * 2019-07-03 2021-08-27 源创客控股集团有限公司 Customized reminding cloud platform service system for park enterprises
CN112560553A (en) * 2019-09-25 2021-03-26 北京中关村科金技术有限公司 Living body detection method, living body detection device, and storage medium
CN110717430A (en) * 2019-09-27 2020-01-21 聚时科技(上海)有限公司 Long object identification method and identification system based on target detection and RNN
CN112861586B (en) * 2019-11-27 2022-12-13 马上消费金融股份有限公司 Living body detection, image classification and model training method, device, equipment and medium
TWI731503B (en) 2019-12-10 2021-06-21 緯創資通股份有限公司 Live facial recognition system and method
CN111241949B (en) * 2020-01-03 2021-12-14 中科智云科技有限公司 Image recognition method and device, electronic equipment and readable storage medium
CN111428570A (en) * 2020-02-27 2020-07-17 深圳壹账通智能科技有限公司 Detection method and device for non-living human face, computer equipment and storage medium
CN111666835A (en) * 2020-05-20 2020-09-15 广东志远科技有限公司 Face living body detection method and device
CN111539389B (en) * 2020-06-22 2020-10-27 腾讯科技(深圳)有限公司 Face anti-counterfeiting recognition method, device, equipment and storage medium
CN112200086A (en) * 2020-10-10 2021-01-08 深圳市华付信息技术有限公司 Face living body detection method based on video
CN112560819A (en) * 2021-02-22 2021-03-26 北京远鉴信息技术有限公司 User identity verification method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9305225B2 (en) * 2013-10-14 2016-04-05 Daon Holdings Limited Methods and systems for determining user liveness
US20160140390A1 (en) * 2014-11-13 2016-05-19 Intel Corporation Liveness detection using progressive eyelid tracking
CN107239735A (en) * 2017-04-24 2017-10-10 复旦大学 A kind of biopsy method and system based on video analysis
CN107609463B (en) * 2017-07-20 2021-11-23 百度在线网络技术(北京)有限公司 Living body detection method, living body detection device, living body detection equipment and storage medium
CN107609494A (en) * 2017-08-31 2018-01-19 北京飞搜科技有限公司 A kind of human face in-vivo detection method and system based on silent formula
CN108140123A (en) * 2017-12-29 2018-06-08 深圳前海达闼云端智能科技有限公司 Face living body detection method, electronic device and computer program product



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant