CN111222432A - Face living body detection method, system, equipment and readable storage medium - Google Patents

Face living body detection method, system, equipment and readable storage medium

Info

Publication number
CN111222432A
CN111222432A
Authority
CN
China
Prior art keywords
face
living body
sobel
image
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911387733.4A
Other languages
Chinese (zh)
Inventor
黄泽斌
刘小扬
何学智
王心莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Newland Digital Technology Co ltd
Original Assignee
Newland Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Newland Digital Technology Co ltd filed Critical Newland Digital Technology Co ltd
Priority to CN201911387733.4A priority Critical patent/CN111222432A/en
Publication of CN111222432A publication Critical patent/CN111222432A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/259 Fusion by voting

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face living body detection method, which comprises: obtaining a face frame and the coordinates of face key points through face detection, and aligning the face through the key-point coordinates; associating the same face ID across consecutive video frames using a face tracking technology; cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map; inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of a dual-stream neural network respectively, to obtain a living body judgment result for each frame of image; and voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body. The method has a high safety factor, good user experience and strong robustness.

Description

Face living body detection method, system, equipment and readable storage medium
Technical Field
The invention relates to the technical field of image recognition, in particular to a human face living body detection method, a human face living body detection system, human face living body detection equipment and a readable storage medium.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information, and it is increasingly widely applied in industries such as security, finance and education. At present, mainstream face recognition technology can only distinguish different faces; it cannot tell whether the face presented for recognition belongs to a real, live person. When a user's face information is leaked, a criminal can use it to produce photos, videos or a three-dimensional face model to deceive the face recognition system, causing the user property and information losses. A face living body detection technology is therefore needed to judge whether the carrier of the face to be recognized is a real person, or whether a non-living attack means such as a photo, a video or a mask is being used.
Existing monocular liveness detection methods include: 1. extracting texture information of the face in a single-frame picture, or the moire pattern of a screen; 2. single-frame monocular face depth estimation through deep learning; 3. multi-frame monocular face depth estimation through deep learning. Texture features depend on ambient illumination and camera type, and moire depends on camera resolution, so these features have poor robustness and cannot handle liveness detection across many camera types and environments. The deep-learning depth estimation algorithm for a single-frame face image considers only the features of the face region and is strongly tied to image texture, so its feature robustness is still low. The deep-learning depth estimation algorithm for multi-frame face images fuses information across frames and improves the robustness of depth estimation to some extent, but it still considers only the features of the face region and introduces no more robust features.
Disclosure of Invention
The invention aims to provide a face living body detection method, system, device and readable storage medium with good robustness.
In order to solve the technical problems, the technical scheme of the invention is as follows:
in a first aspect, the present invention provides a face in-vivo detection method, including:
obtaining coordinates of a face frame and face key points through face detection, and aligning the face through the coordinates of the face key points;
using a face tracking technology to associate the same face ID in the continuous video frames;
cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a living body judgment result for each frame of image;
voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body.
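As an illustration of the color-space superposition step above, the sketch below stacks the three HSV channels and the three YCbCr channels into six values per pixel. This interpretation of "superposition" as channel stacking, the BT.601 full-range coefficients, and the helper name `rgb_to_hsv_ycbcr_stack` are assumptions for illustration, not taken from the patent; only the standard library is used.

```python
import colorsys

def rgb_to_hsv_ycbcr_stack(rgb_pixels):
    """rgb_pixels: list of (r, g, b) tuples with values in 0..255.
    Returns one 6-tuple (h, s, v, y, cb, cr) per pixel, i.e. the
    HSV image and YCbCr image 'stacked' channel-wise."""
    out = []
    for r, g, b in rgb_pixels:
        # HSV via the standard library (h, s, v each in 0..1)
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        # BT.601 full-range RGB -> YCbCr conversion
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        out.append((h, s, v, y, cb, cr))
    return out
```

A real pipeline would do this per image region with a vectorized library; the loop form only makes the channel bookkeeping explicit.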
Preferably, pictures in which the face frame area is larger than a preset proportion of the picture, and pictures in which the face width or height is smaller than a preset size, are deleted.
Preferably, before the step of extracting Sobel features from the region with the Sobel operator, the method further includes denoising the region with a Gaussian operator.
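The Gaussian denoising mentioned here amounts to convolving the region with a normalized Gaussian kernel. A minimal stdlib-only sketch of building such a kernel follows; the size and sigma defaults are illustrative, since the patent does not specify them.

```python
import math

def gaussian_kernel(size=5, sigma=1.0):
    """Return a size x size Gaussian kernel normalized to sum to 1."""
    c = size // 2  # center index
    k = [[math.exp(-((i - c) ** 2 + (j - c) ** 2) / (2 * sigma ** 2))
          for j in range(size)] for i in range(size)]
    total = sum(sum(row) for row in k)
    # Normalize so convolution preserves overall brightness
    return [[v / total for v in row] for row in k]
```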
Preferably, after the face alignment, the method further comprises the following steps: and filtering poor-quality face pictures.
Preferably, before the step of extracting Sobel features from the face region with the Sobel operator, the method further includes expanding the face frame in the original image outwards about its center, so as to enlarge the face frame.
Preferably, for each input image A, the kernels G_x and G_y are convolved with A respectively to obtain
A_Gx = G_x * A,  A_Gy = G_y * A
after which the value of each pixel of the output image A_G is:
A_G(i, j) = sqrt(A_Gx(i, j)^2 + A_Gy(i, j)^2)
where G_x denotes the convolution kernel in the x direction and G_y denotes the convolution kernel in the y direction.
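The convolution just described can be sketched in NumPy as follows: a naive reference implementation with edge-replicating padding, written as an explicit loop for clarity (real code would use an optimized filter).

```python
import numpy as np

# Standard 3x3 Sobel kernels (as in Figs. 2 and 3)
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
GY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def sobel_magnitude(a):
    """Per-pixel gradient magnitude sqrt(A_Gx^2 + A_Gy^2) of a 2-D array."""
    h, w = a.shape
    out = np.zeros((h, w))
    padded = np.pad(a.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            # Cross-correlation; the kernel flip of true convolution only
            # changes the sign of A_Gx / A_Gy, not the magnitude.
            gx = float(np.sum(win * GX))
            gy = float(np.sum(win * GY))
            out[i, j] = np.hypot(gx, gy)
    return out
```

On a uniform image the response is zero everywhere (both kernels sum to zero), while any intensity step produces a positive response along the edge.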
Preferably, the coordinates of the face frame and the face key points are detected by a multi-task cascaded convolutional neural network.
On the other hand, the invention also provides a face in-vivo detection system, which comprises:
the face detection module: obtaining coordinates of a face frame and face key points through face detection, and aligning the face through the coordinates of the face key points;
a face tracking module: using a face tracking technology to associate the same face ID in the continuous video frames;
a graph acquisition module: cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
a living body detection module: inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a living body judgment result for each frame of image;
a voting module: voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body.
In another aspect, the present invention provides a living human face detection apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the living human face detection method when executing the program.
In still another aspect, the present invention further provides a readable storage medium for human face live body detection, having a computer program stored thereon, wherein: the computer program is executed by a processor to implement the steps of the human face living body detection method.
By adopting the above technical scheme, the face is cropped from the original image, the RGB channels are converted into the HSV and YCbCr color spaces, and the converted HSV and YCbCr images are superimposed to obtain a superimposed map; Sobel features are extracted from the face region with a Sobel operator to obtain a Sobel feature map; the Sobel feature map and the superimposed map of a preset number of frames of the face ID are input into the two input channels of the dual-stream neural network respectively to obtain a living body judgment result for each frame of image, completing living body detection for the face ID. The living body detection is realized with a monocular camera, requires no additional equipment, and has low cost and a wide application range. Adding features beyond the face itself improves the robustness of the model. Deep learning on a large amount of data acquired from multiple cameras overcomes the problem that traditional monocular liveness methods do not transfer across cameras and scenes. The face screening and multi-frame voting strategies improve the stability of the algorithm in practical applications. In addition, the whole living body judgment process needs no user cooperation and is fast, giving the user a good experience.
Drawings
FIG. 1 is a flowchart illustrating steps of a living human face detection method according to an embodiment of the present invention;
FIG. 2 is a convolution kernel in the x direction of a Sobel operator according to an embodiment of the human face in-vivo detection method of the present invention;
fig. 3 is a convolution kernel in the y direction of the Sobel operator in the embodiment of the human face in-vivo detection method of the present invention.
Detailed Description
The following further describes embodiments of the present invention with reference to the drawings. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to fig. 1, the invention provides a human face in-vivo detection method, which comprises the following steps:
and obtaining coordinates of the face frame and the face key points through face detection, aligning the face through the coordinates of the face key points, and filtering poor-quality face pictures. (ii) a
Using a face tracking technology to associate the same face ID in the continuous video frames;
intercepting a human face from an original image, converting an RGB channel into HSV and YCbCr spaces, and overlapping the converted HSV and YCbCr images to obtain an overlay image; and denoising the region by using a Gaussian operator. And expanding the center of the face frame in the original image outwards to expand the face frame. Extracting Sobel features from the face region through a Sobel operator, and obtaining an obtained Sobel feature map;
respectively inputting a Sobel feature map and a superimposed map of a preset frame number of face ID from two input channels of a double-flow neural network to obtain a living body judgment result of each frame of image;
voting is performed on all living body judgment results of the face ID, and when the number of the living body judgment results is judged to be large, the object is determined to be a living body, and when the number of the attacking frames is judged to be large, the object is determined to be a non-living body.
Specifically, pictures in which the face frame area is larger than a preset proportion of the picture, and pictures in which the face width or height is smaller than a preset size, are deleted.
Specifically, for each input image A, G_x and G_y are convolved with A respectively to obtain A_Gx = G_x * A and A_Gy = G_y * A, after which the value of each pixel of the output image A_G is:
A_G(i, j) = sqrt(A_Gx(i, j)^2 + A_Gy(i, j)^2)
where G_x denotes the convolution kernel in the x direction and G_y denotes the convolution kernel in the y direction.
Specifically, the coordinates of the face frame and the face key points are detected by a multi-task cascaded convolutional neural network.
By adopting the above technical scheme, the face is cropped from the original image, the RGB channels are converted into the HSV and YCbCr color spaces, and the converted HSV and YCbCr images are superimposed to obtain a superimposed map; Sobel features are extracted from the face region with a Sobel operator to obtain a Sobel feature map; the Sobel feature map and the superimposed map of a preset number of frames of the face ID are input into the two input channels of the dual-stream neural network respectively to obtain a living body judgment result for each frame of image, completing living body detection for the face ID. The living body detection is realized with a monocular camera, requires no additional equipment, and has low cost and a wide application range. Adding features beyond the face itself improves the robustness of the model. Deep learning on a large amount of data acquired from multiple cameras overcomes the problem that traditional monocular liveness methods do not transfer across cameras and scenes. The face screening and multi-frame voting strategies improve the stability of the algorithm in practical applications. In addition, the whole living body judgment process needs no user cooperation and is fast, giving the user a good experience.
In another embodiment of the present invention, the step of in vivo detection is:
s1 face frame detection
The invention uses MTCNN for face detection to obtain the coordinates of the face frame and 5 face key points. The face frame coordinates are used to filter out faces whose frame area is larger than 1/3 of the picture or whose width or height is smaller than 200 pixels, and the 5 key points are used to align the face to a fixed template, yielding an aligned face image.
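The S1 filtering rule can be sketched as below, using the thresholds stated in the text (1/3 of the picture area, 200 pixels); the function name and the (x1, y1, x2, y2) box convention are illustrative assumptions.

```python
def keep_face(face_box, img_w, img_h, max_area_ratio=1/3, min_side=200):
    """Filtering rule from step S1: drop faces whose box covers more than
    max_area_ratio of the frame, or whose width/height is below min_side."""
    x1, y1, x2, y2 = face_box
    w, h = x2 - x1, y2 - y1
    if w < min_side or h < min_side:
        return False  # face too small for reliable liveness features
    if w * h > max_area_ratio * img_w * img_h:
        return False  # face frame covers too much of the picture
    return True
```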
S2 face image optimization
In this step, a face screening algorithm filters out faces captured under extreme conditions (overexposure, excessive face angle, etc.), and a face tracking technology associates the same face ID in consecutive video frames, ensuring that each distinct face ID in a video undergoes living body judgment only on a certain number of optimal frames.
S3 Living body detection method
S31: for the face detected in S1, after the face passing S2 is preferred, the face is cut out from the original image, converted into HSV and YCbCr spaces from RGB channels, and the HSV and YCbCr images after conversion are superimposed to be used as the first input of the dual-stream network.
S32: and (4) expanding the face frame outwards to 1.5 times of the original face frame center detected in the S1, denoising the region by using a Gaussian operator, and extracting Sobel features from the region by using a Sobel operator to obtain a feature map. The convolution kernel of the Sobel operator is shown in fig. 2 and fig. 3.
where G_x denotes the convolution kernel in the x direction and G_y denotes the convolution kernel in the y direction. For each input image A, G_x and G_y are convolved with A respectively to obtain A_Gx and A_Gy; the value of each pixel of the output image A_G is then:
A_G(i, j) = sqrt(A_Gx(i, j)^2 + A_Gy(i, j)^2)
The Sobel feature map is taken as the second input to the dual-stream network.
S33: inputting a plurality of pictures (5 pictures in the invention) meeting the requirements of the same id preferably selected in the step S2 into a deep learning network which is well learned in the step S32, performing multi-frame voting on a judgment result, and determining that the object is a living body when the number of frames of the living body is large; when the number of frames of the attack is determined to be large, the object is determined to be a non-living body.
In the embodiment of the invention, a ResNet (residual network) serves as the backbone of the two input channels of the deep learning network used for per-picture living body judgment. After features are extracted from the two input branches, an SE (squeeze-and-excitation) module selectively fuses the features of the two branches, and the fused features pass through several convolution layers to produce the living body judgment result. The objective function of the deep learning network is the focal loss.
It should be noted that ResNet (residual network) is a convolutional neural network proposed by four researchers at Microsoft Research, and it won the image classification and object detection tasks of the 2015 ImageNet Large Scale Visual Recognition Challenge. The residual network is easy to optimize and can improve accuracy by adding considerable depth. Its residual blocks use skip connections, which alleviate the vanishing-gradient problem caused by increasing depth in deep neural networks.
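The SE-style fusion of the two branches can be illustrated with a toy NumPy version of squeeze-and-excitation gating. The shapes and the two-layer bottleneck are the standard SE construction; the random weights are stand-ins for illustration, not the patent's trained network.

```python
import numpy as np

def se_fuse(feat_a, feat_b, w1, w2):
    """Toy squeeze-and-excitation fusion of two (C, H, W) feature maps:
    concatenate on channels, global-average-pool ('squeeze'), pass through
    a two-layer bottleneck with a sigmoid gate ('excitation'), and reweight
    the concatenated channels."""
    x = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W)
    squeeze = x.mean(axis=(1, 2))                  # (2C,) channel statistics
    hidden = np.maximum(0.0, w1 @ squeeze)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid gate, (2C,)
    return x * gate[:, None, None]                 # channel reweighting

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))   # branch 1 features (C=8)
b = rng.standard_normal((8, 4, 4))   # branch 2 features
w1 = rng.standard_normal((4, 16))    # bottleneck: 2C=16 -> 4
w2 = rng.standard_normal((16, 4))    # expand: 4 -> 2C=16
fused = se_fuse(a, b, w1, w2)
```

Because every gate value lies in (0, 1), the fusion can only attenuate channels relative to the concatenated input, which is what lets the network emphasize one branch over the other per channel.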
On the other hand, the invention also provides a face in-vivo detection system, which comprises:
the face detection module: obtaining coordinates of a face frame and face key points through face detection, and aligning the face through the coordinates of the face key points;
a face tracking module: using a face tracking technology to associate the same face ID in the continuous video frames;
a graph acquisition module: cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
a living body detection module: inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a living body judgment result for each frame of image;
a voting module: voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body.
In another aspect, the present invention provides a living human face detection apparatus, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the living human face detection method when executing the computer program.
In still another aspect, the present invention further provides a readable storage medium for human face live body detection, having a computer program stored thereon, wherein: the computer program is executed by a processor to implement the steps of the human face living body detection method.
Through the face detection and image screening modules, the system filters out poor-quality faces such as occluded faces and faces with an excessive head pose angle; it extracts Sobel features in a certain range around the face and fuses them with the images of the face region as network input; it learns with an improved dual-stream ResNet; and, trained on a large amount of high-quality data, it obtains highly robust features that can effectively resist common attack means. The whole living body judgment process needs no user cooperation, is fast, brings the user a better experience, and is highly robust.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made to these embodiments without departing from the principles and spirit of the invention, and such changes still fall within the scope of protection of the invention.

Claims (10)

1. A human face living body detection method is characterized by comprising the following steps:
detecting key points and frames of the human face, and aligning the human face;
using a face tracking technology to associate the same face ID in the continuous video frames;
cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a living body judgment result for each frame of image;
voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body.
2. The face liveness detection method according to claim 1, characterized in that: before the step of extracting the Sobel features of the region through the Sobel operator, denoising the region by using a Gaussian operator.
3. The face liveness detection method according to claim 1, characterized in that: before the step of extracting Sobel features from the face region with the Sobel operator, the method further comprises expanding the face frame in the original image outwards about its center, so as to enlarge the face frame.
4. The face liveness detection method according to claim 1, characterized in that:
for each input image A, G_x and G_y are convolved with A respectively to obtain A_Gx = G_x * A and A_Gy = G_y * A, after which the value of each pixel of the output image A_G is:
A_G(i, j) = sqrt(A_Gx(i, j)^2 + A_Gy(i, j)^2)
where G_x denotes the convolution kernel in the x direction and G_y denotes the convolution kernel in the y direction.
5. The face liveness detection method according to any one of claims 1 to 3, characterized in that: the coordinates of the face frame and the face key points are detected by a multi-task cascaded convolutional neural network.
6. The face liveness detection method according to any one of claims 1 to 3, characterized in that: pictures in which the face frame area is larger than a preset proportion of the picture, and pictures in which the face width or height is smaller than a preset size, are deleted.
7. The face liveness detection method according to any one of claims 1 to 3, characterized in that: after the face alignment, the method further comprises filtering out poor-quality face pictures.
8. A face liveness detection system, comprising:
the face detection module: detecting key points and frames of the human face, and aligning the human face;
a face tracking module: using a face tracking technology to associate the same face ID in the continuous video frames;
a graph acquisition module: cropping the face from the original image, converting the RGB channels into the HSV and YCbCr color spaces, and superimposing the converted HSV and YCbCr images to obtain a superimposed map; extracting Sobel features from the face region with a Sobel operator to obtain a Sobel feature map;
a living body detection module: inputting the Sobel feature map and the superimposed map of a preset number of frames of the face ID into the two input channels of the dual-stream neural network respectively, to obtain a living body judgment result for each frame of image;
a voting module: voting over all living body judgment results of the face ID: when frames judged to be living are in the majority, the object is determined to be a living body; when attack frames are in the majority, the object is determined to be a non-living body.
9. A face liveness detection device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that: the processor, when executing the program, performs the steps of the face liveness detection method of any one of claims 1-7.
10. A readable storage medium for live human face detection, having a computer program stored thereon, wherein: the computer program is executed by a processor to perform the steps of implementing the face liveness detection method of any one of claims 1 to 7.
CN201911387733.4A 2019-12-30 2019-12-30 Face living body detection method, system, equipment and readable storage medium Pending CN111222432A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911387733.4A CN111222432A (en) 2019-12-30 2019-12-30 Face living body detection method, system, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911387733.4A CN111222432A (en) 2019-12-30 2019-12-30 Face living body detection method, system, equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN111222432A true CN111222432A (en) 2020-06-02

Family

ID=70830870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911387733.4A Pending CN111222432A (en) 2019-12-30 2019-12-30 Face living body detection method, system, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111222432A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113011385A (en) * 2021-04-13 2021-06-22 深圳市赛为智能股份有限公司 Face silence living body detection method and device, computer equipment and storage medium
CN113111750A (en) * 2021-03-31 2021-07-13 智慧眼科技股份有限公司 Face living body detection method and device, computer equipment and storage medium
CN113609944A (en) * 2021-07-27 2021-11-05 东南大学 Silent in-vivo detection method
CN114639129A (en) * 2020-11-30 2022-06-17 北京君正集成电路股份有限公司 Paper medium living body detection method for access control system
CN113011385B (en) * 2021-04-13 2024-07-05 深圳市赛为智能股份有限公司 Face silence living body detection method, face silence living body detection device, computer equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016033184A1 (en) * 2014-08-26 2016-03-03 Hoyos Labs Ip Ltd. System and method for determining liveness
CN108921041A (en) * 2018-06-06 2018-11-30 深圳神目信息技术有限公司 Living body detection method and device based on RGB and IR binocular cameras
CN109858471A (en) * 2019-04-03 2019-06-07 深圳市华付信息技术有限公司 Living body detection method and device based on image quality, and computer equipment
CN109902667A (en) * 2019-04-02 2019-06-18 电子科技大学 Human face living body detection method based on optical-flow-guided feature blocks and convolutional GRU
CN110188715A (en) * 2019-06-03 2019-08-30 广州二元科技有限公司 Video human face living body detection method using multi-frame detection voting
CN110598580A (en) * 2019-08-25 2019-12-20 南京理工大学 Human face living body detection method


Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109376681B (en) Multi-person posture estimation method and system
Li et al. Single image dehazing using the change of detail prior
CN103679636B (en) Based on point, the fast image splicing method of line double characteristic
Zhang et al. Detecting and extracting the photo composites using planar homography and graph cut
CN104205826B (en) For rebuilding equipment and the method for density three-dimensional image
JP4597391B2 (en) Facial region detection apparatus and method, and computer-readable recording medium
CN111222432A (en) Face living body detection method, system, equipment and readable storage medium
Bonny et al. Feature-based image stitching algorithms
CN109190522B (en) Living body detection method based on infrared camera
JP2015502058A (en) Multispectral imaging system
Mistry et al. Image stitching using Harris feature detection
CN112287868B (en) Human body action recognition method and device
CN112287867B (en) Multi-camera human body action recognition method and device
US20210056668A1 (en) Image inpainting with geometric and photometric transformations
CN111209820A (en) Face living body detection method, system, equipment and readable storage medium
Kanter Color Crack: Identifying Cracks in Glass
Wu et al. Single-shot face anti-spoofing for dual pixel camera
WO2015069063A1 (en) Method and system for creating a camera refocus effect
JP5662890B2 (en) Image processing method, image processing apparatus, image processing program, and radiation dose estimation method by image processing
KR20100121817A (en) Method for tracking region of eye
JP4898655B2 (en) Imaging apparatus and image composition program
Jung et al. Multispectral fusion of rgb and nir images using weighted least squares and convolution neural networks
KR20160000533A (en) The method of multi detection and tracking with local feature point for providing information of an object in augmented reality
JP4803148B2 (en) Binocular position detection method and detection apparatus for human face image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination