CN111582197A - Living body based on near infrared and 3D camera shooting technology and face recognition system - Google Patents


Info

Publication number
CN111582197A
Authority
CN
China
Prior art keywords
face
living body
dimensional
infrared
camera
Prior art date
Legal status
Pending
Application number
CN202010395850.1A
Other languages
Chinese (zh)
Inventor
邓祖平
李思
国静
田婷婷
刘辉
罗帮才
陈亮
Current Assignee
Guizhou Planning & Design Institute Of Posts & Telecommunications Co ltd
Original Assignee
Guizhou Planning & Design Institute Of Posts & Telecommunications Co ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Planning & Design Institute Of Posts & Telecommunications Co ltd filed Critical Guizhou Planning & Design Institute Of Posts & Telecommunications Co ltd
Priority to CN202010395850.1A
Publication of CN111582197A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • G06V40/45Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a living body and face recognition system based on near-infrared and 3D camera technology, comprising a recognition device hardware module, a living body anti-spoofing module and a background three-dimensional face recognition module. The recognition device hardware module comprises a TOF camera, an RGB camera, an infrared emitter, an infrared camera, an ambient light sensor and a processing chip; the living body anti-spoofing module supports the three-dimensional face recognition functions of dedicated three-dimensional face recognition equipment and of mobile devices; and the background three-dimensional face recognition module adopts a three-dimensional depth analysis system. On the one hand, the living body and face recognition system of the invention does not depend on special hardware equipment, which reduces the cost of live face verification; on the other hand, it effectively prevents attacks by photos, videos, 3D face models, masks and the like, which improves the accuracy of live face verification, broadens the range of application, and further improves the safety and reliability of identity authentication.

Description

Living body based on near infrared and 3D camera shooting technology and face recognition system
Technical Field
The invention relates to the technical field of computer information services, and in particular to a living body and face recognition system based on near-infrared and 3D camera technology.
Background
Face recognition technology is maturing day by day and its commercial applications are becoming broader. However, a face is very easily copied by means of photos, videos and the like, so counterfeiting the face of a legitimate user is a major threat to the security of face recognition and authentication systems. In recent years, live face detection technology has made some progress, but the security and reliability of existing methods are not high in practical application.
Disclosure of Invention
The invention aims to provide a living body and face recognition system based on near-infrared and 3D camera technology.
In order to solve the above technical problems, the technical scheme provided by the invention is as follows: a living body and face recognition system based on near-infrared and 3D camera technology comprises a recognition device hardware module, a living body anti-spoofing module and a background three-dimensional face recognition module, and is characterized in that: the recognition device hardware module comprises a TOF camera, an RGB camera, an infrared emitter, an infrared camera, an ambient light sensor and a processing chip, wherein the TOF camera is used for collecting depth information, the RGB camera is used for collecting a first RGB image of the face, the infrared emitter is used for emitting infrared light towards the face, and the ambient light sensor is used for sensing a second light intensity of the ambient light; the processing chip is used for determining whether the depth information conforms to three-dimensional face features, determining, after the depth information is found to conform, whether the ambient light meets a preset condition, triggering the RGB camera when the ambient light meets the preset condition, and triggering the infrared emitter and the infrared camera when it does not; the living body anti-spoofing module supports the three-dimensional face recognition functions of dedicated three-dimensional face recognition equipment and of mobile devices: the three-dimensional face recognition equipment collects face information at the front end and confirms that the operating user is a real live face, while the living body anti-spoofing module realizes liveness detection in software and hardware through a binocular camera at the front end; the background three-dimensional face recognition module adopts a three-dimensional depth analysis system: it obtains the whole face image during the face liveness detection step, then scans and recognizes the identity card, and after obtaining the head portrait on the identity card, compares the face captured on site with the face on the identity card and judges whether they belong to the same person.
Compared with the prior art, the invention has the following advantages: the living body and face recognition system of the invention does not depend on special hardware equipment, which reduces the cost of live face verification; on the other hand, it effectively prevents attacks by photos, videos, 3D face models, masks and the like, which improves the accuracy of live face verification, broadens the range of application, and further improves the safety and reliability of identity authentication.
As an improvement, the living body anti-spoofing module comprises a 3D face liveness detection algorithm system based on LBP features, an infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix (GLCM) features, and an infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features.
As an improvement, the 3D face liveness detection algorithm system based on LBP features acquires a depth map of the live face through the infrared camera, then extracts LBP features from the face-region image in the face depth map, and extracts LBP features from the live face-region depth map and from an iPad attack face-region depth map respectively.
As an improvement, the infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix features likewise does not use the RGB face image directly: it first locates the face-region image in the RGB image, then computes the face depth map of that region to obtain its co-occurrence matrices in 4 directions, and finally calculates the contrast, energy, entropy and correlation of the co-occurrence matrices in the 4 directions to obtain feature values that characterize the texture of the face depth map.
As an improvement, because the infrared 3D face liveness detection method based on gray-level co-occurrence features has low feature dimensionality and its accuracy still has room for improvement, the infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features provides a multi-scale GLCM-feature liveness detection algorithm based on the face depth map: after the face RGB image and depth map are obtained, face detection is first performed on the RGB image to determine the position of the face region, and the face-region image is then extracted from the depth map to obtain the face-region depth map.
As an improvement, the background three-dimensional face recognition module adopts a three-dimensional depth analysis system, which exploits the pose and illumination invariance of three-dimensional face data: factors such as makeup, exposure and shadow strongly affect RGB images but have little effect on depth images, so three-dimensional face liveness detection is more robust.
Drawings
Fig. 1 is a basic block diagram of the living body and face recognition system based on near-infrared and 3D camera technology.
Fig. 2 compares a live face and a photo face in use, based on Fourier spectrum analysis.
Fig. 3 shows depth maps of a real face and of a picture-attack face from the three-dimensional depth analysis system.
Fig. 4 is a table comparing the reflectance of different materials and of skin.
Fig. 5 is a diagram of a linear support vector machine.
Fig. 6 is a logical block diagram of the device of the living body and face recognition system based on near-infrared and 3D camera technology.
Fig. 7 compares a live-face depth map and its LBP map.
Fig. 8 compares a live-face depth map and an attack-face depth map after histogram equalization.
Fig. 9 compares three sets of live-face depth maps and attack-face depth maps.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
In a specific embodiment of the invention, a living body and face recognition system based on near-infrared and 3D camera technology comprises a recognition device hardware module, a living body anti-spoofing module and a background three-dimensional face recognition module, and is characterized in that: the recognition device hardware module comprises a TOF camera, an RGB camera, an infrared emitter, an infrared camera, an ambient light sensor and a processing chip, wherein the TOF camera is used for collecting depth information, the RGB camera is used for collecting a first RGB image of the face, the infrared emitter is used for emitting infrared light towards the face, and the ambient light sensor is used for sensing a second light intensity of the ambient light; the processing chip is used for determining whether the depth information conforms to three-dimensional face features, determining, after the depth information is found to conform, whether the ambient light meets a preset condition, triggering the RGB camera when the ambient light meets the preset condition, and triggering the infrared emitter and the infrared camera when it does not; the living body anti-spoofing module supports the three-dimensional face recognition functions of dedicated three-dimensional face recognition equipment and of mobile devices: the three-dimensional face recognition equipment collects face information at the front end and confirms that the operating user is a real live face, while the living body anti-spoofing module realizes liveness detection in software and hardware through a binocular camera at the front end; the background three-dimensional face recognition module adopts a three-dimensional depth analysis system: it obtains the whole face image during the face liveness detection step, then scans and recognizes the identity card, and after obtaining the head portrait on the identity card, compares the face captured on site with the face on the identity card and judges whether they belong to the same person.
The living body anti-spoofing module comprises a 3D face liveness detection algorithm system based on LBP features, an infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix features, and an infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features.
The 3D face liveness detection algorithm system based on LBP features acquires a depth map of the live face through the infrared camera, then extracts LBP features from the face-region image in the face depth map, and extracts LBP features from the live face-region depth map and from an iPad attack face-region depth map respectively.
The infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix features does not use the RGB face image directly: it first locates the face-region image in the RGB image, then computes the face depth map of that region to obtain its co-occurrence matrices in 4 directions, and finally calculates the contrast, energy, entropy and correlation of the co-occurrence matrices in the 4 directions to obtain feature values that characterize the texture of the face depth map.
Because the infrared 3D face liveness detection method based on gray-level co-occurrence features has low feature dimensionality and its accuracy still has room for improvement, the infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features provides a multi-scale GLCM-feature liveness detection algorithm based on the face depth map: after the face RGB image and depth map are obtained, face detection is first performed on the RGB image to determine the position of the face region, and the face-region image is then extracted from the depth map to obtain the face-region depth map.
The background three-dimensional face recognition module adopts a three-dimensional depth analysis system, which exploits the pose and illumination invariance of three-dimensional face data: factors such as makeup, exposure and shadow strongly affect RGB images but have little effect on depth images, so three-dimensional face liveness detection is more robust.
The working principle of the invention is as follows: the basic block diagram of the living body and face recognition system of the invention is shown in fig. 1. To use the anti-spoofing system, the user presents the relevant biometric feature to a sensor, in most cases a camera. The captured face image is pre-processed into an acceptable form (e.g., by normalization and noise removal), after which the liveness-related facial features are extracted by the feature extraction module. The output of feature extraction is a biometric template containing the salient features that distinguish live samples from spoofed samples. Only live samples are passed on to face authentication, while fraudulent authentication attempts are automatically intercepted by the liveness detection system.
The diversity of impersonation attacks poses a huge challenge to face recognition research. The following types of liveness detection algorithms are currently the most widely applied on the market:
First, face liveness detection algorithms based on texture features: a forged face captured by the same equipment shows detail differences or losses compared with a real face, including face deformation, local highlights and the like, and these differences show up in the face texture.
Second, liveness detection algorithms based on motion information: a live face exhibits involuntary movements governed by neural regulation, including blinking, mouth movement, head movement and the like, and the motion information extracted from a face video can be used to judge whether the face is genuine.
Third, multispectral liveness detection algorithms: human facial skin and other materials differ in spectral reflectance, so real and fake faces can be distinguished by finding bands outside visible light in which they differ.
Fourth, liveness detection methods based on multi-feature fusion: different features yield different face liveness detection performance, and because impersonation attacks are diverse, fusing multiple features improves the accuracy and robustness of face liveness detection.
Most devices on the market use one of the following 5 methods for face liveness detection:
① Fourier spectrum analysis
The Fourier spectrum is a foundation of digital image processing: image features are extracted and analyzed by transforming the image between the spatial and frequency domains. When a face image is sharp, its Fourier spectrum contains more high-frequency components. As shown in fig. 2, when a photo face or a video face attacks the face detection system, the face image has been captured a second time, so its sharpness differs from that of the real face image captured directly, and real and fake faces can be distinguished by analyzing the high- and low-frequency content of the face image.
Fourier spectrum analysis judges the authenticity of a face image mainly on two criteria:
a. a real face is larger than a face photo and a face photo is 2D, so the high-frequency components of a forged face image are lower than those of a real face image;
b. a forged face image shows no local motion over time, so its frequency-domain content varies little in the time domain.
The Fourier spectrum analysis method is simple, but it is easily affected by illumination and image resolution. Under different illumination conditions, the sharpness of face images captured by different devices differs, and an attack face image captured with a high-quality device may be sharper than a genuine face image captured with an ordinary device.
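As an illustration of this criterion, the minimal sketch below (not part of the patent) computes the share of Fourier-spectrum energy outside a central low-frequency disc for a grayscale face crop; the radius_frac cutoff and any decision threshold are assumptions that would need tuning on real live/attack data.

import numpy as np

def high_frequency_ratio(gray_face, radius_frac=0.25):
    """Share of spectral energy outside a central low-frequency disc.

    Recaptured (photo/video) faces tend to be blurrier, so their ratio
    is typically lower than that of a directly captured live face.
    """
    f = np.fft.fftshift(np.fft.fft2(gray_face.astype(np.float64)))
    mag = np.abs(f)
    h, w = gray_face.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_frac * min(h, w)) ** 2
    total = mag.sum()
    return float(mag[~low].sum() / total) if total > 0 else 0.0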
② Three-dimensional depth analysis
Three-dimensional face liveness detection differs from two-dimensional face liveness detection in that it uses three-dimensional face data, whose advantages include:
Three-dimensional face data is invariant to pose and illumination: factors such as makeup, exposure and shadow strongly affect RGB images but have little effect on depth images, so three-dimensional face liveness detection is more robust.
Three-dimensional face data reflects the shape of the face more faithfully: the depth map of a live face carries facial contour features and is clearly distinguishable from the depth maps of video, photo and other attack faces.
As shown in fig. 3, RGB images and depth maps of a real face and a 3D mask are collected, LBP features of the face-region RGB image and depth map are extracted respectively, and classifiers such as Linear Discriminant Analysis (LDA) and SVM are used for discrimination. The different curvature of a live face and a photo face is also used to distinguish real from fake faces. First, the covariance of the image points in Cartesian coordinates is computed with a singular value decomposition or PCA; the image points are taken to lie on a sphere of radius r, and the curvature of an image point p is defined as:
[curvature formula, reproduced in the original only as image RE-GSB0000188291250000051]
where v is the eigenvector corresponding to the minimum eigenvalue of the decomposed matrix, b is the center of gravity of the surface points, and d is the average distance of the surface points. A live face shows obvious curvature variation, whereas the curvature values of a photo or video attack face are nearly identical, so real and fake faces can be distinguished from the curvature of the face image.
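The exact curvature formula above survives only as an image in the original. Purely as an illustration, the sketch below computes the standard PCA "surface variation" measure lambda_min / (lambda_1 + lambda_2 + lambda_3) of a face-region point cloud, which captures the same intuition (a flat photo gives values near zero, a curved live face does not); this substitute measure and the helper's interface are assumptions, not the patent's formula.

import numpy as np

def surface_variation(points):
    """Curvature proxy for an (N, 3) array of face surface points.

    Returns lambda_min / (lambda_1 + lambda_2 + lambda_3) of the point
    covariance: close to 0 for a planar photo face, larger for a real face.
    """
    b = points.mean(axis=0)            # center of gravity of the surface points
    cov = np.cov((points - b).T)       # 3 x 3 covariance matrix of the centred points
    eigvals = np.linalg.eigvalsh(cov)  # eigenvalues in ascending order
    return float(eigvals[0] / eigvals.sum())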
The face liveness detection method based on three-dimensional depth requires no extra user interaction such as blinking, head shaking or smiling, and recognizes photos, videos and the like well, although it recognizes three-dimensional models poorly. Because reconstructing a three-dimensional model of a legitimate user is technically demanding and hard to achieve, the method is still highly practical.
③ Facial optical flow analysis
The facial optical flow method uses the temporal change of pixels in a face image sequence and the correlation between adjacent frames to find the correspondence between the face in the previous frame and the face in the current frame, and thereby computes the motion of the face between adjacent frames. One difference between a live face and a photo face is that the former undergoes non-rigid deformation, including mouth and eye changes. When a live face rotates or sways, the different movements of the facial parts produce different optical flow; a photo face is geometrically two-dimensional, the motion of its different regions is essentially uniform, and the optical flow it produces differs greatly from that of a live face.
The optical flow generated by face rotation is computed, and the optical flow values are trained and classified with an SVM (support vector machine). The liveness score of the face is computed first, and the face image under test is then judged to be a live face or not. Because the facial expression changes somewhat while the head rotates, local regions of a live face produce stronger optical flow, whereas a photo face shows no obvious change; finally, the optical flow in the face image is trained and classified by the SVM to decide whether the face under test is live.
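A minimal sketch of this idea using OpenCV's dense Farneback optical flow is given below; the statistics returned (mean and standard deviation of the flow magnitude over the face crop) are one plausible choice of features to feed the SVM, not the patent's exact features.

import cv2
import numpy as np

def flow_statistics(prev_gray, curr_gray):
    """Dense optical flow between two consecutive grayscale face crops.

    A live face rotating its head produces spatially varied flow (high std),
    while a planar photo face moves almost uniformly (low std).
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(mag.mean()), float(mag.std())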
④ Human-computer interaction detection of behaviors such as blinking
Blinking and mouth opening are normal physiological behaviors of a live face and do not occur in a common photo attack, so common photo attacks can be detected well by judging whether the face blinks, opens its mouth and so on. A method has been proposed that recognizes a live face from mouth and eye movement: the face is detected first, the mouth and eye regions are extracted, principal component analysis is applied to these regions, and the live face is distinguished from an attack face by analyzing the principal-component attributes of the video frame sequence. This works well against photo attack faces but poorly against video attack faces, because a video attack face may itself blink or open its mouth. At present, most methods used in the face liveness detection industry match instructed actions, including turning the head left, blinking, opening the mouth and other physiological behaviors, and treat a mismatch as an attack. Such methods nevertheless face a serious challenge: an attacker can cut out the eye and mouth regions of a legitimate user's face picture and perform the corresponding actions behind the picture as instructed.
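As a rough sketch of the principal-component idea described above (the region extraction, the component count and any decision threshold are illustrative assumptions, not the published method), one can project a sequence of eye crops onto their principal components and look at the temporal spread:

import numpy as np
from sklearn.decomposition import PCA

def eye_region_activity(eye_frames):
    """eye_frames: list of equally sized grayscale eye crops from a video.

    A blinking live face spreads widely along the first principal component
    over time; a static photo barely moves along it.
    """
    X = np.stack([f.ravel().astype(np.float64) for f in eye_frames])
    n_comp = max(1, min(5, len(eye_frames) - 1))
    scores = PCA(n_components=n_comp).fit_transform(X)
    return float(scores[:, 0].std())   # temporal spread along the first component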
⑤ Multispectral analysis
The multispectral liveness detection method exploits the difference in spectral reflectance between human facial skin and other materials: by finding bands outside visible light in which real and forged faces differ, the two can be distinguished. Comprehensive tests of face detection in the visible and infrared bands show that the infrared band is more suitable for face detection. After studying the spectral reflectance of common masks, 850 nm and 685 nm were chosen as illumination bands, because the 850 nm band separates skin from most forged materials and the 685 nm band effectively distinguishes the skin colors of different ethnic groups. As shown in fig. 4, the forehead skin is illuminated with the selected bands, the average gray value of a specific region is computed as the feature, and a linear classification method finally performs the liveness decision. Because the method illuminates the forehead skin with an active light source and classifies in real time, it achieves a good detection rate.
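A minimal sketch of this band-reflectance feature is shown below; the two-element feature (mean forehead gray value under 850 nm and under 685 nm illumination) follows the description above, while the ROI format and the linear classifier setup are assumptions.

import numpy as np
from sklearn.svm import LinearSVC

def band_features(img_850, img_685, roi):
    """Mean gray value of the forehead ROI under each illumination band.

    roi = (y0, y1, x0, x1) in pixel coordinates of the forehead region.
    """
    y0, y1, x0, x1 = roi
    return np.array([float(img_850[y0:y1, x0:x1].mean()),
                     float(img_685[y0:y1, x0:x1].mean())])

def train_band_classifier(X, y):
    """X: band features per sample, y: 1 = live skin, 0 = spoof material."""
    return LinearSVC().fit(X, y)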
The thermal-infrared-imaging liveness detection method relies on the fact that live skin gives a stable thermal signal and differs greatly from the background in most conditions; the background appears dim under thermal infrared, and external illumination has little influence on the thermal infrared image.
Several common liveness detection methods are compared in the following table:
[comparison table, reproduced in the original only as image RE-GSB0000188291250000061]
As the table shows, each liveness detection method has its own advantages, disadvantages and scope of application. The face liveness detection method based on three-dimensional depth-map information is computationally more complex, but makeup, exposure, shadow and similar factors have little influence on the face depth map, and it recognizes both photo and video faces well, though it recognizes three-dimensional face models poorly. The method based on facial optical flow analysis requires the user to cooperate by moving the head, is strongly affected by illumination, and resists face-photo attacks well. Human-computer interaction requires more cooperation from the user, takes longer to detect and needs additional equipment, but it resists common photos well. The Fourier spectrum analysis method is simple but sensitive to light, and it recognizes three-dimensional face models poorly. The blinking method needs only a small amount of user cooperation, is little affected by illumination and resists common photo attacks strongly, but resists video poorly. The thermal infrared imaging method is little affected by light and resists both face photos and three-dimensional models well, but it needs additional expensive equipment.
Whether face liveness detection technology can develop well necessarily rests on whether face liveness databases are complete. Databases are important for evaluating face liveness detection algorithms. If the data in a database is comprehensive and reflects real scenes, the performance of a face liveness detection algorithm on that database reflects its performance in real scenes more accurately; if the data is too uniform or too scarce, the data distribution of real scenes cannot be estimated well and the quality of the algorithm cannot be evaluated objectively. With the growing attention to face liveness detection, public face liveness databases are gradually increasing in number. The common face liveness detection databases currently include the following:
NUAA: the database was published in 2010 by Tan et al. of Nanjing University of Aeronautics and Astronautics and is recognized as the first anti-photo-spoofing database. It was collected with several cheap cameras bought on the electronics market, in three sessions two weeks apart, with a different location and illumination each time. A total of 15 volunteers (nos. 1 to 15) took part in the data collection. The database covers many appearance variations, including glasses, gender and illumination; during acquisition the volunteers were asked to look straight at the camera and blink normally, without obvious expression changes, with the face region occupying about 2/3 of the whole image. In total 3491 live images and 9123 fake-face images were collected. The negative samples are 8.9 cm × 12.3 cm and 6.8 cm × 12.7 cm face photos produced by conventional photo development and by a printer; the author moved the face photos horizontally and vertically back and forth, rotated them left and right about a vertical axis, rotated them up and down about a horizontal axis, bent them left and right about a vertical axis, and bent them up and down about a horizontal axis. The collected face images were finally saved at 640 × 480.
REPLAY-ATTACK: the Replay-Attack two-dimensional face liveness detection database contains 1300 videos in total, comprising photo-attack and video-attack faces of 50 people under different illumination. The data is divided into 4 subsets:
① training data, used to train the liveness detection classifier;
② development data, used for threshold estimation;
③ test data, used to report error rates;
④ enrollment data, used to verify the sensitivity of the face liveness detection algorithm.
The live-face data of Replay-Attack was collected with the built-in camera of a MacBook; each video lasts at least 9 seconds at 25 frames per second, with a resolution of 320 × 240. Collection took place under two illumination conditions: one with the office lights on and the curtains closed against a uniform background, the other with the office lights off and the curtains open in an environment with a complex background.
The collectors first used a Canon SX150 IS to capture a 12.1-megapixel attack-face photo and a 720p attack-face video of each subject.
These photos and videos were then presented using a fixed camera position and a hand-held camera respectively. The database includes 10 kinds of attack faces:
① 4 attack faces consist of photos and videos at 480 × 320 resolution replayed on an iPhone 3GS under the two lighting conditions;
② another 4 attack faces consist of photos and videos at 1024 × 768 resolution replayed on an iPad under the two lighting conditions;
③ the last 2 attack faces use hard copies produced with a Triumph-Adler DCC printer.
CASIA Face Anti-Spoofing: the database comprises 600 videos and provides video images of three different qualities: low, normal and high. The low-quality videos were captured with an old USB camera at a resolution of 640 × 480, the normal-quality videos with a newly purchased USB camera at 480 × 640, and the high-quality face videos with a Sony NEX-5 camera, whose maximum video resolution reaches 1920 × 1080; because high-resolution images are inconvenient to store and process, the high-quality images were captured at 1280 × 720.
A total of 50 people took part in the real-face video capture; the videos were all captured in a natural environment and all contain normal blinking. The fake-face videos fall into three categories:
① warped-photo attack: a high-quality 1280 × 720 photo is captured with the Sony NEX-5 and printed on coated (copperplate) paper, and the attacker deliberately warps the photo to simulate the head movement of a real person;
② cut-photo attack: a photo obtained in the same way has its eye regions cut out, and the attacker hides behind the photo and blinks to deceive the face authentication system;
③ video attack: the captured high-quality video is replayed on an iPad.
MSU MFSD: the database was collected in the Pattern Recognition and Image Processing (PRIP) laboratory of Michigan State University. 35 people took part in the video collection, yielding 280 real-face and attack-face videos; each video lasts more than 9 s, 12 s on average. Compared with other mainstream face liveness detection databases, it has the following characteristics:
① mobile phones were used to acquire the real-face and fake-face videos;
② the attack photos were produced on large-format paper with a color printer, so the quality of the printed photos is better than in other databases.
Two kinds of cameras were used to collect the real-face videos:
① the built-in camera of a MacBook Air 13, at a resolution of 640 × 480;
② the front camera of a Google Nexus 5 Android phone, at a resolution of 720 × 480.
The fake-face videos are mainly 1080p attack-face videos acquired with a Canon 550D single-lens reflex camera and an iPhone 5s, and the three attack means are:
① face video replayed on an iPad, at a resolution of 2048 × 1536;
② face video replayed on an iPhone 5s, at a resolution of 1136 × 640;
③ a face photo printed on A3 paper with an HP Color LaserJet CP6015xh printer, at a resolution of 1200 × 600 dpi.
The videos or photos used by these three attack means were re-captured with the notebook camera and the Android phone camera respectively.
BIWI Kinect Head Pose Database: a head-pose database collected with a Kinect, containing face-pose data of 20 different people, 16 men and 4 women, 4 of them wearing glasses. Volunteers were asked to sit 1 meter from the Kinect and rotate their head as widely as possible. Since every frame contains a face picture, the face-region depth images of this database are extracted as positive samples for 3D face liveness detection.
Support Vector Machine (SVM)
A support vector machine (SVM) is a binary classification model. Its basic form is a linear classifier defined by the maximum margin in feature space, and this maximum margin is what distinguishes it from the perceptron; with kernel techniques a support vector machine becomes a nonlinear classifier. The learning strategy of the SVM is margin maximization, which can be formalized as a convex quadratic programming problem and is also equivalent to minimizing a regularized hinge loss function; the learning algorithm of the SVM is an optimization algorithm for convex quadratic programming.
From simple to complex, support vector machine learning methods comprise the linearly separable support vector machine, the linear support vector machine and the nonlinear support vector machine. The simple models are the basis of the complex ones and are special cases of them. When the training data is linearly separable, a linear classifier is learned by hard-margin maximization, namely the linearly separable support vector machine, also called the hard-margin support vector machine; when the training data is approximately linearly separable, a linear classifier is learned by soft-margin maximization, also called the soft-margin support vector machine; when the training data is not linearly separable, a nonlinear support vector machine is learned with kernel techniques and soft-margin maximization.
Margin maximization means finding the hyperplane with the largest geometric margin for the training dataset, i.e., classifying the training data with sufficiently high confidence. Even the hardest sample points are separated with maximum confidence, and such a hyperplane classifies and predicts unknown new data well. Since the real-face data and attack-face data used here are approximately linearly separable, the linear classifier shown in fig. 5 is learned by soft-margin maximization to distinguish real faces from fake ones.
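Since the live/attack feature data is described as approximately linearly separable, a soft-margin linear SVM along the lines of the sketch below (library calls from scikit-learn; the split ratio and C value are assumptions) would implement the classifier of fig. 5:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_liveness_svm(X, y):
    """X: one feature vector per face (e.g. depth-map LBP or GLCM features).
    y: 1 = live face, 0 = attack face."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(kernel="linear", C=1.0)   # C controls how soft the margin is
    clf.fit(X_tr, y_tr)
    return clf, accuracy_score(y_te, clf.predict(X_te))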
The logical structure of the device is shown in fig. 6:
Determining whether the acquired depth information conforms to the three-dimensional face features;
after determining that the depth information accords with the three-dimensional face characteristics, determining whether the ambient light accords with a preset condition;
when the ambient light meets a preset condition, determining a first RGB image of the face;
performing face recognition based on the depth information and the first RGB image;
and when the ambient light does not meet the preset condition, supplementing light on the face through the infrared emitter and determining the infrared image of the face.
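The listed flow can be summarized in Python as below; every callable passed in stands for a hardware or algorithm block named in the patent, and the light threshold is an assumed placeholder for the unspecified preset condition.

def recognize(depth_frame, ambient_lux, capture_rgb, capture_ir,
              matches_3d_face, run_recognition, lux_threshold=50.0):
    """Decision flow of fig. 6 (illustrative sketch, not the patent's code)."""
    if not matches_3d_face(depth_frame):         # depth must fit 3-D face features
        return None                              # reject: likely a flat photo or video
    if ambient_lux >= lux_threshold:             # assumed form of the preset light condition
        rgb = capture_rgb()                      # trigger the RGB camera
        return run_recognition(depth_frame, rgb)
    ir = capture_ir()                            # trigger IR emitter + IR camera (fill light)
    return run_recognition(depth_frame, ir)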
Advantages of the infrared 3D face depth map:
the 3D face depth image has the characteristics of unchanged posture, unchanged illumination and the like, and for in-vivo detection, the 3D face depth image is used for in-vivo detection, so that the algorithm robustness is better;
the 3D face depth map can more clearly reflect the face shape characteristics, the living body face depth map has face contour characteristics, and the living body face depth map is obviously distinguished from depth maps of ipads, photo attack faces and the like.
Building on the face liveness detection algorithms based on LBP features and on gray-level co-occurrence features, a multi-scale GLCM (gray-level co-occurrence matrix) feature liveness detection algorithm based on the face depth map is proposed. The face-region image is first obtained from the depth map, multi-scale GLCM features are then extracted from the face depth map, and an SVM is trained to verify its liveness detection performance.
1) 3D face liveness detection algorithm based on LBP (local binary pattern) features
The LBP-feature-based 3D liveness detection algorithm obtains a depth map of the live face and then extracts LBP features from the face-region image in the face depth map. LBP features are extracted from the live face-region depth map and from the iPad attack face-region depth map respectively; the results are shown in fig. 7.
The main steps of the LBP-feature-based 3D face liveness detection algorithm are listed in the following table:
[algorithm steps table, reproduced in the original only as image RE-GSB0000188291250000101]
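A minimal sketch of the LBP feature extraction step on a face-region depth map, using scikit-image's local_binary_pattern (the P, R settings and the uniform-pattern histogram are common choices, not values given by the patent):

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(face_depth, P=8, R=1):
    """Uniform-LBP histogram of a face-region depth map, usable as the
    liveness feature vector fed to an SVM."""
    lbp = local_binary_pattern(face_depth, P, R, method="uniform")
    n_bins = P + 2                               # uniform codes 0..P plus one non-uniform bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist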
2) Infrared 3D face liveness detection algorithm based on gray-level co-occurrence matrix features
Infrared 3D face liveness detection based on gray-level co-occurrence matrix features does not use the RGB face image directly. The face-region image is first located in the RGB image, the face depth map of that region is then computed to obtain its co-occurrence matrices in 4 directions, and finally the contrast, energy, entropy and correlation of the co-occurrence matrices in the 4 directions are calculated to obtain feature values that characterize the texture of the face depth map.
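The 4-direction GLCM statistics described here can be sketched with scikit-image as follows; entropy is computed by hand because graycoprops does not provide it, and the single pixel distance and 256 gray levels are assumptions (the depth map is expected as an 8-bit image):

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(face_depth_u8):
    """Contrast, energy, correlation and entropy of the depth-map GLCM in
    the directions 0, 45, 90 and 135 degrees (16 values in total)."""
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(face_depth_u8, distances=[1], angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).ravel()              # 4 values per property
             for p in ("contrast", "energy", "correlation")]
    p = glcm[:, :, 0, :]                               # shape (256, 256, 4)
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1))
    return np.concatenate(feats + [entropy])           # 16-dimensional feature vector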
As shown in fig. 8, the left image is the live-face depth map after histogram equalization and the right image is the photo-attack face depth map after histogram equalization. The depth map of the live face shows clearer facial contour information, i.e., more pronounced grooves and higher texture complexity, whereas the depth values of the photo-attack face show no obvious variation. Accordingly, the contrast and entropy of the live-face depth map are higher than those of the photo-attack face depth map.
The first set of gray-level co-occurrence matrix eigenvalue data is shown in the following table:
[table reproduced in the original only as image RE-GSB0000188291250000102]
The second set of gray-level co-occurrence matrix eigenvalue data is shown in the following table:
[table reproduced in the original only as image RE-GSB0000188291250000103]
The third set of gray-level co-occurrence matrix eigenvalue data is shown in the following table:
[table reproduced in the original only as image RE-GSB0000188291250000111]
The fourth set of gray-level co-occurrence matrix eigenvalue data is shown in the following table:
[table reproduced in the original only as image RE-GSB0000188291250000112]
To analyze how well the gray-level co-occurrence matrix characterizes face liveness, the contrast, energy, entropy and correlation in the 4 directions 0°, 45°, 90° and 135° were computed for the three sets of live face-region depth maps and attack face-region depth maps shown in fig. 9; the four sets of results are shown in the tables above. In all four sets of experimental data, the contrast and entropy of the live-face depth map in the 4 directions are higher than those of the attack-face image, while its energy and correlation are lower, which shows that the texture information of the live-face depth map is more complex than that of the attack-face depth map. In the third and fourth sets of data, the contrast and entropy of the attack-face depth map are both 0 and the energy and correlation are both 1, which indicates that the gray value of the attack-face depth map does not vary, i.e., the gray values of the whole attack-face depth map are everywhere equal.
The main steps of the 3D face liveness detection algorithm based on gray-level co-occurrence features are listed in the following table:
[algorithm steps table, reproduced in the original only as image RE-GSB0000188291250000113]
3) Infrared 3D face liveness detection algorithm based on multi-scale gray-level co-occurrence matrix features
The infrared 3D face liveness detection method based on gray-level co-occurrence features has low feature dimensionality, and its accuracy still has room for improvement. For this reason, this chapter proposes a multi-scale GLCM-feature liveness detection algorithm based on the face depth map. After the face RGB image and depth map are obtained, face detection is first performed on the RGB image to determine the position of the face region, and the face-region image is then extracted from the depth map to obtain the face-region depth map. The face depth map is normalized to 90 × 90 and to 180 × 180, the 16-dimensional GLCM features of the 90 × 90 and of the 180 × 180 local face depth maps are extracted, and the two are concatenated into a 32-dimensional GLCM feature, i.e., the multi-scale GLCM feature of the face depth map. Finally, an SVM is used to verify the three-dimensional face liveness detection algorithm based on multi-scale GLCM features.
The main steps of the infrared 3D face liveness detection algorithm based on multi-scale gray-level co-occurrence features are listed in the following table:
[algorithm steps table, reproduced in the original only as image RE-GSB0000188291250000121]
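Assuming the glcm_features helper sketched in the previous subsection, the multi-scale concatenation described above can be written as follows (the nearest-neighbour resize is an assumption; the 90 × 90 and 180 × 180 scales and the 32-dimensional output follow the text):

import cv2
import numpy as np

def multiscale_glcm_features(face_depth_u8):
    """32-D multi-scale GLCM feature of a face-region depth map:
    16-D GLCM features at 90 x 90 and at 180 x 180, concatenated."""
    feats = []
    for size in (90, 180):
        resized = cv2.resize(face_depth_u8, (size, size),
                             interpolation=cv2.INTER_NEAREST)
        feats.append(glcm_features(resized))     # 16-D per scale
    return np.concatenate(feats)                 # 32-D feature fed to the SVM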
furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature, and in the description of the invention, "plurality" means two or more unless explicitly defined otherwise.
In the present invention, unless otherwise specifically stated or limited, the terms "mounted," "connected," "fixed," and the like are to be construed broadly and may, for example, mean fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; directly connected or indirectly connected through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In the present invention, unless otherwise expressly stated or limited, a first feature being "above" or "below" a second feature means that the first and second features are in direct contact, or that they are not in direct contact but contact each other via another feature between them. Moreover, the first feature being "on," "above" or "over" the second feature includes the first feature being directly above or obliquely above the second feature, or merely indicates that the first feature is at a higher level than the second feature; the first feature being "under," "below" or "beneath" the second feature includes the first feature being directly below or obliquely below the second feature, or merely indicates that the first feature is at a lower level than the second feature.
In the description herein, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made in the above embodiments by those of ordinary skill in the art without departing from the principle and spirit of the present invention.

Claims (6)

1. A living body and face recognition system based on near-infrared and 3D camera technology, comprising a recognition device hardware module, a living body anti-spoofing module and a background three-dimensional face recognition module, characterized in that: the recognition device hardware module comprises a TOF camera, an RGB camera, an infrared emitter, an infrared camera, an ambient light sensor and a processing chip, wherein the TOF camera is used for collecting depth information, the RGB camera is used for collecting a first RGB image of the face, the infrared emitter is used for emitting infrared light towards the face, and the ambient light sensor is used for sensing a second light intensity of the ambient light; the processing chip is used for determining whether the depth information conforms to three-dimensional face features, determining, after the depth information is found to conform, whether the ambient light meets a preset condition, triggering the RGB camera when the ambient light meets the preset condition, and triggering the infrared emitter and the infrared camera when it does not; the living body anti-spoofing module supports the three-dimensional face recognition functions of dedicated three-dimensional face recognition equipment and of mobile devices: the three-dimensional face recognition equipment collects face information at the front end and confirms that the operating user is a real live face, while the living body anti-spoofing module realizes liveness detection in software and hardware through a binocular camera at the front end; the background three-dimensional face recognition module adopts a three-dimensional depth analysis system: it obtains the whole face image during the face liveness detection step, then scans and recognizes the identity card, and after obtaining the head portrait on the identity card, compares the face captured on site with the face on the identity card and judges whether they belong to the same person.
2. The living body and face recognition system based on near-infrared and 3D camera technology according to claim 1, characterized in that: the living body anti-spoofing module comprises a 3D face liveness detection algorithm system based on LBP features, an infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix features, and an infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features.
3. The living body and face recognition system based on near-infrared and 3D camera technology according to claim 1 or 2, characterized in that: the 3D face liveness detection algorithm system based on LBP features acquires a depth map of the live face through the infrared camera, then extracts LBP features from the face-region image in the face depth map, and extracts LBP features from the live face-region depth map and from an iPad attack face-region depth map respectively.
4. The living body and face recognition system based on near-infrared and 3D camera technology according to claim 1 or 2, characterized in that: the infrared 3D face liveness detection algorithm system based on gray-level co-occurrence matrix features does not use the RGB face image directly: it first locates the face-region image in the RGB image, then computes the face depth map of that region to obtain its co-occurrence matrices in 4 directions, and finally calculates the contrast, energy, entropy and correlation of the co-occurrence matrices in the 4 directions to obtain feature values that characterize the texture of the face depth map.
5. The living body and face recognition system based on near-infrared and 3D camera technology according to claim 1 or 2, characterized in that: because the infrared 3D face liveness detection method based on gray-level co-occurrence features has low feature dimensionality and its accuracy still has room for improvement, the infrared 3D face liveness detection algorithm system based on multi-scale gray-level co-occurrence matrix features provides a multi-scale GLCM-feature liveness detection algorithm based on the face depth map: after the face RGB image and depth map are obtained, face detection is first performed on the RGB image to determine the position of the face region, and the face-region image is then extracted from the depth map to obtain the face-region depth map.
6. The living body and face recognition system based on near-infrared and 3D camera technology according to claim 1, characterized in that: the background three-dimensional face recognition module adopts a three-dimensional depth analysis system, which exploits the pose and illumination invariance of three-dimensional face data: factors such as makeup, exposure and shadow strongly affect RGB images but have little effect on depth images, so three-dimensional face liveness detection is more robust.
CN202010395850.1A 2020-05-07 2020-05-07 Living body based on near infrared and 3D camera shooting technology and face recognition system Pending CN111582197A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010395850.1A CN111582197A (en) 2020-05-07 2020-05-07 Living body based on near infrared and 3D camera shooting technology and face recognition system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010395850.1A CN111582197A (en) 2020-05-07 2020-05-07 Living body based on near infrared and 3D camera shooting technology and face recognition system

Publications (1)

Publication Number Publication Date
CN111582197A true CN111582197A (en) 2020-08-25

Family

ID=72124903

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010395850.1A Pending CN111582197A (en) 2020-05-07 2020-05-07 Living body based on near infrared and 3D camera shooting technology and face recognition system

Country Status (1)

Country Link
CN (1) CN111582197A (en)

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153807A (en) * 2016-03-03 2017-09-12 重庆信科设计有限公司 A kind of non-greedy face identification method of two-dimensional principal component analysis
CN108241865A (en) * 2016-12-26 2018-07-03 哈尔滨工业大学 Multiple dimensioned more subgraph liver fibrosis multi-stage quantizations based on ultrasonoscopy method by stages
CN107368778A (en) * 2017-06-02 2017-11-21 深圳奥比中光科技有限公司 Method for catching, device and the storage device of human face expression
KR101919090B1 (en) * 2017-06-08 2018-11-20 (주)이더블유비엠 Apparatus and method of face recognition verifying liveness based on 3d depth information and ir information
CN107451575A (en) * 2017-08-08 2017-12-08 济南大学 A kind of face anti-fraud detection method in identity authorization system
CN107506752A (en) * 2017-09-18 2017-12-22 艾普柯微电子(上海)有限公司 Face identification device and method
CN110163026A (en) * 2018-01-04 2019-08-23 上海蓥石汽车技术有限公司 A kind of three-dimensional driver identification system and method based on structure light
CN108289222A (en) * 2018-01-26 2018-07-17 嘉兴学院 A kind of non-reference picture quality appraisement method mapping dictionary learning based on structural similarity
CN108564041A (en) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 A kind of Face datection and restorative procedure based on RGBD cameras
CN108960088A (en) * 2018-06-20 2018-12-07 天津大学 The detection of facial living body characteristics, the recognition methods of specific environment
CN109101871A (en) * 2018-08-07 2018-12-28 北京华捷艾米科技有限公司 A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
CN111046703A (en) * 2018-10-12 2020-04-21 杭州海康威视数字技术股份有限公司 Face anti-counterfeiting detection method and device and multi-view camera
CN109684965A (en) * 2018-12-17 2019-04-26 上海资汇信息科技有限公司 A kind of face identification system based near infrared imaging and deep learning
CN110222486A (en) * 2019-05-18 2019-09-10 王�锋 User ID authentication method, device, equipment and computer readable storage medium
CN110287900A (en) * 2019-06-27 2019-09-27 深圳市商汤科技有限公司 Verification method and verifying device
CN110580454A (en) * 2019-08-21 2019-12-17 北京的卢深视科技有限公司 Living body detection method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邱晨鹏 (Qiu Chenpeng): "Research on face liveness detection based on binocular cameras", Modern Computer (Professional Edition) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036375A (en) * 2020-10-01 2020-12-04 深圳奥比中光科技有限公司 Method and device for detecting infrared image and depth image and face recognition system
CN112036375B (en) * 2020-10-01 2024-05-07 奥比中光科技集团股份有限公司 Method, device and face recognition system for detecting infrared image and depth image
CN112364842A (en) * 2020-12-24 2021-02-12 杭州宇泛智能科技有限公司 Double-shot face recognition method and device
CN112528969A (en) * 2021-02-07 2021-03-19 中国人民解放军国防科技大学 Face image authenticity detection method and system, computer equipment and storage medium
CN113326814A (en) * 2021-02-22 2021-08-31 王先峰 Face recognition equipment based on 5G framework
CN113128429A (en) * 2021-04-24 2021-07-16 新疆爱华盈通信息技术有限公司 Stereo vision based living body detection method and related equipment
CN113627263A (en) * 2021-07-13 2021-11-09 支付宝(杭州)信息技术有限公司 Exposure method, device and equipment based on face detection
CN113627263B (en) * 2021-07-13 2023-11-17 支付宝(杭州)信息技术有限公司 Exposure method, device and equipment based on face detection
CN113837033A (en) * 2021-09-08 2021-12-24 江西合力泰科技有限公司 Face recognition method carrying TOF module
CN113837033B (en) * 2021-09-08 2024-05-03 江西合力泰科技有限公司 Face recognition method with TOF module
CN116824768A (en) * 2023-08-30 2023-09-29 杭银消费金融股份有限公司 Face recognition method and medium based on financial self-service terminal
CN116824768B (en) * 2023-08-30 2023-11-28 杭银消费金融股份有限公司 Face recognition method and medium based on financial self-service terminal

Similar Documents

Publication Publication Date Title
CN111582197A (en) Living body based on near infrared and 3D camera shooting technology and face recognition system
Patel et al. Secure face unlock: Spoof detection on smartphones
Shao et al. Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing
KR102561723B1 (en) System and method for performing fingerprint-based user authentication using images captured using a mobile device
Chen et al. A multi-task convolutional neural network for joint iris detection and presentation attack detection
Wen et al. Face spoof detection with image distortion analysis
CN103632132B (en) Face detection and recognition method based on skin color segmentation and template matching
CN109101871A (en) A kind of living body detection device based on depth and Near Infrared Information, detection method and its application
Kähm et al. 2d face liveness detection: An overview
KR20080033486A (en) Automatic biometric identification based on face recognition and support vector machines
TWI318108B (en) A real-time face detection under complex backgrounds
CN109858439A (en) A kind of biopsy method and device based on face
Parveen et al. Face anti-spoofing methods
Zhang et al. A survey on face anti-spoofing algorithms
CN111126240A (en) Three-channel feature fusion face recognition method
CN109255319A (en) For the recognition of face payment information method for anti-counterfeit of still photo
Rathgeb et al. Detection of makeup presentation attacks based on deep face representations
CN107862298B (en) Winking living body detection method based on infrared camera device
CN109308436B (en) Living body face recognition method based on active infrared video
Wasnik et al. Presentation attack detection for smartphone based fingerphoto recognition using second order local structures
KR20060058197A (en) Method and apparatus for eye detection
Jingade et al. DOG-ADTCP: A new feature descriptor for protection of face identification system
Gürel Development of a face recognition system
Proença Unconstrained iris recognition in visible wavelengths
GB2471192A (en) Iris and Ocular Recognition using Trace Transforms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information
Inventor after: Li Si; Deng Zuping; Guo Jing; Tian Tingting; Liu Hui; Luo Bangcai; Chen Liang
Inventor before: Deng Zuping; Li Si; Guo Jing; Tian Tingting; Liu Hui; Luo Bangcai; Chen Liang
RJ01 Rejection of invention patent application after publication
Application publication date: 20200825