CN109308436B - Living body face recognition method based on active infrared video - Google Patents

Living body face recognition method based on active infrared video

Info

Publication number
CN109308436B
CN109308436B
Authority
CN
China
Prior art keywords
image
frame difference
face
infrared
living body
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710627156.6A
Other languages
Chinese (zh)
Other versions
CN109308436A (en)
Inventor
李小霞
叶远征
肖娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southwest University of Science and Technology filed Critical Southwest University of Science and Technology
Priority to CN201710627156.6A priority Critical patent/CN109308436B/en
Publication of CN109308436A publication Critical patent/CN109308436A/en
Application granted granted Critical
Publication of CN109308436B publication Critical patent/CN109308436B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a living body face recognition method based on active infrared video, aimed at the problem of photo and video fraud in face recognition, comprising the following steps: step 1, collecting an infrared video, detecting the face with the Haar-like + AdaBoost algorithm, and extracting the maximum face frame; step 2, extracting a brightness-salient region within the maximum face frame using an iterative secondary frame difference method; step 3, binarizing the iterative secondary frame difference image and extracting LBP features; and step 4, performing live-body judgment by minimum distance matching. The method can accurately distinguish a live face from a photo or video in real time, and offers a high live-face detection rate, a low photo/video false detection rate, low cost, natural interaction and strong anti-fraud capability.

Description

Living body face recognition method based on active infrared video
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a living body face recognition method based on an active infrared video.
Background
With the development of computer vision technology, face recognition has matured rapidly and is used for identity authentication in more and more settings. However, applications with high security requirements, such as face-scan payment and face locks, face the problem of photo/video fraud: lawbreakers can use photos or pre-recorded videos to gain the system's approval and carry out criminal activities, causing serious losses to users. Live face recognition and fraud prevention are therefore essential functions in face recognition applications.
The living body face recognition technology aims at distinguishing the authenticity of the face and guaranteeing the safe and stable operation of the face recognition system. At present, the living body face recognition method mainly comprises an optical flow method, a thermal infrared detection method, a face micromotion-based method, a sparse low-rank bilinear discriminant model method, an image quality evaluation-based method and the like.
The optical flow method distinguishes a live face from a photo according to the different optical flow fields generated by the motion of a two-dimensional plane and of a three-dimensional object. The method relies on facial motion and assumes the photo lies in a single plane, so it is unsuitable for severely bent or wrinkled photos; moreover, the optical flow field must be computed, which is computationally expensive.
The thermal infrared detection method uses a thermal infrared imager to capture a facial thermal radiation image and thereby distinguish a live face from a photo. However, the thermal image is strongly affected by changes in ambient temperature, and changes in body condition mean the facial thermal image lacks uniqueness and stability, which increases the difficulty of face detection; the thermal imager is also too expensive.
The face micro-motion method judges a live face by combining local background contrast with eye blinking; it has a high recognition rate and can reject most photos and counterfeit faces. Its disadvantages are: the local background contrast part uses the Local Binary Pattern (LBP) feature, which takes a long time; for interactive cues such as blinking, the eye detection rate is not high and natural blinking is infrequent, which lowers the detection rate, and, most importantly, users are often unwilling to perform deliberate actions in certain environments; furthermore, a lawbreaker can wear a photo over the head and expose only the eyes to defeat it.
The sparse low-rank bilinear discriminant model method builds a sparse bilateral vector logistic discriminant model and achieves a high recognition rate on single images, but two parameters must be selected manually, its robustness is limited, and its iteration is complex and computationally heavy, so it does not meet real-time requirements.
The image quality evaluation method uses eight image quality metrics, such as the peak signal-to-noise ratio, mean square error and structural similarity index, to judge whether a given input image is a live face image. The algorithm works under natural interaction, but it cannot correctly classify a photo and a live face of the same image quality.
In summary, existing live face recognition techniques struggle to meet practical application requirements in terms of cost, real-time performance, recognition rate, natural interaction and fraud prevention, so further research on this technology is of significant value.
Disclosure of Invention
The invention provides a living body face recognition method based on active infrared video, aimed at the photo/video fraud problem in face recognition. Building on face detection in the active infrared video, the method uses an iterative secondary frame difference approach to extract, from the infrared face image, the brightness-salient regions of the facial parts illuminated by the active infrared light; these regions clearly separate a live face from a photo or video. The frame difference image is then binarized, LBP (local binary pattern) features are extracted, and finally minimum distance matching is used for the live-body judgment. The method features low cost, natural interaction, high real-time performance and a high recognition rate.
The technical solution of the invention comprises the following steps:
step 1, collecting an infrared video, detecting a human face by utilizing a Haar-like + AdaBoost algorithm, and extracting a maximum human face frame;
step 2, extracting a brightness salient region on the maximum face frame by using an iterative quadratic frame difference method;
step 3, performing binarization processing on the secondary frame difference image, and extracting LBP characteristics;
and 4, performing living body judgment by adopting minimum distance matching.
The infrared video in step 1 is less affected by illumination than visible light video. The active infrared camera has a built-in narrow-band filter so that only infrared image information is read: interference from visible light is removed, and face information from reflective photos, videos and color non-reflective photos is not captured, which helps improve the recognition rate.
In step 2, the iterative secondary frame difference method extracts the brightness-salient regions of the facial parts illuminated by the active infrared light on the infrared face image; these regions clearly distinguish a live face from a photo, which improves the recognition rate.
In step 3, the secondary frame difference image is binarized and LBP (local binary pattern) features are extracted; binarization reduces interference and improves the reliability of recognition, while the LBP features further highlight local characteristics, which helps improve the recognition rate.
In step 4, since the features are separable, simple minimum distance matching is used for the live-body judgment, which yields good real-time performance.
Compared with the prior art, the invention has the following notable advantages: 1) good real-time performance: only two adjacent frames of infrared images are needed, and only the secondary frame difference, binarization, LBP feature extraction and minimum distance matching are performed, which keeps the algorithm complexity low; 2) high recognition rate: the active infrared video is little affected by illumination and effectively excludes photo/video information, the iterative secondary frame difference method extracts the brightness-salient regions of the infrared face image, which clearly separate a live face from a photo or video, and the LBP (local binary pattern) features further highlight local characteristics, all of which improve the recognition rate; 3) low cost and natural interaction, since only an active infrared camera is required.
Drawings
Fig. 1 is a flow chart of the active infrared video-based living body face recognition method.
Fig. 2 is a positive sample binary image of a living human face according to the present invention.
FIG. 3 is a negative sample binary image of a photograph according to the present invention.
Detailed Description
The invention will be further explained with reference to the drawings and the specific embodiments.
The live face recognition flow based on active infrared video is shown in Fig. 1 and comprises infrared video collection, face detection, live-feature extraction by the iterative secondary frame difference method, binarization, recognition-feature extraction by the LBP algorithm, and live face recognition.
The method comprises the following specific steps:
step 1, collecting an infrared video, detecting a human face by utilizing a Haar-like + AdaBoost algorithm, and extracting a maximum human face frame;
the infrared video is acquired through an active infrared camera, active infrared fill-in diodes which are evenly distributed and sufficiently illuminated are designed around the camera, the central wavelength of a filter of the camera is 850nm, a Haar-like + AdaBoost algorithm is a classic face detection algorithm, and an OpenCV (open circuit vehicle) self-carried function is adopted for detection.
In the active infrared images, no face can be detected from reflective photos, videos or color non-reflective photos, so their false detection rate is 0; a face can still be detected only from black-and-white non-reflective photos, and the algorithm therefore mainly targets this case.
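As a rough sketch of step 1 (assuming OpenCV's bundled frontal-face Haar cascade applied to grayscale infrared frames; this is an illustration, not the patent's exact implementation), the maximum face frame can be picked from each frame as follows:

```python
import cv2

# OpenCV's bundled Haar cascade (assumption: the default frontal-face model is used
# on the grayscale infrared frames described in the text).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def largest_face(gray_frame):
    """Return the largest detected face box (x, y, w, h), or None if no face is found."""
    faces = cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    return max(faces, key=lambda box: box[2] * box[3])  # largest area = maximum face frame
```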
Step 2, extracting a brightness salient region, namely a living body feature, on the maximum face frame by using an iterative secondary frame difference method, wherein the specific extraction method comprises the following steps:
(1) Graying the first frame of the infrared face image and using it as the initial image f0^(m);
(2) Setting the iteration number m = 1;
(3) Reading the current two consecutive frames of infrared face images and graying them to obtain f1^(m) and f2^(m);
(4) Performing the iterative secondary frame difference processing on the current images f1^(m) and f2^(m):
d1^(m) = | f1^(m) − f0^(m) |
d2^(m) = | f2^(m) − f0^(m) |
D^(m) = | d2^(m) − d1^(m) |
f0^(m+1) = D^(m)
where | · | denotes the absolute value; the subscripts "0", "1" and "2" respectively denote the "initial image" and the "current two consecutive frames" in the frame difference processing, and the superscript "(m)" is the iteration number; d1^(m) and d2^(m) are the one-time frame difference images, and D^(m) is the secondary frame difference image;
(5) Adding 1 to the iteration number and repeating steps (3) to (4) until the system is closed.
When the system continuously reads infrared images from the active infrared camera, they are first converted to grayscale and the first frame is used as the initial image. In the first iteration, two consecutive frames are acquired, the frame difference between the initial image and each of the two frames is computed, and the frame difference of these two frame difference images then gives the secondary frame difference image. In the second iteration, the secondary frame difference image of the first iteration serves as the initial image, two new frames are acquired, and the same frame difference processing of the three images yields the secondary frame difference image of the second iteration, which in turn becomes the initial image of the next iteration. Iterating in this way extracts the brightness-salient regions of the infrared face image in real time.
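A minimal sketch of this iteration, under the assumption (consistent with the description above) that both first-order differences are taken against the current reference image and that the secondary frame difference image seeds the next round; the function names and frame source are illustrative only:

```python
import cv2

def secondary_frame_difference(f0, f1, f2):
    """f0: reference (initial) image; f1, f2: the two current consecutive grayscale face crops."""
    d1 = cv2.absdiff(f1, f0)      # first-order frame difference against the reference
    d2 = cv2.absdiff(f2, f0)      # first-order frame difference against the reference
    return cv2.absdiff(d2, d1)    # secondary (second-order) frame difference image

def iterate_frame_differences(frames):
    """frames: an iterator of grayscale face-region images read from the IR camera."""
    frames = iter(frames)
    f0 = next(frames)                       # first frame becomes the initial reference
    for f1, f2 in zip(frames, frames):      # consume two consecutive frames per iteration
        D = secondary_frame_difference(f0, f1, f2)
        yield D                             # brightness-salient region for this iteration
        f0 = D                              # the secondary difference seeds the next iteration
```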
Step 3, performing binarization processing on the secondary frame difference image, and extracting LBP characteristics;
for the secondary frame difference image obtained by each step iteration in the step 2
Figure 75426DEST_PATH_IMAGE008
Carrying out binarization treatment:
Figure 136923DEST_PATH_IMAGE009
μthe threshold value is binarized, and is related to the infrared illumination intensity, and 80% of the brightest gray value 220 of the secondary frame difference image is taken in this example, and is 176.
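As a small sketch of this step (treating the 80% ratio and the reference gray value 220 from the text as tunable parameters tied to the infrared illumination intensity):

```python
import cv2

def binarize_difference(D, max_gray=220, ratio=0.8):
    """Threshold the secondary frame difference image D at ratio * max_gray (176 in the text's example)."""
    mu = ratio * max_gray
    _, B = cv2.threshold(D, mu, 255, cv2.THRESH_BINARY)  # pixels above the threshold become 255, others 0
    return B
```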
Fig. 2 shows the secondary frame difference binary images of positive samples from different live faces extracted by the method; they share similar brightness-salient regions carrying depth feature information of the nose, cheeks, forehead and eyebrows. Fig. 3 shows the secondary frame difference binary images of negative samples from different photos; they differ greatly from one another and contain no obvious facial depth features, so the two classes are significantly different.
Texture is one of the intrinsic features of an object's surface, and the Local Binary Pattern (LBP) is an operator used to describe the local texture features of an image. The original LBP operator is defined on a 3 × 3 window: the gray value of the central pixel is used as a threshold, and the gray values of the 8 neighboring pixels are compared with it; if a neighboring pixel's value is greater than the central pixel's value, its position is marked as 1, otherwise 0. The 8 points in the 3 × 3 neighborhood thus produce an 8-bit unsigned number, which is the LBP value of the window and reflects the texture information of that region. The LBP operator is robust to any monotonic gray-scale variation.
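A minimal NumPy sketch of this original 3 × 3 LBP operator (the bit ordering chosen here is an assumption; a library routine such as scikit-image's local_binary_pattern could be used instead), together with the histogram commonly used as the feature vector for matching:

```python
import numpy as np

def lbp_image(img):
    """Compute the original 3x3 LBP code for every interior pixel of a grayscale image."""
    img = img.astype(np.int16)
    h, w = img.shape
    center = img[1:h-1, 1:w-1]
    out = np.zeros_like(center, dtype=np.uint8)
    # 8 neighbours in a fixed clockwise order; each contributes one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour > center).astype(np.uint8) << bit   # 1 if neighbour exceeds the centre
    return out

def lbp_histogram(img):
    """256-bin normalised LBP histogram used as the feature vector for matching."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```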
And 4, performing living body judgment by adopting minimum distance matching.
A representative matching library is established, containing positive samples of live faces and negative samples of black-and-white photos, as shown in Figs. 2 and 3. After the iterative secondary frame difference, binarization and LBP feature extraction have been applied to the infrared video, the Euclidean distance between the LBP feature of the current frame and that of each sample in the matching library is computed, and the live-face judgment is made according to the minimum distance principle.
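A sketch of this step 4 decision (the structure of the matching library, here a list of labelled feature vectors, is an assumption for illustration):

```python
import numpy as np

def is_live_face(query_feature, library):
    """library: list of (feature_vector, is_live) pairs built from positive and negative samples."""
    distances = [(np.linalg.norm(query_feature - feature), is_live)
                 for feature, is_live in library]
    _, nearest_label = min(distances, key=lambda item: item[0])  # minimum-distance (nearest-neighbour) rule
    return nearest_label
```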
The experimental results are shown in Tables 1 and 2. Table 1 gives the detection rate statistics for 100 positive-sample live face images, covering the front face (facing the light source), right side face 30°, left side face 30°, head down 30° and head up 30°. When the face is aligned with the light source the detection rate is 100%; at 30° to the right or left it is 98%; with the head lowered 30° it is 97%. The live-face detection rate from different angles is therefore above 97%, and the recognition speed exceeds 15 frames/second in all cases.
TABLE 1 Positive sample test results
Table 2 gives the false detection rates measured with 100 photo frames as negative samples; for the 3 test subjects the false detection rate is within 3%, and the recognition speed again exceeds 15 frames/second.
TABLE 2 negative sample test results
The above experimental results show that the proposed active infrared video-based live face recognition method successfully distinguishes live faces from photos/videos: the detection rate is 100% for the frontal and head-up live face and no less than 97% when the face turns within 30° left or right or the head is lowered; the false detection rate for black-and-white photos is no more than 3% and is 0 in all other cases; the recognition speed exceeds 15 frames/second. The method thus provides strong anti-fraud capability and good real-time performance.

Claims (2)

1. A living body face recognition method based on active infrared video comprises the following steps:
step 1, acquiring an infrared video through an active infrared camera, detecting a face by utilizing a Haar-like + AdaBoost algorithm, and extracting a maximum face frame;
step 2, extracting a brightness salient region on the maximum face frame by using an iterative secondary frame difference method, graying the infrared image when the system continuously reads the infrared image from the active infrared camera, and taking the first frame image as an initial image; for the first iteration, acquiring two continuous frames of images, calculating the frame difference between the initial image and the two frames of images to obtain two frame difference images, and calculating the frame difference between the two frame difference images to obtain a secondary frame difference image; performing second iteration, namely taking the secondary frame difference image of the first iteration as an initial image, acquiring two frames of images, performing the same frame difference processing on the three frames of images to obtain a secondary frame difference image of the second iteration, and taking the secondary frame difference image of the second iteration as an initial image of the next round; continuously iterating by the iteration method, and extracting the brightness salient region on the infrared face image in real time, wherein the specific extraction method comprises the following steps:
(1) graying the first frame of the infrared face image and using it as the initial image f0^(m);
(2) setting the iteration number m = 1;
(3) reading the current two consecutive frames of infrared face images and graying them to obtain f1^(m) and f2^(m);
(4) performing the iterative secondary frame difference processing on the current images f1^(m) and f2^(m):
d1^(m) = | f1^(m) − f0^(m) |
d2^(m) = | f2^(m) − f0^(m) |
D^(m) = | d2^(m) − d1^(m) |
f0^(m+1) = D^(m)
wherein | · | denotes the absolute value; the subscripts "0", "1" and "2" respectively denote the "initial image" and the "current two consecutive frames" in the frame difference processing, and the superscript "(m)" is the iteration number; d1^(m) and d2^(m) are the one-time frame difference images, and D^(m) is the secondary frame difference image;
(5) adding 1 to the iteration times, and repeating the steps (3) to (4) until the iteration is finished;
step 3, performing binarization processing on the secondary frame difference image, and extracting LBP characteristics;
and 4, performing living body judgment by adopting minimum distance matching.
2. The method according to claim 1, wherein in step 1, the infrared video is acquired by an active infrared camera, active infrared fill-in diodes which are uniformly distributed and sufficiently illuminated are designed around the camera, and the center wavelength of a filter of the camera is 850 nm.
CN201710627156.6A 2017-07-28 2017-07-28 Living body face recognition method based on active infrared video Active CN109308436B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710627156.6A CN109308436B (en) 2017-07-28 2017-07-28 Living body face recognition method based on active infrared video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710627156.6A CN109308436B (en) 2017-07-28 2017-07-28 Living body face recognition method based on active infrared video

Publications (2)

Publication Number Publication Date
CN109308436A CN109308436A (en) 2019-02-05
CN109308436B true CN109308436B (en) 2021-09-28

Family

ID=65202266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710627156.6A Active CN109308436B (en) 2017-07-28 2017-07-28 Living body face recognition method based on active infrared video

Country Status (1)

Country Link
CN (1) CN109308436B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109977846B (en) * 2019-03-22 2023-02-10 中国科学院重庆绿色智能技术研究院 Living body detection method and system based on near-infrared monocular photography
CN110942087B (en) * 2019-11-02 2023-05-02 华东理工大学 Matrix type image data classification method based on separation solution
CN113449567B (en) * 2020-03-27 2024-04-02 深圳云天励飞技术有限公司 Face temperature detection method and device, electronic equipment and storage medium
CN111814561A (en) * 2020-06-11 2020-10-23 浙江大华技术股份有限公司 Face recognition method, face recognition equipment and access control system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127145A (en) * 2016-06-21 2016-11-16 重庆理工大学 Pupil diameter and tracking
CN106384237A (en) * 2016-08-31 2017-02-08 北京志光伯元科技有限公司 Member authentication-management method, device and system based on face identification

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9578159B2 (en) * 2011-06-20 2017-02-21 Prasad Muthukumar Fisheye lens based proactive user interface for mobile devices

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127145A (en) * 2016-06-21 2016-11-16 重庆理工大学 Pupil diameter and tracking
CN106384237A (en) * 2016-08-31 2017-02-08 北京志光伯元科技有限公司 Member authentication-management method, device and system based on face identification

Also Published As

Publication number Publication date
CN109308436A (en) 2019-02-05

Similar Documents

Publication Publication Date Title
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
CN107862299B (en) Living body face detection method based on near-infrared and visible light binocular cameras
CN106874871B (en) Living body face double-camera identification method and identification device
Lee A novel biometric system based on palm vein image
Han et al. Palm vein recognition using adaptive Gabor filter
CN109308436B (en) Living body face recognition method based on active infrared video
Wang et al. Hand-dorsa vein recognition based on partition local binary pattern
CN110008813B (en) Face recognition method and system based on living body detection technology
Debiasi et al. PRNU variance analysis for morphed face image detection
Cheddad et al. Exploiting Voronoi diagram properties in face segmentation and feature extraction
CN105956578A (en) Face verification method based on identity document information
CN105243357A (en) Identity document-based face recognition method and face recognition device
Scherhag et al. Performance variation of morphed face image detection algorithms across different datasets
CN111582197A (en) Living body based on near infrared and 3D camera shooting technology and face recognition system
CN107862298B (en) Winking living body detection method based on infrared camera device
Deepika et al. An algorithm for improved accuracy in unimodal biometric systems through fusion of multiple feature sets
Benziane et al. Dorsal hand vein identification based on binary particle swarm optimization
KR20120135381A (en) Method of biometrics and device by using pupil geometry
Singh et al. Human identification based on hand dorsal vein pattern using BRISK and SURF algorithm
Zheng A novel thermal face recognition approach using face pattern words
Ganguly et al. Depth based occlusion detection and localization from 3D face image
CN112801034A (en) Finger vein recognition device
Akintoye et al. Challenges of finger vein recognition system: a theoretical perspective
Rossant et al. A robust iris identification system based on wavelet packet decomposition and local comparisons of the extracted signatures
Hamid et al. Dorsal Hand Vein Analysis for Security Systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant