CN112132046A - Static living body detection method and system - Google Patents

Static living body detection method and system

Info

Publication number
CN112132046A
CN112132046A
Authority
CN
China
Prior art keywords
image
face
visible light
infrared
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011014676.8A
Other languages
Chinese (zh)
Inventor
王军
李斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Fengwu Technology Co ltd
Original Assignee
Tianjin Fengwu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Fengwu Technology Co ltd filed Critical Tianjin Fengwu Technology Co ltd
Priority to CN202011014676.8A
Publication of CN112132046A
Legal status: Pending

Classifications

    • G06V40/161 Human faces, e.g. facial parts, sketches or expressions: Detection; Localisation; Normalisation
    • G06N3/08 Neural networks: Learning methods
    • G06T3/02 Geometric image transformations in the plane of the image: Affine transformations
    • G06V40/171 Human faces: Feature extraction; Face representation: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/45 Spoof detection, e.g. liveness detection: Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a static living body detection method and system. The method comprises: collecting a visible light image and an infrared image of a user; performing face detection on both the visible light image and the infrared image and, if a face region is detected, extracting a visible light face region image from the visible light image and an infrared face region image from the infrared image; and performing living body detection on the visible light face region image and the infrared face region image with a deep neural network to obtain a living body detection result. Because no user cooperation is required, the method improves the user experience.

Description

Static living body detection method and system
Technical Field
The invention relates to the technical field of face recognition, in particular to a static living body detection method and a static living body detection system.
Background
At present, as recognition rates have improved, face recognition systems have entered commercial use in railway passenger transport, banking, mobile phone unlocking, face-scan payment and similar applications, and are gradually spreading into schools, office buildings, security and other fields. Living body detection is a technology for judging whether a captured face is a real face or a forged face presented as an attack. If the role of face recognition is to determine who a person is, the role of living body detection is to determine whether that person is real. To prevent spoofing with photos, videos, masks and the like, living body detection is usually embedded as a module within face detection and face recognition or verification to confirm that the subject is the genuine user, completing real-person identity verification for high-security scenarios such as finance and access control so that the face recognition system can operate safely and stably.
Face detection and recognition are among the most active and challenging areas of biometric identity authentication, with strong development prospects and economic benefits in public security investigation, access control systems, target tracking and other civil security systems. However, although a conventional face recognition system can distinguish different faces, it has difficulty determining whether a presented face belongs to a living person or is a photograph or a mask. Moreover, compared with biometric traits such as fingerprints and irises, facial features are the easiest to acquire.
With face recognition technology alone, the face cannot serve as a secure key, because facial biometric data is particularly easy to collect and use for attacks. In settings such as airport security checks, attendance systems, company access control, bank account opening and online payment, a successful attack on a face recognition system can cause great losses to individuals and to society. Living body detection technology exists to eliminate this potential safety hazard: to prevent a malicious person from forging or stealing other people's biometric features for identity authentication, a biometric recognition system needs a living body detection function. Existing living body detection includes monocular detection and binocular silent detection; monocular detection requires the user to cooperate by performing actions, and face living body detection that relies on such cooperation has a long interaction time and a poor user experience.
Disclosure of Invention
The invention aims to provide a static living body detection method and a static living body detection system so as to improve user experience.
In order to solve the above technical problem, the present invention provides a static living body detection method, comprising:
collecting a visible light image and an infrared image of a user;
carrying out face detection on both the visible light image and the infrared image, if a face region is detected, extracting the visible light face region image from the visible light image, and extracting the infrared face region image from the infrared image;
and performing living body detection on the visible light face area image and the infrared face area image by adopting a deep neural network to obtain a living body detection result.
Preferably, a face detection algorithm is adopted to perform face detection on both the visible light image and the infrared image.
Preferably, the extracting the visible light face region image from the visible light image includes:
and carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image.
Preferably, the extracting an infrared face area image from an infrared image includes:
and performing face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
The invention also provides a static living body detection system for implementing the above method, the system comprising:
the acquisition module is used for acquiring visible light images and infrared images of a user;
the extraction module is used for carrying out face detection on both the visible light image and the infrared image, extracting the visible light face region image from the visible light image and extracting the infrared face region image from the infrared image if the face regions are detected;
and the detection module is used for performing living body detection on the visible light face area image and the infrared face area image by adopting a deep neural network to obtain a living body detection result.
Preferably, the extraction module is specifically configured to perform face detection on both the visible light image and the infrared image by using a face detection algorithm.
Preferably, the extraction module comprises:
the detection unit is used for carrying out face detection on both the visible light image and the infrared image;
and the extraction unit is used for extracting a visible light face region image from the visible light image and extracting an infrared face region image from the infrared image when the face region is detected.
Preferably, the extraction unit includes:
the first evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image;
and the second evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
The invention provides a static living body detection method and system that collect a visible light image and an infrared image of a user; perform face detection on both the visible light image and the infrared image and, if a face region is detected, extract a visible light face region image from the visible light image and an infrared face region image from the infrared image; and perform living body detection on the visible light face region image and the infrared face region image with a deep neural network to obtain a living body detection result. Because both a visible light image and an infrared image are collected and used for face detection, no user cooperation is needed and the user experience is improved; real-time, frictionless access control detection can be realized, and the method is only slightly affected by external factors such as illumination.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the following drawings show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a static living body detection method according to the present invention;
FIG. 2 is a flow chart of an implementation of living body detection;
FIG. 3 is a schematic structural diagram of a static living body detection system according to the present invention.
Detailed Description
The core of the invention is to provide a static living body detection method and a static living body detection system so as to improve user experience.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating a static living body detection method according to the present invention, which includes the following steps:
s11: collecting a visible light image and an infrared image of a user;
s12: carrying out face detection on both the visible light image and the infrared image, if a face region is detected, extracting the visible light face region image from the visible light image, and extracting the infrared face region image from the infrared image;
s13: and performing living body detection on the visible light face area image and the infrared face area image by adopting a deep neural network to obtain a living body detection result.
Therefore, the method collects not only a visible light image but also an infrared image and uses both for face detection, so no user cooperation is required and the user experience is improved. In addition, the method can realize real-time, frictionless access control detection and is only slightly affected by external factors such as illumination.
Based on the above method, further, in step S12, a face detection algorithm is used to perform face detection on both the visible light image and the infrared image.
Further, in step S12, the process of extracting the visible light face region image from the visible light image specifically includes: and carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image.
Further, in step S12, the process of extracting the infrared face region image from the infrared image specifically includes: and performing face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
Referring to fig. 2, fig. 2 is a flowchart of an implementation of living body detection. Specifically, the implementation flow is as follows:
step 1, video input and face detection;
the living body detection is carried out by combining the infrared input image, so that a better effect can be obtained, and the living body detection device can be more suitable for the change of the external environment. When the light source is used in scenes such as residential area access control, visible light and infrared light supplementing light sources need to be equipped. And respectively carrying out face detection in the visible image and the infrared image, and carrying out the next processing only when a face area is detected simultaneously. The face detection algorithm adopts an SSD-like structure, integrates an FPN module, sets an anchor according to the size and distribution characteristics of a detection target, and combines a lightweight backbone network to perform face detection. And then, the positions of key points of the human face, namely the centers of the left and right eyes, the left and right mouth corners and the nose tip, are positioned on the basis of human face detection. And judging whether the detected input face is a face picture with qualified quality or not by a face quality evaluation mode, and if the detected input face is the face picture with qualified quality, intercepting a face area on the image, expanding and carrying out affine transformation to obtain a front picture corresponding to the face.
Step 2, detecting key points and aligning faces;
the affine transformation is a kind of spatial rectangular coordinate transformation, which is a linear transformation from two-dimensional coordinates to two-dimensional coordinates. Affine transformations can be achieved by the composition of a series of atomic transformations, including translation, scaling, flipping, and rotation. In the method, 5 feature points are regressed in face detection, and the detected face is corrected relative to a standard input face. In the correction process, in order to keep the sizes of the input face pictures consistent as much as possible, the face image corrected relative to the standard face is zoomed, and the zooming proportion is that the eye distance of the actually detected human is divided by the eye distance of the standard face. And sending the corrected face image into a living body detection network for judgment.
And 3, feature extraction and logic output.
Face detection is performed separately in the visible light image and the infrared image; face quality evaluation and face alignment are applied to the detected face regions; and the aligned visible light and infrared face regions are fed into a deep neural network to judge whether the subject is a living body.
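A minimal PyTorch sketch of the kind of two-stream network implied here is given below: the aligned visible light and infrared crops each pass through a small convolutional branch, the pooled features are concatenated, and a linear classifier outputs live/spoof logits. The branch depth, channel counts and two-class output are assumptions; the patent does not specify the network architecture.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """One stride-2 convolutional stage: conv + batch norm + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class DualStreamLivenessNet(nn.Module):
    """Two-branch network: one branch for the visible crop, one for the infrared crop."""

    def __init__(self):
        super().__init__()
        self.rgb_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.ir_branch = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(64 * 2, 2)  # logits: [spoof, live]

    def forward(self, rgb, ir):
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)
        f_ir = self.pool(self.ir_branch(ir)).flatten(1)
        return self.classifier(torch.cat([f_rgb, f_ir], dim=1))


# Usage on a pair of aligned 112x112 crops (batch of 1):
# model = DualStreamLivenessNet()
# logits = model(torch.randn(1, 3, 112, 112), torch.randn(1, 3, 112, 112))
# is_live = logits.softmax(dim=1)[0, 1] > 0.5
```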
The final output can be decided by requiring that several consecutive frames are all detected as live.
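Such a rule could look like the following sketch, where the requirement of five consecutive live frames is an assumed parameter.

```python
class MultiFrameDecision:
    """Declare 'live' only after N consecutive frames have been classified as live."""

    def __init__(self, required_consecutive: int = 5):
        self.required = required_consecutive
        self.streak = 0

    def update(self, frame_is_live: bool) -> bool:
        """Feed one per-frame result; return True once the live streak is long enough."""
        self.streak = self.streak + 1 if frame_is_live else 0
        return self.streak >= self.required
```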
Referring to fig. 3, fig. 3 is a schematic structural diagram of a static living body detection system according to the present invention. The system is used to implement the above method and includes:
the acquisition module 101 is used for acquiring visible light images and infrared images of a user;
the extraction module 102 is configured to perform face detection on both the visible light image and the infrared image, extract a visible light face region image from the visible light image if a face region is detected, and extract an infrared face region image from the infrared image;
the detection module 103 is configured to perform living body detection on the visible light face area image and the infrared face area image by using a deep neural network, so as to obtain a living body detection result.
Therefore, the system collects not only visible light images but also infrared images and uses both for face detection, so no user cooperation is required and the user experience is improved. In addition, the system can realize real-time, frictionless access control detection and is only slightly affected by external factors such as illumination.
Based on the system, further, the extraction module is specifically configured to perform face detection on both the visible light image and the infrared image by using a face detection algorithm.
Further, the extraction module comprises:
the detection unit is used for carrying out face detection on both the visible light image and the infrared image;
and the extraction unit is used for extracting a visible light face region image from the visible light image and extracting an infrared face region image from the infrared image when the face region is detected.
Further, the extraction unit includes:
the first evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image;
and the second evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
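A minimal Python sketch of how the acquisition, extraction and detection modules described above could be organized is shown below. The class names, method names and the interfaces of the camera, detector, aligner and model objects are illustrative assumptions, not elements defined by the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import numpy as np


@dataclass
class FramePair:
    visible: np.ndarray   # visible-light frame (e.g. BGR)
    infrared: np.ndarray  # infrared frame


class AcquisitionModule:
    """Acquires a synchronized visible-light / infrared frame pair (hypothetical camera API)."""

    def __init__(self, visible_cam, infrared_cam):
        self.visible_cam = visible_cam
        self.infrared_cam = infrared_cam

    def capture(self) -> FramePair:
        return FramePair(self.visible_cam.read(), self.infrared_cam.read())


class ExtractionModule:
    """Detects a face in both streams and returns aligned face region crops."""

    def __init__(self, detector, aligner):
        self.detector = detector  # assumed: detect(img) -> face or None
        self.aligner = aligner    # assumed: quality check + landmark-based alignment

    def extract(self, pair: FramePair) -> Optional[Tuple[np.ndarray, np.ndarray]]:
        vis_face = self.detector.detect(pair.visible)
        ir_face = self.detector.detect(pair.infrared)
        if vis_face is None or ir_face is None:
            return None  # proceed only when a face is found in both streams
        return (self.aligner.align(pair.visible, vis_face),
                self.aligner.align(pair.infrared, ir_face))


class DetectionModule:
    """Runs the deep-network liveness classifier on the aligned crops."""

    def __init__(self, model):
        self.model = model  # assumed: predict(vis_crop, ir_crop) -> live probability

    def is_live(self, vis_crop: np.ndarray, ir_crop: np.ndarray) -> bool:
        return self.model.predict(vis_crop, ir_crop) > 0.5
```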
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. For the system disclosed by the embodiment, the description is relatively simple because the system corresponds to the method disclosed by the embodiment, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The static living body detection method and system provided by the present invention have been described in detail above. The principles and embodiments of the present invention are explained herein using specific examples, which are presented only to assist in understanding the method and its core concepts. It should be noted that those skilled in the art can make various improvements and modifications to the present invention without departing from its principle, and such improvements and modifications also fall within the scope of the claims of the present invention.

Claims (8)

1. A static liveness detection method, comprising:
collecting a visible light image and an infrared image of a user;
carrying out face detection on both the visible light image and the infrared image, if a face region is detected, extracting the visible light face region image from the visible light image, and extracting the infrared face region image from the infrared image;
and performing living body detection on the visible light face area image and the infrared face area image by adopting a deep neural network to obtain a living body detection result.
2. The method of claim 1, wherein the face detection algorithm is used to perform face detection on both the visible light image and the infrared image.
3. The method of claim 1, wherein extracting the visible light face region image from the visible light image comprises:
and carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image.
4. The method of claim 1, wherein the extracting the infrared face region image from the infrared image comprises:
and performing face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
5. A static liveness detection system for implementing the method according to any one of claims 1 to 4, comprising:
the acquisition module is used for acquiring visible light images and infrared images of a user;
the extraction module is used for carrying out face detection on both the visible light image and the infrared image, extracting the visible light face region image from the visible light image and extracting the infrared face region image from the infrared image if the face regions are detected;
and the detection module is used for performing living body detection on the visible light face area image and the infrared face area image by adopting a deep neural network to obtain a living body detection result.
6. The system of claim 5, wherein the extraction module is specifically configured to perform face detection on both the visible light image and the infrared image using a face detection algorithm.
7. The system of claim 5, wherein the extraction module comprises:
the detection unit is used for carrying out face detection on both the visible light image and the infrared image;
and the extraction unit is used for extracting a visible light face region image from the visible light image and extracting an infrared face region image from the infrared image when the face region is detected.
8. The system of claim 7, wherein the extraction unit comprises:
the first evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the visible light image to obtain a visible light face area image;
and the second evaluation subunit is used for carrying out face quality evaluation, face key point detection and face alignment on the infrared image to obtain an infrared face area image.
CN202011014676.8A 2020-09-24 2020-09-24 Static living body detection method and system Pending CN112132046A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011014676.8A CN112132046A (en) 2020-09-24 2020-09-24 Static living body detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011014676.8A CN112132046A (en) 2020-09-24 2020-09-24 Static living body detection method and system

Publications (1)

Publication Number Publication Date
CN112132046A true CN112132046A (en) 2020-12-25

Family

ID=73840867

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011014676.8A Pending CN112132046A (en) 2020-09-24 2020-09-24 Static living body detection method and system

Country Status (1)

Country Link
CN (1) CN112132046A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108985134A (en) * 2017-06-01 2018-12-11 重庆中科云丛科技有限公司 Face In vivo detection and brush face method of commerce and system based on binocular camera
WO2020078243A1 (en) * 2018-10-19 2020-04-23 阿里巴巴集团控股有限公司 Image processing and face image identification method, apparatus and device
CN111353326A (en) * 2018-12-20 2020-06-30 上海聚虹光电科技有限公司 In-vivo detection method based on multispectral face difference image
CN109871773A (en) * 2019-01-21 2019-06-11 深圳市云眸科技有限公司 Biopsy method, device and door access machine
CN110443192A (en) * 2019-08-01 2019-11-12 中国科学院重庆绿色智能技术研究院 A kind of non-interactive type human face in-vivo detection method and system based on binocular image
CN111680588A (en) * 2020-05-26 2020-09-18 广州多益网络股份有限公司 Human face gate living body detection method based on visible light and infrared light
CN111695509A (en) * 2020-06-12 2020-09-22 云从科技集团股份有限公司 Identity authentication method, identity authentication device, machine readable medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张鹤 et al.: "基于双摄像头下的活体人脸检测方法" [Living-body face detection method based on dual cameras], 《软件》 [Software] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989903A (en) * 2021-11-15 2022-01-28 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium
CN113989903B (en) * 2021-11-15 2023-08-29 北京百度网讯科技有限公司 Face living body detection method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
Galdi et al. Multimodal authentication on smartphones: Combining iris and sensor recognition for a double check of user identity
CN110008813B (en) Face recognition method and system based on living body detection technology
Liu A camera phone based currency reader for the visually impaired
CN106850648B (en) Identity verification method, client and service platform
CN109271950B (en) Face living body detection method based on mobile phone forward-looking camera
US11256902B2 (en) People-credentials comparison authentication method, system and camera
CN102129554B (en) Method for controlling password input based on eye-gaze tracking
CN104348778A (en) Remote identity authentication system, terminal and method carrying out initial face identification at handset terminal
CN111144277B (en) Face verification method and system with living body detection function
WO2019216091A1 (en) Face authentication device, face authentication method, and face authentication system
CN111753271A (en) Account opening identity verification method, account opening identity verification device, account opening identity verification equipment and account opening identity verification medium based on AI identification
CN107609515B (en) Double-verification face comparison system and method based on Feiteng platform
CN111325175A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN111178233A (en) Identity authentication method and device based on living body authentication
CN110599187A (en) Payment method and device based on face recognition, computer equipment and storage medium
Naveen et al. Face recognition and authentication using LBP and BSIF mask detection and elimination
CN108491768A (en) The anti-fraud attack method of corneal reflection face authentication, face characteristic Verification System
JP2023521254A (en) Image processing device, image processing method, and program
CN205318544U (en) Device and system are prevented cheaing by ATM based on three dimensional face identification
CN108647650B (en) Human face in-vivo detection method and system based on corneal reflection and optical coding
CN112132046A (en) Static living body detection method and system
CN107025435A (en) A kind of face recognition processing method and system
Dhikhi et al. Credit card transaction based on face recognition technology
CN112861588A (en) Living body detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201225