CN113255562B - Human face living body detection method and system based on illumination difference elimination - Google Patents

Human face living body detection method and system based on illumination difference elimination

Info

Publication number
CN113255562B
CN113255562B (application CN202110647382.7A)
Authority
CN
China
Prior art keywords
living body
face image
face
information
deception
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110647382.7A
Other languages
Chinese (zh)
Other versions
CN113255562A (en)
Inventor
胡海峰
严文俊
曾莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202110647382.7A priority Critical patent/CN113255562B/en
Publication of CN113255562A publication Critical patent/CN113255562A/en
Application granted granted Critical
Publication of CN113255562B publication Critical patent/CN113255562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/168 Feature extraction; Face representation
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a face living body detection method and system based on illumination difference elimination. The method comprises the following steps: acquiring a face image and inputting it into a spoofing information generation branch to obtain spoofing information; inputting the face image into an illumination generation module to simulate different illumination conditions and extracting the corresponding face features; applying a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature; and combining the spoofing information with the common feature and making a judgment to obtain a judgment result. The system comprises a spoofing information unit, a feature extraction unit, a feature constraint unit and a judgment unit. By using the method and system, the influence of illumination on living body detection can be eliminated and the accuracy of face living body detection can be improved. The face living body detection method and system based on illumination difference elimination can be widely applied in the field of face image processing.

Description

Human face living body detection method and system based on illumination difference elimination
Technical Field
The application relates to the field of face image processing, in particular to a face living body detection method and system based on illumination difference elimination.
Background
Face living body detection refers to a technique for judging whether the face captured by a camera is a real face or a spoof/attack face (such as a printed face photo, video replay on a digital device, a 3D mask, or disguise through makeup). The technique has a wide range of applications: it can be used in any scenario that requires face detection or face recognition, judging whether the captured face is a real face before recognition is performed; typical scenarios include financial payment, access control systems, and the like. Existing face living body detection methods are highly susceptible to interfering difference information such as illumination changes, so their recognition accuracy in practical application scenarios is low.
Disclosure of Invention
To solve the above technical problems, the application aims to provide a face living body detection method and system based on illumination difference elimination that maintain detection performance when faced with unknown attack types and illumination changes.
The first technical solution adopted by the application is as follows: a face living body detection method based on illumination difference elimination, comprising the following steps:
acquiring a face image and inputting it into a spoofing information generation branch to obtain spoofing information;
inputting the face image into an illumination generation module to simulate different illumination conditions and extracting the corresponding face features;
applying a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
and combining the spoofing information with the common feature and making a judgment to obtain a judgment result.
Further, the step of acquiring a face image and inputting it into the spoofing information generation branch to obtain spoofing information specifically includes:
acquiring a face image and adjusting its size;
inputting the resized face image into the spoofing information generation branch;
generating the spoofing information through a spoofing information encoding network and a spoofing information decoding network.
Further, the step of inputting the face image into the illumination generation module to simulate different illumination conditions and extracting the corresponding face features specifically includes:
inputting the face image into the illumination generation module to obtain pictures of the face image under 5 different illumination conditions, forming a picture sequence;
and passing each of the 5 pictures of the picture sequence through a feature extraction encoding network and a feature extraction decoding network to extract the corresponding face features.
Further, the feature extraction encoding network shares weights with the spoofing information encoding network, and the feature extraction decoding network shares weights with the spoofing information decoding network.
Further, the step of combining the spoofing information with the common feature and making a judgment to obtain a judgment result specifically includes:
performing feature fusion on the spoofing information and the common feature to obtain a living body feature;
and judging, according to the living body feature, whether the corresponding face image is a real face.
Further, the expression for the feature fusion is as follows:
T = αT_A + βT_B
where α and β are hyperparameters, T_A denotes the spoofing information, T_B denotes the common feature, and T denotes the living body feature.
Further, the step of judging, according to the living body feature, whether the corresponding face image is a real face specifically includes:
computing a score for the living body feature based on a pre-constructed classification task and comparing it with a preset threshold;
if the score of the living body feature is larger than the preset value, judging the corresponding face image to be a non-real face;
and if the score of the living body feature is not larger than the preset value, judging the corresponding face image to be a real face.
The second technical solution adopted by the application is as follows: a face living body detection system based on illumination difference elimination, comprising:
a spoofing information unit, configured to acquire a face image and input it into a spoofing information generation branch to obtain spoofing information;
a feature extraction unit, configured to input the face image into an illumination generation module to simulate different illumination conditions and extract the corresponding face features;
a feature constraint unit, configured to apply a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
and a judgment unit, configured to combine the spoofing information with the common feature and make a judgment to obtain a judgment result.
The method and system have the following beneficial effects: through the illumination generation branch, image sequences under different illumination brightness conditions are generated for the same face image and their features are extracted, so that the model becomes insensitive to illumination information during living body detection and the influence of illumination on face living body detection is eliminated.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present application;
FIG. 2 is a flow chart of steps of a face living body detection method based on illumination difference elimination of the present application;
FIG. 3 is a structural block diagram of the face living body detection system based on illumination difference elimination according to the present application.
Detailed Description
The application will now be described in further detail with reference to the drawings and to specific examples. The step numbers in the following embodiments are set for convenience of description only and do not limit the order of the steps in any way; the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
Referring to FIG. 1 and FIG. 2, the present application provides a face living body detection method based on illumination difference elimination, the method comprising the following steps:
S1, acquiring a face image and inputting it into a spoofing information generation branch to obtain spoofing information;
S2, inputting the face image into an illumination generation module to simulate different illumination conditions and extracting the corresponding face features;
S3, applying a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
and S4, combining the spoofing information with the common feature and making a judgment to obtain a judgment result.
Specifically, during model training, different face images are input, the corresponding features are extracted as in steps S1 to S3, whether the face is a real face is judged according to the living body feature obtained in step S4, and the prediction is compared with the ground-truth label data to train the model.
Further, as a preferred embodiment of the method, the step of acquiring a face image and inputting it into the spoofing information generation branch to obtain spoofing information specifically includes:
acquiring a face image and adjusting its size;
Specifically, the face image is uniformly resized to 224×224 before being input into the encoding network. Because the image is in RGB format, the input image is converted into a three-dimensional array of shape (3, 224, 224), where "3" is the number of RGB color channels and "224, 224" is the spatial size of the image.
inputting the resized face image into the spoofing information generation branch;
generating the spoofing information through a spoofing information encoding network and a spoofing information decoding network.
The spoofing information encoding-decoding network is composed of residual networks. An anomaly detection mechanism is used in which a real face is regarded as a normal sample: the spoofing information contained in a real face is assumed to be compactly distributed around the origin of the feature space, whereas a spoof face, regardless of the attack type, contains spoofing information that deviates from this normal value and is scattered in the space away from the origin. Accordingly, during training a norm constraint is imposed on the feature information T_Ai (i = 1, ..., N) of the N input real faces so as to pull these features toward the origin.
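The patent text does not reproduce the exact network configuration or the explicit form of this norm constraint, so the following PyTorch-style sketch is illustrative only: it assumes a small residual encoder-decoder for the spoofing information branch and a one-norm compactness term on the real-face features T_Ai, consistent with the one-norm score defined later. The module widths and the names ResidualBlock, SpoofBranch and real_face_compactness_loss are assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Basic residual block used to build the spoofing information encoder and decoder."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class SpoofBranch(nn.Module):
    """Maps a (3, 224, 224) face image to a spoofing information feature vector T_A."""

    def __init__(self, width=64, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, width, kernel_size=7, stride=2, padding=3),      # 224 -> 112
            ResidualBlock(width),
            nn.Conv2d(width, width, kernel_size=3, stride=2, padding=1),  # 112 -> 56
            ResidualBlock(width),
        )
        self.decoder = nn.Sequential(  # decodes the latent map into a spoof-cue feature vector
            ResidualBlock(width),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(width, feat_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def real_face_compactness_loss(t_a_real):
    """One-norm constraint pulling the spoof features T_Ai of the N real faces in a batch
    toward the origin, so that real-face spoofing information stays compact.
    t_a_real: tensor of shape (N, feat_dim)."""
    return t_a_real.abs().sum(dim=1).mean()
```

With a constraint of this kind, attack faces, whose spoofing information is not pulled toward the origin, naturally yield features with larger norms, which is what the score defined later exploits.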
further, as a preferred embodiment of the method, the step of inputting the face image to the illumination generating module to simulate different illumination conditions and respectively extract corresponding face features specifically includes:
inputting the face image into an illumination generating module, obtaining pictures of the face image under 5 different illumination conditions and forming a picture sequence;
and respectively carrying out feature extraction encoding network and feature extraction decoding network on 5 pictures of the picture sequence, and extracting to obtain corresponding face features.
Specifically, the encoding-decoding network used to extract features from the image sequence shares its weights with the encoding-decoding network used in step S1. The input image sequence comprises 5 images of the same face under different illumination conditions, from which the 5 face features T_B1 to T_B5 are extracted. The theoretical basis of the illumination generation module is that the color information of a face is determined by the face's ability to reflect light of different wavelengths; however, because of the influence of illumination, the observed image S(x, y) is actually a mixture of the reflectance image R(x, y) and the brightness (illumination) image L(x, y), which can be expressed by the following formula:
logS(x,y)=logR(x,y)+logL(x,y)
In the above decomposition, the illumination component is estimated with a center-surround function F(x, y): convolving F(x, y) with S(x, y) yields a weighted average of each pixel and its surrounding area, which is taken as the amount of illumination in the image, i.e. L(x, y) ≈ F(x, y) * S(x, y). Thus, by changing the form of F(x, y), the picture sequence B1 to B5 can be constructed.
Then, cosine similarity is used to constrain these features so that the differences among them are driven to 0, and the face feature T_B with the illumination difference eliminated is obtained.
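The text does not spell out how the similarity constraint is aggregated over the five features; one plausible reading, sketched below, penalizes the deviation of each pairwise cosine similarity from 1 and takes the mean of T_B1 to T_B5 as the common feature T_B. The function name and the mean-pooling choice are assumptions.

```python
import torch
import torch.nn.functional as F


def common_feature_and_similarity_loss(features):
    """features: list of the five tensors T_B1..T_B5, each of shape (batch, feat_dim),
    produced by the weight-shared feature extraction network from pictures B1-B5."""
    loss = features[0].new_zeros(())
    pairs = 0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            # Push each pairwise cosine similarity toward 1, i.e. the difference toward 0.
            loss = loss + (1.0 - F.cosine_similarity(features[i], features[j], dim=1)).mean()
            pairs += 1
    t_b = torch.stack(features, dim=0).mean(dim=0)  # illumination-invariant common feature T_B
    return t_b, loss / pairs
```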
Further, as a preferred embodiment of the method, the feature extraction encoding network shares weights with the spoofing information encoding network, and the feature extraction decoding network shares weights with the spoofing information decoding network.
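In practice, such weight sharing can be obtained simply by reusing a single module instance for both branches rather than instantiating two copies; a minimal sketch, reusing the hypothetical SpoofBranch class from the earlier sketch:

```python
# Reusing one SpoofBranch instance for both branches means the feature extraction
# encoder/decoder and the spoofing information encoder/decoder share weights by construction.
shared_branch = SpoofBranch()


def extract_branch_features(face_image, illumination_sequence):
    """face_image: (batch, 3, 224, 224) tensor; illumination_sequence: list of five such tensors."""
    t_a = shared_branch(face_image)                                   # spoofing information T_A
    t_b_list = [shared_branch(img) for img in illumination_sequence]  # T_B1..T_B5
    return t_a, t_b_list
```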
Further, as a preferred embodiment of the method, the step of combining the spoofing information with the common feature and making a judgment to obtain a judgment result specifically includes:
performing feature fusion on the spoofing information and the common feature to obtain a living body feature;
and judging, according to the living body feature, whether the corresponding face image is a real face.
Further, as a preferred embodiment of the method, the expression for the feature fusion is as follows:
T = αT_A + βT_B
where α and β are hyperparameters, T_A denotes the spoofing information, T_B denotes the common feature, and T denotes the living body feature.
Further, as a preferred embodiment of the method, the step of judging, according to the living body feature, whether the corresponding face image is a real face specifically includes:
computing a score for the living body feature based on a pre-constructed classification task and comparing it with a preset threshold;
if the score of the living body feature is larger than the preset value, judging the corresponding face image to be a non-real face;
and if the score of the living body feature is not larger than the preset value, judging the corresponding face image to be a real face.
Specifically, the score of the living body feature, score(T), can be computed as:
score(T) = ||T||_1
where ||·||_1 denotes the one-norm operation. Clearly, if the input face image is an attack face, the more spoofing information it contains, the larger the value of score(T); if it is a real face, the value of score(T) is small.
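Putting the fusion, scoring and thresholding steps together, a minimal decision sketch might look as follows; the values of α, β and the threshold are illustrative assumptions, since the patent leaves them as tunable hyperparameters and a preset value.

```python
import torch


def liveness_decision(t_a, t_b, alpha=1.0, beta=1.0, threshold=0.5):
    """Fuse T = alpha * T_A + beta * T_B, score it with the one-norm, and threshold it.
    Returns a boolean tensor that is True where the face is judged to be real."""
    t = alpha * t_a + beta * t_b        # T = alpha * T_A + beta * T_B
    score = t.abs().sum(dim=1)          # score(T) = ||T||_1
    return score <= threshold           # larger score -> attack face; not larger -> real face
```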
As shown in FIG. 3, a face living body detection system based on illumination difference elimination includes:
a spoofing information unit, configured to acquire a face image and input it into a spoofing information generation branch to obtain spoofing information;
a feature extraction unit, configured to input the face image into an illumination generation module to simulate different illumination conditions and extract the corresponding face features;
a feature constraint unit, configured to apply a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
and a judgment unit, configured to combine the spoofing information with the common feature and make a judgment to obtain a judgment result.
The content of the method embodiment applies to the system embodiment; the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are the same as those of the method embodiment.
While the preferred embodiment of the present application has been described in detail, the application is not limited to the embodiment, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the application, and these equivalent modifications and substitutions are intended to be included in the scope of the present application as defined in the appended claims.

Claims (5)

1. A face living body detection method based on illumination difference elimination, characterized by comprising the following steps:
acquiring a face image and inputting it into a spoofing information generation branch to obtain spoofing information;
inputting the face image into an illumination generation module to simulate different illumination conditions and extracting the corresponding face features;
applying a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
combining the spoofing information with the common feature and making a judgment to obtain a judgment result;
wherein the step of acquiring a face image and inputting it into the spoofing information generation branch to obtain spoofing information specifically includes:
acquiring a face image and adjusting its size;
inputting the resized face image into the spoofing information generation branch;
generating the spoofing information through a spoofing information encoding network and a spoofing information decoding network;
and the step of combining the spoofing information with the common feature and making a judgment to obtain a judgment result specifically includes:
performing feature fusion on the spoofing information and the common feature to obtain a living body feature;
computing a score for the living body feature based on a pre-constructed classification task and comparing it with a preset threshold;
if the score of the living body feature is larger than the preset value, judging the corresponding face image to be a non-real face;
and if the score of the living body feature is not larger than the preset value, judging the corresponding face image to be a real face.
2. The face living body detection method based on illumination difference elimination according to claim 1, characterized in that the step of inputting the face image into the illumination generation module to simulate different illumination conditions and extracting the corresponding face features includes:
inputting the face image into the illumination generation module to obtain pictures of the face image under 5 different illumination conditions, forming a picture sequence;
and passing each of the 5 pictures of the picture sequence through a feature extraction encoding network and a feature extraction decoding network to extract the corresponding face features.
3. The face living body detection method based on illumination difference elimination according to claim 2, characterized in that the feature extraction encoding network shares weights with the spoofing information encoding network, and the feature extraction decoding network shares weights with the spoofing information decoding network.
4. The face living body detection method based on illumination difference elimination according to claim 3, characterized in that the expression for the feature fusion is as follows:
T = αT_A + βT_B
where α and β are hyperparameters, T_A denotes the spoofing information, T_B denotes the common feature, and T denotes the living body feature.
5. A face living body detection system based on illumination difference elimination, characterized by comprising the following units:
a spoofing information unit, configured to acquire a face image and input it into a spoofing information generation branch to obtain spoofing information;
wherein acquiring a face image and inputting it into the spoofing information generation branch to obtain spoofing information specifically includes: acquiring a face image and adjusting its size; inputting the resized face image into the spoofing information generation branch; and generating the spoofing information through a spoofing information encoding network and a spoofing information decoding network;
a feature extraction unit, configured to input the face image into an illumination generation module to simulate different illumination conditions and extract the corresponding face features;
a feature constraint unit, configured to apply a similarity constraint to the face features corresponding to the different illumination conditions to obtain a common feature;
a judgment unit, configured to combine the spoofing information with the common feature and make a judgment to obtain a judgment result;
wherein combining the spoofing information with the common feature and making a judgment to obtain a judgment result specifically includes: performing feature fusion on the spoofing information and the common feature to obtain a living body feature; computing a score for the living body feature based on a pre-constructed classification task and comparing it with a preset threshold; if the score of the living body feature is larger than the preset value, judging the corresponding face image to be a non-real face; and if the score of the living body feature is not larger than the preset value, judging the corresponding face image to be a real face.
CN202110647382.7A 2021-06-10 2021-06-10 Human face living body detection method and system based on illumination difference elimination Active CN113255562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110647382.7A CN113255562B (en) 2021-06-10 2021-06-10 Human face living body detection method and system based on illumination difference elimination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110647382.7A CN113255562B (en) 2021-06-10 2021-06-10 Human face living body detection method and system based on illumination difference elimination

Publications (2)

Publication Number Publication Date
CN113255562A CN113255562A (en) 2021-08-13
CN113255562B true CN113255562B (en) 2023-10-20

Family

ID=77187506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110647382.7A Active CN113255562B (en) 2021-06-10 2021-06-10 Human face living body detection method and system based on illumination difference elimination

Country Status (1)

Country Link
CN (1) CN113255562B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767877A (en) * 2020-07-03 2020-10-13 北京视甄智能科技有限公司 Living body detection method based on infrared features
CN112580576A (en) * 2020-12-28 2021-03-30 华南理工大学 Face spoofing detection method and system based on multiscale illumination invariance texture features
CN112668519A (en) * 2020-12-31 2021-04-16 声耕智能科技(西安)研究院有限公司 Abnormal face recognition living body detection method and system based on MCCAE network and Deep SVDD network

Also Published As

Publication number Publication date
CN113255562A (en) 2021-08-13


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant