CN114596615B - Face living body detection method, device, equipment and medium based on countermeasure learning - Google Patents

Face living body detection method, device, equipment and medium based on countermeasure learning

Info

Publication number
CN114596615B
Authority
CN
China
Prior art keywords
face
face picture
picture
fake
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210212683.1A
Other languages
Chinese (zh)
Other versions
CN114596615A (en)
Inventor
Ren Tuo (任拓)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
National University of Defense Technology
Original Assignee
Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
Priority to CN202210212683.1A
Publication of CN114596615A
Application granted
Publication of CN114596615B
Legal status: Active (current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a face living body detection method, apparatus, computer device and storage medium based on countermeasure learning. The method comprises the following steps: preprocessing a face picture by the face alignment method in the dlib library to obtain a face correction picture; classifying the face correction pictures to obtain real face pictures and fake face pictures; constructing data pairs from the real face picture and the fake face picture of the same person to obtain face data pairs; inputting the face data pairs into the generator of a countermeasure network to obtain forged trace elements and feature maps; and linearly weighting the feature map and the trace elements to obtain a value used to judge whether the face picture is a living body. The method can improve the accuracy of face living body detection.

Description

Face living body detection method, device, equipment and medium based on countermeasure learning
Technical Field
The present disclosure relates to the field of computer vision, and in particular, to a face living body detection method, apparatus, computer device, and storage medium based on countermeasure learning.
Background
With the progress of the times, computer technology is developing toward intelligence, and computer vision technology is making breakthroughs; among these, identity authentication technologies based on various biological characteristics of the human body have attracted wide attention. As face recognition technology develops and is applied to an increasing range of fields and scenarios, spoofing attacks against faces have also emerged, creating serious security problems: the safety of recognition equipment is threatened, and leakage of a user's personal data and economic loss may result. A stable and reliable face authentication system is therefore needed, one that can accurately judge the relevant information of a face and recognize various spoofing means.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a face living body detection method, apparatus, computer device, and storage medium based on countermeasure learning that can improve the accuracy of face living body detection.
A face in-vivo detection method based on countermeasure learning, the method comprising:
acquiring a face picture shot by a camera;
preprocessing a face picture by a face alignment method in a dlib library to obtain a face correction picture;
classifying the face correction pictures to obtain real face pictures and fake face pictures;
carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs;
linearly reconstructing according to the fake trace elements and the human face data pairs to obtain a reconstructed real human face picture and a fitting fake human face picture;
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture;
training an initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network, and inputting face data into the trained countermeasure network to obtain optimized fake trace elements and feature images;
linearly summing the optimized fake trace elements and the feature images to obtain the final score of the face picture;
and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
In one embodiment, a trained countermeasure network includes a generator and a discriminator; the discriminator includes a primary discriminator and a secondary discriminator; the main discriminator is used for restricting the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator and is used for improving the generation of facial five sense organs details and discriminating facial five sense organs in a facial picture.
In one embodiment, inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring, obtaining an initial score of the face picture, including:
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of a trained countermeasure network to score the mask of the facial skin and the mask of the facial five sense organs of the face picture, and obtaining the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a basic loss function, a loss constraint, and a loss constraint of the face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), where mask2 represents the mask generated for the facial features of the face, I_j represents the face picture, and I'_j represents the face picture generated by the generator.
In one embodiment, the loss functions of the discriminators include the loss function of the primary discriminator and the loss function of the secondary discriminator.
In one embodiment, the loss function of the main discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), where D represents the discriminator and mask1 represents the mask generated for the facial skin of the face.
In one embodiment, the loss function of the auxiliary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
A face in-vivo detection apparatus based on countermeasure learning, the apparatus comprising:
the image preprocessing module is used for acquiring face images shot by the camera; preprocessing a face picture by a face alignment method in a dlib library to obtain a face correction picture;
the data pair construction module is used for classifying the face correction pictures to obtain real face pictures and fake face pictures; carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
the linear reconstruction module is used for inputting the face data pairs into a generator of the initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs; linearly reconstructing according to the fake trace elements and the human face data pairs to obtain a reconstructed real human face picture and a fitting fake human face picture;
the initial scoring module is used for inputting the real face picture, the fake face picture, the reconstructed real face picture and the fitting fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture;
the fake trace element optimization module is used for training the initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network; inputting the face data into a trained countermeasure network to obtain optimized fake trace elements and feature graphs;
the living body detection module is used for carrying out linear summation on the optimized fake trace elements and the feature images to obtain the final score of the face picture; and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a face picture shot by a camera;
preprocessing a face picture by a face alignment method in a dlib library to obtain a face correction picture;
classifying the face correction pictures to obtain real face pictures and fake face pictures;
carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs;
linearly reconstructing according to the fake trace elements and the human face data pairs to obtain a reconstructed real human face picture and a fitting fake human face picture;
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture;
training an initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network, and inputting face data into the trained countermeasure network to obtain optimized fake trace elements and feature images;
linearly summing the optimized fake trace elements and the feature images to obtain the final score of the face picture;
and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a face picture shot by a camera;
preprocessing a face picture by a face alignment method in a dlib library to obtain a face correction picture;
classifying the face correction pictures to obtain real face pictures and fake face pictures;
carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs;
linearly reconstructing according to the fake trace elements and the human face data pairs to obtain a reconstructed real human face picture and a fitting fake human face picture;
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture;
training an initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network, and inputting face data into the trained countermeasure network to obtain optimized fake trace elements and feature images;
linearly summing the optimized fake trace elements and the feature images to obtain the final score of the face picture;
and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
According to the face living body detection method, apparatus, computer device and storage medium based on countermeasure learning, the face picture is first preprocessed by the face alignment method in the dlib library to obtain a face correction picture, so that the face is upright and facial feature extraction is facilitated; the face correction pictures are then classified to obtain real face pictures and fake face pictures, and data pairs are constructed from the real face picture and the fake face picture of the same person to obtain face data pairs. The face data pairs are input into the generator of an initial countermeasure network to obtain forged trace elements and feature maps between the face data pairs. The real face picture, the fake face picture, the reconstructed real face picture and the fitted fake face picture are then input into the main discriminator and the auxiliary discriminator of the initial countermeasure network for scoring, giving an initial score of the face picture. By adding an auxiliary discriminator to the countermeasure network, the facial five sense organs are analysed separately: the auxiliary discriminator is mainly used to improve the generation of five-sense-organ details, so that not only the global information of the face but also its detail information is attended to, and the scoring of the face picture is therefore more accurate. The initial countermeasure network is trained according to the initial score of the face picture to obtain a trained countermeasure network, and the face data are input into the trained countermeasure network to obtain more accurate optimized forged trace elements and feature maps, which are linearly summed to obtain the final score of the face picture; the final score is then used to detect whether the face picture is a living body. In this method, a main discriminator and an auxiliary discriminator are provided, and masks over the five sense organs and the skin are generated from the key points of the face, so that the two discriminators handle the detail problems of the five sense organs and of the skin independently. In the learning process the generator therefore attends to the details of the five sense organs and the skin as well as to the global features, which further strengthens the five-sense-organ and skin features of forged face pictures and facilitates face living body detection; after the initial countermeasure network is trained according to the initial score of the face picture, more accurate forged trace elements and feature maps can be obtained, and hence a more accurate final score of the face picture, which improves the accuracy of face living body detection.
Drawings
FIG. 1 is a flow chart of a face in-vivo detection method based on countermeasure learning in one embodiment;
FIG. 2 is an effect diagram of a mask of facial features generated by a face in one embodiment;
FIG. 3 is a graph of the effect of a trained countermeasure network in one embodiment;
FIG. 4 is a block diagram of a face in-vivo detection apparatus based on countermeasure learning in one embodiment;
fig. 5 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a face in-vivo detection method based on countermeasure learning, including the steps of:
102, acquiring a face picture shot by a camera; preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture.
The face picture is preprocessed by the face alignment method in the dlib library: a tilted face is warped toward a frontal picture, and the picture is rotated by aligning the two outer eye corners of the eyes on the face so that the face becomes upright. This yields the face correction picture and facilitates facial feature extraction.
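Purely as an illustration and not part of the original disclosure, the alignment step can be sketched in Python as follows, assuming dlib's 68-point shape predictor (indices 36 and 45 are the outer eye corners in that landmark scheme); the model file path is a placeholder:

```python
# Minimal sketch of eye-corner-based face alignment with dlib; not the patent's implementation.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_face(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None                                   # no face found
    shape = predictor(gray, faces[0])
    left = np.array([shape.part(36).x, shape.part(36).y], dtype=np.float64)
    right = np.array([shape.part(45).x, shape.part(45).y], dtype=np.float64)
    # Angle of the line joining the two outer eye corners; rotating the image by this
    # angle (OpenCV convention) levels the eye line, making the face upright.
    angle = np.degrees(np.arctan2(right[1] - left[1], right[0] - left[0]))
    center = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image_bgr.shape[:2]
    return cv2.warpAffine(image_bgr, rot, (w, h), flags=cv2.INTER_LINEAR)
```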
Step 104, classifying the face correction pictures to obtain real face pictures and fake face pictures; and constructing the data pair of the real face picture and the fake face picture of the same person to obtain a face data pair.
The real face picture and the fake face picture of the same person are input into the generator in the form of data pairs, so that the generator can conveniently generate a corresponding feature map under the corresponding resolution and generate fake trace elements.
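For illustration, one minimal way to build such data pairs is sketched below; the directory layout and the file-naming convention (person id as filename prefix) are hypothetical assumptions, not details taken from the patent:

```python
# Sketch: pair real and fake pictures of the same person into (real, fake) data pairs.
from collections import defaultdict
from itertools import product
import os

def build_face_pairs(real_dir, fake_dir):
    by_person = defaultdict(lambda: {"real": [], "fake": []})
    for kind, folder in (("real", real_dir), ("fake", fake_dir)):
        for name in os.listdir(folder):
            person_id = name.split("_")[0]            # assumed "<person>_<...>.jpg" naming
            by_person[person_id][kind].append(os.path.join(folder, name))
    pairs = []
    for person_id, groups in by_person.items():
        # Every real picture of a person is paired with every fake picture of the same person.
        pairs.extend(product(groups["real"], groups["fake"]))
    return pairs
```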
Step 106, inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs; and linearly reconstructing according to the forged trace elements and the face data pairs to obtain a reconstructed real face picture and a fitting forged face picture.
The feature map contains face data pair information, and the linear reconstruction means that the fake elements generated by the generator and the face picture are subjected to linear weighting to obtain a reconstructed real face picture and a fitting fake face picture.
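A minimal PyTorch-style sketch of this linear weighting is given below; it assumes the generator's forged trace map has the same shape as the input pictures, and the additive form and weight are assumptions rather than the patent's exact formulation:

```python
# Sketch of linear reconstruction from a forged trace map; coefficients are assumptions.
import torch

def linear_reconstruction(real_img, fake_img, trace, weight=1.0):
    # Removing the forged trace from the fake picture approximates the real face;
    # adding it to the real picture fits a fake face.
    recon_real = torch.clamp(fake_img - weight * trace, 0.0, 1.0)
    fit_fake = torch.clamp(real_img + weight * trace, 0.0, 1.0)
    return recon_real, fit_fake
```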
Step 108, inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture;
the primary discriminator and the auxiliary discriminator score the skin mask and the facial mask, respectively, of the face picture, and determine from these two perspectives whether the input is a generator-generated result. If the true face picture is reconstructed and the fake trace element of the generator with the highest score as possible can well represent the trace of the fake face picture, the fake trace on the face picture can be effectively dissociated by the description generator, meanwhile, the scoring of the face picture by using the main discriminator and the auxiliary discriminator is more accurate through the feedback of the generator data, and the face detection accuracy is further improved. Meanwhile, the auxiliary discriminator is mainly used for improving the generation of facial feature details, the generator can better synthesize a real face and fit fake face pictures when performing countermeasure network training, not only the global information of the face is noted, but also the detail information of the face is noted, and the generated reconstructed real face picture and the fit fake face picture have more realism when linear reconstruction is performed according to fake trace elements and face data.
Step 110, training the initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network, and inputting face data into the trained countermeasure network to obtain optimized fake trace elements and feature images.
Whether a picture was produced by the generator is judged according to the initial score of the face picture. The score lies between 0 and 1: face pictures from the data set score close to 1 and pictures produced by the generator score close to 0, and the labels used in the loss calculation are 1 for data-set pictures and 0 for generated pictures. The initial score of the face picture is used to train the initial countermeasure network: if the initial score of a generated picture is low, the loss of the generator increases and the loss of the discriminator decreases; if the initial score is high, the loss of the generator decreases and the loss of the discriminator increases. During training, the discriminator and the generator are trained alternately; while one side is trained, the parameters of the other side are frozen and its gradients are not updated. After the initial countermeasure network is trained in this way, the face data are input into the trained countermeasure network to obtain the optimized forged trace elements and feature map.
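A minimal PyTorch-style sketch of this alternating schedule is shown below; the generator G, the two discriminators, the optimizers and the loss functions are placeholders assumed to exist elsewhere, not names taken from the patent:

```python
# Sketch of alternating adversarial training: one side is updated while the other is frozen.
import torch

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def train_step(G, D_main, D_aux, opt_G, opt_D, batch, g_loss_fn, d_loss_fn):
    real, fake = batch

    # 1) Update the discriminators with the generator frozen (its gradients are not updated).
    set_requires_grad(G, False)
    set_requires_grad(D_main, True); set_requires_grad(D_aux, True)
    opt_D.zero_grad()                     # opt_D is assumed to cover both discriminators
    d_loss = d_loss_fn(G, D_main, D_aux, real, fake)
    d_loss.backward()
    opt_D.step()

    # 2) Update the generator with both discriminators frozen.
    set_requires_grad(G, True)
    set_requires_grad(D_main, False); set_requires_grad(D_aux, False)
    opt_G.zero_grad()
    g_loss = g_loss_fn(G, D_main, D_aux, real, fake)
    g_loss.backward()
    opt_G.step()
    return g_loss.item(), d_loss.item()
```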
Step 112, linearly summing the optimized fake trace elements and the feature map to obtain a final score of the face picture; and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
During training of the feature map, the real-face distribution is pushed as close to 0 as possible and the forged distribution as close to 1 as possible. The average value of the feature map and the average value of the forged trace elements are then used to calculate the score by weighted judgment, giving the final score of the face picture, a value between 0 and 1 with a threshold of 0.5: if the final score of the face picture is greater than 0.5 the face picture is judged to be a living body, otherwise it is judged not to be a living body. The final score is calculated as follows:
(Formula: the final score is a weighted combination of the average value of the feature map and the average value of the forged trace elements, weighted by the hyperparameter α.)
where M denotes the size of the feature map, K denotes the forged trace elements, N denotes the size of the trace picture, and α is a hyperparameter.
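For illustration only, the scoring rule described in this step can be sketched as follows; since the exact weighting in the patent's formula is not reproduced above, the way the two averages are combined is an assumption:

```python
# Sketch of the final scoring step; the alpha weighting is an assumption, the 0.5 threshold
# follows the text above.
import torch

def liveness_decision(feature_map, trace_map, alpha=0.5, threshold=0.5):
    # Linear combination of the mean feature-map value and the mean forged-trace value.
    score = feature_map.mean() + alpha * trace_map.mean()
    is_live = bool(score > threshold)      # greater than 0.5 is judged a living body, per the text
    return float(score), is_live
```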
According to the above face living body detection method based on countermeasure learning, the face picture is first preprocessed by the face alignment method in the dlib library to obtain a face correction picture, so that the face is upright and facial feature extraction is facilitated; the face correction pictures are then classified to obtain real face pictures and fake face pictures, and data pairs are constructed from the real face picture and the fake face picture of the same person to obtain face data pairs. The face data pairs are input into the generator of an initial countermeasure network to obtain forged trace elements and feature maps between the face data pairs. The real face picture, the fake face picture, the reconstructed real face picture and the fitted fake face picture are then input into the main discriminator and the auxiliary discriminator of the initial countermeasure network for scoring, giving an initial score of the face picture. By adding an auxiliary discriminator to the countermeasure network, the facial five sense organs are analysed separately: the auxiliary discriminator is mainly used to improve the generation of five-sense-organ details, so that not only the global information of the face but also its detail information is attended to, and the scoring of the face picture is therefore more accurate. The initial countermeasure network is trained according to the initial score of the face picture to obtain a trained countermeasure network, and the face data are input into the trained countermeasure network to obtain more accurate optimized forged trace elements and feature maps, which are linearly summed to obtain the final score of the face picture; the final score is then used to detect whether the face picture is a living body. By providing a main discriminator and an auxiliary discriminator, and by generating masks over the five sense organs and the skin from the key points of the face, the two discriminators handle the detail problems of the five sense organs and of the skin independently, so that in the learning process the generator attends to these details as well as to the global features. This further strengthens the five-sense-organ and skin features of forged face pictures, facilitates face living body detection, and, after the initial countermeasure network has been trained on the initial scores, yields more accurate forged trace elements, feature maps and final scores, improving the accuracy of face living body detection.
In one embodiment, a trained countermeasure network includes a generator and a discriminator; the discriminator includes a primary discriminator and a secondary discriminator; the main discriminator is used for restricting the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator and is used for improving the generation of facial five sense organs details and discriminating facial five sense organs in a facial picture. The effect diagram of the trained countermeasure network is shown in fig. 3, and the skin and the five sense organs of the human face can be identified.
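The patent does not disclose the concrete network architecture. Purely as an illustrative assumption, both the main and the auxiliary discriminator could share a simple convolutional structure such as the following, instantiated once per discriminator:

```python
# Illustrative convolutional discriminator skeleton (PyTorch); layer sizes are assumptions,
# since the patent does not specify the architecture.
import torch
import torch.nn as nn

class MaskDiscriminator(nn.Module):
    """Scores a masked face picture with a value in (0, 1)."""
    def __init__(self, in_channels=3, base=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(base * 4, 1), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.features(x))

# One instance for the facial skin (main) and one for the five sense organs (auxiliary).
D_main, D_aux = MaskDiscriminator(), MaskDiscriminator()
```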
In one embodiment, inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring, obtaining an initial score of the face picture, including:
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of a trained countermeasure network to score the mask of the facial skin and the mask of the facial five sense organs of the face picture, and obtaining the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a basic loss function, a loss constraint, and a loss constraint of the face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), where mask2 represents the mask generated for the facial features of the face, I_j represents the face picture, and I'_j represents the face picture generated by the generator.
The basic loss function L_G1 of the generator is shown in equation (1):
L_G1 = log(1 - D(mask1(I'_j))) + log(1 - D(mask2(I'_j)))    (1)
In equation (1), D represents the discriminator, I_j represents a face picture in the data set, I'_j represents the face picture generated by the generator, mask1 represents the mask generated for the facial skin of the face, and mask2 represents the mask generated for the facial features of the face; the effect diagram is shown in fig. 2 below:
in order to make the generated picture and the real picture as close as possible, an L1 loss constraint is added to the generated picture, so that the target picture achieves the expected effect as much as possible.
L_G2 = L1_Loss(I, I')    (2)
In order for the generator to take the facial detail information of the picture into account and pay attention to the other detail information of the face, the key points of the face are extracted by the face_alignment algorithm, face masks are generated from the 68 key points of the face, and an L1 loss constraint on the face masks is added to the generator.
L_G3 = L1_Loss(mask1(I_j), mask1(I'_j)) + L1_Loss(mask2(I_j), mask2(I'_j))    (3)
In equation (3), mask2 represents the mask generated for the facial five sense organs of the face, such as the person's two eyes, nose and mouth; a specific effect diagram is shown in fig. 2.
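As an illustration of the mask construction described above, the sketch below builds mask1 (facial skin) and mask2 (facial five sense organs) from the 68 keypoints returned by the face_alignment library; the landmark index groups and the "face hull minus feature regions" rule are assumptions, since the patent does not give the exact region definitions:

```python
# Sketch: skin and five-sense-organ masks from 68 facial landmarks (face_alignment library).
import cv2
import numpy as np
import face_alignment

# Older versions of the library use LandmarksType._2D instead of LandmarksType.TWO_D.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType.TWO_D, flip_input=False)

FEATURE_GROUPS = {                 # assumed landmark index grouping for eyes, nose, mouth
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "nose": range(27, 36),
    "mouth": range(48, 68),
}

def face_masks(image_rgb):
    landmarks = fa.get_landmarks(image_rgb)[0].astype(np.int32)   # (68, 2)
    h, w = image_rgb.shape[:2]
    mask1 = np.zeros((h, w), np.uint8)     # facial skin
    mask2 = np.zeros((h, w), np.uint8)     # facial five sense organs
    hull = cv2.convexHull(landmarks)       # whole-face region from all 68 points
    cv2.fillConvexPoly(mask1, hull, 255)
    for idx in FEATURE_GROUPS.values():
        pts = cv2.convexHull(landmarks[list(idx)])
        cv2.fillConvexPoly(mask2, pts, 255)
    mask1 = cv2.subtract(mask1, mask2)     # skin = face region minus the feature regions
    return mask1, mask2
```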
In summary, the overall loss of the generator is shown in equation (4):
L_G = L_G1 + L_G2 + L_G3    (4)
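A minimal PyTorch sketch of equations (1) to (4) follows; D_main and D_aux are assumed to output probabilities in (0, 1), and mask1/mask2 are assumed to be callables that keep only the skin and five-sense-organ regions of a picture (for example the masks built above applied element-wise):

```python
# Sketch of the generator loss in equations (1)-(4); eps guards against log(0).
import torch
import torch.nn.functional as F

def generator_loss(D_main, D_aux, I, I_gen, mask1, mask2, eps=1e-7):
    # (1) basic adversarial term on the masked generated picture
    L_G1 = (torch.log(1 - D_main(mask1(I_gen)) + eps)
            + torch.log(1 - D_aux(mask2(I_gen)) + eps)).mean()
    # (2) L1 constraint pulling the generated picture toward the real one
    L_G2 = F.l1_loss(I_gen, I)
    # (3) L1 constraint on the skin mask and the five-sense-organ mask
    L_G3 = F.l1_loss(mask1(I), mask1(I_gen)) + F.l1_loss(mask2(I), mask2(I_gen))
    # (4) overall generator loss
    return L_G1 + L_G2 + L_G3
```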
in one embodiment, the loss functions of the discriminators include the loss function of the primary discriminator and the loss function of the secondary discriminator.
In one embodiment, the loss function of the main discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), where D represents the discriminator and mask1 represents the mask generated for the facial skin of the face.
The skin mask of the generated face picture and the skin mask of the real picture are fed into the discriminator without updating its gradient, and the result of the discriminator is fed back to the generator; the aim of the generator is to make the loss derived from the initial score that the discriminator gives to the generated picture as large as possible. When the discriminator is trained, the generator is fixed and the loss of the discriminator is made as small as possible; once the discriminator is well trained, the generator is trained so that the loss of the discriminator becomes as large as possible. When a discriminator that already has a certain discrimination capability still judges the data generated by the generator to be close to the real label, this shows that the training of the generator is effective, which improves the training efficiency of the countermeasure network.
In one embodiment, the loss function of the auxiliary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
The auxiliary discriminator is a constraint on the generator that improves the generation of facial five-sense-organ details, so that the generator can better synthesize a real face; the reconstructed real face picture and the fitted fake face picture obtained by linear reconstruction from the forged trace elements and the face data are therefore more realistic, which in turn makes the final score of the face picture more accurate and improves the detection accuracy of living body detection. Based on the mask of the facial five sense organs, such as the two eyes, nose and mouth, the auxiliary discriminator scores the mask of the face picture generated by the generator and the mask of the real face picture in the same way.
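For illustration, the two discriminator losses written above can be sketched together as follows; the signature of the generator G and the callable masks are assumptions consistent with the earlier sketches:

```python
# Sketch of the discriminator losses L_D1 (main, skin mask) and L_D2 (auxiliary, feature mask).
import torch

def discriminator_losses(D_main, D_aux, G, I, mask1, mask2, eps=1e-7):
    I_gen = G(I).detach()                      # do not back-propagate into the generator here
    L_D1 = (torch.log(D_main(mask1(I)) + eps)
            + torch.log(1 - D_main(mask1(I_gen)) + eps)).mean()
    L_D2 = (torch.log(D_aux(mask2(I)) + eps)
            + torch.log(1 - D_aux(mask2(I_gen)) + eps)).mean()
    return L_D1, L_D2
```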
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, there is provided a face in-vivo detection apparatus based on countermeasure learning, including: a picture preprocessing module 402, a data pair construction module 404, a linear reconstruction module 406, an initial scoring module 408, a trace element of forgery optimization module 410, and a biopsy module 412, wherein:
the image preprocessing module 402 is configured to obtain a face image captured by the camera; preprocessing a face picture by a face alignment method in a dlib library to obtain a face correction picture;
the data pair construction module 404 is configured to perform classification processing on the face correction picture to obtain a real face picture and a fake face picture; carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
a linear reconstruction module 406, configured to input the face data pairs into a generator of the initial countermeasure network to obtain counterfeit trace elements and feature graphs between the face data pairs; linearly reconstructing according to the fake trace elements and the human face data pairs to obtain a reconstructed real human face picture and a fitting fake human face picture;
the initial scoring module 408 is configured to input the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into the main discriminator and the auxiliary discriminator of the initial countermeasure network for scoring, so as to obtain an initial score of the face picture;
the trace element forgery optimization module 410 is configured to train the initial countermeasure network according to the initial score of the face picture, so as to obtain a trained countermeasure network; inputting the face data into a trained countermeasure network to obtain optimized fake trace elements and feature graphs;
the living body detection module 412 is configured to linearly add the optimized counterfeit trace element and the feature map to obtain a final score of the face picture; and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
In one embodiment, a trained countermeasure network includes a generator and a discriminator; the discriminator includes a primary discriminator and a secondary discriminator; the main discriminator is used for restricting the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator and is used for improving the generation of facial five sense organs details and discriminating facial five sense organs in a facial picture.
In one embodiment, the initial scoring module 408 is further configured to input the real face picture, the fake face picture, the reconstructed real face picture, and the fit fake face picture into the primary discriminator and the auxiliary discriminator of the initial countermeasure network for scoring, to obtain an initial score of the face picture, including: inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of a trained countermeasure network to score the mask of the facial skin and the mask of the facial five sense organs of the face picture, and obtaining the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a basic loss function, a loss constraint, and a loss constraint of the face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), where mask2 represents the mask generated for the facial features of the face, I_j represents the face picture, and I'_j represents the face picture generated by the generator.
In one embodiment, the loss functions of the discriminators include the loss function of the primary discriminator and the loss function of the secondary discriminator.
In one embodiment, the loss function of the main discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), where D represents the discriminator and mask1 represents the mask generated for the facial skin of the face.
In one embodiment, the loss function of the auxiliary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
For specific limitations of the face living body detection apparatus based on countermeasure learning, reference may be made to the limitations of the face living body detection method based on countermeasure learning above, which are not repeated here. The respective modules in the above face living body detection apparatus based on countermeasure learning may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor of the computer device, or stored in software in a memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program, when executed by a processor, implements a face living body detection method based on countermeasure learning. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, or keys, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
It will be appreciated by those skilled in the art that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the method of the above embodiments when the computer program is executed.
In one embodiment, a computer storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.

Claims (9)

1. A method of face in-vivo detection based on challenge learning, the method comprising:
acquiring a face picture shot by a camera;
preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture;
classifying the face correction pictures to obtain real face pictures and fake face pictures;
carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs;
linearly reconstructing according to the fake trace elements and the face data pairs to obtain a reconstructed real face picture and a fitting fake face picture;
inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture; the main discriminator is used for restricting the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator and is used for improving the generation of facial five sense organs details and discriminating facial five sense organs in a facial picture;
training the initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network; inputting the face data into a trained countermeasure network to obtain optimized fake trace elements and feature graphs;
linearly summing the optimized fake trace elements and the feature images to obtain the final score of the face picture;
and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
2. The method of claim 1, wherein inputting the real face picture, the counterfeit face picture, the reconstructed real face picture, and the fitted counterfeit face picture into a primary discriminator and a secondary discriminator of an initial countermeasure network for scoring, obtaining an initial score of the face picture, comprising:
and inputting the real face picture, the fake face picture, the reconstructed real face picture and the fit fake face picture into a main discriminator and an auxiliary discriminator of the initial countermeasure network to score a mask of facial skin and a mask of facial five sense organs of the face picture, so as to obtain an initial score of the face picture.
3. The method of claim 1, wherein the overall loss function of the generator includes a basic loss function, a loss constraint, and a loss constraint of a face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), wherein mask2 represents the mask generated for the facial features of the face, I_j represents the face picture, and I'_j represents the face picture generated by the generator.
4. A method according to claim 3, wherein the loss functions of the discriminators include a loss function of the primary discriminator and a loss function of the secondary discriminator.
5. The method of claim 4, wherein the loss function of the main discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), wherein D represents the discriminator and mask1 represents the mask generated for the facial skin of the face.
6. The method of claim 5, wherein the loss function of the auxiliary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
7. A face in-vivo detection apparatus based on countermeasure learning, the apparatus comprising:
the image preprocessing module is used for acquiring face images shot by the camera; preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture;
the data pair construction module is used for classifying the face correction pictures to obtain real face pictures and fake face pictures; carrying out data pair construction on the real face picture and the fake face picture of the same person to obtain a face data pair;
the linear reconstruction module is used for inputting the face data pairs into a generator of an initial countermeasure network to obtain fake trace elements and feature graphs between the face data pairs; linearly reconstructing according to the fake trace elements and the face data pairs to obtain a reconstructed real face picture and a fitting fake face picture;
the initial scoring module is used for inputting the real face picture, the fake face picture, the reconstructed real face picture and the fitting fake face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture; the main discriminator is used for restricting the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator and is used for improving the generation of facial five sense organs details and discriminating facial five sense organs in a facial picture;
the fake trace element optimization module is used for training the initial countermeasure network according to the initial score of the face picture to obtain a trained countermeasure network; inputting the face data into a trained countermeasure network to obtain optimized fake trace elements and feature graphs;
the living body detection module is used for carrying out linear summation on the optimized fake trace elements and the feature images to obtain the final score of the face picture; and detecting whether the face picture is a living body or not by utilizing the final score of the face picture.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202210212683.1A 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning Active CN114596615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210212683.1A CN114596615B (en) 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210212683.1A CN114596615B (en) 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning

Publications (2)

Publication Number Publication Date
CN114596615A CN114596615A (en) 2022-06-07
CN114596615B true CN114596615B (en) 2023-05-05

Family

ID=81815515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210212683.1A Active CN114596615B (en) 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning

Country Status (1)

Country Link
CN (1) CN114596615B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368796A (en) * 2020-03-20 2020-07-03 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN112417414A (en) * 2020-12-04 2021-02-26 支付宝(杭州)信息技术有限公司 Privacy protection method, device and equipment based on attribute desensitization
CN113223128A (en) * 2020-02-04 2021-08-06 北京百度网讯科技有限公司 Method and apparatus for generating image

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543640B (en) * 2018-11-29 2022-06-17 中国科学院重庆绿色智能技术研究院 Living body detection method based on image conversion
CN109635745A (en) * 2018-12-13 2019-04-16 广东工业大学 A method of Multi-angle human face image is generated based on confrontation network model is generated
CN111753595A (en) * 2019-03-29 2020-10-09 北京市商汤科技开发有限公司 Living body detection method and apparatus, device, and storage medium
CN110490076B (en) * 2019-07-18 2024-03-01 平安科技(深圳)有限公司 Living body detection method, living body detection device, computer equipment and storage medium
CN111028305A (en) * 2019-10-18 2020-04-17 平安科技(深圳)有限公司 Expression generation method, device, equipment and storage medium
CN112967180B (en) * 2021-03-17 2023-12-22 福建库克智能科技有限公司 Training method for generating countermeasure network, image style conversion method and device
CN113255788B (en) * 2021-05-31 2023-04-07 西安电子科技大学 Method and system for generating confrontation network face correction based on two-stage mask guidance
CN113505722B (en) * 2021-07-23 2024-01-02 中山大学 Living body detection method, system and device based on multi-scale feature fusion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223128A (en) * 2020-02-04 2021-08-06 北京百度网讯科技有限公司 Method and apparatus for generating image
CN111368796A (en) * 2020-03-20 2020-07-03 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN112417414A (en) * 2020-12-04 2021-02-26 支付宝(杭州)信息技术有限公司 Privacy protection method, device and equipment based on attribute desensitization

Also Published As

Publication number Publication date
CN114596615A (en) 2022-06-07

Similar Documents

Publication Publication Date Title
WO2020207189A1 (en) Method and device for identity authentication, storage medium, and computer device
Galbally et al. Iris image reconstruction from binary templates: An efficient probabilistic approach based on genetic algorithms
CN111680672B (en) Face living body detection method, system, device, computer equipment and storage medium
CN110334587B (en) Training method and device of face key point positioning model and key point positioning method
CN111310705A (en) Image recognition method and device, computer equipment and storage medium
CN111339897B (en) Living body identification method, living body identification device, computer device, and storage medium
CN111680675B (en) Face living body detection method, system, device, computer equipment and storage medium
CN115050064A (en) Face living body detection method, device, equipment and medium
CN109344709A (en) A kind of face generates the detection method of forgery image
Hassanpour et al. E2F-GAN: Eyes-to-face inpainting via edge-aware coarse-to-fine GANs
CN113298158A (en) Data detection method, device, equipment and storage medium
CN112613445A (en) Face image generation method and device, computer equipment and storage medium
CN114596615B (en) Face living body detection method, device, equipment and medium based on countermeasure learning
CN112308035A (en) Image detection method, image detection device, computer equipment and storage medium
CN108460811B (en) Face image processing method and device and computer equipment
CN115424001A (en) Scene similarity estimation method and device, computer equipment and storage medium
CN115410257A (en) Image protection method and related equipment
CN114299569A (en) Safe face authentication method based on eyeball motion
CN111598144B (en) Training method and device for image recognition model
CN115708135A (en) Face recognition model processing method, face recognition method and device
CN110414347B (en) Face verification method, device, equipment and storage medium
CN113033305A (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN113158858A (en) Behavior analysis method and system based on deep learning
Kaur et al. Improved Facial Biometric Authentication Using MobileNetV2
Hassanpour et al. E2F-Net: Eyes-to-face inpainting via StyleGAN latent space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ren Tuo

Inventor before: Xie Jianbin

Inventor before: Ren Tuo

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231227

Address after: 410000 No. 96, tongzipo Road, Yuelu District, Changsha City, Hunan Province

Patentee after: Hunan Zhongke Zhuying Intelligent Technology Research Institute Co.,Ltd.

Patentee after: National University of Defense Technology

Address before: 410000 No. 96, tongzipo Road, Yuelu District, Changsha City, Hunan Province

Patentee before: Hunan Zhongke Zhuying Intelligent Technology Research Institute Co.,Ltd.
