CN114596615A - Face liveness detection method, apparatus, device and medium based on adversarial learning - Google Patents

Face liveness detection method, apparatus, device and medium based on adversarial learning

Info

Publication number
CN114596615A
Authority
CN
China
Prior art keywords
face
picture
face picture
forged
discriminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210212683.1A
Other languages
Chinese (zh)
Other versions
CN114596615B (en)
Inventor
谢剑斌
任拓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
National University of Defense Technology
Original Assignee
Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd filed Critical Hunan Zhongke Zhuying Intelligent Technology Research Institute Co ltd
Priority to CN202210212683.1A priority Critical patent/CN114596615B/en
Publication of CN114596615A publication Critical patent/CN114596615A/en
Application granted granted Critical
Publication of CN114596615B publication Critical patent/CN114596615B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The application relates to a face liveness detection method and apparatus based on adversarial learning, a computer device, and a storage medium. The method comprises the following steps: preprocessing a face picture with the face alignment method in the dlib library to obtain a corrected face picture; classifying the corrected face pictures into real face pictures and forged face pictures; constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair; and inputting the face data pair into the generator of an adversarial network to obtain forgery trace elements and a feature map, then linearly weighting the feature map and the trace elements to obtain a value used to judge whether the face picture shows a living body. The method can improve the accuracy of face liveness detection.

Description

Face liveness detection method, apparatus, device and medium based on adversarial learning
Technical Field
The present application relates to the field of computer vision, and in particular to a face liveness detection method and apparatus based on adversarial learning, a computer device, and a storage medium.
Background
As computer technology has advanced toward greater intelligence, computer vision has achieved breakthroughs, and identity authentication based on human biometric features has attracted wide attention. As face recognition has matured and spread into more fields and scenes, face spoofing attacks have also emerged, creating serious security problems: they threaten the security of recognition devices and can cause loss of users' personal data and economic damage. A stable and reliable face authentication system is therefore needed, one that can not only judge face-related information accurately but also recognize various spoofing techniques.
Disclosure of Invention
In view of the above, there is a need for a face liveness detection method, apparatus, computer device and storage medium based on adversarial learning that can improve the accuracy of face liveness detection.
A face liveness detection method based on adversarial learning, the method comprising:
acquiring a face picture captured by a camera;
preprocessing the face picture with the face alignment method in the dlib library to obtain a corrected face picture;
classifying the corrected face pictures into real face pictures and forged face pictures;
constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair;
inputting the face data pair into the generator of an initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair;
performing linear reconstruction from the forgery trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture;
training the initial adversarial network according to the initial score of the face picture to obtain a trained adversarial network, and inputting the face data into the trained adversarial network to obtain optimized forgery trace elements and feature maps;
linearly summing the optimized forgery trace elements and the feature map to obtain the final score of the face picture;
and using the final score of the face picture to detect whether it shows a living body.
In one embodiment, the trained adversarial network includes a generator and a discriminator; the discriminator comprises a main discriminator and an auxiliary discriminator. The main discriminator constrains the generator and discriminates the facial skin in the face picture; the auxiliary discriminator is a region discriminator that improves the generation of facial-feature details and identifies the facial features in the face picture.
In one embodiment, inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture, includes:
inputting the four pictures into the main discriminator and the auxiliary discriminator, which score the facial-skin mask and the facial-features mask of each face picture, respectively, to obtain the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a base loss function, an L1 loss constraint, and a loss constraint on the face mask. The face-mask loss constraint is LG3 = L1_Loss(mask2(Ij), mask2(I'j)), where mask2 denotes the facial-features mask generated for the face, Ij denotes a face picture, and I'j denotes a face picture generated by the generator.
In one embodiment, the loss function of the discriminator includes a loss function of the primary discriminator and a loss function of the secondary discriminator.
In one embodiment, the loss function of the main discriminator is LD1 = log(D(mask1(Ij))) + log(1 - D(mask1(G(Ij)))), where D denotes the discriminator and mask1 denotes the facial-skin mask generated for the face.
In one embodiment, the loss function of the auxiliary discriminator is LD2 = log(D(mask2(Ij))) + log(1 - D(mask2(G(Ij)))).
A face liveness detection apparatus based on adversarial learning, the apparatus comprising:
a picture preprocessing module for acquiring a face picture captured by a camera, and preprocessing the face picture with the face alignment method in the dlib library to obtain a corrected face picture;
a data pair construction module for classifying the corrected face pictures into real face pictures and forged face pictures, and constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair;
a linear reconstruction module for inputting the face data pair into the generator of an initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair, and performing linear reconstruction from the forgery trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
an initial scoring module for inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture;
a forgery-trace optimization module for training the initial adversarial network according to the initial score of the face picture to obtain a trained adversarial network, and inputting the face data into the trained adversarial network to obtain optimized forgery trace elements and a feature map;
a liveness detection module for linearly summing the optimized forgery trace elements and the feature map to obtain the final score of the face picture, and using the final score to detect whether the face picture shows a living body.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the following steps:
acquiring a face picture captured by a camera;
preprocessing the face picture with the face alignment method in the dlib library to obtain a corrected face picture;
classifying the corrected face pictures into real face pictures and forged face pictures;
constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair;
inputting the face data pair into the generator of an initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair;
performing linear reconstruction from the forgery trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture;
training the initial adversarial network according to the initial score of the face picture to obtain a trained adversarial network, and inputting the face data into the trained adversarial network to obtain optimized forgery trace elements and feature maps;
linearly summing the optimized forgery trace elements and the feature map to obtain the final score of the face picture;
and using the final score of the face picture to detect whether it shows a living body.
A computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
acquiring a face picture captured by a camera;
preprocessing the face picture with the face alignment method in the dlib library to obtain a corrected face picture;
classifying the corrected face pictures into real face pictures and forged face pictures;
constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair;
inputting the face data pair into the generator of an initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair;
performing linear reconstruction from the forgery trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture;
training the initial adversarial network according to the initial score of the face picture to obtain a trained adversarial network, and inputting the face data into the trained adversarial network to obtain optimized forgery trace elements and feature maps;
linearly summing the optimized forgery trace elements and the feature map to obtain the final score of the face picture;
and using the final score of the face picture to detect whether it shows a living body.
In the adversarial-learning-based face liveness detection method and apparatus, computer device, and storage medium, the face picture is first preprocessed with the face alignment method in the dlib library to obtain a corrected face picture, making the face upright and convenient for feature extraction. The corrected face pictures are classified into real face pictures and forged face pictures, and a data pair is constructed from a real face picture and a forged face picture of the same person. The face data pair is input into the generator of the initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair. The real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture are then input into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, yielding the initial score of the face picture. The auxiliary discriminator added to the adversarial network analyzes the facial features and mainly improves the generation of facial-feature details, so the network attends not only to the global information of the face but also to the facial-feature information, making the scoring of a face picture more accurate. The initial adversarial network is trained according to the initial score to obtain a trained adversarial network; inputting face data into the trained network yields more accurate optimized forgery trace elements and feature maps, which are linearly summed to obtain the final score of the face picture, used to detect whether the face picture shows a living body.
By providing two discriminators, a main discriminator and an auxiliary discriminator, and generating masks over the facial features and the skin from face keypoints, the two discriminators handle the details of the facial features and the skin independently. The generator therefore attends, during learning, not only to global features but also to the details of the facial features and skin, further emphasizing those features in forged face pictures and aiding liveness detection. After the initial adversarial network is trained according to the initial score of the face picture, more accurate forgery trace elements and feature maps are obtained, giving a more accurate final score and improving the accuracy of face liveness detection.
Drawings
FIG. 1 is a schematic flowchart of a face liveness detection method based on adversarial learning according to an embodiment;
FIG. 2 shows the effect of a facial-features mask generated for a face in one embodiment;
FIG. 3 shows the effect of the trained adversarial network in one embodiment;
FIG. 4 is a block diagram of a face liveness detection apparatus based on adversarial learning according to an embodiment;
FIG. 5 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
In one embodiment, as shown in FIG. 1, a face liveness detection method based on adversarial learning is provided, including the following steps:
Step 102, acquiring a face picture captured by a camera, and preprocessing the face picture with the face alignment method in the dlib library to obtain a corrected face picture.
In this step, the face picture is preprocessed with the face alignment method in the dlib library: a tilted face is warped toward a frontal view by rotating the picture so that the two outer eye corners are level, making the face upright. The resulting corrected face picture is convenient for face feature extraction.
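As an illustrative sketch of this alignment step, assuming the two outer eye corners have already been located (e.g. by dlib's 68-point shape predictor); the function name and the nearest-neighbour resampling are simplifications of ours:

```python
import numpy as np

def align_by_eye_corners(image, left_corner, right_corner):
    """Rotate an (H, W) or (H, W, C) image so the line through the two outer
    eye corners becomes horizontal. Corners are (x, y) pixel coordinates.
    Uses nearest-neighbour inverse mapping; a production pipeline would
    obtain the corners from dlib's shape predictor instead."""
    lx, ly = left_corner
    rx, ry = right_corner
    theta = np.arctan2(ry - ly, rx - lx)        # tilt of the inter-ocular line
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0   # rotate about the eye midpoint
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Inverse mapping: for each output pixel, sample the source pixel that a
    # rotation by -theta would have moved there.
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    src_x = cos_t * (xs - cx) - sin_t * (ys - cy) + cx
    src_y = sin_t * (xs - cx) + cos_t * (ys - cy) + cy
    src_x = np.clip(np.round(src_x).astype(int), 0, w - 1)
    src_y = np.clip(np.round(src_y).astype(int), 0, h - 1)
    return image[src_y, src_x]
```

When the eye corners are already level the rotation angle is zero and the picture passes through unchanged.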
Step 104, classifying the corrected face pictures into real face pictures and forged face pictures, and constructing a data pair from a real face picture and a forged face picture of the same person to obtain a face data pair.
The real face picture and the forged face picture of the same person are input into the generator as a data pair, so that the generator can produce a feature map at the corresponding resolution and generate forgery trace elements.
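The data-pair construction can be sketched as follows; the tuple layout and the 'real'/'forged' labels are illustrative assumptions, not the patent's data format:

```python
def build_face_pairs(samples):
    """Pair each real face picture of a person with each forged face picture
    of the same person. `samples` is a list of (person_id, label, picture)
    tuples where label is 'real' or 'forged'."""
    by_person = {}
    for person_id, label, picture in samples:
        by_person.setdefault(person_id, {'real': [], 'forged': []})[label].append(picture)
    pairs = []
    for group in by_person.values():
        for real in group['real']:
            for forged in group['forged']:
                pairs.append((real, forged))  # one (real, forged) data pair
    return pairs
```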
Step 106, inputting the face data pair into the generator of the initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair, and performing linear reconstruction from the forgery trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture.
The feature map encodes the information of the face data pair. Linear reconstruction means that the forgery elements produced by the generator are linearly weighted with the face pictures to obtain the reconstructed real face picture and the fitted forged face picture.
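A minimal sketch of this linear reconstruction, under the assumption that the forgery trace acts as an additive component weighted against the face pictures; the additive form and the `alpha` weight are assumptions of ours, since the patent only states that trace elements and pictures are linearly weighted:

```python
import numpy as np

def linear_reconstruct(real, forged, trace, alpha=1.0):
    """Assumed additive-trace reconstruction: the fitted forged picture is
    the real face plus the weighted trace, and the reconstructed real
    picture is the forged face minus the weighted trace. Pixel values are
    assumed to lie in [0, 1]."""
    fitted_forged = np.clip(real + alpha * trace, 0.0, 1.0)
    reconstructed_real = np.clip(forged - alpha * trace, 0.0, 1.0)
    return reconstructed_real, fitted_forged
```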
Step 108, inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture.
The main discriminator and the auxiliary discriminator score the skin mask and the facial-features mask of each face picture, respectively, judging from these two angles whether the input was produced by the generator. If the reconstructed real face picture and the fitted forged face picture score as high as possible, the generator's forgery trace elements represent the traces of the forged picture well, showing that the generator can effectively separate the forgery traces on a face picture. At the same time, feedback from the generator's outputs makes the scoring of the main and auxiliary discriminators more accurate, further improving detection accuracy. Since the auxiliary discriminator mainly improves the generation of facial-feature details, the generator learns during adversarial training to better synthesize real faces and fit forged pictures, attending to both the global information and the detail information of the face, so that the reconstructed real face picture and the fitted forged face picture produced by linear reconstruction are more realistic.
Step 110, training the initial adversarial network according to the initial score of the face picture to obtain a trained adversarial network, and inputting face data into the trained adversarial network to obtain optimized forgery trace elements and feature maps.
The initial score, a value between 0 and 1, is used to judge whether a picture was produced by the generator: pictures from the data set should score close to 1 and generated pictures close to 0, matching the labels used in the loss computation (1 for data-set pictures, 0 for generated pictures). The initial scores drive the training of the initial adversarial network: a low score on a generated picture increases the generator's loss and decreases the discriminator's, while a high score decreases the generator's loss and increases the discriminator's. The discriminator and the generator are trained alternately; while one of them is being trained, the other's parameters are frozen and its gradients are not updated. Training the initial adversarial network in this way yields the trained adversarial network, and inputting face data into it yields the optimized forgery trace elements and feature maps.
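The alternating schedule with frozen parameters can be sketched as follows; the `set_frozen` and `train_step` method names are illustrative stand-ins for whatever freezing mechanism the training framework provides:

```python
def alternate_training(generator, discriminator, steps):
    """Alternating schedule: train one network while the other's parameters
    are frozen (no gradient updates). Both arguments are any objects that
    expose set_frozen(flag) and train_step()."""
    for _ in range(steps):
        # Discriminator turn: generator frozen.
        generator.set_frozen(True)
        discriminator.set_frozen(False)
        discriminator.train_step()
        # Generator turn: discriminator frozen.
        discriminator.set_frozen(True)
        generator.set_frozen(False)
        generator.train_step()
```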
Step 112, linearly summing the optimized forgery trace elements and the feature map to obtain the final score of the face picture, and using the final score to detect whether the face picture shows a living body.
During training of the feature map, the real-face distribution is pushed as close to 0 as possible and the forged distribution as close to 1 as possible. The score is computed by weighting the mean of the feature map and the mean of the forgery trace elements, giving a final score between 0 and 1 with a threshold of 0.5: if the final score of the face picture exceeds 0.5, the picture is judged to show a living body; otherwise it is not. The final score is computed as:
Score = α · (1/K²) Σ M + (1 − α) · (1/N²) Σ trace
where M denotes the feature map, K the size of the feature map, trace the forgery trace elements, N the size of the trace picture, and α a hyperparameter.
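Assuming the final score is an alpha-weighted blend of the feature-map mean and the trace mean (the exact weighting in the patent's equation image is not recoverable, so this blend is an assumption), the computation and the 0.5-threshold decision can be sketched as:

```python
import numpy as np

def final_score(feature_map, trace, alpha=0.5):
    """Weighted liveness score from the mean of the feature map M and the
    mean of the forgery trace picture, blended by hyperparameter alpha."""
    m_mean = feature_map.mean()   # (1/K^2) * sum of M
    t_mean = trace.mean()         # (1/N^2) * sum of trace
    return alpha * m_mean + (1.0 - alpha) * t_mean

def is_live(score, threshold=0.5):
    """Per the text, a final score above 0.5 is judged to be a living face."""
    return score > threshold
```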
In the adversarial-learning-based face liveness detection method, the face picture is first preprocessed with the face alignment method in the dlib library to obtain a corrected face picture, making the face upright and convenient for feature extraction. The corrected face pictures are classified into real face pictures and forged face pictures, and a data pair is constructed from a real face picture and a forged face picture of the same person. The face data pair is input into the generator of the initial adversarial network to obtain forgery trace elements and feature maps between the members of the pair. The real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture are then input into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, yielding the initial score of the face picture. The auxiliary discriminator added to the adversarial network analyzes the facial features and mainly improves the generation of facial-feature details, so the network attends not only to the global information of the face but also to the detail information, making the scoring of a face picture more accurate. The initial adversarial network is trained according to the initial score to obtain a trained adversarial network; inputting face data into the trained network yields more accurate optimized forgery trace elements and feature maps, which are linearly summed to obtain the final score of the face picture, used to detect whether the face picture shows a living body.
By providing two discriminators, a main discriminator and an auxiliary discriminator, and generating masks over the facial features and the skin from face keypoints, the two discriminators handle the details of the facial features and the skin independently. The generator therefore attends, during learning, not only to global features but also to the details of the facial features and skin, further emphasizing those features in forged face pictures and aiding liveness detection. After the initial adversarial network is trained according to the initial score of the face picture, more accurate forgery trace elements and feature maps are obtained, giving a more accurate final score and improving the accuracy of face liveness detection.
In one embodiment, the trained adversarial network includes a generator and a discriminator; the discriminator comprises a main discriminator and an auxiliary discriminator. The main discriminator constrains the generator and discriminates the facial skin in the face picture; the auxiliary discriminator is a region discriminator that improves the generation of facial-feature details and identifies the facial features in the face picture. The effect of the trained adversarial network is shown in FIG. 3: the skin and the facial features of a face can be identified.
In one embodiment, inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the initial adversarial network for scoring, to obtain the initial score of the face picture, includes:
inputting the four pictures into the main discriminator and the auxiliary discriminator, which score the facial-skin mask and the facial-features mask of each face picture, respectively, to obtain the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a base loss function, an L1 loss constraint, and a loss constraint on the face mask. The face-mask loss constraint is LG3 = L1_Loss(mask2(Ij), mask2(I'j)), where mask2 denotes the facial-features mask generated for the face, Ij denotes a face picture, and I'j denotes a face picture generated by the generator.
The base loss function LG1 of the generator is shown in equation (1):
LG1 = log(1 − D(mask1(I'j))) + log(1 − D(mask2(I'j)))    (1)
In equation (1), D denotes the discriminator, Ij a face picture from the data set, I'j a face picture generated by the generator, mask1 the facial-skin mask generated for the face, and mask2 the facial-features mask; the effect is shown in FIG. 2.
To make the generated picture as close as possible to the real picture, an L1 loss constraint is added on the generated picture:
LG2 = L1_Loss(Ij, I'j)    (2)
Besides the facial-feature details of the picture, the generator should also attend to the other detail information of the face. Face keypoints are extracted with the Face_Alignment algorithm, a face mask is generated from the 68 keypoints, and an L1 loss constraint on the face mask is added to the generator:
LG3 = L1_Loss(mask1(Ij), mask1(I'j)) + L1_Loss(mask2(Ij), mask2(I'j))    (3)
In equation (3), mask2 denotes the mask generated for the facial features of the face, such as the two eyes, the nose, and the mouth; the detailed effect is shown in FIG. 2.
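Mask generation from the 68 keypoints can be sketched as follows, using the standard dlib/Face_Alignment landmark index ranges for the eyes, nose and mouth; filling padded bounding boxes is a simplification of ours, since the patent does not specify the mask geometry:

```python
import numpy as np

# Standard 68-point landmark index ranges (dlib / Face_Alignment convention).
FEATURE_GROUPS = {
    'right_eye': range(36, 42),
    'left_eye': range(42, 48),
    'nose': range(27, 36),
    'mouth': range(48, 68),
}

def features_mask(landmarks, height, width, pad=2):
    """Binary mask2-style mask covering the facial features (eyes, nose,
    mouth) by filling the padded bounding box of each landmark group.
    `landmarks` is a (68, 2) array of (x, y) points."""
    mask = np.zeros((height, width), dtype=np.uint8)
    for idx in FEATURE_GROUPS.values():
        pts = landmarks[list(idx)]
        x0, y0 = np.maximum(pts.min(axis=0) - pad, 0)
        x1, y1 = pts.max(axis=0) + pad
        mask[int(y0):int(y1) + 1, int(x0):int(x1) + 1] = 1
    return mask
```

A skin mask (mask1) could be built analogously from the jawline points 0 to 16 with the feature regions subtracted.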
In summary, the overall loss of the generator is shown in equation (4):
LG = LG1 + LG2 + LG3    (4)
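The generator loss terms of equations (1) through (4) can be sketched numerically as follows; the discriminator `D`, the mask functions, and the epsilon added inside the logs for numerical safety are stand-ins and additions of ours:

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error, standing in for L1_Loss."""
    return np.abs(a - b).mean()

def generator_loss(D, mask1, mask2, I, I_gen):
    """Overall generator loss LG = LG1 + LG2 + LG3: adversarial term on the
    masked generated picture (eq. 1), L1 reconstruction term (eq. 2), and
    L1 face-mask term (eq. 3). D maps a picture to a score in (0, 1);
    mask1/mask2 apply the skin and facial-features masks."""
    eps = 1e-8  # log safety, not in the patent
    L_G1 = np.log(1 - D(mask1(I_gen)) + eps) + np.log(1 - D(mask2(I_gen)) + eps)
    L_G2 = l1_loss(I, I_gen)
    L_G3 = l1_loss(mask1(I), mask1(I_gen)) + l1_loss(mask2(I), mask2(I_gen))
    return L_G1 + L_G2 + L_G3
```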
in one embodiment, the loss function of the discriminator includes a loss function of the primary discriminator and a loss function of the secondary discriminator.
In one embodiment, the loss function of the main discriminator is
LD1 = log(D(mask1(Ij))) + log(1 − D(mask1(G(Ij)))), where D denotes the discriminator and mask1 denotes the facial-skin mask generated for the face.
The skin mask of the generated face picture and the skin mask of the real picture are fed into the discriminator without updating its gradients, and the discriminator's result is fed back to the generator; the generator's goal is to make the loss the discriminator assigns to its pictures as large as possible. When the discriminator is trained, the generator is fixed and the discriminator's loss is made as small as possible so that the discriminator trains well; training the generator then makes the discriminator's loss as large as possible, meaning that even when the discriminator has some discrimination ability it still judges the generator's outputs as close to real. This demonstrates the generator's training effect and improves the training efficiency of the adversarial network.
In one embodiment, the loss function of the secondary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
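The two discriminator objectives share the same log-likelihood form and differ only in which mask is fed to D; a numeric sketch:

```python
import numpy as np

def disc_objective(d_real, d_fake):
    """log D(mask(I_j)) + log(1 - D(mask(G(I_j)))) for a single sample.

    L_D1 applies this to the skin masks (mask1), L_D2 to the facial-feature
    masks (mask2). The eps guard is a numerical convenience, not part of
    the patent's formula.
    """
    eps = 1e-8
    return np.log(d_real + eps) + np.log(1.0 - d_fake + eps)

# A discriminator that separates well (scores near 1 on real masks, near 0
# on generated ones) attains a larger objective than an undecided one;
# in practice the negative of this objective is minimized.
confident = disc_objective(0.9, 0.1)
undecided = disc_objective(0.5, 0.5)
assert confident > undecided
```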
The auxiliary discriminator is a further constraint on the generator that improves the generation of facial-feature details, so that the generator can better synthesize a real face. When the fitted forged picture is linearly reconstructed from the forgery trace elements and the face data pair, the reconstructed real face picture and the fitted forged face picture then look more realistic, which makes the final score of the face picture more accurate and improves detection accuracy during liveness detection. Like the main discriminator, the auxiliary discriminator scores the masks of the facial features (the two eyes, the nose, and the mouth) generated by the generator against those of the real face picture.
It should be understood that, although the steps in the flowchart of fig. 1 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not limited to being performed in the exact order illustrated and, unless explicitly stated herein, may be performed in other orders. Moreover, at least a portion of the steps in fig. 1 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 4, a face liveness detection apparatus based on adversarial learning is provided, including: a picture preprocessing module 402, a data pair construction module 404, a linear reconstruction module 406, an initial scoring module 408, a forged trace element optimization module 410, and a liveness detection module 412, wherein:
a picture preprocessing module 402, configured to obtain a face picture taken by a camera; preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture;
a data pair construction module 404, configured to classify the face correction pictures to obtain real face pictures and forged face pictures; carrying out data pair construction on a real face picture and a forged face picture of the same person to obtain a face data pair;
a linear reconstruction module 406, configured to input the face data pairs into a generator of the initial countermeasure network to obtain forged trace elements and feature maps between the face data pairs; and to perform linear reconstruction according to the forged trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
an initial scoring module 408, configured to input the real face picture, the forged face picture, the reconstructed real face picture, and the fitted forged face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring, so as to obtain an initial score of the face picture;
the fake trace element optimization module 410 is configured to train an initial confrontation network according to the initial score of the face picture, so as to obtain a trained confrontation network; inputting the face data into a trained confrontation network to obtain optimized forged trace elements and a feature map;
the living body detection module 412 is used for performing linear summation on the optimized forged trace elements and the feature map to obtain a final score of the face picture; and detecting whether the face picture is a living body or not by using the final score of the face picture.
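The flow through the last three modules can be sketched end to end. The sketch below is illustrative only: the weights, the threshold, and the reduction of the trace and feature map to scalar scores are assumptions, not values from the patent.

```python
import numpy as np

def linear_reconstruction(real, fake, trace):
    """Reconstruct a real picture by removing the estimated forgery trace
    from the forged one, and fit a forged picture by adding the trace to
    the real one (clipped to the valid pixel range)."""
    reconstructed_real = np.clip(fake - trace, 0.0, 1.0)
    fitted_forged = np.clip(real + trace, 0.0, 1.0)
    return reconstructed_real, fitted_forged

def final_score(trace, feature_map, w_trace=0.5, w_feat=0.5):
    """Linear summation of the optimized forged trace elements and the
    feature map; the weights are hypothetical."""
    return w_trace * np.abs(trace).mean() + w_feat * np.abs(feature_map).mean()

def is_live(trace, feature_map, threshold=0.1):
    """A live face should carry almost no forgery trace, so a small final
    score is read as 'live'; the threshold is hypothetical."""
    return final_score(trace, feature_map) < threshold
```

For a genuine capture the trained generator should emit a near-zero trace, driving the final score toward zero; a spoofed picture leaves a strong trace and fails the threshold test.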
In one embodiment, the trained countermeasure network includes a generator and a discriminator; the discriminator comprises a main discriminator and an auxiliary discriminator; the main discriminator is used for constraining the generator and discriminating the facial skin in the face picture; the auxiliary discriminator is a region discriminator used for improving the generation of facial-feature details and discriminating the facial features in the face picture.
In one embodiment, the initial scoring module 408 is further configured to input the real face picture, the forged face picture, the reconstructed real face picture, and the fitted forged face picture into a primary discriminator and a secondary discriminator of the initial countermeasure network for scoring, so as to obtain an initial score of the face picture, including: and inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into a main discriminator and an auxiliary discriminator of the trained confrontation network to score the mask of the facial skin and the mask of the facial five sense organs of the face picture so as to obtain the initial score of the face picture.
In one embodiment, the overall loss function of the generator includes a base loss function, a loss constraint, and a loss constraint of the face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), where mask2 denotes the mask generated for the facial five sense organs of a face, I_j represents a face picture, and I'_j represents the face picture generated by the generator.
In one embodiment, the loss function of the discriminator includes a loss function of the primary discriminator and a loss function of the secondary discriminator.
In one embodiment, the loss function of the primary discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), where D represents the discriminator and mask1 represents a mask that generates facial skin for a face.
In one embodiment, the loss function of the secondary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
For specific limitations of the face liveness detection apparatus based on adversarial learning, reference may be made to the above limitations on the face liveness detection method based on adversarial learning, which are not repeated here. The modules in the apparatus can be implemented wholly or partially by software, hardware, or a combination thereof. The modules can be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 5. The computer device comprises a processor, a memory, a network interface, a display screen, and an input device connected through a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the nonvolatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the face liveness detection method based on adversarial learning. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen; a key, a trackball, or a touchpad arranged on the housing of the computer device; or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 5 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method in the above embodiments when the processor executes the computer program.
In an embodiment, a computer storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the steps of the method of the above embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the program can be stored in a nonvolatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include nonvolatile and/or volatile memory. Nonvolatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent application shall be subject to the appended claims.

Claims (10)

1. A face living body detection method based on antagonistic learning is characterized by comprising the following steps:
acquiring a face picture shot by a camera;
preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture;
classifying the face correction pictures to obtain real face pictures and forged face pictures;
carrying out data pair construction on a real face picture and a forged face picture of the same person to obtain a face data pair;
inputting the face data pairs into a generator of an initial countermeasure network to obtain forged trace elements and characteristic graphs between the face data pairs;
performing linear reconstruction according to the forged trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for grading to obtain an initial score of the face picture;
training the initial confrontation network according to the initial score of the face picture to obtain a trained confrontation network; inputting the face data into a trained confrontation network to obtain optimized forged trace elements and a feature map;
linearly adding the optimized forgery trace elements and the feature map to obtain the final score of the face picture;
and detecting whether the face picture is a living body or not by using the final score of the face picture.
2. The method of claim 1, wherein the initial countermeasure network comprises a generator and a discriminator; the discriminator comprises a main discriminator and an auxiliary discriminator; the main discriminator is used for restricting the generator, and discriminating the facial skin in the face picture; the auxiliary discriminator is used for improving the generation of the details of the facial features and identifying the facial features in the facial picture.
3. The method of claim 1, wherein inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into a primary discriminator and a secondary discriminator of an initial countermeasure network for scoring to obtain an initial score of the face picture, comprises:
and inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into the main discriminator and the auxiliary discriminator of the trained confrontation network to score the mask of the facial skin and the mask of the facial five sense organs of the face picture so as to obtain the initial score of the face picture.
4. The method of claim 2, wherein the generator's overall loss function comprises a base loss function, a loss constraint, and a loss constraint of a face mask; the loss constraint of the face mask is L_G3 = L1_Loss(mask2(I_j), mask2(I'_j)), where mask2 denotes the mask generated for the facial five sense organs of a face, I_j represents a face picture, and I'_j represents the face picture generated by the generator.
5. The method of claim 4, wherein the loss functions of the discriminators comprise a loss function of a primary discriminator and a loss function of a secondary discriminator.
6. The method of claim 5, wherein the loss function of the primary discriminator is L_D1 = log(D(mask1(I_j))) + log(1 - D(mask1(G(I_j)))), where D represents the discriminator and mask1 represents a mask that generates facial skin for a face.
7. The method of claim 6, wherein the loss function of the secondary discriminator is L_D2 = log(D(mask2(I_j))) + log(1 - D(mask2(G(I_j)))).
8. A living human face detection device based on counterstudy, the device comprising:
the image preprocessing module is used for acquiring a face image shot by the camera; preprocessing the face picture by a face alignment method in a dlib library to obtain a face correction picture;
the data pair construction module is used for classifying the face correction pictures to obtain real face pictures and forged face pictures; carrying out data pair construction on a real face picture and a forged face picture of the same person to obtain a face data pair;
the linear reconstruction module is used for inputting the face data pairs into a generator of an initial countermeasure network to obtain forged trace elements and characteristic maps between the face data pairs; performing linear reconstruction according to the forged trace elements and the face data pair to obtain a reconstructed real face picture and a fitted forged face picture;
the initial scoring module is used for inputting the real face picture, the forged face picture, the reconstructed real face picture and the fitted forged face picture into a main discriminator and an auxiliary discriminator of an initial countermeasure network for scoring to obtain the initial score of the face picture;
the fake trace element optimization module is used for training the initial confrontation network according to the initial score of the face picture to obtain a trained confrontation network; inputting the face data into a trained confrontation network to obtain optimized forged trace elements and a feature map;
the living body detection module is used for carrying out linear summation on the optimized forged trace elements and the feature map to obtain the final score of the face picture; and detecting whether the face picture is a living body or not by using the final score of the face picture.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor when executing the computer program performs the steps of the method according to any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202210212683.1A 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning Active CN114596615B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210212683.1A CN114596615B (en) 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning

Publications (2)

Publication Number Publication Date
CN114596615A true CN114596615A (en) 2022-06-07
CN114596615B CN114596615B (en) 2023-05-05

Family

ID=81815515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210212683.1A Active CN114596615B (en) 2022-03-04 2022-03-04 Face living body detection method, device, equipment and medium based on countermeasure learning

Country Status (1)

Country Link
CN (1) CN114596615B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109543640A (en) * 2018-11-29 2019-03-29 中国科学院重庆绿色智能技术研究院 A kind of biopsy method based on image conversion
CN109635745A (en) * 2018-12-13 2019-04-16 广东工业大学 A method of Multi-angle human face image is generated based on confrontation network model is generated
CN110490076A (en) * 2019-07-18 2019-11-22 平安科技(深圳)有限公司 Biopsy method, device, computer equipment and storage medium
CN111028305A (en) * 2019-10-18 2020-04-17 平安科技(深圳)有限公司 Expression generation method, device, equipment and storage medium
CN111368796A (en) * 2020-03-20 2020-07-03 北京达佳互联信息技术有限公司 Face image processing method and device, electronic equipment and storage medium
US20200364478A1 (en) * 2019-03-29 2020-11-19 Beijing Sensetime Technology Development Co., Ltd. Method and apparatus for liveness detection, device, and storage medium
CN112417414A (en) * 2020-12-04 2021-02-26 支付宝(杭州)信息技术有限公司 Privacy protection method, device and equipment based on attribute desensitization
CN112967180A (en) * 2021-03-17 2021-06-15 福建库克智能科技有限公司 Training method for generating countermeasure network, and image style conversion method and device
CN113223128A (en) * 2020-02-04 2021-08-06 北京百度网讯科技有限公司 Method and apparatus for generating image
CN113255788A (en) * 2021-05-31 2021-08-13 西安电子科技大学 Method and system for generating confrontation network face correction based on two-stage mask guidance
CN113505722A (en) * 2021-07-23 2021-10-15 中山大学 In-vivo detection method, system and device based on multi-scale feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SANDIPAN BANERJEE ET AL.: ""LEGAN: Disentangled Manipulation of Directional Lighting and Facial Expressions whilst Leveraging Human Perceptual Judgements"" *
Li Ce et al.: "A face liveness detection algorithm using hypercomplex wavelet generative adversarial networks" *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Ren Tuo

Inventor before: Xie Jianbin

Inventor before: Ren Tuo

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20231227

Address after: 410000 No. 96, tongzipo Road, Yuelu District, Changsha City, Hunan Province

Patentee after: Hunan Zhongke Zhuying Intelligent Technology Research Institute Co.,Ltd.

Patentee after: National University of Defense Technology

Address before: 410000 No. 96, tongzipo Road, Yuelu District, Changsha City, Hunan Province

Patentee before: Hunan Zhongke Zhuying Intelligent Technology Research Institute Co.,Ltd.