CN116844198A - Method and system for detecting face attack - Google Patents


Info

Publication number
CN116844198A
CN116844198A (application number CN202310589807.2A)
Authority
CN
China
Prior art keywords
face; detected; mapping model; image; face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310589807.2A
Other languages
Chinese (zh)
Other versions
CN116844198B (en)
Inventor
李继凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Uwonders Technology Co ltd
Original Assignee
Beijing Uwonders Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Uwonders Technology Co ltd filed Critical Beijing Uwonders Technology Co ltd
Priority to CN202310589807.2A priority Critical patent/CN116844198B/en
Publication of CN116844198A publication Critical patent/CN116844198A/en
Application granted granted Critical
Publication of CN116844198B publication Critical patent/CN116844198B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses a method and a system for detecting face attacks. The method comprises the following steps: S1: constructing a face image dataset; S2: training a face mapping model, which comprises a face feature extraction model and a face feature mapping model; S3: constructing a face attack detection threshold; S4: acquiring a face image to be detected; S5: calculating the face parameter to be detected to obtain the face attack detection result, wherein the face image to be detected is input into the face mapping model to calculate the face parameter to be detected, and the detection result is obtained from the relation between this parameter and the detection threshold. The method learns only normal face samples and establishes the face mapping model through end-to-end training, so that abnormal (camouflaged) faces can be recognized without attack samples of any kind; the computational cost is small and the recognition accuracy is high, so that various camouflaged faces can be accurately recognized.

Description

Method and system for detecting face attack
Technical Field
The application relates to the technical field of biometric recognition, and in particular to a method and a system for detecting face attacks.
Background
With the rapid development of artificial intelligence, face recognition technology has improved greatly, and face attack techniques have advanced with it. Face camouflage attack detection, an important research branch in the field of face recognition, has therefore received wide attention from researchers. Detection approaches fall broadly into two categories: interaction-based matching and static-image-based detection. Interaction-based matching requires the user to blink, open the mouth, shake or nod the head on demand; because it demands human cooperation, it is unfriendly to use. Static-image-based detection needs no human participation, but has the following problem: still-image camouflage is cheap to produce yet hard to detect, and such detectors have long been questioned. The camouflage techniques for still images mainly comprise three types: photographs, videos, and 3D masks. Thanks to the release of several large-scale, high-quality benchmark datasets, recognition of photograph camouflage has progressed greatly in recent years; however, with the maturation of 3D printing technology, masks have become a new attack vector that threatens the security of face recognition systems.
In the field of computer vision, an attacker can use an algorithm to generate specific noise and add it to an original image, producing an image adversarial sample with an attack effect; the original definition of an adversarial sample is an attack based on a specific image. To recognize face attacks, one prior-art scheme collects normal face pictures from the Internet as a training set and trains a face recognition model, obtaining a target attack model with high recognition accuracy. This target attack model then supervises the generation of face adversarial samples in a WGAN-GP, applying local perturbations to the face to produce adversarial samples of higher visual quality. A detection model recognizes whether an input image is a face adversarial sample; when it is, the face recognition service is terminated and the reason for termination is reported.
Typical adversarial sample detection comprises both supervised and unsupervised methods. The supervised method requires producing a large number of face adversarial samples, labeling them to form a training dataset, and then performing supervised training to obtain a face adversarial sample classifier that can distinguish adversarial samples from normal face images.
Thus, to recognize face attacks, a large number of face adversarial samples must be constructed, which is computationally expensive; moreover, the experiments of Athalye et al. demonstrated that 3D printing technology can be used to construct physical adversarial samples that successfully fool a deep neural network classifier.
In view of this, the present application has been made.
Disclosure of Invention
The application aims to overcome the defects of the prior art and provides a method and a system for detecting face attacks that require only normal face images, with no need to learn camouflage images, thereby greatly reducing the difficulty of the problem and improving detection accuracy.
In a first aspect, the present application provides a method for detecting a face attack, including the steps of:
s1: constructing a face image dataset;
the face image data set is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis;
s2: training a face mapping model;
constructing the face mapping model through the face image data set, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; mapping the face image to a normal face distribution space by adopting the face mapping model;
s3: constructing a face attack detection threshold;
the face attack detection threshold is used for judging whether the face image contains the camouflage or not;
s4: acquiring a face image to be detected;
collecting an image to be detected, and processing the image to be detected to obtain a face image to be detected;
s5: calculating face parameters to be detected to obtain a face attack detection result;
inputting the face image to be detected into the face mapping model, and calculating the face parameters to be detected;
and obtaining the face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
Preferably, in step S1, the face image dataset comprises the normal face image set in a MegaFace dataset or a CelebFaces dataset.
Preferably, in step S2, the face mapping model is formed by connecting the face feature extraction model and the face feature mapping model in series, and the output of the face feature extraction model is the input of the face feature mapping model, and the output of the face feature extraction model is a one-dimensional vector.
Preferably, in step S2, the normal face distribution space is a distribution space defined according to the face image dataset.
Preferably, in step S2, the step of training a face mapping model includes:
s21: inputting the face image data set into the face mapping model, obtaining a face feature set through the face feature extraction model, and inputting the face feature set into the face feature mapping model to obtain the distribution of each face image in the face image data set in the normal face distribution space;
s22: calculating the error of the face mapping model by adopting a loss function, and updating parameters of the face mapping model until the face mapping model converges.
Preferably, the face feature extraction model is a deep convolutional neural network, the face feature mapping model is a normalizing flow model, and the normal face distribution space is a multidimensional standard normal distribution.
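As a hedged illustration only, the serial structure described above might be sketched in PyTorch: a small CNN stands in for the deep convolutional feature extractor, and RealNVP-style affine coupling layers stand in for the flow model. Every layer size, class name, and hyperparameter below is an assumption for illustration, not the patent's actual architecture.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Deep CNN that maps a face image to a one-dimensional feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)      # (B, 64)
        return self.fc(h)                # (B, feat_dim): one 1-D vector per sample

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer of the feature mapping (flow) model."""
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, 256), nn.ReLU(),
            nn.Linear(256, (dim - self.half) * 2),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                # bound the scale for stability
        z2 = z2 * torch.exp(s) + t
        log_det = s.sum(dim=1)           # log|det Jacobian| of this layer
        return torch.cat([z1, z2], dim=1), log_det

class FaceMappingModel(nn.Module):
    """Extractor and flow connected in series: image -> feature -> latent z."""
    def __init__(self, feat_dim=128, n_layers=4):
        super().__init__()
        self.extractor = FeatureExtractor(feat_dim)
        self.flow = nn.ModuleList(AffineCoupling(feat_dim) for _ in range(n_layers))

    def forward(self, x):
        z = self.extractor(x)
        log_det = torch.zeros(z.size(0))
        for layer in self.flow:
            z, ld = layer(z)
            log_det = log_det + ld
            half = z.size(1) // 2        # swap halves (a permutation, log-det 0)
            z = torch.cat([z[:, half:], z[:, :half]], dim=1)
        return z, log_det
```

Because the extractor's one-dimensional output feeds directly into the flow, the whole model can be trained end to end by maximizing the likelihood of z under a standard normal prior.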
Preferably, in step S3, the step of constructing a face attack detection threshold includes:
s31: calculating the distribution of each face image in the face image data set in the normal face distribution space when the face mapping model converges to obtain optimal training distribution;
s32: according to the optimal training distribution, calculating the negative log likelihood value of each face image in the face image data set to obtain a training sample negative log likelihood value set;
s33: and calculating the average value of the training sample negative log likelihood value set to obtain the face attack detection threshold.
Preferably, in step S5, the step of calculating the face parameter to be detected includes:
s51: inputting the face image to be detected into the face mapping model, and outputting the distribution of the face image to be detected in the normal face distribution space to obtain distribution to be detected;
s52: and calculating the negative log likelihood value of the face image to be detected according to the distribution to be detected to obtain the face parameter to be detected.
Preferably, in step S5, the face attack detection result is obtained according to the magnitude relation between the face parameter to be detected and the face attack detection threshold:
when the face parameter to be detected is greater than the face attack detection threshold, the face attack detection result is that the camouflage is contained, and an abnormal image is output;
when the face parameter to be detected is less than or equal to the face attack detection threshold, the face attack detection result is that the camouflage is not contained, and a normal image is output.
In a second aspect, the present application provides a system for detecting a face attack, comprising: the face detection system comprises a face image data set construction module, a face mapping model training module, a face attack detection threshold calculation module, a face image to be detected construction module and a face parameter calculation and judgment module;
the face image data set construction module is used for acquiring a face image data set, wherein the face image data set is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis;
the face mapping model training module is used for training a face mapping model, and constructing the face mapping model through the face image data set, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; mapping the face image to a normal face distribution space by adopting the face mapping model;
the face attack detection threshold calculation module is used for calculating a face attack detection threshold, and the face attack detection threshold is used for judging whether the face image contains the disguise or not;
the face image to be detected construction module is used for acquiring a face image to be detected;
the face parameter calculation and judgment module is used for calculating the face parameters to be detected and obtaining a face attack detection result according to the face parameters to be detected;
inputting the face image to be detected into the face mapping model, and calculating the face parameters to be detected;
and obtaining the face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
The application has the beneficial effects that:
(1) In face recognition, camouflage takes many forms, such as synthesis, a 3D face model, or a photograph. The scheme recognizes abnormal (camouflaged) faces by learning only normal face samples; no attack samples of any kind are needed, the computational cost is small, and the recognition accuracy is high, so that various camouflaged faces can be accurately recognized.
(2) In the application, the distribution of a face image in the normal face distribution space is obtained through the face mapping model; the face feature extraction model and the face feature mapping model are connected in series, with the output of the former serving as the input of the latter. This realizes end-to-end training, removes the data labeling otherwise required before each independent learning task, and reduces face recognition errors caused by manual involvement, making the method simpler and more effective.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present application, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for detecting a face attack.
Fig. 2 is a flowchart of a method step S2 of detecting a face attack.
Fig. 3 is a flowchart illustrating a method step S3 of detecting a face attack.
Fig. 4 is a flowchart of a method step S5 of detecting a face attack.
Fig. 5 is a block diagram of a system for detecting a face attack.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, so that the objects, technical solutions and advantages of the present application will become more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
As noted in the background, adversarial-sample-based recognition is the principal method for recognizing face camouflage attacks; it requires constructing a large number of face adversarial samples, which is computationally expensive. Meanwhile, as technology develops, the camouflage used to launch attacks is continuously optimized; for example, 3D printing can construct physical adversarial samples that successfully fool a deep neural network classifier, so existing recognition schemes cannot meet the requirements of camouflage attack recognition.
Therefore, an embodiment of the application provides a method for detecting face attacks: a face mapping model is established by learning a set of normal face images; through this model, the distribution of a face image to be detected in the distribution space defined by normal faces is obtained; corresponding detection parameters are computed from that distribution; and abnormal face images are identified. Camouflage such as synthesis, a 3D face model, or a photograph can thus be recognized, improving the detection accuracy of face attacks.
The embodiment of the application provides a method for detecting face attack, a flow chart of the method is shown in figure 1, and the method comprises the following steps:
s1: constructing a face image dataset; the face image data set is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis.
In the method provided by the embodiment of the application, the images in the face image dataset are normal face images, and no face images such as face masks, video faces, face synthesis and the like are included.
S2: training a face mapping model; constructing a face mapping model through a face image dataset, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; and mapping the face image to a normal face distribution space by adopting a face mapping model.
In the method provided by the embodiment of the application, the face mapping model is obtained by deep learning on the face image dataset and comprises a face feature extraction model and a face feature mapping model. A face image input into the face mapping model yields its distribution in the normal face distribution space: the model outputs a high-dimensional feature vector that conforms to the preset normal face distribution space.
S3: constructing a face attack detection threshold; the face attack detection threshold is used for judging whether the face image contains camouflage or not.
In the method provided by the embodiment of the application, the face attack detection threshold is constructed to judge whether the face image contains camouflage or not, and is related to the face image data set, namely, the distribution characteristics of the normal face image.
S4: acquiring a face image to be detected; and acquiring an image to be detected, and processing the image to be detected to obtain a face image to be detected.
In the method provided by the embodiment of the application, the images to be detected are first collected. Because the collected images have varying backgrounds, and to eliminate the influence of the background on the recognition result, each image must first be processed to extract the face target, yielding the face image to be detected.
S5: calculating face parameters to be detected, and obtaining a face attack detection result; inputting the face image to be detected into a face mapping model, and calculating face parameters to be detected; and obtaining a face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
In the method provided by the embodiment of the application, the relation between the face parameter to be detected and the face attack detection threshold yields the face attack detection result, which is fed back to the administrator for subsequent processing; for example, the face recognition record corresponding to the face image to be detected can be adjusted, and user identification can be performed on the related face image.
Based on the method provided by the embodiment of the application, the face mapping model is obtained by deep learning on the set of normal face images, and the corresponding face attack detection threshold is likewise derived from that set. To identify a face attack in a collected image, the image is first preprocessed to obtain the face image to be detected; this is input into the face mapping model to obtain its distribution in the defined distribution space; the face parameter to be detected is calculated from that distribution; and whether the image contains camouflage is judged from the relation between this parameter and the detection threshold. With this method, abnormal (camouflaged) faces are recognized by learning only normal face samples; no attack samples are needed, the computational cost is small, and even if camouflage techniques continue to be optimized the influence on detection is very limited, so various camouflaged faces can be accurately recognized.
The individual steps of fig. 1-4 are described below in connection with specific embodiments.
In step S1, the face image dataset comprises a normal face image set in a MegaFace dataset or a CelebFaces dataset.
The MegaFace dataset used in this example contains a total of 1027060 images of 690572 identities and was the first million-scale benchmark for face recognition algorithms. The CelebFaces Attributes (CelebA) dataset is a large-scale face attribute dataset containing over 200K images, each with 40 attribute annotations; its images cover large pose variations and cluttered backgrounds. CelebA offers great diversity and quantity with rich annotation, comprising 10177 identities and 202599 face images. This choice of training samples ensures recognition accuracy.
In step S2, the face mapping model is formed by connecting the face feature extraction model and the face feature mapping model in series, the output of the face feature extraction model is the input of the face feature mapping model, and the output of the face feature extraction model is a one-dimensional vector.
In this embodiment, the face mapping model is a serial model formed by connecting the face feature extraction model and the face feature mapping model in series, with a one-dimensional vector as the interface between them: the output of the face feature extraction model is a one-dimensional vector that serves as the input of the face feature mapping model. This allows data to flow in both directions, supporting the forward pass and the backward pass of gradient updates in deep learning.
In step S2, the distribution space may be set in the training model according to the recognition requirement, defining a distribution space for normal face images.
Specifically, in this embodiment the normal face distribution space is defined from all normal face image samples in the face image dataset; the more diverse the normal face samples in the dataset, the higher the accuracy of the resulting model. The distribution space defined by the face image dataset may be a uniform, normal, gamma, or exponential distribution.
As shown in fig. 2, in order to implement training of the face mapping model, in step S2, the step of training the face mapping model includes:
s21: inputting a face image data set into a face mapping model, obtaining a face feature set through a face feature extraction model, and inputting the face feature set into the face feature mapping model to obtain the distribution of each face image in the face image data set in a distribution space;
s22: calculating the error of the face mapping model by adopting the loss function, and updating the parameters of the face mapping model until the face mapping model converges.
Specifically, the loss function in this embodiment is the negative log-likelihood given by the change-of-variables formula:
L(θ) = -(1/n) Σ_i [ log P(z_i) + log |det(∂z_i/∂x_i)| ], with z_i = f_θ(x_i),
wherein: x_i is a face image in the face image dataset, z_i is the distribution (latent code) of the face image in the normal face distribution space, P is the density of the multidimensional standard normal distribution, and det represents the determinant (here, of the Jacobian of the mapping).
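This per-sample negative log-likelihood can be written out numerically. A minimal numpy sketch, assuming the latent codes z = f_θ(x) and the per-sample log|det Jacobian| values have already been produced by the mapping model; `flow_nll` is an illustrative name, not from the patent:

```python
import numpy as np

def flow_nll(z, log_det):
    """Per-sample negative log-likelihood under a multidimensional standard
    normal prior, corrected by the flow's log|det Jacobian| term."""
    z = np.asarray(z, dtype=float)
    d = z.shape[1]
    log_pz = -0.5 * (z ** 2).sum(axis=1) - 0.5 * d * np.log(2.0 * np.pi)
    return -(log_pz + np.asarray(log_det, dtype=float))

# The training loss is the mean over the batch:
# loss = flow_nll(z_batch, log_det_batch).mean()
```

Minimizing this loss drives the latent codes of normal faces toward the standard normal distribution, which is what makes the later threshold comparison meaningful.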
In this embodiment, the face mapping model has a serial structure, so end-to-end training can be realized; the data labeling before each independent learning task is omitted, face recognition errors caused by manual participation are reduced, and the method is simpler and more effective.
On the basis of the model in the above embodiment, the embodiment of the application provides a method for detecting face recognition attacks, wherein the face feature extraction model is a deep convolutional neural network, the face feature mapping model is a normalizing flow model, and the distribution space is a multidimensional standard normal distribution.
As shown in fig. 3, in step S3, the step of constructing a face attack detection threshold is:
s31: calculating, at convergence of the face mapping model, the distribution of each face image in the face image dataset in the distribution space, to obtain the optimal training distribution;
let the face mapping model be f_θ; step S31 then yields the optimal training distribution f_θ(x_i),
wherein x_i is a face image in the face image dataset, i = 1, 2, 3, …, n, and n is the total number of samples in the face image dataset;
s32: according to the optimal training distribution f_θ(x_i), calculating the negative log-likelihood value of each face image in the face image dataset, to obtain the set of training-sample negative log-likelihood values τ(x_i);
the negative log-likelihood value τ(x_i) is:
τ(x_i) = -log P_k(f_θ(x_i))
wherein k ~ N(0, 1);
s33: calculating the average value of the training-sample negative log-likelihood set to obtain the face attack detection threshold τ_dect.
In this embodiment, the face attack detection threshold is obtained from the training process on the face image dataset; specifically, it is calculated from the τ(x_i) values of all samples in the final pass at which the face mapping model converges. Because this final pass is the model's optimum, the suitability of the threshold to the face mapping model is improved and the detection threshold is more accurate.
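Steps S31 to S33 then reduce to averaging the training-set negative log-likelihood values; a minimal sketch under the same assumptions (the function name is illustrative):

```python
import numpy as np

def detection_threshold(train_nll):
    """tau_dect (step S33): mean negative log-likelihood of all normal
    training faces, taken from the final, converged training pass."""
    return float(np.mean(train_nll))
```

For example, `detection_threshold` would be applied once, after convergence, to the τ(x_i) values of the whole normal-face training set.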
As shown in fig. 4, in step S5, the step of calculating the face parameter to be detected is:
s51: inputting the face image to be detected into a face mapping model, and outputting the distribution of the face image to be detected in a distribution space to obtain the distribution to be detected;
the face image x to be detected text Inputting the detected distribution f into a face mapping model θ (x text );
S52: and according to the distribution to be detected, calculating the negative log likelihood value of the face image to be detected to obtain the face parameter to be detected.
Calculating the distribution f to be detected θ (x text ) Negative log likelihood value τ text (x text ):
τ text (x text )=-logP k (f θ (x text ))
Face parameter tau to be detected text =τ text (x text )。
Further, the face attack detection result is obtained according to the magnitude relation between the face parameter to be detected τ_test and the face attack detection threshold τ_dect:
when τ_test > τ_dect, the face attack detection result is that camouflage is contained, and an abnormal image is output;
when τ_test ≤ τ_dect, the face attack detection result is that no camouflage is contained, and a normal image is output.
In this embodiment, the face attack detection result is obtained from the magnitude relation between the face parameter to be detected and the face attack detection threshold, which simplifies the criterion for judging a face abnormality and realizes fast, efficient face attack recognition.
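The decision rule above can be sketched as a single threshold comparison (names are illustrative, not from the patent):

```python
def detect_attack(nll_test, tau_dect):
    """Flag the face as camouflaged when its negative log-likelihood
    exceeds the threshold learned on normal faces (steps S51-S52)."""
    return "camouflage" if nll_test > tau_dect else "normal"
```

A face whose latent code lies far from the normal-face distribution gets a high negative log-likelihood and is flagged; no camouflage samples are involved at any point.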
To effectively improve the accuracy and reliability of the face attack recognition process, as well as its degree of automation and efficiency, the application provides an embodiment of a system for detecting face attacks, which implements all or part of the method for detecting face attacks described above, as shown in fig. 5.
A system for detecting a face attack, comprising: the face detection system comprises a face image data set construction module, a face mapping model training module, a face attack detection threshold calculation module, a face image to be detected construction module and a face parameter calculation and judgment module;
the face image data set construction module is used for acquiring a face image data set which is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis;
the face mapping model training module is used for training a face mapping model, and constructing a face mapping model through a face image dataset, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; mapping the face image to a normal face distribution space by adopting a face mapping model;
the face attack detection threshold calculation module is used for calculating a face attack detection threshold, and the face attack detection threshold is used for judging whether the face image contains disguises or not;
the face image to be detected building module is used for acquiring a face image to be detected;
the face parameter calculation and judgment module is used for calculating the face parameters to be detected and obtaining a face attack detection result according to the face parameters to be detected;
inputting the face image to be detected into a face mapping model, and calculating face parameters to be detected;
and obtaining a face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
The system for detecting a face attack in this embodiment realizes automatic recognition at low cost, is simple and effective, and solves the problems of complex face camouflage types and the complex, computation-heavy construction of adversarial samples for face attack detection. It can automatically realize rapid, high-precision face attack detection, reducing staff workload and improving working efficiency.
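The face attack detection threshold calculation module described above (and steps S31–S33 of the method) takes the threshold as the mean of the training-sample negative log likelihood values. A minimal sketch follows; `calibrate_threshold` is a hypothetical name, each row of `z_train` is assumed to be one normal-face image already mapped into the K-dimensional standard-normal distribution space by the converged face mapping model, and the likelihood is again taken directly against the standard normal.

```python
import numpy as np

def calibrate_threshold(z_train):
    """Face attack detection threshold tau_dect: the average of the
    training-sample NLL values under the K-dimensional standard normal
    (mean of the "training sample negative log likelihood value set")."""
    z_train = np.asarray(z_train, dtype=float)
    k = z_train.shape[1]
    nll = 0.5 * k * np.log(2.0 * np.pi) + 0.5 * np.sum(z_train ** 2, axis=1)
    return float(np.mean(nll))
```

Because the threshold is derived only from normal-face images, no adversarial or camouflaged samples are needed at calibration time, which is the source of the reduced construction cost claimed for the system.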
The foregoing is merely a preferred embodiment of the application. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any modifications, equivalents, and alternatives falling within the spirit and principles of the application are intended to be included within the scope of the application.

Claims (10)

1. A method of detecting a face attack, comprising the steps of:
s1: constructing a face image dataset;
the face image data set is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis;
s2: training a face mapping model;
constructing the face mapping model through the face image data set, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; mapping the face image to a normal face distribution space by adopting the face mapping model;
s3: constructing a face attack detection threshold;
the face attack detection threshold is used for judging whether the face image contains the camouflage or not;
s4: acquiring a face image to be detected;
collecting an image to be detected, and processing the image to be detected to obtain a face image to be detected;
s5: calculating face parameters to be detected to obtain a face attack detection result;
inputting the face image to be detected into the face mapping model, and calculating the face parameters to be detected;
and obtaining the face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
2. The method according to claim 1, wherein in step S1 the face image dataset comprises the normal face image set in a MegaFace dataset or a CelebFaces dataset.
3. The method according to claim 1, wherein in step S2, the face mapping model is formed by connecting the face feature extraction model and the face feature mapping model in series, the output of the face feature extraction model is an input of the face feature mapping model, and the output of the face feature extraction model is a one-dimensional vector.
4. The method according to claim 1, wherein in step S2, the normal face distribution space is a distribution space defined from the face image dataset.
5. The method according to claim 1, wherein in step S2, the step of training a face mapping model comprises:
s21: inputting the face image data set into the face mapping model, obtaining a face feature set through the face feature extraction model, and inputting the face feature set into the face feature mapping model to obtain the distribution of each face image in the face image data set in the normal face distribution space;
s22: and calculating the error of the face mapping model by adopting a loss function, and updating parameters of the face mapping model until the face mapping model converges.
6. The method according to claim 1, wherein in step S2, the face feature extraction model is a deep convolutional neural network, the face feature mapping model is a standard flow model, and the normal face distribution space is a multidimensional standard normal distribution.
7. The method according to claim 1, wherein in step S3, the step of constructing a face attack detection threshold is:
s31: calculating the distribution of each face image in the face image data set in the normal face distribution space when the face mapping model converges to obtain optimal training distribution;
s32: according to the optimal training distribution, calculating the negative log likelihood value of each face image in the face image data set to obtain a training sample negative log likelihood value set;
s33: and calculating the average value of the training sample negative log likelihood value set to obtain the face attack detection threshold.
8. The method according to claim 1, wherein in step S5, the step of calculating the face parameter to be detected is:
s51: inputting the face image to be detected into the face mapping model, and outputting the distribution of the face image to be detected in the normal face distribution space to obtain distribution to be detected;
s52: and calculating the negative log likelihood value of the face image to be detected according to the distribution to be detected to obtain the face parameter to be detected.
9. The method according to claim 1, wherein in step S5, the face attack detection result is obtained according to a magnitude relation between the face parameter to be detected and the face attack detection threshold value:
when the face parameter to be detected is greater than the face attack detection threshold, the face attack detection result is that the camouflage is contained, and an abnormal image is output;
and when the face parameter to be detected is less than or equal to the face attack detection threshold, the face attack detection result is that the camouflage is not contained, and a normal image is output.
10. A system for detecting a face attack, comprising: the face detection system comprises a face image data set construction module, a face mapping model training module, a face attack detection threshold calculation module, a face image to be detected construction module and a face parameter calculation and judgment module;
the face image data set construction module is used for acquiring a face image data set, wherein the face image data set is a normal face image set; the face image in the normal face image set is a face image without camouflage, and the camouflage comprises a face mask or a video face or face synthesis;
the face mapping model training module is used for training a face mapping model, and constructing the face mapping model through the face image data set, wherein the face mapping model comprises a face feature extraction model and a face feature mapping model; mapping the face image to a normal face distribution space by adopting the face mapping model;
the face attack detection threshold calculation module is used for calculating a face attack detection threshold, and the face attack detection threshold is used for judging whether the face image contains the disguise or not;
the face image to be detected construction module is used for acquiring a face image to be detected;
the face parameter calculation and judgment module is used for calculating the face parameters to be detected and obtaining a face attack detection result according to the face parameters to be detected;
inputting the face image to be detected into the face mapping model, and calculating the face parameters to be detected;
and obtaining the face attack detection result according to the relation between the face parameter to be detected and the face attack detection threshold.
CN202310589807.2A 2023-05-24 2023-05-24 Method and system for detecting face attack Active CN116844198B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310589807.2A CN116844198B (en) 2023-05-24 2023-05-24 Method and system for detecting face attack

Publications (2)

Publication Number Publication Date
CN116844198A true CN116844198A (en) 2023-10-03
CN116844198B CN116844198B (en) 2024-03-19

Family

ID=88158906

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310589807.2A Active CN116844198B (en) 2023-05-24 2023-05-24 Method and system for detecting face attack

Country Status (1)

Country Link
CN (1) CN116844198B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096823A (en) * 2011-02-12 2011-06-15 厦门大学 Face detection method based on Gaussian model and minimum mean-square deviation
KR20190109772A (en) * 2018-02-28 2019-09-27 동국대학교 산학협력단 Apparatus and method for detection presentaion attack for face recognition system
CN112200057A (en) * 2020-09-30 2021-01-08 汉王科技股份有限公司 Face living body detection method and device, electronic equipment and storage medium
CN112200075A (en) * 2020-10-09 2021-01-08 西安西图之光智能科技有限公司 Face anti-counterfeiting method based on anomaly detection
CN112668519A (en) * 2020-12-31 2021-04-16 声耕智能科技(西安)研究院有限公司 Abnormal face recognition living body detection method and system based on MCCAE network and Deep SVDD network
CN115731620A (en) * 2022-11-04 2023-03-03 支付宝(杭州)信息技术有限公司 Method for detecting counter attack and method for training counter attack detection model
CN115761837A (en) * 2022-10-21 2023-03-07 北京经纬信息技术有限公司 Face recognition quality detection method, system, device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant