CN114898450A - Adversarial face mask sample generation method and system based on a generative model - Google Patents

Adversarial face mask sample generation method and system based on a generative model

Info

Publication number
CN114898450A
Authority
CN
China
Prior art keywords
mask
face
adversarial
sample
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210823234.0A
Other languages
Chinese (zh)
Other versions
CN114898450B (en)
Inventor
董晶
王伟
彭勃
王丽
王建文
项伟
宋宗泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Automation of Chinese Academy of Science
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation of Chinese Academy of Science filed Critical Institute of Automation of Chinese Academy of Science
Priority to CN202210823234.0A priority Critical patent/CN114898450B/en
Publication of CN114898450A publication Critical patent/CN114898450A/en
Application granted granted Critical
Publication of CN114898450B publication Critical patent/CN114898450B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method and system for generating adversarial face mask samples based on a generative model. The method comprises the following steps: constructing a face-mask shape matrix from the positions of the facial feature points in the adversarial face image, and simulating the curvature of a mask in physical space through a bend-rotate mapping transformation to obtain a mask-shaped adversarial mask; generating a random face image with a face generation model, processing it with the adversarial mask generation method to obtain a mask-shaped adversarial perturbation, and combining that perturbation with the adversarial face image into an adversarial sample; and inputting the adversarial sample together with a database face image into the attacked face recognition network to construct an overall adversarial attack training network. The resulting sample can be printed in physical space; it not only causes a face recognition system to misidentify faces, but the adversarial samples also transfer across different face recognition models.

Description

Adversarial face mask sample generation method and system based on a generative model
Technical Field
The invention belongs to the field of face recognition, and in particular relates to a method and system for generating adversarial face mask samples based on a generative model.
Background
In recent years, with the development of deep learning, face recognition systems based on deep neural networks have become an efficient and accurate means of identity verification and are widely used in daily life. However, deep networks are vulnerable to physically realizable adversarial attacks: adding an adversarial perturbation to the input can deceive a neural network into producing an incorrect output. This vulnerability means that face recognition systems can likewise be spoofed, posing serious security problems for real-world applications. Existing adversarial attacks mainly add imperceptible perturbations to an original sample in the digital space so that a deep neural network outputs a desired classification with high confidence; such attacks not only lack black-box transferability but also cannot be realized in the physical space.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for generating adversarial face mask samples based on a generative model.
The invention discloses, in a first aspect, a method for generating adversarial face mask samples based on a generative model, comprising:
Step S1: obtaining an adversarial face image A and a database sample image X.
Step S2: preprocessing the face image data: locating the faces in the adversarial face image and the database sample image with a face detection method, cropping the face regions, and converting the resolution of the cropped face regions with an interpolation algorithm, to obtain an adversarial face image A_600 and a database face image X_T.
Step S3: constructing an adversarial mask in the shape of a face mask: building a face-mask shape matrix P from the positions of the facial feature points in the adversarial face image A_600, and simulating the curvature of a mask in physical space by applying a bend-rotate mapping transformation to the shape matrix, yielding a mask-shaped adversarial mask M_1.
Step S4: constructing an adversarial sample generator: generating a random face image with the face generation model, processing it with the mask-shaped adversarial mask generation method of step S3 to obtain a mask-shaped adversarial perturbation M_S, and integrating M_S, M_1 and the adversarial face image A_600 into an adversarial sample M_A that combines the mask perturbation with the face region.
Step S5: inputting the adversarial sample M_A and the database face image X_T into the attacked face recognition network F to construct the overall adversarial attack training network.
Step S6: constructing the total loss function for training the overall adversarial attack training network.
Step S7: setting a model optimization algorithm: optimizing the parameters and weights of the overall adversarial attack training network with a parameter optimization algorithm.
Step S8: repeatedly training the overall adversarial attack training network with the objective of minimizing the total loss, adjusting the network parameters until they are stable; saving the model parameters and weights of the network, saving the finally optimized mask-shaped adversarial sample image matrix, and printing it to obtain the final physical-space adversarial sample.
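The repeat-until-stable structure of steps S7 and S8 can be illustrated with a minimal loop. The sketch below is not the patent's optimizer (which tunes network parameters with a gradient-based algorithm); it is a gradient-free stand-in that only shows the minimize-the-total-loss-until-stable idea, with `loss_fn` and the latent code `z0` as hypothetical stand-ins for the models described in the text.

```python
import numpy as np

def optimize_mask(loss_fn, z0, steps=200, sigma=0.05, seed=0):
    """Minimal stand-in for steps S7-S8: repeatedly perturb the latent
    code and keep only changes that lower the total loss."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    best = loss_fn(z)
    for _ in range(steps):
        cand = z + sigma * rng.standard_normal(z.shape)  # random proposal
        val = loss_fn(cand)
        if val < best:  # accept only improvements, so the loss never rises
            z, best = cand, val
    return z, best
```

In the patent the loop would instead backpropagate the total loss through the face generation model; the acceptance-only structure here merely mirrors "train repeatedly until the parameters are stable".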
According to the method of the first aspect of the present invention, in step S3, constructing the face-mask shape matrix P from the positions of the facial feature points in the adversarial face image comprises:
applying a facial feature point detection algorithm to the adversarial face image A_600 to obtain 68 facial feature points; selecting from them points 6, 9 and 12 (on the chin) and point 30 (on the nose), whose coordinates are a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12) and a_30 = (x_30, y_30); constructing a mask-shaped hexagon from these 4 key points, and deriving the positions of the remaining two hexagon vertices as a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*); the face-mask shape matrix P is then obtained from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6).
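The vertex construction above can be sketched as follows. This assumes the 68-point landmark layout of common detectors such as dlib, with the patent's 1-based point numbers mapped to 0-based array indices (an assumption about the indexing convention, not stated in the text).

```python
import numpy as np

def mask_hexagon(landmarks):
    """Build the six mask-shape vertices from four facial key points.

    `landmarks` is assumed to be a (68, 2) array of (x, y) feature
    points; patent points 6, 9, 12 (chin) and 30 (nose) become
    0-based indices 5, 8, 11 and 29 here.
    """
    a6, a9, a12, a30 = landmarks[5], landmarks[8], landmarks[11], landmarks[29]
    # y6* = y30 - (y9 - y6), per the construction in the text
    y_star = a30[1] - (a9[1] - a6[1])
    a6s = np.array([a6[0], y_star])    # a_6*  = (x_6,  y_6*)
    a12s = np.array([a12[0], y_star])  # a_12* = (x_12, y_6*)
    # vertex order around the hexagon: a6*, a6, a9, a12, a12*, a30
    return np.stack([a6s, a6, a9, a12, a12s, a30])
```

The returned 6×2 matrix is the shape matrix P from which the mask region is rasterized.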
According to the method of the first aspect of the present invention, in step S3, simulating the curvature of a mask in physical space by applying a bend-rotate mapping transformation to the face-mask shape matrix, to obtain the mask-shaped adversarial mask M_1, comprises:
cropping the mask region of the face-mask shape matrix and obtaining a physical-space face-mask sample matrix M through a digital-space transformation;
applying the bend-rotate mapping transformation to the mask sample matrix to simulate the curvature of a 3D face mask, yielding the adversarial mask M_1.
According to the method of the first aspect of the present invention, in step S3, the parameters of the bend-rotate mapping transformation include: bending curvature parameter l = 0.0018, rotation angle parameter θ = −5, and transformed matrix scaling parameter s = 0.465.
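The patent does not give the closed form of the bend-rotate mapping, so the sketch below is only one plausible interpretation: a quadratic vertical bend with curvature l about the horizontal center, followed by a planar rotation by θ degrees and a uniform scale s. The exact mapping used in the invention may differ.

```python
import numpy as np

def bend_rotate_scale(points, l=0.0018, theta_deg=-5.0, s=0.465):
    """Approximate the physical curvature of a worn mask on 2-D points
    (an assumed form of the patent's bend-rotate mapping)."""
    pts = np.asarray(points, dtype=float)
    cx = pts[:, 0].mean()
    bent = pts.copy()
    bent[:, 1] += l * (bent[:, 0] - cx) ** 2  # quadratic bend about the center
    th = np.deg2rad(theta_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])  # planar rotation by theta
    return s * bent @ R.T                      # rotate, then scale by s
```

Applied to the hexagon vertices (or to a dense pixel grid of the mask region), this produces the curved mask silhouette that is then rasterized into M_1.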
According to the method of the first aspect of the present invention, in step S4, generating a random face image with the face generation model and processing it with the mask-shaped adversarial mask generation method of step S3 to obtain the mask-shaped adversarial perturbation M_S comprises:
randomly generating a face image G_600 with a resolution of 600 × 600 using the face generation model, and extracting from it the mask-shaped adversarial perturbation M_S, i.e. M_S = G_600 · M_1, where · denotes the element-wise product.
According to the method of the first aspect of the present invention, in step S4, integrating the adversarial perturbation M_S, the adversarial mask M_1 and the adversarial face image A_600 into the adversarial sample M_A comprises:
keeping the elements of M_S inside the mask and zeroing those outside it to obtain the mask-region perturbation M_S · M_1; simultaneously zeroing the pixels of the adversarial face image A_600 inside the mask and keeping the remaining pixels to obtain the outside-mask face image A_600 · (1 − M_1); and fusing the two parts into the face adversarial sample M_A = M_S · M_1 + A_600 · (1 − M_1).
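The fusion formula M_A = M_S · M_1 + A_600 · (1 − M_1) is a plain element-wise composite and can be written directly; here M_1 is assumed to be a binary matrix (1 inside the mask region, 0 outside):

```python
import numpy as np

def compose_adversarial_sample(face, perturbation, mask):
    """Fuse the mask-region perturbation with the face outside the mask:
    M_A = M_S * M_1 + A_600 * (1 - M_1), all products element-wise."""
    mask = mask.astype(face.dtype)
    return perturbation * mask + face * (1.0 - mask)
```

Inside the mask the output takes the generated perturbation's pixels; everywhere else the original face is kept unchanged.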
According to the method of the first aspect of the present invention, in step S6, the total loss function is:

L(x) = L_sim(a, x) + β·L_tv(x) + γ·L_nps

wherein:
L(x) is the total loss function;
L_sim(a, x) is the adversarial sample generator loss, the cosine similarity between the feature embeddings:

L_sim(a, x) = (E_a · E_x) / (‖E_a‖ ‖E_x‖)

wherein E_a is the feature vector of the adversarial face image A sample, and E_x is the feature vector of the database sample image X sample;
L_tv(x) is the smoothness (total variation) loss of the adversarial mask:

L_tv(x) = Σ_{i,j} √((x_{i,j} − x_{i+1,j})² + (x_{i,j} − x_{i,j+1})²)

wherein x_{i,j} is the pixel value in row i, column j of the sample;
L_nps is the printer color-difference (non-printability) loss:

L_nps = Σ_{p_c ∈ p} Π_{s_p ∈ S} |p_c − s_p|

wherein p_c is a pixel value in the mask adversarial perturbation p, and s_p is a color in the set S of colors the printer can reproduce;
β and γ are the weighting factors of the face-mask smoothness loss and the printer color-difference loss, respectively.
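The three loss terms above can be sketched as plain NumPy functions. The exact equations were lost in extraction, so these follow the standard forms consistent with the symbol definitions in the text (cosine similarity over embeddings, total variation over pixels, non-printability as a sum of products of color distances); treat them as a reconstruction rather than the patent's verbatim formulas.

```python
import numpy as np

def cosine_similarity_loss(e_a, e_x):
    """L_sim: cosine similarity between the adversarial embedding E_a
    and the database embedding E_x."""
    return float(np.dot(e_a, e_x) / (np.linalg.norm(e_a) * np.linalg.norm(e_x)))

def tv_loss(x):
    """L_tv: total-variation smoothness over a 2-D perturbation image."""
    dh = x[:-1, :-1] - x[1:, :-1]   # x_{i,j} - x_{i+1,j}
    dw = x[:-1, :-1] - x[:-1, 1:]   # x_{i,j} - x_{i,j+1}
    return float(np.sqrt(dh ** 2 + dw ** 2).sum())

def nps_loss(perturbation, printable_colors):
    """L_nps: for each pixel p_c, the product of |p_c - s_p| over the
    printable color set S, summed over pixels."""
    p = np.asarray(perturbation, dtype=float).reshape(-1, 1)
    c = np.asarray(printable_colors, dtype=float).reshape(1, -1)
    return float(np.prod(np.abs(p - c), axis=1).sum())

def total_loss(e_a, e_x, pert, colors, beta=1.0, gamma=1.0):
    """L = L_sim + beta * L_tv + gamma * L_nps, with beta and gamma the
    weighting factors named in the text."""
    return (cosine_similarity_loss(e_a, e_x)
            + beta * tv_loss(pert) + gamma * nps_loss(pert, colors))
```

For a dodging attack the total loss is minimized, driving the embeddings apart while keeping the printed mask smooth and printable.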
The invention discloses, in a second aspect, a system for generating adversarial face mask samples based on a generative model, comprising:
a first processing module configured to obtain an adversarial face image A and a database sample image X;
a second processing module configured to preprocess the face image data: locating the faces in the adversarial face image and the database sample image with a face detection method, cropping the face regions, and converting the resolution of the cropped face regions with an interpolation algorithm, to obtain an adversarial face image A_600 and a database face image X_T;
a third processing module configured to construct an adversarial mask in the shape of a face mask: building a face-mask shape matrix P from the positions of the facial feature points in the adversarial face image A_600, and simulating the curvature of a mask in physical space by applying a bend-rotate mapping transformation to the shape matrix, yielding a mask-shaped adversarial mask M_1;
a fourth processing module configured to construct an adversarial sample generator: generating a random face image with the face generation model, processing it with the mask-shaped adversarial mask generation method of the third processing module to obtain a mask-shaped adversarial perturbation M_S, and integrating M_S, M_1 and the adversarial face image A_600 into an adversarial sample M_A that combines the mask perturbation with the face region;
a fifth processing module configured to input the adversarial sample M_A and the database face image X_T into the attacked face recognition network F to construct the overall adversarial attack training network;
a sixth processing module configured to construct the total loss function for training the overall adversarial attack training network;
a seventh processing module configured to set a model optimization algorithm: optimizing the parameters and weights of the overall adversarial attack training network with a parameter optimization algorithm;
an eighth processing module configured to repeatedly train the overall adversarial attack training network with the objective of minimizing the total loss, adjusting the network parameters until they are stable, save the model parameters and weights of the network, save the finally optimized mask-shaped adversarial sample image matrix, and print it to obtain the final physical-space adversarial sample.
According to the system of the second aspect of the present invention, the third processing module being configured to construct the face-mask shape matrix P from the positions of the facial feature points in the adversarial face image comprises:
applying a facial feature point detection algorithm to the adversarial face image A_600 to obtain 68 facial feature points; selecting from them points 6, 9 and 12 (on the chin) and point 30 (on the nose), whose coordinates are a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12) and a_30 = (x_30, y_30); constructing a mask-shaped hexagon from these 4 key points, and deriving the positions of the remaining two hexagon vertices as a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*); the face-mask shape matrix P is then obtained from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6).
According to the system of the second aspect of the present invention, the third processing module being configured to simulate the curvature of a mask in physical space by applying a bend-rotate mapping transformation to the face-mask shape matrix, to obtain the mask-shaped adversarial mask M_1, comprises:
cropping the mask region of the face-mask shape matrix and obtaining a physical-space face-mask sample matrix M through a digital-space transformation;
applying the bend-rotate mapping transformation to the mask sample matrix to simulate the curvature of a 3D face mask, yielding the adversarial mask M_1.
According to the system of the second aspect of the present invention, in the third processing module, the parameters of the bend-rotate mapping transformation include: bending curvature parameter l = 0.0018, rotation angle parameter θ = −5, and transformed matrix scaling parameter s = 0.465.
According to the system of the second aspect of the present invention, the fourth processing module being configured to generate a random face image with the face generation model and process it with the mask-shaped adversarial mask generation method of the third processing module to obtain the mask-shaped adversarial perturbation M_S comprises:
randomly generating a face image G_600 with a resolution of 600 × 600 using the face generation model, and extracting from it the mask-shaped adversarial perturbation M_S, i.e. M_S = G_600 · M_1, where · denotes the element-wise product.
According to the system of the second aspect of the present invention, the fourth processing module being configured to integrate the adversarial perturbation M_S, the adversarial mask M_1 and the adversarial face image A_600 into the adversarial sample M_A comprises:
keeping the elements of M_S inside the mask and zeroing those outside it to obtain the mask-region perturbation M_S · M_1; simultaneously zeroing the pixels of the adversarial face image A_600 inside the mask and keeping the remaining pixels to obtain the outside-mask face image A_600 · (1 − M_1); and fusing the two parts into the face adversarial sample M_A = M_S · M_1 + A_600 · (1 − M_1).
According to the system of the second aspect of the present invention, in the sixth processing module, the total loss function is:

L(x) = L_sim(a, x) + β·L_tv(x) + γ·L_nps

wherein:
L(x) is the total loss function;
L_sim(a, x) is the adversarial sample generator loss, the cosine similarity between the feature embeddings:

L_sim(a, x) = (E_a · E_x) / (‖E_a‖ ‖E_x‖)

wherein E_a is the feature vector of the adversarial face image A sample, and E_x is the feature vector of the database sample image X sample;
L_tv(x) is the smoothness (total variation) loss of the adversarial mask:

L_tv(x) = Σ_{i,j} √((x_{i,j} − x_{i+1,j})² + (x_{i,j} − x_{i,j+1})²)

wherein x_{i,j} is the pixel value in row i, column j of the sample;
L_nps is the printer color-difference (non-printability) loss:

L_nps = Σ_{p_c ∈ p} Π_{s_p ∈ S} |p_c − s_p|

wherein p_c is a pixel value in the mask adversarial perturbation p, and s_p is a color in the set S of colors the printer can reproduce;
β and γ are the weighting factors of the face-mask smoothness loss and the printer color-difference loss, respectively.
A third aspect of the invention discloses an electronic device. The electronic device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the steps of the generative-model-based adversarial face mask sample generation method of any embodiment of the first aspect of the disclosure.
A fourth aspect of the invention discloses a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the generative-model-based adversarial face mask sample generation method of any embodiment of the first aspect of the present disclosure.
According to the scheme provided by the invention, the mask-patch adversarial sample is generated with a generative model, so it can be printed in physical space; it not only causes a face recognition system to misidentify faces but also achieves transferability of the adversarial samples across different face recognition models.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of the generative-model-based adversarial face mask sample generation method according to an embodiment of the present invention;
Fig. 2 is a schematic flow diagram of the overall model according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of the construction process of the mask-shaped adversarial mask according to an embodiment of the present invention;
Fig. 4 is a schematic flow diagram of the face-mask adversarial sample generator based on the face generation model according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the overall structure of adversarial sample generation according to an embodiment of the present invention;
Fig. 6 is an overall framework diagram of the adversarial face-mask generator according to an embodiment of the present invention;
Fig. 7 is a schematic diagram of an adversarial mask sample and its recognition result according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the recognition result of the adversarial face mask according to an embodiment of the present invention;
Fig. 9 is a block diagram of the generative-model-based adversarial face mask sample generation system according to an embodiment of the present invention;
fig. 10 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining," depending on the context.
Example 1:
The invention discloses a method for generating adversarial face mask samples based on a generative model. Fig. 1 is a flowchart of the method according to an embodiment of the present invention; the implementation is described in detail using a self-built face database and the face recognition algorithm ArcFace as an example. The database consists of 300 pictures of 30 different person IDs, each ID containing 10 pictures on average, each with a resolution of 1280 × 720. The data in the database are used to train the adversarial face-mask generator against ArcFace as the attacked face recognition system, and a test set is used for evaluation. The experiments were run on Ubuntu 18.04 with Python 3.6.7, TensorFlow 2.0.0, CUDA 10.0.0 and cuDNN 7.4.0. As shown in figs. 1 and 2, the method includes:
step S1, obtaining a confrontation face image A and a database sample image X;
step S2, preprocessing the face image data: acquiring the face positions in the confrontation face image and the database sample image by using a face detection method, cutting the face images in the face position areas in the confrontation face image and the database sample image, and converting the resolution of the face images in the face position areas in the cut confrontation face image and the database sample image by adopting an interpolation algorithm; obtaining a confrontation face image A 600 And database face image X T
Cutting the confrontation face image and the face image in the face position area in the database sample image, and converting the cut face image into the confrontation face image A with the resolution of 600 multiplied by 600 by adopting an interpolation algorithm 600 And a database face image X with a resolution of 112X 112 T (ii) a Step S3, constructing a confrontation mask in the shape of a face mask: according to the confrontation face image A 600 Constructing a face mask shape matrix P at the positions of the characteristic points of the Chinese face, and simulating the radian transformation of the mask in a physical space by the face mask shape matrix through bending rotation mapping transformation to obtain a confrontation mask M in the shape of the mask 1
Step S4, constructing a confrontation sample generator: generating a random face image according to the face generation model, and processing the random face image by using the face and mask shape confrontation mask generation method in the step S3 to obtain a mask confrontation disturbance mask M S Comparing the countermeasure mask Ms with the countermeasure mask M 1 And confrontation of the face image A 600 Integrating to obtain a confrontation sample M formed by combining the confrontation disturbance of the mask and the face area A
Step S5, inputting the confrontation sample M_A and the database face image X_T into the attacked face recognition system network F to construct an integral anti-attack training network;
step S6, constructing a total loss function of the training of the integral anti-attack training network;
step S7, setting a model optimization algorithm: optimizing parameters and weights of the integral anti-attack training network by adopting a parameter optimization algorithm;
and step S8, repeatedly training the integral anti-attack training network with minimization of the total loss function as the target, continuously optimizing and adjusting the network parameters until they are stable; the model parameters and weights of the integral anti-attack training network are saved, the confrontation sample image matrix in the shape of the face mask produced by the final training and optimization is stored, and the final physical-space confrontation sample is obtained by printing.
In step S1, the confrontation face image a and the database sample image X are acquired.
Specifically, a confrontation face image a and a database sample image X are obtained, and the resolution of an initial image is as follows: w × H × C, where W, H and C respectively refer to the width, height and color channel number of the face region image, and in this embodiment, W is 1280, H is 720, and C is 3.
In step S2, the face image data are preprocessed: the face positions in the confrontation face image and the database sample image are acquired with a face detection method, the face images in the face position areas are cut out, and an interpolation algorithm converts their resolutions to obtain the confrontation face image A_600 with a resolution of 600 × 600 and the database face image X_T with a resolution of 112 × 112.
In some embodiments, in step S2, the face detection method extracts a face image from the sample image; the face detection on the initial sample image may use any one of the MTCNN-, Dlib-, OpenCV- or OpenFace-based face detection methods, and the interpolation algorithm may use any one of linear interpolation, nearest-neighbor interpolation or the Lanczos interpolation algorithm to obtain a face image at the fixed input resolution.
Specifically, it is determined that the input resolution in the confrontation sample generator is W1 × H1 × C1, and the resolution of the network input image in the attacked face recognition system is W2 × H2 × C2, where in this embodiment, W1 is 600, H1 is 600, C1 is 3, W2 is 112, H2 is 112, and C2 is 3;
an MTCNN-based face detection method is used to obtain the face position in the image, and a linear interpolation algorithm converts the confrontation face image into a confrontation face image A_600 with a resolution of 600 × 600 × 3 and the database sample image X into a database face image X_T with a resolution of 112 × 112 × 3; all processed images are stored as input samples of the network.
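As an illustration of this preprocessing step, a minimal NumPy-only sketch of the crop-and-resize is given below; the face box is assumed to be supplied by a detector such as MTCNN, and the helper names (`bilinear_resize`, `crop_face`) are illustrative, not part of the claimed method.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize an H x W x C image with bilinear (linear) interpolation."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]       # vertical interpolation weights
    wx = (xs - x0)[None, :, None]       # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def crop_face(img, box, out_size):
    """Cut out the detected face box (x, y, w, h) and resize it to out_size."""
    x, y, w, h = box                    # box assumed to come from MTCNN/Dlib/OpenCV
    crop = img[y:y + h, x:x + w].astype(float)
    return bilinear_resize(crop, out_size, out_size)
```

With a 1280 × 720 input, `crop_face(adv_image, box, 600)` would produce the 600 × 600 generator input A_600 and `crop_face(db_image, box, 112)` the 112 × 112 recognizer input X_T.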
In step S3, a confrontation mask in the shape of a face mask is constructed: a face mask shape matrix P is constructed according to the positions of the facial feature points in the confrontation face image A_600, and the face mask shape matrix imitates the arc of a mask in physical space through a bending-rotation mapping transformation to obtain a mask-shaped confrontation mask M_1 with a resolution of 600 × 600.
In some embodiments, in step S3, the method for constructing a facial mask shape matrix P according to the positions of the facial feature points in the confrontation facial image includes:
a face feature point detection algorithm is applied to the confrontation face image A_600 to detect 68 facial feature points; points 6, 9 and 12 (on the chin) and point 30 (on the nose) are selected from the 68 feature points, and the coordinates of these 4 key points are a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12) and a_30 = (x_30, y_30). A mask-shaped hexagon is constructed according to the positions of these 4 key points, and the positions of the remaining two points of the face mask hexagon are obtained from them as a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*); the face mask shape matrix P is obtained from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6).
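As a sketch of this construction, the six hexagon vertices can be computed from a (68, 2) landmark array as follows; the patent's 1-based point numbering maps to 0-based indices, and the vertex ordering chosen here for later polygon filling is an assumption.

```python
import numpy as np

def mask_hexagon(landmarks):
    """Return the 6 vertices of the face-mask hexagon from 68 facial landmarks."""
    a6, a9, a12 = landmarks[5], landmarks[8], landmarks[11]   # chin points 6, 9, 12
    a30 = landmarks[29]                                       # nose point 30
    y6s = a30[1] - (a9[1] - a6[1])                            # y_6* = y_30 - (y_9 - y_6)
    a6s = np.array([a6[0], y6s])                              # a_6*  = (x_6, y_6*)
    a12s = np.array([a12[0], y6s])                            # a_12* = (x_12, y_6*)
    # One closed traversal of the hexagon: nose, upper-left, left chin,
    # chin bottom, right chin, upper-right.
    return np.stack([a30, a6s, a6, a9, a12, a12s])
```

Filling the returned polygon (for example with a scan-line rasterizer) yields the binary face mask shape matrix P.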
Obtaining the mask-shaped confrontation mask M_1 by having the face mask shape matrix imitate the arc of a mask in physical space through the bending-rotation mapping transformation includes:
cutting out the mask area of the face mask shape matrix and obtaining a physical-space face mask sample matrix M through a digital spatial transformation;
having the mask sample matrix imitate the arc of a 3D face mask through the bending-rotation mapping transformation to obtain the confrontation mask M_1.
The parameters of the bending-rotation mapping transformation include: bending arc parameter l = 0.0018, rotation angle parameter θ = −5, and transformed matrix scaling parameter s = 0.465.
Specifically, as shown in fig. 4, a face feature point detection algorithm is applied to the 600 × 600 × 3 confrontation face image A_600 to detect 68 facial feature points; points 6, 9 and 12 (on the chin) and point 30 (on the nose) are selected from the 68 feature points, with key point coordinates a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12) and a_30 = (x_30, y_30). A mask-shaped hexagon is constructed according to these 4 key points, the positions of the remaining two points of the face mask hexagon are obtained as a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*), and the face mask shape matrix P is obtained from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6);
The face mask shape matrix imitates the arc of a mask in physical space through the bending-rotation mapping transformation to obtain the mask-shaped confrontation mask M_1, which includes:
cutting out the mask area of the face mask shape matrix and obtaining a physical-space face mask sample matrix M through a digital spatial transformation:
M = stn(P, s)
having the mask sample matrix imitate the arc of a 3D face mask through the bending-rotation mapping transformation to obtain the confrontation mask M_1:
M_1 = f(M, l, θ);
where stn is a digital spatial transformation function that applies a scaling transformation to the face mask shape matrix P; f is the mapping transformation function that applies the bending and rotation transformation to the mask sample matrix M; l is the bending arc parameter, θ is the rotation angle parameter, and s is the scaling parameter of the transformed matrix;
in this embodiment, the parameters of the bending-rotation mapping transformation are: bending arc parameter l = 0.0018, rotation angle parameter θ = −5, and transformed matrix scaling parameter s = 0.465.
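The transforms M = stn(P, s) and M_1 = f(M, l, θ) can be sketched numerically on the hexagon's vertex coordinates; the parabolic form of the bend below is an assumption, since the patent only states that the mapping imitates the arc of a 3D mask.

```python
import numpy as np

def stn_scale(pts, s, center):
    """Scale (x, y) points about the canvas center: the stn(P, s) step."""
    return center + s * (pts - center)

def bend_rotate(pts, l, theta_deg, center):
    """Parabolic vertical bend (arc parameter l) followed by a rotation by theta."""
    out = pts.astype(float)
    out[:, 1] = out[:, 1] + l * (out[:, 0] - center[0]) ** 2   # imitate the 3D mask arc
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return center + (out - center) @ rot.T

center = np.array([300.0, 300.0])                  # 600 x 600 canvas
hexagon = np.array([[300.0, 250.0], [150.0, 150.0], [100.0, 300.0],
                    [300.0, 450.0], [500.0, 300.0], [450.0, 150.0]])  # example vertices
M = stn_scale(hexagon, 0.465, center)              # s = 0.465
M1 = bend_rotate(M, 0.0018, -5.0, center)          # l = 0.0018, theta = -5
```

Applying the same coordinate mapping to every pixel of the shape matrix (by inverse warping) gives the image-space versions of stn and f.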
At step S4, the confrontation sample generator is constructed: a random face image is generated by the face generation model and processed with the face-mask-shaped confrontation mask generation method of step S3 to obtain the mask confrontation disturbance mask M_S; the confrontation disturbance mask M_S, the confrontation mask M_1 and the confrontation face image A_600 are integrated to obtain the confrontation sample M_A combining the mask confrontation disturbance with the face area.
In some embodiments, in step S4, generating the random face image by the face generation model and processing it with the face-mask-shaped confrontation mask generation method of step S3 to obtain the mask confrontation disturbance mask M_S includes:
randomly generating a face image G_600 with a resolution of 600 × 600 through the face generator model, and acquiring the face-mask-shaped disturbance matrix in the generated face image according to the confrontation mask: M_S = G_600 • M_1.
Integrating the confrontation disturbance mask M_S, the confrontation mask M_1 and the confrontation face image A_600 to obtain the confrontation sample M_A combining the mask confrontation disturbance with the face area includes:
keeping the elements inside the confrontation disturbance mask M_S and setting the elements outside the mask to zero to obtain the mask-area confrontation disturbance M_S • M_1; at the same time, the pixel values of the confrontation face image A_600 inside the mask are set to zero and the remaining pixel values are kept to obtain the outside-mask face image A_600 • (1 − M_1); the two partial images are fused to obtain the face confrontation sample, namely M_A = M_S • M_1 + A_600 • (1 − M_1).
Specifically, the flowchart of the confrontation sample generator based on the face generation model is shown in fig. 4 and 5: a pre-trained face generation model, StyleGAN, is loaded, a face image G_600 with a resolution of 600 × 600 is randomly generated by the face generator model, and the face-mask-shaped disturbance matrix M_S in the generated face image is acquired according to the face-mask-shaped confrontation mask: M_S = G_600 • M_1.
The elements inside the confrontation disturbance mask M_S are kept and the elements outside the mask are set to zero, giving the mask-area confrontation disturbance M_S • M_1; at the same time, the pixel values of the confrontation face image A_600 inside the mask are set to zero and the remaining pixel values are kept, giving the outside-mask face image A_600 • (1 − M_1); the two partial images are fused to obtain the face confrontation sample, namely M_A = M_S • M_1 + A_600 • (1 − M_1);
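The fusion M_A = M_S • M_1 + A_600 • (1 − M_1) is elementwise and can be written directly; broadcasting a two-dimensional binary mask over the colour channels is the only detail added here.

```python
import numpy as np

def compose_adversarial(A600, Ms, M1):
    """Fuse mask-region perturbation and outside-mask face: Ms*M1 + A600*(1 - M1)."""
    if M1.ndim == 2:                    # broadcast H x W binary mask over channels
        M1 = M1[..., None]
    return Ms * M1 + A600 * (1.0 - M1)
```

Inside the mask region (M_1 = 1) the pixel comes from the generated disturbance; everywhere else it is the original confrontation face image.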
Generating confrontation samples with the face generation model in this embodiment improves the transferability of the confrontation mask samples across different recognition networks.
In step S5, the confrontation sample M_A and the database face image X_T are input into the attacked face recognition system network F to construct the integral anti-attack training network.
Specifically, the confrontation face image A_600 is input into the confrontation mask sample generator network to obtain the face confrontation sample M_A, and the confrontation sample M_A and the database face image X_T are input into the attacked face recognition network to construct the integral training network. As shown in fig. 6, the integral training network has four inputs: the noise sequence, the confrontation mask, the confrontation face image and the data sample image. The noise sequence is turned into a confrontation disturbance matrix by the generator, which is combined with the mask and the confrontation face image to construct a training confrontation sample; this sample and the data sample image are input into the ArcFace face recognition network to obtain the feature vector E_a of the confrontation face image and the feature vector E_T of the database sample face image, and E_a and E_T are fed into the training network for training and optimization. The ArcFace network model and parameters used in this embodiment are pre-trained and kept frozen during training.
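Schematically, the feature-extraction part of this training network reduces to embedding both images with the frozen recognizer and comparing the embeddings; the linear `embed` below is a stand-in for the frozen pre-trained ArcFace model and is an assumption for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((112 * 112 * 3, 128))  # frozen toy "recognizer" weights

def embed(img112):
    """Map a 112 x 112 x 3 image to a unit-norm feature vector (ArcFace stand-in)."""
    v = img112.reshape(-1) @ W
    return v / np.linalg.norm(v)

def cosine(e1, e2):
    """Cosine similarity of two unit-norm embeddings."""
    return float(e1 @ e2)

# E_a = embed(M_A resized to 112 x 112); E_T = embed(X_T); the attack drives
# cosine(E_a, E_T) down so the recognizer no longer matches the target identity.
```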
At step S6, a total loss function for the training of the global training network against attacks is constructed.
In some embodiments, in the step S6, the total loss function is:
L(x) = L_sim(a, x) + β·L_tv(x) + γ·L_nps
wherein:
L(x) is the total loss function;
L_sim(a, x) is the confrontation sample generator loss function,
L_sim(a, x) = (E_a · E_x) / (‖E_a‖ · ‖E_x‖)
where E_a is the feature vector of the confrontation face image a sample and E_x is the feature vector of the database sample image X sample;
L_tv(x) is the smoothness loss function of the confrontation mask,
L_tv(x) = Σ_(i,j) √((x_(i,j) − x_(i+1,j))² + (x_(i,j) − x_(i,j+1))²)
where x_(i,j) is the pixel value in row i and column j of the sample;
L_nps is the printer color difference loss function,
L_nps = Σ_(p_c ∈ M_S) Π_(s_p ∈ S_A) |p_c − s_p|
where p_c is a pixel value of the confrontation mask disturbance p and s_p is a color value in the set S_A of colors printable by the printer;
β and γ are the weighting factors of the face mask smoothness loss function and the printer color difference loss function, respectively.
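A NumPy sketch of the three loss terms is given below. The cosine-similarity form of L_sim, the total-variation form of L_tv and the non-printability form of L_nps are reconstructions from the definitions above, not formulas quoted verbatim from the patent.

```python
import numpy as np

def l_sim(Ea, Ex):
    """Confrontation sample generator loss: cosine similarity of the two features."""
    return float(Ea @ Ex / (np.linalg.norm(Ea) * np.linalg.norm(Ex)))

def l_tv(x):
    """Total-variation smoothness of a 2-D mask image x."""
    dy = x[1:, :] - x[:-1, :]
    dx = x[:, 1:] - x[:, :-1]
    return float(np.sqrt(dy[:, :-1] ** 2 + dx[:-1, :] ** 2).sum())

def l_nps(perturbation, printable):
    """Non-printability score: product over printable colours of |p - s|, summed."""
    diffs = np.abs(perturbation.reshape(-1, 1) - printable.reshape(1, -1))
    return float(diffs.prod(axis=1).sum())

def total_loss(Ea, Ex, mask_img, printable, beta, gamma):
    """L(x) = L_sim + beta * L_tv + gamma * L_nps."""
    return l_sim(Ea, Ex) + beta * l_tv(mask_img) + gamma * l_nps(mask_img, printable)
```

A pixel whose value equals one of the printable colours contributes zero to L_nps, so minimizing it pushes the mask toward colours the printer can reproduce.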
In step S7, a model optimization algorithm is set: and optimizing the parameters and the weights of the integral anti-attack training network by adopting a parameter optimization algorithm.
Specifically, the database sample images X are input into the network in batches for training to obtain the mask-shaped confrontation disturbance mask M_S; the network parameters of the integral anti-attack training network are adjusted and optimized by minimizing the total loss function L, continuously optimizing the values of the pixel points in the mask shape matrix sequence.
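The optimization of step S7 can be illustrated with a toy gradient-descent loop on a linear stand-in recognizer; the real pipeline would use TensorFlow with a parameter optimizer, and everything below (the stand-in model, the learning rate, the step count) is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(64)              # frozen stand-in "recognizer" weights
target = w / np.linalg.norm(w)           # database embedding E_T (unit norm)
x = rng.standard_normal(64)              # mask pixels being optimised
lr = 0.1

def sim(v):
    """Cosine similarity of v with the target embedding (the L_sim term)."""
    return float(v @ target / np.linalg.norm(v))

s0 = sim(x)                              # similarity before optimisation
for _ in range(200):
    n = np.linalg.norm(x)
    grad = target / n - (x @ target) * x / n ** 3  # analytic d L_sim / d x
    x -= lr * grad                                 # descend: reduce similarity

# sim(x) is now driven below its starting value s0
```

In the full method the gradient flows through the frozen recognizer and the generator, and the total loss (similarity plus smoothness and printability terms) replaces the bare similarity used here.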
In step S8, the overall anti-attack training network is repeatedly trained, the total loss function is minimized as a target, network parameters are continuously optimized and adjusted until the network parameters are stable, model parameters and weights of the overall anti-attack training network are saved, an anti-sample image matrix in the shape of the face mask generated by final training and optimization is saved, and a final physical space anti-sample is obtained by printing.
Specifically, the operation of step S7 is repeated. The overall framework of the confrontation mask generator is shown in fig. 6: the confrontation mask shape matrix is mapped to obtain a mask-shaped confrontation mask M with a resolution of 600 × 600; the confrontation mask is combined with the confrontation face image to obtain a confrontation sample image, which is input into the training network for feature extraction and model training. The integral anti-attack training network is continuously trained and optimized according to the loss function until the network parameters are stable and model training is complete; the final mask-shaped hexagonal matrix P is stored as an image, and the confrontation sample image is printed to obtain a confrontation mask sample with an attack effect in physical space.
In this embodiment, for the ArcFace face recognition system attacked in physical space, the mask confrontation samples obtained after training the face confrontation mask network are shown in fig. 7: the first image is the initialized confrontation sample image generated by the confrontation mask generator, the second image is the confrontation disturbance mask image produced by the final training of the network, and the third image is the finally generated face mask confrontation sample; the sample image can be printed to realize a physical-space attack on the face recognition system.
The printed confrontation mask is tested with target person No. 1 wearing it. The recognition results of the face confrontation mask are illustrated in fig. 8: the rectangle in each image is the face position frame of the person, above the frame is the person ID with the highest recognition similarity, and below the frame is the similarity value of the recognition result. In the first image, face recognition without a mask correctly recognizes target person No. 1; in the second image, recognition with an ordinary mask still correctly recognizes target person No. 1; in the third image, recognition with the printed confrontation sample mask fails, the target person being misidentified with a similarity of 0.1866. The experimental results are shown in table 1.
TABLE 1

Sample                 | Target person 1           | Target person 1 (ordinary mask) | Target person 1 (confrontation mask)
Recognition similarity | 84.98%                    | 58.01%                          | 18.66%
Recognition result     | Target person 1 (correct) | Target person 1 (correct)       | Non-target person (recognition error)
The experimental results show that the confrontation samples generated by the method are of high quality and cause the ArcFace face recognition system to output wrong classification results in physical space, which demonstrates the effectiveness of the method.
In summary, the scheme provided by the invention can generate confrontation samples of mask patches with a generation model that can be printed in physical space; it not only makes the face recognition system misrecognize the face, but also achieves transferability of the confrontation samples across different face recognition models.
Example 2:
the invention discloses a face confrontation mask sample generation system based on a generation model. FIG. 9 is a block diagram of a face confrontation mask sample generation system based on a generation model according to an embodiment of the invention; as shown in fig. 9, the system 100 includes:
a first processing module 101 configured to acquire a confrontation face image a and a database sample image X;
the second processing module 102 is configured to preprocess the face image data: acquiring the face positions in the confrontation face image and the database sample image with a face detection method, cutting out the face images in the face position areas, and converting their resolutions with an interpolation algorithm to obtain the confrontation face image A_600 and the database face image X_T;
a third processing module 103 configured to construct a confrontation mask in the shape of a face mask: a face mask shape matrix P is constructed according to the positions of the facial feature points in the confrontation face image A_600, and the face mask shape matrix imitates the arc of a mask in physical space through the bending-rotation mapping transformation to obtain the mask-shaped confrontation mask M_1;
a fourth processing module 104 configured to construct the confrontation sample generator: a random face image is generated by the face generation model and processed with the face-mask-shaped confrontation mask generation method of step S3 to obtain the mask confrontation disturbance mask M_S; the confrontation disturbance mask M_S, the confrontation mask M_1 and the confrontation face image A_600 are integrated to obtain the confrontation sample M_A combining the mask confrontation disturbance with the face area;
a fifth processing module 105 configured to input the confrontation sample M_A and the database face image X_T into the attacked face recognition system network F to construct the integral anti-attack training network;
a sixth processing module 106 configured to construct a total loss function of the training of the overall anti-attack training network;
a seventh processing module 107 configured to set a model optimization algorithm: optimizing parameters and weights of the integral anti-attack training network by adopting a parameter optimization algorithm;
and the eighth processing module 108 is configured to continuously optimize and adjust network parameters until the network parameters are stable by repeatedly training the overall anti-attack training network and aiming at the minimization of the total loss function, store model parameters and weights of the overall anti-attack training network, store an anti-sample image matrix in the shape of the face mask generated by final training and optimization, and obtain a final physical space anti-sample by printing.
According to the system of the second aspect of the present invention, the third processing module 103 is configured to construct the face mask shape matrix P according to the positions of the facial feature points in the confrontation face image, which includes:
applying a face feature point detection algorithm to the confrontation face image A_600 to detect 68 facial feature points; points 6, 9 and 12 (on the chin) and point 30 (on the nose) are selected from the 68 feature points, with key point coordinates a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12) and a_30 = (x_30, y_30); a mask-shaped hexagon is constructed according to these 4 key points, the positions of the remaining two points of the face mask hexagon are obtained as a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*), and the face mask shape matrix P is obtained from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6).
According to the system of the second aspect of the present invention, the third processing module 103 is configured so that the face mask shape matrix imitates the arc of a mask in physical space through the bending-rotation mapping transformation to obtain the mask-shaped confrontation mask M_1, which includes:
cutting out the mask area of the face mask shape matrix and obtaining the physical-space face mask sample matrix M through a digital spatial transformation;
having the mask sample matrix imitate the arc of a 3D face mask through the bending-rotation mapping transformation to obtain the confrontation mask M_1.
According to the system of the second aspect of the present invention, in the third processing module 103 the parameters of the bending-rotation mapping transformation include: bending arc parameter l = 0.0018, rotation angle parameter θ = −5, and transformed matrix scaling parameter s = 0.465.
According to the system of the second aspect of the present invention, the fourth processing module is configured to generate a random face image by the face generation model and process it with the face-mask-shaped confrontation mask generation method of step S3 to obtain the mask confrontation disturbance mask M_S, which includes:
randomly generating a face image G_600 with a resolution of 600 × 600 through the face generation model, and obtaining the face-mask-shaped confrontation disturbance mask M_S in the generated face image according to the confrontation mask, namely M_S = G_600 • M_1.
According to the system of the second aspect of the present invention, the fourth processing module 104 is configured to integrate the confrontation disturbance mask M_S, the confrontation mask M_1 and the confrontation face image A_600 to obtain the confrontation sample M_A combining the mask confrontation disturbance with the face area, which includes:
keeping the elements inside the confrontation disturbance mask M_S and setting the elements outside the mask to zero to obtain the mask-area confrontation disturbance M_S • M_1; at the same time, the pixel values of the confrontation face image A_600 inside the mask are set to zero and the remaining pixel values are kept to obtain the outside-mask face image A_600 • (1 − M_1); the two partial images are fused to obtain the face confrontation sample, namely M_A = M_S • M_1 + A_600 • (1 − M_1).
The system according to the second aspect of the present invention, wherein the sixth processing module 106 is configured so that the total loss function is:
L(x) = L_sim(a, x) + β·L_tv(x) + γ·L_nps
wherein:
L(x) is the total loss function;
L_sim(a, x) is the confrontation sample generator loss function,
L_sim(a, x) = (E_a · E_x) / (‖E_a‖ · ‖E_x‖)
where E_a is the feature vector of the confrontation face image a sample and E_x is the feature vector of the database sample image X sample;
L_tv(x) is the smoothness loss function of the confrontation mask,
L_tv(x) = Σ_(i,j) √((x_(i,j) − x_(i+1,j))² + (x_(i,j) − x_(i,j+1))²)
where x_(i,j) is the pixel value in row i and column j of the sample;
L_nps is the printer color difference loss function,
L_nps = Σ_(p_c ∈ M_S) Π_(s_p ∈ S_A) |p_c − s_p|
where p_c is a pixel value of the confrontation mask disturbance p and s_p is a color value in the set S_A of colors printable by the printer;
β and γ are the weighting factors of the face mask smoothness loss function and the printer color difference loss function, respectively.
Example 3:
the invention discloses an electronic device. The electronic device comprises a memory and a processor, the memory storing a computer program; when the processor executes the computer program, the steps of the face confrontation mask sample generation method based on a generation model in embodiment 1 of the present invention are realized.
Fig. 10 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 10, the electronic device includes a processor, a memory, a communication interface, a display screen, and an input device, which are connected by a system bus. Wherein the processor of the electronic device is configured to provide computing and control capabilities. The memory of the electronic equipment comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the electronic device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, Near Field Communication (NFC) or other technologies. The display screen of the electronic equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the electronic equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the electronic equipment, an external keyboard, a touch pad or a mouse and the like.
It will be understood by those skilled in the art that the structure shown in fig. 10 is only a partial block diagram related to the technical solution of the present disclosure, and does not constitute a limitation of the electronic device to which the solution of the present application is applied, and a specific electronic device may include more or less components than those shown in the drawings, or combine some components, or have a different arrangement of components.
Example 4:
the invention discloses a computer-readable storage medium. The computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the face confrontation mask sample generation method based on a generation model in embodiment 1 of the present invention.
It should be noted that the technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between the combinations of these technical features, they should be considered within the scope of the present description. The above examples express only several embodiments of the present application; their description is specific and detailed, but should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A face confrontation mask sample generation method based on a generation model, characterized by comprising the following steps:
step S1, obtaining a confrontation face image A and a database sample image X;
step S2, preprocessing the face image data: obtaining the face positions in the confrontation face image and the database sample image by using a face detection method, and cropping the face images from the face position areas of the confrontation face image and the database sample image; converting the resolution of the cropped face images by an interpolation algorithm to obtain a confrontation face image A_600 and a database face image X_T;
step S3, constructing a confrontation mask in the shape of a face mask: constructing a face mask shape matrix P according to the positions of the face feature points in the confrontation face image A_600, and applying a bending-rotation mapping transformation to the face mask shape matrix to simulate the arc of a mask in physical space, obtaining a mask-shaped confrontation mask M_1;
step S4, constructing a confrontation sample generator: generating a random face image with the face generation model, and processing the random face image with the face mask shape confrontation mask generation method of step S3 to obtain a mask confrontation disturbance mask M_S; integrating the confrontation disturbance mask M_S with the confrontation mask M_1 and the confrontation face image A_600 to obtain a confrontation sample M_A combining the mask confrontation disturbance and the face area;
step S5, inputting the confrontation sample M_A and the database face image X_T into the attacked face recognition system network F to construct an overall anti-attack training network;
step S6, constructing the total loss function for training the overall anti-attack training network;
step S7, setting a model optimization algorithm: optimizing the parameters and weights of the overall anti-attack training network by a parameter optimization algorithm;
step S8, repeatedly training the overall anti-attack training network with the goal of minimizing the total loss function, continuously optimizing and adjusting the network parameters until they are stable; saving the model parameters and weights of the overall anti-attack training network, saving the face-mask-shaped confrontation sample image matrix produced by the final training and optimization, and printing it to obtain the final physical-space confrontation sample.
2. The face confrontation mask sample generation method based on a generation model according to claim 1, wherein in step S3, the method for constructing the face mask shape matrix P according to the positions of the face feature points in the confrontation face image comprises:
performing face feature point detection on the confrontation face image A_600 with a face feature point detection algorithm to obtain 68 face feature points; selecting from the 68 feature points the 6th, 9th and 12th points (on the chin) and the 30th point (on the nose), the coordinates of these 4 key points being: a_6 = (x_6, y_6), a_9 = (x_9, y_9), a_12 = (x_12, y_12), a_30 = (x_30, y_30); constructing a mask-shaped hexagon from the 4 key points on the nose and chin, and obtaining from them the positions of the remaining two points of the face mask hexagon, a_6* = (x_6, y_6*) and a_12* = (x_12, y_6*); obtaining the face mask shape matrix P from a_6*, a_6, a_9, a_12, a_12* and a_30;
wherein y_6* = y_30 − (y_9 − y_6).
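The geometric construction of claim 2 reduces to selecting four landmarks and deriving two more points from them; a minimal NumPy sketch follows. The function name `mask_hexagon` and the storage convention (the claim's k-th landmark stored at 0-based row k−1, as in common 68-point detectors) are illustrative assumptions, not part of the patent:

```python
import numpy as np

def mask_hexagon(landmarks):
    """Build the six-point face-mask hexagon from 68 face landmarks.

    `landmarks` is a (68, 2) array of (x, y) points; row k-1 holds the
    claim's k-th point (an assumed 0-based convention).
    """
    pts = np.asarray(landmarks, dtype=float)
    a6, a9, a12, a30 = pts[5], pts[8], pts[11], pts[29]
    # Upper-edge height per claim 2: y6* = y30 - (y9 - y6),
    # used for both upper corners of the hexagon.
    y_star = a30[1] - (a9[1] - a6[1])
    a6s = np.array([a6[0], y_star])
    a12s = np.array([a12[0], y_star])
    # Vertices in order: a6*, a6, a9, a12, a12*, a30.
    return np.stack([a6s, a6, a9, a12, a12s, a30])
```

Feeding real detector output in place of a synthetic array yields the six-vertex matrix P consumed by the later claims.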
3. The method as claimed in claim 2, wherein in step S3, transforming the face mask shape matrix by the bending-rotation mapping transformation to simulate the arc of a mask in physical space and obtain the mask-shaped confrontation mask M_1 comprises:
cropping the mask area of the face mask shape matrix, and obtaining a physical-space face mask sample matrix M through digital-space transformation;
transforming the mask sample matrix by bending, rotation and mapping to simulate the arc of a 3D face mask, obtaining the confrontation mask M_1.
4. The face confrontation mask sample generation method based on a generation model according to claim 3, wherein in step S3, the parameters of the bending-rotation mapping transformation comprise: a bending arc parameter l = 0.0018, a rotation angle parameter θ = −5, and a transformed matrix scaling parameter s = 0.465.
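Claim 4 names only the three parameter values; the exact mapping is not disclosed in this text. The sketch below assumes a planar rotation by θ degrees, uniform scaling by s, and a parabolic vertical displacement l·(x − cx)² for the bend, applied by inverse mapping with nearest-neighbor sampling. All names and the form of the warp are illustrative assumptions:

```python
import numpy as np

def bend_rotate_scale(img, l=0.0018, theta=-5.0, s=0.465):
    """Sketch of a bending-rotation-scaling warp with the claim-4
    parameters. Inverse mapping: for each output pixel, compute the
    source coordinate and sample it (nearest neighbor)."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    t = np.deg2rad(theta)
    cos_t, sin_t = np.cos(t), np.sin(t)
    ys, xs = np.mgrid[0:h, 0:w]
    # Undo scaling and rotation about the image centre.
    u = (xs - cx) / s
    v = (ys - cy) / s
    src_x = cos_t * u + sin_t * v + cx
    src_y = -sin_t * u + cos_t * v + cy
    # Undo the assumed parabolic bend: rows displaced by l*(x-cx)^2.
    src_y = src_y - l * (src_x - cx) ** 2
    sx = np.round(src_x).astype(int)
    sy = np.round(src_y).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[valid] = img[sy[valid], sx[valid]]
    return out
```

With l = 0, θ = 0, s = 1 the warp is the identity, which is a convenient sanity check.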
5. The face confrontation mask sample generation method based on a generation model according to claim 1, wherein in step S4, generating the random face image with the face generation model and processing it with the face mask shape confrontation mask generation method of step S3 to obtain the mask confrontation disturbance mask M_S comprises: randomly generating, with the face generation model, a generated face image G_600 with a resolution of 600 × 600, and obtaining from it the face-mask-shaped confrontation disturbance mask within the generated face image, i.e. M_S = G_600 • M_1.
6. The method as claimed in claim 4, wherein in step S4, integrating the confrontation disturbance mask M_S with the confrontation mask M_1 and the confrontation face image A_600 to obtain the confrontation sample M_A combining the mask confrontation disturbance and the face area comprises:
retaining the elements of the confrontation disturbance mask M_S inside the mask and setting the elements outside the mask to zero, obtaining the mask-area confrontation disturbance M_S • M_1; meanwhile, setting the pixel values of the confrontation face image A_600 inside the mask to zero and retaining the remaining pixel values, obtaining the outside-mask face image A_600 • (1 − M_1); and fusing the two partial images to obtain the face confrontation sample, i.e. M_A = M_S • M_1 + A_600 • (1 − M_1).
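The fusion in claim 6 is a pixelwise blend (• denotes elementwise multiplication) and can be written directly in NumPy; the function name is illustrative:

```python
import numpy as np

def compose_adversarial_sample(M_s, M_1, A_600):
    """Fuse the perturbation inside the mask region with the original
    face outside it: M_A = M_s*M_1 + A_600*(1 - M_1), per claim 6."""
    M_s = np.asarray(M_s, dtype=float)
    M_1 = np.asarray(M_1, dtype=float)
    A = np.asarray(A_600, dtype=float)
    return M_s * M_1 + A * (1.0 - M_1)
```

With a binary M_1, every pixel of the result comes from M_s inside the mask and from A_600 outside it.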
7. The face confrontation mask sample generation method based on a generation model according to claim 1, wherein in step S6, the total loss function is:
L(x) = L_sim(a, x) + β · L_tv(x) + γ · L_nps
wherein:
L(x) is the total loss function;
L_sim(a, x) is the confrontation sample generator loss function,
L_sim(a, x) = (E_a · E_x) / (‖E_a‖ · ‖E_x‖)
wherein E_a is the feature vector of the confrontation face image A sample, and E_x is the feature vector of the database sample image X sample;
L_tv(x) is the smoothness loss function of the confrontation mask,
L_tv(x) = Σ_{i,j} ((x_{i,j} − x_{i+1,j})^2 + (x_{i,j} − x_{i,j+1})^2)^(1/2)
wherein the content of the first and second substances,x ij the pixel value of the ith row and the jth column in the sample;
L_nps is the printer color difference loss function,
L_nps = Σ_{p_c} Π_{s_p ∈ S_A} |p_c − s_p|
wherein p is c Is to counter the pixel value, s, in the mask perturbation p p Colors s printed for printers A A color value of;
β and γ are the weighting factors of the face mask smoothness loss function and the printer color difference loss function, respectively.
8. A face confrontation mask sample generation system based on a generation model, characterized in that the system comprises:
the first processing module is configured to acquire a confrontation face image A and a database sample image X;
a second processing module configured to preprocess the face image data: obtaining the face positions in the confrontation face image and the database sample image by using a face detection method, cropping the face images from the face position areas of the confrontation face image and the database sample image, and converting the resolution of the cropped face images by an interpolation algorithm to obtain a confrontation face image A_600 and a database face image X_T;
a third processing module configured to construct a confrontation mask in the shape of a face mask: constructing a face mask shape matrix P according to the positions of the face feature points in the confrontation face image A_600, and applying a bending-rotation mapping transformation to the face mask shape matrix to simulate the arc of a mask in physical space, obtaining a mask-shaped confrontation mask M_1;
a fourth processing module configured to construct a confrontation sample generator: generating a random face image with the face generation model, and processing the random face image with the face mask shape confrontation mask generation method of step S3 to obtain a mask confrontation disturbance mask M_S; integrating the confrontation disturbance mask M_S with the confrontation mask M_1 and the confrontation face image A_600 to obtain a confrontation sample M_A combining the mask confrontation disturbance and the face area;
a fifth processing module configured to input the confrontation sample M_A and the database face image X_T into the attacked face recognition system network F to construct an overall anti-attack training network;
a sixth processing module configured to construct the total loss function for training the overall anti-attack training network;
a seventh processing module configured to set a model optimization algorithm: optimizing the parameters and weights of the overall anti-attack training network by a parameter optimization algorithm;
and an eighth processing module configured to repeatedly train the overall anti-attack training network with the goal of minimizing the total loss function, continuously optimizing and adjusting the network parameters until they are stable, to save the model parameters and weights of the overall anti-attack training network, save the face-mask-shaped confrontation sample image matrix produced by the final training and optimization, and obtain the final physical-space confrontation sample by printing.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the face confrontation mask sample generation method based on a generation model according to any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the face confrontation mask sample generation method based on a generation model according to any one of claims 1 to 7.
CN202210823234.0A 2022-07-14 2022-07-14 Face confrontation mask sample generation method and system based on generation model Active CN114898450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210823234.0A CN114898450B (en) 2022-07-14 2022-07-14 Face confrontation mask sample generation method and system based on generation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210823234.0A CN114898450B (en) 2022-07-14 2022-07-14 Face confrontation mask sample generation method and system based on generation model

Publications (2)

Publication Number Publication Date
CN114898450A true CN114898450A (en) 2022-08-12
CN114898450B CN114898450B (en) 2022-10-28

Family

ID=82729587

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210823234.0A Active CN114898450B (en) 2022-07-14 2022-07-14 Face confrontation mask sample generation method and system based on generation model

Country Status (1)

Country Link
CN (1) CN114898450B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115527254A (en) * 2022-09-21 2022-12-27 北京的卢深视科技有限公司 Face recognition method, model training method, face recognition device, model training device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110443203A (en) * 2019-08-07 2019-11-12 中新国际联合研究院 The face fraud detection system counter sample generating method of network is generated based on confrontation
CN110991299A (en) * 2019-11-27 2020-04-10 中新国际联合研究院 Confrontation sample generation method aiming at face recognition system in physical domain
CN113343951A (en) * 2021-08-05 2021-09-03 北京邮电大学 Face recognition countermeasure sample generation method and related equipment
CN113609966A (en) * 2021-08-03 2021-11-05 上海明略人工智能(集团)有限公司 Method and device for generating training sample of face recognition system
US20220067521A1 (en) * 2020-09-03 2022-03-03 Nec Laboratories America, Inc. Robustness enhancement for face recognition


Also Published As

Publication number Publication date
CN114898450B (en) 2022-10-28

Similar Documents

Publication Publication Date Title
Yin et al. Rademacher complexity for adversarially robust generalization
CN106372581B (en) Method for constructing and training face recognition feature extraction network
CN109948658B (en) Feature diagram attention mechanism-oriented anti-attack defense method and application
CN109636658B (en) Graph convolution-based social network alignment method
Fang et al. Robust latent subspace learning for image classification
US11636332B2 (en) Systems and methods for defense against adversarial attacks using feature scattering-based adversarial training
Dong et al. Dimensionality reduction and classification of hyperspectral images using ensemble discriminative local metric learning
Jia et al. 3D face anti-spoofing with factorized bilinear coding
Kim et al. Fusing aligned and non-aligned face information for automatic affect recognition in the wild: a deep learning approach
Chen et al. An asymmetric distance model for cross-view feature mapping in person reidentification
Taigman et al. Deepface: Closing the gap to human-level performance in face verification
CN101377814B (en) Face image processing apparatus, face image processing method
Yan et al. Ranking with uncertain labels
Rozsa et al. LOTS about attacking deep features
Parchami et al. Video-based face recognition using ensemble of haar-like deep convolutional neural networks
CN112446423B (en) Fast hybrid high-order attention domain confrontation network method based on transfer learning
CN105930834B (en) Face identification method and device based on ball Hash binary-coding
Lee et al. Learning representations from multiple manifolds
US20230095182A1 (en) Method and apparatus for extracting biological features, device, medium, and program product
CN107016319A (en) A kind of key point localization method and device
CN114898450B (en) Face confrontation mask sample generation method and system based on generation model
Fu et al. Contextual online dictionary learning for hyperspectral image classification
CN112101087B (en) Facial image identity identification method and device and electronic equipment
Zeng et al. Occlusion‐invariant face recognition using simultaneous segmentation
Tanaka et al. Adversarial bone length attack on action recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant