CN116778544A - Adversarial feature generation method for face recognition privacy protection - Google Patents


Info

Publication number
CN116778544A
CN116778544A (application CN202310212400.8A)
Authority
CN
China
Prior art keywords: shadow, face recognition, face, image, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310212400.8A
Other languages: Chinese (zh)
Other versions: CN116778544B (en)
Inventor
王志波
金帅帆
张文文
王炎
王和
胡佳慧
孙鹏
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202310212400.8A
Publication of CN116778544A
Application granted
Publication of CN116778544B
Legal status: Active

Classifications

    • G06V40/168 Feature extraction; Face representation (human faces)
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks
    • G06V10/95 Hardware or software architectures specially adapted for image or video understanding, structured as a network, e.g. client-server architectures
    • G06V40/53 Measures to keep reference information secret, e.g. cancellable biometrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application discloses an adversarial feature generation method for face recognition privacy protection. The method builds a shadow model to obtain the mapping function from facial features to images and generates adversarial latent noise that destroys this mapping by solving a constrained optimization problem. The resulting privacy-preserving adversarial features maintain excellent defensive performance against attack networks of unknown structure, resist unknown reconstruction attacks while preserving face recognition accuracy, and effectively protect face privacy. The identification network can optionally be further optimized to meet higher recognition accuracy requirements.

Description

Adversarial feature generation method for face recognition privacy protection
Technical Field
The application relates to the fields of Artificial Intelligence (AI) security and data security, and in particular to an adversarial feature generation method for face recognition privacy protection that preserves the accuracy of a face recognition system while resisting reconstruction attacks to protect face privacy.
Background
Face recognition identifies a person using facial information and has been widely adopted in many security-sensitive applications. The face images used for biometric identification are personal privacy that should be protected for everyone. To avoid direct leakage of face images, mainstream face recognition systems generally adopt a client-server mode: a feature extractor on the client extracts features from the face image, and the server stores the facial features, rather than the images, for later online recognition.
Because facial features conceal the visual information of the face, they provide a degree of privacy protection. Unfortunately, once leaked, these features can still be exploited to recover sensitive facial information, for example by reconstructing the appearance of the original image. Existing face privacy protection methods cannot effectively balance the effectiveness of privacy protection with the accuracy of the face recognition task, and therefore fail to meet application requirements.
Disclosure of Invention
Aiming at the defects of the prior art, the application provides an adversarial feature generation method for face recognition privacy protection that accounts for both the accuracy of the face recognition task and the effectiveness of privacy protection.
In order to achieve the above object, the present application provides the following technical solutions:
In one general aspect, an adversarial feature generation method for face recognition privacy protection is provided, characterized by comprising the following steps:
1) The preparation stage:
1.1) After the initial training of the face recognition model is completed, the face recognition model is divided into two parts: a feature extractor E(·) and a (computationally intensive) identification network C(·); the feature extractor E(·) is distributed to users as the client, and the identification network C(·) is deployed at the server;
2) Shadow model training phase:
2.1) A face image dataset (training set) is processed by the feature extractor E(·) into the facial features corresponding to the dataset;
2.2) A shadow model S(·) is trained on the correspondence between the face image data (training set) and the corresponding facial features and is deployed at the server;
3) Initializing a database:
3.1) If no face recognition database is available, a face recognition database is established; if one is available, it is obtained;
3.2) The face recognition database is processed into a facial feature database by the feature extractor E(·);
3.3) The facial features in the facial feature database are processed by the shadow model S(·) into class I shadow images, and the class I shadow images are processed by the feature extractor E(·) to obtain shadow features;
3.4) The shadow features are processed by the same shadow model S(·) to obtain class II shadow images;
3.5) The reconstruction loss between the class I and class II shadow images is calculated, and the corresponding gradient magnitude and gradient direction are calculated from the loss;
3.6) Adversarial latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem;
3.7) The adversarial latent noise is added to the shadow features to obtain adversarial features with privacy protection capability;
3.8) The data in the original face recognition database are replaced with the adversarial features, completing the initialization or security update of the face recognition database;
4) And (3) a system operation stage:
4.1) The face image to be verified, obtained from the face recognition terminal, is processed into facial features by the feature extractor E(·) and sent to the server;
4.2) The facial features are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain shadow features;
4.3) The shadow features are processed by the same shadow model S(·) to obtain a class II shadow image;
4.4) The reconstruction loss between the class I and class II shadow images is calculated, and the corresponding gradient magnitude and gradient direction are calculated from the loss;
4.5) Adversarial latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem;
4.6) The adversarial latent noise is added to the shadow features to obtain the privacy-preserving adversarial feature corresponding to the face image to be verified;
4.7) The identification network C(·) compares the adversarial feature corresponding to the face image to be verified with the adversarial features in the face recognition database to obtain the face recognition result.
Further, the feature extractor E(·) of step 1.1) distributed to the client is a lightweight network requiring little computation to extract shallow features, while the identification network C(·) is deployed at the server for identity recognition.
Further, the shadow model S(·) described in step 2.2) is a reconstruction network of arbitrary structure that learns the mapping from facial features to face images; the shadow model S(·) is trained on a public face dataset by minimizing the following loss function:

$\min_S \sum_{(x_i, z_i) \in (X, Z)} \| S(z_i) - x_i \|_2^2$

where X is the face image data (training set), Z is the corresponding facial feature set, $x_i$ is a single raw face image, and $z_i$ denotes the single facial feature extracted from $x_i$.
Further, in steps 3.3) and 4.2) the facial features are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain the shadow feature; the process is formally expressed as:

$\hat{x} = S(z), \qquad \hat{z} = E(\hat{x})$

where $\hat{x}$ is the class I shadow image reconstructed by the shadow model from the facial feature z submitted by the client, and $\hat{z}$ is the shadow feature extracted from $\hat{x}$.
Further, in steps 3.5) and 4.4) the gradient of the shadow model reconstruction loss with respect to the added noise (perturbation) δ is obtained as:

$\nabla_{\delta} \mathcal{L}_{rec} = \nabla_{\delta} \| S(\hat{z} + \delta) - \hat{x} \|_2^2$

where $\hat{z} + \delta$ denotes the adversarial feature, with δ initialized to zero. After the noise is added, an attacker cannot recover the class I shadow image $\hat{x}$ from the adversarial feature; since $\hat{x}$ is highly similar to the original image x, the attacker also has difficulty reconstructing the original image. Because the face images used for training and those encountered after the face recognition network is deployed may differ greatly, the parameters of the Batch Normalization (BN) layers in the shadow model are updated and the mean μ and variance σ of each batch of facial features are computed independently.
Further, the constrained optimization objective for the adversarial latent noise described in steps 3.6) and 4.5) aims to find an $L_p$-norm bounded noise δ that perturbs the feature so as to maximize the reconstruction loss; it is formulated as the following constrained optimization problem:

$\max_{\delta}\ \| R(z + \delta) - x \|_2^2 \quad \text{s.t.}\ \| \delta \|_p \le \xi$

where x is the original face image, z denotes the facial feature extracted from x, δ denotes the adversarial latent noise, ξ denotes the noise margin, and R is the reconstruction attack network. The optimization problem is solved by adding noise along the gradient direction of the reconstruction loss: under the guidance of $\nabla_{\delta} \mathcal{L}_{rec}$, the adversarial latent noise δ is injected into the shadow feature, and the Projected Gradient Descent (PGD) algorithm (a gradient-based method) is used to generate adversarial features that break the mapping from features to the original face image, so that the face recognition system can resist reconstruction attacks. PGD iteratively adds noise along the gradient direction while limiting the perturbation range of each iteration; the generation of the adversarial feature is expressed as:

$\delta_{t+1} = \mathrm{Clip}_{\varepsilon}\big( \delta_t + \alpha \cdot \mathrm{sign}( \nabla_{\delta_t} \| S(\hat{z} + \delta_t) - \hat{x} \|_2^2 ) \big)$

where S is the shadow model, α controls the noise level, ε limits the noise added in each iteration, and sign(·) is an element-wise function whose output is 1 for a positive gradient value, -1 for a negative gradient value, and 0 for a zero gradient.
Further, the system operation phase described in step 4) has two different options: an online mode and an offline mode. The online mode is plug-and-play: the existing face recognition database (a face image database or a facial feature database) is directly updated into a protected adversarial feature database, which effectively protects face privacy without modifying or retraining any network while maintaining high-accuracy face recognition. The offline mode further trains the server-side identification network C(·) with the adversarial features to obtain more accurate recognition results.
The beneficial effects of the application are as follows:
the application relates to the field of Artificial Intelligence (AI) security and the field of data security, and discloses an antagonistic characteristic generation method for face recognition privacy protection. Compared with the defect that the accuracy of face recognition tasks and the effectiveness of privacy protection cannot be balanced in the prior privacy protection work, the application establishes a shadow model to acquire a mapping function from facial features to images, generates antagonistic potential noise to destroy mapping by solving the constraint optimization problem, and therefore provides an antagonistic feature for protecting privacy, which can maintain excellent defending performance when facing an attack network with an unknown structure, can resist unknown reconstruction attacks while maintaining the face recognition accuracy, and effectively protects the privacy safety of the face. The application has high practicability, does not need to change the deployed face recognition model, and can be used as a privacy enhancement module to be quickly integrated into the existing face recognition system, so that the requirement of protecting the face privacy is met; the identification network can be selectively optimized to meet the requirement of higher identification precision. In addition, the antagonistic features provided by the application also provide a safe face data sharing carrier for the cooperation between entities, namely, the antagonistic features are used for carrying out data sharing instead of original images or facial features, so that the requirements of training face tasks are met while the privacy of the faces is protected.
Drawings
FIG. 1 is a training flow chart of the shadow model in the adversarial feature generation method for face recognition privacy protection;
FIG. 2 is a flowchart of database initialization in the adversarial feature generation method for face recognition privacy protection;
FIG. 3 is a flow chart of the system operation phase in the adversarial feature generation method for face recognition privacy protection;
FIG. 4 is a detailed block diagram of the shadow model in the adversarial feature generation method for face recognition privacy protection;
FIG. 5 is a graph of the relationship between the face recognition accuracy and anti-reconstruction metrics of the present application;
FIG. 6 is a graph comparing the accuracy of the present application with existing face recognition privacy methods;
FIG. 7 is a graph comparing the anti-reconstruction capability of the present application with existing face recognition privacy methods;
FIG. 8 is a graph comparing the anti-reconstruction capability of the present application when facing reconstruction attacks with different network architectures.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The adversarial feature generation method for face recognition privacy protection comprises the following steps:
1) After the initial training of the face recognition model is completed, the face recognition model is divided into two parts: a feature extractor E(·) and a computationally intensive identification network C(·). The feature extractor E(·) is distributed to users, and the identification network C(·) is placed at the server side. Specifically, a face recognition model that has already been trained or deployed is divided into two sequential modules: E(·) is distributed to clients, and it should be noted that E(·) is a lightweight network requiring little computation to extract shallow features, whereas C(·) is deployed at the server for identity recognition and requires more computing resources. This embodiment takes a ResNet50 network as an example, selecting the first 3 layers as the feature extractor E(·) and the remaining layers as the identification network C(·).
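As a toy illustration of this client/server split, the division into E(·) and C(·) can be sketched as below. The "layers" and the split point are hypothetical stand-ins, not the ResNet50 layers of the embodiment; the point is only that E runs the first few layers on the client and C runs the rest on the server.

```python
# Sketch of the preparation stage (step 1): a trained model is split into a
# lightweight client-side feature extractor E and a server-side network C.
# The layer functions below are illustrative stand-ins for real layers.

def build_model():
    # Each "layer" is a function mapping a list of values to a list of values.
    return [lambda x: [v * 0.5 for v in x],   # shallow block 1
            lambda x: [v + 1.0 for v in x],   # shallow block 2
            lambda x: [v * v for v in x],     # shallow block 3
            lambda x: [sum(x)],               # deep layers (server side)
            lambda x: [x[0] > 10.0]]          # identity decision head

def split_model(layers, split_at):
    """Return (E, C): E runs the first `split_at` layers, C runs the rest."""
    def E(x):
        for layer in layers[:split_at]:
            x = layer(x)
        return x
    def C(z):
        for layer in layers[split_at:]:
            z = layer(z)
        return z
    return E, C

layers = build_model()
E, C = split_model(layers, split_at=3)  # first 3 layers -> client, rest -> server
z = E([2.0, 4.0])                       # client extracts shallow features
result = C(z)                           # server finishes identification
```

Only the shallow feature z crosses the network boundary; the raw input never leaves the client, which is the premise of the feature-level privacy problem addressed below.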
2) The face image dataset (training set) is processed by the feature extractor E(·) into the facial features corresponding to the dataset. The face image dataset (training set) may be a public face dataset such as CelebA or CASIA-WebFace, or a private dataset built by a face recognition service provider;
3) A shadow model S(·) is trained on the correspondence between the face image data (training set) and the corresponding facial features and is deployed at the server. A powerful shadow model S(·) is built on the server: a reconstruction network of arbitrary structure that learns the mapping from facial features to face images. The shadow model S(·) is trained on a public face dataset by minimizing the following loss function:

$\min_S \sum_{(x_i, z_i) \in (X, Z)} \| S(z_i) - x_i \|_2^2$

where X is the face image data (training set), Z is the corresponding facial feature set, $x_i$ is a single raw face image, and $z_i$ denotes the single facial feature extracted from $x_i$. Furthermore, given image-feature pairs extracted by the same feature extractor, sufficiently different networks learn similar mappings from facial features to images, regardless of the particular network employed. Fig. 4 shows the detailed architecture of the network adopted in this embodiment, but the method is not limited to this architecture. In this embodiment the shadow model S(·) uses an Adam optimizer with a learning rate of 1e-4 and is trained for 10 rounds on the public dataset CASIA-WebFace;
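A minimal sketch of this training objective, with a hypothetical one-parameter linear "shadow model" S(z) = w·z standing in for the reconstruction network so the gradient of the summed squared error can be written analytically:

```python
# Toy shadow-model training (step 3): fit S(z) = w * z to (image, feature)
# pairs by gradient descent on sum_i ||S(z_i) - x_i||^2. The scalar parameter
# w is purely illustrative of the real reconstruction network's weights.

def train_shadow(pairs, lr=0.05, epochs=100):
    w = 0.0                                # shadow model parameter
    for _ in range(epochs):
        grad = 0.0
        for x, z in pairs:                 # (image, feature) training pairs
            grad += 2.0 * (w * z - x) * z  # d/dw of (w*z - x)^2
        w -= lr * grad / len(pairs)        # step on the mean loss
    return w

# Features were "extracted" as z = x / 2, so the ideal inverse map has w = 2.
pairs = [(x, x / 2.0) for x in [1.0, 2.0, 3.0, 4.0]]
w = train_shadow(pairs)
loss = sum((w * z - x) ** 2 for x, z in pairs)
```

The trained w converges to the inverse of the toy feature extraction, mirroring how the real shadow model learns to invert E(·).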
4) If the face recognition database is not available, a face recognition database is established, and if the face recognition database is available, the face recognition database is obtained. For undeployed face recognition systems, user registration needs to be completed to obtain a face recognition database for comparison, and for deployed face recognition systems, the face recognition database already exists;
5) The face recognition database is processed into a facial feature database by the feature extractor E(·). If the face recognition database consists of face images, step 5) is needed to convert them into facial features; if it already stores facial features, no conversion is required. The converted facial feature size in this embodiment is 77² × 64;
6) The facial features in the facial feature database are processed by the shadow model S(·) into class I shadow images, and the class I shadow images are processed by the feature extractor E(·) to obtain the shadow features; the process is formally expressed as:

$\hat{x} = S(z), \qquad \hat{z} = E(\hat{x})$

where $\hat{x}$ is the class I shadow image reconstructed by the shadow model from the facial feature z submitted by the client, and $\hat{z}$ is the shadow feature extracted from $\hat{x}$.
7) The shadow features are processed by the shadow model S(·) of step 6) to obtain class II shadow images. Notably, intermediate products such as facial features, class I shadow images and class II shadow images are not stored at the server; they participate in computation only as byte streams and are destroyed once the computation ends, ensuring privacy security.
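The two-pass pipeline of steps 6) and 7) can be sketched with toy stand-ins for E(·) and S(·). Here S is an exact inverse of E, which is only an idealization of a well-trained shadow model; with real networks the class I and class II images merely come close.

```python
# Sketch of steps 6)-7): feature z -> class I shadow image x1 = S(z)
# -> shadow feature z_hat = E(x1) -> class II shadow image x2 = S(z_hat).
# E and S are toy stand-ins (feature = half the pixel value).

def E(image):                  # toy feature extractor
    return [p / 2.0 for p in image]

def S(feature):                # toy shadow model: learned inverse of E
    return [f * 2.0 for f in feature]

z = [1.5, 3.0, 4.5]            # facial feature held by the server
x1 = S(z)                      # class I shadow image
z_hat = E(x1)                  # shadow feature
x2 = S(z_hat)                  # class II shadow image
# The reconstruction loss of step 8) is evaluated between the two shadow
# images; for an ideal shadow model it is zero before any noise is added.
recon_loss = sum((a - b) ** 2 for a, b in zip(x1, x2))
```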
8) The reconstruction loss between the class I shadow image and the class II shadow image is calculated, and the corresponding gradient magnitude and gradient direction are calculated from the loss. The gradient of the shadow model reconstruction loss with respect to the added noise (perturbation) δ is:

$\nabla_{\delta} \mathcal{L}_{rec} = \nabla_{\delta} \| S(\hat{z} + \delta) - \hat{x} \|_2^2$

where $\hat{z} + \delta$ denotes the adversarial feature, with δ initialized to zero. After the noise is added, an attacker cannot recover the class I shadow image $\hat{x}$ from the adversarial feature; since $\hat{x}$ is highly similar to the original image x, the attacker also has difficulty reconstructing the original image. The method updates the parameters of the Batch Normalization (BN) layers in the shadow model: whereas the typical BN procedure normalizes inputs in the inference phase using parameters learned from the training dataset, this method computes the mean μ and variance σ of each batch of facial features independently, ensuring more targeted and effective adversarial noise generation, considering that the face images used for training and those encountered after deployment of the face recognition network may be quite different.
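A sketch of the per-batch normalization behaviour described above, assuming plain standardization with the batch's own μ and σ (the patent's exact BN update rule is not reproduced here):

```python
# Per-batch normalization sketch: instead of running statistics learned
# during training, each batch of feature values is standardized with its
# own mean and variance, as the modified BN layers do at inference time.

def batch_normalize(batch, eps=1e-5):
    n = len(batch)
    mu = sum(batch) / n                           # per-batch mean
    var = sum((v - mu) ** 2 for v in batch) / n   # per-batch (biased) variance
    return [(v - mu) / (var + eps) ** 0.5 for v in batch]

normalized = batch_normalize([2.0, 4.0, 6.0, 8.0])
```

After this step the batch has (approximately) zero mean and unit variance regardless of how far its distribution drifted from the training data.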
9) Adversarial latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem. The constrained optimization objective for the adversarial latent noise aims to find an $L_p$-norm bounded noise δ that perturbs the feature so as to maximize the reconstruction loss; it is formulated as the following constrained optimization problem:

$\max_{\delta}\ \| R(z + \delta) - x \|_2^2 \quad \text{s.t.}\ \| \delta \|_p \le \xi$

where R is the reconstruction attack network, x is the original face image, z denotes the facial feature extracted from x, δ denotes the adversarial latent noise, and ξ denotes the noise margin. Intuitively, the optimization problem is solved by adding noise along the gradient direction of the reconstruction loss: under the guidance of $\nabla_{\delta} \mathcal{L}_{rec}$, the adversarial latent noise δ is injected into the shadow feature. In this embodiment the Projected Gradient Descent (PGD) algorithm (a gradient-based method) is used to generate adversarial features that break the mapping from features to the original face image, so that the face recognition system can resist reconstruction attacks. PGD iteratively adds noise along the gradient direction while limiting the perturbation range of each iteration; specifically, the generation of the adversarial feature can be expressed as:

$\delta_{t+1} = \mathrm{Clip}_{\varepsilon}\big( \delta_t + \alpha \cdot \mathrm{sign}( \nabla_{\delta_t} \| S(\hat{z} + \delta_t) - \hat{x} \|_2^2 ) \big)$

where S is the shadow model, α controls the noise level, ε limits the noise added in each iteration, and sign(·) is an element-wise function whose output is 1 for a positive gradient value, -1 for a negative gradient value, and 0 for a zero gradient. In this embodiment, α is set to 0.2, ε to 0.2, and the total number of iteration rounds is 40;
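A runnable sketch of PGD-style noise generation on the same toy linear shadow model S(f) = 2·f, so the gradient of the reconstruction loss can be written analytically instead of using autograd. The step size, clipping bound and iteration count are illustrative only, not the embodiment's settings.

```python
# PGD sketch (step 9): maximize ||S(z_hat + delta) - x1||^2 for the toy
# shadow model S(f) = 2*f by stepping along sign(gradient) and clipping
# delta element-wise to the noise budget xi after every iteration.

def pgd_noise(z_hat, x1, alpha=0.05, xi=0.2, steps=40):
    delta = [1e-3] * len(z_hat)  # tiny nonzero start, since sign(0) = 0
    for _ in range(steps):
        # analytic gradient of ||2*(z + d) - x||^2 w.r.t. d:  4*(2*(z+d) - x)
        grad = [4.0 * (2.0 * (z + d) - x) for z, d, x in zip(z_hat, delta, x1)]
        sign = [(g > 0) - (g < 0) for g in grad]
        delta = [max(-xi, min(xi, d + alpha * s)) for d, s in zip(delta, sign)]
    return delta

z_hat = [1.5, 3.0, 4.5]              # shadow feature
x1 = [2.0 * z for z in z_hat]        # class I shadow image S(z_hat)
delta = pgd_noise(z_hat, x1)
adv_feature = [z + d for z, d in zip(z_hat, delta)]
loss = sum((2.0 * f - x) ** 2 for f, x in zip(adv_feature, x1))
```

The noise saturates at the clipping bound and the reconstruction loss, zero before perturbation, becomes strictly positive: the adversarial feature no longer maps back to the shadow image.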
10) The adversarial latent noise is added to the shadow features to obtain adversarial features with privacy protection capability, which are highly resistant to reconstruction attacks. Figures 5, 6, 7 and 8 demonstrate the effectiveness of the scheme in terms of the balance between privacy protection effectiveness and face recognition task accuracy, the anti-reconstruction capability of the adversarial features, and the generalization of their privacy protection, respectively. The privacy-preserving adversarial feature generated in this embodiment has size 77² × 64, consistent with the original facial feature size;
11) The data in the original face recognition database are replaced with the adversarial features, completing the initialization or security update of the face recognition database and guaranteeing the privacy security of the face data;
12) During the system operation stage, the face image to be verified, obtained from the face recognition terminal, is processed into facial features by the feature extractor E(·) and sent to the server. In this embodiment the obtained face images are uniformly processed to size 160² × 3;
13) The facial features are processed through the data processing flow described in steps 6) to 10) to obtain the adversarial features corresponding to the face image to be verified. The specific flow is: the facial features are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain shadow features; the same shadow model S(·) processes the shadow features to obtain a class II shadow image; the reconstruction loss between the class I and class II shadow images is calculated, and the corresponding gradient magnitude and direction are derived from this loss; adversarial latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem; the adversarial latent noise is added to the shadow features to obtain the privacy-preserving adversarial features corresponding to the face image to be verified;
14) The identification network C(·) compares the adversarial features corresponding to the face image to be verified with the adversarial features in the face recognition database to obtain the face recognition result.
The application offers two different choices in the system operation stage:
On-line mode: plug-and-play; face privacy is effectively protected without modifying or retraining the network, while high accuracy is maintained;
Off-line mode: the identification network C(·) can be further trained with the adversarial features to obtain recognition results with higher accuracy.
Fig. 1 is a training flowchart of a shadow model in a face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 2) to 3); fig. 2 is a flowchart of initializing a database in the face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 4) to 11); fig. 3 is a flowchart of a system operation phase in the face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 12) to 14).
The application uses Accuracy to evaluate face recognition capability: the higher the accuracy, the stronger the face recognition capability. The application uses SSIM (structural similarity), PSNR (peak signal-to-noise ratio), MSE (mean squared error) and SRRA (success rate of replay attack) to evaluate the quality of reconstructed images. SSIM is a number between 0 and 1, with SSIM = 1 when the two images are identical; the lower the SSIM, the greater the difference between the reconstructed image and the original image and the better the anti-reconstruction effect of the method. PSNR likewise compares the similarity between a reconstructed image and the corresponding original image; smaller values indicate poorer reconstruction quality and better reconstruction resistance. MSE measures the pixel difference between the reconstructed image and the original image; the greater the difference, the better the privacy protection effect. SRRA is the replay attack success rate, i.e., the probability that face recognition matching succeeds using pictures restored from facial features. In Fig. 5, ε limits the noise added in each iteration; as ε increases, PSNR decreases rapidly while face recognition accuracy remains essentially unchanged, showing that the application balances the effectiveness of privacy protection with the accuracy of the face recognition task well and can meet different service requirements by adjusting the value of ε. Fig. 6 shows the face recognition accuracy of existing methods and of this method on the LFW, CFP-CP and AgeDB-30 datasets when ε = 0.2; the method achieves face privacy protection while preserving face recognition accuracy. Fig. 7 shows the average SSIM, PSNR, MSE and SRRA of reconstructed pictures on the LFW, CFP-CP and AgeDB-30 datasets when ε = 0.2.
It can be seen that on the three test data sets, the average SSIM, PSNR and SRRA values of the pictures reconstructed from the antagonistic features are much lower than those of the other methods, and the MSE is much higher than those of the other methods, which indicates that the application can effectively protect the privacy of various face images. Meanwhile, fig. 8 shows the capability of the application for resisting the reconstruction attacks of different architectures, and various similar indexes indicate that the application has excellent capability for resisting the reconstruction attacks of different network architectures.
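Two of the metrics above, MSE and PSNR, can be computed as follows; the flattened pixel lists and sample values are illustrative, and the standard relation PSNR = 10·log10(MAX² / MSE) is used with an assumed pixel range of 255.

```python
# MSE and PSNR for a reconstructed image versus its original, over flattened
# pixel lists. For this privacy evaluation, higher MSE / lower PSNR of the
# attacker's reconstruction indicates better protection.
import math

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * math.log10(max_val ** 2 / m)

original      = [10.0, 20.0, 30.0, 40.0]
reconstructed = [12.0, 18.0, 33.0, 35.0]
```

A perfect reconstruction gives MSE = 0 and unbounded PSNR; a protected feature should drive the attacker's reconstruction toward large MSE and small PSNR.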
It should be understood that the foregoing description of the preferred embodiments is illustrative and is not intended to limit the scope of the application, which is defined by the appended claims; those skilled in the art may make substitutions or modifications without departing from the scope of the application as set forth in the appended claims.

Claims (7)

1. An adversarial feature generation method for face recognition privacy protection, characterized by comprising the following steps:
1) The preparation stage:
1.1) After initial training of the face recognition model is completed, the model is split into two parts: a feature extractor E(·) and a computationally intensive identity recognition network; the feature extractor E(·) is distributed to users as the client, and the identity recognition network is deployed at the server;
2) Shadow model training phase:
2.1) Process the face image dataset (training set) into the corresponding facial features with the feature extractor E(·);
2.2) Train a shadow model S(·) on the correspondence between the face image training set and the corresponding facial features, and deploy the shadow model S(·) at the server;
3) Initializing a database:
3.1) If no face recognition database exists, establish one; otherwise obtain the existing face recognition database;
3.2) Process the face recognition database into a facial feature database with the feature extractor E(·);
3.3) Process each facial feature in the facial feature database into a class I shadow image with the shadow model S(·), and process the class I shadow image with the feature extractor E(·) to obtain a shadow feature;
3.4) Process the shadow feature with the same shadow model S(·) to obtain a class II shadow image;
3.5) Calculate the reconstruction loss between the class I and class II shadow images, and from this loss calculate the corresponding gradient magnitude and gradient direction;
3.6) Generate adversarial latent noise from the gradient magnitude and gradient direction by solving a constrained optimization problem;
3.7) Add the adversarial latent noise to the shadow feature to obtain an adversarial feature with privacy-preserving capability;
3.8) Replace the data in the original face recognition database with the adversarial features, completing initialization or secure update of the face recognition database;
4) And (3) a system operation stage:
4.1) Process the face image to be verified, obtained at the face recognition terminal, into a facial feature with the feature extractor E(·), and send it to the server;
4.2) Process the facial feature into a class I shadow image with the shadow model S(·), and process the class I shadow image with the feature extractor E(·) to obtain a shadow feature;
4.3) Process the shadow feature with the same shadow model S(·) to obtain a class II shadow image;
4.4) Calculate the reconstruction loss between the class I and class II shadow images, and from this loss calculate the corresponding gradient magnitude and gradient direction;
4.5) Generate adversarial latent noise from the gradient magnitude and gradient direction by solving a constrained optimization problem;
4.6) Add the adversarial latent noise to the shadow feature to obtain the adversarial feature, with privacy-preserving capability, corresponding to the face image to be verified;
4.7) Compare, with the identity recognition network, the adversarial feature corresponding to the face image to be verified against the adversarial features in the face recognition database to obtain the face recognition result.
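The core of steps 3.3)-3.7) (and identically 4.2)-4.6)) can be illustrated with a deliberately tiny linear stand-in in which the feature extractor E and shadow model S are plain matrices, so the reconstruction-loss gradient has a closed form; the dimensions, matrices, and step sizes are illustrative assumptions, not the application's actual networks:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 16, 8                                  # toy image and feature dimensions
E = rng.normal(size=(k, d)) / np.sqrt(d)      # stand-in feature extractor E(.)
# Stand-in shadow model S(.): an imperfect inverse of E, as a trained
# reconstruction network would be.
S = np.linalg.pinv(E) + 0.05 * rng.normal(size=(d, k))

x = rng.normal(size=d)                        # enrolled face image (flattened)
z = E @ x                                     # facial feature in the database

x_I = S @ z                                   # 3.3) class I shadow image
z_shadow = E @ x_I                            # 3.3) shadow feature

def rec_loss(z_tilde):
    """3.5) reconstruction loss between the class II shadow image
    S(z_tilde) and the class I shadow image x_I."""
    r = S @ z_tilde - x_I
    return float(r @ r)

# 3.5)-3.7) follow the sign of the loss gradient to *increase* the loss,
# keeping the accumulated noise within [-eps, eps] per coordinate.
alpha, eps, steps = 0.01, 0.2, 50
delta = np.zeros(k)
for _ in range(steps):
    grad = 2 * S.T @ (S @ (z_shadow + delta) - x_I)   # closed-form gradient
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

z_adv = z_shadow + delta                      # 3.7) adversarial feature
```

The protected feature z_adv keeps the per-coordinate noise within ±ε while driving the shadow model's reconstruction away from the class I image, which is the mechanism the claim relies on to defeat reconstruction attacks.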
2. The adversarial feature generation method for face recognition privacy protection as claimed in claim 1, wherein: the feature extractor E(·) of step 1.1) distributed to the client is a lightweight network that extracts shallow features with little computation, while the identity recognition network is deployed at the server to perform identity recognition.
3. The adversarial feature generation method for face recognition privacy protection as claimed in claim 1, wherein: the shadow model S(·) of step 2.2) is a reconstruction network of arbitrary structure that learns the mapping from facial features to face images; the shadow model S(·) is trained on a public face dataset by minimizing the following loss function:

$$\mathcal{L}_{S}=\frac{1}{|X|}\sum_{x_i\in X}\left\|S(z_i)-x_i\right\|_2^2$$

wherein X is the face image training set, Z is the corresponding set of facial features, x_i is a single original face image, and z_i denotes the facial feature extracted from x_i.
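As a minimal sketch of this training objective, the loop below fits a linear shadow model W by gradient descent on the mean reconstruction loss (1/|X|) Σ ‖S(z_i) − x_i‖²; the linear model, toy extractor, dimensions, and learning rate are illustrative assumptions standing in for an actual reconstruction network:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 8, 6                        # samples, image dim, feature dim
E = rng.normal(size=(k, d)) / np.sqrt(d)   # frozen toy feature extractor
X = rng.normal(size=(n, d))                # public face dataset (flattened)
Z = X @ E.T                                # facial features z_i = E(x_i)

W = np.zeros((d, k))                       # linear shadow model S(z) = W z
lr, losses = 0.1, []
for _ in range(300):
    err = Z @ W.T - X                      # S(z_i) - x_i for every sample
    losses.append(float(np.mean(np.sum(err ** 2, axis=1))))
    W -= lr * (2.0 / n) * err.T @ Z        # gradient of the mean loss w.r.t. W
```

The loss falls toward a floor set by the information discarded by E (features are lower-dimensional than images), mirroring why shadow-model reconstructions are imperfect.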
4. The adversarial feature generation method for face recognition privacy protection as claimed in claim 1, 2 or 3, wherein: the facial feature of step 3.3) and step 4.2) is processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain the shadow feature; the process is formally expressed as:

$$\hat{x}=S(z),\qquad \hat{z}=E(\hat{x})$$

wherein $\hat{x}$ is the class I shadow image reconstructed by the shadow model from the facial feature z submitted by the client, and $\hat{z}$ is the shadow feature extracted from $\hat{x}$ by the feature extractor.
5. The adversarial feature generation method for face recognition privacy protection as claimed in claim 4, wherein: in step 3.5) and step 4.4), the gradient of the shadow model's reconstruction loss is related to the added noise (perturbation) δ as follows:

$$g=\nabla_{\tilde{z}}\,\mathcal{L}_{rec}\big(S(\tilde{z}),\,\hat{x}\big),\qquad \tilde{z}=\hat{z}+\delta$$

wherein $\tilde{z}$ denotes the adversarial feature, with δ initialized to zero. After the noise is added, an attacker cannot recover the class I shadow image from the adversarial feature $\tilde{z}$; since $\hat{x}$ is highly similar to the original image x, the attacker also has difficulty reconstructing the original image. Because the face images used for training and those encountered after the face recognition network is deployed may differ greatly, the batch normalization (BN) layers of the shadow model are updated so that the mean μ and variance σ of the facial features are computed independently for each batch.
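The per-batch statistics mentioned above can be sketched as follows: in training mode, BN normalizes each feature dimension with the current batch's own mean μ and variance σ² rather than with running averages accumulated over a different distribution; the function and toy batch below are illustrative:

```python
import numpy as np

def batch_norm_train(batch, eps=1e-5):
    """Normalize each feature dimension with the current batch's own
    mean and variance (training-mode BN, no running statistics)."""
    mu = batch.mean(axis=0)
    var = batch.var(axis=0)
    return (batch - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(2)
feats = rng.normal(loc=3.0, scale=2.0, size=(32, 8))  # one batch of features
normed = batch_norm_train(feats)
```

Computing the statistics per batch lets the shadow model adapt to feature distributions it never saw during training.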
6. The adversarial feature generation method for face recognition privacy protection as claimed in claim 1 or 5, wherein: the constrained optimization objective for the adversarial latent noise of step 3.6) and step 4.5) aims to find an $L_p$-norm-bounded noise δ that perturbs the feature so as to maximize the reconstruction loss, formulated as the following constrained optimization problem:

$$\max_{\delta}\;\mathcal{L}_{rec}\big(R(z+\delta),\,x\big)\quad\text{s.t.}\;\|\delta\|_{p}\le\xi$$

where x is the original face image, z denotes the facial feature extracted from x, δ denotes the adversarial latent noise, ξ denotes the noise margin, and R is the reconstruction attack network. The optimization problem is solved by adding noise along the gradient direction of $\mathcal{L}_{rec}$: under the guidance of this gradient direction, the adversarial latent noise δ is injected into the shadow feature, and the Projected Gradient Descent (PGD) algorithm, a gradient-based method, is used to generate the adversarial feature, breaking the mapping from features to original face images so that the face recognition system can resist reconstruction attacks. PGD iteratively adds noise along the gradient direction while limiting the perturbation of each iteration; the generation of the adversarial feature is expressed as:

$$\tilde{z}_{t+1}=\tilde{z}_{t}+\operatorname{clip}_{\varepsilon}\Big(\alpha\cdot\operatorname{sign}\big(\nabla_{\tilde{z}_{t}}\,\mathcal{L}_{rec}(S(\tilde{z}_{t}),\,\hat{x})\big)\Big),\qquad \tilde{z}_{0}=\hat{z}$$
where S is the shadow model, α controls the noise magnitude, ε limits the noise added in each iteration, and sign(·) is an element-wise function that outputs 1 for a positive gradient value, -1 for a negative gradient value, and 0 for a zero gradient value.
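One iteration of the update rule in claim 6 can be written directly; here `loss_grad` stands in for the gradient of the reconstruction loss, and the quadratic toy objective, names, and step sizes are illustrative assumptions:

```python
import numpy as np

def pgd_step(z_tilde, loss_grad, alpha=0.05, eps=0.2):
    """One PGD iteration: move along the sign of the loss gradient,
    clipping the noise added in this iteration to [-eps, eps]."""
    step = np.clip(alpha * np.sign(loss_grad(z_tilde)), -eps, eps)
    return z_tilde + step

# Toy objective to *maximize*: L(z) = ||A z - b||^2, closed-form gradient.
rng = np.random.default_rng(3)
A, b = rng.normal(size=(5, 4)), rng.normal(size=5)
loss = lambda z: float(np.sum((A @ z - b) ** 2))
grad = lambda z: 2 * A.T @ (A @ z - b)

z0 = rng.normal(size=4)
z = z0
for _ in range(20):
    z = pgd_step(z, grad)
```

In the application's setting, `loss_grad` would be the gradient of the shadow model's reconstruction loss with respect to the feature being perturbed.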
7. The adversarial feature generation method for face recognition privacy protection as claimed in claim 1, 2, 3 or 5, wherein: the system operation stage of step 4) offers two different options: the online mode is plug-and-play, directly updating the existing face recognition database (a face image database or a facial feature database) into a protected adversarial feature database, which effectively protects face privacy without modifying or retraining any network while maintaining high-accuracy face recognition; the offline mode further trains the server's identity recognition network with the adversarial features to obtain more accurate identification results.
CN202310212400.8A 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method Active CN116778544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310212400.8A CN116778544B (en) 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method


Publications (2)

Publication Number Publication Date
CN116778544A true CN116778544A (en) 2023-09-19
CN116778544B CN116778544B (en) 2024-04-16

Family

ID=88007027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310212400.8A Active CN116778544B (en) 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method

Country Status (1)

Country Link
CN (1) CN116778544B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117763523A (en) * 2023-12-05 2024-03-26 浙江大学 Privacy protection face recognition method capable of resisting gradient descent

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210300433A1 (en) * 2020-03-27 2021-09-30 Washington University Systems and methods for defending against physical attacks on image classification
CN114626507A (en) * 2022-03-15 2022-06-14 西安交通大学 Method, system, device and storage medium for generating confrontation network fairness analysis
CN115019378A (en) * 2022-08-09 2022-09-06 浙江大学 Cooperative reasoning-oriented method and device for resisting data review attribute inference attack
CN115577262A (en) * 2022-09-30 2023-01-06 中国人民解放军国防科技大学 Interactive visualization system for exploring federal learning privacy


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHIBO WANG et al.: "Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models", arXiv, 3 March 2022 (2022-03-03) *



Similar Documents

Publication Publication Date Title
Zhang et al. Exploiting defenses against gan-based feature inference attacks in federated learning
Wu et al. Fedcg: Leverage conditional gan for protecting privacy and maintaining competitive performance in federated learning
CN111242290B (en) Lightweight privacy protection generation countermeasure network system
Li et al. Privynet: A flexible framework for privacy-preserving deep neural network training
Luo et al. Scalable differential privacy with sparse network finetuning
CN114186237A (en) Truth-value discovery-based robust federated learning model aggregation method
Feng et al. Masquerade attack on transform-based binary-template protection based on perceptron learning
Peng et al. A robust coverless steganography based on generative adversarial networks and gradient descent approximation
CN116778544B (en) Face recognition privacy protection-oriented antagonism feature generation method
Ding et al. Privacy-preserving feature extraction via adversarial training
CN112668044A (en) Privacy protection method and device for federal learning
Xu et al. CGIR: Conditional generative instance reconstruction attacks against federated learning
Zhao et al. Deep leakage from model in federated learning
Thapar et al. Anonymizing egocentric videos
Jasmine et al. A privacy preserving based multi-biometric system for secure identification in cloud environment
Yin et al. Ginver: Generative model inversion attacks against collaborative inference
CN112330551A (en) Remote sensing image outsourcing noise reduction method based on secret sharing
Shi et al. Scale-mia: A scalable model inversion attack against secure federated learning via latent space reconstruction
CN115168633A (en) Face recognition privacy protection method capable of realizing strong scrambling
CN114723990A (en) Image classification robustness improving method based on metric learning
Zhou et al. Feature correlation attack on biometric privacy protection schemes
Zhu et al. People taking photos that faces never share: Privacy protection and fairness enhancement from camera to user
Lin et al. A probabilistic union approach to robust face recognition with partial distortion and occlusion
Sun et al. Client-Side Gradient Inversion Attack in Federated Learning Using Secure Aggregation
Chen et al. Adversarial representation sharing: A quantitative and secure collaborative learning framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant