CN116778544B - Face recognition privacy protection-oriented antagonism feature generation method - Google Patents


Info

Publication number
CN116778544B
CN116778544B (application CN202310212400.8A)
Authority
CN
China
Prior art keywords
shadow
face recognition
face
image
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310212400.8A
Other languages
Chinese (zh)
Other versions
CN116778544A
Inventor
王志波
金帅帆
张文文
王炎
王和
胡佳慧
孙鹏
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Application filed by Zhejiang University (ZJU)
Priority to CN202310212400.8A
Publication of CN116778544A
Application granted
Publication of CN116778544B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • G06V40/53Measures to keep reference information secret, e.g. cancellable biometrics

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an antagonistic feature generation method for face recognition privacy protection. A shadow model is established to acquire the mapping function from facial features to images, and antagonistic latent noise that destroys this mapping is generated by solving a constrained optimization problem, thereby providing privacy-preserving antagonistic features that maintain excellent defensive performance against attack networks of unknown structure, resist unknown reconstruction attacks while preserving face recognition accuracy, and effectively protect face privacy. The identity recognition network can optionally be optimized to meet requirements for higher recognition accuracy.

Description

Face recognition privacy protection-oriented antagonism feature generation method
Technical Field
The invention relates to the field of artificial intelligence (AI) security and the field of data security, and in particular to an antagonistic feature generation method for face recognition privacy protection, which preserves the accuracy of a face recognition system while resisting reconstruction attacks to protect face privacy.
Background
Face recognition identifies a person's identity from facial information and has been widely adopted in many security-sensitive applications. The face image used for biometric identification is personal privacy that should be protected for everyone. To avoid direct leakage of face images, mainstream face recognition systems generally adopt a client-server mode: a feature extractor at the client extracts features from the face image, and the server stores the facial features rather than the face images for later online recognition.
Because facial features conceal the visual information of the face, they protect face privacy to a certain extent. Unfortunately, once leaked, these features can still be exploited to recover sensitive facial information, for example by reconstructing the appearance of the original image. Existing face privacy protection methods suffer from the technical problem that they cannot effectively balance the effectiveness of privacy protection against the accuracy of the face recognition task, and therefore cannot meet application requirements.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an antagonistic feature generation method for face recognition privacy protection that balances the accuracy of the face recognition task with the effectiveness of privacy protection.
In order to achieve the above object, the present application provides the following technical solutions:
In one general aspect, a method for generating antagonistic features for face recognition privacy protection is provided, comprising the following steps:
1) Preparation stage:
1.1) After initial training of the face recognition model is completed, the model is split into two parts: a feature extractor E(·) and a computation-intensive identity recognition network. E(·) is distributed to users as the client, and the identity recognition network is deployed at the server;
2) Shadow model training stage:
2.1) The face image dataset (training set) is processed by the feature extractor E(·) into the facial features corresponding to the dataset;
2.2) A shadow model S(·) is trained on the correspondence between the face image data (training set) and the corresponding facial features, and deployed at the server;
3) Database initialization:
3.1) If no face recognition database exists, one is established; otherwise the existing database is obtained;
3.2) The face recognition database is processed by the feature extractor E(·) into a facial feature database;
3.3) The facial features in the facial feature database are processed by the shadow model S(·) into class I shadow images, and the class I shadow images are processed by the feature extractor E(·) to obtain shadow features;
3.4) The shadow features are processed by the same shadow model S(·) to obtain class II shadow images;
3.5) The reconstruction loss between the class I and class II shadow images is computed, and the corresponding gradient magnitude and direction are derived from this loss;
3.6) Antagonistic latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem;
3.7) The antagonistic latent noise is added to the shadow features to obtain antagonistic features with privacy-preserving capability;
3.8) The data in the original face recognition database are replaced with the antagonistic features, completing the initialization or secure update of the face recognition database;
4) System operation stage:
4.1) The face image to be verified, obtained from the face recognition terminal, is processed by the feature extractor E(·) into facial features and sent to the server;
4.2) The facial features are processed by the shadow model S(·) into a class I shadow image, which is processed by the feature extractor E(·) to obtain shadow features;
4.3) The shadow features are processed by the same shadow model S(·) to obtain a class II shadow image;
4.4) The reconstruction loss between the class I and class II shadow images is computed, and the corresponding gradient magnitude and direction are derived from this loss;
4.5) Antagonistic latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem;
4.6) The shadow features and the antagonistic latent noise are added to obtain the antagonistic features, with privacy-preserving capability, corresponding to the face image to be verified;
4.7) The identity recognition network compares the antagonistic features of the face image to be verified with the antagonistic features in the face recognition database to obtain the face recognition result.
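The steps above can be sketched end to end with a toy example. The following Python/NumPy sketch is illustrative only: it substitutes a random linear map for the feature extractor E(·) and a deliberately imperfect linear reconstructor for the shadow model S(·) (the patent uses neural networks; all dimensions and hyperparameter values here are hypothetical), but it follows steps 3.3)-3.7): build the class I shadow image, the shadow feature, and the class II shadow image, then ascend the reconstruction loss with bounded sign-gradient steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the patent's networks (assumptions for illustration only):
# E(x) = A @ x plays the role of the lightweight feature extractor E(.),
# S(z) = B @ z plays the role of the shadow model S(.).
A = rng.standard_normal((8, 16)) / 4.0                        # "feature extractor"
B = np.linalg.pinv(A) + 0.05 * rng.standard_normal((16, 8))   # imperfect "shadow model"

E = lambda x: A @ x
S = lambda z: B @ z

x = rng.standard_normal(16)      # original face image (flattened to a vector)
z = E(x)                         # facial feature submitted by the client

# Steps 3.3)-3.4): class I shadow image, shadow feature, class II shadow image.
x_I = S(z)                       # class I shadow image
z_s = E(x_I)                     # shadow feature
x_II = S(z_s)                    # class II shadow image

# Steps 3.5)-3.7): ascend the reconstruction loss ||S(z_s + delta) - x_I||^2
# with sign-gradient (PGD-style) steps, keeping the noise within [-eps, eps].
eps, alpha, iters = 0.2, 0.01, 40
delta = np.zeros_like(z_s)
for _ in range(iters):
    residual = S(z_s + delta) - x_I          # reconstruction error
    grad = 2.0 * B.T @ residual              # d(loss)/d(delta) for the linear toy S
    delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)

z_adv = z_s + delta              # antagonistic feature stored in the database

loss0 = np.sum((x_II - x_I) ** 2)
loss1 = np.sum((S(z_adv) - x_I) ** 2)
print(loss0, loss1)              # the noise should enlarge the reconstruction loss
```

In the real system the gradient comes from backpropagation through the neural shadow model rather than a closed-form matrix product, but the flow of data and the bounded ascent are the same.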
Further, the feature extractor E(·) described in step 1.1), distributed to the client, is a lightweight network requiring only a small amount of computation to extract shallow features, while the identity recognition network is deployed at the server for identification.
Further, the shadow model S(·) described in step 2.2) is a reconstruction network of arbitrary structure that learns the mapping from facial features back to face images. The shadow model S(·) is trained on a public face dataset by minimizing the following loss function:

L_S = (1/|X|) · Σ_i || S(z_i) − x_i ||²

where X is the face image data (training set), Z is the corresponding set of facial features, x_i is a single raw face image, and z_i denotes the facial feature extracted from x_i.
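As a minimal sketch of this training objective, the following NumPy code fits a toy linear shadow model to synthetic image-feature pairs by gradient descent on the mean squared reconstruction error (1/|X|) Σ_i ||S(z_i) − x_i||². The real shadow model is a neural reconstruction network trained with Adam; the linear model, dimensions, and learning rate here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data standing in for (image, feature) pairs (x_i, z_i); in the
# patent z_i = E(x_i) comes from the real feature extractor.
n, d_img, d_feat = 64, 16, 8
A_true = rng.standard_normal((d_feat, d_img)) / 4.0
X = rng.standard_normal((n, d_img))            # face images (flattened)
Z = X @ A_true.T                               # corresponding facial features

# Shadow model S(z) = W @ z, trained by minimizing
#   L_S = (1/|X|) * sum_i ||S(z_i) - x_i||^2
W = np.zeros((d_img, d_feat))
lr = 0.05
losses = []
for _ in range(200):
    recon = Z @ W.T                            # S(z_i) for the whole training set
    err = recon - X
    losses.append(float(np.mean(np.sum(err ** 2, axis=1))))
    grad_W = 2.0 * err.T @ Z / n               # d(L_S)/d(W)
    W -= lr * grad_W

print(losses[0], losses[-1])                   # loss should drop substantially
```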
Further, in step 3.3) and step 4.2), the facial features are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain shadow features. The process is formally expressed as:

x̂ = S(z),  ẑ = E(x̂)

where x̂ is the class I shadow image reconstructed by the shadow model from the client-submitted facial features z, and ẑ is the shadow feature extracted from the class I shadow image x̂ using the feature extractor.
Further, the gradient of the shadow model reconstruction loss described in step 3.5) and step 4.4) with respect to the added noise (perturbation) δ is:

g = ∇_δ || S(ẑ + δ) − x̂ ||²

where ẑ + δ denotes the antagonistic feature, with δ initialized to zero. After the noise is added, an attacker cannot recover the class I shadow image x̂ from the antagonistic feature, and because x̂ is highly similar to the original image x, the attacker also has difficulty reconstructing the original image. Moreover, because the face images used for training and the face images encountered after the face recognition network is deployed may differ greatly, the parameters of the Batch Normalization (BN) layers in the shadow model are updated, and the mean and variance of each batch of facial features are computed independently.
Further, the constrained optimization objective for the antagonistic latent noise described in step 3.6) and step 4.5) aims to find an L_p-norm-bounded noise that perturbs the feature so as to maximize the reconstruction loss, formulated as the constrained optimization problem:

max_δ || R(z + δ) − x ||²  subject to  ||δ||_p ≤ ε

where x is the original face image, z denotes the facial feature extracted from x, δ denotes the antagonistic latent noise, ε denotes the noise bound, and R is the reconstruction attack network. The optimization problem is solved by adding noise along the gradient direction of the loss. To generate the antagonistic features, the antagonistic latent noise is injected into the shadow features under the guidance of this gradient: the Projected Gradient Descent (PGD) algorithm (a gradient-based method) is used to generate the antagonistic features and break the mapping from the features to the original face image, so that the face recognition system can resist reconstruction attacks. PGD iteratively adds noise along the gradient direction while limiting the perturbation range of each iteration. The generation of the antagonistic features is expressed as:

δ_{t+1} = Clip_ε( δ_t + α · sign( ∇_δ || S(ẑ + δ_t) − x̂ ||² ) )

where S is the shadow model, ε controls the overall noise level, α limits the noise level added in each iteration, and sign(·) is an element-wise function whose output is 1 for a positive gradient value, −1 for a negative gradient value, and 0 for a zero gradient value.
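The PGD update can be written as a small helper. This sketch uses a fixed gradient purely to show the sign and clipping behavior; the method itself recomputes the gradient of the reconstruction loss at every iteration, and the values here are hypothetical.

```python
import numpy as np

def pgd_step(delta, grad, alpha, eps):
    """One PGD iteration: move along the sign of the gradient, then clip so
    the accumulated noise stays within the L-infinity ball of radius eps."""
    return np.clip(delta + alpha * np.sign(grad), -eps, eps)

delta = np.zeros(4)
grad = np.array([0.7, -1.3, 0.0, 2.1])   # fixed gradient, for illustration only
for _ in range(40):                      # iterating drives delta to the boundary
    delta = pgd_step(delta, grad, alpha=0.05, eps=0.2)
print(delta)
```

Elements with a positive gradient saturate at +ε, negative ones at −ε, and a zero gradient leaves the coordinate untouched, which is exactly the behavior of the sign(·) function described above.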
Further, the system operation stage described in step 4) has two options: an online mode and an offline mode. The online mode is plug-and-play: the existing face recognition database (a face image database or facial feature database) is directly updated to a protected antagonistic feature database, effectively protecting face privacy without modifying or retraining any network while maintaining high-accuracy face recognition. In the offline mode, the identity recognition network at the server is further trained on the antagonistic features to obtain more accurate recognition results.
The beneficial effects of the invention are as follows:
The invention relates to the field of artificial intelligence (AI) security and the field of data security, and discloses an antagonistic feature generation method for face recognition privacy protection. In contrast to prior privacy protection work, which cannot balance the accuracy of the face recognition task with the effectiveness of privacy protection, the invention establishes a shadow model to acquire the mapping function from facial features to images and generates antagonistic latent noise that destroys this mapping by solving a constrained optimization problem, thereby providing privacy-preserving antagonistic features that maintain excellent defensive performance against attack networks of unknown structure, resist unknown reconstruction attacks while preserving face recognition accuracy, and effectively protect face privacy. The invention is highly practical: it requires no change to a deployed face recognition model and can be quickly integrated into an existing face recognition system as a privacy enhancement module, meeting the need to protect face privacy; the identity recognition network can optionally be further optimized to meet requirements for higher recognition accuracy. In addition, the antagonistic features provide a secure carrier for face data sharing in cooperation between entities: data are shared as antagonistic features instead of original images or facial features, meeting the needs of face-related training tasks while protecting face privacy.
Drawings
FIG. 1 is a training flow chart of a shadow model in an antagonistic feature generation method for face recognition privacy protection;
FIG. 2 is a flowchart of initializing a database in a face recognition privacy protection oriented antagonistic feature generation method;
FIG. 3 is a flow chart of the system operation phase in the face recognition privacy protection oriented antagonistic feature generation method;
Fig. 4 is a detailed block diagram of a shadow model in the antagonistic feature generation method for face recognition privacy protection;
FIG. 5 is a graph of index relationships between face recognition accuracy and reconstruction resistance of the present invention;
FIG. 6 is a graph comparing accuracy index of the present invention with that of the prior art face recognition privacy method;
FIG. 7 is a graph comparing the anti-reconstruction capability index of the present invention with the existing face recognition privacy method;
Fig. 8 is a graph comparing the metrics of the anti-reconfiguration capability of the present invention in the face of different network architecture reconfiguration attacks.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical methods and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The antagonism characteristic generation method facing the face recognition privacy protection comprises the following steps:
1) After initial training of the face recognition model is completed, the model is split into two sequential modules: the feature extractor E(·) and the computation-intensive identity recognition network. E(·) is distributed to the client; it is a lightweight network requiring only a small amount of computation to extract shallow features. In contrast, the identity recognition network is deployed at the server for identification and requires more computing resources. In this embodiment a ResNet network is taken as an example: the first 3 layers are selected as the feature extractor E(·), and the remaining layers serve as the identity recognition network.
2) The face image dataset (training set) is processed by the feature extractor E(·) into the facial features corresponding to the dataset. The dataset can be a public face dataset such as CelebA or CASIA-WebFace, or a private dataset built by a face recognition service provider;
3) A shadow model S(·) is trained on the correspondence between the face image data (training set) and the corresponding facial features and deployed at the server. The shadow model is a reconstruction network of arbitrary structure built on the server to learn the mapping from facial features to face images, trained on a public face dataset by minimizing the following loss function:

L_S = (1/|X|) · Σ_i || S(z_i) − x_i ||²

where X is the face image data (training set), Z is the corresponding set of facial features, x_i is a single raw face image, and z_i denotes the facial feature extracted from x_i. Furthermore, given image-feature pairs extracted by the same feature extractor, different networks of sufficient capacity learn similar mappings from facial features to images regardless of the particular architecture employed. Fig. 4 shows the detailed architecture used in this embodiment, but the method is not limited to it; here the shadow model S(·) is trained with the Adam optimizer at a learning rate of 1e-4 for 10 rounds on the public dataset CASIA-WebFace;
4) If no face recognition database exists, one is established; otherwise the existing database is obtained. For an undeployed face recognition system, user registration must be completed to build the face recognition database used for comparison; for a deployed system, the database already exists;
5) The face recognition database is processed by the feature extractor E(·) into a facial feature database. If the face recognition database consists of face images, this step converts them into facial features; if it already consists of facial features, no conversion is needed. The size of the converted facial features in this example is 77 × 77 × 64;
6) The facial features in the facial feature database are processed by the shadow model S(·) into class I shadow images, and the class I shadow images are processed by the feature extractor E(·) to obtain shadow features. The process is formally expressed as:

x̂ = S(z),  ẑ = E(x̂)

where x̂ is the class I shadow image reconstructed by the shadow model from the client-submitted facial features z, and ẑ is the shadow feature extracted from the class I shadow image x̂ using the feature extractor.
7) The shadow features are processed by the shadow model S(·) of step 6) to obtain a class II shadow image. Notably, intermediate products such as the facial features, class I shadow images, and class II shadow images are not stored on the server; they participate in the computation only as byte streams and are destroyed when the computation finishes, ensuring privacy security.
8) The reconstruction loss between the class I and class II shadow images is computed, and the corresponding gradient magnitude and direction are derived from it. The gradient of the shadow model reconstruction loss with respect to the added noise (perturbation) δ is:

g = ∇_δ || S(ẑ + δ) − x̂ ||²

where ẑ + δ denotes the antagonistic feature, with δ initialized to zero. After the noise is added, an attacker cannot recover the class I shadow image x̂ from the antagonistic feature, and because x̂ is highly similar to the original image x, the attacker also has difficulty reconstructing the original image. In addition, considering that the face images used for training and those encountered after the face recognition network is deployed may be quite different, the method updates the parameters of the Batch Normalization (BN) layers in the shadow model: whereas the typical BN procedure normalizes inputs at inference time using parameters learned from the training dataset, this method independently computes the mean and variance of each batch of facial features to ensure more targeted and more effective antagonistic noise generation.
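The per-batch statistics idea can be illustrated as follows. The sketch contrasts normalizing a distribution-shifted batch with stored running statistics (typical inference-time BN) against statistics recomputed from the batch itself, as the method does; the Gaussian data and the shift are synthetic assumptions, and BN's learned scale and shift parameters are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def bn(batch, mean, var, eps=1e-5):
    """Normalize a batch of feature vectors with the given statistics."""
    return (batch - mean) / np.sqrt(var + eps)

# Running statistics learned from a "training" distribution.
train = rng.normal(loc=0.0, scale=1.0, size=(1024, 8))
run_mean, run_var = train.mean(axis=0), train.var(axis=0)

# A deployed-time batch drawn from a shifted distribution, mimicking the gap
# between training faces and faces seen after deployment.
batch = rng.normal(loc=2.0, scale=3.0, size=(64, 8))

out_running = bn(batch, run_mean, run_var)                     # typical inference BN
out_batch = bn(batch, batch.mean(axis=0), batch.var(axis=0))   # per-batch statistics

# Per-batch statistics re-center the shifted inputs; running statistics do not.
print(abs(float(out_running.mean())), abs(float(out_batch.mean())))
```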
9) Antagonistic latent noise is generated from the gradient magnitude and direction by solving a constrained optimization problem. The constrained optimization objective is to find an L_p-norm-bounded noise that perturbs the feature so as to maximize the reconstruction loss, formulated as:

max_δ || R(z + δ) − x ||²  subject to  ||δ||_p ≤ ε

where R is the reconstruction attack network, x is the original face image, z denotes the facial feature extracted from x, δ denotes the antagonistic latent noise, and ε denotes the noise bound. Intuitively, the optimization problem is solved by adding noise along the gradient direction of the loss. To generate the antagonistic features, the antagonistic latent noise is injected into the shadow features under the guidance of this gradient. In this embodiment, the Projected Gradient Descent (PGD) algorithm (a gradient-based method) is used to generate the antagonistic features and break the mapping from features to the original face image, so that the face recognition system can resist reconstruction attacks. PGD iteratively adds noise along the gradient direction while limiting the perturbation range of each iteration. Specifically, the generation of the antagonistic features can be expressed as:

δ_{t+1} = Clip_ε( δ_t + α · sign( ∇_δ || S(ẑ + δ_t) − x̂ ||² ) )

where S is the shadow model, ε controls the overall noise level, α limits the noise level added in each iteration, and sign(·) is an element-wise function whose output is 1 for a positive gradient value, −1 for a negative gradient value, and 0 for a zero gradient value. In this embodiment, ε is set to 0.2 and the total number of iteration rounds is 40;
10) The antagonistic latent noise is added to the shadow features to obtain antagonistic features with privacy-preserving capability and excellent resistance to reconstruction attacks. Figs. 5, 6, 7 and 8 demonstrate the effectiveness of the scheme in terms of the balance between privacy protection effectiveness and face recognition accuracy, the anti-reconstruction capability of the antagonistic features, and their privacy-protection generalization. The antagonistic features generated in this embodiment have size 77 × 77 × 64, consistent with the size of the original facial features;
11) The data in the original face recognition database are replaced with the antagonistic features, completing the initialization or secure update of the face recognition database and guaranteeing the privacy security of the face data;
12) In the system operation stage, the face image to be verified, obtained from the face recognition terminal, is processed by the feature extractor E(·) into facial features and then sent to the server. In this embodiment, face images are uniformly resized to 160 × 160 × 3;
13) The facial features are processed through the data flow described in steps 6) to 10) to obtain the antagonistic features corresponding to the face image to be verified. Specifically: the facial features are processed by the shadow model S(·) into a class I shadow image, which is processed by the feature extractor E(·) to obtain shadow features; the shadow features are processed by the same shadow model S(·) to obtain a class II shadow image; the reconstruction loss between the class I and class II shadow images is computed, and the corresponding gradient magnitude and direction are derived from it; antagonistic latent noise is generated by solving the constrained optimization problem; and the shadow features and the antagonistic latent noise are added to obtain the antagonistic features, with privacy-preserving capability, corresponding to the face image to be verified;
14) The identity recognition network compares the antagonistic features corresponding to the face image to be verified with the antagonistic features in the face recognition database to obtain the face recognition result.
The invention offers two choices in the system operation stage:
Online mode: plug-and-play; face privacy is effectively protected without modifying or retraining any network, while high accuracy is maintained;
Offline mode: the identity recognition network is further trained on the antagonistic features to obtain higher-accuracy recognition results.
Fig. 1 is a training flowchart of a shadow model in a face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 2) to 3); fig. 2 is a flowchart of initializing a database in the face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 4) to 11); fig. 3 is a flowchart of a system operation phase in the face recognition privacy protection-oriented antagonistic feature generation method, corresponding to steps 12) to 14).
The invention uses Accuracy to evaluate face recognition capability: the higher the Accuracy, the stronger the face recognition capability. SSIM (structural similarity), PSNR (peak signal-to-noise ratio), MSE (mean squared error), and SRRA (success rate of replay attack) are used to evaluate the quality of the reconstructed image. SSIM is a number between 0 and 1; the greater the difference between the reconstructed image and the original image, the better the anti-reconstruction effect, with SSIM = 1 when the two images are identical. PSNR likewise compares the similarity between a reconstructed image and the corresponding original image; smaller values indicate poorer reconstruction quality and better reconstruction resistance. MSE measures the pixel difference between the reconstructed and original images; the greater the difference, the better the privacy protection. SRRA is the probability that face recognition matching succeeds using pictures recovered from facial features. In Fig. 5, ε limits the noise level added in each iteration; as ε increases, PSNR drops rapidly while face recognition Accuracy remains essentially unchanged, showing that the invention balances the effectiveness of privacy protection against the accuracy of the face recognition task well and can meet different service requirements by adjusting the value of ε. Fig. 6 shows the face recognition accuracy of existing methods and this method on the LFW, CFP-CP, and AgeDB-30 datasets at ε = 0.2; this method achieves face privacy protection while maintaining face recognition accuracy. Fig. 7 shows the average SSIM, PSNR, MSE, and SRRA of reconstructed pictures on the LFW, CFP-CP, and AgeDB-30 datasets at ε = 0.2.
It can be seen that on the three test datasets, the average SSIM, PSNR, and SRRA of pictures reconstructed from the antagonistic features are much lower than those of other methods, and the MSE is much higher, indicating that the invention effectively protects the privacy of various face images. Meanwhile, Fig. 8 shows the invention's ability to resist reconstruction attacks of different architectures; the various similarity indices show that the invention has excellent resistance to reconstruction attacks across different network architectures.
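For reference, MSE and PSNR as used in this evaluation can be computed as below (SSIM and SRRA are omitted; the images here are random arrays, and the noise levels are arbitrary assumptions). A stronger privacy defense pushes an attacker's reconstruction toward higher MSE and lower PSNR relative to the original image.

```python
import numpy as np

def mse(a, b):
    """Mean squared pixel error between two images with values in [0, 1]."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB; lower PSNR means a worse
    reconstruction, i.e. better resistance to reconstruction attacks."""
    return 10.0 * np.log10(peak ** 2 / mse(a, b))

rng = np.random.default_rng(3)
original = rng.random((32, 32))

# Two hypothetical reconstructions: one close to the original, one heavily
# degraded (as an effective defense would produce).
good_recon = np.clip(original + 0.01 * rng.standard_normal((32, 32)), 0, 1)
bad_recon = np.clip(original + 0.30 * rng.standard_normal((32, 32)), 0, 1)

print(psnr(original, good_recon), psnr(original, bad_recon))
```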
It should be understood that the foregoing description of preferred embodiments does not limit the scope of the invention, which is defined by the appended claims; those skilled in the art may make substitutions or modifications without departing from the scope of the invention, and such substitutions and modifications fall within the scope of the claims.

Claims (6)

1. The antagonism characteristic generation method for face recognition privacy protection is characterized by comprising the following steps of:
1) The preparation stage:
1.1) After the initial training of the face recognition model is completed, the face recognition model is divided into two parts: a feature extractor E(·) and an identity recognition network; the feature extractor E(·) is distributed to users as the client side, and the identity recognition network is deployed at the server;
2) Shadow model training phase:
2.1) The face image dataset is processed by the feature extractor E(·) into the corresponding facial features;
2.2) A shadow model S(·) is trained according to the correspondence between the face image dataset and the corresponding facial features, and the shadow model S(·) is deployed at the server;
3) Initializing a database:
3.1) If no face recognition database is available, a face recognition database is established; if one is available, the existing face recognition database is obtained;
3.2) The face recognition database is processed into a facial feature database using the feature extractor E(·);
3.3) The facial features in the facial feature database are processed by the shadow model S(·) into class I shadow images, and the class I shadow images are processed by the feature extractor E(·) to obtain shadow features;
3.4) The shadow features are processed with the same shadow model S(·) to obtain class II shadow images;
3.5) The reconstruction loss between the class I shadow images and the class II shadow images is calculated, and the corresponding gradient magnitude and gradient direction are calculated from the loss;
3.6) Antagonistic potential noise is generated by solving a constrained optimization problem according to the gradient magnitude and gradient direction;
3.7) The antagonistic potential noise is added to the shadow features to obtain antagonistic features with privacy-preserving capability;
3.8) The data in the original face recognition database are replaced with the antagonistic features, completing the initialization or secure update of the face recognition database;
4) And (3) a system operation stage:
4.1) The face image to be verified, obtained from the face recognition terminal, is processed into facial features by the feature extractor E(·) and then sent to the server;
4.2) The facial features are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain shadow features;
4.3) The shadow features are processed with the same shadow model S(·) to obtain a class II shadow image;
4.4) The reconstruction loss between the class I shadow image and the class II shadow image is calculated, and the corresponding gradient magnitude and gradient direction are calculated from the loss;
4.5) Antagonistic potential noise is generated by solving a constrained optimization problem according to the gradient magnitude and gradient direction;
4.6) The shadow features and the antagonistic potential noise are added to obtain the antagonistic features with privacy protection capability corresponding to the face image to be verified;
4.7) The identity recognition network compares the antagonistic features corresponding to the face image to be verified with the antagonistic features in the face recognition database to obtain a face recognition result;
The constrained optimization objective for the antagonistic potential noise described in step 3.6) and step 4.5) aims to find an L_p-norm bounded noise \delta that perturbs the feature so as to maximize the reconstruction loss \mathcal{L}_{rec}, which is formulated as the constrained optimization problem:

\max_{\delta} \; \mathcal{L}_{rec}\left( R(z+\delta),\, x \right), \quad \text{s.t. } \|\delta\|_{p} \leq \epsilon

where x is the original facial image, z represents the facial features extracted from x, \delta represents the antagonistic potential noise, \epsilon represents the noise margin, and R is the reconstruction attack network; the optimization problem is solved by adding noise along the gradient direction of \mathcal{L}_{rec}. In order to generate the antagonistic features, the antagonistic potential noise is injected into the shadow features under the guidance of the shadow model, and the gradient-based Projected Gradient Descent algorithm is used to generate the antagonistic features so as to break the mapping from features to the original facial image, thereby enabling the face recognition system to resist reconstruction attacks; it iteratively adds noise along the gradient direction while limiting the disturbance range of each iteration, and the generation of the antagonistic features is expressed as:

\hat{z}^{t+1} = \mathrm{clip}_{\hat{z},\epsilon}\left( \hat{z}^{t} + \alpha \cdot \mathrm{sign}\left( \nabla_{\hat{z}} \mathcal{L}_{rec}\left( S(\hat{z}^{t}),\, \hat{x} \right) \right) \right)

where S is the shadow model, \alpha controls the noise level, \epsilon limits the noise added in each iteration, \mathrm{clip}_{\hat{z},\epsilon}(\cdot) clips each element back into the \epsilon-neighborhood of the shadow feature \hat{z}, and sign(·) is a function acting on each element whose output 1 represents a positive gradient value, -1 a negative gradient value, and 0 a gradient value of 0.
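The iterative sign-gradient generation step described in claim 1 can be sketched as follows (all names are illustrative; `recon_loss_grad` stands in for the gradient of the shadow model's reconstruction loss, which in practice would come from automatic differentiation):

```python
import numpy as np

def adversarial_feature(z_shadow, x_target, recon_loss_grad,
                        alpha=0.05, eps=0.2, steps=10):
    """PGD-style ascent on a shadow feature: repeatedly step along the sign of
    the reconstruction-loss gradient, then clip back into the L_inf ball of
    radius eps around the original shadow feature."""
    z_adv = z_shadow.copy()
    for _ in range(steps):
        g = recon_loss_grad(z_adv, x_target)    # dL_rec/dz at the current iterate
        z_adv = z_adv + alpha * np.sign(g)      # ascend the reconstruction loss
        z_adv = np.clip(z_adv, z_shadow - eps, z_shadow + eps)  # L_inf projection
    return z_adv
```

With a constant positive gradient, ten steps of size 0.05 would move the feature by 0.5, but the clip keeps the total perturbation inside the 0.2 budget, mirroring the per-iteration limit in the formula above.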
2. The face recognition privacy protection-oriented antagonistic feature generation method as claimed in claim 1, wherein: the feature extractor E(·) described in step 1.1) and distributed to the client is a lightweight network that needs only a small amount of computation to extract shallow features, while the identity recognition network is deployed at the server to perform identification.
3. The face recognition privacy protection-oriented antagonistic feature generation method as claimed in claim 1, wherein: the shadow model S(·) described in step 2.2) is a reconstruction network of arbitrary structure that learns the mapping from facial features to face images; the shadow model S(·) is trained on a public face dataset by minimizing the following loss function:

\mathcal{L}_{S} = \frac{1}{|X|} \sum_{(x_i, z_i)} \left\| S(z_i) - x_i \right\|_{2}^{2}

where X is the face image dataset, Z is the corresponding facial feature set, x_i is a single raw face image, and z_i represents the single facial feature extracted from x_i.
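The training objective of claim 3 can be sketched as an average squared reconstruction error over (feature, image) pairs; the helper name below is illustrative and `S` stands in for the shadow network:

```python
import numpy as np

def shadow_loss(S, Z, X):
    """Mean over the dataset of ||S(z_i) - x_i||^2, i.e. the reconstruction
    loss minimized when training the shadow model on a public face dataset."""
    return float(np.mean([np.sum((S(z) - x) ** 2) for z, x in zip(Z, X)]))
```

A shadow model that perfectly inverts the feature extractor drives this loss to zero, which is exactly why its reconstructions can then serve as the attack surrogate against which the antagonistic noise is optimized.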
4. The face recognition privacy protection-oriented antagonistic feature generation method as claimed in claim 1, 2 or 3, wherein: the facial features described in step 3.3) and step 4.2) are processed by the shadow model S(·) into a class I shadow image, and the class I shadow image is processed by the feature extractor E(·) to obtain shadow features; the process is formally represented as:

\hat{x} = S(z), \qquad \hat{z} = E(\hat{x})

where \hat{x} is the class I shadow image reconstructed by the shadow model from the client-submitted facial feature z, and \hat{z} is the shadow feature extracted from the class I shadow image \hat{x} using the feature extractor.
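The two-step mapping of claim 4 is a simple composition; the sketch below uses illustrative callables for S and E:

```python
def shadow_pipeline(z, S, E):
    """Claim 4's mapping: reconstruct a class I shadow image from a submitted
    facial feature, then re-extract shadow features from that image."""
    x_shadow = S(z)         # class I shadow image reconstructed by the shadow model
    z_shadow = E(x_shadow)  # shadow features from the client-side feature extractor
    return x_shadow, z_shadow
```

The returned shadow feature, not the raw submitted feature, is what the noise of claim 1 is added to.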
5. The face recognition privacy protection-oriented antagonism feature generation method of claim 4, wherein: the gradient of the shadow model reconstruction loss described in step 3.5) and step 4.4) with respect to the added noise disturbance is:

\nabla_{\delta} \mathcal{L}_{rec}\left( S(\hat{z} + \delta),\, \hat{x} \right)

where \hat{z} + \delta represents the antagonistic feature, with \delta initialized to zero; after adding noise, an attacker cannot recover the class I shadow image \hat{x} from the antagonistic feature, and because \hat{x} is highly similar to the original image x, the attacker also has difficulty reconstructing the original image; because the face images used for training and the face images encountered after the face recognition network is deployed may differ greatly, the parameters of the batch normalization (BN) layers in the shadow model are updated so that the mean and variance of the facial features of each batch are calculated independently.
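The per-batch BN statistic update mentioned in claim 5 can be sketched as follows (an assumption-level illustration, not the patent's exact implementation; `eps` is the usual small constant for numerical stability):

```python
import numpy as np

def update_bn_stats(features, eps=1e-5):
    """Compute the mean and variance of each facial-feature dimension
    independently for the current batch, and normalize the batch with them,
    rather than relying on statistics frozen at training time."""
    mu = features.mean(axis=0)                      # per-dimension batch mean
    var = features.var(axis=0)                      # per-dimension batch variance
    normalized = (features - mu) / np.sqrt(var + eps)
    return normalized, mu, var
```

Recomputing the statistics per batch is what lets the shadow model cope with deployed-time faces that differ greatly from its training distribution.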
6. The face recognition privacy protection-oriented antagonism feature generation method of claim 1, 2, 3 or 5, wherein: the system operation stage described in step 4) has two different options: the online mode is plug-and-play, in which the existing face recognition database is directly updated into a protected antagonistic feature database, so that face privacy can be effectively protected without modifying or retraining any network while high-precision face recognition is maintained; the offline mode further trains the identity recognition network at the server with the antagonistic features to obtain more accurate recognition results.
CN202310212400.8A 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method Active CN116778544B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310212400.8A CN116778544B (en) 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method


Publications (2)

Publication Number Publication Date
CN116778544A CN116778544A (en) 2023-09-19
CN116778544B true CN116778544B (en) 2024-04-16

Family

ID=88007027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310212400.8A Active CN116778544B (en) 2023-03-07 2023-03-07 Face recognition privacy protection-oriented antagonism feature generation method

Country Status (1)

Country Link
CN (1) CN116778544B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117763523B (en) * 2023-12-05 2024-07-02 浙江大学 Privacy protection face recognition method capable of resisting gradient descent

Citations (3)

Publication number Priority date Publication date Assignee Title
CN114626507A (en) * 2022-03-15 2022-06-14 西安交通大学 Method, system, device and storage medium for generating confrontation network fairness analysis
CN115019378A (en) * 2022-08-09 2022-09-06 浙江大学 Cooperative reasoning-oriented method and device for resisting data review attribute inference attack
CN115577262A (en) * 2022-09-30 2023-01-06 中国人民解放军国防科技大学 Interactive visualization system for exploring federal learning privacy

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20210300433A1 (en) * 2020-03-27 2021-09-30 Washington University Systems and methods for defending against physical attacks on image classification


Non-Patent Citations (1)

Title
Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models; Zhibo Wang et al.; arXiv; 2022-03-03; full text *


Similar Documents

Publication Publication Date Title
Zhang et al. Exploiting defenses against gan-based feature inference attacks in federated learning
CN111242290B (en) Lightweight privacy protection generation countermeasure network system
Pittaluga et al. Learning privacy preserving encodings through adversarial training
Li et al. Privynet: A flexible framework for privacy-preserving deep neural network training
Feng et al. Masquerade attack on transform-based binary-template protection based on perceptron learning
CN114186237A (en) Truth-value discovery-based robust federated learning model aggregation method
Li et al. Deepobfuscator: Obfuscating intermediate representations with privacy-preserving adversarial learning on smartphones
Li et al. Deepobfuscator: Adversarial training framework for privacy-preserving image classification
CN116778544B (en) Face recognition privacy protection-oriented antagonism feature generation method
CN111625820A (en) Federal defense method based on AIoT-oriented security
Ding et al. Privacy-preserving feature extraction via adversarial training
CN112668044A (en) Privacy protection method and device for federal learning
Shao et al. Federated test-time adaptive face presentation attack detection with dual-phase privacy preservation
Xu et al. CGIR: Conditional generative instance reconstruction attacks against federated learning
Liu et al. Dynamic user clustering for efficient and privacy-preserving federated learning
Jasmine et al. A privacy preserving based multi-biometric system for secure identification in cloud environment
CN117807597A (en) Robust personalized federal learning method facing back door attack
Guo et al. Robust and privacy-preserving collaborative learning: A comprehensive survey
Savenko et al. Botnet detection approach based on the distributed systems
Hidayat et al. Privacy-Preserving Federated Learning With Resource Adaptive Compression for Edge Devices
Yin et al. Ginver: Generative model inversion attacks against collaborative inference
Zhao et al. PriFace: a privacy-preserving face recognition framework under untrusted server
Zhao et al. Deep leakage from model in federated learning
CN116865938A (en) Multi-server federation learning method based on secret sharing and homomorphic encryption
CN116233844A (en) Physical layer equipment identity authentication method and system based on channel prediction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant