CN117763523A - Privacy protection face recognition method capable of resisting gradient descent - Google Patents

Privacy protection face recognition method capable of resisting gradient descent

Info

Publication number
CN117763523A
Authority
CN
China
Prior art keywords
face
face recognition
feature
gradient descent
identity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311655949.0A
Other languages
Chinese (zh)
Other versions
CN117763523B (en)
Inventor
王志波
王和
金帅帆
何源
张文文
胡佳慧
任奎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202311655949.0A priority Critical patent/CN117763523B/en
Priority claimed from CN202311655949.0A external-priority patent/CN117763523B/en
Publication of CN117763523A publication Critical patent/CN117763523A/en
Application granted granted Critical
Publication of CN117763523B publication Critical patent/CN117763523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

Aiming at the defects that existing face privacy protection work cannot guarantee the accuracy of the face recognition task and cannot effectively defend against face reconstruction attacks, the invention establishes a lightweight privacy-preserving face recognition system: it removes visual information that is unimportant for face recognition from face images in the frequency domain, and generates further confused anti-gradient-descent features in the feature space to resist the gradient descent used in deep-learning-based reconstruction attacks. The system can resist unknown reconstruction attacks while maintaining face recognition accuracy, effectively protecting the privacy and security of faces. Compared with existing privacy protection methods, the privacy protection capability of the invention is improved by about 90%; the time cost of completing face recognition is comparable to that of a face recognition system without a privacy protection function; and the storage cost of the anti-gradient-descent features is 33% lower than that of a face recognition system without a privacy protection function.

Description

Privacy protection face recognition method capable of resisting gradient descent
Technical Field
The invention relates to the field of Artificial Intelligence (AI) security and the field of data security, and in particular to a privacy protection face recognition method resisting gradient descent, which can ensure the accuracy of a face recognition system while resisting reconstruction attacks to protect face privacy.
Background
Face recognition is a technology that performs biometric recognition using the face, and is widely applied in the security field. Since facial information is a unique, extremely difficult-to-change biometric feature that cannot be revoked once leaked, the privacy problems of face recognition have received increasing attention in recent years, making the protection of face privacy increasingly important. Many commercial face recognition systems store raw face pictures directly, or use machine learning to extract face features from face pictures.
When a face picture is leaked, the privacy of the user is directly exposed. When face features are leaked, face privacy is protected to a certain extent because the features suppress the visual information of the face. Unfortunately, these leaked features can still be exploited to recover sensitive face information, for example by reconstructing the appearance of the original image. Existing face privacy protection methods cannot effectively balance the effectiveness of privacy protection with the accuracy of the face recognition task, and therefore cannot meet application requirements.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a privacy protection face recognition method resisting gradient descent, which takes both the accuracy of the face recognition task and the effectiveness of privacy protection into account.
In order to achieve the above object, the present application provides the following technical solutions:
the invention discloses a privacy protection face recognition method for resisting gradient descent, which comprises the following steps:
1) Frequency domain channel analysis stage:
1.1 The importance of different frequency channels to face recognition is measured by using a designed auxiliary network;
2) Privacy protection face recognition model training phase:
2.1 If the face recognition training data set does not exist, a face recognition training data set is established, if the face recognition training data set exists, the face recognition training data set is obtained, and the face recognition training data set comprises RGB face pictures and corresponding identity labels;
2.2 Dividing the privacy protection face recognition model into 4 parts, namely a visual information deleting part P(·), a random confusion part E(·), an identity information recovery part D(·), and a face recognition part R(·);
2.3 Inputting the RGB face pictures in the face recognition training data set into the visual information deleting part P(·) and outputting the primary anti-gradient-descent feature f1;
2.4 Inputting the primary anti-gradient-descent feature f1 into the random confusion part E(·) and outputting the anti-gradient-descent feature f2;
2.5 Inputting the anti-gradient-descent feature f2 into the identity information recovery part D(·) and outputting the recovered feature f3;
2.6 Inputting the recovered feature f3 into the face recognition part R(·) and outputting the face identity prediction value id;
2.7 Using the RGB face pictures in the face recognition training data set and the corresponding identity labels to jointly train the visual information deleting part P(·), the random confusion part E(·), the identity information recovery part D(·) and the face recognition part R(·);
2.8 After training is completed, the visual information deleting part P(·) and the random confusion part E(·) are distributed to users as the client, and the identity information recovery part D(·) and the face recognition part R(·) are deployed at the server;
3) Initializing a database:
3.1 If the face recognition database is not available, a face recognition database is established, and if the face recognition database is available, the face recognition database is obtained;
3.2 Processing the face pictures of the face recognition database into primary anti-gradient-descent features f1 by using the visual information deleting part P(·);
3.3 Using the random confusion part E(·) to process the primary anti-gradient-descent features f1 into anti-gradient-descent features f2, and using the anti-gradient-descent features f2 to form a facial feature database that replaces the original database;
4) System operation stage:
4.1 The client processes the RGB face image to be verified, obtained from the face recognition terminal, into the primary anti-gradient-descent feature f1 of the face image to be verified through the visual information deleting part P(·);
4.2 The client uses the random confusion part E(·) to process the primary anti-gradient-descent feature f1 of the face image to be verified into the anti-gradient-descent feature f2 and sends it to the server;
4.3 The server uses the identity information recovery part D(·) to process the anti-gradient-descent feature f2 of the face image to be verified into the recovered feature f3; meanwhile, the anti-gradient-descent feature of the reference identity in the facial feature database is also input to the identity information recovery part D(·) to obtain the recovered feature of the reference identity;
4.4 The recovered feature f3 of the face image to be verified and the recovered feature of the reference identity are respectively input to the face recognition part R(·), which outputs the face identity feature fid of the face image to be verified and the face identity feature of the reference identity; the two are compared to obtain the face recognition result, i.e., the success or failure of face verification is output.
As a further improvement, in step 1.1) of the invention, the designed auxiliary network is a single-layer network that weights each frequency domain channel, and the operation of measuring the importance of different frequency channels to face recognition can be divided into two parts: a face picture graying, frequency-domain conversion and order-of-magnitude unification operation BDCT+(·), and the weight training of the auxiliary network. The process can be formally expressed as:
f0 = BDCT+(x), fα = α * f0
where x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, f0 is the face feature composed of frequency channels, α is the weight of the auxiliary network, fα = α * f0 is the calculation performed by the auxiliary network, fα is the auxiliary feature containing the weight values to be trained, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, Margin_Loss is a general accuracy-measuring loss function in the face recognition field, R_normal(·) is a common face recognition network, and the auxiliary network weight α obtained by the final training represents the importance of each frequency domain channel.
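For illustration only, the following Python sketch shows how such a single-layer channel-weighting auxiliary network could be trained; PyTorch, the AuxChannelWeight class, the toy linear head, the cross-entropy loss and all tensor shapes are assumptions of this sketch standing in for the patent's R_normal(·) backbone and Margin_Loss.

```python
import torch
import torch.nn as nn

class AuxChannelWeight(nn.Module):
    """Single-layer auxiliary network: one learnable weight per frequency channel."""
    def __init__(self, num_channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(num_channels))   # per-channel importance

    def forward(self, f0):
        # f_alpha = alpha * f0, broadcasting alpha over the spatial dimensions
        return f0 * self.alpha.view(1, -1, 1, 1)

# Toy usage: after training, larger |alpha[k]| indicates a channel that matters more
# for recognition; the real pipeline would train alpha jointly with a face recognition
# network under a margin-based loss.
num_channels, num_ids = 64, 10
aux = AuxChannelWeight(num_channels)
head = nn.Sequential(nn.Flatten(), nn.Linear(num_channels * 14 * 14, num_ids))
opt = torch.optim.SGD(list(aux.parameters()) + list(head.parameters()), lr=0.1)

f0 = torch.randn(8, num_channels, 14, 14)       # stand-in frequency-domain features
ids = torch.randint(0, num_ids, (8,))           # stand-in identity labels
loss = nn.functional.cross_entropy(head(aux(f0)), ids)
opt.zero_grad(); loss.backward(); opt.step()
print(aux.alpha.detach().abs().sort(descending=True).indices[:5])  # most important channels
```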
As a further improvement, in step 2.2) of the invention, the visual information deleting part P(·) includes two parts: graying the face picture and converting it into the frequency domain space, operation BDCT(·), and deleting unimportant frequency domain channels, operation Del(·). The process can be formally expressed as:
f1 = P(x) = Del(BDCT(x)),
f0 = BDCT(x), f1 = Del(f0)
where f0 represents the face features in the frequency domain space. The operation Del(·) of deleting unimportant frequency domain channels sorts the channels by their importance to face recognition and deletes the channels that are unimportant to face recognition while guaranteeing the face recognition accuracy, so as to obtain the primary anti-gradient-descent feature f1.
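A minimal NumPy sketch of this idea, assuming the image is already grayed and that the 8×8 block-DCT coefficients are regrouped into 64 frequency channels; the helper names dct_matrix, bdct_channels and delete_channels and the kept channel indices are illustrative, not the patent's implementation.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix, as used in JPEG 8x8 block transforms.
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def bdct_channels(gray, block=8):
    # Split the (already grayed) image into 8x8 blocks, DCT each block, and regroup
    # the 64 per-block coefficients into 64 "frequency channels" of shape (H/8, W/8).
    h, w = gray.shape
    d = dct_matrix(block)
    chans = np.zeros((block * block, h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            blk = gray[by*block:(by+1)*block, bx*block:(bx+1)*block]
            chans[:, by, bx] = (d @ blk @ d.T).reshape(-1)   # channel k = coefficient (k//8, k%8)
    return chans

def delete_channels(f0, keep):
    # Del(.): keep only the listed frequency channels.
    return f0[keep]

gray = np.random.rand(112, 112)              # stand-in for a grayed face picture
f0 = bdct_channels(gray)                     # shape (64, 14, 14)
f1 = delete_channels(f0, keep=[1, 9])        # e.g. AC01 and AC11 under row-major indexing
print(f0.shape, f1.shape)
```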
As a further improvement, in step 2.3) of the invention, the random confusion part E(·) includes a normalization operation Self_BN(·), a candidate feature set generation operation Generate(·), and a random selection operation Random_Pick(·), which can be formally expressed as:
f2 = E(f1) = Random_Pick(Generate(Self_BN(f1)))
f1′ = Self_BN(f1),
S = Generate(f1′),
f2 = Random_Pick(S)
Self_BN(·) calculates an individual variance and mean for each frequency domain channel, and separately normalizes each frequency domain channel to obtain the normalized frequency domain feature f1′:
f1k′ = (f1k - Mean(f1k)) / sqrt(Var(f1k) + δ)
where f1k represents the kth of the c frequency domain channels of the primary anti-gradient-descent feature f1, Mean is the mean of the elements, Var is the variance of the elements after Bayesian correction, and δ is an offset term that prevents the denominator from being zero;
Generate(·) is a candidate feature set generation method; the generation method is not unique, but the features in the generated set must be different from each other and their values must be staggered, so as to guarantee a one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S:
ε1 ← J_{m×n×c}, all elements in J are (-1)^u
Mask: ε ← ε1 ⊙ ε2, all elements in ε are (-1)^u × b^v
S = Generate(f1′) = S(f1′, b) = f1′ ⊙ ε = f1′ × (-1)^u × b^v
where ε is the mask corresponding to the feature space in which the candidate feature set lies, ε1 and ε2 are intermediate variables used to generate ε, u and v are discrete random variables whose different values yield different anti-gradient-descent features, b is a constant obtained from the server when the client is initialized, and S is the output candidate feature set;
Random_Pick(·) is a random selection operation, i.e., the values of u and v in the candidate feature set S are determined randomly, and the output is the anti-gradient-descent feature f2.
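A minimal NumPy sketch of the Self_BN, Generate and Random_Pick steps; it draws a single (u, v) pair for the whole feature, and the defaults b = 2, u ∈ {0, 1}, v ∈ [-60, 60] and δ = 1e-5 are assumptions borrowed from the embodiment below rather than requirements of this step.

```python
import numpy as np

def self_bn(f1, delta=1e-5):
    # Normalize each frequency channel separately with its own mean and variance.
    out = np.empty_like(f1)
    for k in range(f1.shape[0]):
        ch = f1[k]
        out[k] = (ch - ch.mean()) / np.sqrt(ch.var(ddof=1) + delta)
    return out

def generate_and_pick(shape, b=2.0, v_range=(-60, 60), rng=None):
    # Generate + Random_Pick collapsed into one step: draw one (u, v) pair and build
    # a mask whose elements are all (-1)^u * b^v.
    if rng is None:
        rng = np.random.default_rng()
    u = rng.integers(0, 2)
    v = rng.integers(v_range[0], v_range[1] + 1)
    return np.full(shape, ((-1.0) ** u) * (b ** v))

def confuse(f1, b=2.0):
    f1n = self_bn(f1)                       # f1'
    eps = generate_and_pick(f1n.shape, b=b)
    return f1n * eps                        # f2 = f1' * (-1)^u * b^v

f1 = np.random.randn(2, 14, 14)             # stand-in primary anti-gradient-descent feature (c = 2)
f2 = confuse(f1)
print(f2.shape, float(np.abs(f2).max()))
```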
As a further improvement, in step 2.4) of the invention, the identity information recovery part D(·) first determines the candidate feature set S to which the anti-gradient-descent feature f2 belongs, and uses the one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S to approximately recover the normalized recovered feature f3. The following algorithm is executed for each channel of the anti-gradient-descent feature f2, where the processing of the ith channel can be formally expressed as:
When max(|f2i|) > 0:
r = round_down(-log_b max(|f2i|)),
di = f2i × b^r + bias
When max(|f2i|) = 0:
di = O
The finally output recovered feature f3 can be expressed as:
f3 = D(f2) = {d1, d2, ..., dc}
where f2i represents the ith channel of the anti-gradient-descent feature f2, |·| represents the absolute value, max(·) represents the maximum value, round_down(·) represents rounding down, bias is a constant bias term, di represents the ith channel of the recovered feature f3, O represents a matrix whose elements are all 0, and di = O is equivalent to all elements of di being assigned 0.
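A minimal NumPy sketch of this per-channel recovery rule, assuming b = 2 and bias = 0 as in the embodiment below; the synthetic round trip only illustrates that the channel scale is restored approximately (the random sign is not undone).

```python
import numpy as np

def recover(f2, b=2.0, bias=0.0):
    # D(.): per channel, rescale by b^r with r = round_down(-log_b max|f2_i|), which
    # undoes the unknown b^v factor up to a bounded factor; zero channels stay all-zero.
    d = np.zeros_like(f2)
    for i in range(f2.shape[0]):
        peak = np.abs(f2[i]).max()
        if peak > 0:
            r = np.floor(-np.log(peak) / np.log(b))
            d[i] = f2[i] * (b ** r) + bias
    return d

# Toy round trip: after recovery the per-channel scale is back near 1 regardless of v,
# while the random sign remains, so the recovery is approximate by design.
f1_norm = np.random.randn(2, 14, 14)
f2 = f1_norm * ((-1) ** 1) * (2.0 ** -17)    # one hand-picked (u, v) = (1, -17)
f3 = recover(f2)
print(float(np.abs(f3).max()))
```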
As a further improvement, in step 2.5) of the invention, the face recognition part R(·) is divided into a feature processing layer R_P(·) and a classification layer R_FC(·). This process is formally expressed as:
id = R(f3) = R_FC(R_P(f3)),
fid = R_P(f3),
id = R_FC(fid)
where the feature processing layer R_P(·) further processes the recovered feature f3 into the 512-dimensional face identity feature fid, and the classification layer R_FC(·) further outputs the face identity prediction value id.
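For illustration, a minimal PyTorch sketch of the two-stage structure R_FC(R_P(·)); the tiny convolutional backbone, input shape and identity count are placeholders for the ResNet50-based configuration described in the embodiment.

```python
import torch
import torch.nn as nn

class FaceRecognizer(nn.Module):
    def __init__(self, in_channels=2, num_ids=1000):
        super().__init__()
        self.r_p = nn.Sequential(                      # R_P: recovered feature -> 512-d embedding
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 512),
        )
        self.r_fc = nn.Linear(512, num_ids)            # R_FC: embedding -> identity logits

    def forward(self, f3):
        fid = self.r_p(f3)
        return fid, self.r_fc(fid)

fid, logits = FaceRecognizer()(torch.randn(4, 2, 14, 14))
print(fid.shape, logits.shape)
```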
As a further improvement, in step 2.6) of the invention, the training of the face recognition part R(·), in combination with the visual information deleting part P(·), the random confusion part E(·) and the identity information recovery part D(·), using the face image data set and the corresponding identity labels is formally expressed as:
where P(·) is the visual information deleting part, E(·) is the random confusion part, D(·) is the identity information recovery part, R(·) is the face recognition part, x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, and Margin_Loss is a general accuracy-measuring loss function in the face recognition field.
As a further improvement, in step 4.3) of the invention, the face identity feature fid of the face image to be verified and the face identity feature of the reference identity are output and compared, and the face recognition result is obtained as follows:
COS(·) is used to calculate the cosine similarity of the two features, and similarity is the specific value of that cosine similarity; when the similarity is greater than a dynamically calculated threshold, the two pictures are judged to be the same person, otherwise they are judged to be different persons.
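A minimal sketch of this comparison step; the fixed 0.3 threshold is a placeholder for the dynamically calculated threshold.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify(fid_probe, fid_ref, threshold=0.3):
    # Same person if the cosine similarity of the two identity features exceeds the threshold.
    return cosine_similarity(fid_probe, fid_ref) > threshold

probe, ref = np.random.randn(512), np.random.randn(512)
print("same person" if verify(probe, ref) else "different persons")
```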
The beneficial effects of the invention are as follows:
The invention relates to the field of Artificial Intelligence (AI) security and the field of data security, and discloses a privacy protection face recognition method resisting gradient descent. Compared with existing face privacy protection work, which can neither guarantee the accuracy of the face recognition task nor effectively defend against face reconstruction attacks, the invention establishes a lightweight privacy-preserving face recognition system: it removes visual information that is unimportant for face recognition from face images in the frequency domain, and generates further confused anti-gradient-descent features in the feature space to resist the gradient descent used in deep-learning-based reconstruction attacks, so that unknown reconstruction attacks can be resisted while maintaining face recognition accuracy and the privacy and security of faces are effectively protected. Specifically, in step 1.1) the importance of each normalized frequency domain channel is analyzed for the first time, and the method can delete more than 90% of the frequency channels with almost no loss of face recognition accuracy (existing schemes can remove at most 50% of the frequency channels), thereby reducing the visual information in the frequency domain features and providing an initial defense against face reconstruction attacks without affecting accuracy. In steps 2.3) and 2.4), the invention designs, for each different primary anti-gradient-descent feature f1, a candidate feature set containing multiple anti-gradient-descent features f2, and ensures that the anti-gradient-descent features f2 in different candidate feature sets are staggered in the feature space rather than repeated. The staggering in the feature space prevents an attacker from exploiting the anti-gradient-descent feature f2 to reconstruct the face picture, while the non-repetition in the feature space enables the anti-gradient-descent feature f2 to retain identity information. The privacy protection capability of the invention is improved by about 90% compared with existing privacy protection methods, the loss of face recognition accuracy is negligible, the time cost of completing face recognition is comparable to that of a face recognition system without a privacy protection function, and the storage cost of the anti-gradient-descent features is 33% lower than that of a face recognition system without a privacy protection function.
Drawings
FIG. 1 is a flow chart of training a privacy preserving face recognition model in a privacy preserving face recognition method against gradient descent;
FIG. 2 is a flowchart of initializing a database in a privacy preserving face recognition method against gradient descent;
FIG. 3 is a flow chart of the system operational phase in the privacy preserving face recognition method against gradient descent;
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The privacy protection face recognition method for resisting gradient descent comprises the following steps:
1) The importance of different frequency channels to face recognition is measured using the designed auxiliary network. Specifically, in this embodiment MS1Mv3 is selected as the face image-identity data set (training set); a private data set built by the face recognition service provider may also be used. The graying, frequency-domain conversion and order-of-magnitude unification operation BDCT+(·) is then applied to each face image to obtain the face feature f0 in the frequency domain space, after which the auxiliary network weights are applied to the features and trained. The designed auxiliary network is a single-layer network that weights each frequency domain channel; its purpose is to measure the importance of different frequency domain channels to face recognition, because during training the auxiliary network weights corresponding to the frequency domain channels that are more important to face recognition gradually increase in order to improve the face recognition accuracy. The process can be formally expressed as:
f0 = BDCT+(x), fα = α * f0
where x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, f0 is the face feature composed of frequency channels after conversion into the frequency domain space, α is the weight of the auxiliary network, fα = α * f0 is the calculation performed by the auxiliary network, fα is the auxiliary feature containing the weight values to be trained, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, and Margin_Loss is a general accuracy-measuring loss function in the face recognition field; in this embodiment, the loss function adopted in the ArcFace recognition model architecture is used. R_normal(·) is a common face recognition network; in this embodiment the ArcFace architecture is also adopted, with ResNet50 as the backbone network. The auxiliary network weight α obtained by the final training represents the importance of each frequency domain channel.
2) The face image data set and the corresponding identity labels are used to jointly train the visual information deleting part P(·), the random confusion part E(·), the identity information recovery part D(·) and the face recognition part R(·). In this embodiment MS1Mv3 is selected as the face image-identity data set (training set); a private data set built by the face recognition service provider may also be used. The face images acquired in this embodiment are uniformly resized to 112×112×3.
3) First, the RGB face picture is input to the visual information deleting part P(·), which outputs the primary anti-gradient-descent feature f1. Specifically, BDCT(·) is used to gray the face picture and convert it into the frequency domain space, and then Del(·) is used to delete the frequency domain channels that are unimportant for face recognition. The process can be formally expressed as:
f1 = P(x) = Del(BDCT(x)),
f0 = BDCT(x), f1 = Del(f0)
where f0 represents the face features in the frequency domain space, containing 64 frequency domain channels in total. The operation Del(·) of deleting the frequency domain channels that are unimportant for face recognition sorts the channels of different frequencies by their importance to face recognition and deletes the channels that are unimportant for face recognition while guaranteeing the face recognition accuracy, so as to obtain the primary anti-gradient-descent feature f1. Specifically, this embodiment keeps only two channels, AC01 and AC11 (after the grayscale picture undergoes the block discrete cosine transform operation of the JPEG compression standard, the 64 output channels are arranged in an 8×8 grid according to the TISO0690-93/d005 standard, and ACij refers to the alternating-current channel in row i, column j); all other frequency domain channels are deleted.
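As a small illustrative aid, under the row-major channel layout assumed in the block-DCT sketch above, the ACij naming maps to flat channel indices as follows.

```python
def ac_index(i, j, grid=8):
    # AC_ij -> flat channel index under the row-major 8x8 layout assumed in the sketch above.
    return i * grid + j

print(ac_index(0, 1), ac_index(1, 1))  # 1 9: keeping AC01 and AC11 keeps flat channels 1 and 9
```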
4) Then, the primary anti-gradient-descent feature f1 is input to the random confusion part E(·), which outputs the anti-gradient-descent feature f2. Specifically, the random confusion part E(·) includes a normalization operation Self_BN(·), a candidate feature set generation operation Generate(·), and a random selection operation Random_Pick(·). The processing procedure can be formally expressed as:
f2 = E(f1) = Random_Pick(Generate(Self_BN(f1)))
f1′ = Self_BN(f1),
S = Generate(f1′),
f2 = Random_Pick(S)
Self_BN(·) calculates an individual variance and mean for each frequency domain channel, and separately normalizes each frequency domain channel to obtain the normalized frequency domain feature f1′:
f1k′ = (f1k - Mean(f1k)) / sqrt(Var(f1k) + δ)
where f1k represents the kth of the c frequency domain channels of the primary anti-gradient-descent feature f1, Mean is the mean of the elements, Var is the variance of the elements after Bayesian correction, and δ is an offset term that prevents the denominator from being zero. In this embodiment c is 2, i.e., the primary anti-gradient-descent feature f1 contains two frequency domain channels, and the offset term δ is set to 1×10^-5.
Generate(·) is a candidate feature set generation method; the generation method is not unique, but the features in the generated set must be different from each other and their values must be staggered, so as to guarantee a one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S. In this embodiment the generation is performed as follows:
ε1 ← J_{m×n×c}, all elements in J are (-1)^u
Mask: ε ← ε1 ⊙ ε2, all elements in ε are (-1)^u × b^v
S = Generate(f1′) = S(f1′, b) = f1′ ⊙ ε = f1′ × (-1)^u × b^v
where ε is the mask corresponding to the feature space in which the candidate feature set lies, ε1 and ε2 are intermediate variables used to generate ε, u and v are discrete random variables whose different values yield different anti-gradient-descent features, b is a constant obtained from the server when the client is initialized, and S is the output candidate feature set. In this embodiment b takes the value 2, and u and v take integer values in the interval [-60, 60];
Random_Pick(·) is a random selection operation, i.e., the values of u and v in the candidate feature set S are determined randomly, and the output is the anti-gradient-descent feature f2. The anti-gradient-descent feature has excellent resistance to reconstruction attacks, and the anti-gradient-descent feature with privacy protection capability generated in this embodiment has a size of 112×112×2, 33% smaller than the original face picture. Tables 1 and 2 demonstrate the effectiveness of the scheme from two perspectives: the performance of the face recognition task and the ability to defend against reconstruction attacks. Table 1 compares the accuracy, storage overhead and time overhead of the invention with existing face recognition privacy methods; Table 2 compares the reconstruction-resistance indices of the invention with existing face recognition privacy methods.
TABLE 1
TABLE 2
5) Next, the anti-gradient-descent feature f2 is input to the identity information recovery part D(·), which outputs the recovered feature f3. Specifically, the candidate feature set S to which the anti-gradient-descent feature f2 belongs is first determined, and the one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S is used to approximately recover the normalized recovered feature f3. The following algorithm is executed for each channel of the anti-gradient-descent feature f2, where the processing of the ith channel can be formally expressed as:
When max(|f2i|) > 0:
r = round_down(-log_b max(|f2i|)),
di = f2i × b^r + bias
When max(|f2i|) = 0:
di = O
The finally output recovered feature f3 can be expressed as:
f3 = D(f2) = {d1, d2, ..., dc}
where f2i represents the ith channel of the anti-gradient-descent feature f2, c represents the total number of channels, |·| represents the absolute value, max(·) represents the maximum value, round_down(·) represents rounding down, bias is a constant bias term, di represents the ith channel of the recovered feature f3, O represents a matrix whose elements are all 0, and di = O is equivalent to all elements of di being assigned 0. In this embodiment the total number of channels c is 2, and the constant bias term bias is 0.
6) Next, the recovered feature f3 is input to the face recognition part R(·), which outputs the face identity prediction value id. Specifically, the face recognition part R(·) can be divided into a feature processing layer R_P(·) and a classification layer R_FC(·). This process can be formally expressed as:
id = R(f3) = R_FC(R_P(f3)),
fid = R_P(f3),
id = R_FC(fid)
where the feature processing layer R_P(·) further processes the recovered feature f3 into the 512-dimensional face identity feature fid, and the classification layer R_FC(·) further outputs the face identity prediction value id. In this embodiment, the backbone network of the feature processing layer R_P(·) of the face recognition part R(·) adopts ResNet50, and the classification network R_FC(·) adopts a fully connected layer.
7) Finally, the face recognition part R(·) is trained using the face image data set and the corresponding identity labels, in combination with the visual information deleting part P(·), the random confusion part E(·) and the identity information recovery part D(·). The process can be formally expressed as:
where P(·) is the visual information deleting part, E(·) is the random confusion part, D(·) is the identity information recovery part, R(·) is the face recognition part, x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, and Margin_Loss is a general accuracy-measuring loss function in the face recognition field. In this embodiment, Margin_Loss is the loss function adopted in the ArcFace recognition model architecture; the joint training uses an SGD optimizer with a learning rate of 0.1, a momentum factor of 0.9 and a weight decay of 5e-4, and 10 epochs of training are performed on the public data set MS1Mv3;
8) The face recognition database is initialized. If no face recognition database exists, one is built; if it exists, it is obtained. Each face picture is then processed by the data processing flow of step 3) and step 4) to obtain the anti-gradient-descent features corresponding to all pictures in the face recognition database, which replace the original face pictures, completing the initialization or secure update of the face recognition database and guaranteeing the privacy and security of the face data.
9) In the system operation stage, the client processes the face image to be verified, acquired from the face recognition terminal, through the data processing flow described in step 3) and step 4) into the anti-gradient-descent feature f2 of the face image to be verified, and then sends it to the server;
10) The server uses the data processing flow of step 5) to process the anti-gradient-descent feature f2 of the face image to be verified into the recovered feature f3; meanwhile, the anti-gradient-descent feature of the reference identity in the facial feature database is also processed by the data processing flow of step 5) to obtain the recovered feature of the reference identity; then the recovered feature f3 of the face image to be verified and the recovered feature of the reference identity are respectively input to the data processing flow described in step 6), which outputs the face identity feature fid of the face image to be verified and the face identity feature of the reference identity; the face recognition result is obtained by comparison, and the comparison process can be characterized as:
COS(·) is used to calculate the cosine similarity of the two features, and similarity is the specific value of that cosine similarity; when the similarity is greater than a threshold, the two pictures are judged to be the same person, otherwise they are judged to be different persons. In this embodiment the threshold is obtained by cross-validation on the test sets, and different test sets have different thresholds under cross-validation.
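A minimal sketch of deriving a threshold from held-out similarity scores; the synthetic score distributions and the balanced-accuracy criterion are assumptions of this sketch, not the patent's exact cross-validation protocol.

```python
import numpy as np

def best_threshold(genuine, impostor):
    # Sweep candidate thresholds on held-out similarity scores and keep the one with the
    # best balanced accuracy over genuine (same-identity) and impostor (different-identity) pairs.
    candidates = np.linspace(-1.0, 1.0, 201)
    accs = [((genuine > t).mean() + (impostor <= t).mean()) / 2 for t in candidates]
    return float(candidates[int(np.argmax(accs))])

rng = np.random.default_rng(0)
genuine = rng.normal(0.6, 0.1, 1000)     # synthetic same-identity similarities
impostor = rng.normal(0.1, 0.1, 1000)    # synthetic different-identity similarities
print(best_threshold(genuine, impostor))
```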
Fig. 1 is a flowchart of training a privacy preserving face recognition model in a privacy preserving face recognition method against gradient descent, corresponding to steps 2) to 7); FIG. 2 is a flowchart of initializing a database in a privacy preserving face recognition method against gradient descent, corresponding to step 8); FIG. 3 is a flow chart of the system operation phase in the privacy preserving face recognition method against gradient descent, corresponding to steps 9) through 10);
The invention uses Accuracy and TAR@FAR = 1×10^-4 / 1×10^-5 to evaluate the face recognition capability; the higher the Accuracy, the stronger the face recognition capability. TAR@FAR = 1×10^-4 denotes the probability that positive samples are correctly identified (TAR) when the proportion of negative samples identified as positive is 1×10^-4; TAR@FAR = 1×10^-5 denotes the probability that positive samples are correctly identified (TAR) when the proportion of negative samples identified as positive is 1×10^-5; the higher the TAR, the stronger the face recognition capability. The invention uses SSIM (structural similarity), PSNR (peak signal-to-noise ratio), MSE (mean squared error), COS (cosine similarity) and SRRA (replay attack success rate) to evaluate the quality of the reconstructed image. SSIM is a number between 0 and 1; the greater the difference between the reconstructed image and the original image, the better the anti-reconstruction effect of the method, and SSIM = 1 when the two images are identical. PSNR is likewise used to compare the similarity between a reconstructed image and the corresponding original image; smaller values indicate poorer quality of the reconstructed image and better reconstruction resistance. MSE represents the pixel difference between the reconstructed image and the original image; the greater the difference, the better the privacy protection effect. COS measures, through another independent face recognition system, the cosine similarity between the identity feature vector of the reconstructed image and that of the reference image in the 512-dimensional face feature space; the less identity information the reconstructed image retains, the smaller the COS and the better the reconstruction resistance. SRRA represents the replay attack success rate, i.e., the probability of a successful face recognition match using pictures restored from facial features. Table 1 compares the face recognition Accuracy of the invention with 2 mainstream face recognition schemes without privacy protection and 6 different privacy protection methods; the Accuracy of each scheme is measured on the 6 data sets LFW, CFP-FF, CFP-FP, AgeDB-30, CALFW and CPLFW, and TAR@FAR = 1×10^-4 and TAR@FAR = 1×10^-5 are measured on the two data sets IJB-B and IJB-C. The accuracy of the invention is hardly reduced compared with mainstream face recognition schemes without privacy protection. Table 1 also measures the running time overhead and feature storage overhead of the different face recognition schemes: the time cost from inputting an image to completing face recognition is comparable to that of mainstream face recognition schemes without privacy protection and faster than most face privacy protection methods, and the anti-gradient-descent feature of each picture requires only 98 KB of storage, far less than all the face recognition schemes used for comparison. In the table, open circles represent no defense capability, half-filled circles represent poor defense capability, and filled circles represent good defense capability.
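For reference, a minimal sketch of how the TAR@FAR metric mentioned above can be computed from genuine and impostor similarity scores; the score distributions here are synthetic placeholders.

```python
import numpy as np

def tar_at_far(genuine, impostor, far=1e-4):
    # Threshold chosen so that roughly a fraction `far` of impostor pairs are accepted,
    # then report the fraction of genuine pairs accepted at that threshold (the TAR).
    threshold = np.quantile(impostor, 1.0 - far)
    return float((genuine > threshold).mean())

rng = np.random.default_rng(1)
genuine = rng.normal(0.7, 0.1, 100000)    # synthetic same-identity similarities
impostor = rng.normal(0.0, 0.1, 100000)   # synthetic different-identity similarities
print(tar_at_far(genuine, impostor, far=1e-4))
```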
Table 2 shows that, under the two different leakage scenarios of transmission channel leakage (C) and database leakage (D), and facing the deconvolution-network-based face reconstruction attack (DN-based face reconstruction attack) and the generative-network-based face reconstruction attack (cGAN-based face reconstruction attack), measured by the MSE, PSNR, SSIM, COS and SRRA indices on the 6 data sets LFW, CFP-FF, CFP-FP, AgeDB-30, CALFW and CPLFW, the invention achieves the best results on all indices in both attack scenarios against both attack methods. In particular, on the COS and SRRA indices the results are improved by two orders of magnitude compared with other face privacy protection schemes, showing that the face privacy protection method of the invention can effectively protect the privacy of face images in various scenarios.
It should be understood that the foregoing description of the preferred embodiments is not intended to limit the invention; the scope of the invention is defined by the appended claims, and those skilled in the art can make substitutions or modifications without departing from the scope of the invention as set forth in the appended claims.

Claims (8)

1. The privacy protection face recognition method for resisting gradient descent is characterized by comprising the following steps of:
1) Frequency domain channel analysis stage:
1.1 The importance of different frequency channels to face recognition is measured by using a designed auxiliary network;
2) Privacy protection face recognition model training phase:
2.1 If the face recognition training data set does not exist, a face recognition training data set is established, if the face recognition training data set exists, the face recognition training data set is obtained, and the face recognition training data set comprises RGB face pictures and corresponding identity labels;
2.2 Dividing the privacy protection face recognition model into 4 parts, namely a visual information deleting part P(·), a random confusion part E(·), an identity information recovery part D(·), and a face recognition part R(·);
2.3 Inputting the RGB face pictures in the face recognition training data set into the visual information deleting part P(·) and outputting the primary anti-gradient-descent feature f1;
2.4 Inputting the primary anti-gradient-descent feature f1 into the random confusion part E(·) and outputting the anti-gradient-descent feature f2;
2.5 Inputting the anti-gradient-descent feature f2 into the identity information recovery part D(·) and outputting the recovered feature f3;
2.6 Inputting the recovered feature f3 into the face recognition part R(·) and outputting the face identity prediction value id;
2.7 Using the RGB face pictures in the face recognition training data set and the corresponding identity labels to jointly train the visual information deleting part P(·), the random confusion part E(·), the identity information recovery part D(·) and the face recognition part R(·);
2.8 After training is completed, the visual information deleting part P(·) and the random confusion part E(·) are distributed to users as the client, and the identity information recovery part D(·) and the face recognition part R(·) are deployed at the server;
3) Initializing a database:
3.1 If the face recognition database is not available, a face recognition database is established, and if the face recognition database is available, the face recognition database is obtained;
3.2 Processing the face pictures of the face recognition database into primary anti-gradient-descent features f1 by using the visual information deleting part P(·);
3.3 Using the random confusion part E(·) to process the primary anti-gradient-descent features f1 into anti-gradient-descent features f2, and using the anti-gradient-descent features f2 to form a facial feature database that replaces the original database;
4) System operation stage:
4.1 The client processes the RGB face image to be verified, obtained from the face recognition terminal, into the primary anti-gradient-descent feature f1 of the face image to be verified through the visual information deleting part P(·);
4.2 The client uses the random confusion part E(·) to process the primary anti-gradient-descent feature f1 of the face image to be verified into the anti-gradient-descent feature f2 and sends it to the server;
4.3 The server uses the identity information recovery part D(·) to process the anti-gradient-descent feature f2 of the face image to be verified into the recovered feature f3; meanwhile, the anti-gradient-descent feature of the reference identity in the facial feature database is also input to the identity information recovery part D(·) to obtain the recovered feature of the reference identity;
4.4 The recovered feature f3 of the face image to be verified and the recovered feature of the reference identity are respectively input to the face recognition part R(·), which outputs the face identity feature fid of the face image to be verified and the face identity feature of the reference identity; the two are compared to obtain the face recognition result, i.e., the success or failure of face verification is output.
2. The gradient descent resistant privacy preserving face recognition method of claim 1, wherein: in step 1.1), the designed auxiliary network is a single-layer network that weights each frequency domain channel, and the operation of measuring the importance of different frequency channels to face recognition can be divided into two parts: a face picture graying, frequency-domain conversion and order-of-magnitude unification operation BDCT+(·), and the weight training of the auxiliary network. The process can be formally expressed as:
f0 = BDCT+(x), fα = α * f0
where x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, f0 is the face feature composed of frequency channels, α is the weight of the auxiliary network, fα = α * f0 is the calculation performed by the auxiliary network, fα is the auxiliary feature containing the weight values to be trained, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, Margin_Loss is a general accuracy-measuring loss function in the face recognition field, R_normal(·) is a common face recognition network, and the auxiliary network weight α obtained by the final training represents the importance of each frequency domain channel.
3. The gradient descent resistant privacy preserving face recognition method of claim 2, wherein: in step 2.2), the visual information deleting part P(·) includes two parts: graying the face picture and converting it into the frequency domain space, operation BDCT(·), and deleting unimportant frequency domain channels, operation Del(·). The process can be formally expressed as:
f1 = P(x) = Del(BDCT(x)),
f0 = BDCT(x), f1 = Del(f0)
where f0 represents the face features in the frequency domain space. The operation Del(·) of deleting unimportant frequency domain channels sorts the channels by their importance to face recognition and deletes the channels that are unimportant to face recognition while guaranteeing the face recognition accuracy, so as to obtain the primary anti-gradient-descent feature f1.
4. A gradient descent resistant privacy preserving face recognition method as claimed in claim 1 or 2 or 3, wherein: in step 2.3), the random confusion part E(·) includes a normalization operation Self_BN(·), a candidate feature set generation operation Generate(·), and a random selection operation Random_Pick(·), which can be formally expressed as:
f2 = E(f1) = Random_Pick(Generate(Self_BN(f1)))
f1′ = Self_BN(f1),
S = Generate(f1′),
f2 = Random_Pick(S)
Self_BN(·) calculates an individual variance and mean for each frequency domain channel, and separately normalizes each frequency domain channel to obtain the normalized frequency domain feature f1′:
where f1k represents the kth of the c frequency domain channels of the primary anti-gradient-descent feature f1, Mean is the mean of the elements, Var is the variance of the elements after Bayesian correction, and δ is an offset term that prevents the denominator from being zero;
Generate(·) is a candidate feature set generation method; the generation method is not unique, but the features in the generated set must be different from each other and their values must be staggered, so as to guarantee a one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S:
ε1 ← J_{m×n×c}, all elements in J are (-1)^u
Mask: ε ← ε1 ⊙ ε2, all elements in ε are (-1)^u × b^v
S = Generate(f1′) = S(f1′, b) = f1′ ⊙ ε = f1′ × (-1)^u × b^v
where ε is the mask corresponding to the feature space in which the candidate feature set lies, ε1 and ε2 are intermediate variables used to generate ε, u and v are discrete random variables whose different values yield different anti-gradient-descent features, b is a constant obtained from the server when the client is initialized, and S is the output candidate feature set;
Random_Pick(·) is a random selection operation, i.e., the values of u and v in the candidate feature set S are determined randomly, and the output is the anti-gradient-descent feature f2.
5. The gradient descent resistant privacy preserving face recognition method of claim 4, wherein: in step 2.4), the identity information recovery part D(·) first determines the candidate feature set S to which the anti-gradient-descent feature f2 belongs, and uses the one-to-one correspondence between the normalized frequency domain feature f1′ and the candidate feature set S to approximately recover the normalized recovered feature f3. The following algorithm is executed for each channel of the anti-gradient-descent feature f2, where the processing of the ith channel can be formally expressed as:
When max(|f2i|) > 0:
r = round_down(-log_b max(|f2i|)),
di = f2i × b^r + bias
When max(|f2i|) = 0:
di = O
The finally output recovered feature f3 can be expressed as:
f3 = D(f2) = {d1, d2, ..., dc}
where f2i represents the ith channel of the anti-gradient-descent feature f2, |·| represents the absolute value, max(·) represents the maximum value, round_down(·) represents rounding down, bias is a constant bias term, di represents the ith channel of the recovered feature f3, O represents a matrix whose elements are all 0, and di = O is equivalent to all elements of di being assigned 0.
6. The gradient descent resistant privacy preserving face recognition method of claim 5, wherein: in step 2.5), the face recognition part R(·) is divided into a feature processing layer R_P(·) and a classification layer R_FC(·). This process is formally expressed as:
id = R(f3) = R_FC(R_P(f3)),
fid = R_P(f3),
id = R_FC(fid)
where the feature processing layer R_P(·) further processes the recovered feature f3 into the 512-dimensional face identity feature fid, and the classification layer R_FC(·) further outputs the face identity prediction value id.
7. The gradient descent resistant privacy preserving face recognition method of claim 1 or 2 or 3 or 5 or 6, wherein: in step 2.6), the training of the face recognition part R(·), in combination with the visual information deleting part P(·), the random confusion part E(·) and the identity information recovery part D(·), using the face image data set and the corresponding identity labels is formally expressed as:
where P(·) is the visual information deleting part, E(·) is the random confusion part, D(·) is the identity information recovery part, R(·) is the face recognition part, x is the original face picture in a single RGB space, id is the identity label corresponding to the face picture, X is the face picture data set, ID is the corresponding identity label set, N is the number of samples in the face data set, and Margin_Loss is a general accuracy-measuring loss function in the face recognition field.
8. The gradient descent resistant privacy preserving face recognition method of claim 7, wherein: in step 4.3), the face identity feature fid of the face image to be verified and the face identity feature of the reference identity are output and compared, and the face recognition result is obtained as follows:
COS(·) is used to calculate the cosine similarity of the two features, and similarity is the specific value of that cosine similarity; when the similarity is greater than a dynamically calculated threshold, the two pictures are judged to be the same person, otherwise they are judged to be different persons.
CN202311655949.0A 2023-12-05 Privacy protection face recognition method capable of resisting gradient descent Active CN117763523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311655949.0A CN117763523B (en) 2023-12-05 Privacy protection face recognition method capable of resisting gradient descent

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311655949.0A CN117763523B (en) 2023-12-05 Privacy protection face recognition method capable of resisting gradient descent

Publications (2)

Publication Number Publication Date
CN117763523A true CN117763523A (en) 2024-03-26
CN117763523B CN117763523B (en) 2024-07-02



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210034729A1 (en) * 2018-04-12 2021-02-04 Georgia Tech Research Corporation Privacy preserving face-based authentication
CN112052761A (en) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Method and device for generating confrontation face image
CN112949535A (en) * 2021-03-15 2021-06-11 南京航空航天大学 Face data identity de-identification method based on generative confrontation network
CN113283377A (en) * 2021-06-10 2021-08-20 重庆师范大学 Face privacy protection method, system, medium and electronic terminal
CN114093001A (en) * 2021-11-16 2022-02-25 中国电子科技集团公司第三十研究所 Face recognition method for protecting privacy security
CN116071793A (en) * 2022-11-29 2023-05-05 中国电子科技集团公司第十五研究所 Identity privacy protection method and device for face image
CN116311439A (en) * 2023-03-03 2023-06-23 杭州师范大学 Face verification privacy protection method and device
CN116778544A (en) * 2023-03-07 2023-09-19 浙江大学 Face recognition privacy protection-oriented antagonism feature generation method
CN116933322A (en) * 2023-08-08 2023-10-24 陕西科技大学 Face image privacy protection method based on self-adaptive differential privacy

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ZHIBO WANG: "Privacy-preserving Adversarial Facial Features", 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 22 August 2023 (2023-08-22) *
杨乾: "A New Privacy Protection Scheme for Face Recognition", Information Science and Technology Series, 15 January 2021 (2021-01-15) *
沈炜: "Research on Face Recognition Based on Privacy Protection", Information Science and Technology Series, 15 January 2020 (2020-01-15) *
陈冬梅: "Research on the Application of Privacy-Preserving Face Recognition Technology", Computer Knowledge and Technology, no. 21, 25 July 2020 (2020-07-25) *
马玉琨: "Research on Key Technologies of Face-Based Secure Identity Authentication", Information Science and Technology Series, 15 May 2019 (2019-05-15) *

Similar Documents

Publication Publication Date Title
Manisha et al. Cancelable biometrics: a comprehensive survey
Tuyls et al. Practical biometric authentication with template protection
US8542886B2 (en) System for secure face identification (SCIFI) and methods useful in conjunction therewith
Kekre et al. Iris recognition using texture features extracted from walshlet pyramid
US9710631B2 (en) Method for enrolling data in a base to protect said data
Chang et al. Robust extraction of secret bits from minutiae
Wong et al. Kernel PCA enabled bit-string representation for minutiae-based cancellable fingerprint template
Jindal et al. Securing face templates using deep convolutional neural network and random projection
Monga et al. Robust image hashing via non-negative matrix factorizations
CN114663986B (en) Living body detection method and system based on double decoupling generation and semi-supervised learning
Wu et al. Fingerprint bio‐key generation based on a deep neural network
CN110598522A (en) Identity comparison method based on face and palm print palm vein recognition
US11328095B2 (en) Peceptual video fingerprinting
CN117763523B (en) Privacy protection face recognition method capable of resisting gradient descent
CN111130794B (en) Identity verification method based on iris and private key certificate chain connection storage structure
CN117763523A (en) Privacy protection face recognition method capable of resisting gradient descent
CN115438753B (en) Method for measuring security of federal learning protocol data based on generation
Otroshi Shahreza et al. Benchmarking of cancelable biometrics for deep templates
Cho et al. Block-based image steganalysis for a multi-classifier
Zhou et al. Feature correlation attack on biometric privacy protection schemes
Bringer et al. Adding localization information in a fingerprint binary feature vector representation
Kevenaar Protection of biometric information
US20080106373A1 (en) Compensating For Acquisition Noise In Helper Data Systems
Grailu Improving the fingerprint verification performance of set partitioning coders at low bit rates
Ohana et al. HoneyFaces: Increasing the security and privacy of authentication using synthetic facial images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant