CN115795406A - Reversible face anonymization processing system - Google Patents

Reversible face anonymization processing system

Info

Publication number
CN115795406A
CN115795406A
Authority
CN
China
Prior art keywords
image
feature
identity
module
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211632388.8A
Other languages
Chinese (zh)
Inventor
李红波 (Li Hongbo)
刘林国 (Liu Linguo)
袁霖 (Yuan Lin)
高新波 (Gao Xinbo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Posts and Telecommunications
Original Assignee
Chongqing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Posts and Telecommunications
Priority to CN202211632388.8A
Publication of CN115795406A

Landscapes

  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of image processing technology and particularly relates to a reversible face anonymization processing system comprising a feature encoder, a feature transformation module, a feature mapping module, an image generation module, and an image restoration module. The feature encoder generates mutually decoupled identity features and attribute features from an original image; the feature transformation module applies a key-controlled transformation to the identity features to obtain anonymous identity features; the feature mapping module re-splices the anonymous identity features with the attribute features and maps the recombined features, via a multilayer perceptron, into a variable that follows the StyleGAN latent-space distribution; the image generation module generates an image from the output of the feature mapping module; and the image restoration module performs fine-tuning restoration on the output of the image generation module. The anonymized image obtained by the invention differs substantially from the original image: human eyes cannot recognize its original identity, and the anonymous face keeps a certain distance from the original face in the identity feature space, thereby ensuring privacy.

Description

Reversible face anonymization processing system
Technical Field
The invention belongs to the field of image processing technology and particularly relates to a reversible face anonymization processing system.
Background
As the visual medium that most directly exposes personal identity information, face images raise privacy concerns that have attracted wide attention. A recent survey by CyLab, the security and privacy laboratory at Carnegie Mellon University, found that about 89% of respondents held a negative or conservative attitude toward face recognition technology and only 11% expressed support, with most objections stemming from the potential personal privacy risks of the technology.
Existing face anonymization techniques fall into two categories: 1. traditional face anonymization methods, which achieve anonymization through image blurring filters (e.g., Gaussian and median filtering), mosaic pixelation, frequency-domain transformations, and similar techniques; 2. adversarial-example methods, which use a neural network to add adversarial perturbations that are hard for humans to perceive to a face image, so that a recognition model produces an erroneous output with high confidence, thereby achieving anonymization.
Although much progress has been made in protecting the visual privacy of face images, several shortcomings remain. The visual quality of anonymized faces needs improvement, and existing techniques focus on the anonymization objective while neglecting the usability of the anonymized image, such as identity matching. More importantly, the prior art adopts relatively simple transformation mechanisms and lacks consideration of security and of image recovery, i.e., reversibility: it does not address transforming an anonymized image back into the original, and therefore cannot recover the original image in scenarios that require reversibility, such as a request from a supervisory authority.
Disclosure of Invention
To effectively protect facial identity privacy while preserving a machine's ability to match identities in the privacy-protected image, the invention provides a reversible face anonymization processing system comprising an anonymous face image generation network composed of a feature encoder, a feature transformation module, a feature mapping module, an image generation module, and an image restoration module, wherein:
the feature encoder is used for generating mutually decoupled identity features and attribute features from the original image;
the feature transformation module is used for applying a key-controlled transformation to the identity features of the image to obtain anonymous identity features;
the feature mapping module is used for re-splicing the anonymous identity features and the attribute features and mapping the recombined features, through a multilayer perceptron, into a variable that follows the distribution of the StyleGAN latent space $\mathcal{W}$;
the image generation module is used for inputting the variable that follows the StyleGAN latent space $\mathcal{W}$ distribution into a pre-trained StyleGAN to generate an anonymized image;
and the image restoration module is used for fusing the face background information of the original image with the face image generated by StyleGAN using the joint fine-tuning network JR-Net from cross-domain face swapping, and injecting the decoupled attribute features as style information into the image generation process through AdaIN residual modules based on an encoder-decoder structure, to obtain the restored image. A minimal sketch of the overall data flow is given below.
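For illustration only, the data flow through the five modules can be summarized as follows; the module objects and their interfaces (encoder_id, encoder_attr, transform, mapper, stylegan, jr_net, inverse_transform) are hypothetical stand-ins for the components described above, not the patented implementation.

```python
import torch

def anonymize(x_src, key, m):
    """One forward pass through the anonymization pipeline (hypothetical API)."""
    z_id   = m.encoder_id(x_src)                  # decoupled identity features
    z_attr = m.encoder_attr(x_src)                # decoupled attribute features
    z_anon = m.transform(z_id, key)               # key-controlled: z_id + P(v, k)
    w = m.mapper(torch.cat([z_anon, z_attr], 1))  # map into StyleGAN W space
    x_coarse = m.stylegan(w)                      # pre-trained StyleGAN synthesis
    return m.jr_net(x_coarse, x_src, z_attr)      # JR-Net: fuse background, inject style

def recover_identity(x_anon, key, m):
    """For verification: extract features from the anonymized image and invert
    the key-controlled perturbation, i.e. z_id = z_anon - P(v, k)."""
    return m.inverse_transform(m.encoder_id(x_anon), key)
```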
Further, training of the anonymous face image generation network is divided into two stages. The first stage learns the feature encoder and the feature mapping module. The feature encoder comprises an identity feature encoder and an attribute feature encoder, where the identity feature encoder is a pre-trained encoder. An original image is input into the identity feature encoder to extract its identity features, and a target image is input into the attribute feature encoder to extract its attribute features; the original image and the target image are two identical or different images. The identity features of the original image and the attribute features of the target image are spliced together and then mapped by the feature mapping module, and the attribute feature encoder and the feature mapping module are optimized with the objective that the spliced features are recombined by the feature mapping module into a variable that follows the StyleGAN latent-space distribution. After the first-stage training is complete, and based on the constructed feature encoder and feature mapping module, the output of the feature mapping module is fed into the pre-trained StyleGAN to obtain an anonymous image; the anonymous image is input into the image restoration module to obtain a restored image, and in this process the image restoration module is optimized with the objective of generating a high-quality reconstructed image.
Further, a discriminator is introduced during the first training stage to judge whether the output of the feature mapping module follows the StyleGAN latent-space distribution, and the training process includes:
training the feature encoder with the non-adversarial loss;
then computing the discriminator loss with a non-saturating loss function and optimizing the discriminator;
computing the loss function of the feature mapping module and using it to optimize the feature mapping module;
repeating the above training process until a maximum number of training iterations is reached or the loss functions converge, as sketched in the loop below.
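As a sketch of this alternating schedule, a first-stage training step might be organized as follows; the data loader, optimizers, and loss functions (loader, opt_enc, opt_disc, opt_map, loss_non_adv, loss_disc, loss_map, max_steps) are assumed helpers, not the patent's code.

```python
import torch

# Hypothetical skeleton of the first-stage schedule.
for step, (img_src, img_tgt) in enumerate(loader):
    opt_enc.zero_grad()                        # 1) non-adversarial losses update
    loss_non_adv(img_src, img_tgt).backward()  #    the attribute encoder (and mapper)
    opt_enc.step()

    opt_disc.zero_grad()                       # 2) non-saturating loss (with the
    loss_disc(img_src, img_tgt).backward()     #    gradient penalty) updates D
    opt_disc.step()

    opt_map.zero_grad()                        # 3) adversarial loss updates the
    loss_map(img_src, img_tgt).backward()      #    feature mapping module M
    opt_map.step()

    if step >= max_steps:                      # or stop once the losses converge
        break
```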
Further, the non-adversarial loss used to train the feature encoder is expressed as:

$$\mathcal{L}_{ID} = \left\| E_{ID}(I_S) - E_{ID}(\hat{I}) \right\|_2$$

$$\mathcal{L}_{LDM} = \left\| LDM(I_T) - LDM(\hat{I}) \right\|_2$$

$$\mathcal{L}_{rec} = \ell\bigl(\hat{I}, I_T\bigr)$$

$$\mathcal{L}_{non\text{-}adv} = \lambda_1 \mathcal{L}_{ID} + \lambda_2 \mathcal{L}_{LDM} + \lambda_3 \mathcal{L}_{rec}$$

where $E_{ID}(\cdot)$ denotes the image identity features extracted by the feature extraction model, $I_S$ the original image, $I_T$ the target image, and $\hat{I}$ the image generated from the output of the feature mapping module; $LDM(\cdot)$ denotes extracting face key points with a pre-trained facial landmark extractor; $\ell(\cdot,\cdot)$ is the weighted sum of the deep perceptual loss LPIPS and the L1 loss between two images; $\|\cdot\|_2$ denotes the L2 norm; and $\lambda_1$, $\lambda_2$, $\lambda_3$ are the weights of $\mathcal{L}_{ID}$, $\mathcal{L}_{LDM}$, and $\mathcal{L}_{rec}$, respectively.
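Under the assumption that the losses take the forms reconstructed above, a minimal computation sketch might look as follows; E_id, ldm, and lpips_fn stand for the pre-trained, frozen identity extractor, landmark extractor, and LPIPS network, and the lambda weights are placeholders rather than values from the patent.

```python
import torch

def non_adversarial_loss(I_s, I_t, I_hat, E_id, ldm, lpips_fn,
                         lam=(1.0, 1.0, 1.0), lam_l1=1.0):
    # Identity loss: L2 distance between identity embeddings of I_s and I_hat.
    loss_id = (E_id(I_s) - E_id(I_hat)).norm(p=2, dim=1).mean()
    # Pose loss: L2 distance between landmark sets of I_t and I_hat.
    loss_ldm = (ldm(I_t) - ldm(I_hat)).flatten(1).norm(p=2, dim=1).mean()
    # Quality loss: LPIPS + weighted L1 between I_hat and I_t.
    loss_rec = lpips_fn(I_hat, I_t).mean() + lam_l1 * (I_hat - I_t).abs().mean()
    return lam[0] * loss_id + lam[1] * loss_ldm + lam[2] * loss_rec
```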
Further, computing the discriminator loss with the non-saturating loss function is expressed as:

$$\mathcal{L}_{D} = -\mathbb{E}_{w}\left[\log D_w(w)\right] - \mathbb{E}_{z_{ID}, z_{Attr}}\left[\log\left(1 - D_w\bigl(M(z_{ID}, z_{Attr})\bigr)\right)\right] + \frac{\gamma}{2}\,\mathbb{E}_{w}\left[\left\|\nabla_w D_w(w)\right\|_2^2\right]$$

where $\mathbb{E}[\cdot]$ denotes the expected value; $D_w$ is the discriminator that judges whether its input follows the StyleGAN latent space $\mathcal{W}$ distribution, and $w$ denotes the output of the feature mapping module; $\mathbb{E}_{z_{ID}, z_{Attr}}$ denotes the mathematical expectation over $z_{ID}$ and $z_{Attr}$; $M(\cdot)$ denotes the feature mapping module; $z_{ID}$ denotes the identity features; $z_{Attr}$ denotes the attribute features; $\gamma$ denotes a weight; $\nabla_w$ denotes the gradient with respect to $w$; and $\|\cdot\|_2$ denotes the L2 norm.
Further, the loss function of the feature mapping module is expressed as:

$$\mathcal{L}_{M} = -\mathbb{E}_{z_{ID}, z_{Attr}}\left[\log D_w\bigl(M(z_{ID}, z_{Attr})\bigr)\right]$$

where $\mathbb{E}[\cdot]$ denotes the expected value; $D_w$ is the discriminator that judges whether its input follows the StyleGAN latent space $\mathcal{W}$ distribution; $M(\cdot)$ denotes the feature mapping module; $z_{ID}$ denotes the identity features; and $z_{Attr}$ denotes the attribute features.
Further, the loss functions for training the image restoration module include:

$$\mathcal{L}_{ID}^{R} = \left\| E_{ID}(I_S) - E_{ID}(\hat{I}_R) \right\|_2$$

$$\mathcal{L}_{rec}^{R} = \begin{cases} \ell\bigl(\hat{I}_R, I_T\bigr), & ID(I_S) = ID(I_T) \\ \ell\bigl(BG(\hat{I}_R), BG(I_T)\bigr) + \lambda_3\,\mathcal{L}_{perc}\bigl(\hat{I}_R, I_T\bigr) + \lambda_4\,\mathcal{L}_{CX}\bigl(\hat{I}_R, I_T\bigr), & \text{otherwise} \end{cases}$$

where $\mathcal{L}_{ID}^{R}$ and $\mathcal{L}_{rec}^{R}$ are the loss functions of the image restoration module; $I_S$ denotes the original image; $E_{ID}(\cdot)$ denotes the identity feature extractor; $\hat{I}_R$ denotes the image output by the image restoration module; $\mathcal{L}_{perc}$ denotes the image perceptual loss of the GAN network and $ID(\cdot)$ denotes identity information; $BG(\cdot)$ denotes a pre-trained face background segmentation model; $\mathcal{L}_{CX}$ denotes the contextual loss measuring the style and texture similarity between the image $\hat{I}_R$ and the target image $I_T$; $\|\cdot\|_2$ denotes the L2 norm; and $\lambda_3$, $\lambda_4$ are the weights of $\mathcal{L}_{perc}$ and $\mathcal{L}_{CX}$, respectively.
Furthermore, the feature transformation module adds a key-controlled perturbation vector to the input original feature vector, realizing inter-class migration of the identity features in feature space and yielding an anonymous identity feature vector; this process is called the forward transformation. When identity information needs to be verified, the anonymous identity features are extracted from the anonymous image and the same perturbation vector used in the forward transformation is subtracted from them to recover the original identity feature vector; this process is called the inverse transformation. The forward and inverse transformations can be expressed as:

$$\hat{z}_{ID} = z_{ID} + P(v, k)$$

$$z_{ID} = \hat{z}_{ID} - P(v, k)$$

where $z_{ID}$ denotes the original identity feature vector, $\hat{z}_{ID}$ denotes the anonymous identity feature vector, and $P(v, k)$ denotes the perturbation vector generated with the key $k$ and a random noise vector $v$ following a normal distribution as control conditions.
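A minimal sketch of the forward and inverse transformations is shown below. Deriving the noise vector v from a seed shared between the two directions is an illustrative assumption, as is the NOISE_DIM size: the patent only requires that the identical perturbation vector be used for both transformations.

```python
import torch

NOISE_DIM = 512   # assumed dimensionality of the noise vector v

def perturbation(P, key_vec, seed):
    """Regenerate the same perturbation P(v, k) for both directions."""
    g = torch.Generator().manual_seed(seed)
    v = torch.randn(1, NOISE_DIM, generator=g)      # v ~ N(0, I)
    return P(v, key_vec)

def forward_transform(z_id, P, key_vec, seed):      # anonymize: z_id + P(v, k)
    return z_id + perturbation(P, key_vec, seed)

def inverse_transform(z_anon, P, key_vec, seed):    # recover: z_anon - P(v, k)
    return z_anon - perturbation(P, key_vec, seed)
```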
Further, the process of constructing the data set of perturbation vectors and their corresponding key vectors includes:
randomly selecting two groups of face images with non-overlapping identities from a face image data set containing identity labels, while ensuring that the L2 distance between the identity features of any image pair drawn across the two groups is greater than a threshold t;
the two groups of images contain $2^M$ and $2^N$ different identities respectively, with $K = M + N$, so $2^K$ identity pairs can be formed between the two groups, and each possible value of the key is assigned to one identity pair;
sampling L image pairs from each identity pair and computing the difference between their identity feature vectors as the perturbation vector, so that each key corresponds to L perturbation vectors;
obtaining a data set of $L \times 2^K$ perturbation vectors and their corresponding key vectors;
where K denotes the length of the key k. A construction sketch is given below.
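In the sketch below, group_a and group_b are assumed to map identity labels to lists of face images (with 2^M and 2^N non-overlapping identities, K = M + N), E_id maps an image to its identity feature vector, and encoding each key as the integer index of its identity pair is an illustrative choice, not the patent's encoding.

```python
import itertools
import random

def build_perturbation_dataset(group_a, group_b, E_id, L=8):
    data = []
    pairs = itertools.product(sorted(group_a), sorted(group_b))  # 2^K identity pairs
    for key, (id_a, id_b) in enumerate(pairs):   # one key value per identity pair
        for _ in range(L):                       # L sampled image pairs per key
            img_a = random.choice(group_a[id_a])
            img_b = random.choice(group_b[id_b])
            delta = E_id(img_b) - E_id(img_a)    # feature difference = perturbation
            data.append((delta, key))
    return data                                  # L * 2^K (perturbation, key) pairs
```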
Further, the perturbation vector is obtained with a conditional generative adversarial network (CGAN) comprising a generator and a discriminator. The generator generates the perturbation vector $P(v, k)$ from a random noise vector $v$ and the corresponding key $k$, and the discriminator judges whether a generated perturbation vector $\hat{p}$ follows the conditional distribution of $k$. Optimizing the generator according to the CGAN principle, i.e., computing the optimal perturbation vector $P(v, k)$, is expressed as:

$$\min_P \max_D \; \mathbb{E}_{\hat{p}, k}\left[\log D(\hat{p}, k)\right] + \mathbb{E}_{v, k}\left[\log\left(1 - D\bigl(P(v, k), k\bigr)\right)\right]$$

where $D(P(v,k), k)$ denotes the output of the discriminator when fed the output of $P$; $\mathbb{E}[\cdot]$ denotes the expectation; $P$ denotes the generator, which splices the random noise vector $v$ and the key $k$ into an implicit representation and generates the perturbation vector from it; and $D$ denotes the discriminator, which judges whether the generated perturbation vector follows the conditional distribution of the key $k$.
The invention has the following beneficial effects:
1) The anonymized image produced by the invention offers strong privacy. Experiments show that the anonymized image differs substantially from the original face image to human observers, human eyes cannot identify its original identity, and the anonymous face keeps a certain distance from the original face in the identity feature space, thereby ensuring privacy.
2) The invention offers high security. Experiments show that the anonymization process is controlled by the key and carries a degree of randomness, so the same image yields completely different faces under different keys, and a machine can accurately identify the identity of the original image and complete the matching process only with the correct key.
3) The invention offers strong reversibility: identity features can be extracted from the anonymized face image, and after inverse transformation with the key, the recovered features can be matched against the original features, i.e., the feature transformation is reversible.
Drawings
FIG. 1 is a flow chart of a reversible face anonymization processing method of the present invention;
FIG. 2 is a flow chart between network modules in the present invention;
FIG. 3 is a preferred embodiment of the reversible face anonymization processing method according to the present invention;
FIG. 4 is a schematic diagram of an identity transformation model based on vector perturbation in the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention provides a reversible face anonymization processing system comprising an anonymous face image generation network composed of a feature encoder, a feature transformation module, a feature mapping module, an image generation module, and an image restoration module, wherein:
the feature encoder is used for generating mutually decoupled identity features and attribute features from the original image;
the feature mapping module is used for re-splicing the identity features and the attribute features and mapping the recombined features, through a multilayer perceptron, into a variable that follows the distribution of the StyleGAN latent space $\mathcal{W}$;
the image generation module is used for generating a face image with a pre-trained StyleGAN;
and the image restoration module is used for fusing the face background information of the original image with the StyleGAN-generated face image using the joint fine-tuning network JR-Net from cross-domain face swapping, and injecting the decoupled attribute features as style information into the image generation process through AdaIN residual modules based on an encoder-decoder structure.
Identity features and attribute features are extracted from the input face image through feature decoupling; key-controlled vector perturbation is then applied to transform the extracted identity (privacy) features; the transformed privacy features and the non-privacy features are jointly mapped; and image reconstruction and restoration based on this feature mapping yields the anonymized image. With the corresponding key, the anonymized image can be matched to the original image after feature extraction and inverse feature transformation.
In this embodiment, as shown in fig. 2, the adopted GAN-inversion-based face feature decoupling and image reconstruction model comprises a feature encoding module, an image generation module G, a feature mapping module M, and an image restoration module G_R.
Feature encoding module: composed of two encoders, an encoder E_ID for identity feature extraction and an encoder E_Attr for attribute feature extraction, which parse the input image and generate the mutually decoupled privacy features z_ID (i.e., identity features) and non-privacy features z_Attr (i.e., attribute features), where E_ID adopts a pre-trained face recognition feature extractor as its model and an InceptionV3 backbone network serves as the structure of E_Attr.
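As a sketch, an attribute encoder on an InceptionV3 backbone could be set up as follows with torchvision; the 512-dimensional feature size and the replaced classifier head are assumptions, and the pre-trained identity extractor E_ID (e.g., an ArcFace-style face recognition model) is not reproduced here.

```python
import torch.nn as nn
from torchvision import models

class AttrEncoder(nn.Module):
    """Attribute encoder E_Attr on an InceptionV3 backbone (illustrative)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        net = models.inception_v3(weights=None, aux_logits=False)
        net.fc = nn.Linear(net.fc.in_features, feat_dim)  # replace classifier head
        self.net = net
    def forward(self, x):        # x: (B, 3, 299, 299)
        return self.net(x)       # (B, feat_dim) attribute features z_Attr
```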
As shown in fig. 4, the feature transformation module T mainly comprises a generator for producing a perturbation vector; the identity features are added to the perturbation vector to output the encrypted identity features.
Feature mapping module M: responsible for splicing and recombining the decoupled privacy features (i.e., the encrypted identity features) and non-privacy features (i.e., the attribute features), and mapping the recombined features, through a multilayer perceptron (MLP), into a variable $w$ that follows the distribution of the StyleGAN latent space $\mathcal{W}$, i.e.

$$w = M(\hat{z}_{ID}, z_{Attr})$$
Image generation module G: adopts a pre-trained StyleGAN as the face image generation model. StyleGAN maps a latent variable $z$ from the normally distributed latent space $\mathcal{Z}$, through a nonlinear mapping network, to a style variable $w$ in another latent space $\mathcal{W}$, and then generates a face image with $w$ as input, where the latent variable $z$ follows a Gaussian distribution with mean $\mu$ and variance $\sigma^2$.
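Because the feature mapping module M produces w directly, only StyleGAN's pre-trained synthesis network is needed at generation time, bypassing StyleGAN's own z-to-w mapping. The sketch below assumes the layout of the public StyleGAN2-ADA PyTorch code, where the generator exposes synthesis and num_ws attributes; this is an assumption about tooling, not the patent text.

```python
import torch

@torch.no_grad()
def generate_from_w(G, w):                       # w: (batch, w_dim) from module M
    ws = w.unsqueeze(1).repeat(1, G.num_ws, 1)   # broadcast w to all style layers
    return G.synthesis(ws)                       # initial reconstructed image
```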
Image restoration module G_R: responsible for repairing the initial reconstructed image output by StyleGAN to generate a higher-quality reconstructed image; it fuses the face background information of the original image with the StyleGAN output image using the joint fine-tuning network JR-Net from cross-domain face swapping, and injects the decoupled features as style information into the image generation process through AdaIN residual modules based on an encoder-decoder structure.
In one implementation, if identity verification is required, identity features are extracted from the anonymized image and passed through the feature inverse transformation module T⁻¹. As shown in fig. 4, and analogous to the forward transformation, the inverse transformation module mainly comprises a generator for producing the perturbation vector; this perturbation vector is subtracted from the identity features of the anonymized image to output the decrypted identity features, which are used for feature matching and identity recognition.
As a preferred embodiment, as shown in fig. 3, some modules of the invention are pre-trained, i.e., this embodiment does not train them; and to achieve effective identity decoupling and image reconstruction, this embodiment adopts a two-stage training strategy that optimizes the decoupling and reconstruction tasks separately.
First-stage training: the attribute encoder E_Attr and the feature mapping module M are learned, and the pre-trained StyleGAN is used to generate a series of face image data sets with w labels. Each training step feeds two different or identical images (called the original image I_S and the target image I_T respectively), and the generated image should resemble I_S in the identity domain and I_T in the remaining attribute domains, thereby realizing the decoupling of identity features. Training at this stage therefore adopts a reconstructed-image identity loss, which extracts identity information from the original image I_S and from the image $\hat{I}$ output by the image generation module and computes the L2 distance between the two; the reconstructed-image identity loss is expressed as:

$$\mathcal{L}_{ID} = \left\| E_{ID}(I_S) - E_{ID}(\hat{I}) \right\|_2$$
The reconstructed-image pose loss extracts face key points from the target image I_T and from the image $\hat{I}$ output by the image generation module and computes the L2 distance between the two sets of key points; it is expressed as:

$$\mathcal{L}_{LDM} = \left\| LDM(I_T) - LDM(\hat{I}) \right\|_2$$

where LDM is a pre-trained facial landmark extractor.
The reconstructed-image quality loss depends on the choice of original and target images, which may be the same image or different images. When the same image is chosen, the loss is the weighted sum of the deep perceptual loss LPIPS and the L1 loss between the image $\hat{I}$ output by the image generation module and the target image I_T, specifically:

$$\mathcal{L}_{rec} = \ell\bigl(\hat{I}, I_T\bigr)$$

where $\ell(\cdot,\cdot)$ is the weighted sum of the LPIPS loss and the L1 loss; for images $I_1$ and $I_2$ it is expressed as:

$$\ell(I_1, I_2) = \mathrm{LPIPS}(I_1, I_2) + \lambda\,\left\| I_1 - I_2 \right\|_1$$
the combination of the above three losses is referred to as the non-antagonistic loss:
Figure BDA0004006310160000087
aiming at the feature mapping model, a discriminator network is introduced to discriminate whether the output conforms to the hidden space of StyleGAN
Figure BDA0004006310160000091
In which the discriminator
Figure BDA0004006310160000092
The loss uses the unsaturated loss function:
Figure BDA0004006310160000093
and the loss function of the mapping model M is:
Figure BDA0004006310160000094
finally, respectively optimizing by adopting an alternative training mode
Figure BDA0004006310160000095
And
Figure BDA0004006310160000096
two-stage training:
and training the image restoration module based on the constructed feature encoder and the mapping model. In the stage, a real scene face data set containing an identity label is adopted, an image pair is also adopted as input, and the final reconstructed image is subjected to common identity loss, namely an original image I is extracted S And the image restoration module outputs the image
Figure BDA0004006310160000097
And calculating an L2 distance between the identity information, specifically expressed as:
Figure BDA0004006310160000098
the reconstructed image loss is calculated in two cases, in the invention, the original image and the target image can adopt different or same images, when the same image is adopted, the identities of the images are definitely the same, when different images are adopted, the identities can be the same or different, and the reconstructed image loss is respectively calculated according to whether the identities of the two images are the same, and is expressed as:
Figure BDA0004006310160000099
wherein BG (-) represents the pre-trained face background segmentation model, and ID (I) represents the identity of image I.
$\mathcal{L}_{perc}$ denotes the perceptual loss, whose calculation is given in the prior art "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", and $\mathcal{L}_{CX}$ denotes the contextual loss, i.e., the texture loss, which measures the similarity of overall style and texture between unaligned images in image style transfer and whose calculation is given in the prior art "The Contextual Loss for Image Transformation with Non-Aligned Data"; the calculation of these two losses is not repeated in the present invention.
In this embodiment, to construct a feature mapping relationship with the required properties, the invention models the feature transformation problem as a feature vector scrambling process: a key-controlled perturbation vector is added to the original feature vector to realize inter-class migration of the identity features in feature space, and the inverse feature transformation subtracts the same perturbation vector from the anonymized features. The transformation and inverse transformation can be expressed respectively as:

$$\hat{z}_{ID} = z_{ID} + P(v, k)$$

$$z_{ID} = \hat{z}_{ID} - P(v, k)$$

where $z_{ID}$ denotes the original identity feature vector, $\hat{z}_{ID}$ denotes the anonymous identity feature vector, and $P(v, k)$ denotes the perturbation vector generation process with the key $k$ as control condition and, as its other input, a random noise vector $v$ that follows a normal distribution. $P(v, k)$ is constructed with a conditional generative adversarial network (CGAN); a schematic diagram of the vector-perturbation-based identity transformation model is shown in fig. 4.
As a preferred implementation, this embodiment first constructs a data set of qualifying perturbation vectors $\hat{p}$ and their corresponding key vectors. Two groups of face images with non-overlapping identities are randomly selected from a face image data set containing identity labels, while ensuring that the L2 distance between the identity features of any image pair drawn across the two groups is greater than a threshold t. The two groups contain $2^M$ and $2^N$ different identities respectively (letting the key vector k have length $K = M + N$), so $2^{M+N}$ (i.e., $2^K$) identity pairs can be formed between the two groups, and each possible value of the key is assigned to one identity pair. L image pairs are sampled from each identity pair and the difference between their identity feature vectors is computed as the perturbation vector $\hat{p}$, so each key corresponds to L perturbation vectors, finally yielding a data set of $L \times 2^K$ perturbation vectors and their corresponding condition (key) vectors. P is designed as a multilayer perceptron structure, and another multilayer perceptron D is responsible for judging whether a generated perturbation vector $\hat{p}$ follows the conditional distribution of k. Following the CGAN principle, this can finally be cast as the min-max optimization problem:

$$\min_P \max_D \; \mathbb{E}_{\hat{p}, k}\left[\log D(\hat{p}, k)\right] + \mathbb{E}_{v, k}\left[\log\left(1 - D\bigl(P(v, k), k\bigr)\right)\right]$$
in summary, the embodiment of the invention verifies the feasibility of the scheme in the embodiment through training and experiments, the reversible human face visual anonymization processing method provided by the embodiment of the invention ensures that the generated image is anonymous to human eyes and recognizable to a machine, and the generated anonymized human face image can be subjected to feature matching with the original human face image after being subjected to feature inverse transformation to recognize the original identity.
The invention also provides a computer device comprising a processor and a memory, the processor being configured to run a computer program stored in the memory to implement the reversible face anonymization processing method described above.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A reversible face anonymization processing system, characterized by comprising an anonymous face image generation network composed of a feature encoder, a feature transformation module, a feature mapping module, an image generation module, and an image restoration module, wherein:
the feature encoder is used for generating mutually decoupled identity features and attribute features from the original image;
the feature transformation module is used for applying a key-controlled transformation to the identity features of the image to obtain anonymous identity features;
the feature mapping module is used for re-splicing the anonymous identity features and the attribute features and mapping the recombined features, through a multilayer perceptron, into a variable that follows the distribution of the StyleGAN latent space $\mathcal{W}$;
the image generation module is used for inputting the variable that follows the StyleGAN latent space $\mathcal{W}$ distribution into a pre-trained StyleGAN to generate an anonymized image;
and the image restoration module is used for fusing the face background information of the original image with the face image generated by StyleGAN using the joint fine-tuning network JR-Net from cross-domain face swapping, and injecting the decoupled attribute features as style information into the image generation process through AdaIN residual modules based on an encoder-decoder structure, to obtain the restored image.
2. The system of claim 1, wherein training of the anonymous face image generation network is divided into two stages; the first stage learns the feature encoder and the feature mapping module, the feature encoder comprising an identity feature encoder and an attribute feature encoder, wherein the identity feature encoder adopts a pre-trained encoder; an original image is input into the identity feature encoder to extract its identity features, a target image is input into the attribute feature encoder to extract its attribute features, the original image and the target image being two identical or different images; the identity features of the original image and the attribute features of the target image are spliced together and then mapped by the feature mapping module, and the attribute feature encoder and the feature mapping module are optimized with the objective that the spliced features are recombined by the feature mapping module into a variable that follows the StyleGAN latent-space distribution; after the first-stage training is complete, and based on the constructed feature encoder and feature mapping module, the features output by the feature mapping module serve as the input of the pre-trained StyleGAN, whose output yields an anonymous image; the anonymous image is input into the image restoration module to obtain a restored image, and in this process the image restoration module is optimized with the objective of generating a high-quality reconstructed image.
3. The system of claim 2, wherein a discriminator is introduced during the first training stage to judge whether the output of the feature mapping module follows the StyleGAN latent-space distribution, and the training process comprises:
training the feature encoder with the non-adversarial loss;
then computing the discriminator loss with a non-saturating loss function and optimizing the discriminator;
computing the loss function of the feature mapping module and using it to optimize the feature mapping module;
repeating the above training process until a maximum number of training iterations is reached or the loss functions converge.
4. The system of claim 3, wherein the non-adversarial loss used in training the feature encoder is expressed as:

$$\mathcal{L}_{ID} = \left\| E_{ID}(I_S) - E_{ID}(\hat{I}) \right\|_2$$

$$\mathcal{L}_{LDM} = \left\| LDM(I_T) - LDM(\hat{I}) \right\|_2$$

$$\mathcal{L}_{rec} = \ell\bigl(\hat{I}, I_T\bigr)$$

$$\mathcal{L}_{non\text{-}adv} = \lambda_1 \mathcal{L}_{ID} + \lambda_2 \mathcal{L}_{LDM} + \lambda_3 \mathcal{L}_{rec}$$

wherein $E_{ID}(\cdot)$ denotes the image identity features extracted by the feature extraction model, $I_S$ denotes the original image, $I_T$ denotes the target image, and $\hat{I}$ denotes the image generated from the output of the feature mapping module; $LDM(\cdot)$ denotes extracting face key points with a pre-trained facial landmark extractor; $\ell(\cdot,\cdot)$ denotes the weighted sum of the deep perceptual loss LPIPS and the L1 loss between two images; $\|\cdot\|_2$ denotes the L2 norm; and $\lambda_1$, $\lambda_2$, $\lambda_3$ are the weights of $\mathcal{L}_{ID}$, $\mathcal{L}_{LDM}$, and $\mathcal{L}_{rec}$, respectively.
5. The system of claim 3, wherein computing the discriminator loss with the non-saturating loss function comprises:

$$\mathcal{L}_{D} = -\mathbb{E}_{w}\left[\log D_w(w)\right] - \mathbb{E}_{z_{ID}, z_{Attr}}\left[\log\left(1 - D_w\bigl(M(z_{ID}, z_{Attr})\bigr)\right)\right] + \frac{\gamma}{2}\,\mathbb{E}_{w}\left[\left\|\nabla_w D_w(w)\right\|_2^2\right]$$

wherein $\mathbb{E}[\cdot]$ denotes the expected value; $D_w$ is the discriminator that judges whether its input follows the StyleGAN latent space $\mathcal{W}$ distribution, and $w$ denotes the output of the feature mapping module; $\mathbb{E}_{z_{ID}, z_{Attr}}$ denotes the mathematical expectation over $z_{ID}$ and $z_{Attr}$; $M(\cdot)$ denotes the feature mapping module; $z_{ID}$ denotes the identity features; $z_{Attr}$ denotes the attribute features; $\gamma$ denotes a weight; $\nabla_w$ denotes the gradient with respect to the output $w$ of the feature mapping module; and $\|\cdot\|_2$ denotes the L2 norm.
6. The reversible face anonymization processing system of claim 3, wherein the loss function of the feature mapping module is expressed as:

$$\mathcal{L}_{M} = -\mathbb{E}_{z_{ID}, z_{Attr}}\left[\log D_w\bigl(M(z_{ID}, z_{Attr})\bigr)\right]$$

wherein $\mathbb{E}[\cdot]$ denotes the expected value; $D_w$ is the discriminator that judges whether its input follows the StyleGAN latent space $\mathcal{W}$ distribution; $M(\cdot)$ denotes the feature mapping module; $z_{ID}$ denotes the identity features; and $z_{Attr}$ denotes the attribute features.
7. The system of claim 2, wherein the loss functions for training the image restoration module include:

$$\mathcal{L}_{ID}^{R} = \left\| E_{ID}(I_S) - E_{ID}(\hat{I}_R) \right\|_2$$

$$\mathcal{L}_{rec}^{R} = \begin{cases} \ell\bigl(\hat{I}_R, I_T\bigr), & ID(I_S) = ID(I_T) \\ \ell\bigl(BG(\hat{I}_R), BG(I_T)\bigr) + \lambda_3\,\mathcal{L}_{perc}\bigl(\hat{I}_R, I_T\bigr) + \lambda_4\,\mathcal{L}_{CX}\bigl(\hat{I}_R, I_T\bigr), & \text{otherwise} \end{cases}$$

wherein $\mathcal{L}_{ID}^{R}$ and $\mathcal{L}_{rec}^{R}$ are the loss functions of the image restoration module; $I_S$ denotes the original image; $E_{ID}(\cdot)$ denotes the identity feature extractor; $\hat{I}_R$ denotes the image output by the image restoration module; $\mathcal{L}_{perc}$ denotes the image perceptual loss of the GAN network and $ID(\cdot)$ denotes identity information; $BG(\cdot)$ denotes a pre-trained face background segmentation model; $\mathcal{L}_{CX}$ denotes the contextual loss measuring the style and texture similarity between the image $\hat{I}_R$ and the target image $I_T$; $\|\cdot\|_2$ denotes the L2 norm; and $\lambda_3$, $\lambda_4$ are the weights of $\mathcal{L}_{perc}$ and $\mathcal{L}_{CX}$, respectively.
8. The system of claim 1, wherein the feature transformation module adds a key-controlled perturbation vector to the input original feature vector, realizing inter-class migration of the identity features in feature space and yielding an anonymous identity feature vector, this process being called the forward transformation; when identity information needs to be verified, the anonymous identity features are extracted from the anonymous image and the same perturbation vector used in the forward transformation is subtracted from them to recover the original identity feature vector, this process being called the inverse transformation; the forward and inverse transformations can be expressed as:

$$\hat{z}_{ID} = z_{ID} + P(v, k)$$

$$z_{ID} = \hat{z}_{ID} - P(v, k)$$

wherein $z_{ID}$ denotes the original identity feature vector, $\hat{z}_{ID}$ denotes the anonymous identity feature vector, and $P(v, k)$ denotes the perturbation vector generated with the key $k$ and a random noise vector $v$ following a normal distribution as control conditions.
9. The system according to claim 8, wherein the process of constructing the perturbation vectors and the key vector data set corresponding to each perturbation vector comprises:
randomly selecting two groups of face images with non-overlapping identities from a face image data set containing identity labels, while ensuring that the L2 distance between the identity features of any image pair drawn across the two groups is greater than a threshold t;
the two groups of images containing $2^M$ and $2^N$ different identities respectively, with $K = M + N$, so that $2^K$ identity pairs can be formed between the two groups, each possible value of the key being assigned to one identity pair;
sampling L image pairs from each identity pair and computing the difference between the identity feature vectors as the perturbation vector, so that each key corresponds to L perturbation vectors;
obtaining a data set of $L \times 2^K$ perturbation vectors and their corresponding key vectors;
where K denotes the length of the key k.
10. The system of claim 9, wherein the perturbation vector is obtained through a conditional generative adversarial network (CGAN) comprising a generator and a discriminator, the generator generating the perturbation vector $P(v, k)$ from a random noise vector $v$ and its corresponding key $k$, and the discriminator judging whether a generated perturbation vector $\hat{p}$ follows the conditional distribution of $k$; optimizing the generator according to the CGAN principle, i.e., computing the optimal perturbation vector $P(v, k)$, is expressed as:

$$\min_P \max_D \; \mathbb{E}_{\hat{p}, k}\left[\log D(\hat{p}, k)\right] + \mathbb{E}_{v, k}\left[\log\left(1 - D\bigl(P(v, k), k\bigr)\right)\right]$$

wherein $D(P(v,k), k)$ denotes the output of the discriminator when fed the output of $P$; $\mathbb{E}[\cdot]$ denotes the expectation; $P$ denotes the generator, which splices the random noise vector $v$ and the key $k$ into an implicit representation and generates the perturbation vector from it; and $D$ denotes the discriminator, which judges whether the generated perturbation vector follows the conditional distribution of the key $k$.
CN202211632388.8A 2022-12-19 2022-12-19 Reversible face anonymization processing system Pending CN115795406A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211632388.8A CN115795406A (en) 2022-12-19 2022-12-19 Reversible face anonymization processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211632388.8A CN115795406A (en) 2022-12-19 2022-12-19 Reversible face anonymization processing system

Publications (1)

Publication Number Publication Date
CN115795406A (en) 2023-03-14

Family

ID=85425696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211632388.8A Pending CN115795406A (en) 2022-12-19 2022-12-19 Reversible face anonymization processing system

Country Status (1)

Country Link
CN (1) CN115795406A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402916A (en) * 2023-06-08 2023-07-07 北京瑞莱智慧科技有限公司 Face image restoration method and device, computer equipment and storage medium
CN116402916B (en) * 2023-06-08 2023-09-05 北京瑞莱智慧科技有限公司 Face image restoration method and device, computer equipment and storage medium
CN116524339A (en) * 2023-07-05 2023-08-01 宁德时代新能源科技股份有限公司 Object detection method, apparatus, computer device, storage medium, and program product
CN116524339B (en) * 2023-07-05 2023-10-13 宁德时代新能源科技股份有限公司 Object detection method, apparatus, computer device, storage medium, and program product

Similar Documents

Publication Publication Date Title
CN115795406A (en) Reversible face anonymization processing system
CN112949535B (en) Face data identity de-identification method based on generative adversarial network
CN111476200B (en) Face de-identification generation method based on generative adversarial network
CN108629753A (en) Face image restoration method and device based on recurrent neural network
CN112862001B (en) Privacy protection method and system for decentralizing data modeling under federal learning
CN110059465A (en) Authentication method, adversarial network training method, device and equipment
CN114417427B (en) Deep learning-oriented data sensitivity attribute desensitization system and method
AU2019100349A4 (en) Face - Password Certification Based on Convolutional Neural Network
Han et al. Generative model based highly efficient semantic communication approach for image transmission
KR102126197B1 (en) Method and server for training a neural network with de-identified images
CN111966998A (en) Password generation method, system, medium, and apparatus based on variational automatic encoder
CN116563681A (en) Gaze estimation detection algorithm based on attention crossing and two-way feature fusion network
CN111598051A (en) Face verification method, device and equipment and readable storage medium
CN113763268A (en) Blind restoration method and system for face image
CN118196231B (en) Lifelong learning draft method based on concept segmentation
CN114783017A (en) Method and device for generative adversarial network optimization based on inverse mapping
Zhao et al. Removing adversarial noise via low-rank completion of high-sensitivity points
CN111222583A (en) Image steganalysis method based on confrontation training and key path extraction
CN112668401B (en) Face privacy protection method and device based on feature decoupling
CN114036553A (en) K-anonymity-combined pedestrian identity privacy protection method
CN116188439A (en) False face-changing image detection method and device based on identity recognition probability distribution
CN115424337A (en) Iris image restoration system based on priori guidance
CN112950501B (en) Noise field-based image noise reduction method, device, equipment and storage medium
Tang et al. Few-sample generation of amount in figures for financial multi-bill scene based on GAN
Ding et al. InjectionGAN: unified generative adversarial networks for arbitrary image attribute editing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination