CN112668401B - Face privacy protection method and device based on feature decoupling - Google Patents

Face privacy protection method and device based on feature decoupling

Info

Publication number
CN112668401B
Authority
CN
China
Prior art keywords
face
face image
image
identity
appearance
Prior art date
Legal status
Active
Application number
CN202011447934.1A
Other languages
Chinese (zh)
Other versions
CN112668401A (en)
Inventor
操晓春
李京知
张华
任文琦
韩路通
Current Assignee
Institute of Information Engineering of CAS
Original Assignee
Institute of Information Engineering of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Information Engineering of CAS filed Critical Institute of Information Engineering of CAS
Priority to CN202011447934.1A priority Critical patent/CN112668401B/en
Publication of CN112668401A publication Critical patent/CN112668401A/en
Application granted granted Critical
Publication of CN112668401B publication Critical patent/CN112668401B/en

Landscapes

  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face privacy protection method and device based on feature decoupling. The method comprises the following steps: 1) data preprocessing, and pre-training of the identity feature extractor E_I and the appearance feature extractor E_A; 2) training the face camouflage generation model on the same-face generation task; 3) training the face camouflage generation model on the different-face generation task; 4) after model training is complete, using the trained E_I, E_A and G networks to generate a camouflage face image for an input face image. In step 1), E_I and E_A are pre-trained with classification losses. In step 2), the model is trained with a face reconstruction loss. In step 3), L2-norm loss functions on the appearance features and the identity features are designed to constrain the accuracy of the model's feature extraction and the controllability of the generated result. The invention can markedly change the appearance of a face while preserving face identity matching, and experimental data demonstrate the effectiveness of the privacy protection.

Description

Face privacy protection method and device based on feature decoupling
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a face privacy protection method and device in a video monitoring scene.
Background
Against the background of big data, data security problems such as data abuse, data theft and privacy disclosure are growing and breaking out. Data security incidents at information giants such as Facebook and Google have drawn wide attention from academia and industry to data security, and especially to data security and privacy problems. In recent years, as ever more image and video data are recorded, stored or analyzed worldwide and shared among government departments and other stakeholders, and the latter may tend to use the data for various purposes, privacy protection has become particularly important. User privacy protection is therefore a problem that must be solved in data publishing and sharing: before data are published or shared, they should undergo privacy protection processing that hides the user-related sensitive information they contain. As a uniquely personal kind of private information, the face receives more public attention and discussion than other private attributes, and so does face privacy protection technology.
At present, driven by the development of deep learning theory, the recognition rate of face recognition algorithms approaches the human recognition level. Surveillance systems equipped with face recognition technology have been widely deployed in public places, airports, prisons and other scenes, so the public is in effect being captured on camera everywhere, and research on face privacy protection technology is urgently needed. A 2020 report by the RAND Corporation on face recognition technology systems designed to protect privacy and prevent bias likewise suggests: personally identifiable information, such as facial feature data collected in public places, should be processed using anonymization techniques. Face privacy protection research is currently a frontier problem of artificial intelligence and machine learning, and is applied in fields closely related to public privacy such as public surveillance, smart cities and medicine.
The existing face privacy protection methods fall mainly into three categories. The first category is image-filtering methods such as blurring, pixelation and random noise. Such methods alter the facial regions of an image so strongly that the result is useless in some applications (e.g. video surveillance, social networks). The second category is the k-same family of face de-identification methods based on the k-anonymity framework. These methods first group faces according to non-identifying information such as expression, and then generate one substitute face for each group; they guarantee that any face recognition system recognizing the face corresponding to a specific image performs no better than 1/k, where k is the minimum number of faces over all groups. The third category is based on deep learning and aims to remove the biometric information of a face as far as possible while preserving expression, gender, race and other usable facial attributes. In summary, these face privacy protection methods never distinguish between human vision and machine vision in their design, and a great number of them are proposed for de-identification against machine vision systems. In practical application scenarios, however, identity is an important attribute of face data, for example: government departments use public surveillance systems to capture lawbreakers for identity tracing and authentication, and payment systems use original face data for identity authentication. In such application scenarios the public's need for face privacy protection still exists, but the prior art cannot satisfy it.
Disclosure of Invention
The invention addresses the problem of face privacy protection in surveillance scenes with a face camouflage generation method based on feature decoupling: a new camouflage face is generated from the identity recognition features of the protected face and the appearance features of a reference face. A face image generated by the method markedly changes the appearance of the protected face, yet still passes face verification by a face recognition system. To this end, the method decomposes the utility of a face into identity features and appearance features and defines a corresponding feature vector for each. To suit the intended scenario, the identity feature vector mainly encodes the semantics related to the face's biometric recognition information together with the face contour and the background, while the appearance feature vector captures all other appearance attributes, such as skin color, hair, eyebrows and nose.
The face camouflage generation model proposed by the method adopts a generative adversarial network structure, shown in figure 1. The model consists of four modules: an identity feature extractor E_I, used to extract the unique biometric recognition features in a face image; an appearance feature extractor E_A, which extracts the appearance features of the face; a face generator G, which generates a synthetic face containing the input identity features and appearance features; and a discriminator D, used to distinguish generated faces from real faces. Two kinds of image mapping are introduced in the model training phase: same-face generation and different-face generation. Same-face generation constrains the reconstruction ability and generation quality of the model, while different-face generation constrains the generated results to conform to the real data distribution.
The technical scheme adopted by the invention is as follows:
a face privacy protection method based on feature decoupling comprises the following steps:
pre-training an identity feature extractor E_I and an appearance feature extractor E_A;
jointly training the identity feature extractor E_I, the appearance feature extractor E_A and a face generator G end to end, to generate a face image identical to the input face image;
jointly training the identity feature extractor E_I, the appearance feature extractor E_A, the face generator G and a discriminator D end to end, to generate different face images;
given a protected face image and a reference face image, using E_I, E_A and G to obtain a camouflage face image of the protected face image.
Further, the above method comprises the steps of:
1) Preprocess the labeled data set and pre-train the identity feature extractor E_I and the appearance feature extractor E_A with the preprocessed data.
2) Train the model on same-face generation, also known as face reconstruction: given a face image, use the pre-trained E_I and E_A to extract its identity features and appearance features respectively, and input the two feature vectors into the face generator G to generate a face image identical to the input (i.e. image B' in figure 1).
3) Train the model on different-face generation, i.e. camouflage face generation. First, given two different face images A and B, use E_I to extract the identity feature f_I(A) of image A and E_A to extract the appearance feature f_A(B) of image B. Second, generate a new face image A' from the two acquired feature vectors through the G network. Then input A' to the discriminator D, the identity feature extractor E_I and the appearance feature extractor E_A respectively, where D judges whether the generated image approximates a real face image (i.e. image A), and E_I and E_A obtain f_I(A') and f_A(A') respectively. Finally, train the model with a combination of the discriminant loss, the feature losses (identity feature loss and appearance feature loss), the face difference loss, etc., until the model is stable.
4) After model training is finished, perform model testing: using E_I, E_A and the G network of the trained model, input the protected face image and the reference face image to obtain the camouflage face image of the protected face.
Further, the face data preprocessing in step 1) includes face alignment and data scaling. When pre-training E_I and E_A, in consideration of the size of the face region and seamless fusion with the background in practical applications, the pre-training of E_I preserves information such as the geometric structure of the face in the image in addition to extracting the biometric features of the face. Furthermore, since the subsequent face generator tends to rely on features with more spatial information, to prevent it from emphasizing the identity features and ignoring the appearance features, the input image of E_I is converted into a grayscale image, which drives the face generator to use both kinds of features simultaneously.
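As an illustration, a minimal sketch of this grayscale conversion (the function name and tensor layout are assumptions, not from the patent):

```python
import torchvision.transforms.functional as TF

# Assumed sketch: strip colour cues from the identity branch so that
# appearance information (skin tone, hair colour, ...) must come from E_A.
# `image` is a (3, H, W) RGB tensor; three grayscale channels are kept so
# that E_I can reuse a standard RGB-input backbone.
def identity_input(image):
    return TF.rgb_to_grayscale(image, num_output_channels=3)
```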
Further, step 2) designs an original-image reconstruction task, under which the model can be regarded as an auto-encoder learning how to reconstruct the original image. This task plays an important regularization role for the whole model and ensures that the feature vectors obtained by the two feature extractors omit nothing. In this process a face reconstruction loss is used as the constraint.
Further, in order to generate a face image that preserves identity while differing in appearance, step 3) introduces hidden-space codes based on the appearance features and the identity features to control image generation. In this process, L2-norm loss functions, comprising the identity feature loss and the appearance feature loss, are designed between the known input feature vectors f_I(A) and f_A(B) of the face generator and the feature vectors f_I(A') and f_A(A') extracted from the generated image, to constrain the accuracy of the model's feature extraction and the controllability of the generated result. Meanwhile, to make the appearance difference between the generated face and the face providing the identity features as large as possible, an L2-norm loss function of A' and A (the face difference loss) is designed. Furthermore, an adversarial loss function (the discriminant loss) is used to constrain the generated image to fit the real data distribution.
Further, in step 4), given two face images, a protected face and a reference face, E_I extracts the identity features of the protected face image and E_A extracts the appearance features of the reference face image; the two feature vectors are input into the G network, finally yielding the camouflage face image of the protected face.
A face privacy protection device based on feature decoupling using the above method comprises:
a model training module, for pre-training the identity feature extractor E_I and the appearance feature extractor E_A; jointly training E_I, E_A and a face generator G to generate a face image identical to the input face image; and jointly training E_I, E_A, the face generator G and a discriminator D to generate different face images;
a camouflage face image generation module, for, given a protected face image and a reference face image, using E_I, E_A and G to obtain a camouflage face image of the protected face image.
In summary, the invention designs a face camouflage generation method for privacy protection and achieves a demonstrable effect. Compared with prior face privacy protection technology, the invention has the following advantages:
1. A brand-new concept and method for face privacy protection is proposed, distinguishing human vision from machine vision for the first time in this field and breaking the limitation that existing de-identification methods target only machine vision systems.
2. With the proposed generation mechanism, the generation module can learn appearance and identity codes with clear and complementary meanings and generate high-quality face images from the hidden-space vectors.
3. Because hidden-space coding is used, the face images generated by the method are interpretable and confined to real image content, which also guarantees the realism of the generated results.
Drawings
FIG. 1: architecture diagram of the face camouflage generation model;
FIG. 2: example results of face camouflage generation.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Existing face privacy protection methods do not distinguish human eye vision from machine vision; against this background, the invention provides a face privacy protection method that preserves biometric recognition. Experiments are performed below to illustrate the effectiveness of the invention.
In view of the adaptability of the inventive method to faces at different ages, the experiments use the cross-age face dataset CACD (B.-C. Chen, C.-S. Chen, and W. H. Hsu, "Cross-age reference coding for age-invariant face recognition and retrieval," in European Conference on Computer Vision, 2014, pp. 768-783), which contains 163,446 face images of 2,000 celebrities with ages spanning 16 to 62 years. The training data set is a random 80% sample across the different age groups in CACD, and the remaining 20% is the test data set.
The experimental procedure was as follows:
(1) Face data preprocessing. For face images captured under unconstrained conditions, Dlib is first used to detect the face; 68 facial landmark coordinates are then extracted with an ensemble-of-regression-trees method; finally the training faces are normalized, yielding 128 x 128 input images whose central 112 x 112 region is the face region.
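A hedged sketch of this preprocessing is shown below; the patent names Dlib and the ensemble-of-regression-trees landmark method but not the exact cropping rule, so the margin factor and the model file path are assumptions:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# 68-point predictor trained with the ensemble-of-regression-trees method;
# the model file path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def preprocess(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                        # detect face rectangles
    if not faces:
        return None
    shape = predictor(gray, faces[0])                # 68 landmark coordinates
    pts = np.array([(p.x, p.y) for p in shape.parts()])
    # Crop around the landmarks with a small margin so that, after resizing
    # to 128 x 128, the face fills roughly the central 112 x 112 region.
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    m = int(0.07 * max(x1 - x0, y1 - y0))            # margin factor assumed
    crop = img[max(0, y0 - m):y1 + m, max(0, x0 - m):x1 + m]
    return cv2.resize(crop, (128, 128))
```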
(2) Network structure design. The appearance feature extractor E_A adopts a ResNet-50 network with the global average pooling layer and the fully connected layer removed and an adaptive max pooling layer added; this network is pre-trained and outputs features of dimension 2048 x 4 x 1. The identity feature extractor E_I consists of 4 convolutional layers and 4 residual blocks and outputs identity-matching features of dimension 128 x 64 x 32. The face generator G consists of 4 residual blocks, 2 adaptive instance normalization layers and 4 convolutional layers; the discriminator D adopts the discriminator structure of the conditional adversarial network.
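As a concrete sketch of the appearance branch only (torchvision >= 0.13 assumed; the patent does not spell out the internals of E_I, G or D, so they are omitted here):

```python
import torch
import torch.nn as nn
from torchvision import models

# E_A as described: ResNet-50 with the global average pooling and fully
# connected layers removed, plus an adaptive max pooling layer.
backbone = models.resnet50(weights="IMAGENET1K_V1")
E_A = nn.Sequential(
    *list(backbone.children())[:-2],   # conv stem + 4 stages, drop avgpool/fc
    nn.AdaptiveMaxPool2d((4, 1)),      # pool the 4x4 feature map down to 4x1
)

f_A = E_A(torch.randn(1, 3, 128, 128)) # a preprocessed 128 x 128 face
print(f_A.shape)                       # torch.Size([1, 2048, 4, 1])
```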
(3) Pre-training stage. The appearance feature extractor E_A and the identity feature extractor E_I are pre-trained separately: the former on ImageNet (see the experimental data below) and the latter on the training data set prepared in step (1). During pre-training, E_A is optimized with stochastic gradient descent with learning rate 0.0001 and momentum 0.9, and E_I with the Adam optimization algorithm with learning rate 0.0001. The loss function used to train E_I is the identity loss, for which the method uses a classification loss defined as:

L_id = E[-log(p(c_i | x_i))]

where E[.] denotes the mathematical expectation and p(c_i | x_i) denotes the probability that the input face image x_i belongs to identity c_i.
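Since L_id is the negative log-likelihood of the correct identity, it is ordinary cross-entropy over identity classes; a minimal sketch (the linear head on top of E_I is an assumption):

```python
import torch.nn.functional as F

def identity_loss(E_I, head, images, labels):
    feats = E_I(images)              # identity features (e.g. 128 x 64 x 32)
    logits = head(feats.flatten(1))  # assumed linear classifier over identities
    # F.cross_entropy computes E[-log p(c_i | x_i)] from the logits.
    return F.cross_entropy(logits, labels)
```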
(4) Model training stage. The model is trained end to end with 80% of the data from step (1); during training each module is iteratively optimized with the Adam optimization algorithm with learning rate 0.0001. As described above, training is divided into same-face generation and different-face generation. For same-face generation the model structure resembles an auto-encoder, and the loss function used is the face reconstruction loss, defined as:

L_rec = E[||y - G(f_I(y), f_A(y))||_2]

where y denotes the reference input face, f_I(.) and f_A(.) denote the extracted identity and appearance features respectively, and G(.) denotes face image generation.
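A minimal sketch of this same-face objective, assuming E_I, E_A and G are modules with the interfaces used above and to_gray is the grayscale conversion for E_I's input:

```python
import torch.nn.functional as F

def reconstruction_loss(E_I, E_A, G, y, to_gray):
    # Reconstruct y from its own identity and appearance codes;
    # mean-squared error realises the L2 reconstruction loss L_rec.
    y_hat = G(E_I(to_gray(y)), E_A(y))
    return F.mse_loss(y_hat, y)
```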
For different-face generation, 4 loss functions are used: the discriminant loss, the identity feature loss, the appearance feature loss and the face difference loss, defined respectively as:

Discriminant loss: L_D = E[log(D(y)) + log(1 - D(G(f_I(x), f_A(y))))]

Identity feature loss: L_f1 = E[||f_I(x) - f_I(x')||_2]

Appearance feature loss: L_f2 = E[||f_A(y) - f_A(x')||_2]

Face difference loss: L_dif = -E[||x - x'||_2]

where x denotes the protected face, x' = G(f_I(x), f_A(y)) denotes the face image produced by the face generator G, and D(.) denotes the authenticity judgment result. The overall model loss function is thus:

L = λ_1 L_D + λ_2 L_id + λ_3 L_rec + λ_4 L_f1 + λ_5 L_f2 + λ_6 L_dif

where λ_1, λ_2, ..., λ_6 are balance parameters.
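Putting the different-face terms together for one generator update (a sketch under the same assumptions; the adversarial term is written in the common non-saturating BCE form rather than the raw log form, and the λ_2 L_id and λ_3 L_rec terms of the total loss are not repeated here):

```python
import torch
import torch.nn.functional as F

def camouflage_losses(E_I, E_A, G, D, x, y, to_gray, lams):
    x_prime = G(E_I(to_gray(x)), E_A(y))   # x's identity, y's appearance

    logits = D(x_prime)                    # generator side of L_D
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    f1 = F.mse_loss(E_I(to_gray(x_prime)), E_I(to_gray(x)))  # identity feature loss
    f2 = F.mse_loss(E_A(x_prime), E_A(y))                    # appearance feature loss
    dif = -F.mse_loss(x_prime, x)          # face difference loss (maximise the gap)

    l1, l4, l5, l6 = lams                  # balance parameters (subset shown)
    return l1 * adv + l4 * f1 + l5 * f2 + l6 * dif
```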
(5) Model testing stage. The model trained in step (4) is tested with the remaining 20% of the data from step (1). In the testing stage only the appearance feature extractor E_A, the identity feature extractor E_I and the face generator G are used.
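At test time the whole pipeline reduces to a single forward pass; a hedged sketch:

```python
import torch

@torch.no_grad()
def camouflage(E_I, E_A, G, protected, reference, to_gray):
    # Identity from the protected face, appearance from the reference face.
    return G(E_I(to_gray(protected)), E_A(reference))
```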
The test environment and experimental results of the proposed face privacy protection method are as follows:
(1) Test environment:
System environment: Ubuntu 16.04;
Hardware environment: memory: 64 GB; GPU: NVIDIA TITAN Xp; hard disk: 2 TB;
(2) Experimental data:
training data:
ImageNet (5000 images): pre-training the appearance encoder E_A;
CACD (1500 images): pre-training the identity encoder E_I;
CACD (1500 images): training same-face generation;
CACD (1500 images): training different-face generation.
Test data: CACD (500 images)
(3) Experimental results:
Evaluation method: to verify identity preservation, test results under the two protocols of face verification (1:1) and face recognition (1:N) are adopted; the face recognition model is the mainstream open-source algorithm InsightFace (J. Deng, J. Guo, N. Xue, and S. Zafeiriou, "ArcFace: Additive angular margin loss for deep face recognition," in CVPR, 2019). The appearance-change effect of the generated faces is verified by human observation. Since no method similar to the disclosed one has been found so far, only the recognition results on original faces are selected as the reference. In addition, models trained with and without the identity preservation module are both tested to verify the module's usefulness. FIG. 2 shows examples of face camouflage generation results produced by the method of the invention.
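For clarity on the two protocols, a minimal sketch of how 1:1 verification and 1:N identification are usually scored on embeddings (the embeddings themselves would come from the InsightFace/ArcFace model, not shown; the threshold value is an assumption):

```python
import numpy as np

def verify(emb_a, emb_b, thresh=0.3):
    # 1:1 verification: cosine similarity of two embeddings vs. a threshold.
    cos = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return cos > thresh

def identify(query, gallery):
    # 1:N identification: index of the nearest gallery embedding (cosine).
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return int(np.argmax(g @ q))
```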
The accuracy of the experimental results is shown in table 1:
TABLE 1. Accuracy of different methods

No.  Method                                                Face verification accuracy  Face recognition accuracy
1    Raw face data                                         0.9705                      0.7641
2    Human recognition                                     0.8575                      -
3    The invention (without identity preservation module)  0.8263                      0.4243
4    The invention (with identity preservation module)     0.9263                      0.8564
As is clear from Table 1, the face camouflage generation method for privacy protection provided by the invention significantly changes the appearance of the face while maintaining face identity matching, and the comparison verifies the effectiveness of the identity preservation module.
Another embodiment of the present invention provides a face privacy protection device based on feature decoupling using the above method, which includes:
a model training module, for pre-training the identity feature extractor E_I and the appearance feature extractor E_A; jointly training E_I, E_A and a face generator G to generate a face image identical to the input face image; and jointly training E_I, E_A, the face generator G and a discriminator D to generate different face images;
a camouflage face image generation module, for, given a protected face image and a reference face image, using E_I, E_A and G to obtain a camouflage face image of the protected face image.
Based on the same inventive concept, another embodiment of the present invention provides an electronic device (computer, server, smartphone, etc.) comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the steps of the inventive method.
Based on the same inventive concept, another embodiment of the present invention provides a computer-readable storage medium (e.g., ROM/RAM, magnetic disk, optical disk) storing a computer program, which when executed by a computer, performs the steps of the inventive method.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (5)

1. A face privacy protection method based on feature decoupling, characterized by comprising the following steps:
pre-training an identity feature extractor E_I and an appearance feature extractor E_A;
jointly training the identity feature extractor E_I, the appearance feature extractor E_A and a face generator G end to end, to generate a face image identical to the input face image;
jointly training the identity feature extractor E_I, the appearance feature extractor E_A, the face generator G and a discriminator D end to end, to generate different face images;
given a protected face image and a reference face image, using E_I, E_A and G to obtain a camouflage face image of the protected face image;
wherein the joint end-to-end training of E_I, E_A and G to generate a face image identical to the input face image comprises:
given a face image, using the pre-trained E_I and E_A to extract the identity features and appearance features of the face image respectively, and inputting the two feature vectors into the face generator G to generate a face image identical to the input face image; and performing model training with the face reconstruction loss;
wherein the joint end-to-end training of E_I, E_A, G and D to generate different face images comprises:
given two different face images A and B, using E_I to extract the identity feature f_I(A) of image A and E_A to extract the appearance feature f_A(B) of image B;
generating a new face image A' through G from the acquired identity feature f_I(A) and appearance feature f_A(B);
inputting A' to the discriminator D, the identity feature extractor E_I and the appearance feature extractor E_A respectively, where D judges whether the generated image approximates a real face image, and E_I and E_A obtain f_I(A') and f_A(A') respectively;
performing joint training with the discriminant loss, identity feature loss, appearance feature loss and face difference loss until the model is stable;
wherein using E_I, E_A and G to obtain a camouflage face image of the protected face image comprises: using E_I to extract the identity features of the protected face image, using E_A to extract the appearance features of the reference face image, and inputting the extracted features into the face generator G to obtain the camouflage face image of the protected face image.
2. The method of claim 1, characterized in that pre-training the identity feature extractor E_I and the appearance feature extractor E_A comprises:
preprocessing the face data, including face alignment and data scaling;
pre-training E_I and E_A with classification losses, wherein the pre-training of E_I preserves the geometric structure information of the face in the image in addition to extracting the biometric features of the face; and converting the input image of E_I into a grayscale image, so as to drive the face generator to use the identity features and the appearance features simultaneously.
3. A face privacy protection device based on feature decoupling using the method of claim 1 or 2, characterized by comprising:
a model training module, for pre-training the identity feature extractor E_I and the appearance feature extractor E_A; jointly training the identity feature extractor E_I, the appearance feature extractor E_A and a face generator G end to end, to generate a face image identical to the input face image; and jointly training the identity feature extractor E_I, the appearance feature extractor E_A, the face generator G and a discriminator D end to end, to generate different face images;
a camouflage face image generation module, for, given a protected face image and a reference face image, using E_I, E_A and G to obtain a camouflage face image of the protected face image;
wherein the joint end-to-end training of E_I, E_A and G to generate a face image identical to the input face image comprises:
given a face image, using the pre-trained E_I and E_A to extract the identity features and appearance features of the face image respectively, and inputting the two feature vectors into the face generator G to generate a face image identical to the input face image; and performing model training with the face reconstruction loss;
wherein the joint end-to-end training of E_I, E_A, G and D to generate different face images comprises:
given two different face images A and B, using E_I to extract the identity feature f_I(A) of image A and E_A to extract the appearance feature f_A(B) of image B;
generating a new face image A' through G from the acquired identity feature f_I(A) and appearance feature f_A(B);
inputting A' to the discriminator D, the identity feature extractor E_I and the appearance feature extractor E_A respectively, where D judges whether the generated image approximates a real face image, and E_I and E_A obtain f_I(A') and f_A(A') respectively;
performing joint training with the discriminant loss, identity feature loss, appearance feature loss and face difference loss until the model is stable;
wherein using E_I, E_A and G to obtain a camouflage face image of the protected face image comprises: using E_I to extract the identity features of the protected face image, using E_A to extract the appearance features of the reference face image, and inputting the extracted features into the face generator G to obtain the camouflage face image of the protected face image.
4. An electronic apparatus, comprising a memory and a processor, the memory storing a computer program configured to be executed by the processor, the computer program comprising instructions for performing the method of claim 1 or 2.
5. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a computer, implements the method of claim 1 or 2.
CN202011447934.1A 2020-12-09 2020-12-09 Face privacy protection method and device based on feature decoupling Active CN112668401B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011447934.1A CN112668401B (en) 2020-12-09 2020-12-09 Face privacy protection method and device based on feature decoupling

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011447934.1A CN112668401B (en) 2020-12-09 2020-12-09 Face privacy protection method and device based on feature decoupling

Publications (2)

Publication Number Publication Date
CN112668401A CN112668401A (en) 2021-04-16
CN112668401B (en) 2023-01-17

Family

ID=75402130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011447934.1A Active CN112668401B (en) 2020-12-09 2020-12-09 Face privacy protection method and device based on feature decoupling

Country Status (1)

Country Link
CN (1) CN112668401B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991484B (en) * 2021-04-28 2021-09-03 中科计算技术创新研究院 Intelligent face editing method and device, storage medium and equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110610456A (en) * 2019-09-27 2019-12-24 上海依图网络科技有限公司 Imaging system and video processing method
CN110647659A (en) * 2019-09-27 2020-01-03 上海依图网络科技有限公司 Imaging system and video processing method
CN110674765A (en) * 2019-09-27 2020-01-10 上海依图网络科技有限公司 Imaging system and video processing method
CN111414856A (en) * 2020-03-19 2020-07-14 支付宝(杭州)信息技术有限公司 Face image generation method and device for realizing user privacy protection
CN111476200A (en) * 2020-04-27 2020-07-31 华东师范大学 Face de-identification generation method based on generation of confrontation network
CN112052761A (en) * 2020-08-27 2020-12-08 腾讯科技(深圳)有限公司 Method and device for generating confrontation face image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Learning Disentangled Representations for Identity Preserving Surveillance Face Camouflage; Jingzhi Li et al.; 2020 25th International Conference on Pattern Recognition (ICPR); 2021-01-15; full text *

Also Published As

Publication number Publication date
CN112668401A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
Ahmed et al. Analysis survey on deepfake detection and recognition with convolutional neural networks
Yang et al. Neural network inversion in adversarial setting via background knowledge alignment
Demir et al. Where do deep fakes look? synthetic face detection via gaze tracking
CN111241958A (en) Video image identification method based on residual error-capsule network
Seow et al. A comprehensive overview of Deepfake: Generation, detection, datasets, and opportunities
CN113537027B (en) Face depth counterfeiting detection method and system based on face division
Nadeem et al. A survey of deep learning solutions for multimedia visual content analysis
Rahman et al. A qualitative survey on deep learning based deep fake video creation and detection method
Gong et al. Deepfake forensics, an ai-synthesized detection with deep convolutional generative adversarial networks
CN116958637A (en) Training method, device, equipment and storage medium of image detection model
CN112668401B (en) Face privacy protection method and device based on feature decoupling
Martinsson et al. Adversarial representation learning for synthetic replacement of private attributes
CN115424314A (en) Recognizable face anonymization processing method and system
Arora et al. A review of techniques to detect the GAN-generated fake images
Gowda et al. Investigation of comparison on modified cnn techniques to classify fake face in deepfake videos
CN114036553A (en) K-anonymity-combined pedestrian identity privacy protection method
Zhang et al. CNN-based anomaly detection for face presentation attack detection with multi-channel images
Chergui et al. Investigating deep cnns models applied in kinship verification through facial images
CN115457374B (en) Deep pseudo-image detection model generalization evaluation method and device based on reasoning mode
Jang et al. L-GAN: landmark-based generative adversarial network for efficient face de-identification
Dong 3D face recognition neural network for digital human resource management
Chi et al. Toward robust deep learning systems against deepfake for digital forensics
Yavuzkiliç et al. DeepFake face video detection using hybrid deep residual networks and LSTM architecture
Costa et al. Improving human perception of GAN generated facial image synthesis by filtering the training set considering facial attributes
Hummady et al. A Review: Face Recognition Techniques using Deep Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant