CN111476200A - Face de-identification generation method based on generative adversarial network - Google Patents
Face de-identification generation method based on generative adversarial network
- Publication number
- CN111476200A (application number CN202010343798.5A)
- Authority
- CN
- China
- Prior art keywords
- face image
- face
- image
- loss
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The invention discloses a face de-identification generation method based on a generative adversarial network (GAN). N pairs of face images are acquired and a feature vector is extracted from each image; the facial-feature region of each face image is occluded with random noise to obtain an occluded face image; the occluded face image is combined with the corresponding face-image feature vector as the input of the generator of the generative adversarial network, while the original face image serves as the real face image for the discriminator, the three forming one training sample. The generator and the discriminator are trained with these samples. After training, in the application stage, the occluded face image and the feature vector are obtained in the same way for each face picture to be de-identified, and their combination is input to the generator of the trained face de-identification generation model to obtain the de-identified face image. The invention can generate high-quality virtual-user face images while protecting user privacy.
Description
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a face de-identification generation method based on a generative adversarial network.
Background
With the rapid development of network information technology, face recognition has spread from academia to government and industry, playing an important role in more and more applications, typically replacing or assisting ID cards, passwords and other credentials for verifying user identity. However, both the training of face recognition models and their practical application require large amounts of high-quality labeled data, and such data carry users' personal portrait privacy; that privacy is put at risk whenever a third-party operator obtains the data during training or use. This creates the need for face de-identification generation: providing a unique identification of the user without revealing individual privacy, so that face recognition models can still be trained and applied.
A face de-identification generation method mainly comprises two parts: face de-identification and face picture generation. Traditional methods, such as K-anonymity, usually focus on the de-identification part and have notable drawbacks. First, although the de-identified data meet the de-identification requirement, they can no longer uniquely identify the user, so they cannot be used to train or run a face recognition model, which greatly lowers their practical value. Second, the resulting pictures have poor sharpness and are blurred, differing considerably from real face pictures. In addition, for different pictures of the same user, factors such as differing shooting environments can make the desensitized pictures vary widely, meaning that much of the user's characteristic information is lost.
Therefore, merely completing face de-identification cannot meet actual face-usage requirements. In practice, a data owner needs data that, while guaranteeing that users' portrait privacy is not revealed, remain uniquely identifying, are sufficiently sharp, and retain enough characteristic information for training face recognition models and for actual face recognition applications. No effective solution to this need has yet been proposed in the industry.
Disclosure of Invention
The object of the invention is to overcome the defects of the prior art and provide a face de-identification generation method based on a generative adversarial network that keeps users' personal information private while producing pictures of high sharpness that retain as many user characteristics as possible, so that they can be used for training and applying face recognition models.
In order to achieve the above object, the face de-identification generation method based on a generative adversarial network according to the present invention comprises the following steps:
S1: acquiring N pairs of face images, wherein the two face images in each pair are different face images of the same user; adjusting each face image to a preset size and recording it as p_i^n, where p_i^n denotes the i-th face image of the n-th pair, i = 1, 2, n = 1, 2, …, N;
S2: inputting each face image p_i^n into a pre-trained face feature extraction model to obtain the corresponding feature vector, recorded as f_i^n;
occluding the facial-feature region of each face image p_i^n with random noise to obtain an occluded face image, converting it into a vector and combining it with the feature vector f_i^n of the face image as the input, and taking the original face image p_i^n as the real face image, the three together constituting one training sample;
S3: constructing a generative adversarial network comprising a generator and a discriminator, wherein the input of the generator is the combination of the occluded face image and the face-image feature vector, the output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image;
S4: training the generative adversarial network with the training samples obtained in step S2; in each batch of the training process, selecting several face-image pairs from the face-image pair set and using the corresponding training samples as the current batch; the losses adopted include the adversarial loss, gradient penalty loss, intra-user loss, inter-user loss, de-identification loss and structural similarity loss, computed respectively as follows:
using the discriminator of the generative adversarial network to score each real face image in the current batch and the corresponding generated virtual-user face image, and computing the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss L_D;
computing the gradient penalty of each training sample in the current batch and averaging the results to obtain the gradient penalty loss L_GP;
using the face feature extraction model to obtain the pair of feature vectors of the two generated virtual-user face images corresponding to each pair of face images in the current batch, computing the cosine distance between the two feature vectors of each pair, and averaging over all pairs to obtain the intra-user loss L_FFI;
randomly selecting K pairs of face images from the current batch such that the two face images of each pair belong to different users, using the face feature extraction model to obtain the pairs of feature vectors of the generated virtual-user face images corresponding to these K pairs, computing the cosine distance between the two feature vectors of each pair, and averaging over all pairs to obtain the inter-user loss L_FFO;
using the face feature extraction model to obtain the feature vectors of each face image in the current batch and of its corresponding generated virtual-user face image, computing the cosine distance between the two, and averaging to obtain the de-identification loss L_RF;
computing the structural similarity between each face image in the current batch and its corresponding generated virtual-user face image, and averaging to obtain the structural similarity loss L_S;
setting the discriminator loss to L_D - θ·L_GP and the generator loss to L_D + α·L_FFI + β·L_S - γ·L_RF + η·L_FFO, where θ, α, β, γ and η are preset parameters, and training the discriminator and the generator alternately;
S5: adjusting the face image to be de-identified to the preset size to obtain a face image p', extracting its feature vector f' with the face feature extraction model, occluding the facial-feature region of p' with random noise to obtain an occluded face image, converting it into a vector, combining it with the feature vector f', and inputting the combination to the generator of the generative adversarial network to obtain the de-identified face image p'*.
According to the face de-identification generation method based on a generative adversarial network of the present invention, N pairs of face images are acquired; each face image is input into a pre-trained face feature extraction model to obtain its feature vector; the facial-feature region of each face image is occluded with random noise to obtain an occluded face image; the occluded face image is combined with the corresponding feature vector as the input of the generator of the generative adversarial network, and the original face image serves as the real face image for the discriminator, forming one training sample; the generator and the discriminator are trained with these samples. After training, in the application stage, the occluded face image and the feature vector are obtained in the same way for each face picture to be de-identified, and their combination is input to the generator of the trained face de-identification generation model to obtain the de-identified face image.
Using real users' face images, the invention can generate high-quality virtual-user face images. On the basis of satisfying de-identification, it retains to the greatest extent characteristics weakly correlated with identity, such as gender, race and skin color, so that the statistical information of the user set is unaffected; and the images generated for one user still belong to the same virtual user, so they remain usable for training and applying face recognition models. The de-identified face images thus stay highly usable while protecting user privacy.
Drawings
FIG. 1 is a flow chart of an embodiment of the face de-identification generation method based on a generative adversarial network according to the present invention;
FIG. 2 compares original face images with the generated virtual-user face images in this embodiment.
Detailed Description
The following describes embodiments of the present invention with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be noted that detailed descriptions of known functions and designs are omitted where they would obscure the subject matter of the present invention.
Examples
FIG. 1 is a flow chart of an embodiment of the face de-identification generation method based on a generative adversarial network according to the invention. As shown in FIG. 1, the method specifically comprises the following steps:
S101: acquiring face image samples:
Acquire N pairs of face images, where the two face images in each pair are different face images of the same user. Each face image is normalized to a preset size and recorded as p_i^n, where p_i^n denotes the i-th face image of the n-th pair, i = 1, 2, n = 1, 2, …, N.
S102: obtaining a training sample:
Each face image p_i^n is input into a pre-trained face feature extraction model to obtain its feature vector, recorded as f_i^n.
The facial-feature region of each face image p_i^n is occluded with random noise to obtain an occluded face image, which is converted into a vector and combined with the feature vector f_i^n of the face image as the input; the original face image p_i^n serves as the real face image, and the three together constitute one training sample.
In the present invention, the occluded face image provides the model with face-irrelevant background information, which is preserved as far as possible during generation, while the feature vector of the face image provides the face information used for de-identification generation.
Because the original face is occluded by random noise, the user's original face information is not fed directly into the generative adversarial network; the network must instead learn to generate a virtual face different from the original one from the original face feature vector. Since the original face features of the same user are similar, the network preserves this similarity during training, so the de-identified images of one user still belong to the same virtual user.
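For illustration only, the occlusion-and-combination step might look like the sketch below; the uniform-noise distribution and the `face_box` rectangle locating the facial-feature region are assumptions, since the embodiment specifies neither a noise distribution nor a localization method:

```python
import torch

def make_training_sample(image, feature_vector, face_box):
    """Build one training triplet: (occluded image, generator input, original image).

    `image` is a CxHxW float tensor in [0, 1]; `face_box` = (x0, y0, x1, y1)
    marks the facial-feature region (assumed to come from a face detector,
    which the patent does not specify).
    """
    occluded = image.clone()
    x0, y0, x1, y1 = face_box
    # Replace the facial-feature region with random noise (distribution assumed uniform).
    occluded[:, y0:y1, x0:x1] = torch.rand_like(occluded[:, y0:y1, x0:x1])
    # Convert the occluded image to a vector and combine it with the feature vector.
    generator_input = torch.cat([occluded.reshape(-1), feature_vector])
    return occluded, generator_input, image
```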
S103: constructing the generative adversarial network:
A generative adversarial network comprising a generator and a discriminator is constructed. The input of the generator is the combination of the occluded face image and the face-image feature vector, its output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image.
S104: training the generative adversarial network:
The generative adversarial network is trained with the training samples obtained in step S102. In each batch of the training process, several face-image pairs are selected from the face-image pair set, and the corresponding training samples form the current batch. Because the choice of losses is critical when training a generative adversarial network, the losses adopted in the present invention to improve the performance of the trained network comprise the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural similarity loss, computed as follows:
Adversarial loss:
The discriminator of the generative adversarial network scores each real face image in the current batch and the corresponding generated virtual-user face image, and the Wasserstein distance between the real-image scores and the generated-image scores is taken as the adversarial loss L_D.
Gradient penalty loss:
The gradient penalty of each training sample in the current batch is computed and the results are averaged to obtain the gradient penalty loss L_GP, a term commonly used when training generative adversarial networks.
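A minimal sketch of these two terms, assuming the standard WGAN-GP formulation (the patent names the Wasserstein distance and a gradient penalty but does not give explicit formulas):

```python
import torch

def critic_losses(discriminator, real, fake, theta=5.0):
    """Adversarial loss L_D and gradient penalty L_GP, WGAN-GP style.

    `discriminator` is assumed to map an image batch to one score per image;
    the standard WGAN-GP forms are used here as an assumption.
    """
    # L_D: difference between mean real-image and generated-image scores.
    l_d = discriminator(real).mean() - discriminator(fake).mean()

    # L_GP: penalty on the gradient norm at random real/fake interpolations.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake.detach()).requires_grad_(True)
    grads = torch.autograd.grad(discriminator(interp).sum(), interp, create_graph=True)[0]
    l_gp = ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return l_d, l_gp, l_d - theta * l_gp  # the discriminator maximizes the last term
```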
Intra-user loss:
The face feature extraction model is used to obtain the pair of feature vectors of the two generated virtual-user face images corresponding to each pair of face images in the current batch; the cosine distance between the two feature vectors of each pair is computed, and the average over all pairs gives the intra-user loss L_FFI.
Inter-user loss:
K pairs of face images are randomly selected from the current batch such that the two images of each pair belong to different users; the face feature extraction model is used to obtain the pairs of feature vectors of the generated virtual-user face images corresponding to these K pairs, the cosine distance between the two feature vectors of each pair is computed, and the average over all pairs gives the inter-user loss L_FFO.
De-identification loss:
The face feature extraction model is used to obtain the feature vectors of each face image in the current batch and of its corresponding generated virtual-user face image; the cosine distance between the two is computed, and the average gives the de-identification loss L_RF.
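Reading "cosine distance" as 1 - cosine similarity (the patent does not define it explicitly), the three feature-space losses L_FFI, L_FFO and L_RF might be sketched as follows; `extract` stands in for the pre-trained face feature extraction model, which the patent leaves unspecified:

```python
import torch
import torch.nn.functional as F

def cosine_distance(a, b):
    """Cosine distance between batches of feature vectors: 1 - cosine similarity."""
    return 1.0 - F.cosine_similarity(a, b, dim=1)

def feature_losses(extract, real_1, real_2, fake_1, fake_2, fake_o1, fake_o2):
    """Sketch of L_FFI, L_FFO and L_RF.

    real_1/real_2: the two images of each same-user pair (batched);
    fake_1/fake_2: their generated virtual-user counterparts;
    fake_o1/fake_o2: generated images for K pairs of different users.
    """
    # L_FFI: cosine distance between the generated images of each same-user pair.
    l_ffi = cosine_distance(extract(fake_1), extract(fake_2)).mean()
    # L_FFO: cosine distance between different users' generated images.
    l_ffo = cosine_distance(extract(fake_o1), extract(fake_o2)).mean()
    # L_RF: distance between each original image and its generated counterpart.
    reals, fakes = torch.cat([real_1, real_2]), torch.cat([fake_1, fake_2])
    l_rf = cosine_distance(extract(reals), extract(fakes)).mean()
    return l_ffi, l_ffo, l_rf
```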
Structural similarity loss:
The Structural Similarity (SSIM) index between each face image in the current batch and its corresponding generated virtual-user face image is computed, and the average is taken as the structural similarity loss L_S.
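The patent names the Structural Similarity index but no implementation; a deliberately simplified, differentiable single-window form (real SSIM implementations use local Gaussian windows) could be:

```python
import torch

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Batch-averaged SSIM, simplified to one global window per image.

    Inputs are image batches scaled to [0, 1]; this global-statistics form
    is an assumption kept simple on purpose.
    """
    x, y = x.flatten(1), y.flatten(1)
    mx, my = x.mean(dim=1), y.mean(dim=1)
    vx, vy = x.var(dim=1, unbiased=False), y.var(dim=1, unbiased=False)
    cov = ((x - mx[:, None]) * (y - my[:, None])).mean(dim=1)
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return ssim.mean()  # L_S: the patent averages the SSIM values themselves
```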
The discriminator loss is set to L_D - θ·L_GP and the generator loss to L_D + α·L_FFI + β·L_S - γ·L_RF + η·L_FFO, where θ, α, β, γ and η are preset parameters, generally positive constants set according to actual needs. The discriminator and the generator are trained alternately: first the generator parameters are fixed and the discriminator is trained, maximizing L_D - θ·L_GP; then the discriminator parameters are fixed and the generator is trained, minimizing L_D + α·L_FFI + β·L_S - γ·L_RF + η·L_FFO; the two finally converge to give the generation model. In this embodiment θ = 5, α = 0.5, β = 0.1, γ = 0.1 and η = 0.25.
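Combining the terms, one alternating update could look like the sketch below, reusing the helper functions above and this embodiment's parameter values; network architectures, optimizers and batch layout are assumptions:

```python
import torch

THETA, ALPHA, BETA, GAMMA, ETA = 5.0, 0.5, 0.1, 0.1, 0.25  # values from this embodiment

def train_step(gen, disc, opt_g, opt_d, batch, extract,
               critic_losses, feature_losses, ssim_loss):
    """One alternating update following the patent's two loss formulas.

    `batch` = (gen_in_1, gen_in_2, real_1, real_2, other_in_1, other_in_2):
    generator inputs and real images for the two same-user halves, plus
    generator inputs for K different-user pairs.
    """
    gen_in_1, gen_in_2, real_1, real_2, other_in_1, other_in_2 = batch
    real = torch.cat([real_1, real_2])

    # Discriminator step: maximize L_D - theta*L_GP with the generator fixed.
    fake = torch.cat([gen(gen_in_1), gen(gen_in_2)])
    _, _, d_objective = critic_losses(disc, real, fake.detach(), THETA)
    opt_d.zero_grad()
    (-d_objective).backward()
    opt_d.step()

    # Generator step: minimize L_D + a*L_FFI + b*L_S - g*L_RF + e*L_FFO.
    fake_1, fake_2 = gen(gen_in_1), gen(gen_in_2)
    fake = torch.cat([fake_1, fake_2])
    l_d = disc(real).mean() - disc(fake).mean()
    l_ffi, l_ffo, l_rf = feature_losses(extract, real_1, real_2, fake_1, fake_2,
                                        gen(other_in_1), gen(other_in_2))
    l_s = ssim_loss(fake, real)
    g_loss = l_d + ALPHA * l_ffi + BETA * l_s - GAMMA * l_rf + ETA * l_ffo
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```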
With these losses, the face image produced by the generative adversarial network is distinguishable from the original face image while retaining the characteristics of the original, and the generated virtual-user face images obtained from different face images of the same user are kept as similar as possible.
The condition for ending training can be set as required; in this embodiment it is determined as follows: the cosine distance between the feature vectors of each pair of face images in the current batch is computed and averaged to give the cosine distance of the real face images; the cosine distance between the feature vectors of the generated virtual-user face images obtained from each pair is then computed and averaged to give the cosine distance of the generated virtual-user face images. If the latter is greater than the former, training continues; otherwise training ends.
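A sketch of this stopping rule, reusing `extract` and `cosine_distance` from the loss sketches above:

```python
import torch

def training_finished(extract, cosine_distance, real_pairs, fake_pairs):
    """End training once generated same-user pairs are no farther apart than real ones.

    real_pairs / fake_pairs: lists of (image_batch_1, image_batch_2) for the
    current batch's real same-user pairs and their generated counterparts.
    """
    real_d = torch.stack([cosine_distance(extract(a), extract(b)).mean()
                          for a, b in real_pairs]).mean()
    fake_d = torch.stack([cosine_distance(extract(a), extract(b)).mean()
                          for a, b in fake_pairs]).mean()
    return fake_d <= real_d
```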
S105: face de-identification generation:
The face image to be de-identified is normalized to the preset size to obtain a face image p'; its feature vector f' is extracted with the face feature extraction model, and the facial-feature region of p' is occluded with random noise to obtain an occluded face image, which is converted into a vector, combined with the feature vector f', and input to the generator of the generative adversarial network to obtain the de-identified face image p'*.
To further improve the quality of the generated face image, the generated image can be verified; the specific method comprises the following steps (a sketch of the whole check follows the list):
1) A different face image of the same user as the face image p' is acquired and normalized to the preset size to obtain a face image p''. In the same way, the feature vector f'' of p'' is extracted with the face feature extraction model, the facial-feature region of p'' is occluded with random noise to obtain an occluded face image, which is converted into a vector, combined with the feature vector f'', and input to the generator of the generative adversarial network to obtain the de-identified face image p''*.
2) The de-identified face image p'* is scored by the discriminator of the generative adversarial network; if the score is smaller than a preset threshold, p'* is considered insufficiently realistic and the verification fails; otherwise proceed to step 3).
3) The feature vectors of the de-identified face images p'* and p''* are extracted with the face feature extraction model, and their similarity is used to judge whether p'* and p''* come from the same user; if not, the verification fails, otherwise proceed to step 4).
4) The feature vectors of the face image p' and the de-identified face image p'* are extracted with the face feature extraction model, and their similarity is used to judge whether p' and p'* come from the same user; if they do, the verification fails, otherwise proceed to step 5).
5) The structural similarity between the face image p' and the de-identified face image p'* is computed; if it is greater than a preset threshold, the verification passes, otherwise it fails.
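For illustration, the five checks might be combined as below, with placeholder thresholds (the patent only says "preset threshold") and similarity taken as 1 - cosine distance; `disc`, `extract`, `cosine_distance` and `ssim` are the components sketched earlier:

```python
def verify(disc, extract, cosine_distance, ssim, p_orig, p_deid, p_deid_2,
           t_real=0.5, t_same=0.5, t_ssim=0.6):
    """Verify a de-identified image (steps 2-5); thresholds are placeholders.

    p_orig  : original face image p' (batch of one)
    p_deid  : its de-identified result p'*
    p_deid_2: de-identified result p''* from another image of the same user
    """
    if disc(p_deid).item() < t_real:                                 # step 2: realism
        return False
    same_virtual = 1 - cosine_distance(extract(p_deid), extract(p_deid_2)).item()
    if same_virtual < t_same:                                        # step 3: same virtual user
        return False
    still_original = 1 - cosine_distance(extract(p_orig), extract(p_deid)).item()
    if still_original >= t_same:                                     # step 4: must not match original
        return False
    return ssim(p_orig, p_deid).item() > t_ssim                      # step 5: structure retained
```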
To better illustrate the technical effect of the invention, it was verified experimentally on concrete examples. FIG. 2 compares original face images with the generated virtual-user face images in this embodiment. As shown in FIG. 2, each generated virtual-user face image differs markedly from the original, effectively protecting the user's privacy, while characteristics such as gender, race and skin color are retained, and the loss of sharpness relative to the original image remains within an acceptable range for applications.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand it, the present invention is not limited to the scope of these embodiments. Various changes apparent to those skilled in the art remain within the protection of the invention as long as they fall within the spirit and scope defined by the appended claims.
Claims (3)
1. A face de-identification generation method based on a generative adversarial network, characterized by comprising the following steps:
S1: acquiring N pairs of face images, wherein the two face images in each pair are different face images of the same user; adjusting each face image to a preset size and recording it as p_i^n, where p_i^n denotes the i-th face image of the n-th pair, i = 1, 2, n = 1, 2, …, N;
S2: inputting each face image p_i^n into a pre-trained face feature extraction model to obtain the corresponding feature vector, recorded as f_i^n;
occluding the facial-feature region of each face image p_i^n with random noise to obtain an occluded face image, converting it into a vector and combining it with the feature vector f_i^n of the face image as the input, and taking the original face image p_i^n as the real face image, the three together constituting one training sample;
S3: constructing a generative adversarial network comprising a generator and a discriminator, wherein the input of the generator is the combination of the occluded face image and the face-image feature vector, the output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image;
S4: training the generative adversarial network with the training samples obtained in step S2; in each batch of the training process, selecting several face-image pairs from the face-image pair set and using the corresponding training samples as the current batch; the losses adopted include the adversarial loss, gradient penalty loss, intra-user loss, inter-user loss, de-identification loss and structural similarity loss, computed respectively as follows:
using the discriminator of the generative adversarial network to score each real face image in the current batch and the corresponding generated virtual-user face image, and computing the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss L_D;
computing the gradient penalty of each training sample in the current batch and averaging the results to obtain the gradient penalty loss L_GP;
using the face feature extraction model to obtain the pair of feature vectors of the two generated virtual-user face images corresponding to each pair of face images in the current batch, computing the cosine distance between the two feature vectors of each pair, and averaging over all pairs to obtain the intra-user loss L_FFI;
randomly selecting K pairs of face images from the current batch such that the two face images of each pair belong to different users, using the face feature extraction model to obtain the pairs of feature vectors of the generated virtual-user face images corresponding to these K pairs, computing the cosine distance between the two feature vectors of each pair, and averaging over all pairs to obtain the inter-user loss L_FFO;
using the face feature extraction model to obtain the feature vectors of each face image in the current batch and of its corresponding generated virtual-user face image, computing the cosine distance between the two, and averaging to obtain the de-identification loss L_RF;
computing the structural similarity between each face image in the current batch and its corresponding generated virtual-user face image, and averaging to obtain the structural similarity loss L_S;
setting the discriminator loss to L_D - θ·L_GP and the generator loss to L_D + α·L_FFI + β·L_S - γ·L_RF + η·L_FFO, where θ, α, β, γ and η are preset parameters, and training the discriminator and the generator alternately;
S5: adjusting the face image to be de-identified to the preset size to obtain a face image p', extracting its feature vector f' with the face feature extraction model, occluding the facial-feature region of p' with random noise to obtain an occluded face image, converting it into a vector, combining it with the feature vector f', and inputting the combination to the generator of the generative adversarial network to obtain the de-identified face image p'*.
2. The face de-identification generation method according to claim 1, characterized in that the condition for ending the training of the generative adversarial network in step S4 is determined as follows: the cosine distance between the feature vectors of each pair of face images in the current batch is computed and averaged to give the cosine distance of the real face images; the cosine distance between the feature vectors of the generated virtual-user face images obtained from each pair is then computed and averaged to give the cosine distance of the generated virtual-user face images; if the latter is greater than the former, training continues, otherwise training ends.
3. The face de-identification generation method according to claim 1, characterized in that step S5 further comprises verifying the generated de-identified face image, with the following specific steps:
1) acquiring a different face image of the same user as the face image p' and adjusting it to the preset size to obtain a face image p''; extracting the feature vector f'' of p'' with the face feature extraction model, occluding the facial-feature region of p'' with random noise to obtain an occluded face image, converting it into a vector, combining it with the feature vector f'', and inputting the combination to the generator of the generative adversarial network to obtain the de-identified face image p''*;
2) scoring the de-identified face image p'* with the discriminator of the generative adversarial network; if the score is smaller than a preset threshold, the verification fails, otherwise proceed to step 3);
3) extracting the feature vectors of the de-identified face images p'* and p''* with the face feature extraction model, and judging from the similarity of the feature vectors whether p'* and p''* come from the same user; if not, the verification fails, otherwise proceed to step 4);
4) extracting the feature vectors of the face image p' and the de-identified face image p'* with the face feature extraction model, and judging from the similarity of the feature vectors whether p' and p'* come from the same user; if they do, the verification fails, otherwise proceed to step 5);
5) computing the structural similarity between the face image p' and the de-identified face image p'*; if it is greater than a preset threshold, the verification passes, otherwise it fails.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010343798.5A (granted as CN111476200B) | 2020-04-27 | 2020-04-27 | Face de-identification generation method based on generative adversarial network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010343798.5A (granted as CN111476200B) | 2020-04-27 | 2020-04-27 | Face de-identification generation method based on generative adversarial network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111476200A (en) | 2020-07-31 |
CN111476200B CN111476200B (en) | 2022-04-19 |
Family
ID=71755753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010343798.5A (Active, granted as CN111476200B) | Face de-identification generation method based on generative adversarial network | 2020-04-27 | 2020-04-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476200B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108573222A (en) * | 2018-03-28 | 2018-09-25 | 中山大学 | The pedestrian image occlusion detection method for generating network is fought based on cycle |
CN108520503A (en) * | 2018-04-13 | 2018-09-11 | 湘潭大学 | A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image |
CN108764207A (en) * | 2018-06-07 | 2018-11-06 | 厦门大学 | A kind of facial expression recognizing method based on multitask convolutional neural networks |
CN108829855A (en) * | 2018-06-21 | 2018-11-16 | 山东大学 | It is worn based on the clothing that condition generates confrontation network and takes recommended method, system and medium |
CN109840477A (en) * | 2019-01-04 | 2019-06-04 | 苏州飞搜科技有限公司 | Face identification method and device are blocked based on eigentransformation |
CN109815928A (en) * | 2019-01-31 | 2019-05-28 | 中国电子进出口有限公司 | A kind of face image synthesis method and apparatus based on confrontation study |
CN109886167A (en) * | 2019-02-01 | 2019-06-14 | 中国科学院信息工程研究所 | One kind blocking face identification method and device |
CN110085263A (en) * | 2019-04-28 | 2019-08-02 | 东华大学 | A kind of classification of music emotion and machine composing method |
CN110135366A (en) * | 2019-05-20 | 2019-08-16 | 厦门大学 | Pedestrian's recognition methods again is blocked based on multiple dimensioned generation confrontation network |
CN110598806A (en) * | 2019-07-29 | 2019-12-20 | 合肥工业大学 | Handwritten digit generation method for generating countermeasure network based on parameter optimization |
CN110728628A (en) * | 2019-08-30 | 2020-01-24 | 南京航空航天大学 | Face de-occlusion method for generating confrontation network based on condition |
Non-Patent Citations (4)
Title |
---|
FEI PENG et al.: "FD-GAN: Face De-Morphing Generative Adversarial Network for Restoring Accomplice's Facial Image", Special Section on Digital Forensics Through Multimedia Source Inference *
TSUNG-YI LIN et al.: "Focal Loss for Dense Object Detection", 2017 IEEE International Conference on Computer Vision *
WANG Suqin et al.: "Occluded facial expression recognition based on generative adversarial networks", Application Research of Computers *
JIA Di et al.: "A survey of image matching methods", Journal of Image and Graphics *
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111950635A (en) * | 2020-08-12 | 2020-11-17 | 温州大学 | Robust feature learning method based on hierarchical feature alignment |
CN111950635B (en) * | 2020-08-12 | 2023-08-25 | 温州大学 | Robust feature learning method based on layered feature alignment |
CN112084962A (en) * | 2020-09-11 | 2020-12-15 | 贵州大学 | Face privacy protection method based on generation type countermeasure network |
CN112307514A (en) * | 2020-11-26 | 2021-02-02 | 哈尔滨工程大学 | Difference privacy greedy grouping method adopting Wasserstein distance |
CN112307514B (en) * | 2020-11-26 | 2023-08-01 | 哈尔滨工程大学 | Differential privacy greedy grouping method adopting Wasserstein distance |
CN112668401B (en) * | 2020-12-09 | 2023-01-17 | 中国科学院信息工程研究所 | Face privacy protection method and device based on feature decoupling |
CN112668401A (en) * | 2020-12-09 | 2021-04-16 | 中国科学院信息工程研究所 | Face privacy protection method and device based on feature decoupling |
CN112613445A (en) * | 2020-12-29 | 2021-04-06 | 深圳威富优房客科技有限公司 | Face image generation method and device, computer equipment and storage medium |
CN112613445B (en) * | 2020-12-29 | 2024-04-30 | 深圳威富优房客科技有限公司 | Face image generation method, device, computer equipment and storage medium |
CN112734436A (en) * | 2021-01-08 | 2021-04-30 | 支付宝(杭州)信息技术有限公司 | Terminal and method for supporting face recognition |
CN112949535B (en) * | 2021-03-15 | 2022-03-11 | 南京航空航天大学 | Face data identity de-identification method based on generative confrontation network |
CN112949535A (en) * | 2021-03-15 | 2021-06-11 | 南京航空航天大学 | Face data identity de-identification method based on generative confrontation network |
CN113657350A (en) * | 2021-05-12 | 2021-11-16 | 支付宝(杭州)信息技术有限公司 | Face image processing method and device |
CN112926559A (en) * | 2021-05-12 | 2021-06-08 | 支付宝(杭州)信息技术有限公司 | Face image processing method and device |
CN113033511A (en) * | 2021-05-21 | 2021-06-25 | 中国科学院自动化研究所 | Face anonymization method based on control decoupling identity representation |
CN113486839A (en) * | 2021-07-20 | 2021-10-08 | 支付宝(杭州)信息技术有限公司 | Encryption model training, image encryption and encrypted face image recognition method and device |
CN113486839B (en) * | 2021-07-20 | 2024-10-22 | 支付宝(杭州)信息技术有限公司 | Encryption model training, image encryption and encrypted face image recognition method and device |
CN113705410A (en) * | 2021-08-20 | 2021-11-26 | 陈成 | Face image desensitization processing and verifying method and system |
CN114049417A (en) * | 2021-11-12 | 2022-02-15 | 北京字节跳动网络技术有限公司 | Virtual character image generation method and device, readable medium and electronic equipment |
CN114049417B (en) * | 2021-11-12 | 2023-11-24 | 抖音视界有限公司 | Virtual character image generation method and device, readable medium and electronic equipment |
CN115617882A (en) * | 2022-12-20 | 2023-01-17 | 粤港澳大湾区数字经济研究院(福田) | Time sequence diagram data generation method and system with structural constraint based on GAN |
Also Published As
Publication number | Publication date |
---|---|
CN111476200B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111476200B (en) | 2022-04-19 | Face de-identification generation method based on generative adversarial network | |
CN110457994B (en) | Face image generation method and device, storage medium and computer equipment | |
CN111340008B (en) | Method and system for generation of counterpatch, training of detection model and defense of counterpatch | |
CN111460494B (en) | Multi-mode deep learning-oriented privacy protection method and system | |
CN106096582B (en) | Distinguish real and flat surfaces | |
CN109858392B (en) | Automatic face image identification method before and after makeup | |
US10223612B2 (en) | Frame aggregation network for scalable video face recognition | |
CN109712095B (en) | Face beautifying method with rapid edge preservation | |
CN110929836B (en) | Neural network training and image processing method and device, electronic equipment and medium | |
CN113705290A (en) | Image processing method, image processing device, computer equipment and storage medium | |
WO2022166797A1 (en) | Image generation model training method, generation method, apparatus, and device | |
CN111275784A (en) | Method and device for generating image | |
CN112508782A (en) | Network model training method, face image super-resolution reconstruction method and equipment | |
WO2024051480A1 (en) | Image processing method and apparatus, computer device, and storage medium | |
CN118196231B (en) | Lifelong learning draft method based on concept segmentation | |
CN115984930A (en) | Micro expression recognition method and device and micro expression recognition model training method | |
CN109961152B (en) | Personalized interaction method and system of virtual idol, terminal equipment and storage medium | |
CN111368763A (en) | Image processing method and device based on head portrait and computer readable storage medium | |
CN113297624B (en) | Image preprocessing method and device | |
CN115311595B (en) | Video feature extraction method and device and electronic equipment | |
Valenzuela et al. | Expression transfer using flow-based generative models | |
CN116844008A (en) | Attention mechanism guided content perception non-reference image quality evaluation method | |
CN113327212B (en) | Face driving method, face driving model training device, electronic equipment and storage medium | |
CN115708135A (en) | Face recognition model processing method, face recognition method and device | |
CN114821203B (en) | Fine-grained image model training and identifying method and device based on consistency loss |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||