CN111476200B - Face de-identification generation method based on a generative adversarial network

Face de-identification generation method based on a generative adversarial network

Info

Publication number
CN111476200B
Authority
CN
China
Prior art keywords: face image, face, loss, image, training
Legal status: Active
Application number: CN202010343798.5A
Other languages: Chinese (zh)
Other versions: CN111476200A
Inventors: 孙铭佑, 王晓玲
Current Assignee: East China Normal University
Original Assignee: East China Normal University
Application filed by East China Normal University
Priority to CN202010343798.5A
Publication of CN111476200A
Application granted
Publication of CN111476200B

Classifications

    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/172 Classification, e.g. identification
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06T 7/60 Analysis of geometric attributes
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30201 Face

Abstract

The invention discloses a face de-identification generation method based on a generative adversarial network. N pairs of face images are acquired and a feature vector is extracted from each face image; the facial-feature region of each face image is occluded with random noise to obtain an occluded face image; the occluded face image is combined with the corresponding face-image feature vector as the input of the generator of the generative adversarial network, while the original face image serves as the real face image for the discriminator, forming a training sample. The generator and the discriminator are trained with these samples. After training, in the application stage, the occluded face image and the feature vector of each face image to be de-identified are obtained in the same way, then combined and input to the generator of the trained face de-identification generation model to obtain a de-identified face image. The invention can generate high-quality virtual-user face images while protecting user privacy.

Description

Face de-identification generation method based on a generative adversarial network
Technical Field
The invention belongs to the technical field of face recognition, and in particular relates to a face de-identification generation method based on a generative adversarial network.
Background
With the rapid development of network information technology, face recognition technology has gradually spread from academia to government and industry and plays an important role in more and more applications, where it typically replaces or assists identity cards, passwords and other credentials in verifying a user's identity. However, both the training and the practical application of face recognition models require large amounts of high-quality labeled data, and such data carries the personal portrait privacy of users; that privacy is compromised whenever a third-party operator acquires the data during training or use. This gives rise to the need for face de-identification generation: providing a unique identification of the user without revealing individual privacy, so that face recognition models can still be trained and applied in practice.
A face de-identification generation method mainly comprises two parts: face de-identification and face image generation. Traditional methods, such as K-anonymity, usually focus on the de-identification part and have several defects. First, although the de-identified data meets the de-identification requirement, it can no longer uniquely identify the user, so it cannot be used for training or applying a face recognition model and its practical value is low. Second, the resulting images have poor definition and are blurry, differing considerably from real face images. In addition, for different pictures of the same user, factors such as differing shooting environments may make the desensitized pictures quite different from one another, that is, much of the user's characteristic information is lost.
Therefore, merely completing face de-identification cannot satisfy the actual requirements of face data use. In practice, a data owner needs to guarantee properties such as the unique identifiability of the data while ensuring that the user's portrait privacy is not revealed, with sufficiently high definition and enough preserved feature information for training face recognition models and for actual face recognition applications. At present, however, the industry offers no effective solution to this requirement.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a face de-identification generation method based on a generative adversarial network which not only ensures that the user's personal information is not leaked, but also ensures that the pictures have high definition and retain the user's characteristics as much as possible, so that they can be used for training and applying face recognition models.
In order to achieve the above object, the face de-identification generation method based on a generative adversarial network according to the present invention comprises the following steps:
S1: acquiring N pairs of face images, wherein the two face images in each pair are different face images of the same user; adjusting each face image to a preset size and denoting it p_i^n, where p_i^n represents the i-th face image in the n-th pair, i = 1, 2, n = 1, 2, …, N;
S2: inputting each face image p_i^n into a pre-trained face feature extraction model to obtain the corresponding feature vector f_i^n; occluding the facial-feature region of each face image p_i^n with random noise to obtain the occluded face image p̃_i^n; converting p̃_i^n into a vector and combining it with the feature vector f_i^n as the input, with the original face image p_i^n as the real face image, to form the training-sample triplet (p̃_i^n, f_i^n, p_i^n);
S3: constructing a generative adversarial network comprising a generator and a discriminator, wherein the input of the generator is the combination of an occluded face image and a face-image feature vector, the output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image;
S4: training the generative adversarial network with the training samples obtained in step S2, wherein in each batch of the training process several face images are selected from the set of face-image pairs and the corresponding training samples serve as the current batch; the losses used include the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural-similarity loss, computed as follows:
the adversarial loss: use the discriminator of the generative adversarial network to obtain the score of each real face image in the current batch and the score of the corresponding generated virtual-user face image, and compute the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss LD;
the gradient penalty loss: compute the gradient penalty for each training sample in the current batch and take the average as the gradient penalty loss LGP;
the intra-user loss: use the face feature extraction model to obtain, for each pair of face images in the current batch, the pair of feature vectors of the corresponding generated virtual-user face images; compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the intra-user loss LFFI;
the inter-user loss: randomly select K pairs of face images from the current batch such that the two images of each pair belong to different users; use the face feature extraction model to obtain the feature-vector pairs of the generated virtual-user face images corresponding to these K different-user pairs; compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the inter-user loss LFFO;
the de-identification loss: use the face feature extraction model to obtain, for each face image in the current batch, the feature vectors of the face image and of its generated virtual-user face image; compute the cosine distance between them, and average over the batch to obtain the de-identification loss LRF;
the structural-similarity loss: compute the structural similarity between each face image in the current batch and its generated virtual-user face image, and take the average as the structural-similarity loss Ls;
setting the discriminator loss to LD - θ·LGP and the generator loss to LD + α·LFFI + β·Ls - γ·LRF + η·LFFO, where θ, α, β, γ and η are preset parameters, and training the discriminator and the generator alternately;
S5: adjusting the face image to be de-identified to the preset size to obtain the face image p′; extracting its feature vector f′ with the face feature extraction model; occluding the facial-feature region of p′ with random noise to obtain the occluded face image p̃′; converting p̃′ into a vector, combining it with f′, and inputting the result to the generator of the trained generative adversarial network to obtain the de-identified face image p′*.
In the face de-identification generation method based on a generative adversarial network according to the invention, N pairs of face images are acquired; each face image is input into a pre-trained face feature extraction model to obtain its feature vector; the facial-feature region of each face image is occluded with random noise to obtain an occluded face image; the occluded face image is combined with the corresponding face-image feature vector as the input of the generator of the generative adversarial network, while the original face image serves as the real face image for the discriminator, forming a training sample; the generator and the discriminator are trained with these samples. After training, in the application stage, for each face picture to be de-identified, the occluded face image and the face-image feature vector are obtained in the same way, then combined and input to the generator of the trained face de-identification generation model to obtain a de-identified face image.
The invention can use the face images of real users to generate high-quality virtual-user face images. On the basis of satisfying de-identification, it preserves to the greatest extent attributes weakly correlated with identity, such as gender, race and skin color, so that the statistics of the user set are unaffected; and the generated face images of the same user still belong to the same virtual user and can be used for training and applying face recognition models. The de-identified generated face images thus remain highly usable while the user's privacy is protected.
Drawings
FIG. 1 is a flow chart of an embodiment of the face de-identification generation method based on a generative adversarial network according to the present invention;
FIG. 2 is a comparison of original face images and generated virtual-user face images in this embodiment.
Detailed Description
The following describes embodiments of the present invention with reference to the accompanying drawings so that those skilled in the art can better understand the invention. It should be noted that in the following description, detailed descriptions of known functions and designs are omitted where they might obscure the subject matter of the invention.
Examples
FIG. 1 is a flow chart of an embodiment of the face de-identification generation method based on a generative adversarial network according to the invention. As shown in FIG. 1, the method specifically comprises the following steps:
S101: Acquire face image samples:
Acquire N pairs of face images, where the two face images in each pair are different face images of the same user. Normalize each face image to a preset size and denote it p_i^n, where p_i^n represents the i-th face image in the n-th pair, i = 1, 2, n = 1, 2, …, N.
S102: Obtain training samples:
Input each face image p_i^n into a pre-trained face feature extraction model to obtain the corresponding feature vector f_i^n. Occlude the facial-feature region of each face image p_i^n with random noise to obtain the occluded face image p̃_i^n. Convert p̃_i^n into a vector and combine it with the feature vector f_i^n as the input, with the original face image p_i^n as the real face image, forming the training-sample triplet (p̃_i^n, f_i^n, p_i^n).
In the present invention, the occluded face image p̃_i^n provides the model with background information unrelated to the face, which should be preserved as much as possible during generation, while the feature vector of the face image provides the face information used for de-identification generation.
Through the random-noise occlusion, the user's original face information is not input directly into the generative adversarial network; instead, the network learns to generate, from the original face feature vector, a virtual user face different from the original one. Because the original face features of the same user are similar, the network preserves this similarity during training, so that the images obtained by de-identification generation still belong to the same virtual user.
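For illustration only, here is a minimal PyTorch sketch of this sampling step; the preset image size, the fixed occlusion box and the `feature_extractor` callable are assumptions, since the patent does not fix them:

```python
import torch

IMG_SIZE = 128   # assumed preset size (not fixed by the patent)
FEAT_DIM = 512   # assumed feature-vector dimension

def occlude_with_noise(img: torch.Tensor, box=(32, 96, 24, 104)) -> torch.Tensor:
    """Replace the facial-feature region (eyes, nose, mouth) of a (C, H, W)
    image in [0, 1] with uniform random noise. The fixed box is an assumed
    stand-in; in practice it could come from a landmark detector."""
    top, bottom, left, right = box
    occluded = img.clone()
    occluded[:, top:bottom, left:right] = torch.rand_like(
        occluded[:, top:bottom, left:right])
    return occluded

def make_training_sample(img: torch.Tensor, feature_extractor):
    """Build one training triplet (occluded image, feature vector, original)."""
    f = feature_extractor(img.unsqueeze(0)).squeeze(0)  # feature vector f_i^n
    p_occ = occlude_with_noise(img)                     # occluded image
    return p_occ, f, img                                # (p̃_i^n, f_i^n, p_i^n)
```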
S103: constructing and generating a countermeasure network:
Construct the generative adversarial network comprising a generator and a discriminator. The input of the generator is the combination of the occluded face image and the face-image feature vector, its output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image.
S104: training generates a confrontation network:
Train the generative adversarial network with the training samples obtained in step S102; in each batch of the training process, several face images are selected from the set of face-image pairs and the corresponding training samples serve as the current batch. Because the choice of losses is crucial when training a generative adversarial network, and in order to improve the performance of the trained network, the losses used in the invention comprise the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural-similarity loss, computed as follows:
Adversarial loss:
Use the discriminator of the generative adversarial network to obtain the score of each real face image in the current batch and the score of the corresponding generated virtual-user face image, and compute the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss LD.
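A minimal sketch of this term under a WGAN-style reading, where the empirical Wasserstein estimate is the difference of mean discriminator scores; the sign convention (real minus generated, so that the discriminator maximizes LD and the generator minimizes it, consistent with step S104) is our assumption:

```python
def adversarial_loss(discriminator, real_imgs, fake_imgs):
    """LD as the difference of mean discriminator scores (empirical
    Wasserstein estimate): real minus generated, so the discriminator
    maximizes LD while the generator minimizes it."""
    return discriminator(real_imgs).mean() - discriminator(fake_imgs).mean()
```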
Gradient penalty loss:
Compute the gradient penalty for each training sample in the current batch and take the average as the gradient penalty loss LGP. The gradient penalty is a standard component of generative adversarial network training, so its computation is not detailed here.
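Since the patent leaves the penalty unspecified, the sketch below uses the usual WGAN-GP formulation (penalizing deviation of the discriminator's gradient norm from 1 on random interpolates); this concrete choice is an assumption:

```python
import torch

def gradient_penalty(discriminator, real_imgs, fake_imgs):
    """WGAN-GP penalty: push the discriminator's gradient norm toward 1 on
    random interpolates between real and generated images."""
    eps = torch.rand(real_imgs.size(0), 1, 1, 1, device=real_imgs.device)
    interp = (eps * real_imgs + (1 - eps) * fake_imgs).requires_grad_(True)
    scores = discriminator(interp)
    grads = torch.autograd.grad(scores.sum(), interp, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()
```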
Intra-user loss:
Use the face feature extraction model to obtain, for each pair of face images in the current batch, the pair of feature vectors of the corresponding generated virtual-user face images; compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the intra-user loss LFFI.
Inter-user loss:
Randomly select K pairs of face images from the current batch such that the two images of each pair belong to different users; use the face feature extraction model to obtain the feature-vector pairs of the generated virtual-user face images corresponding to these K different-user pairs; compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the inter-user loss LFFO.
De-identification loss:
Use the face feature extraction model to obtain, for each face image in the current batch, the feature vectors of the face image and of its generated virtual-user face image; compute the cosine distance between them, and average over the batch to obtain the de-identification loss LRF.
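LFFI, LFFO and LRF are thus all averaged cosine distances between feature-vector pairs; only the pairing differs. A minimal sketch, taking cosine distance as one minus cosine similarity (the patent does not spell out the formula, so this reading is an assumption):

```python
import torch
import torch.nn.functional as F

def mean_cosine_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Average cosine distance over row-wise pairs of feature vectors."""
    return (1.0 - F.cosine_similarity(a, b, dim=1)).mean()

# Pairings per the text:
#   LFFI: the two generated images of the same user        -> same-user pairs
#   LFFO: generated images of K different-user pairs       -> cross-user pairs
#   LRF : each original image vs. its generated counterpart
```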
Structural-similarity loss:
Compute the Structural Similarity Index (SSIM) between each face image in the current batch and its generated virtual-user face image, and take the average as the structural-similarity loss Ls. Structural similarity combines the contrast, luminance and structure of the two images, and therefore measures their similarity well.
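A minimal sketch of this batch average using scikit-image's SSIM, assuming images are (H, W, C) arrays in [0, 1] and a scikit-image version that supports `channel_axis`:

```python
import numpy as np
from skimage.metrics import structural_similarity

def structural_similarity_loss(originals, generated):
    """Average SSIM between each original image and its generated
    virtual-user image, following the text's batch averaging."""
    scores = [structural_similarity(o, g, channel_axis=-1, data_range=1.0)
              for o, g in zip(originals, generated)]
    return float(np.mean(scores))
```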
Set the discriminator loss to LD - θ·LGP and the generator loss to LD + α·LFFI + β·Ls - γ·LRF + η·LFFO, where θ, α, β, γ and η are preset parameters, generally positive constants chosen according to actual needs. Train the discriminator and the generator alternately: fix the generator parameters and train the discriminator to maximize LD - θ·LGP; then fix the discriminator parameters and train the generator to minimize LD + α·LFFI + β·Ls - γ·LRF + η·LFFO; finally both converge to yield the final generation model. In this embodiment, θ = 5, α = 0.5, β = 0.1, γ = 0.1 and η = 0.25.
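Putting the pieces together, here is a minimal sketch of one alternating update with this embodiment's weights, reusing the loss sketches above. The batch layout (same-user pairs adjacent along dim 0), the cross-user pairing for LFFO, and the `pytorch_msssim` dependency for a differentiable SSIM are assumptions; the signs follow the patent's formulas verbatim:

```python
import torch
from pytorch_msssim import ssim  # assumed third-party differentiable SSIM

theta, alpha, beta, gamma, eta = 5.0, 0.5, 0.1, 0.1, 0.25  # this embodiment

def train_step(G, D, F_ext, opt_G, opt_D, batch):
    """One alternating update; batch = (p_occ, f, p_real), with the two
    images of each same-user pair adjacent along dim 0 (2B images, B pairs)."""
    p_occ, f, p_real = batch
    # Discriminator step: maximize LD - theta*LGP (ascent via negation).
    fake = G(p_occ, f).detach()
    d_loss = -(adversarial_loss(D, p_real, fake)
               - theta * gradient_penalty(D, p_real, fake))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()
    # Generator step: minimize LD + a*LFFI + b*Ls - g*LRF + e*LFFO.
    fake = G(p_occ, f)
    f_fake, f_real = F_ext(fake), F_ext(p_real)
    LFFI = mean_cosine_distance(f_fake[0::2], f_fake[1::2])  # same-user pairs
    LRF = mean_cosine_distance(f_fake, f_real)               # original vs. generated
    LFFO = mean_cosine_distance(f_fake, f_fake.roll(2, 0))   # assumed cross-user pairing
    Ls = ssim(fake, p_real, data_range=1.0)                  # structural similarity
    g_loss = (adversarial_loss(D, p_real, fake)
              + alpha * LFFI + beta * Ls - gamma * LRF + eta * LFFO)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```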
Through this combination of losses, the face images produced by the generative adversarial network can be distinguished from the original face images while retaining their characteristics, and the generated virtual-user face images obtained from different face images of the same user are kept as similar as possible.
The training-end condition can be set as required. In this embodiment it is determined as follows: compute the cosine distance between the feature vectors of the two images in each pair of face images in the current batch and average the results as the cosine distance of the real face images; then compute the cosine distance between the feature vectors of the two generated virtual-user face images obtained from each pair and average the results as the cosine distance of the generated virtual-user face images. If the latter is greater than the former, continue training; otherwise end the training.
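A minimal sketch of this stopping test, reusing `mean_cosine_distance` from above and the assumed pair-adjacent batch layout:

```python
def should_stop(F_ext, real_imgs, fake_imgs):
    """Stop once generated same-user pairs are no farther apart (in cosine
    distance) than the corresponding real pairs; pairs adjacent along dim 0."""
    f_r, f_f = F_ext(real_imgs), F_ext(fake_imgs)
    d_real = mean_cosine_distance(f_r[0::2], f_r[1::2])
    d_fake = mean_cosine_distance(f_f[0::2], f_f[1::2])
    return bool(d_fake <= d_real)
```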
S105: face de-recognition generation:
Normalize the face image to be de-identified to the preset size to obtain the face image p′; extract its feature vector f′ with the face feature extraction model; occlude the facial-feature region of p′ with random noise to obtain the occluded face image p̃′; convert p̃′ into a vector, combine it with f′, and input the result to the generator of the trained generative adversarial network to obtain the de-identified face image p′*.
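A minimal inference sketch, reusing `occlude_with_noise` from the sampling sketch; the trained generator G and feature extractor F_ext are assumed to be already loaded:

```python
import torch

@torch.no_grad()
def de_identify(G, F_ext, p: torch.Tensor) -> torch.Tensor:
    """p: (C, H, W) face image already resized to the preset size.
    Returns the de-identified image p'*."""
    f = F_ext(p.unsqueeze(0))                     # feature vector f'
    p_occ = occlude_with_noise(p).unsqueeze(0)    # occluded image
    return G(p_occ, f).squeeze(0)
```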
To further improve the quality of the generated face image, the generated image can be verified; the specific method comprises the following steps:
1) Acquire a different face image of the same user as p′ and normalize it to the preset size to obtain the face image p″. As before, extract the feature vector f″ of p″ with the face feature extraction model, occlude the facial-feature region of p″ with random noise to obtain the occluded face image p̃″, convert it into a vector, combine it with f″, and input the result to the generator of the generative adversarial network to obtain the de-identified face image p″*.
2) Use the discriminator of the generative adversarial network to score the de-identified face image p′*; if the score is below a preset threshold, p′* is considered insufficiently realistic and the verification fails; otherwise go to step 3).
3) Use the face feature extraction model to extract the feature vectors of the de-identified face images p′* and p″*, and judge from their similarity whether p′* and p″* come from the same user; if not, the verification fails; otherwise go to step 4).
4) Use the face feature extraction model to extract the feature vectors of the face image p′ and the de-identified face image p′*, and judge from their similarity whether p′ and p′* come from the same user; if they do, the verification fails; otherwise go to step 5).
5) Compute the structural similarity between the face image p′ and the de-identified face image p′*; if it is greater than a preset threshold, the verification passes; otherwise it fails.
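A minimal sketch of the five checks chained together, reusing `de_identify` from above; the three thresholds and the cosine-similarity "same user" test are assumptions (the patent only says "according to the similarity of the feature vectors"):

```python
import torch
import torch.nn.functional as F
from skimage.metrics import structural_similarity

SCORE_T, SAME_USER_T, SSIM_T = 0.0, 0.5, 0.3  # assumed thresholds

def same_user(F_ext, a, b, thresh=SAME_USER_T):
    """Assumed 'same user' test: cosine similarity of feature vectors."""
    sim = F.cosine_similarity(F_ext(a.unsqueeze(0)), F_ext(b.unsqueeze(0)))
    return sim.item() > thresh

@torch.no_grad()
def verify(G, D, F_ext, p1, p2) -> bool:
    """p1, p2: two preset-size images of the same real user."""
    g1 = de_identify(G, F_ext, p1)            # p'*
    g2 = de_identify(G, F_ext, p2)            # p''*
    if D(g1.unsqueeze(0)).item() < SCORE_T:   # 2) realistic enough?
        return False
    if not same_user(F_ext, g1, g2):          # 3) same virtual user?
        return False
    if same_user(F_ext, p1, g1):              # 4) must NOT match the real user
        return False
    s = structural_similarity(                # 5) background preserved?
        p1.permute(1, 2, 0).cpu().numpy(),
        g1.permute(1, 2, 0).cpu().numpy(),
        channel_axis=-1, data_range=1.0)
    return s > SSIM_T
```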
To better illustrate the technical effect of the invention, it was verified experimentally with concrete examples. FIG. 2 compares the original face images with the generated virtual-user face images in this embodiment. As shown in FIG. 2, the generated virtual-user face images differ clearly from the originals, so the user's privacy is effectively protected, yet attributes of the original image such as gender, race and skin color are retained, and the loss of definition relative to the original image remains within an acceptable range for applications.
Although illustrative embodiments of the present invention have been described above to help those skilled in the art understand it, the present invention is not limited to the scope of those embodiments. Various changes that remain within the spirit and scope of the invention as defined and determined by the appended claims will be apparent to those skilled in the art, and all matter that makes use of the inventive concept is protected.

Claims (3)

1. A face de-identification generation method based on a generative adversarial network, characterized by comprising the following steps:
S1: acquiring N pairs of face images, wherein the two face images in each pair are different face images of the same user; adjusting each face image to a preset size and denoting it p_i^n, where p_i^n represents the i-th face image in the n-th pair, i = 1, 2, n = 1, 2, …, N;
S2: inputting each face image p_i^n into a pre-trained face feature extraction model to obtain the corresponding feature vector f_i^n; occluding the facial-feature region of each face image p_i^n with random noise to obtain the occluded face image p̃_i^n; converting p̃_i^n into a vector and combining it with the feature vector f_i^n as the input, with the original face image p_i^n as the real face image, to form the training-sample triplet (p̃_i^n, f_i^n, p_i^n);
S3: constructing a generative adversarial network comprising a generator and a discriminator, wherein the input of the generator is the combination of an occluded face image and a face-image feature vector, the output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image;
S4: training the generative adversarial network with the training samples obtained in step S2, wherein in each batch of the training process several face images are selected from the set of face-image pairs and the corresponding training samples serve as the current batch; the losses used include the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural-similarity loss, computed as follows:
the adversarial loss: using the discriminator of the generative adversarial network, obtain the score of each real face image in the current batch and the score of the corresponding generated virtual-user face image, and compute the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss LD;
the gradient penalty loss: compute the gradient penalty for each training sample in the current batch and take the average as the gradient penalty loss LGP;
the intra-user loss: using the face feature extraction model, obtain for each pair of face images in the current batch the pair of feature vectors of the corresponding generated virtual-user face images, compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the intra-user loss LFFI;
the inter-user loss: randomly select K pairs of face images from the current batch such that the two images of each pair belong to different users, obtain with the face feature extraction model the feature-vector pairs of the generated virtual-user face images corresponding to these K different-user pairs, compute the cosine distance between the two feature vectors of each pair, and average over all pairs to obtain the inter-user loss LFFO;
the de-identification loss: using the face feature extraction model, obtain for each face image in the current batch the feature vectors of the face image and of its generated virtual-user face image, compute the cosine distance between them, and average over the batch to obtain the de-identification loss LRF;
the structural-similarity loss: compute the structural similarity between each face image in the current batch and its generated virtual-user face image, and take the average as the structural-similarity loss Ls;
setting the discriminator loss to LD - θ·LGP and the generator loss to LD + α·LFFI + β·Ls - γ·LRF + η·LFFO, where θ, α, β, γ and η are preset parameters, and training the discriminator and the generator alternately;
S5: adjusting the face image to be de-identified to the preset size to obtain the face image p′; extracting its feature vector f′ with the face feature extraction model; occluding the facial-feature region of p′ with random noise to obtain the occluded face image p̃′; converting p̃′ into a vector, combining it with f′, and inputting the result to the generator of the trained generative adversarial network to obtain the de-identified face image p′*.
2. The face de-identification generation method according to claim 1, wherein the condition for ending the training of the generative adversarial network in step S4 is determined as follows: compute the cosine distance between the feature vectors of the two images in each pair of face images in the current batch and average the results as the cosine distance of the real face images; then compute the cosine distance between the feature vectors of the two generated virtual-user face images obtained from each pair and average the results as the cosine distance of the generated virtual-user face images; if the latter is greater than the former, continue training, otherwise end the training.
3. The face de-identification generation method according to claim 1, wherein step S5 further comprises verifying the generated de-identified face image, with the specific steps of:
1) acquiring a different face image of the same user as p′ and adjusting it to the preset size to obtain the face image p″; extracting the feature vector f″ of p″ with the face feature extraction model, occluding the facial-feature region of p″ with random noise to obtain the occluded face image p̃″, converting it into a vector, combining it with f″, and inputting the result to the generator of the generative adversarial network to obtain the de-identified face image p″*;
2) using the discriminator of the generative adversarial network to score the de-identified face image p′*; if the score is below a preset threshold, the verification fails; otherwise go to step 3);
3) extracting the feature vectors of the de-identified face images p′* and p″* with the face feature extraction model, and judging from their similarity whether p′* and p″* come from the same user; if not, the verification fails; otherwise go to step 4);
4) extracting the feature vectors of the face image p′ and the de-identified face image p′* with the face feature extraction model, and judging from their similarity whether p′ and p′* come from the same user; if they do, the verification fails; otherwise go to step 5);
5) computing the structural similarity between the face image p′ and the de-identified face image p′*; if it is greater than a preset threshold, the verification passes; otherwise it fails.
CN202010343798.5A (priority and filing date 2020-04-27), Face de-identification generation method based on a generative adversarial network, Active, granted as CN111476200B


Publications (2)

CN111476200A (2020-07-31)
CN111476200B (2022-04-19)

Family

ID=71755753

Country Status: CN (CN111476200B)



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant