CN111476200A - Face de-identification generation method based on a generative adversarial network - Google Patents
- Publication number: CN111476200A
- Application number: CN202010343798.5A
- Authority: CN (China)
- Prior art keywords: face image, face, loss, feature vector, user
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
- G06V40/168 — Feature extraction; Face representation
- G06V40/172 — Classification, e.g. identification
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06T7/60 — Analysis of geometric attributes
- G06T2207/20081 — Training; Learning
- G06T2207/30201 — Face
Abstract
The invention discloses a face de-identification generation method based on a generative adversarial network. N pairs of face images are acquired, and a feature vector is extracted from each face image. The facial-feature region of each face image is occluded with random noise to obtain an occluded face image, which is combined with the feature vector of the corresponding face image; the combination serves as the input to the generator of the generative adversarial network, while the original face image serves as the real image for the discriminator, so that each image yields one training sample. The generator and discriminator are trained on these samples. After training, in the application stage, the occluded face image and the feature vector are likewise obtained for each face image to be de-identified, combined, and fed into the generator of the trained face de-identification generation model to obtain the de-identified face image. The invention can generate high-quality face images of virtual users while protecting user privacy.
Description
Technical Field
The invention belongs to the technical field of face recognition and, more specifically, relates to a face de-identification generation method based on a generative adversarial network.
Background Art
With the rapid development of network information technology, face recognition technology has gradually spread from academia to government and industry, and plays an important role in a growing number of applications, typically replacing or supplementing ID cards, passwords and other credentials for verifying a user's identity. However, both the training and the practical deployment of face recognition models require large amounts of high-quality labelled data, and such data usually carry the user's private portrait information; if a third-party operator obtains these data during training or use, the user's privacy is compromised. This demand gives rise to face de-identification generation: uniquely identifying a user without revealing personal privacy, so that face recognition models can still be trained and deployed in practice.
A face de-identification generation method consists of two parts: face de-identification and face image generation. Traditional methods tend to focus on the de-identification part, for example k-anonymity. These methods have several shortcomings. First, although the de-identified data satisfy the de-identification requirement, they can no longer uniquely identify the user, so they cannot be used for training or deploying face recognition models, and their practical value is low. Second, the resulting images are of poor clarity and rather blurred, differing substantially from real face images. In addition, different face images of the same user may become completely different after desensitization because of differences in shooting environment, attire and other factors; that is, much of the user's feature information is lost.
Therefore, merely completing the de-identification task cannot satisfy the practical requirements of face data. In practice, the data owner must guarantee that the user's portrait privacy is not leaked while still preserving properties such as unique identification, sufficient clarity, and enough feature information for training and deploying face recognition models. At present the industry has no effective solution to this demand.
Summary of the Invention
The purpose of the present invention is to overcome the deficiencies of the prior art and provide a face de-identification generation method based on a generative adversarial network, which ensures both that the privacy of the user's personal information is not leaked and that the generated image has sufficient clarity and preserves the user's features as far as possible, so that it can be used for training and deploying face recognition models.
To achieve the above purpose, the face de-identification generation method based on a generative adversarial network of the present invention comprises the following steps:
S1: Acquire N pairs of face images, where the two face images in each pair are different face images of the same user, and resize each face image to a preset size. Denote by p_n^i the i-th face image in the n-th pair, i = 1, 2, n = 1, 2, ..., N.
S2: Input each face image p_n^i into a pre-trained face feature extraction model to obtain the corresponding feature vector, denoted f_n^i.

Occlude the facial-feature region of each face image p_n^i with random noise to obtain the occluded face image, denoted p̃_n^i. Convert it to a vector and combine it with the feature vector f_n^i as the input, and take the original face image p_n^i as the real face image, forming a training-sample triplet (p̃_n^i, f_n^i, p_n^i).
S3: Construct a generative adversarial network comprising a generator and a discriminator, where the input of the generator is the combination of the occluded face image and the face image feature vector, its output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image.
S4: Train the generative adversarial network with the training samples obtained in step S2. During training, each batch selects several pairs of face images from the set of face image pairs and takes the corresponding training samples as the current batch. The losses used include the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural similarity loss, computed as follows:

The adversarial loss: use the discriminator of the generative adversarial network to score each real face image in the current batch and the generated virtual-user face image corresponding to it, and compute the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss L_D.

The gradient penalty loss: compute the gradient penalty of each training sample in the current batch and average them as the gradient penalty loss L_GP.

The intra-user loss: use the face feature extraction model to obtain, for each pair of face images in the current batch, the pair of feature vectors of the corresponding generated virtual-user face images; compute the cosine distance between the two feature vectors of each pair and average over all pairs as the intra-user loss L_FFI.

The inter-user loss: randomly select K pairs of face images from the current batch such that the two face images of each pair belong to different users; use the face feature extraction model to obtain the feature-vector pairs of the generated virtual-user face images corresponding to these K pairs, compute the cosine distance within each pair, and average over all pairs as the inter-user loss L_FFO.

The de-identification loss: use the face feature extraction model to obtain the feature vector of each face image in the current batch and of its corresponding generated virtual-user face image; compute the cosine distance between each such pair of feature vectors and average them as the de-identification loss L_RF.

The structural similarity loss: compute the structural similarity between each face image in the current batch and its corresponding generated virtual-user face image, and average them as the structural similarity loss L_S.

Set the discriminator loss to L_D - θL_GP and the generator loss to L_D + αL_FFI + βL_S - γL_RF + ηL_FFO, where θ, α, β, γ, η are preset parameters, and train the discriminator and the generator alternately.
S5: Resize the face image to be de-identified to the preset size to obtain the face image p′, and extract its feature vector f′ with the face feature extraction model. Occlude the facial-feature region of p′ with random noise to obtain the occluded face image p̃′, convert it to a vector, combine it with f′, and input the combination into the generator of the generative adversarial network to obtain the de-identified face image p′*.
According to the face de-identification generation method based on a generative adversarial network of the present invention, N pairs of face images are acquired, each face image is input into a pre-trained face feature extraction model to obtain its feature vector, and the facial-feature region of each face image is occluded with random noise to obtain an occluded face image. The occluded face image is combined with the corresponding feature vector as the input of the generator of the generative adversarial network, while the original face image serves as the real face image for the discriminator, forming one training sample; the generator and discriminator are trained with these samples. After training, in the application stage, the occluded face image and the feature vector are likewise obtained for each face image to be de-identified, combined, and input into the generator of the trained face de-identification generation model to obtain the de-identified face image.

With the present invention, high-quality face images of virtual users can be generated from available real-user face images. On the basis of satisfying de-identification, features weakly related to identity, such as gender, race and skin colour, are preserved to the greatest extent, so that the statistics of the user set are unaffected; moreover, images generated from the same user all belong to the same virtual user, which guarantees that the generated face images can still be used for training and deploying face recognition models. User privacy is thus protected while the de-identified generated face images remain highly usable.
Brief Description of the Drawings
FIG. 1 is a flow chart of a specific embodiment of the face de-identification generation method based on a generative adversarial network of the present invention;

FIG. 2 is a comparison of original face images and generated virtual-user face images in this embodiment.
Detailed Description of Embodiments
Specific embodiments of the present invention are described below with reference to the accompanying drawings, so that those skilled in the art can better understand the present invention. Note that, in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the present invention.
Embodiment
FIG. 1 is a flow chart of a specific embodiment of the face de-identification generation method based on a generative adversarial network of the present invention. As shown in FIG. 1, the method comprises the following specific steps:
S101: Acquire face image samples.

Acquire N pairs of face images, where the two face images of each pair are different face images of the same user, and normalize each face image to a preset size. Denote by p_n^i the i-th face image in the n-th pair, i = 1, 2, n = 1, 2, ..., N.
S102: Obtain training samples.

Input each face image p_n^i into the pre-trained face feature extraction model to obtain the corresponding feature vector f_n^i.

Occlude the facial-feature region of each face image p_n^i with random noise to obtain the occluded face image p̃_n^i. Convert it to a vector and combine it with the feature vector f_n^i as the input, and take the original face image p_n^i as the real face image, forming a training-sample triplet (p̃_n^i, f_n^i, p_n^i).

In the present invention, the occluded face image provides the model with background information unrelated to the face, which is preserved as far as possible during generation, while the feature vector of the face image provides the face information used for de-identified generation.

Because of the random-noise occlusion, the user's original face information is not fed into the generative adversarial network directly; instead, the network learns to generate, from the original face feature vector, a virtual-user face that differs from the original face. Since the original face features of the same user are similar, the network can preserve this similarity during training, so that the images of one user still belong to the same virtual user after de-identified generation.
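The occlusion step above can be sketched as follows. This is a minimal NumPy illustration that assumes the facial-feature region is already given as a hypothetical bounding box; the patent does not specify how that region is located.

```python
import numpy as np

def occlude_face(image, box, rng=None):
    """Replace the facial-feature region box = (top, bottom, left, right)
    with uniform random noise, leaving the background untouched."""
    rng = rng if rng is not None else np.random.default_rng(0)
    occluded = image.copy()
    t, b, l, r = box
    occluded[t:b, l:r] = rng.uniform(0.0, 1.0, size=occluded[t:b, l:r].shape)
    return occluded

# hypothetical 8x8 grayscale "face" whose features occupy the centre 4x4
face = np.zeros((8, 8))
masked = occlude_face(face, (2, 6, 2, 6))
# a training triplet would then pair the occluded image with the feature
# vector and the original image: (masked.ravel(), feature_vector, face)
```

The background rows stay untouched while the boxed region becomes noise, matching the intent that only face-unrelated context passes through unchanged.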
S103: Construct the generative adversarial network.

Construct a generative adversarial network comprising a generator and a discriminator, where the input of the generator is the combination of the occluded face image and the face image feature vector, its output is a generated virtual-user face image, and the real image used by the discriminator is the corresponding original face image.
S104: Train the generative adversarial network.

Train the generative adversarial network with the training samples obtained in step S102. During training, each batch selects several pairs of face images from the set of face image pairs and takes the corresponding training samples as the current batch. Since the choice of losses is very important for training a generative adversarial network, and in order to improve the performance of the trained network, the losses used in the present invention include the adversarial loss, the gradient penalty loss, the intra-user loss, the inter-user loss, the de-identification loss and the structural similarity loss, computed as follows:
· Adversarial loss:

Use the discriminator of the generative adversarial network to score each real face image in the current batch and the generated virtual-user face image corresponding to it, and compute the Wasserstein distance between the real-image scores and the generated-image scores as the adversarial loss L_D.
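As a sketch, the Wasserstein-distance formulation above is commonly estimated WGAN-style as the difference of mean critic scores; the helper name below is illustrative, not from the patent.

```python
import numpy as np

def adversarial_loss(real_scores, fake_scores):
    """Empirical Wasserstein estimate: mean critic score of real images
    minus mean critic score of generated images. The discriminator is
    trained to increase this quantity, the generator to reduce it."""
    return float(np.mean(real_scores) - np.mean(fake_scores))

print(adversarial_loss([1.0, 1.0], [0.0, 0.0]))  # -> 1.0
```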
· Gradient penalty loss:

Compute the gradient penalty of each training sample in the current batch and average them as the gradient penalty loss L_GP. The gradient penalty is a term commonly used in generative adversarial networks, and its detailed computation is not repeated here.
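For reference, the commonly used WGAN-GP penalty for one interpolated sample is (||grad||_2 - 1)^2. A minimal sketch, assuming the gradient of the discriminator with respect to its input has already been obtained (e.g. from an autodiff framework):

```python
import numpy as np

def gradient_penalty(grad):
    """WGAN-GP term for one sample: (||grad||_2 - 1)^2; in practice this
    is averaged over the batch of interpolated samples."""
    return float((np.linalg.norm(grad) - 1.0) ** 2)

# for a linear critic D(x) = w . x the input gradient is simply w
print(gradient_penalty(np.array([2.0, 0.0])))  # -> 1.0
```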
· Intra-user loss:

Use the face feature extraction model to obtain, for each pair of face images in the current batch, the pair of feature vectors of the corresponding generated virtual-user face images; compute the cosine distance between the two feature vectors of each pair and average over all pairs as the intra-user loss L_FFI.
· Inter-user loss:

Randomly select K pairs of face images from the current batch such that the two face images of each pair belong to different users; use the face feature extraction model to obtain the feature-vector pairs of the generated virtual-user face images corresponding to these K pairs, compute the cosine distance within each pair, and average over all pairs as the inter-user loss L_FFO.
· De-identification loss:

Use the face feature extraction model to obtain the feature vector of each face image in the current batch and of its corresponding generated virtual-user face image; compute the cosine distance between each such pair of feature vectors and average them as the de-identification loss L_RF.
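The three feature-space losses above (L_FFI, L_FFO, L_RF) all reduce to an averaged cosine distance over feature-vector pairs; only which pairs are formed differs. A minimal sketch, assuming "cosine distance" means one minus cosine similarity, as is conventional:

```python
import numpy as np

def cosine_distance(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mean_cosine_distance(pairs):
    """Average cosine distance over feature-vector pairs. With pairs of
    generated images of the same user this gives L_FFI; with pairs from
    different users, L_FFO; with (original, generated) pairs, L_RF."""
    return float(np.mean([cosine_distance(u, v) for u, v in pairs]))

print(cosine_distance([1, 0], [0, 1]))  # orthogonal features -> 1.0
```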
· Structural similarity loss:

Compute the Structural Similarity Index (SSIM) between each face image in the current batch and its corresponding generated virtual-user face image, and average them as the structural similarity loss L_S. Structural similarity combines the contrast, luminance and structural consistency of two images and is a good measure of their similarity.
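As an illustration, a single-window (global) SSIM can be computed as below; standard SSIM averages this quantity over local windows, and the constants c1 and c2 are the usual stabilisers for a [0, 1] dynamic range.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM combining luminance (means), contrast
    (variances) and structure (covariance) of the two images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(round(ssim_global(img, img), 6))  # identical images -> 1.0
```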
Set the discriminator loss to L_D - θL_GP and the generator loss to L_D + αL_FFI + βL_S - γL_RF + ηL_FFO, where θ, α, β, γ, η are preset parameters, generally positive constants set according to actual needs. Train the discriminator and the generator alternately: first fix the generator parameters and train the discriminator, maximizing L_D - θL_GP; then fix the discriminator parameters and train the generator, minimizing L_D + αL_FFI + βL_S - γL_RF + ηL_FFO. The two eventually converge, yielding the final generation model. In this embodiment, θ = 5, α = 0.5, β = 0.1, γ = 0.1, η = 0.25.

With the above losses, the face images generated by the generative adversarial network differ from the original face images while retaining their features, and the generated virtual-user face images obtained from face images of the same user are as similar as possible.
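Combining the terms with the embodiment's weights (θ = 5, α = 0.5, β = 0.1, γ = 0.1, η = 0.25), the two objectives can be assembled as below; the function names are illustrative.

```python
def discriminator_objective(ld, lgp, theta=5.0):
    """Maximised while the generator is frozen."""
    return ld - theta * lgp

def generator_objective(ld, lffi, ls, lrf, lffo,
                        alpha=0.5, beta=0.1, gamma=0.1, eta=0.25):
    """Minimised while the discriminator is frozen. Note the sign of the
    de-identification term: L_RF is subtracted, so minimising the total
    pushes the generated face away from the original user's features."""
    return ld + alpha * lffi + beta * ls - gamma * lrf + eta * lffo
```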
The training termination condition can be set as required. In this embodiment, termination is judged as follows: compute the cosine distance between the feature vectors of each pair of face images in the current batch and average them as the real-face cosine distance; then compute the cosine distance between the feature vectors of each corresponding pair of generated virtual-user face images and average them as the generated-face cosine distance. If the generated-face cosine distance is greater than the real-face cosine distance, continue training; otherwise, training ends.
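This embodiment's stopping rule amounts to a simple comparison of batch-averaged cosine distances, which can be sketched as:

```python
def keep_training(real_pair_distances, generated_pair_distances):
    """Continue while generated same-user pairs are still farther apart
    in feature space than the corresponding real same-user pairs."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(generated_pair_distances) > mean(real_pair_distances)

print(keep_training([0.10, 0.20], [0.40, 0.50]))  # -> True
```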
S105: De-identified face generation.

Normalize the face image to be de-identified to the preset size to obtain the face image p′, and extract its feature vector f′ with the face feature extraction model. Occlude the facial-feature region of p′ with random noise to obtain the occluded face image p̃′, convert it to a vector, combine it with the feature vector f′, and input the combination into the generator of the generative adversarial network to obtain the de-identified face image p′*.
To further improve the quality of the generated face image, the generated face image can also be verified, as follows:

1) Acquire a different face image of the same user as p′ and normalize it to the preset size to obtain the face image p″. Likewise, extract its feature vector f″ with the face feature extraction model, occlude the facial-feature region of p″ with random noise to obtain the occluded face image p̃″, convert it to a vector, combine it with f″, and input the combination into the generator of the generative adversarial network to obtain the de-identified face image p″*.

2) Use the discriminator of the generative adversarial network to obtain the score of the de-identified face image p′*. If it is below a preset threshold, the de-identified face image p′* is considered insufficiently realistic and verification fails; otherwise, go to step 3).

3) Use the face feature extraction model to extract the feature vectors of the de-identified face images p′* and p″*, and judge from the similarity of the feature vectors whether p′* and p″* come from the same user. If not, verification fails; otherwise, go to step 4).

4) Use the face feature extraction model to extract the feature vectors of the face image p′ and the de-identified face image p′*, and judge from the similarity of the feature vectors whether p′ and p′* come from the same user. If they do, verification fails; otherwise, go to step 5).

5) Compute the structural similarity between the face image p′ and the face image p′*. If it is greater than a preset threshold, verification passes; otherwise, verification fails.
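Steps 2) to 5) above amount to a short gate sequence. A sketch with hypothetical thresholds (the patent does not specify the threshold values, and the boolean inputs stand in for the feature-similarity judgements):

```python
def passes_verification(discriminator_score, same_virtual_user,
                        matches_original_user, structural_similarity,
                        score_threshold=0.5, ssim_threshold=0.4):
    """Returns True only if the generated image is realistic enough (2),
    both generated images resolve to one virtual user (3), that virtual
    user no longer matches the original user (4), and enough image
    structure survives (5). Thresholds here are illustrative only."""
    if discriminator_score < score_threshold:      # step 2
        return False
    if not same_virtual_user:                      # step 3
        return False
    if matches_original_user:                      # step 4: privacy leak
        return False
    return structural_similarity > ssim_threshold  # step 5
```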
To better illustrate the technical effect of the invention, it was verified experimentally on a concrete example. FIG. 2 compares the original face image with the generated virtual-user face image in this embodiment. As shown in FIG. 2, the generated virtual-user face image differs markedly from the original face image, effectively protecting user privacy, while preserving attributes of the original image such as the user's gender, ethnicity, and skin tone; the loss of sharpness relative to the original face image is also within an acceptable range for the application.
Although illustrative specific embodiments of the invention have been described above to help those skilled in the art understand the invention, it should be clear that the invention is not limited to the scope of those embodiments. To those of ordinary skill in the art, various changes are obvious as long as they fall within the spirit and scope of the invention as defined and determined by the appended claims, and all inventions and creations employing the inventive concept are within the scope of protection.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010343798.5A CN111476200B (en) | 2020-04-27 | 2020-04-27 | Face de-identification generation method based on generation of confrontation network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111476200A true CN111476200A (en) | 2020-07-31 |
CN111476200B CN111476200B (en) | 2022-04-19 |
Family
ID=71755753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010343798.5A Active CN111476200B (en) | 2020-04-27 | 2020-04-27 | Face de-identification generation method based on generation of confrontation network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476200B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108573222A (en) * | 2018-03-28 | 2018-09-25 | 中山大学 | Pedestrian Image Occlusion Detection Method Based on Recurrent Adversarial Generative Network |
CN108520503A (en) * | 2018-04-13 | 2018-09-11 | 湘潭大学 | A Method of Repairing Face Defect Image Based on Autoencoder and Generative Adversarial Network |
CN108764207A (en) * | 2018-06-07 | 2018-11-06 | 厦门大学 | A kind of facial expression recognizing method based on multitask convolutional neural networks |
CN108829855A (en) * | 2018-06-21 | 2018-11-16 | 山东大学 | It is worn based on the clothing that condition generates confrontation network and takes recommended method, system and medium |
CN109840477A (en) * | 2019-01-04 | 2019-06-04 | 苏州飞搜科技有限公司 | Face identification method and device are blocked based on eigentransformation |
CN109815928A (en) * | 2019-01-31 | 2019-05-28 | 中国电子进出口有限公司 | A kind of face image synthesis method and apparatus based on confrontation study |
CN109886167A (en) * | 2019-02-01 | 2019-06-14 | 中国科学院信息工程研究所 | A method and device for occluding face recognition |
CN110085263A (en) * | 2019-04-28 | 2019-08-02 | 东华大学 | A kind of classification of music emotion and machine composing method |
CN110135366A (en) * | 2019-05-20 | 2019-08-16 | 厦门大学 | Occluded pedestrian re-identification method based on multi-scale generative adversarial network |
CN110598806A (en) * | 2019-07-29 | 2019-12-20 | 合肥工业大学 | Handwritten digit generation method for generating countermeasure network based on parameter optimization |
CN110728628A (en) * | 2019-08-30 | 2020-01-24 | 南京航空航天大学 | A face de-occlusion method based on conditional generative adversarial network |
Non-Patent Citations (4)
Title |
---|
FEI PENG et al.: "FD-GAN: Face De-Morphing Generative Adversarial Network for Restoring Accomplice's Facial Image", Special Section on Digital Forensics Through Multimedia Source Inference * |
TSUNG-YI LIN et al.: "Focal Loss for Dense Object Detection", 2017 IEEE International Conference on Computer Vision * |
WANG Suqin et al.: "Occluded facial expression recognition based on generative adversarial networks", Application Research of Computers * |
JIA Di et al.: "A survey of image matching methods", Journal of Image and Graphics * |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111950635A (en) * | 2020-08-12 | 2020-11-17 | 温州大学 | A Robust Feature Learning Method Based on Hierarchical Feature Alignment |
CN111950635B (en) * | 2020-08-12 | 2023-08-25 | 温州大学 | Robust feature learning method based on layered feature alignment |
CN112084962A (en) * | 2020-09-11 | 2020-12-15 | 贵州大学 | Face privacy protection method based on generation type countermeasure network |
CN112307514A (en) * | 2020-11-26 | 2021-02-02 | 哈尔滨工程大学 | A Differential Privacy Greedy Grouping Method Using Wasserstein Distance |
CN112307514B (en) * | 2020-11-26 | 2023-08-01 | 哈尔滨工程大学 | A Differentially Private Greedy Grouping Method Using Wasserstein Distance |
CN112668401A (en) * | 2020-12-09 | 2021-04-16 | 中国科学院信息工程研究所 | Face privacy protection method and device based on feature decoupling |
CN112668401B (en) * | 2020-12-09 | 2023-01-17 | 中国科学院信息工程研究所 | A face privacy protection method and device based on feature decoupling |
CN112613445A (en) * | 2020-12-29 | 2021-04-06 | 深圳威富优房客科技有限公司 | Face image generation method and device, computer equipment and storage medium |
CN112613445B (en) * | 2020-12-29 | 2024-04-30 | 深圳威富优房客科技有限公司 | Face image generation method, device, computer equipment and storage medium |
CN112734436A (en) * | 2021-01-08 | 2021-04-30 | 支付宝(杭州)信息技术有限公司 | Terminal and method for supporting face recognition |
CN112949535B (en) * | 2021-03-15 | 2022-03-11 | 南京航空航天大学 | A face data identity de-identification method based on generative adversarial network |
CN112949535A (en) * | 2021-03-15 | 2021-06-11 | 南京航空航天大学 | Face data identity de-identification method based on generative confrontation network |
CN112926559A (en) * | 2021-05-12 | 2021-06-08 | 支付宝(杭州)信息技术有限公司 | Face image processing method and device |
CN113657350A (en) * | 2021-05-12 | 2021-11-16 | 支付宝(杭州)信息技术有限公司 | Face image processing method and device |
CN113033511A (en) * | 2021-05-21 | 2021-06-25 | 中国科学院自动化研究所 | Face anonymization method based on control decoupling identity representation |
CN113486839A (en) * | 2021-07-20 | 2021-10-08 | 支付宝(杭州)信息技术有限公司 | Encryption model training, image encryption and encrypted face image recognition method and device |
CN113486839B (en) * | 2021-07-20 | 2024-10-22 | 支付宝(杭州)信息技术有限公司 | Encryption model training, image encryption and encrypted face image recognition method and device |
CN113705410A (en) * | 2021-08-20 | 2021-11-26 | 陈成 | Face image desensitization processing and verifying method and system |
CN113705492A (en) * | 2021-08-31 | 2021-11-26 | 杭州艾芯智能科技有限公司 | Method and system for generating face training sample image, computer equipment and storage medium |
CN114036553A (en) * | 2021-10-28 | 2022-02-11 | 杭州电子科技大学 | A Pedestrian Identity Privacy Protection Method Combined with k Anonymity |
CN114036553B (en) * | 2021-10-28 | 2025-05-13 | 杭州电子科技大学 | A pedestrian identity privacy protection method combined with k-anonymity |
CN114049417A (en) * | 2021-11-12 | 2022-02-15 | 北京字节跳动网络技术有限公司 | Virtual character image generation method and device, readable medium and electronic equipment |
CN114049417B (en) * | 2021-11-12 | 2023-11-24 | 抖音视界有限公司 | Virtual character image generation method and device, readable medium and electronic equipment |
CN114550249A (en) * | 2022-02-15 | 2022-05-27 | Oppo广东移动通信有限公司 | Face image generation method and device, computer readable medium and electronic equipment |
CN115617882A (en) * | 2022-12-20 | 2023-01-17 | 粤港澳大湾区数字经济研究院(福田) | Time sequence diagram data generation method and system with structural constraint based on GAN |
Also Published As
Publication number | Publication date |
---|---|
CN111476200B (en) | 2022-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111476200B (en) | Face de-identification generation method based on generation of confrontation network | |
TWI779970B (en) | Image processing method, processor, electronic device and computer-readable storage medium | |
CN109800710B (en) | Pedestrian re-identification system and method | |
CN110706152B (en) | Face illumination migration method based on generation of confrontation network | |
CN111340008A (en) | Method and system for generation of counterpatch, training of detection model and defense of counterpatch | |
WO2022179401A1 (en) | Image processing method and apparatus, computer device, storage medium, and program product | |
CN110096156A (en) | Virtual costume changing method based on 2D image | |
US20210397822A1 (en) | Living body detection method, apparatus, electronic device, storage medium and program product | |
KR102455966B1 (en) | Mediating Apparatus, Method and Computer Readable Recording Medium Thereof | |
CN113297624B (en) | Image preprocessing method and device | |
CN113033511B (en) | A Face Anonymity Method Based on Manipulating Decoupled Identity Representation | |
WO2022252372A1 (en) | Image processing method, apparatus and device, and computer-readable storage medium | |
CN113177892A (en) | Method, apparatus, medium, and program product for generating image inpainting model | |
CN112200075A (en) | A face anti-counterfeiting method based on anomaly detection | |
CN117635772A (en) | Image generation method, device and equipment | |
KR20210037672A (en) | Identity recognition method, computer-readable storage medium, terminal device and apparatus | |
CN114219728B (en) | A method and system for restoring facial images | |
CN114360002A (en) | Method and device for training face recognition model based on federated learning | |
CN114612989A (en) | Method and device for generating face recognition data set, electronic equipment and storage medium | |
CN114360015A (en) | Living body detection method, living body detection device, living body detection equipment and storage medium | |
CN115708135A (en) | Face recognition model processing method, face recognition method and device | |
CN113705290B (en) | Image processing method, device, computer equipment and storage medium | |
CN114529957B (en) | A face recognition method and system | |
Zhao et al. | Zero-Shot Defense Against Toxic Images via Inherent Multimodal Alignment in LVLMs | |
CN117763612A (en) | Class universal disturbing face image privacy protection method based on triplet constraint |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||