CN111476749A - Face restoration method based on a face key point guided generative adversarial network - Google Patents

Face restoration method based on a face key point guided generative adversarial network

Info

Publication number
CN111476749A
CN111476749A
Authority
CN
China
Prior art keywords
face
layer
convolutional
key point
size
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010261441.2A
Other languages
Chinese (zh)
Other versions
CN111476749B (en)
Inventor
裴炤
黄丽
张艳宁
马苗
郭敏
李峻
武杰
陈昱莅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University filed Critical Shaanxi Normal University
Priority to CN202010261441.2A priority Critical patent/CN111476749B/en
Publication of CN111476749A publication Critical patent/CN111476749A/en
Application granted granted Critical
Publication of CN111476749B publication Critical patent/CN111476749B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a face restoration method based on a face key point guided generative adversarial network, comprising the following steps: constructing the face key point guided generative adversarial network, training the network, and restoring a face. The method uses the face key point guided generative adversarial network to generate a complete face. When a large region of the face is missing, a face key point loss function assists the training of the network, guiding the contour of the generated face ever closer to the contour of the real face, so that the restored face contour is coherent and realistic. This solves the problem of distorted face restoration results when large regions are missing due to conditions such as severe occlusion.

Description

Face restoration method based on a face key point guided generative adversarial network
Technical Field
The invention belongs to the technical field of computer vision, and relates to a method for completing the face restoration task under large-area missing conditions using a face key point guided generative adversarial network.
Background
Face restoration is a technique that uses known face information to fill in missing regions and obtain a complete face. The advent of generative adversarial networks has further improved the realism of face restoration results. The literature "Semantic image inpainting with deep generative models" (in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017: 5485-) is a representative generative adversarial approach to inpainting. However, under conditions such as severe occlusion, information over a large region of the face is lost, and for lack of effective context and prior information such methods give unsatisfactory restoration results, most visibly as distortion of key parts such as the restored face contour. The task of locating key points such as the face contour, eyebrows, eyes and nose is called face key point prediction. To address these problems, when information over a large region of the face is missing, the invention determines a generative adversarial network with an optimal structure and uses the face key point prediction results to guide the generated face contour and other features ever closer to the ground truth, so as to better complete the face restoration task; this has great research significance and value.
Disclosure of Invention
Technical problem to be solved
The technical problem to be solved by the invention is to overcome the shortcomings of the prior art and provide a face restoration method based on a face key point guided generative adversarial network, which uses a deep neural network model to complete the face restoration task under large-area missing conditions and obtain a more realistic and coherent restored face image.
Technical scheme
A face restoration method based on a face key point guided generative adversarial network, characterized by comprising the following steps:
step 1: constructing the face key point guided generative adversarial network, which comprises a face restoration module and a face key point prediction module;
the face restoration module consists of a generator and a discriminator; the generator comprises an input, 10 convolutional layers, 2 dilated convolutional layers, 2 deconvolution layers and an output, and the input is a face image of size 64 × 64 × 3 with a random binary mask:
I_M = I ⊙ M (1)
where I_M is the face image with a random binary mask, I is an original face image from the face data set used to train the network, and M is a randomly generated binary mask of size 64 × 64;
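Equation (1) is an element-wise (Hadamard) product of the image with the mask, broadcast over the three colour channels. A minimal NumPy sketch, with dummy data standing in for the patent's training images and mask generator:

```python
import numpy as np

# Sketch of equation (1): I_M = I ⊙ M.
# The image values and the square missing region are illustrative only.
rng = np.random.default_rng(0)
I = rng.random((64, 64, 3))          # original 64 x 64 x 3 face image
M = np.ones((64, 64))                # binary mask: 1 = known pixel, 0 = missing
M[16:48, 16:48] = 0                  # a "large-area" missing region

I_M = I * M[:, :, None]              # Hadamard product, broadcast over channels

assert I_M.shape == (64, 64, 3)
assert np.all(I_M[16:48, 16:48] == 0)   # masked region is zeroed out
```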
the first convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64; the second convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the fourth convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16; the fifth and sixth convolutional layers each have a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and output 256 feature maps of size 16 × 16; these are followed by the 2 dilated convolutional layers, the remaining convolutional layers and the 2 deconvolution layers, which progressively restore the feature maps to the 64 × 64 × 3 output face image;
the discriminator comprises an input, 4 convolutional layers, 1 fully-connected layer and an output; the input is a face image of size 64 × 64 × 3; the first convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64; the second convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16; the fourth convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 512 feature maps of size 4 × 4; the fully-connected layer uses a sigmoid activation function and outputs a single value in the range [0, 1], a probability representing whether the input face image is real;
the face key point prediction module comprises an input, 4 convolutional layers, 1 fully-connected layer and an output; except for the output, its structure is identical to that of the discriminator; its output is a 136-dimensional vector representing the predicted x and y coordinates of the 68 face key points;
step 2: training the face key point guided generative adversarial network, which comprises two steps: first training the face key point prediction module, then fixing the face key point prediction module and training the face restoration module;
in the first step, the face key point prediction module is trained on a face data set with face key point labels; the face key point loss function used in training is defined as:
L_ld = ||H(I_X) − P_GT||_1 (2)
where L_ld is the face key point loss function, I_X is the face image input to the face key point prediction module (when training the module, I_X = I), H(·) denotes the output of the module, P_GT denotes the face key point label values corresponding to the input face image, and ||·||_1 denotes the L1 norm;
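The face key point loss of equation (2) is the L1 distance between the predicted and labelled landmark vectors. A minimal NumPy sketch with dummy 136-dimensional vectors (the values are illustrative, not real module outputs):

```python
import numpy as np

# Sketch of equation (2): L_ld = ||H(I_X) - P_GT||_1.
# 136 values = x and y coordinates of 68 face key points.
rng = np.random.default_rng(1)
pred = rng.random(136)   # H(I_X): prediction module output (dummy)
gt = rng.random(136)     # P_GT: key point labels (dummy)

L_ld = np.sum(np.abs(pred - gt))   # L1 norm of the difference

assert L_ld >= 0
```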
in the second step, the face restoration module is trained on a face data set with face key point labels; training is completed by alternately updating the parameters of the generator and the discriminator; the face restoration module loss function used for training consists of three parts, the first of which is the traditional adversarial loss function, defined as:
L_adv = E[log D(I)] + E[log(1 − D(G(I_M)))] (3)
where L_adv is the adversarial loss function, D(·) denotes the output of the discriminator, G(·) denotes the output of the generator, and E[·] denotes the expected value;
the second part is a face reconstruction loss function, defined as follows:
L_mse = ||I − G(I_M)||_2 (4)
where L_mse is the face reconstruction loss function and ||·||_2 denotes the L2 norm;
the third part is the face key point loss function of formula (2), where, when training the generator, I_X = G(I_M);
finally, the loss function of the face restoration module is determined as:
L_FIM = αL_mse + βL_adv + γL_ld (5)
where L_FIM is the face restoration module loss function, and α, β and γ are hyperparameters;
and step 3: face repair
the face to be restored is input to the generator of the model, which outputs a complete face; the region corresponding to the missing area is then cut from the output face and pasted onto the face to be restored, giving the final face restoration result:
I_C = G(I_M) ⊙ (1 − M) + I_M (6)
where I_C is the face restoration result.
Advantageous effects
The invention provides a face restoration method based on a face key point guided generative adversarial network, comprising the following steps:
constructing the face key point guided generative adversarial network, training the network, and restoring a face. The method uses the face key point guided generative adversarial network to generate a complete face. When a large region of the face is missing, a face key point loss function assists the training of the network, guiding the contour of the generated face ever closer to the contour of the real face, so that the restored face contour is coherent and realistic. This solves the problem of distorted face restoration results when large regions are missing due to conditions such as severe occlusion.
Detailed Description
The invention will now be further described with reference to the examples:
taking a 300VM public face data set as a face repairing module and a face key point prediction module training set, and a FaceScrub public face data set as a test set, the face repairing method for generating an confrontation network based on face key point guidance comprises the following steps:
(1) Constructing the face key point guided generative adversarial network
The network comprises two modules: a face restoration module and a face key point prediction module.
The face restoration module consists of a generator and a discriminator. The generator comprises an input, 10 convolutional layers, 2 dilated convolutional layers, 2 deconvolution layers and an output, and the input is a face image of size 64 × 64 × 3 with a random binary mask:
I_M = I ⊙ M (1)
where I_M is the face image with a random binary mask, I is an original face image from the face data set used to train the network, and M is a randomly generated binary mask of size 64 × 64.
The first convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64. The second convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32. The third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32. The fourth convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16. The fifth and sixth convolutional layers each have a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and output 256 feature maps of size 16 × 16. These are followed by the 2 dilated convolutional layers, the remaining convolutional layers and the 2 deconvolution layers, which progressively restore the feature maps to the 64 × 64 × 3 output face image.
The discriminator comprises an input, 4 convolutional layers, 1 fully-connected layer and an output. The input is a face image of size 64 × 64 × 3. The first convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64. The second convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32. The third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16. The fourth convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 512 feature maps of size 4 × 4. The fully-connected layer uses a sigmoid activation function and outputs a single value in the range [0, 1], a probability representing whether the input face image is real.
The face key point prediction module comprises an input, 4 convolutional layers, 1 fully-connected layer and an output. Except for the output, its structure is identical to that of the discriminator. Its output is a 136-dimensional vector representing the predicted x and y coordinates of the 68 face key points.
(2) Training the face key point guided generative adversarial network
Training comprises two steps: first, training the face key point prediction module; second, fixing the face key point prediction module and training the face restoration module.
In the first step, the face key point prediction module is trained on the 300VM face data set with face key point labels. The loss function of the face key point prediction module is:
L_ld = ||H(I_X) − P_GT||_1 (2)
where L_ld is the face key point loss function, I_X is the face image input to the face key point prediction module (when training the module, I_X = I), H(·) denotes the output of the module, P_GT denotes the face key point label values corresponding to the input face image, and ||·||_1 denotes the L1 norm.
In the second step, the face restoration module is trained on the 300VM face data set with face key point labels. Training is completed by alternately updating the generator and discriminator parameters. The loss function consists of three parts; the first is the traditional adversarial loss, defined as:
L_adv = E[log D(I)] + E[log(1 − D(G(I_M)))] (3)
where L_adv is the adversarial loss function, D(·) denotes the output of the discriminator, G(·) denotes the output of the generator, and E[·] denotes the expected value.
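With dummy discriminator scores standing in for real network outputs, the adversarial loss of equation (3) can be sketched in NumPy as a sum of two batch averages (all batch values are illustrative):

```python
import numpy as np

# Sketch of equation (3): L_adv = E[log D(I)] + E[log(1 - D(G(I_M)))].
# Discriminator scores are dummy probabilities, not real network outputs.
d_real = np.array([0.9, 0.8, 0.95])   # D(I) on a batch of real faces
d_fake = np.array([0.2, 0.1, 0.3])    # D(G(I_M)) on restored faces

L_adv = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# Both log terms are non-positive for probabilities in (0, 1); the
# discriminator is trained to maximise L_adv, the generator to fool it.
assert L_adv <= 0
```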
The second part is face reconstruction loss, defined as follows:
L_mse = ||I − G(I_M)||_2 (4)
where L_mse is the face reconstruction loss function and ||·||_2 denotes the L2 norm.
The third part is the face key point loss of formula (2), where, when training the generator, I_X = G(I_M).
Finally, the loss function of the face restoration module is determined as:
L_FIM = αL_mse + βL_adv + γL_ld (5)
where L_FIM is the face restoration module loss function, and α, β and γ are hyperparameters taking the values 1, 0.0001 and 0.0001 respectively.
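Equations (4) and (5) can be sketched numerically as follows. The images are dummy data, L_adv and L_ld are placeholder scalars, and only the hyperparameter values α = 1, β = γ = 0.0001 come from the text:

```python
import numpy as np

# Sketch of equation (4) (L2 reconstruction loss) and equation (5)
# (weighted total loss). All tensors and scalars are dummy values.
rng = np.random.default_rng(2)
I = rng.random((64, 64, 3))        # original face (dummy)
G_out = rng.random((64, 64, 3))    # generator output G(I_M) (dummy)

L_mse = np.linalg.norm((I - G_out).ravel(), ord=2)   # equation (4)
L_adv = -0.5                                          # placeholder adversarial loss
L_ld = 3.2                                            # placeholder key point loss

alpha, beta, gamma = 1.0, 0.0001, 0.0001              # values from the text
L_FIM = alpha * L_mse + beta * L_adv + gamma * L_ld   # equation (5)

assert L_FIM > 0
```

With β and γ four orders of magnitude below α, the reconstruction term dominates while the adversarial and key point terms act as regularisers.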
(3) Face restoration
The face restoration task is completed using the model trained in step (2). A random binary mask is generated on the FaceScrub face data set to produce faces to be restored; each face to be restored is input to the generator of the model, which outputs a complete face; the region corresponding to the missing area is then cut from the output face and pasted onto the face to be restored, giving the final face restoration result:
I_C = G(I_M) ⊙ (1 − M) + I_M (6)
where I_C is the face restoration result.
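Equation (6) keeps the known pixels of the masked input and fills only the missing region (where M = 0) from the generator output. A minimal NumPy sketch, with dummy data standing in for the generator and the FaceScrub images:

```python
import numpy as np

# Sketch of equation (6): I_C = G(I_M) ⊙ (1 - M) + I_M.
# All arrays are dummy data; G_out stands in for a real generator output.
rng = np.random.default_rng(3)
I = rng.random((64, 64, 3))                # original face (dummy)
M = np.ones((64, 64)); M[16:48, 16:48] = 0 # binary mask, 0 = missing
I_M = I * M[:, :, None]                    # masked input, equation (1)
G_out = rng.random((64, 64, 3))            # generator's complete face (dummy)

I_C = G_out * (1 - M)[:, :, None] + I_M    # paste restored region into input

assert np.allclose(I_C[0, 0], I[0, 0])          # known pixel kept from input
assert np.allclose(I_C[20, 20], G_out[20, 20])  # missing pixel from generator
```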

Claims (1)

1. A face restoration method based on a face key point guided generative adversarial network, characterized by comprising the following steps:
step 1: constructing the face key point guided generative adversarial network, which comprises a face restoration module and a face key point prediction module;
the face restoration module consists of a generator and a discriminator; the generator comprises an input, 10 convolutional layers, 2 dilated convolutional layers, 2 deconvolution layers and an output, and the input is a face image of size 64 × 64 × 3 with a random binary mask:
I_M = I ⊙ M (1)
where I_M is the face image with a random binary mask, I is an original face image from the face data set used to train the network, and M is a randomly generated binary mask of size 64 × 64;
the first convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64; the second convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the fourth convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16; the fifth and sixth convolutional layers each have a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and output 256 feature maps of size 16 × 16; these are followed by the 2 dilated convolutional layers, the remaining convolutional layers and the 2 deconvolution layers, which progressively restore the feature maps to the 64 × 64 × 3 output face image;
the discriminator comprises an input, 4 convolutional layers, 1 fully-connected layer and an output; the input is a face image of size 64 × 64 × 3; the first convolutional layer has a kernel size of 5 × 5, a stride of 2 and an LReLU activation function, and outputs 64 feature maps of size 64 × 64; the second convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 128 feature maps of size 32 × 32; the third convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 256 feature maps of size 16 × 16; the fourth convolutional layer has a kernel size of 5 × 5, a stride of 1 and an LReLU activation function, and outputs 512 feature maps of size 4 × 4; the fully-connected layer uses a sigmoid activation function and outputs a single value in the range [0, 1], a probability representing whether the input face image is real;
the face key point prediction module comprises an input, 4 convolutional layers, 1 fully-connected layer and an output; except for the output, its structure is identical to that of the discriminator; its output is a 136-dimensional vector representing the predicted x and y coordinates of the 68 face key points;
step 2: training the face key point guided generative adversarial network, which comprises two steps: first training the face key point prediction module, then fixing the face key point prediction module and training the face restoration module;
in the first step, the face key point prediction module is trained on a face data set with face key point labels; the face key point loss function used in training is defined as:
L_ld = ||H(I_X) − P_GT||_1 (2)
where L_ld is the face key point loss function, I_X is the face image input to the face key point prediction module (when training the module, I_X = I), H(·) denotes the output of the module, P_GT denotes the face key point label values corresponding to the input face image, and ||·||_1 denotes the L1 norm;
in the second step, the face restoration module is trained on a face data set with face key point labels; training is completed by alternately updating the parameters of the generator and the discriminator; the face restoration module loss function used for training consists of three parts, the first of which is the traditional adversarial loss function, defined as:
L_adv = E[log D(I)] + E[log(1 − D(G(I_M)))] (3)
where L_adv is the adversarial loss function, D(·) denotes the output of the discriminator, G(·) denotes the output of the generator, and E[·] denotes the expected value;
the second part is a face reconstruction loss function, defined as follows:
L_mse = ||I − G(I_M)||_2 (4)
where L_mse is the face reconstruction loss function and ||·||_2 denotes the L2 norm;
the third part is the face key point loss function of formula (2), where, when training the generator, I_X = G(I_M);
finally, the loss function of the face restoration module is determined as:
L_FIM = αL_mse + βL_adv + γL_ld (5)
where L_FIM is the face restoration module loss function, and α, β and γ are hyperparameters;
and step 3: face repair
the face to be restored is input to the generator of the model, which outputs a complete face; the region corresponding to the missing area is then cut from the output face and pasted onto the face to be restored, giving the final face restoration result:
I_C = G(I_M) ⊙ (1 − M) + I_M (6)
where I_C is the face restoration result.
CN202010261441.2A 2020-04-03 2020-04-03 Face repairing method for generating confrontation network in guiding mode based on face key points Active CN111476749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010261441.2A CN111476749B (en) 2020-04-03 2020-04-03 Face repairing method for generating confrontation network in guiding mode based on face key points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010261441.2A CN111476749B (en) 2020-04-03 2020-04-03 Face repairing method for generating confrontation network in guiding mode based on face key points

Publications (2)

Publication Number Publication Date
CN111476749A true CN111476749A (en) 2020-07-31
CN111476749B CN111476749B (en) 2023-02-28

Family

ID=71749797

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010261441.2A Active CN111476749B (en) 2020-04-03 2020-04-03 Face repairing method for generating confrontation network in guiding mode based on face key points

Country Status (1)

Country Link
CN (1) CN111476749B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738230A (en) * 2020-08-05 2020-10-02 深圳市优必选科技股份有限公司 Face recognition method, face recognition device and electronic equipment
CN112633130A (en) * 2020-12-18 2021-04-09 成都三零凯天通信实业有限公司 Face mask removing method based on key point restoration image
CN112949553A (en) * 2021-03-22 2021-06-11 陈懋宁 Face image restoration method based on self-attention cascade generation countermeasure network
CN113066034A (en) * 2021-04-21 2021-07-02 腾讯科技(深圳)有限公司 Face image restoration method and device, restoration model, medium and equipment
CN114140883A (en) * 2021-12-10 2022-03-04 沈阳康泰电子科技股份有限公司 Gait recognition method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019015466A1 (en) * 2017-07-17 2019-01-24 广州广电运通金融电子股份有限公司 Method and apparatus for verifying person and certificate
CN109961507A (en) * 2019-03-22 2019-07-02 腾讯科技(深圳)有限公司 A kind of Face image synthesis method, apparatus, equipment and storage medium
CN110288537A (en) * 2019-05-20 2019-09-27 湖南大学 Facial image complementing method based on the depth production confrontation network from attention
CN110689499A (en) * 2019-09-27 2020-01-14 北京工业大学 Face image restoration method based on dense expansion convolution self-coding countermeasure network
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change
CN110910322A (en) * 2019-11-05 2020-03-24 北京奇艺世纪科技有限公司 Picture processing method and device, electronic equipment and computer readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Sun Shuo et al.: "Research on perceptual occluded face restoration algorithms using generative adversarial networks", Journal of Chinese Computer Systems *
Cao Zhiyi et al.: "Occluded image restoration algorithm based on generative adversarial networks", Journal of Beijing University of Posts and Telecommunications *


Also Published As

Publication number Publication date
CN111476749B (en) 2023-02-28

Similar Documents

Publication Publication Date Title
CN111476749A (en) Face repairing method for generating confrontation network based on face key point guidance
Quan et al. Image inpainting with local and global refinement
Yang et al. Freenerf: Improving few-shot neural rendering with free frequency regularization
Choe et al. Face generation for low-shot learning using generative adversarial networks
Zhang et al. Text-guided neural image inpainting
CN109919830B (en) Method for restoring image with reference eye based on aesthetic evaluation
CN113112411A (en) Human face image semantic restoration method based on multi-scale feature fusion
CN113240613A (en) Image restoration method based on edge information reconstruction
CN112950775A (en) Three-dimensional face model reconstruction method and system based on self-supervision learning
CN113066171B (en) Face image generation method based on three-dimensional face deformation model
CN111612872B (en) Face age change image countermeasure generation method and system
CN113112416B (en) Semantic-guided face image restoration method
Shen et al. Single-shot semantic image inpainting with densely connected generative networks
CN111291669B (en) Dual-channel depression angle face fusion correction GAN network and face fusion correction method
CN112001859A (en) Method and system for repairing face image
CN113591928B (en) Vehicle re-identification method and system based on multi-view and convolution attention module
CN115731138A (en) Image restoration method based on Transformer and convolutional neural network
CN111914618B (en) Three-dimensional human body posture estimation method based on countermeasure type relative depth constraint network
CN113935919A (en) Image restoration algorithm based on GAN network
Liu et al. Facial image inpainting using multi-level generative network
CN117237931A (en) License plate generation method, device and equipment containing Chinese characters and storage medium
CN116523985B (en) Structure and texture feature guided double-encoder image restoration method
CN111739168B (en) Large-scale three-dimensional face synthesis method with suppressed sample similarity
CN113392786A (en) Cross-domain pedestrian re-identification method based on normalization and feature enhancement
Duan et al. DIQA-FF: dual image quality assessment for face frontalization

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant