CN113378721A - Generative adversarial face correction method and system based on symmetry and local discrimination - Google Patents


Info

Publication number
CN113378721A
Authority
CN
China
Prior art keywords
network
image
face image
local
global
Prior art date
Legal status
Granted
Application number
CN202110657280.3A
Other languages
Chinese (zh)
Other versions
CN113378721B (en)
Inventor
刘芳
李玲玲
李任鹏
鲍骞月
黄欣研
刘旭
陈璞花
杨苗苗
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University
Priority to CN202110657280.3A
Publication of CN113378721A
Application granted
Publication of CN113378721B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a generative adversarial face correction method and system based on symmetry and local discrimination. Using the prior knowledge that the human face is symmetric, the face deflection direction is unified to the positive direction; a local generator corrects the left-eye region, whose texture and structure are only slightly deformed, and the horizontally flipped corrected left-eye region is used as the corrected right-eye region. Meanwhile, the corresponding local regions are extracted from the generated frontal face image for image discrimination and identity discrimination, so that the finally generated frontal face image keeps good consistency with the real frontal face image in the binocular region while local texture details are recovered well.

Description

Generative adversarial face correction method and system based on symmetry and local discrimination
Technical Field
The invention belongs to the technical field of image generation, and in particular relates to a generative adversarial face correction method and system based on symmetry and local discrimination.
Background
Face recognition is in essence a passive biometric technology for recognizing uncooperative subjects, and in real, unconstrained environments its accuracy drops sharply under pose change, illumination, expression and occlusion. Although deep convolutional neural networks can represent image features powerfully, these factors are known to strongly affect final recognition performance; accuracy deteriorates fastest under large pose changes, especially when the face deflection angle approaches 90 degrees. At present, face correction followed by re-recognition is an effective way to improve recognition accuracy under large poses. Its general pipeline is: given a side-face image, generate a frontal face image of the same identity with a model, then verify and compare it against the frontal images in the face gallery.
Thanks to the strong generative capability of generative adversarial networks, a series of GAN-based face correction methods have been derived. Existing methods based on a multi-path generator use local generators to separately correct the left-eye, right-eye, nose and mouth regions. When the face deflection angle is too large, self-occlusion causes the corresponding semantics to be missing from the two-dimensional face image; region segmentation by the key-point coordinates of the two eyes then fails to extract the correct semantic regions, and the local generators produce unreasonable frontal regions. As a result, the frontal binocular regions generated by the local generators agree poorly with the real frontal image. Moreover, the global generator cannot fuse the local correction results well: even when a local generator produces frontal local regions that agree well with the real frontal image, the corresponding local regions of the finally generated frontal image do not.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a generative adversarial face correction method and system based on symmetry and local discrimination. First, according to the prior knowledge of face symmetry, face images with deflection angles of -90° to 0° are horizontally flipped and treated as face images of 0° to +90°. Local generators separately correct the left-eye, nose and mouth regions of the side-face image; exploiting the inherent biological symmetry of the frontal face, the horizontally flipped corrected left-eye region serves as the corrected right-eye region; finally a global generator fuses the local correction results to generate the frontal face image. Meanwhile, so that the global generator fuses the local corrections better, the corresponding local regions are extracted from the generated frontal image for image discrimination and identity discrimination. The finally generated frontal face image keeps good consistency with the real frontal image in the binocular region while recovering local texture details well.
The invention adopts the following technical scheme:
a method for generating confrontation face correction based on symmetry and local discrimination comprises the following steps:
s1, constructing side face images I in pairs under different deflection anglespAnd a front face image IfUsing the image pairs as a training set;
s2, constructing a multi-path generation countermeasure network based on symmetry prior and local discrimination, and training by using the training set constructed in the step S1 to generate the countermeasure network comprising a multi-path generator network G and a global image discriminator network DgLocal area discriminator DlGlobal image feature extraction network BgAnd local area feature extraction network Bl
And S3, inputting the side face image to the multipath generator G for generating the antithetical network after training in the step S2 to generate a front face image for face correction.
Specifically, in step S1, the deflection angle of the side-face images I_p varies from -90° to +90° at intervals of 15°, giving 13 face deflection angles in total. The face image with deflection angle 0° is taken as the frontal face image I_f. Face images with deflection angles of -90° to 0° are horizontally flipped and used as face images of 0° to +90°, unifying the face deflection direction to the positive direction.
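The flip-based unification of step S1 can be sketched as follows (a minimal illustration using NumPy; the function name and yaw-sign convention are assumptions, not part of the patent):

```python
import numpy as np

def unify_yaw_direction(image, yaw_deg):
    """If the face deflects in the negative direction, mirror the image
    horizontally so every training sample deflects positively (step S1)."""
    if yaw_deg < 0:
        # np.fliplr mirrors the image about its vertical axis,
        # turning e.g. a -45 degree profile into a +45 degree profile.
        return np.fliplr(image), -yaw_deg
    return image, yaw_deg

# Sampling -90..+90 degrees every 15 degrees yields the 13 deflection angles.
yaw_angles = list(range(-90, 91, 15))
```

After this step a -45° profile is handled by exactly the same generator path as a +45° profile.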
Specifically, in step S2, the multi-path generator network G comprises a global generator G_g and a local generator group {G_l^k}, k = 1, 2, 3. The inputs of the local generator group {G_l^k} are the left-eye region, nose region and mouth region of the side-face image I_p, and its outputs are the frontal left-eye, nose and mouth regions; the horizontally flipped corrected left-eye region is used as the corrected right-eye region, and the corrected left-eye, right-eye, nose and mouth regions are position-aggregated according to the frontal face key points. The input of the global generator G_g is the global side-face image I_p together with the local corrected regions, and its output is the corrected frontal face image Î_f.
Further, each local generator G_l^k comprises, connected in sequence, an input layer, first to fourth convolutional layers, first to third deconvolutional layers, fifth and sixth convolutional layers, and an output layer;
the global generator G_g comprises, connected in sequence, an input layer, first to fifth convolutional layers, first and second fully-connected layers, first to fourth deconvolutional layers, fifth and sixth convolutional layers, and an output layer.
Specifically, in step S2, the input of the global image discriminator network D_g is the generated frontal face image Î_f and the real frontal face image I_f; the input of the local area discriminator D_l is the local regions of the generated frontal face image Î_f and of the real frontal face image I_f.
Further, the global image discriminator network D_g and the local area discriminator D_l each comprise, connected in sequence, an input layer, first to fifth convolutional layers, a first fully-connected layer and a binary classifier. The input of D_g is the global frontal face image; the input of D_l is the local regions corresponding to the global frontal image.
Specifically, in step S2, the global image feature extraction network B_g is the pre-trained model MobileFaceNet, fine-tuned with real frontal face images I_f; the local area feature extraction network B_l is also the pre-trained model MobileFaceNet, fine-tuned with the corresponding local regions of real frontal face images I_f. During training of the whole network the parameters of B_g and B_l are fixed; their inputs are the generated frontal face image Î_f and the real frontal face image I_f.
Specifically, in step S2, training the multi-path generative adversarial network based on the symmetry prior and local discrimination specifically comprises:
S201, construct the global image multi-scale pixel loss function L_pixel, the local area pixel loss function L_pixel_l, the global image adversarial loss function L_adv_g, the local area adversarial loss function L_adv_l, the fused-feature identity discrimination loss function L_ip, the symmetry loss function L_sym, the regularization term L_tv and the total loss function L_total;
S202, using the constructed loss functions and the paired side-face I_p / frontal-face I_f images, alternately train the multi-path generator network G, the global image discriminator network D_g and the local area discriminator network D_l in turn with the mini-batch stochastic gradient descent method, obtaining the trained weights of G, D_g and D_l.
Further, the specific steps of alternately training the global image discriminator network D_g, the local area discriminator network D_l and the multi-path generator network G in turn by the mini-batch stochastic gradient descent method are:
S2021, set the training batch size n = 32 and the number of iterations t = 200; the loss function contains seven weighting parameters: λ1 = 10, λ2 = 10, λ3 = 0.1, λ4 = 0.1, λ5 = 100, λ6 = 0.3, λ7 = 0.0001;
S2022, randomly sample a batch of n samples from the side-face/frontal-face image pairs;
S2023, update the global image discriminator network D_g by the mini-batch stochastic gradient descent method;
S2024, update the local area discriminator network D_l by the mini-batch stochastic gradient descent method;
S2025, update the multi-path generator network G by the mini-batch stochastic gradient descent method;
S2026, repeat steps S2022 to S2025 until the number of iterations t is reached;
S2027, output the trained weight θ_G of the multi-path generator network G, the weight θ_{D_g} of the global image discriminator network D_g, and the weight θ_{D_l} of the local area discriminator network D_l.
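The hyper-parameters of step S2021 and the alternating scheme of steps S2022 to S2026 can be sketched together as follows (a schematic skeleton: the loss-term stand-ins are illustrative, and since the patent lists eight loss terms but seven weights, the pairing of λ1..λ7 with specific terms here is an assumption):

```python
import random

# S2021: batch size, iteration count, and the seven loss weights.
N_BATCH, T_ITERS = 32, 200
LAMBDAS = (10, 10, 0.1, 0.1, 100, 0.3, 0.0001)

def total_loss(terms, lambdas=LAMBDAS):
    """Weighted sum of seven auxiliary loss terms; which lambda pairs
    with which loss is an assumption, not stated by the patent."""
    assert len(terms) == len(lambdas)
    return sum(w * t for w, t in zip(lambdas, terms))

def train(pairs, iters=T_ITERS, batch_size=N_BATCH):
    """S2022-S2027: per sampled batch, update D_g, then D_l, then G.
    The counters stand in for one SGD step on the corresponding loss."""
    updates = {"D_g": 0, "D_l": 0, "G": 0}
    for _ in range(iters):                                         # S2026: repeat t times
        batch = random.sample(pairs, min(batch_size, len(pairs)))  # S2022: sample a batch
        updates["D_g"] += 1                                        # S2023: step on D_g's loss
        updates["D_l"] += 1                                        # S2024: step on D_l's loss
        updates["G"] += 1                                          # S2025: step on G's loss
    return updates                                                 # S2027: weights saved here
```

The key design point is that the two discriminators are refreshed before every generator step, so the generator always trains against up-to-date global and local critics.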
Another technical solution of the present invention is a generative adversarial face correction system based on symmetry and local discrimination, comprising:
a data module, for constructing pairs of side-face images I_p and frontal face images I_f under different deflection angles and using the image pairs as a training set;
a network module, for constructing a multi-path generative adversarial network based on the symmetry prior and local discrimination and training it with the training set constructed by the data module, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local area discriminator D_l, a global image feature extraction network B_g and a local area feature extraction network B_l;
a correction module, for inputting a side-face image into the multi-path generator G of the trained generative adversarial network to generate a frontal face image, completing the face correction.
Compared with the prior art, the invention has at least the following beneficial effects:
the invention relates to a method for generating an antagonistic face based on symmetry and local discrimination, which comprises the steps of firstly horizontally turning a face image with a deflection angle of-90-0 degrees according to the priori knowledge of the symmetry of the face to be used as a face image of 0-90 degrees, respectively correcting a left eye region, a nose region and a mouth region of a side face image by using a local generator, according to the inherent biological characteristic of the symmetry of a front face, taking the left eye region subjected to horizontal turning correction as a corrected right eye region, and finally fusing a local correction result by using a global generator to generate the front face image. Meanwhile, in order to enable the global generator to better fuse the local correction result, the generated front face image is extracted to obtain a corresponding local area for image discrimination and identity discrimination. The finally generated front face image and the real front face image in the binocular region can keep better consistency, and meanwhile, local texture details can be better recovered.
Further, pairs of side-face and frontal-face images under different deflection angles are constructed. The deflection angle of the side-face images I_p varies from -90° to +90° at intervals of 15°, and the image with deflection angle 0° is taken as the frontal face image I_f. According to the prior knowledge of face symmetry, images with deflection angles of -90° to 0° are horizontally flipped into images of 0° to +90°, unifying the deflection direction to the positive direction. The texture and structural deformation of the left-eye region of the side-face image is then always smaller than that of the right-eye region, and even under a large deflection angle the left-eye region can be located and segmented relatively accurately from key-point coordinates; therefore only the left-eye region needs to be corrected by a local generator.
Further, the constructed multi-path generator network G comprises a global generator G_g and a local generator group {G_l^k}. The local generators {G_l^k} separately correct the left-eye, nose and mouth regions of the input side-face image, and the corrected left-eye region is horizontally flipped to serve as the corrected right-eye region according to the inherent biological symmetry of the frontal face. In the binocular area, only the left-eye region, whose texture and structure deform little, is corrected, and its horizontal flip is taken as the corrected right-eye region, so the obtained frontal binocular regions agree better with the real face image.
Further, the global generator G_g and the local generators {G_l^k} all adopt an encoder-decoder structure: the encoder consists of a series of convolutional-layer modules and one fully-connected layer, and the decoder consists of successive deconvolutional-layer modules. In each convolutional and deconvolutional module the depth of the network is increased through a residual block, improving the learning capacity of the network.
Further, the discriminator networks comprise the global image discriminator network D_g and the local area discriminator D_l. The global image discriminator D_g discriminates whether the input global image comes from the real frontal image I_f or from the generated frontal face image Î_f; the local area discriminator D_l discriminates whether the input local region comes from the real frontal image I_f or from the generated frontal face image Î_f. Discriminating the local regions of the generated frontal image constrains the global generator to fuse the local correction results better, so that the facial-feature regions of the finally generated frontal image are recovered better.
Further, the global image discriminator network D_g and the local area discriminator D_l each consist of successive convolutional layers and two fully-connected layers; the last fully-connected layer has a single output node, used to judge whether the input image is real or generated.
Further, the global image feature extraction network B_g and the local area feature extraction network B_l perform fused global-and-local identity discrimination on the generated frontal face image; local-region matching is considered on top of preserving global image identity consistency, which can further improve face recognition accuracy.
Further, when the global image multi-scale pixel loss L_pixel, the local area pixel loss L_pixel_l, the global image adversarial loss L_adv_g and the other loss terms are used together to train the network constructed in step S2 on the face image pairs of step S1, the finally generated frontal face image not only restores global information but also restores local texture details well.
Further, the discriminator networks and the generator network of step S2 are trained by alternating iterations of stochastic gradient descent; the generator and the discriminators promote each other, so that the final generator can generate frontal face images that are as realistic as possible while keeping identity information consistent, while the discriminators cannot distinguish whether an input image comes from a real or a generated frontal face image.
In summary, the invention first unifies the face deflection direction to the positive direction according to face symmetry, corrects the left-eye, nose and mouth regions of the side-face image with local generators, and horizontally flips the corrected left-eye region as the corrected right-eye region according to the inherent biological symmetry of the frontal face; the corresponding local regions are extracted from the generated frontal face image for image discrimination and identity discrimination, so that the finally generated frontal image keeps good consistency with the real frontal image in the binocular region while local texture details are recovered well.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
FIG. 1 is a schematic diagram of a multi-path generator G according to the present invention;
FIG. 2 is a schematic structural diagram of the global image discriminator D_g and the local area discriminator D_l in the present invention;
FIG. 3 is a schematic diagram of a global image and local region feature fusion discrimination structure in the present invention;
FIG. 4 is a graph of the results of the correction of side-face images for different deflection angles;
FIG. 5 is a comparison of generation results between the present invention and TP-GAN;
FIG. 6 shows correction results of the comparison methods for face correction under different deflection angles.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
The invention provides a generative adversarial face correction method based on symmetry and local discrimination. Pairs of side-face images I_p and frontal face images I_f under different deflection angles are constructed and used as the training set. A multi-path generative adversarial network based on the symmetry prior and local discrimination is constructed and trained; the network comprises a multi-path generator network G, a global image discriminator network D_g, a local area discriminator D_l, a global image feature extraction network B_g and a local area feature extraction network B_l. A side-face image is input, and a frontal face image is generated.
Referring to fig. 1, the present invention provides a generative adversarial face correction method based on symmetry and local discrimination, which comprises the following steps:
S1, construct pairs of side-face images I_p and frontal face images I_f under different deflection angles, and use the image pairs as a training set.
The deflection angle of the side-face images I_p varies from -90° to +90° at intervals of 15°, giving 13 deflection angles in total; the image with deflection angle 0° is taken as the frontal face image I_f, and according to the prior knowledge of face symmetry, face images with deflection angles of -90° to 0° are horizontally flipped and used as face images of 0° to +90°.
S2, construct a multi-path generative adversarial network based on the symmetry prior and local discrimination, and train it with the pairs of side-face I_p and frontal-face I_f images of step S1; the generative adversarial network comprises a multi-path generator network G, a global image discriminator network D_g, a local area discriminator D_l, a global image feature extraction network B_g and a local area feature extraction network B_l.
The multi-path generator network G contains the local generator group {G_l^k} and the global generator G_g.
Each local generator comprises, connected in sequence, an input layer, first to fourth convolutional layers, first to third deconvolutional layers, fifth and sixth convolutional layers, and an output layer. The input layers are the 40 × 40 left-eye region, the 40 × 32 nose region and the 48 × 32 mouth region respectively; for the 9 layers between input and output, the filter size is 3 throughout, the strides are 1, 2, 2, 2, 2, 2, 2, 1, 1 and the numbers of feature maps are 64, 128, 256, 512, 256, 128, 64, 64, 3; the output layers are a 40 × 40 image, a 40 × 32 image and a 48 × 32 image respectively.
The inputs of the local generator group {G_l^k} are the left-eye region, nose region and mouth region of the side-face image I_p, and the outputs are the frontal left-eye, nose and mouth regions; the parameters of each convolution kernel in the network are randomly initialized to obtain the initialized network.
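Under the assumption of "same" padding (which the patent does not state explicitly), the spatial sizes inside a local generator can be traced with a few lines of arithmetic:

```python
from math import ceil

# The nine inner layers of a local generator G_l^k, per the patent's listing:
# four convolutions, three deconvolutions, two stride-1 convolutions.
LAYERS = [("conv", 1), ("conv", 2), ("conv", 2), ("conv", 2),
          ("deconv", 2), ("deconv", 2), ("deconv", 2),
          ("conv", 1), ("conv", 1)]

def trace_side(side):
    """Trace one spatial dimension through the layers: a strided convolution
    divides the size (rounding up, i.e. 'same' padding), a stride-2
    deconvolution doubles it."""
    sizes = []
    for kind, stride in LAYERS:
        side = ceil(side / stride) if kind == "conv" else side * stride
        sizes.append(side)
    return sizes
```

For the 40 × 40 eye patch this gives 40, 20, 10, 5, 10, 20, 40, 40, 40, so each patch leaves the generator at its input resolution, matching the stated output sizes.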
The global generator G_g comprises, connected in sequence, an input layer, first to fifth convolutional layers, first and second fully-connected layers, first to fourth deconvolutional layers, fifth and sixth convolutional layers, and an output layer. For the 5 layers between the input layer and the first fully-connected layer, the filter sizes are 7, 5, 3, 3, 3, the strides are 1, 2, 2, 2, 2 and the numbers of feature maps are 64, 128, 256, 512; the first fully-connected layer has 512 nodes and the second has 256. For the 5 layers between the second fully-connected layer and the output layer, the filter sizes are 8, 3, 3, 3, 3, the strides are 1, 4, 2, 2, 1 and the numbers of feature maps are 64, 32, 16, 8, 3; the output layer size is 128 × 128 × 3.
The input of the global generator G_g is the side-face image I_p, and the output is the frontal face image Î_f; the parameters of each convolution kernel in the network are randomly initialized to obtain the initialized network.
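The encoder half of the global generator can be checked with the same kind of size arithmetic, assuming "same" padding (an assumption; the patent gives only filter sizes and strides):

```python
from math import ceil

# Strides of the five convolutions before the first fully-connected layer of G_g.
ENCODER_STRIDES = [1, 2, 2, 2, 2]

def encoder_sizes(side=128):
    """Spatial size after each encoder convolution of the global generator,
    assuming 'same' padding so a stride-s layer divides the size by s."""
    sizes = []
    for s in ENCODER_STRIDES:
        side = ceil(side / s)
        sizes.append(side)
    return sizes

# A 128 x 128 input shrinks to 128, 64, 32, 16, 8 before being flattened
# into the 512-node fully-connected layer.
```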
Referring to FIG. 2, the global image discriminator network D_g and the local area discriminator D_l each comprise, connected in sequence, an input layer, first to fifth convolutional layers, a first fully-connected layer and a classifier. The input of D_g is the global frontal face image; the input of D_l is the local regions corresponding to the global frontal image.
The input layer of the global image discriminator network D_g takes an image of size 128 × 128 × 3; the 5 convolutional layers between the input and output layers all have filter size 3 and stride 2, with 64, 128, 256, 512, 512 feature maps; the fully-connected layers have 1024 and 1 nodes, and the output is a scalar.
The input of the global image discriminator network D_g is the real frontal face image I_f and the frontal face image Î_f output by the multi-path generator network G; it discriminates whether the input image comes from the real frontal image I_f or from the generated frontal face image Î_f. The parameters of each convolution kernel in the network are randomly initialized to obtain the initialized network.
The input layer of the local region discriminator D_l takes an image of size 128 × 128 × 3; the 5 convolution layers between the input layer and the output layer have filter sizes of 3, 3, 3, 3, 3, strides of 2, 2, 2, 2, 2 and feature map numbers of 64, 128, 256, 512, 512, respectively; the fully-connected layers have 1024 and 1 nodes; the output is a scalar.
The inputs of the local region discriminator network D_l are the local region of the real frontal face image I_f and the corresponding local region of the frontal face image Î_f output by the multi-path generator network G; it discriminates whether an input local region comes from the real frontal face image I_f or the generated frontal face image Î_f. The parameters of each convolution kernel in the network are randomly initialized to obtain the initialized network.
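As a quick sanity check on the discriminator dimensions above, the spatial sizes produced by five 3 × 3 convolutions with stride 2 can be computed directly. This is a sketch assuming "valid" convolutions with no padding; the patent does not state the padding scheme.

```python
def conv_out(size, kernel, stride):
    """Spatial output side length of a convolution with no padding."""
    return (size - kernel) // stride + 1

def discriminator_shapes(input_size=128, kernels=(3, 3, 3, 3, 3), strides=(2, 2, 2, 2, 2)):
    """Side lengths after each of the five convolution layers."""
    sizes = []
    s = input_size
    for k, st in zip(kernels, strides):
        s = conv_out(s, k, st)
        sizes.append(s)
    return sizes

shapes = discriminator_shapes()
print(shapes)                  # side lengths after each stride-2 convolution
print(shapes[-1] ** 2 * 512)   # flattened feature count entering the 1024-node FC layer
```

With "same" padding the sizes would instead be 64, 32, 16, 8, 4; either way the final 512-channel map is flattened into the fully-connected layers of 1024 and 1 nodes.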
The global image feature extraction network B_g is the pre-trained model MobileFaceNet, fine-tuned with the real frontal face images I_f; during training of the entire network its parameters are fixed. Its inputs are the generated frontal face image Î_f and the real frontal face image I_f, and it is used to discriminate whether the identity information of the generated frontal face image Î_f is kept unchanged. The local region feature extraction network B_l is also the pre-trained model MobileFaceNet, fine-tuned with the local regions corresponding to the real frontal face images I_f; its inputs are the local regions of the generated frontal face image Î_f and of the real frontal face image I_f, and it is used to maintain local region similarity.
Training the constructed multi-path generative adversarial network based on symmetry prior and local discrimination specifically comprises the following steps:
S201, constructing the global image multi-scale pixel loss function L_pixel, the local region pixel loss function L_pixel_l, the global image adversarial loss function L_adv_g, the local region adversarial loss function L_adv_l, the fused feature identity discrimination loss function L_ip, the symmetry loss function L_sym, the regularization term L_tv and the overall loss function L_total.
The overall loss function L_total is obtained as a weighted sum of the multi-scale pixel loss function L_pixel for the generated frontal face image Î_f and the real frontal face image I_f, the local region pixel loss function L_pixel_l for the generated and real frontal local regions, the global image adversarial loss function L_adv_g, the local region adversarial loss function L_adv_l, the fused feature identity discrimination loss function L_ip for Î_f and I_f, the symmetry loss function L_sym for Î_f, and the regularization term L_tv for Î_f:

L_total(θ_G, θ_D) = λ1·L_pixel + λ2·L_pixel_l + λ3·L_adv_g + λ4·L_adv_l + λ5·L_ip + λ6·L_sym + λ7·L_tv

where θ_G represents the parameters of the multi-path generator network G, θ_D represents the parameters of the discriminator network D, and λ1, λ2, λ3, λ4, λ5, λ6 and λ7 are hyper-parameters. Specifically:
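The weighted combination of the seven terms can be sketched in a few lines. The λ values are the ones set in step S2021 of the text; the component loss values are dummies purely for illustration.

```python
# Hyper-parameter weights λ1..λ7 as set in step S2021 of the text.
LAMBDAS = dict(pixel=10, pixel_l=1, adv_g=1, adv_l=10, ip=0.1, sym=100, tv=0.3)

def total_loss(components):
    """Weighted sum of the loss terms; `components` maps term name -> value."""
    return sum(LAMBDAS[name] * value for name, value in components.items())

# Dummy component values purely for illustration.
demo = dict(pixel=0.5, pixel_l=0.4, adv_g=0.7, adv_l=0.6, ip=2.0, sym=0.01, tv=0.02)
print(total_loss(demo))
```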
for generated front face image
Figure BDA0003113626750000126
And a real frontal face image IfPixel loss function LpixelExpressed as:
Figure BDA0003113626750000127
wherein S represents 3 scales, wherein Ws、HsC respectively represents the width, height and channel number of 3 front face images with different scales。
Figure BDA0003113626750000128
And IfRespectively representing the generated frontal face image and the real frontal face image.
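A minimal NumPy sketch of this multi-scale mean absolute pixel error; it assumes the three scales are produced by simple 2× downsampling, which the patent does not specify.

```python
import numpy as np

def downsample(img):
    """Naive 2x downsampling by taking every other pixel (illustrative only)."""
    return img[::2, ::2, :]

def multiscale_pixel_loss(gen, real, num_scales=3):
    """Mean absolute error accumulated over `num_scales` pyramid levels."""
    loss = 0.0
    for _ in range(num_scales):
        loss += np.abs(gen - real).mean()   # 1/(W_s*H_s*C) * sum of |diff|
        gen, real = downsample(gen), downsample(real)
    return loss

rng = np.random.default_rng(0)
a = rng.random((128, 128, 3))
print(multiscale_pixel_loss(a, a))  # identical images -> 0.0
```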
The local region pixel loss function L_pixel_l for the generated frontal local region Î_f_l and the real frontal local region I_f_l is expressed analogously:

L_pixel_l = 1/(W_l·H_l·C) Σ_{w,h,c} | Î_f_l(w,h,c) − I_f_l(w,h,c) |

where W_l and H_l are the width and height of the local region.
The global image adversarial loss function L_adv_g is expressed as:

L_adv_g = E[ log D_g(I_f) ] + E[ log(1 − D_g(Î_f)) ]

where Î_f and I_f respectively represent the generated frontal face image and the real frontal face image, and D_g(·) represents the output of the discriminator network D_g.
The local region adversarial loss function L_adv_l is expressed as:

L_adv_l = E[ log D_l(I_f_l) ] + E[ log(1 − D_l(Î_f_l)) ]

where Î_f_l and I_f_l respectively represent the local regions of the generated frontal face image and of the real frontal face image, and D_l(·) represents the output of the discriminator network D_l.
Referring to FIG. 3, the fused feature identity discrimination loss function L_ip for the generated frontal face image Î_f and the real frontal face image I_f is expressed as:

L_ip = ‖ [ B_g^fc(Î_f), B_l^fc(Î_f_l) ] − [ B_g^fc(I_f), B_l^fc(I_f_l) ] ‖_2

where B_g^fc(·) and B_l^fc(·) respectively represent the penultimate fully-connected layer of the global image feature extraction network and of the local region feature extraction network; [B_g^fc(Î_f), B_l^fc(Î_f_l)] and [B_g^fc(I_f), B_l^fc(I_f_l)] respectively represent the cascaded global and local region features of the generated frontal face image and of the real frontal face image; Î_f and Î_f_l represent the generated frontal face image and its corresponding local region; I_f and I_f_l represent the real frontal face image and its corresponding local region; and ‖·‖_2 represents the two-norm of a vector.
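A sketch of the fused-feature comparison, with random vectors standing in for the penultimate-layer features of B_g and B_l (the feature extractors themselves are not reproduced here):

```python
import numpy as np

def identity_loss(gen_global, gen_local, real_global, real_local):
    """Two-norm between the concatenated (global ++ local) feature vectors."""
    fused_gen = np.concatenate([gen_global, gen_local])
    fused_real = np.concatenate([real_global, real_local])
    return np.linalg.norm(fused_gen - fused_real)   # vector 2-norm

rng = np.random.default_rng(1)
g_g, g_l = rng.random(128), rng.random(128)
print(identity_loss(g_g, g_l, g_g, g_l))  # identical features -> 0.0
```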
The symmetry loss function L_sym for the generated frontal face image Î_f is expressed as:

L_sym = 1/((W/2)·H·C) Σ_{w=1}^{W/2} Σ_{h=1}^{H} Σ_{c=1}^{C} | Î_f(w,h,c) − Î_f(W − w + 1, h, c) |

where W, H and C respectively represent the width, height and channel number of the frontal face image, and Î_f represents the generated frontal face image.
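A NumPy sketch of this horizontal-symmetry penalty; a perfectly mirror-symmetric image scores zero. Averaging over the full width double-counts each pixel pair relative to the W/2 normalization above, which only rescales the loss by a constant.

```python
import numpy as np

def symmetry_loss(img):
    """Mean absolute difference between the image and its horizontal mirror.

    `img` has shape (H, W, C); comparing against the flip covers each
    pixel pair (w, W - w + 1) described in the text.
    """
    return np.abs(img - img[:, ::-1, :]).mean()

sym = np.ones((4, 4, 3))            # constant image is trivially symmetric
asym = np.zeros((4, 4, 3))
asym[:, :2, :] = 1.0                # left half bright, right half dark
print(symmetry_loss(sym), symmetry_loss(asym))
```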
The regularization term L_tv for the generated frontal face image Î_f is the total variation of the generated image:

L_tv = 1/(W·H·C) Σ_{w,h,c} ( | Î_f(w+1,h,c) − Î_f(w,h,c) | + | Î_f(w,h+1,c) − Î_f(w,h,c) | )

where W, H and C respectively represent the width, height and channel number of the frontal face image, and Î_f represents the generated frontal face image.
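A sketch of the total-variation term as an anisotropic TV (mean absolute difference of neighbouring pixels along width and height; the exact normalization in the patent's figure is assumed):

```python
import numpy as np

def tv_loss(img):
    """Anisotropic total variation: neighbour differences along height and width."""
    dh = np.abs(img[1:, :, :] - img[:-1, :, :]).mean()
    dw = np.abs(img[:, 1:, :] - img[:, :-1, :]).mean()
    return dh + dw

flat = np.full((8, 8, 3), 0.5)                              # constant image: no variation
noisy = np.indices((8, 8))[1][..., None] % 2 * np.ones(3)   # alternating vertical stripes
print(tv_loss(flat), tv_loss(noisy))
```

Penalizing TV discourages high-frequency artifacts in the generated frontal face while leaving smooth regions untouched.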
S202, using the constructed loss function, the side face I_p and frontal face I_f image pairs and the batch stochastic gradient descent method, alternately training the multi-path generator network G, the global image discriminator D_g and the local region discriminator D_l in sequence to obtain the trained weights of the multi-path generator network G, the global image discriminator D_g and the local region discriminator D_l.
The specific steps of alternately training the multi-path generator network G, the global image discriminator D_g and the local region discriminator D_l in sequence by the batch stochastic gradient descent method are as follows:
S2021, setting the training batch size n = 32, the number of iterations t = 200, and the weighting parameters in the loss function to λ1 = 10, λ2 = 1, λ3 = 1, λ4 = 10, λ5 = 0.1, λ6 = 100, λ7 = 0.3 and λ8 = 0.0001;
S2022, randomly sampling a batch of n samples from the side face-frontal face image pairs;
S2023, updating the global image discriminator network D_g by the batch stochastic gradient descent method;
S2024, updating the local region discriminator network D_l by the batch stochastic gradient descent method;
S2025, updating the multi-path generator network G by the batch stochastic gradient descent method;
S2026, repeating steps S2022 to S2025 until the number of iterations t is reached;
S2027, outputting the weight θ_G of the multi-path generator network G, the weight θ_Dg of the global image discriminator network D_g and the weight θ_Dl of the local region discriminator network D_l after training.
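The alternation in steps S2022-S2026 can be sketched as a plain loop; the update functions here are stubs standing in for the stochastic-gradient steps (no real network is trained), so the sketch only shows the call order and cadence.

```python
def train(n_iters=200, batch_size=32, sample_batch=None,
          update_dg=None, update_dl=None, update_g=None):
    """Alternate D_g, D_l and G updates for `n_iters` iterations (skeleton)."""
    log = []
    for t in range(n_iters):
        batch = sample_batch(batch_size)   # S2022: sample n image pairs
        update_dg(batch)                   # S2023: global discriminator step
        update_dl(batch)                   # S2024: local discriminator step
        update_g(batch)                    # S2025: generator step
        log.append(t)
    return log                             # S2027 would export the trained weights

# Stub closures that just count calls, to make the cadence visible.
calls = {"dg": 0, "dl": 0, "g": 0}
history = train(
    n_iters=3,
    sample_batch=lambda n: [None] * n,
    update_dg=lambda b: calls.__setitem__("dg", calls["dg"] + 1),
    update_dl=lambda b: calls.__setitem__("dl", calls["dl"] + 1),
    update_g=lambda b: calls.__setitem__("g", calls["g"] + 1),
)
print(len(history), calls)
```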
S3, inputting the side face image to the multi-path generator G of the generative adversarial network trained in step S2 to generate a frontal face image for face correction.
In another embodiment of the present invention, a generative adversarial face correction system based on symmetry and local discrimination is provided, which can be used to implement the above generative adversarial face correction method based on symmetry and local discrimination.
The data module is used for constructing paired side face images I_p and frontal face images I_f at different deflection angles and using the image pairs as a training set;
the network module is used for constructing a multi-path generative adversarial network based on symmetry prior and local discrimination and training it with the training set constructed in step S1, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local region discriminator D_l, a global image feature extraction network B_g and a local region feature extraction network B_l;
and the correction module is used for inputting the side face image to the multi-path generator G of the generative adversarial network trained in step S2 to generate a frontal face image for face correction.
In yet another embodiment of the present invention, a terminal device is provided that includes a processor and a memory for storing a computer program comprising program instructions, the processor being configured to execute the program instructions stored in the computer storage medium. The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.; it is the computing core and control core of the terminal and is adapted to load and execute one or more instructions to implement a corresponding method flow or function. The processor of the embodiment of the invention can be used to perform the operations of the generative adversarial face correction method based on symmetry and local discrimination, comprising the following steps:
constructing paired side face images I_p and frontal face images I_f at different deflection angles and using the image pairs as a training set; constructing a multi-path generative adversarial network based on symmetry prior and local discrimination and training it with the training set, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local region discriminator D_l, a global image feature extraction network B_g and a local region feature extraction network B_l; and inputting a side face image to the multi-path generator G of the trained generative adversarial network to generate a frontal face image for face correction.
In still another embodiment, the present invention further provides a storage medium, specifically a computer-readable storage medium (memory), which is a memory device in the terminal device used for storing programs and data. It is understood that the computer-readable storage medium here may include a built-in storage medium of the terminal device and may also include an extended storage medium supported by the terminal device. The computer-readable storage medium provides a storage space storing the operating system of the terminal. One or more instructions, which may be one or more computer programs (including program code), are stored in this storage space and are adapted to be loaded and executed by the processor. It should be noted that the computer-readable storage medium may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory.
One or more instructions stored in the computer-readable storage medium can be loaded and executed by the processor to implement the corresponding steps of the generative adversarial face correction method based on symmetry and local discrimination in the above embodiments; one or more instructions in the computer-readable storage medium are loaded by the processor to perform the following steps:
constructing paired side face images I_p and frontal face images I_f at different deflection angles and using the image pairs as a training set; constructing a multi-path generative adversarial network based on symmetry prior and local discrimination and training it with the training set, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local region discriminator D_l, a global image feature extraction network B_g and a local region feature extraction network B_l; and inputting a side face image to the multi-path generator G of the trained generative adversarial network to generate a frontal face image for face correction.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The effect of the present invention will be further described with reference to the simulation diagrams.
1. Simulation conditions
The hardware platform for the simulation of the invention is HP Z840; the software platform is PyTorch. The multi-pose face dataset used by the invention is M²FPA, with 49140 training set samples and 19824 test set samples.
2. Simulation content and results
Referring to fig. 4, the correction results for side face images at different deflection angles are shown: the deflection angles of the side face images from left to right are 15°, 30°, 45°, 60°, 75° and 90°; in each image pair the left is the input side face image and the right is the generated frontal face image, and the last column is the real frontal face image.
The method of the invention was tested under the above simulation conditions, training the identity-constraint-based generative adversarial network with the paired side face and frontal face images in the training set. The results shown in fig. 4 were obtained: the first and second rows are correction results for face deflection angles of +15°, +30°, +45°, +60°, +75° and +90° from left to right, and the third row shows correction results for deflection angles of -15°, -30°, -45°, -60°, -75° and -90° from left to right; in each image pair of each row the left is the input side face image and the right is the generated frontal face image, and the last column is the real frontal face image. The results shown in fig. 5 were also obtained: the first, third and fifth rows respectively show the TP-GAN-based correction results for side face images with deflection angles of -45°, +60° and +75°, and the second, fourth and sixth rows respectively show the SL-GAN-based correction results for side face images with deflection angles of -45°, +60° and +75°.
Each row shows, from left to right, the input side face image, the input local region, the generated frontal face image and the real frontal face image. As shown in fig. 6, the first column is the input side face images at different deflection angles (30°, 45°, 60°, 75° and 90° from top to bottom), the second column is the correction results of the invention, the third to fifth columns are the correction results of the comparison methods, and the last column is the real frontal face image.
Table 1 shows the face recognition results of the comparison methods related to face correction at different deflection angles. Viewed according to rows, the deflection angles of each row from left to right are respectively 15 degrees, 30 degrees, 45 degrees, 60 degrees, 75 degrees and 90 degrees, and the last row is the face recognition result of the invention.
TABLE 1 (%)
The results in Table 1 show that the invention improves face recognition accuracy at large deflection angles compared with TP-GAN.
In summary, the generative adversarial face correction method and system based on symmetry and local discrimination of the present invention unify the face deflection direction into the positive direction according to the symmetry of the face, use local generators to correct the left-eye region, nose region and mouth region of the side face image, and horizontally flip the corrected left-eye region to serve as the corrected right-eye region according to the inherent biological characteristic of facial symmetry; corresponding local regions are extracted from the generated frontal face image for image discrimination and identity discrimination, so that the finally generated frontal face image keeps better consistency with the real frontal face image in the binocular region while local texture details are better recovered.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. A generative adversarial face correction method based on symmetry and local discrimination, characterized by comprising the following steps:
S1, constructing paired side face images I_p and frontal face images I_f at different deflection angles and using the image pairs as a training set;
S2, constructing a multi-path generative adversarial network based on symmetry prior and local discrimination and training it with the training set constructed in step S1, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local region discriminator D_l, a global image feature extraction network B_g and a local region feature extraction network B_l;
S3, inputting the side face image to the multi-path generator G of the generative adversarial network trained in step S2 to generate a frontal face image for face correction.
2. The method according to claim 1, wherein in step S1 the deflection angle of the side face image I_p varies from -90° to +90° at intervals of 15°, giving 13 face deflection angles in total; the face image with a deflection angle of 0° is taken as the frontal face image I_f, the face images with deflection angles from -90° to 0° are horizontally flipped to serve as face images with deflection angles from 0° to +90°, and the face deflection direction is unified as the positive deflection direction.
3. The method of claim 1, wherein in step S2 the multi-path generator network G comprises a global generator G_g and a local generator group G_l; the inputs of the local generator group G_l are respectively the left-eye region, nose region and mouth region of the side face image I_p and its outputs are the frontal left-eye region, frontal nose region and frontal mouth region; the corrected left-eye region is horizontally flipped to serve as the corrected right-eye region, and the corrected left-eye, right-eye, nose and mouth regions are position-aggregated based on the frontal face key points; the input of the global generator G_g is the global side face image I_p together with the local corrected regions, and its output is the corrected frontal face image Î_f.
4. The method of claim 3, wherein each local generator in the local generator group comprises an input layer, a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth convolution layer, a fifth convolution layer and an output layer which are connected in sequence;
the global generator G_g comprises an input layer, a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a first fully-connected layer, a second fully-connected layer, a first deconvolution layer, a second deconvolution layer, a third deconvolution layer, a fourth deconvolution layer, a fifth convolution layer, a sixth convolution layer and an output layer which are sequentially connected.
5. The method according to claim 1, wherein in step S2 the input of the global image discriminator network D_g is the generated frontal face image Î_f and the real frontal face image I_f, and the input of the local region discriminator D_l is the local region of the generated frontal face image Î_f and the local region of the real frontal face image I_f.
6. The method of claim 5, wherein the global image discriminator network D_g and the local region discriminator D_l each comprise an input layer, a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a first fully-connected layer and a two-class classifier which are connected in sequence; the input of the global image discriminator network D_g is the global frontal face image, and the input of the local region discriminator D_l is the local region corresponding to the global frontal face image.
7. The method according to claim 1, wherein in step S2 the global image feature extraction network B_g is the pre-trained model MobileFaceNet, fine-tuned with the real frontal face images I_f; the local region feature extraction network B_l is also the pre-trained model MobileFaceNet, fine-tuned with the local regions corresponding to the real frontal face images I_f; during training of the entire network the parameters of B_g and B_l are fixed, and their inputs are the generated frontal face image Î_f and the real frontal face image I_f.
8. The method according to claim 1, wherein in step S2, training the constructed multi-path generative adversarial network based on symmetry prior and local discrimination specifically comprises:
S201, constructing the global image multi-scale pixel loss function L_pixel, the local region pixel loss function L_pixel_l, the global image adversarial loss function L_adv_g, the local region adversarial loss function L_adv_l, the fused feature identity discrimination loss function L_ip, the symmetry loss function L_sym, the regularization term L_tv and the overall loss function L_total;
S202, using the constructed loss function, the side face I_p and frontal face I_f image pairs and the batch stochastic gradient descent method, alternately training the multi-path generator network G, the global image discriminator network D_g and the local region discriminator network D_l in sequence to obtain the trained weights of the multi-path generator network G, the global image discriminator network D_g and the local region discriminator network D_l.
9. The method according to claim 8, wherein the specific steps of alternately training the global image discriminator network D_g, the local region discriminator network D_l and the multi-path generator network G in sequence by the batch stochastic gradient descent method are as follows:
S2021, setting the training batch size n = 32, the number of iterations t = 200, and the seven weighting parameters in the loss function to λ1 = 10, λ2 = 10, λ3 = 0.1, λ4 = 0.1, λ5 = 100, λ6 = 0.3 and λ7 = 0.0001;
S2022, randomly sampling a batch of n samples from the side face-frontal face image pairs;
S2023, updating the global image discriminator network D_g by the batch stochastic gradient descent method;
S2024, updating the local region discriminator network D_l by the batch stochastic gradient descent method;
S2025, updating the multi-path generator network G by the batch stochastic gradient descent method;
S2026, repeating steps S2022 to S2025 until the number of iterations t is reached;
S2027, outputting the weight θ_G of the multi-path generator network G, the weight θ_Dg of the global image discriminator network D_g and the weight θ_Dl of the local region discriminator network D_l after training.
10. A generative adversarial face correction system based on symmetry and local discrimination, comprising:
a data module for constructing paired side face images I_p and frontal face images I_f at different deflection angles and using the image pairs as a training set;
a network module for constructing a multi-path generative adversarial network based on symmetry prior and local discrimination and training it with the training set constructed in step S1, the generative adversarial network comprising a multi-path generator network G, a global image discriminator network D_g, a local region discriminator D_l, a global image feature extraction network B_g and a local region feature extraction network B_l; and
a correction module for inputting the side face image to the multi-path generator G of the generative adversarial network trained in step S2 to generate a frontal face image for face correction.
CN202110657280.3A 2021-06-11 2021-06-11 Symmetrical and local discrimination-based face correction method and system for generating countermeasure Active CN113378721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110657280.3A CN113378721B (en) 2021-06-11 2021-06-11 Symmetrical and local discrimination-based face correction method and system for generating countermeasure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110657280.3A CN113378721B (en) 2021-06-11 2021-06-11 Symmetrical and local discrimination-based face correction method and system for generating countermeasure

Publications (2)

Publication Number Publication Date
CN113378721A true CN113378721A (en) 2021-09-10
CN113378721B CN113378721B (en) 2023-08-18

Family

ID=77574287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110657280.3A Active CN113378721B (en) 2021-06-11 2021-06-11 Symmetrical and local discrimination-based face correction method and system for generating countermeasure

Country Status (1)

Country Link
CN (1) CN113378721B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516604A (en) * 2021-09-14 2021-10-19 成都数联云算科技有限公司 Image restoration method
CN113837933A (en) * 2021-11-26 2021-12-24 北京市商汤科技开发有限公司 Network training and image generation method and device, electronic equipment and storage medium
CN114240950A (en) * 2021-11-23 2022-03-25 电子科技大学 Brain tumor image generation and segmentation method based on deep neural network

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019015466A1 (en) * 2017-07-17 2019-01-24 广州广电运通金融电子股份有限公司 Method and apparatus for verifying person and certificate
CN109815928A (en) * 2019-01-31 2019-05-28 中国电子进出口有限公司 A kind of face image synthesis method and apparatus based on confrontation study
CN110222668A (en) * 2019-06-17 2019-09-10 苏州大学 Based on the multi-pose human facial expression recognition method for generating confrontation network
CN110738161A (en) * 2019-10-12 2020-01-31 电子科技大学 face image correction method based on improved generation type confrontation network
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change
CN111046900A (en) * 2019-10-25 2020-04-21 重庆邮电大学 Semi-supervised generation confrontation network image classification method based on local manifold regularization

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Wei Xing; Yang Guoqiang; Li Jia; Lu Yang; Shi Lei: "Underground track detection combining multi-scale conditional generative adversarial networks", Journal of Image and Graphics, no. 02 *
Lin Leping; Li Sanfeng; Ouyang Ning: "Face correction method based on a multi-pose feature fusion generative adversarial network", Journal of Computer Applications, no. 10 *
Huang Fei; Gao Fei; Zhu Jingjie; Dai Lingna; Yu Jun: "Heterogeneous face image synthesis based on generative adversarial networks: progress and challenges", Journal of Nanjing University of Information Science & Technology (Natural Science Edition), no. 06 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113516604A (en) * 2021-09-14 2021-10-19 成都数联云算科技有限公司 Image restoration method
CN113516604B (en) * 2021-09-14 2021-11-16 成都数联云算科技有限公司 Image restoration method
CN114240950A (en) * 2021-11-23 2022-03-25 University of Electronic Science and Technology of China Brain tumor image generation and segmentation method based on a deep neural network
CN114240950B (en) * 2021-11-23 2023-04-07 University of Electronic Science and Technology of China Brain tumor image generation and segmentation method based on a deep neural network
CN113837933A (en) * 2021-11-26 2021-12-24 Beijing SenseTime Technology Development Co., Ltd. Network training and image generation method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN113378721B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN113378721A (en) Method and system for generating confrontation face correction based on symmetry and local discrimination
JP6159489B2 (en) Face authentication method and system
Choe et al. Face generation for low-shot learning using generative adversarial networks
CN109416727B (en) Method and device for removing glasses in face image
Shrivastava et al. Multiple kernel learning for sparse representation-based classification
CN106415594B (en) Method and system for face verification
Liu et al. Multi-channel pose-aware convolution neural networks for multi-view facial expression recognition
CN113255788B (en) Method and system for generative adversarial network face correction based on two-stage mask guidance
US11176457B2 (en) Method and apparatus for reconstructing 3D microstructure using neural network
Hara et al. Towards good practice for action recognition with spatiotemporal 3d convolutions
CN110826462A (en) Human behavior recognition method using a non-local two-stream convolutional neural network model
CN110188667B (en) Face rectification method based on a three-party adversarial generative network
Chen et al. Mask dynamic routing to combined model of deep capsule network and u-net
CN113239870B (en) Face correction method and system based on an identity-constrained generative adversarial network
Kecheril Sadanandan et al. Spheroid segmentation using multiscale deep adversarial networks
CN116704079B (en) Image generation method, device, equipment and storage medium
CN108446661A (en) A parallelized deep learning face recognition method
Kolen et al. Scenes from exclusive-or: Back propagation is sensitive to initial conditions
Liu et al. Deep learning and its application to general image classification
CN113239866B (en) Face recognition method and system based on space-time feature fusion and sample attention enhancement
EP3929822A1 (en) Neuromorphic apparatus and method with neural network
CN110210419A (en) Scene recognition system and model generation method for high-resolution remote sensing images
Zhang et al. Landmark-guided local deep neural networks for age and gender classification
Liu et al. Large kernel refine fusion net for neuron membrane segmentation
CN105469101A (en) Mixed two-dimensional probabilistic principal component analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant