CN108334816B - Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network - Google Patents

Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network

Info

Publication number
CN108334816B
CN108334816B (application CN201810033455.1A)
Authority
CN
China
Prior art keywords
network, image, face, convolution, reconstructed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810033455.1A
Other languages
Chinese (zh)
Other versions
CN108334816A (en)
Inventor
欧阳宁
刘力元
林乐平
莫建文
袁华
首照宇
张彤
陈利霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN201810033455.1A
Publication of CN108334816A
Application granted
Publication of CN108334816B
Expired - Fee Related (current legal status)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G06F18/2132 Feature extraction based on discrimination criteria, e.g. discriminant analysis
    • G06F18/21322 Rendering the within-class scatter matrix non-singular
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Abstract

The invention discloses a multi-pose face recognition method based on a contour-symmetry-constrained generative adversarial network, characterized by comprising the following steps: 1) data preprocessing; 2) contour-constraint generation network; 3) symmetry-constraint adversarial network; 4) training the balanced network; 5) reconstruction and recognition. The method effectively counters the effect of pose-angle deflection in face images and extracts features that are more robust across multiple poses; in particular, under large-angle pose reconstruction, global quality and local detail constrain each other and the contour feature information of the frontal face is preserved, meeting the high-accuracy requirements of multi-pose face recognition in practical applications.

Description

Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network
Technical Field
The invention relates to the field of intelligent image processing and pattern recognition, and in particular to a multi-pose face recognition method based on a contour-Symmetry-Constrained Generative Adversarial Network (SC-GAN).
Background
Multi-pose face recognition has been a hot topic in machine vision research in recent years; in particular, the introduction and rise of deep learning have brought significant progress and rapid development to face recognition technology in many fields. In practice, however, face images are susceptible to factors such as environment, illumination, expression and pose, all of which degrade recognition accuracy; among these, face pose is a particularly challenging problem.
To address the intra-class variation caused by pose change in face recognition, researchers have achieved notable results; the current mainstream techniques fall into two categories, 2D and 3D. The main 2D methods include Stacked Progressive Auto-Encoders (SPAE), which progressively reconstruct a non-frontal face image into a frontal one through shallow progressive auto-encoding, and Deep Convolutional Neural Networks (DCNN), which extract identity-preserving features from faces under different poses and illuminations and use these features to reconstruct a frontal face image under normal illumination. Although such methods are easy to implement and extract robust features, too much local texture information is lost, which lowers the quality of the reconstructed frontal face and harms subsequent recognition performance. 3D methods mainly fit a 3D face model to the 2D face by estimating depth information and minimizing reconstruction differences, then normalize to a unified face view; for example, the View-based Active Appearance Model (VAAM) generates a virtual view of the test image from 3D face data and compares it with the synthesized frontal face image. Such methods require large amounts of depth information, and both the fitting and the computation are demanding. In recent years, generative adversarial networks have drawn intense interest from researchers for their excellent performance on visual perception tasks. The model consists of a generator and a discriminator: the generator captures the distribution of real data samples and generates new sample data, while the discriminator distinguishes real samples from generated data until a balance of true-false discrimination is reached. As an unsupervised learning model, the generative adversarial network can effectively solve a range of data-generation problems and offers richer self-learning capability in feature extraction. Because it is strong at restoring the richness and saturation of images, it provides a new line of thought for handling faces affected by pose.
Disclosure of Invention
The object of the invention is to provide, in view of the shortcomings of the prior art, a multi-pose face recognition method based on a contour-symmetry-constrained generative adversarial network. The method effectively counters the effect of pose-angle deflection in face images and extracts features that are more robust across multiple poses; in particular, under large-angle pose reconstruction, global quality and local detail constrain each other and the contour feature information of the frontal face is preserved, meeting the high-accuracy requirements of multi-pose face recognition in practical applications.
The technical solution realizing the object of the invention is as follows:
the multi-pose face recognition method based on the contour symmetric constraint generation type countermeasure network comprises the following steps:
1) Data preprocessing: to establish better frontal-face template features, a multi-pose face database is divided into training images and test images, and both are normalized;
2) Contour-constraint generation network: images of arbitrary pose under normal illumination are first processed by convolution and pooling, and the output frontal face image serves as the output of the generation network. Let the face image input to the generation network be of size w × h, and let the arbitrary-pose image under normal illumination be x_0. The generation network is a convolutional neural network composed of two convolutional layers and two pooling layers, where W_i^1 is the weight-matrix mapping feature map generated by the first convolutional layer, W_i^2 is the feature map generated by the second convolutional layer, and V_1, V_2 are the first- and second-layer pooling operations. Both layers use the ReLU function as activation, and the feature map obtained from image x_0 through the two-layer convolutional network is x_i^2:

x_i^2 = V_2(σ(W_i^2 ⊗ V_1(σ(W_i^1 ⊗ x_0))))    (1)

where σ denotes the activation function and ⊗ denotes convolution. A frontal-face contour histogram is added to the image reconstructed by the generation network to constrain the quality of the global features: each pixel (i, j) of the frontal face image f(x, y) is taken as a window center, the gradients of that center pixel in the m and n directions are computed within the given window, W is the convolution mapping feature map, and the edge direction at each image coordinate is obtained through the gradient angle θ = arctan(g_n / g_m). The resulting edge map is convolved with the third-layer convolution mapping feature map W^3 output by the convolutional network, and its tensor product with the feature map x_i^2 yields the reconstructed frontal face y_i:

y_i = σ(W^3 ⊗ x_i^2)    (2)
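The gradient step behind the contour histogram can be sketched as follows; this is a minimal NumPy illustration, assuming simple finite differences for the m- and n-direction gradients and a magnitude-weighted orientation histogram (the patent does not fix the gradient operator, window, or bin count, so `contour_histogram` and `n_bins` are illustrative names):

```python
import numpy as np

def contour_histogram(f, n_bins=16):
    """Gradient-direction histogram of a frontal face image f(x, y).

    Every pixel (i, j) is taken as a window center; the gradients in the
    m (horizontal) and n (vertical) directions give an edge direction
    theta per coordinate, accumulated into a magnitude-weighted histogram.
    Finite differences stand in for the unspecified gradient operator.
    """
    g_m = np.gradient(f.astype(np.float64), axis=1)  # m-direction gradient
    g_n = np.gradient(f.astype(np.float64), axis=0)  # n-direction gradient
    theta = np.arctan2(g_n, g_m)                     # edge direction per pixel
    magnitude = np.hypot(g_m, g_n)                   # edge strength as weight
    hist, _ = np.histogram(theta, bins=n_bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    return hist / (hist.sum() + 1e-8)                # normalized contour histogram
```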
For the reconstructed frontal face based on the convolutional network, the learning parameters of the generation network are continuously updated and back-propagated by gradient descent; that is, the updated parameter is the sum of an intermediate variable and the learning parameter of the previous iteration:

W^(k+1) = W^k + A

where A denotes the intermediate variable, W^(k+1) is the back-propagated feature parameter continuously updated by the gradient-descent method, and k is the count of successive learning propagations. The intermediate variable is given by the product of the back-propagated error term e_i and the feature variable x_(i-1):

A = -η · e_i · x_(i-1)

where the feature variable x_(i-1) is the feature of the preceding layer and η is the gradient-descent step; from this the back-propagation error e_i can be derived and the error terms inverted at each layer;
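For illustration, the generation network of step 2) can be read as the following minimal PyTorch sketch; the 5×5 convolutions and 3×3 pooling come from the embodiment below, while the channel widths, the final upsampling stage, and the Tanh output range are assumptions the patent leaves open:

```python
import torch
import torch.nn as nn

class ContourConstraintGenerator(nn.Module):
    """Sketch of the contour-constraint generation network (step 2).

    Two conv+ReLU+pool stages produce the feature map x_i^2 (formula (1));
    a third convolution W^3 maps it towards the reconstructed frontal face
    y_i (formula (2)). Channel widths and the upsampling back to w x h are
    assumptions not fixed by the patent text.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2),        # W^1
            nn.ReLU(inplace=True),                             # sigma
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),  # V_1
            nn.Conv2d(32, 64, kernel_size=5, padding=2),       # W^2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),  # V_2 -> x_i^2
        )
        self.reconstruct = nn.Sequential(
            nn.Conv2d(64, 32, kernel_size=3, padding=1),       # W^3
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Tanh(),                                         # frontal face y_i
        )

    def forward(self, x0):
        # x0: (B, 1, 64, 64) arbitrary-pose input under normal illumination
        return self.reconstruct(self.features(x0))
```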
3) Symmetry-constraint adversarial network: the adversarial network is a discrimination network similar to a comparator. The reconstructed image ŷ output by the generation network in step 2) and the real frontal face data y serving as the expected output are fed into a cost function that distinguishes true from false. To make the network reach the expected output for the corresponding sample, this part fuses, at the discriminator, the pixel level of the reconstructed sample ŷ with the pixel level of the real image y, introducing the discriminant loss as the fused pixel loss function L_r:

L_r = (1 / (w·h)) Σ_{i=1..w} Σ_{j=1..h} |ŷ(i, j) − y(i, j)|    (3)

According to the symmetry of the human face, half the width of every input image is covered: the right-side coordinates are progressively occluded and the feature points are progressively described through the absolute value of the difference of reconstructed sample images, introducing a symmetry loss L_sl aimed at face correction during pose reconstruction, which resolves the symmetry between the visible part and the covered part of the reconstruction:

L_sl = (2 / (w·h)) Σ_{i=1..w/2} Σ_{j=1..h} |ŷ(i, j) − ŷ(w − i + 1, j)|    (4)

The final loss function therefore weights formula (3) and formula (4): L_syn = L_r + λ1·L_sl + λ2·L_cee, where L_cee is a cross-entropy loss function constraining the hidden activations, and λ1 and λ2 are coefficients balancing the penalty terms. After the final loss function is defined, a back-propagation algorithm alternately updates the weights W and biases b of the generation network and the adversarial network, balancing the parameters of the two networks:

W ← W − μ·∂L_syn/∂W,  b ← b − μ·∂L_syn/∂b

where μ is the update step;
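Read literally, the fused pixel loss of formula (3), the symmetry loss of formula (4), and their weighted combination L_syn can be sketched in PyTorch as follows; the L1 form of both losses and the placeholder values of λ1 and λ2 are assumptions consistent with, but not fixed by, the text:

```python
import torch
import torch.nn.functional as F

def pixel_loss(y_hat, y):
    """L_r: pixel-level discriminant loss fusing reconstructed samples
    y_hat with real frontal images y (formula (3), assumed L1 form)."""
    return torch.mean(torch.abs(y_hat - y))

def symmetry_loss(y_hat):
    """L_sl: penalizes asymmetry between the visible half and the covered
    half of the reconstructed face (formula (4), assumed mirror-L1 form)."""
    mirrored = torch.flip(y_hat, dims=[-1])  # reflect across the vertical axis
    half = y_hat.shape[-1] // 2
    return torch.mean(torch.abs(y_hat[..., :half] - mirrored[..., :half]))

def synthesis_loss(y_hat, y, d_logits, lambda1=0.3, lambda2=1e-3):
    """L_syn = L_r + lambda1 * L_sl + lambda2 * L_cee; lambda1 and lambda2
    are balance coefficients whose values here are placeholders."""
    l_cee = F.binary_cross_entropy_with_logits(
        d_logits, torch.ones_like(d_logits))  # cross-entropy (adversarial) term
    return pixel_loss(y_hat, y) + lambda1 * symmetry_loss(y_hat) + lambda2 * l_cee
```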
4) Training the balanced network: after the above three steps, the network independently and alternately updates its parameters in iterations. In the first pass the generation network G is fixed and the discrimination network D is trained to maximize the discrimination accuracy; in the second pass the discrimination network D is fixed and the generation network G is trained to minimize the discrimination accuracy, until the generated image is almost indistinguishable from the real data y; that is, for either a true or a false sample, the adversarial network converts the output two-dimensional matrix values into a probability value p;
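The alternating schedule of step 4) is the standard adversarial training loop; the following is a sketch under the assumptions that D outputs logits and that the optimizers are supplied by the caller (optimizer choice and learning rates are not given in the patent):

```python
import torch

def train_step(G, D, x_pose, y_front, opt_g, opt_d,
               bce=torch.nn.BCEWithLogitsLoss()):
    """One balance iteration: first fix G and train D to maximize
    discrimination accuracy, then fix D and train G to minimize it."""
    # --- first pass: fix G, update D ---
    with torch.no_grad():
        y_fake = G(x_pose)
    d_real, d_fake = D(y_front), D(y_fake)
    loss_d = (bce(d_real, torch.ones_like(d_real)) +
              bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- second pass: fix D, update G ---
    y_hat = G(x_pose)
    loss_g = synthesis_loss(y_hat, y_front, D(y_hat))  # L_syn from the sketch above
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```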
5) Reconstruction and recognition: test images with different pose angles are input into the balanced, trained generative adversarial network to obtain the generator output image ŷ as the reconstructed image. Linear discriminant analysis (LDA) is applied separately to the reconstructed frontal face image and to the features of the network's top hidden layer to reduce dimensionality and extract discriminative face features, and a nearest-neighbor classifier completes the face recognition.
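Step 5) can be illustrated with scikit-learn: LDA projects the reconstructed frontal faces, or the top hidden-layer features, into a discriminative subspace, and a nearest-neighbor classifier assigns identities. A minimal sketch, assuming flattened feature vectors and illustrative function names:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def recognize(train_feats, train_ids, test_feats):
    """LDA dimensionality reduction followed by nearest-neighbor matching.

    train_feats / test_feats: (N, d) flattened reconstructed frontal faces
    or top-hidden-layer features; train_ids: (N,) identity labels.
    """
    lda = LinearDiscriminantAnalysis()        # projects to <= n_classes - 1 dims
    z_train = lda.fit_transform(train_feats, train_ids)
    z_test = lda.transform(test_feats)
    nn = KNeighborsClassifier(n_neighbors=1)  # nearest-neighbor classifier
    nn.fit(z_train, train_ids)
    return nn.predict(z_test)
```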
The method uses the nonlinear modeling capability of a convolutional neural network as the generator of the network; by correcting the multi-pose viewing angle layer by layer, the features corresponding to each pose change gain robustness, the contour edges are constrained by the frontal-face histogram, and the reconstruction quality of multi-pose, multi-angle images is ensured. Based on the ability of the generative adversarial network to model distributions, real frontal face data is used at the discriminator for true-false discrimination; with the symmetry-loss constraint introduced, the frontal-face features reconstructed at the discriminator carry more real face information, and the reconstructed face data is thoroughly screened and distinguished, so that the reconstructed face image highlights more detailed feature information, is smooth overall with fewer noise points, and recognition efficiency can be greatly improved.
The method effectively counters the effect of pose-angle deflection in face images and extracts features that are more robust across multiple poses; in particular, under large-angle pose reconstruction, global quality and local detail constrain each other, the contour feature information of the frontal face is preserved, and the high-accuracy requirements of multi-pose face recognition in practical applications are met.
Drawings
FIG. 1 is a schematic flow chart of the method of the embodiment;
FIG. 2 is an overall block diagram of the contour-symmetry-constrained generative adversarial network in the embodiment;
FIG. 3 shows partially reconstructed frontal face images between +75° and −75° on the Multi-PIE data set in the embodiment;
FIG. 4 is a comparison of different reconstruction methods at 75° in the embodiment.
Detailed Description
The present invention will be described in further detail with reference to the following drawings and examples, but the present invention is not limited thereto.
Example:
referring to fig. 1 and 2, the multi-pose face recognition method based on the contour symmetric constraint generation type confrontation network includes the following steps:
1) Data preprocessing: to establish better frontal-face template features, a multi-pose face database is divided into training images and test images, and both are normalized. In this example the validity of the technical solution is verified on the Multi-PIE face image library, which contains a total of 754,204 face images of 337 individuals with different poses, illuminations and expressions. One sub-image set is taken, comprising 11 poses in the angle range −75° to +75° at 15° intervals; the selected face data all show normal expression under normal illumination. The images in the database are aligned and cropped to 64×64; the first 210 individuals serve as training images and the remaining 127 individuals as test images, and the frontal face images (0°) of the test set are selected as reference images in the discrimination network, as shown in FIG. 3;
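The split described above amounts to the following; this is a sketch assuming the aligned 64×64 crops are already grouped per subject in memory (the data layout and function name are illustrative):

```python
import numpy as np

def split_multipie(images_by_subject):
    """Split aligned 64x64 Multi-PIE crops (normal illumination, neutral
    expression, poses -75..+75 deg at 15-deg intervals) into training and
    test sets: first 210 subjects train, remaining 127 subjects test."""
    subjects = sorted(images_by_subject)              # 337 subject identifiers
    train_ids, test_ids = subjects[:210], subjects[210:]

    def normalize(x):
        return x.astype(np.float32) / 255.0           # normalize pixel range

    train = {s: normalize(images_by_subject[s]) for s in train_ids}
    test = {s: normalize(images_by_subject[s]) for s in test_ids}
    return train, test
```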
2) Contour-constraint generation network: images of arbitrary pose under normal illumination are first processed by convolution and pooling, and the output frontal face image serves as the output of the generation network. The face image input to the generation network is the arbitrary-pose image x_0 under normal illumination. The generation network is a convolutional neural network consisting of two 5×5 convolutional layers and two 3×3 pooling layers, where W_i^1 is the weight-matrix mapping feature map generated by the first convolutional layer, W_i^2 is the feature map generated by the second convolutional layer, and V_1, V_2 are the first- and second-layer pooling operations. Both layers use the ReLU function as activation, and the feature map obtained from image x_0 through the two-layer convolutional network is x_i^2, as in formula (1). Here σ denotes the activation function. A frontal-face contour histogram is added to the image reconstructed by the generation network to constrain the quality of the global features: each pixel (i, j) of the frontal face image f(x, y) is taken as a window center, the gradients of that center pixel in the m and n directions are computed within the given window, W is the convolution mapping feature map, and the edge direction at each image coordinate is obtained through the gradient angle θ. The resulting edge map is convolved with the third-layer convolution mapping feature map W^3 output by the convolutional network, and its tensor product with the feature map x_i^2 yields the reconstructed frontal face y_i, as in formula (2). For the reconstructed frontal face based on the convolutional network, the learning parameters of the generation network are continuously updated and back-propagated by gradient descent: the updated parameter W^(k+1) = W^k + A is the sum of the intermediate variable A and the learning parameter of the previous iteration, k being the count of successive learning propagations, with A given by the product of the back-propagated error term e_i and the feature variable x_(i-1); the feature variable being the feature of the preceding layer, the back-propagation error e_i can be derived and the error terms inverted at each layer;
3) Symmetry-constraint adversarial network: the adversarial network is a discrimination network similar to a comparator. The reconstructed image ŷ output by the generation network in step 2) and the real frontal face data y serving as the expected output are fed into a cost function that distinguishes true from false. To make the network reach the expected output for the corresponding sample, this part fuses, at the discriminator, the pixel level of the reconstructed sample ŷ with the pixel level of the real image y, introducing the discriminant loss as the fused pixel loss function L_r of formula (3). According to the symmetry of the human face, half the width of every input image is covered: the right-side coordinates are progressively occluded and the feature points are progressively described through the absolute value of the difference of reconstructed sample images, introducing the symmetry loss L_sl of formula (4), aimed at face correction during pose reconstruction, which resolves the symmetry between the visible part and the covered part of the reconstruction. The final loss function therefore weights formula (3) and formula (4): L_syn = L_r + λ1·L_sl + λ2·L_cee, where L_cee is a cross-entropy loss function constraining the hidden activations, and λ1 and λ2 are coefficients balancing the penalty terms. After the final loss function is defined, a back-propagation algorithm alternately updates the weights W and biases b of the generation network and the adversarial network, balancing the parameters of the two networks;
4) Training the balanced network: after the above three steps, the network independently and alternately updates its parameters in iterations. In the first pass the generation network G is fixed and the discrimination network D is trained to maximize the discrimination accuracy; in the second pass the discrimination network D is fixed and the generation network G is trained to minimize the discrimination accuracy, until the generated image is almost indistinguishable from the real data y; that is, for either a true or a false sample, the adversarial network converts the output two-dimensional matrix values into a probability value p;
5) Reconstruction and recognition: test images with different pose angles are input into the balanced, trained generative adversarial network to obtain the generator output image ŷ as the reconstructed image. Linear discriminant analysis (LDA) is applied separately to the reconstructed frontal face image and to the features of the network's top hidden layer to reduce dimensionality and extract discriminative face features, and a nearest-neighbor classifier completes the face recognition.
Using the method of this embodiment, faces in different poses can be reconstructed into frontal face images. FIG. 3 shows intuitively that the frontal faces reconstructed by the method of this technical solution restore the texture information of the original images well: rows 1, 3 and 5 are original images of different faces in poses from +75° to −75°, and rows 2, 4 and 6 are the face images reconstructed by the present method at the corresponding angles; the reconstructed faces retain the symmetry and visual clarity of the originals even at the large pose angles of ±75°. FIG. 4 compares, at a face pose angle of 75°, the method of this technical solution with deep learning methods such as GAN (generative adversarial network), DCNN (deep convolutional neural network) and MVP (multi-view perceptron); visual inspection shows that the facial symmetry achieved by the present technical solution under large-angle pose reconstruction is better than that of the other methods.

Claims (1)

1. A multi-pose face recognition method based on a contour-symmetry-constrained generative adversarial network, characterized by comprising the following steps:
1) data preprocessing: to establish better frontal-face template features, a multi-pose face database is divided into training images and test images, and both are normalized;
2) contour-constraint generation network: images of arbitrary pose under normal illumination are first processed by convolution and pooling, the output frontal face image serving as the output of the generation network; the face image input to the generation network is of size w × h and the arbitrary-pose image under normal illumination is x_0; the generation network is a convolutional neural network composed of two convolutional layers and two pooling layers, where W_i^1 is the weight-matrix mapping feature map generated by the first convolutional layer, W_i^2 is the feature map generated by the second convolutional layer, and V_1, V_2 are the first- and second-layer pooling operations; both layers use the ReLU function as activation, and the feature map obtained from image x_0 through the two-layer convolutional network is x_i^2, as in formula (1); σ denotes the activation function; a frontal-face contour histogram is added to the image reconstructed by the generation network to constrain the quality of the global features: each pixel (i, j) of the frontal face image f(x, y) is taken as a window center, the gradients of that center pixel in the m and n directions are computed within the given window, W is the convolution mapping feature map, and the edge direction at each image coordinate is obtained through the gradient angle θ; the resulting edge map is convolved with the third-layer convolution mapping feature map W^3 output by the convolutional network, and its tensor product with the feature map x_i^2 yields the reconstructed frontal face y_i, as in formula (2); for the reconstructed frontal face based on the convolutional network, the network parameters are continuously updated and back-propagated by gradient descent, the updated parameter being W^(k+1) = W^k + A, where A denotes the intermediate variable and the back-propagation formula of the convolutional network contains the back-propagated error term e_i and the feature variable x_(i-1); the feature variable being the feature of the preceding layer, the back-propagation error e_i can be derived and the error terms inverted at each layer;
3) symmetry-constraint adversarial network: the adversarial network is a discrimination network similar to a comparator; the reconstructed image ŷ output by the generation network in step 2) and the real frontal face data y serving as the expected output are fed into a cost function that distinguishes true from false; to make the network reach the expected output for the corresponding sample, this part fuses, at the discriminator, the pixel level of the reconstructed sample ŷ with the pixel level of the real image y, introducing the discriminant loss as the fused pixel loss function L_r of formula (3); according to the symmetry of the human face, half the width of every input image is covered: the right-side coordinates are progressively occluded and the feature points are progressively described through the absolute value of the difference of reconstructed sample images, introducing the symmetry loss L_sl of formula (4), aimed at face correction during pose reconstruction, which resolves the symmetry between the visible part and the covered part of the reconstruction; the final loss function therefore weights formula (3) and formula (4): L_syn = L_r + λ1·L_sl + λ2·L_cee, where L_cee is a cross-entropy loss function constraining the hidden activations, and λ1 and λ2 are coefficients balancing the penalty terms; after the final loss function is defined, a back-propagation algorithm alternately updates the weights W and biases b of the generation network and the adversarial network, balancing the parameters of the two networks;
4) training the balanced network: after the above three steps, the network independently and alternately updates its parameters in iterations; in the first pass the generation network G is fixed and the discrimination network D is trained to maximize the discrimination accuracy; in the second pass the discrimination network D is fixed and the generation network G is trained to minimize the discrimination accuracy, until the generated image is almost indistinguishable from the real data y, that is, for either a true or a false sample the adversarial network converts the output two-dimensional matrix values into a probability value p;
5) reconstruction and recognition: test images with different pose angles are input into the balanced, trained generative adversarial network to obtain the generator output image ŷ as the reconstructed image; linear discriminant analysis (LDA) is applied separately to the reconstructed frontal face image and to the features of the network's top hidden layer to reduce dimensionality and extract discriminative face features, and a nearest-neighbor classifier completes the face recognition.
CN201810033455.1A 2018-01-15 2018-01-15 Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network Expired - Fee Related CN108334816B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810033455.1A CN108334816B (en) 2018-01-15 2018-01-15 Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810033455.1A CN108334816B (en) 2018-01-15 2018-01-15 Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network

Publications (2)

Publication Number Publication Date
CN108334816A CN108334816A (en) 2018-07-27
CN108334816B true CN108334816B (en) 2021-11-23

Family

ID=62924212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810033455.1A Expired - Fee Related CN108334816B (en) 2018-01-15 2018-01-15 Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network

Country Status (1)

Country Link
CN (1) CN108334816B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109146868A (en) * 2018-08-27 2019-01-04 北京青燕祥云科技有限公司 3D Lung neoplasm generation method, device and electronic equipment
CN109344706A (en) * 2018-08-28 2019-02-15 杭州电子科技大学 It is a kind of can one man operation human body specific positions photo acquisition methods
CN109255831B (en) * 2018-09-21 2020-06-12 南京大学 Single-view face three-dimensional reconstruction and texture generation method based on multi-task learning
CN111046707A (en) * 2018-10-15 2020-04-21 天津大学青岛海洋技术研究院 Face restoration network in any posture based on facial features
CN109671084B (en) * 2018-11-15 2023-05-30 华东交通大学 Method for measuring shape of workpiece
CN109543827B (en) * 2018-12-02 2020-12-29 清华大学 Generating type confrontation network device and training method
CN109684973B (en) * 2018-12-18 2023-04-07 哈尔滨工业大学 Face image filling system based on symmetric consistency convolutional neural network
CN109711386B (en) * 2019-01-10 2020-10-09 北京达佳互联信息技术有限公司 Method and device for obtaining recognition model, electronic equipment and storage medium
CN109919018A (en) * 2019-01-28 2019-06-21 浙江英索人工智能科技有限公司 Image eyes based on reference picture automatically open method and device
CN109815928B (en) * 2019-01-31 2021-05-11 中国电子进出口有限公司 Face image synthesis method and device based on counterstudy
CN110135336B (en) * 2019-05-14 2023-08-25 腾讯科技(深圳)有限公司 Training method, device and storage medium for pedestrian generation model
CN110321849B (en) * 2019-07-05 2023-12-22 腾讯科技(深圳)有限公司 Image data processing method, device and computer readable storage medium
CN110427864B (en) * 2019-07-29 2023-04-21 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
US11475608B2 (en) 2019-09-26 2022-10-18 Apple Inc. Face image generation with pose and expression control
CN110751098B (en) * 2019-10-22 2022-06-14 中山大学 Face recognition method for generating confrontation network based on illumination and posture
CN110796080B (en) * 2019-10-29 2023-06-16 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generation countermeasure network
CN111860093A (en) * 2020-03-13 2020-10-30 北京嘀嘀无限科技发展有限公司 Image processing method, device, equipment and computer readable storage medium
CN111861949B (en) * 2020-04-21 2023-07-04 北京联合大学 Multi-exposure image fusion method and system based on generation countermeasure network
CN111523497B (en) * 2020-04-27 2024-02-27 深圳市捷顺科技实业股份有限公司 Face correction method and device and electronic equipment
CN111680566B (en) * 2020-05-11 2023-05-16 东南大学 Small sample face recognition method for generating countermeasure network based on sliding partitioning
CN111652798B (en) * 2020-05-26 2023-09-29 浙江大华技术股份有限公司 Face pose migration method and computer storage medium
CN112884030B (en) * 2021-02-04 2022-05-06 重庆邮电大学 Cross reconstruction based multi-view classification system and method
CN113111776B (en) * 2021-04-12 2024-04-16 京东科技控股股份有限公司 Method, device, equipment and storage medium for generating countermeasure sample
CN113140015B (en) * 2021-04-13 2023-03-14 杭州欣禾圣世科技有限公司 Multi-view face synthesis method and system based on generation countermeasure network

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017223530A1 (en) * 2016-06-23 2017-12-28 LoomAi, Inc. Systems and methods for generating computer ready animation models of a human head from captured data images
CN106951867B (en) * 2017-03-22 2019-08-23 成都擎天树科技有限公司 Face identification method, device, system and equipment based on convolutional neural networks
CN107154023B (en) * 2017-05-17 2019-11-05 电子科技大学 Based on the face super-resolution reconstruction method for generating confrontation network and sub-pix convolution
CN107292813B (en) * 2017-05-17 2019-10-22 浙江大学 A kind of multi-pose Face generation method based on generation confrontation network
CN107239766A (en) * 2017-06-08 2017-10-10 深圳市唯特视科技有限公司 A kind of utilization resists network and the significantly face of three-dimensional configuration model ajusts method
CN107437077A (en) * 2017-08-04 2017-12-05 深圳市唯特视科技有限公司 A kind of method that rotation face based on generation confrontation network represents study

Also Published As

Publication number Publication date
CN108334816A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108334816B (en) Multi-pose face recognition method based on contour-symmetry-constrained generative adversarial network
CN108537743B (en) Face image enhancement method based on generation countermeasure network
CN108038420B (en) Human behavior recognition method based on depth video
CN110555434B (en) Method for detecting visual saliency of three-dimensional image through local contrast and global guidance
CN109886881B (en) Face makeup removal method
CN106462724B (en) Method and system based on normalized images verification face-image
CN110348330A (en) Human face posture virtual view generation method based on VAE-ACGAN
CN112418074A (en) Coupled posture face recognition method based on self-attention
CN111754396B (en) Face image processing method, device, computer equipment and storage medium
CN105869166B (en) A kind of human motion recognition method and system based on binocular vision
CN109859305A (en) Three-dimensional face modeling, recognition methods and device based on multi-angle two-dimension human face
EP3905194A1 (en) Pose estimation method and apparatus
CN106023298A (en) Point cloud rigid registration method based on local Poisson curved surface reconstruction
CN108090451B (en) Face recognition method and system
CN110852941A (en) Two-dimensional virtual fitting method based on neural network
CN112508991B (en) Panda photo cartoon method with separated foreground and background
CN111445426B (en) Target clothing image processing method based on generation of countermeasure network model
CN110660020B (en) Image super-resolution method of antagonism generation network based on fusion mutual information
CN109887021A (en) Based on the random walk solid matching method across scale
CN108470178B (en) Depth map significance detection method combined with depth credibility evaluation factor
CN106803094A (en) Threedimensional model shape similarity analysis method based on multi-feature fusion
CN111882516B (en) Image quality evaluation method based on visual saliency and deep neural network
CN114724218A (en) Video detection method, device, equipment and medium
CN114005046A (en) Remote sensing scene classification method based on Gabor filter and covariance pooling
CN113222808A (en) Face mask removing method based on generative confrontation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20211123