CN111951153B - Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction - Google Patents

Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction

Info

Publication number
CN111951153B
Authority
CN
China
Prior art keywords
face
attribute
normal vector
generation
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010806948.1A
Other languages
Chinese (zh)
Other versions
CN111951153A (en)
Inventor
许佳奕
鞠怡轩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202010806948.1A priority Critical patent/CN111951153B/en
Publication of CN111951153A publication Critical patent/CN111951153A/en
Application granted granted Critical
Publication of CN111951153B publication Critical patent/CN111951153B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a face attribute fine-grained editing method based on generative adversarial network latent space deconstruction, which comprises the following steps: S101, constructing a generator model; S102, constructing a classifier model; S103, modifying a generated code with a normal vector to obtain a pair of generated codes; S104, feeding the pair of generated codes into the generator model to obtain a pair of face images; S105, feeding the pair of face images into the classifier model to obtain classification results; S106, calculating the minimum value of the loss function; S107, repeating S103 to S106 with a reverse gradient algorithm, moving the generated code along the positive and negative directions of the optimized normal vector to change the face attribute, and taking the normal vector at which the face attribute changes most as the target normal vector; S108, generating a new face image whose face attribute corresponds to the target normal vector.

Description

Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction
Technical Field
The invention relates to the technical fields of digital image processing and face image editing and synthesis, and in particular to a face attribute fine-grained editing method based on generative adversarial network latent space deconstruction.
Background
Face attribute editing refers to the process of modifying a single attribute or multiple attributes of a face image to generate a new face image with the target attributes. Face image editing and generation technology has important applications in fields of civil demand such as public security and digital entertainment. In the public safety field, law enforcement personnel cannot directly obtain a photo of a suspect's face when a crime occurs where surveillance is absent or obstructed; drawing a face portrait with specified attributes based on eyewitness testimony then becomes the main way to determine the suspect's identity. In the digital entertainment field, cosmetic design technology beautifies portraits by changing facial makeup and has huge market potential in live streaming and social media. Researching face attribute editing from a user-controllable angle and applying it to face image synthesis therefore has important economic and social value.
Conventional face attribute editing algorithms usually locate key points in a face image and then edit the face via manual adjustment or mesh deformation of those key points, which is time-consuming and sometimes causes problems such as facial distortion. Current advanced algorithms, which change and control face attributes using the deep learning technique of generative adversarial networks (GANs), fall into three categories. The first adds conditions at the input layer of the generator, so that the generated code consists of random noise plus a condition; attribute control is achieved by changing the condition, and different generated codes synthesize different face images. Because the conditions must be added to the training process, this approach easily suffers from hard-to-train generators and mode collapse. The second category is based on the idea of style transfer and obtains control conditions from a pair of reference face images. Such methods can transfer attributes in batch but find it difficult to isolate an individual attribute for finer tuning. The third category analyzes the specific meaning of the generated code in the GAN generator to find a generated code that has the target attribute and satisfies the control condition. It can add an attribute to or remove one from a face image, achieving qualitative modification of the attribute, but it still cannot modify a specified attribute quantitatively.
In summary, existing methods have a considerable limitation: they can only migrate an attribute of one face wholesale onto another face, or qualitatively modify a face attribute by adding or removing it.
Disclosure of Invention
In order to overcome the defects in the prior art and achieve fine-grained editing of a single attribute or multiple attributes of a face so as to generate a new face image, the invention adopts the following technical scheme:
A face attribute fine-grained editing method based on generative adversarial network latent space deconstruction, comprising the following steps:
S101, constructing a generator model G for generating a face image from a generated code;
S102, constructing a classifier model C for computing the classification result of a face attribute from the face image;
S103, modifying the generated code z with the normal vector n to obtain a pair of generated codes used to add or suppress a face attribute, according to the formula:
z' = z + α*n (1)
where z is a generated code, z' is the modified generated code, and n is a normal vector; different normal vectors correspond to different face attributes, and α is a modification coefficient controlling the degree to which the face attribute is modified. Taking α as -1 and 1 yields the pair of generated codes z-n and z+n;
S104, feeding the pair of generated codes z-n and z+n into the trained generator model G to obtain a pair of face images;
S105, feeding the pair of face images into the trained classifier model C to obtain the classification results for a given attribute;
S106, calculating the minimum value of the loss function, given by:
V(n) = C(G(z-n)) - C(G(z+n)) (2)
where V(n) is the loss function, G is the generator, and C is the classifier;
S107, repeating S103 to S106 with a reverse gradient algorithm; the optimized normal vector n moves the generated code z along the positive and negative directions to change the face attribute, and the normal vector n producing the largest change in the face attribute is taken as the target normal vector n*;
S108, obtaining the modified generated code z' from the target normal vector n* using formula (1), and generating, with the generator model G, a new face image whose face attribute corresponds to the target normal vector n*.
By analyzing the control vectors that govern the change of a single face attribute, multiple face attributes are disentangled, so a specified face attribute can be modified quantitatively; by fusing multiple control vectors to jointly control the modification of the generated code, fine-grained editing of a single face attribute or of multiple face attributes is achieved and a new face image is generated.
The value of α lies between -3 and 3. Experimental results show that outside this range the generated face has large flaws, or the generated image is no longer a face.
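For illustration only, the following Python sketch applies formula (1) to a generated code; it assumes the 512-dimensional codes of the embodiment described below, and the function name edit_code and the zero placeholder normal vector are illustrative, not part of the claimed method.

import torch

def edit_code(z: torch.Tensor, n: torch.Tensor, alpha: float) -> torch.Tensor:
    # Formula (1): z' = z + alpha * n, with alpha clamped to the
    # recommended range [-3, 3] so the result stays a plausible face.
    alpha = max(-3.0, min(3.0, alpha))
    return z + alpha * n

z = torch.randn(512)   # a random generated code
n = torch.zeros(512)   # placeholder attribute normal vector (learned in practice)
z_minus, z_plus = edit_code(z, n, -1.0), edit_code(z, n, 1.0)  # the pair z-n, z+n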
In S108, when multiple face attributes are modified, the generated code is modified jointly by fusing the attribute normal vectors, according to the formula:
z' = z + α_1×n_1* + α_2×n_2* + … + α_k×n_k* (3)
where z is the original code of the face to be modified, z' is the modified code, k attributes are modified, and n_i* and α_i are the target normal vector and modification coefficient corresponding to the i-th attribute.
The step S101 includes the following steps:
S201, in the latent-space deconstruction domain of the generator, using a random generated code represented by a multidimensional vector as the input of the generator model G;
S202, reshaping the generated code into a feature map;
S203, applying convolution modules and up-sampling operations to the feature map to obtain a face image, wherein each convolution module comprises convolution, instance normalization and ReLU operations.
The step S102 constructs, with a convolutional neural network structure, a binary classifier that judges whether a specified face attribute exists and predicts the probability that the image possesses that attribute, and includes the following steps:
S301, inputting a face image;
S302, applying convolution and average pooling operations to the face image to obtain a feature map of reduced resolution;
S303, stretching the feature map into a vector via convolution and an adaptive average pooling operation;
S304, feeding the vector into a group of fully connected layers and outputting, through softmax, a two-dimensional vector as the classification result of whether the face has the specified attribute.
The invention has the following advantages:
Based on analysis of the latent space of the GAN generator, control vectors governing the change of a single face attribute are extracted, multiple face attributes are disentangled, and a specified face attribute can be modified quantitatively; by fusing multiple control vectors to jointly control the modification of the generated code, fine-grained editing of a single face attribute or of multiple face attributes is achieved and a new face image is generated.
Drawings
Fig. 1 is a block diagram of the system modules of the present invention.
Fig. 2 is a flow chart of learning control normal vectors corresponding to face attributes in the present invention.
Fig. 3 is a block diagram of a generator network in the present invention.
Fig. 4 is a block diagram of a classifier network in accordance with the present invention.
Detailed Description
The following describes specific embodiments of the present invention in detail with reference to the drawings. It should be understood that the detailed description and specific examples illustrate the invention and are not intended to limit it.
In the face attribute fine-grained editing method based on generative adversarial network latent space deconstruction, the target normal vector n* of each attribute is trained from an initial normal vector n, represented by a zero vector, through a generator and a classifier; the learning process is shown in Fig. 2. After the normal vector corresponding to each attribute to be edited has been obtained, the modification effects of the multiple attributes are fused to generate a new face, as shown in Fig. 1. The specific processing is as follows:
step 1: a generator model G trained via FFHQ data sets is used. As shown in fig. 3, in the hidden space deconstructing domain of the generator, the vector reshape is formed in the form of a 4×4×32 feature map, i.e. the original feature map is of the size 4×4, with a random code represented by a 512-dimensional vector as input, totaling 32 channels. The method comprises the steps of obtaining a 4 multiplied by 512 graph through the operation of 1 convolution module, then generating a 1024 multiplied by 16 characteristic graph through the operation of 8 convolution modules and up-sampling, and obtaining a 1024 multiplied by 1024 resolution color face image through the operation of 1 convolution module. Wherein each convolution module contains one convolution, instance normalization and Relu operation.
The convolution kernel size adopted in this implementation is 3×3, with padding so the image size is unchanged; the change in the number of feature-map channels is determined by the channel count of each convolution module. Each up-sampling operation doubles the length and width of the image without changing the number of channels. The convolution channel counts of the eight convolution modules employed in this implementation are 512, 256, 128, 64, 32 and 16.
The number of convolution-module and up-sampling operations is variable: with 7 of them the resulting image is 512×512, but image quality at 512×512 or below is not high; with 8 of them the 1024×1024 output achieves a high-definition effect; beyond 1024×1024 the computation becomes excessive and unsuitable for today's ordinary computers.
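For illustration only, a PyTorch sketch of such a generator follows. The class and function names are illustrative; the text lists only six of the eight per-module channel counts, so the schedule below pads the list with assumed leading 512s, and the final module keeps the conv/instance-norm/ReLU structure the text prescribes (a production generator would likely end with a different output activation).

import torch
import torch.nn as nn

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    # Convolution module: 3x3 size-preserving conv + instance normalization + ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.InstanceNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = conv_block(32, 512)                 # 4x4x32 -> 4x4x512
        chans = [512, 512, 512, 256, 128, 64, 32, 16]   # first two 512s are assumed
        blocks, c_in = [], 512
        for c_out in chans:                             # 8x (up-sample + conv module)
            blocks += [nn.Upsample(scale_factor=2), conv_block(c_in, c_out)]
            c_in = c_out
        self.body = nn.Sequential(*blocks)              # 4x4 -> 1024x1024, 16 channels
        self.to_rgb = conv_block(16, 3)                 # final module -> color image

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = z.view(-1, 32, 4, 4)                        # reshape the 512-dim code
        return self.to_rgb(self.body(self.head(x)))

# g = Generator(); img = g(torch.randn(1, 512))  # img shape: (1, 3, 1024, 1024)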
Step 2: To provide correct guidance for optimizing the normal vector n, a sufficiently accurate classifier C is required to predict the probability that an image possesses the specified attribute. A binary classifier judging whether the specified face attribute exists is constructed with a convolutional neural network structure. As shown in Fig. 4, the input of the network is a color face image of 1024×1024 resolution; after 5 convolution and average pooling operations it is reduced to a feature map of 32×32 resolution; after one more convolution and an adaptive average pooling, the feature map is stretched into a vector; the vector is fed into the subsequent fully connected layers, and the network finally outputs, through a softmax layer, a two-dimensional vector S whose dimensions indicate absence and presence of the attribute, serving as the classification result of whether the face has the specified attribute.
This embodiment convolves and pools a color image of 1024×1024 resolution down to a 32×32×256 feature map through 5 rounds of convolution and average pooling in total. The convolutions change the channel count from 3 (the red, green and blue components of the color image) to 256; each uses a 3×3 kernel with padding so the feature-map size is unchanged, and the channel counts of the successive convolution operations are 96, 256, 384 and 256. The purpose of the pooling operations is to reduce the size of the feature map: each of the 5 average pooling operations averages the 4 pixel values of a 2×2 image block into 1 pixel, halving the length and width of the image, so the feature map shrinks from 1024×1024 to 32×32. The 32×32×256 feature map is then converted into an 8×8×64 feature map, again with a 3×3 convolution kernel. Adaptive average pooling is a special form of pooling that obtains a feature map of a specified output size via linear interpolation.
The flattening and fully connected operations straighten the 8×8×64 multi-channel feature map into a 4096-dimensional vector, which a fully connected operation then maps into a 256-dimensional vector.
For the simple task of binary classification, too few convolution and average pooling operations lead to a poor classification effect, while too many cause unnecessary computational cost. Experimental results show that 5 rounds of convolution and pooling achieve a good classification effect.
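For illustration only, a PyTorch sketch of this classifier follows. The class name is illustrative, and since the text lists only four of the five convolution channel counts (96, 256, 384, 256), the schedule below inserts an assumed second 384; the fully connected dimensions (4096 to 256 to 2) follow the text.

import torch
import torch.nn as nn

class AttributeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        chans = [96, 256, 384, 384, 256]        # second 384 is an assumption
        stages, c_in = [], 3
        for c_out in chans:                     # 5x (3x3 conv + 2x2 average pool)
            stages += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AvgPool2d(kernel_size=2),    # halves height and width
            ]
            c_in = c_out
        self.features = nn.Sequential(*stages)  # 1024x1024x3 -> 32x32x256
        self.reduce = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(8),            # -> 8x8x64
        )
        self.head = nn.Sequential(
            nn.Flatten(),                       # 8*8*64 = 4096-dimensional vector
            nn.Linear(4096, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),
            nn.Softmax(dim=1),                  # two-dimensional result S
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.reduce(self.features(x)))

# c = AttributeClassifier(); s = c(torch.randn(1, 3, 1024, 1024))  # s shape: (1, 2)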
Step 3: For a given face attribute, the generated code is modified using formula (1); modification along a normal vector either adds an attribute that was not originally present or suppresses an unwanted one. Moving along the positive and negative directions of the normal vector performs both the adding and the suppressing operation, constraining the optimized normal vector to better match expectations. The generated code z is modified according to the attribute normal vector with α taken as -1 and 1, yielding the pair of generated codes z-n and z+n.
z' = z + α*n (1)
where z is the generated code, z' is the modified generated code, n is the normal vector, i.e. the control vector, and α is the modification coefficient. Selecting the normal vectors corresponding to different face attributes enables editing of different attributes, and choosing different α controls the degree of modification: α > 0 adds the attribute and α < 0 removes it. The value of α lies between -3 and 3; experimental results show that outside this range the generated face has large flaws, or the generated image is no longer a face.
Step 4: For the pair of generated codes z-n (lacking a given face attribute) and z+n (possessing it), the generator model of step 1 produces the two 1024×1024 color face images shown in Fig. 2.
Step 5: For that face attribute, the two face images generated in step 4 are fed into the classifier of step 2 to obtain classification results represented by two-dimensional vectors, where the value S1 = (1, 0) indicates the attribute is absent and S2 = (0, 1) indicates the attribute is present.
Step 6: The loss function V(n) of the overall network is designed as shown in equation (2):
V(n) = C(G(z-n)) - C(G(z+n)) (2)
where G is the generator, C is the classifier, n is the vector to be optimized, and z is a generated code belonging to the latent space of the generator. When n is the target normal vector, C(G(z-n)) is at its minimum, C(G(z+n)) is at its maximum, and the loss function as a whole reaches its minimum.
Step 7: Steps 3-6 are repeated with reverse gradient propagation to optimize n, finding the normal vector at which moving the generated code changes the degree of the attribute the most. A new random generated code z is drawn for each update of n; the face image generated from z-n lacks the attribute while the one generated from z+n possesses it, and the resulting normal vector n is an approximate solution of the target normal vector.
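For illustration only, the following sketch combines the Generator and AttributeClassifier sketches above into the optimization loop of steps 3-7, using the loss V(n) = C(G(z-n)) - C(G(z+n)) of equation (2) with C(.) read as the probability that the attribute is present; the function name and hyperparameters are illustrative.

import torch

def learn_normal_vector(G, C, dim=512, steps=1000, lr=0.01):
    n = torch.zeros(dim, requires_grad=True)   # initial normal vector: a zero vector
    opt = torch.optim.Adam([n], lr=lr)         # only n is optimized; G and C stay frozen
    for _ in range(steps):
        z = torch.randn(1, dim)                # fresh random generated code each round
        p_minus = C(G(z - n))[:, 1]            # attribute probability for z - n
        p_plus = C(G(z + n))[:, 1]             # attribute probability for z + n
        loss = (p_minus - p_plus).mean()       # equation (2); minimal when the gap is largest
        opt.zero_grad()
        loss.backward()
        opt.step()
    return n.detach()                          # approximate target normal vector n*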
Step 8: As shown in Fig. 1, given an original generated code and the corresponding face produced by the generator of step 1, when a certain attribute needs to be edited, the modified generated code is computed with formula (1) from the target normal vector obtained by the training above, and the generator of step 1 then generates anew a face image with the specified attribute. If multiple attributes need to be modified, the generated code is modified jointly by fusing the attribute normal vectors, using the following formula:
z' = z + α_1×n_1* + α_2×n_2* + … + α_k×n_k* (3)
where z is the original code of the face to be modified, z' is the modified code, k attributes are modified, and n_i* and α_i are the target normal vector and modification coefficient corresponding to the i-th attribute.
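For illustration only, a short sketch of the fusion in formula (3) follows; the attribute names in the usage comment are hypothetical placeholders for normal vectors learned as in step 7.

import torch

def fuse_edit(z, normals, alphas):
    # Formula (3): z' = z + sum_i alpha_i * n_i* over the k attributes being edited.
    z_prime = z.clone()
    for n_i, a_i in zip(normals, alphas):
        z_prime = z_prime + a_i * n_i
    return z_prime

# z_new = fuse_edit(z, [n_smile, n_age], [1.5, -0.8])  # hypothetical vectors/coefficients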
The above embodiments only illustrate the technical solution of the present invention and do not limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents, and such modifications and substitutions do not depart from the spirit of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A face attribute fine-grained editing method based on generative adversarial network latent space deconstruction, characterized by comprising the following steps:
S101, constructing a generator model G for generating a face image from a generated code;
S102, constructing a classifier model C for computing the classification result of a face attribute from the face image;
S103, modifying the generated code z with the normal vector n to obtain a pair of generated codes used to add or suppress a face attribute, according to the formula:
z' = z + α*n (1)
where z is a generated code, z' is the modified generated code, and n is a normal vector; different normal vectors correspond to different face attributes, and α is a modification coefficient controlling the degree to which the face attribute is modified. Taking α as -1 and 1 yields the pair of generated codes z-n and z+n;
S104, feeding the pair of generated codes z-n and z+n into the trained generator model G to obtain a pair of face images;
S105, feeding the pair of face images into the trained classifier model C to obtain the classification result for a given face attribute;
S106, calculating the minimum value of the loss function, given by:
V(n) = C(G(z-n)) - C(G(z+n)) (2)
where V(n) is the loss function, G is the generator, and C is the classifier;
S107, repeating S103 to S106 with a reverse gradient algorithm; the optimized normal vector n moves the generated code z along the positive and negative directions to change the face attribute, and the normal vector n producing the largest change in the face attribute is taken as the target normal vector n*;
S108, obtaining the modified generated code z' from the target normal vector n* using formula (1), and generating, with the generator model G, a new face image whose face attribute corresponds to the target normal vector n*.
2. The face attribute fine-grained editing method based on generative adversarial network latent space deconstruction according to claim 1, wherein the value of α lies between -3 and 3.
3. The face attribute fine-grained editing method based on generative adversarial network latent space deconstruction according to claim 1, wherein in step S108, when multiple face attributes are modified, the generated code is modified jointly by fusing the attribute normal vectors, according to the formula:
z' = z + α_1×n_1* + α_2×n_2* + … + α_k×n_k* (3)
where z is the original code of the face to be modified, z' is the modified code, k attributes are modified, and n_i* and α_i are the target normal vector and modification coefficient corresponding to the i-th attribute.
4. The face attribute fine-grained editing method based on generative adversarial network latent space deconstruction according to claim 1, wherein step S101 includes the following steps:
S201, in the latent-space deconstruction domain of the generator, using a random generated code represented by a multidimensional vector as the input of the generator model G;
S202, reshaping the generated code into a feature map;
S203, applying convolution modules and up-sampling operations to the feature map to obtain a face image, wherein each convolution module comprises convolution, instance normalization and ReLU operations.
5. The face attribute fine-grained editing method based on generative adversarial network latent space deconstruction according to claim 1, wherein step S102 constructs, with a convolutional neural network structure, a binary classifier judging whether a specified face attribute exists, and includes the following steps:
S301, inputting a face image;
S302, applying convolution and average pooling operations to the face image to obtain a feature map of reduced resolution;
S303, stretching the feature map into a vector via convolution and an adaptive average pooling operation;
S304, feeding the vector into a group of fully connected layers and outputting, through softmax, a two-dimensional vector as the classification result of whether the face has the specified attribute.
CN202010806948.1A 2020-08-12 2020-08-12 Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction Active CN111951153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010806948.1A CN111951153B (en) 2020-08-12 2020-08-12 Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010806948.1A CN111951153B (en) 2020-08-12 2020-08-12 Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction

Publications (2)

Publication Number Publication Date
CN111951153A CN111951153A (en) 2020-11-17
CN111951153B true CN111951153B (en) 2024-02-13

Family

ID=73332394

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010806948.1A Active CN111951153B (en) Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction

Country Status (1)

Country Link
CN (1) CN111951153B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613445B (en) * 2020-12-29 2024-04-30 深圳威富优房客科技有限公司 Face image generation method, device, computer equipment and storage medium
CN112766157B (en) * 2021-01-20 2022-08-30 乐山师范学院 Cross-age face image recognition method based on disentanglement representation learning
CN113052230A (en) * 2021-03-22 2021-06-29 浙江大学 Clothing image generation system and method based on disentanglement network
CN112991160B (en) * 2021-05-07 2021-08-20 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN113221794B (en) * 2021-05-24 2024-05-03 厦门美图之家科技有限公司 Training data set generation method, device, equipment and storage medium
CN113361659B (en) * 2021-07-16 2023-08-22 广东工业大学 Image controllable generation method and system based on hidden space principal component analysis
CN113408673B (en) * 2021-08-19 2021-11-02 联想新视界(南昌)人工智能工研院有限公司 Generation countermeasure network subspace decoupling and generation editing method, system and computer
CN113793254B (en) * 2021-09-07 2024-05-10 中山大学 Face image attribute editing method, system, computer equipment and storage medium
WO2023171335A1 (en) * 2022-03-11 2023-09-14 ソニーセミコンダクタソリューションズ株式会社 Data generation device, method, and program
CN114373215A (en) * 2022-03-22 2022-04-19 北京大甜绵白糖科技有限公司 Image processing method and device, electronic equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020029356A1 (en) * 2018-08-08 2020-02-13 杰创智能科技股份有限公司 Method employing generative adversarial network for predicting face change
CN111368662A (en) * 2020-02-25 2020-07-03 华南理工大学 Method, device, storage medium and equipment for editing attribute of face image
CN111275613A (en) * 2020-02-27 2020-06-12 辽宁工程技术大学 Editing method for generating confrontation network face attribute by introducing attention mechanism

Also Published As

Publication number Publication date
CN111951153A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
CN111951153B (en) Face attribute fine-grained editing method based on generative adversarial network latent space deconstruction
CN107529650B (en) Closed loop detection method and device and computer equipment
CN112435191B (en) Low-illumination image enhancement method based on fusion of multiple neural network structures
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
Wang et al. Channel and space attention neural network for image denoising
CN111968123B (en) Semi-supervised video target segmentation method
Li et al. Dlgsanet: lightweight dynamic local and global self-attention networks for image super-resolution
CN111737743A (en) Deep learning differential privacy protection method
Zheng et al. T-net: Deep stacked scale-iteration network for image dehazing
CN114626042B (en) Face verification attack method and device
CN108734677A (en) A kind of blind deblurring method and system based on deep learning
Choudhary et al. Recent Trends and Techniques in Image Enhancement using Differential Evolution-A Survey
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
CN105787892A (en) Monte Carlo noise removal method based on machine learning
CN112419191A (en) Image motion blur removing method based on convolution neural network
CN116188874A (en) Image countermeasure sample generation method and system
Zhuo et al. Ridnet: Recursive information distillation network for color image denoising
CN114612476A (en) Image tampering detection method based on full-resolution hybrid attention mechanism
Jang et al. Dual path denoising network for real photographic noise
CN116091823A (en) Single-feature anchor-frame-free target detection method based on fast grouping residual error module
Liang et al. Multi-scale hybrid attention graph convolution neural network for remote sensing images super-resolution
CN115511708A (en) Depth map super-resolution method and system based on uncertainty perception feature transmission
CN113554047A (en) Training method of image processing model, image processing method and corresponding device
Yang et al. Image defogging based on amended dark channel prior and 4‐directional L1 regularisation
CN116011558B (en) High-mobility countermeasure sample generation method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant