CN115345773B - Makeup transfer method based on generative adversarial network - Google Patents

Makeup transfer method based on generative adversarial network

Info

Publication number
CN115345773B
CN115345773B
Authority
CN
China
Prior art keywords
makeup
face
image
mapping
layer
Prior art date
Legal status
Active
Application number
CN202210977447.9A
Other languages
Chinese (zh)
Other versions
CN115345773A (en)
Inventor
吴爱国
程诗文
谢锦洋
Current Assignee
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202210977447.9A priority Critical patent/CN115345773B/en
Publication of CN115345773A publication Critical patent/CN115345773A/en
Application granted granted Critical
Publication of CN115345773B publication Critical patent/CN115345773B/en

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Abstract

The invention discloses a makeup transfer method based on a generative adversarial network, which comprises the following steps: step one, face segmentation; step two, UV mapping; step three, makeup extraction; step four, color transfer; step five, pattern transfer; and step six, UV inverse mapping. The invention not only achieves a better makeup transfer effect when the pose difference between the source image and the reference image is large, but also enables combined transfer of partial makeup from multiple reference images, transfer of a single partial makeup from one reference image, and transfer of extreme pattern makeup, thereby improving the controllability and flexibility of makeup transfer for the user as well as the robustness to image pose, making the method better suited to practical application scenarios.

Description

Makeup transfer method based on generative adversarial network
Technical Field
The invention belongs to the technical field of computer vision and relates to a face makeup transfer method, in particular to a makeup transfer method based on a generative adversarial network.
Background
Makeup is a way of improving the appearance of the face with specific cosmetic products, making the face more attractive, and is therefore widely used. Common cosmetics include foundation for covering facial blemishes, concealer, eyeliner, eye shadow, lipstick and so on. Cosmetics are not only numerous in variety and brand, but also differ in color and in how they are applied. Without the advice of a professional makeup artist, people find it difficult to identify a makeup style that suits them. Helping people quickly and accurately find personalized makeup has therefore gradually become a research focus: most people do not have the time, money and other trial-and-error costs to actually purchase cosmetics and try them out one by one. Makeup transfer technology can help people preview a desired makeup by means of image synthesis, a process that can be called virtual makeup try-on. In addition, as the demand for purchasing cosmetics online continues to grow, interest in virtual try-on technology is also increasing. If users can find a makeup style that suits them through virtual try-on, cosmetics sales can be greatly promoted, which benefits both merchants and the development of the field of image style transfer.
Virtual makeup applications are convenient tools that help users try different makeup styles; current examples include Meitu XiuXiu, TAAZ and DailyMakeover. However, these applications rely on predefined makeup and only allow the user to select one style from a given set, i.e., they are limited to a set of preset configurations or a set of parameters for particular attributes, which does not meet users' personalized needs. In daily life, photographs of celebrities show many different and attractive makeup styles, all of which can serve as references for a user's own makeup. A virtual makeup transfer algorithm can transfer the makeup style in such photos onto the user's own face picture, so that the user can simply and directly judge whether the style suits them.
Face makeup transfer is an image generation technique that transfers makeup from a reference image (a face with makeup) to a source image (a face without makeup) while keeping the face pose, expression and identity unchanged. The human face contains a large amount of useful information, and face-based research has received wide attention in many fields. As an important application of generative adversarial networks, face makeup transfer has value in both theoretical research and commercial applications. However, because the human face has complex geometry and varied poses, and the makeup in reference images varies in color and texture, generating a result image that transfers the reference makeup onto the source image is difficult; realizing diverse makeup styles and controllable transfer places even higher requirements on face makeup transfer.
The makeup transfer task is a specific style transfer task and a relatively new and cutting-edge topic. An ideal makeup transfer technique should reproduce the style of the reference makeup as faithfully as possible while keeping the facial structure of the image unchanged after makeup is applied.
Achieving a good makeup transfer effect requires accurate semantic segmentation of the face in order to generate a realistic composite image. Conventional methods focused primarily on image processing techniques such as image gradient editing or operations based on physical models. Guo et al. first attempted the makeup transfer task by decomposing the image into three layers, namely facial structure, makeup color and skin, and then transferring the makeup information of the reference image into the source image. This process requires complex preprocessing and transfer steps, and the synthesized images show clearly unnatural artifacts.
The literature (Liu S, Ou X, Qian R, et al. Makeup Like a Superstar: Deep Localized Makeup Transfer Network [J]. arXiv preprint arXiv:1604.07102, 2016.) first proposed a makeup transfer algorithm based on a deep learning framework, achieving end-to-end makeup application, and put forward several basic criteria for makeup transfer algorithms, including: (1) completeness of the makeup: the transferred makeup should include at least the three parts of foundation, eye makeup and lip makeup; (2) accuracy of local makeup: the makeup of each part should be transferred to the corresponding part; (3) natural results: the image after makeup transfer should be as natural as possible without obvious artifacts; (4) controllability of the makeup degree: the shade of the transferred makeup should be controllable. However, in that method the makeup of the three parts is transferred separately and the overall makeup is treated as a simple combination of different parts, so the final output image shows obvious unnatural artifacts at the junctions. Kaur et al. proposed a facial texture transfer method that smoothly and naturally transfers the facial texture of a reference image onto a content face image without changing the identity of the source image, and preserves the facial structure of the source image through a designed loss function. Liu et al. introduced a two-level adversarial network, integrating two adversarial networks into one end-to-end deep network, where one network reconstructs the face image at the pixel level and the other preserves the identity information of the source image at the feature level, achieving good results.
BeautyGAN was the first proposed makeup transfer algorithm based on generative adversarial networks, adding a histogram loss to the CycleGAN framework. PairedCycleGAN likewise builds on CycleGAN with an asymmetric generative adversarial framework to perform both makeup transfer and makeup removal, and introduces a variant of the cycle consistency loss to support makeup transfer from a designated reference image. Unlike BeautyGAN, it trains two asymmetric networks simultaneously, where network 1 performs makeup transfer and network 2 performs makeup removal, and the output of network 1 is used as a supplementary input to network 2, doubling the amount of training data. However, the makeup is not transferred as a whole: three neural networks are trained for the makeup of three face parts, so the resulting image contains unnatural artifacts. LADN (the local adversarial disentangling network for facial makeup and de-makeup) proposed by Gu et al. uses multiple overlapping local discriminators and asymmetric loss functions to ensure consistency of local detail. Similar to PairedCycleGAN by Chang et al., the method is trained for makeup removal and application simultaneously, and uses local discriminators on cropped image patches to avoid incomplete or unnatural regions in the image. Unlike Chang et al., this method, like SLGAN, uses an encoding structure to encode makeup style and identity separately; a combined makeup style can thus be obtained by interpolating between the codes of two different makeup styles. CA-GAN proposes a color discriminator to improve fine-grained makeup color transfer in the lip and eye regions, enabling quantitative analysis of the color accuracy of makeup transfer. However, since makeup transfer is treated simply as color transfer for each region, non-color information such as gloss and texture of the makeup is ignored.
The above makeup transfer algorithms are limited to eye makeup, lip makeup and foundation. MakeupBag expands the diversity of makeup and realizes the transfer of extreme makeup such as facial stickers, tattoos and ornaments, but it cannot transfer natural (nude) makeup, because the algorithm does not recognize natural-looking skin as makeup to be transferred. In addition, this method may misidentify some facial masks as makeup that needs to be transferred, resulting in imperfect makeup transfer.
Although the methods described above can perform makeup transfer to a certain extent, they neither specifically address the problem of spatial misalignment between the source and reference images nor allow precise, local adjustment of the makeup. PSGAN attempts to solve the spatial misalignment problem for the first time by introducing an attention mechanism, which improves robustness to pose and expression and is a big step forward, extending applicability from frontal-only images to different poses. It establishes pixel-level correspondences between the source and reference images and realizes local makeup transfer using face parsing masks and facial landmarks. In addition, PSGAN can control the shade of the transferred makeup by changing the weights of the attention features. However, computing the attention matrix is very expensive, training the network takes a long time, and local makeup transfer is not flexible. The CPM algorithm achieves diversity of makeup transfer by processing color makeup and pattern makeup separately, and builds a corresponding pattern makeup dataset. However, since the makeup is transferred as a whole, flexible combination of multiple reference images and transfer of partial makeup cannot be realized; in addition, when the pose difference between the images is large, unnatural artifacts appear.
Despite intensive research, no existing method can both transfer extreme makeup and give the user sufficient flexibility of use. Accordingly, there is a need in the art for a new face makeup transfer method that addresses the above problems.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a makeup transfer method based on a generative adversarial network, which is robust to face pose and achieves controllable and diversified transfer effects.
The purpose of the invention is realized by the following technical scheme:
A makeup transfer method based on a generative adversarial network comprises the following two technical schemes:
Technical scheme I:
Step one, human face segmentation
Inputting the RGB three-channel colorful face reference image into a face segmentation module to obtain a face semantic segmentation gray level image;
in this step, the face segmentation module segments the face into 19 parts: background, face, nose, glasses, left eye, right eye, left eyebrow, right eyebrow, left ear, right ear, mouth (including teeth), upper lip, lower lip, hair, hat, earring, necklace, neck and clothing; different parts are marked with gray values of different magnitudes in the face semantic segmentation gray-scale map;
step two, UV mapping
Respectively inputting a face source image, a face reference image and a face semantic segmentation gray image corresponding to the face reference image into a UV mapping module, and separating position information and texture information of the image to obtain a corresponding UV position map S and a corresponding UV texture map T;
step three, cosmetic extraction
Extracting the makeup of three parts, namely eye makeup, lip makeup and base makeup according to the UV texture mapping corresponding to the face reference image obtained in the step two and the UV texture mapping corresponding to the face semantic segmentation gray level image of the face reference image;
step four, color transfer
Step 4.1, build a color transfer branch generator
The color transfer branch generator comprises an encoder 1, an encoder 2, a first bottleneck layer, a second bottleneck layer, a makeup transfer module and a decoder;
Step 4.2, input the eye makeup, lip makeup and base makeup obtained in step three into the encoder 2 respectively to extract the style features of each partial makeup and obtain the makeup style codes;
Step 4.3, input the UV texture map of the source image obtained in step two into the encoder 1 and the first bottleneck layer to extract the face identity features;
Step 4.4, fuse the makeup style codes obtained in step 4.2 into the face identity features using the makeup transfer module, and then decode with the decoder to obtain the UV texture map in which the reference makeup has been transferred onto the source image;
step five, UV inverse mapping
Map the UV texture map after makeup transfer obtained in step 4.4 along the UV position map of the source image obtained in step two to restore it to a real two-dimensional image, which is the result image of makeup transfer.
Technical scheme II:
Step one, face segmentation
Inputting the RGB three-channel colorful face reference image into a face segmentation module to obtain a face semantic segmentation gray level image;
step two, UV mapping
Respectively inputting a face source image, a face reference image and a face semantic segmentation gray image corresponding to the face reference image into a UV mapping module, and separating position information and texture information of the image to obtain a corresponding UV position map and a corresponding UV texture map;
step three, cosmetic extraction
Extract the makeup of three parts, namely eye makeup, lip makeup and base makeup, according to the UV texture map of the face reference image obtained in step two and the UV texture map of the face semantic segmentation gray-scale image of the face reference image;
step four, color transfer
Step 4.1, build a color transfer branch generator
The color transfer branch generator comprises an encoder 1, an encoder 2, a first bottleneck layer, a second bottleneck layer, a makeup transfer module and a decoder;
Step 4.2, input the eye makeup, lip makeup and base makeup obtained in step three into the encoder 2 respectively to extract the style features of each partial makeup and obtain the makeup style codes;
Step 4.3, input the UV texture map of the source image obtained in step two into the encoder 1 and the first bottleneck layer to extract the face identity features;
Step 4.4, fuse the makeup style codes obtained in step 4.2 into the face identity features using the makeup transfer module, and then decode with the decoder to obtain the UV texture map in which the reference makeup has been transferred onto the source image;
step five, pattern transfer
Step 5.1, input the UV texture map of the reference image obtained in step two into the pattern segmentation network of the pattern transfer branch to obtain the face semantic segmentation gray-scale map corresponding to the pattern makeup;
Step 5.2, multiply the pattern makeup segmentation map obtained in step 5.1 element-wise with the UV texture map of the reference image to extract the pattern makeup;
Step 5.3, invert the pattern makeup segmentation map obtained in step 5.1, multiply it element-wise with the color-transferred UV texture map obtained in step 4.4, and then add the pattern makeup extracted in step 5.2 element-wise to obtain the complete UV texture map after makeup transfer;
step six, UV inverse mapping
Map the UV texture map obtained in step 5.3 along the UV position map of the source image obtained in step two to restore it to a real two-dimensional image, which is the result image of makeup transfer.
Compared with the prior art, the invention has the following advantages:
the invention not only achieves a better makeup transfer effect when the pose difference between the source image and the reference image is large, but also enables combined transfer of partial makeup from multiple reference images, transfer of a single partial makeup from one reference image, and transfer of extreme pattern makeup, thereby improving the controllability and flexibility of makeup transfer for the user as well as the robustness to image pose, making the method better suited to practical application scenarios.
Drawings
FIG. 1 is a network architecture of a face segmentation module;
FIG. 2 is a UV mapping module;
FIG. 3 is a color branch generator structure;
FIG. 4 is a detailed structure of a color branch generator;
FIG. 5 is an algorithm overall framework 1;
FIG. 6 is an algorithm overall framework 2;
FIG. 7 shows the results of overall makeup transfer;
FIG. 8 shows transfer results when there is a pose difference;
FIG. 9 shows partial makeup transfer results;
FIG. 10 shows the results of overall makeup transfer including pattern makeup;
FIG. 11 shows transfer results with pattern makeup when there is a pose difference;
FIG. 12 shows partial makeup transfer results including pattern makeup.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings, but is not limited thereto; any modification or equivalent replacement of the technical solution of the present invention that does not depart from its spirit and scope shall be covered by the protection scope of the present invention.
The invention provides a face makeup transfer method based on treating basic makeup and pattern makeup separately, which mainly comprises four parts: (1) face segmentation network: extracts information about each part of the face. (2) UV mapping network: separates the position information and texture information of the face image. (3) color transfer branch: extracts, encodes, combines and transfers the basic makeup of the eye, lip and foundation regions separately. (4) pattern transfer branch: transfers extreme makeup such as facial patterns.
1. Face segmentation network
Previous work only extracts the makeup of each part during network training and does not process the reference image during inference, so the makeup of the whole face can only be transferred together; makeup transfer for a specified part cannot be realized, nor can combined transfer of local makeup from multiple reference images. In order to give users flexibility and controllability over partial makeup transfer, a face segmentation module is designed to accurately extract the makeup of each facial part in the image. The segmentation network adopts the U-net architecture, which segments the information of each part of the image more effectively. The specific network structure is shown in FIG. 1: each dark rectangle represents a feature map of a certain size, the number of channels is marked above the rectangle, and the height and width are marked on its left side. The light rectangles represent copied feature maps. Arrows of different shades of gray and different directions represent different operations.
The face segmentation module receives an RGB three-channel color face image as input, uniformly cropped to 256 × 256. Features are extracted through successive convolution and pooling, a 19-channel face segmentation map is obtained through upsampling, and finally the segmentation map is converted into a face semantic segmentation gray-scale map. Here the face is divided into 19 parts, different parts are marked with gray values of different magnitudes in the semantic segmentation map, and the gray-value labels of the specific parts are shown in Table 1. The makeup information of different parts can then be extracted by combining the face image with the corresponding semantic segmentation map element-wise, which facilitates the subsequent encoding and transfer of the makeup. The face segmentation network is trained on the CelebAMask-HQ dataset, a large-scale high-resolution dataset with fine-grained mask labels containing more than 30000 face images at a resolution of 1024 × 1024, each with a corresponding manually annotated mask label image at a resolution of 512 × 512. The face is divided into 19 parts such as eyes, nose and mouth; note that not every face contains all 19 parts.
Table 1. Face segmentation gray-value label table (provided as an image in the original publication; the specific gray values are not reproduced here).
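For concreteness, the following is a minimal Python (PyTorch) sketch of how the 19-channel output of the segmentation network could be converted into the gray-scale label map used by the rest of the pipeline. The gray values in GRAY_VALUES are placeholders standing in for the labels of Table 1, which are not reproduced above, and the channel-index-to-part assignment is likewise an assumption.

import torch

# Placeholder gray values for a few of the 19 parts (the real values are given in Table 1).
GRAY_VALUES = {"background": 0, "face": 30, "upper_lip": 150, "lower_lip": 160}

def to_gray_label_map(logits_19ch: torch.Tensor) -> torch.Tensor:
    """Convert the 19-channel U-net output (B, 19, 256, 256) into a gray-scale label map."""
    labels = logits_19ch.argmax(dim=1)                  # (B, 256, 256) part index per pixel
    gray = torch.zeros_like(labels, dtype=torch.uint8)
    for idx, value in enumerate(GRAY_VALUES.values()):  # assumed channel index -> gray value
        gray[labels == idx] = value
    return gray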
2. UV mapping network
PSGAN solves the makeup transfer problem caused by a large face pose difference between the source and reference images by introducing an attention mechanism, but it has huge computational overhead and limited effect. A better way to solve this problem is three-dimensional face analysis: separate the pose information and the texture information of the face image, perform makeup transfer only on the texture information, and restore the result image according to the original pose information, so that the pose difference is shielded and the pose of the source image is well preserved. However, directly obtaining the pose information by three-dimensional face modeling would also incur huge computational overhead. In order to improve the robustness of the algorithm to the pose difference between the reference and source images while keeping processing fast, UV mapping is introduced to separate the face position information from the texture information in the image. Instead of estimating a large number of parameters for 3D face modeling, an encoder-decoder network is trained in a supervised way on pairs of face images and their pose information, so that given a color face image it outputs the corresponding position information, namely the UV position map.
UV mapping is a common technique for texture mapping of 3D objects in computer graphics: each key point of a 3D object is associated with a position (UV coordinate) on a 2D image, and texture sampling is performed accordingly to obtain a texture map of the 3D object, i.e. the texture on the surface of the 3D object is flattened into a 2D image. PRNet extends this idea and introduces the UV position map to encode arbitrary 3D facial shapes. The three RGB channels of a color picture are used to store the 3D positions, i.e. the XYZ coordinates, of points of the 3D face model; this picture is the UV position map. Regardless of the head pose of the input, each pixel in the UV position map corresponds to a fixed semantic point of the face. There is also a map that stores the texture information of the corresponding points of the 2D face image, which is the UV texture map; it is obtained by texture-sampling the real image according to the UV position map. The UV position map contains all information about face shape, head pose and facial expression, while the UV texture map contains all texture information of the face, and the two need not be linked. Therefore, the face position information and the texture information in one face picture are separated, and makeup transfer is performed only on the texture information, which naturally removes the influence of a large pose difference.
The UV mapping module adopts an encoder-decoder structure; the specific structure is shown in FIG. 2. The encoder, starting with one convolutional layer followed by 10 residual blocks, maps the 256 × 256 × 3 image to an 8 × 8 × 512 feature tensor. The decoder contains 17 transposed convolutional layers and predicts the UV position map of the input image from the feature tensor. All convolution kernels are 4 × 4 and the ReLU activation function is used.
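The sketch below illustrates this encoder-decoder layout in PyTorch. It compresses the 10 residual blocks and 17 transposed convolutions described above into a handful of strided blocks so that the shapes (256 × 256 × 3 input, 8 × 8 × 512 bottleneck, 256 × 256 × 3 UV position map output) are easy to follow; it is an illustration of the structure, not the exact network.

import torch
import torch.nn as nn

def down(c_in, c_out):
    # A 4x4 convolution with stride 2 halves the spatial size.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU(True))

def up(c_in, c_out):
    # A 4x4 transposed convolution with stride 2 doubles the spatial size.
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU(True))

class UVPositionNet(nn.Module):
    """Simplified encoder-decoder: 256x256x3 face image -> 256x256x3 UV position map."""
    def __init__(self):
        super().__init__()
        chans = [3, 64, 128, 256, 512, 512]          # 256 -> 128 -> 64 -> 32 -> 16 -> 8
        self.encoder = nn.Sequential(*[down(chans[i], chans[i + 1]) for i in range(5)])
        self.decoder = nn.Sequential(up(512, 512), up(512, 256), up(256, 128),
                                     up(128, 64), up(64, 32),
                                     nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, img):
        return self.decoder(self.encoder(img))

# Example: pos_map = UVPositionNet()(torch.rand(1, 3, 256, 256))  # -> (1, 3, 256, 256)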
The UV mapping process is denoted by UV: given an input face picture, its corresponding UV position map S and UV texture map T are obtained with a pre-trained UV mapping model. UV^{-1} denotes the UV inverse mapping process, i.e. restoring the real two-dimensional face image from the UV position map S and the UV texture map T. The process is defined as follows:
S, T := UV(I)    (1);
I := UV^{-1}(S, T)    (2).
In the overall algorithm framework, the input source image I_src, the reference image I_ref and their corresponding face semantic segmentation maps are each passed through the mapping function UV to obtain the corresponding UV maps (S_src, T_src), (S_ref, T_ref) and the UV maps of the segmentation images. Note that S_src and S_ref are UV position maps related only to the 3D face shape, so this part carries no makeup texture. The texture maps T_src and T_ref, together with the segmentation texture maps, are then fed into the color transfer branch and the pattern transfer branch, and the outputs of the two branches are fused into the final texture map T_res after makeup transfer. Finally, according to the UV position map S_src of the source image, the rendering function UV^{-1} transforms it into a standard two-dimensional image:
I_res = UV^{-1}(S_src, T_res)    (3);
where I_res denotes the resulting two-dimensional result image.
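A rough sketch of how UV(·) and UV^{-1}(·) can be realized once a UV position map has been predicted is given below: the texture map is obtained by sampling the input image at the image coordinates stored in the position map, and the inverse mapping scatters the (possibly edited) texture back to those coordinates. The function names and the assumption that the first two channels of the position map hold pixel coordinates are illustrative; a full renderer would rasterize triangles instead of the naive scatter shown here.

import numpy as np
import cv2

def uv_map(image: np.ndarray, pos_map: np.ndarray) -> np.ndarray:
    """T := UV(I): sample the face texture along the UV position map (H, W, 3)."""
    map_x = pos_map[..., 0].astype(np.float32)   # x image coordinate of each UV pixel
    map_y = pos_map[..., 1].astype(np.float32)   # y image coordinate of each UV pixel
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def uv_unmap(texture: np.ndarray, pos_map: np.ndarray, out_shape) -> np.ndarray:
    """I := UV^{-1}(S, T): naive inverse that scatters texture pixels back to the image."""
    out = np.zeros(out_shape, dtype=texture.dtype)
    xs = np.clip(pos_map[..., 0].round().astype(int), 0, out_shape[1] - 1)
    ys = np.clip(pos_map[..., 1].round().astype(int), 0, out_shape[0] - 1)
    out[ys, xs] = texture
    return out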
3. Color transfer branching
The color transfer branch is trained as a generative adversarial network, and the trained generator completes the extraction, encoding and transfer of the makeup. In order to extract, encode and transfer the three parts of basic makeup separately, the generator introduces the coding mechanism of StyleGAN (Karras T, Laine S, Aila T. A style-based generator architecture for generative adversarial networks [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 4401-4410), and the three parts of basic makeup are encoded and transferred separately.
The structure of the generator is shown in FIG. 3; it comprises an encoder 1, an encoder 2, a first bottleneck layer, a second bottleneck layer, a makeup transfer module and a decoder.
Encoder 1 and the first bottleneck layer at the upper left extract the identity features of the source image face; the goal is to extract the face identity features from the input source image:
F_id = FIEnc(x)    (8).
Encoder 1 is composed of two convolution blocks; the specific structure of a convolution block is shown in FIG. 4 (a) and comprises a convolutional layer, instance normalization and a ReLU activation function. To alleviate the difficulty of training deep networks, the first bottleneck layer adopts a residual-block structure: it is composed of three residual blocks, which are ordinary residual blocks without an AdaIN layer. The specific structure of the residual block is shown in FIG. 4 (b); it is formed by concatenating a convolutional layer, instance normalization, a ReLU activation function, a convolutional layer and instance normalization, and the initial input is added to the output of the last layer as the output of the residual block.
According to the face semantic segmentation map of the reference image produced by the face segmentation module, the texture map of the reference image at the lower left is decomposed into three components, namely the eye, skin and lip parts, which is realized by the following formula:
y_i = T_ref ⊙ M_i    (4);
where y_i represents each facial component of the reference image, i = {lip, skin, eye}, M_i is the corresponding weight mask, and ⊙ denotes the Hadamard (element-wise) product.
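A minimal sketch of Eq. (4) is shown below; the grouping of the Table 1 labels into the lip, skin and eye regions and the concrete gray values are assumptions made only for illustration.

import torch

# Assumed mapping from segmentation gray values to the three makeup regions of Eq. (4).
REGION_GRAY_VALUES = {
    "lip":  [150, 160],   # upper lip, lower lip
    "eye":  [50, 60],     # left eye, right eye (the eye-makeup area in practice)
    "skin": [30],         # face region used for the base makeup
}

def extract_components(t_ref: torch.Tensor, seg_gray: torch.Tensor) -> dict:
    """y_i = T_ref ⊙ M_i for i in {lip, skin, eye} (Eq. 4)."""
    components = {}
    for region, values in REGION_GRAY_VALUES.items():
        mask = torch.zeros_like(seg_gray, dtype=torch.bool)
        for v in values:
            mask |= (seg_gray == v)
        components[region] = t_ref * mask.unsqueeze(1).float()   # Hadamard product
    return components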
Each input makeup component y_i is fed into encoder 2 at the lower left of FIG. 3 to extract the makeup features of that part. Encoder 2 is composed of two convolution blocks, whose specific structure is shown in FIG. 4 (a): a convolutional layer, instance normalization and a ReLU activation function. The structures of encoder 1 and encoder 2 are identical, but note that they do not share parameters.
In order to encode and disentangle each part of the makeup, the makeup transfer module introduces the coding mechanism of StyleGAN; it consists of the mapping module and the multilayer perceptron shown in FIG. 4. The structure of the mapping module is shown in FIG. 4 (c): it is composed of an average pooling layer and a 1 × 1 convolutional layer, and maps each input component y_i to a part-specific style code z_i. The three partial codes are then concatenated to form the complete initial style code z in the latent space Z:
z = z_lip ⊕ z_skin ⊕ z_eye    (5);
where ⊕ denotes concatenation.
Precisely because the input reference image is deconstructed into different semantic components, the method of the invention can combine arbitrary semantic components of different reference images even when the poses and expressions of the reference images differ. The formula for combining partial makeup codes from multiple reference images is:
z = z_lip^a ⊕ z_skin^b ⊕ z_eye^c    (6);
where a, b and c correspond to three reference images y^a, y^b and y^c.
To change the distribution of the training data, nonlinearity needs to be injected: feeding the initial style code z into a multilayer perceptron (MLP) with three fully connected layers yields the style code in the more disentangled latent space W:
w = MLP(z)    (7).
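The style-encoding path of Eqs. (5) to (7) can be sketched as follows; the channel widths, code dimension and exact layer sizes are assumptions, while the average pooling, the 1 × 1 convolution, the concatenation of the three partial codes and the three fully connected layers follow the description above.

import torch
import torch.nn as nn

class MappingModule(nn.Module):
    """Per-part mapping module: average pooling followed by a 1x1 convolution."""
    def __init__(self, c_feat=128, c_code=64):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(c_feat, c_code, kernel_size=1)
    def forward(self, part_feat):                             # (B, c_feat, H, W)
        return self.proj(self.pool(part_feat)).flatten(1)     # (B, c_code)

class MakeupStyleEncoder(nn.Module):
    """z = z_lip ⊕ z_skin ⊕ z_eye, then w = MLP(z) (Eqs. 5-7)."""
    def __init__(self, c_feat=128, c_code=64, w_dim=192):
        super().__init__()
        self.maps = nn.ModuleDict({p: MappingModule(c_feat, c_code)
                                   for p in ("lip", "skin", "eye")})
        self.mlp = nn.Sequential(nn.Linear(3 * c_code, w_dim), nn.ReLU(True),
                                 nn.Linear(w_dim, w_dim), nn.ReLU(True),
                                 nn.Linear(w_dim, w_dim))
    def forward(self, part_feats: dict) -> torch.Tensor:
        z = torch.cat([self.maps[p](part_feats[p]) for p in ("lip", "skin", "eye")], dim=1)
        return self.mlp(z)

Combining parts from different reference images as in Eq. (6) then simply amounts to concatenating partial codes computed from different inputs before the MLP.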
The second bottleneck layer in FIG. 3 fuses the style code w into the face identity features F_id, thereby transferring the makeup of the reference image onto the face of the source image. The second bottleneck layer is composed of three fusion blocks; the structure of a fusion block is shown in FIG. 4 (d). It is a residual block with AdaIN, i.e. the instance normalization of the residual block in FIG. 4 (b) is replaced by AdaIN (adaptive instance normalization), and the output of the makeup transfer module provides the parameters of the AdaIN layers, realizing feature fusion. The style code w is specialized by a learnable affine transformation and then passed to each fusion block. The j-th AdaIN layer is defined by the formula:
AdaIN_j(F_j, w) = w_{s,j} · (F_j - μ(F_j)) / σ(F_j) + w_{b,j}    (9);
where w_{s,j} and w_{b,j} are the scaling and bias styles of the corresponding component, F_j represents the input feature map, and μ(·) and σ(·) are the channel-wise mean and standard deviation respectively.
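The fusion block of FIG. 4 (d) can be sketched as below; the AdaIN operation follows Eq. (9), while the kernel sizes and the placement of the learnable affine transformation of w are assumptions.

import torch
import torch.nn as nn

def adain(feat, w_scale, w_bias):
    """AdaIN_j(F_j, w) = w_s * (F_j - mu(F_j)) / sigma(F_j) + w_b, per channel (Eq. 9)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6
    return w_scale[:, :, None, None] * (feat - mu) / sigma + w_bias[:, :, None, None]

class FusionBlock(nn.Module):
    """Residual block whose normalization parameters are produced from the style code w."""
    def __init__(self, channels, w_dim):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.affine = nn.Linear(w_dim, 4 * channels)   # learnable affine transform of w
    def forward(self, feat, w):
        s1, b1, s2, b2 = self.affine(w).chunk(4, dim=1)
        h = torch.relu(adain(self.conv1(feat), s1, b1))
        h = adain(self.conv2(h), s2, b2)
        return feat + h                                # residual connection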
After the two upsampling convolution layers of the decoder in FIG. 3, the final result picture, namely the UV texture map after makeup transfer, is obtained. The specific structure of the upsampling convolution block is shown in FIG. 4 (e); it is composed of upsampling, convolution, layer normalization and a ReLU activation function.
Due to the complexity of the makeup transfer task, training of this generative adversarial network uses a joint loss function comprising the following terms:
(1) Adversarial loss. It guides the generator to produce more realistic results. Two discriminators D_X and D_Y judge the authenticity of pictures in the makeup domains X and Y respectively, distinguishing images generated by the network from real images. Both discriminators have the same structure as a Markovian (patch-based) discriminator. The adversarial loss L_D of the discriminators and the adversarial loss L_G of the generator are defined as:
L_D = -E_{x~X,y~Y}[log(D_X(x) × D_Y(y))] - E_{x~X,y~Y}[log((1 - D_X(G(y,x))) × (1 - D_Y(G(x,y))))]    (10);
L_G = -E_{x~X,y~Y}[log(D_X(G(y,x)) × D_Y(G(x,y)))]    (11).
(2) Global perceptual loss. Since the two pictures come from different domains, pixel-level constraints are not feasible. To ensure face (content) consistency between the input source image and the output makeup-transferred image, a perceptual loss is used to maintain identity consistency of the whole face. The global perceptual loss L_global is defined as:
L_global = ||F_l(G(y,x)) - F_l(y)||_2 + ||F_l(G(x,y)) - F_l(x)||_2    (12);
where F_l(·) represents the output features of the l-th layer of the VGG network model, and ||·||_2 is the L2 norm.
(3) Local perceptual loss. In addition to the global perceptual loss, a local loss is introduced to further keep parts that should not receive makeup, such as the teeth and eyebrows, unchanged. The local loss L_local is defined as:
L_local = Σ_{i∈I} ( ||G(x,y) ⊙ M_i - x ⊙ M_i||_2 + ||G(y,x) ⊙ M_i - y ⊙ M_i||_2 )    (13);
where M_i denotes the mask of a particular part in the set I = {teeth, hair, eyeballs, eyebrows}, and i indexes the elements of I.
(4) Cycle consistency loss. For this unsupervised learning task without paired image data, the cycle consistency loss proposed by CycleGAN is adopted to protect the identity information of the source image. The cycle consistency loss L_cyc is defined as:
L_cyc = ||G(G(y,x), y) - y||_1 + ||G(G(x,y), x) - x||_1    (14);
where ||·||_1 denotes the L1 norm.
(5) Makeup loss. A makeup loss is introduced to improve the accuracy of makeup transfer; it uses histogram matching (HM) to provide a pseudo ground-truth image of the face after makeup transfer. This image is formed by matching the local histograms of three different face regions, namely face, eyes and lips, and merging the three parts into one pseudo ground-truth image. The makeup loss L_makeup is defined as:
L_makeup = ||G(x,y) - HM(x,y)||_2 + ||G(y,x) - HM(y,x)||_2    (15);
where HM(·) denotes histogram matching; the output of HM(x,y) has the makeup style of y while retaining the face identity of x.
The overall loss function of the complete network is defined as:
L_total = λ_GAN(L_D + L_G) + λ_cyc L_cyc + λ_g L_global + λ_l L_local + λ_makeup L_makeup    (16);
where λ_GAN, λ_cyc, λ_g, λ_l and λ_makeup are the weights of the different loss terms.
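To make the training objective concrete, the sketch below assembles the loss terms of Eqs. (10) to (16). The histogram-matching pseudo ground truth of Eq. (15) is approximated with scikit-image, and the numerical loss weights are placeholders, since the patent does not specify their values.

import numpy as np
from skimage.exposure import match_histograms

def pseudo_ground_truth(x: np.ndarray, y: np.ndarray, region_masks: dict) -> np.ndarray:
    """HM(x, y) of Eq. (15): region-wise histogram matching of x towards y."""
    hm = x.copy()
    matched = match_histograms(x, y, channel_axis=-1)   # a real implementation would match
    for region in ("skin", "eye", "lip"):               # each region of x against the same
        m = region_masks[region].astype(bool)           # region of y, not whole images
        hm[m] = matched[m]
    return hm

W = {"gan": 1.0, "cyc": 10.0, "global": 5e-3, "local": 5e-3, "makeup": 1.0}  # assumed weights

def total_loss(l_d, l_g, l_cyc, l_global, l_local, l_makeup):
    """L_total of Eq. (16)."""
    return (W["gan"] * (l_d + l_g) + W["cyc"] * l_cyc + W["global"] * l_global
            + W["local"] * l_local + W["makeup"] * l_makeup)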
The network is trained on the MT dataset, but all images must first be converted into the corresponding UV texture maps; training is then carried out on this UV texture map training set.
4. Pattern transfer branch
The goal of the pattern transfer branch is to detect and transfer pattern-type makeup such as stickers, facial painting and ornaments. Since the nature of pattern makeup differs from that of basic makeup, transferring basic makeup can be regarded as transferring color, whereas transferring pattern makeup must keep its texture, shape, position and so on unchanged while deforming it onto the surface of the target 3D face. A pattern transfer branch is therefore introduced to transfer makeup other than eye makeup, lip makeup and base makeup.
In a natural real image, transferring pattern makeup is complicated: the pattern has to be segmented, unwarped and then re-warped onto the target image. However, since the UV texture map is used as input, the position information of the face has already been separated, the unwarping and re-warping steps are unnecessary, and the problem reduces directly to simple texture image segmentation. This step can be implemented with any segmentation network; here a U-net structure similar to the face segmentation module is used, as shown in FIG. 1. Finally, given an input reference image texture map T_ref, a binary segmentation mask Γ_m of the pattern makeup is predicted.
CPM-Synt-1 is adopted as the training dataset; it contains 5555 groups of data. Training is achieved by minimizing the dice loss between the ground-truth segmentation mask and the predicted segmentation mask:
L_dice = 1 - 2|Γ_gt ⊙ Γ_pr| / (|Γ_gt| + |Γ_pr|)    (17);
where Γ_gt is the ground-truth segmentation mask and Γ_pr is the predicted segmentation mask.
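A minimal implementation of the dice loss of Eq. (17) on soft masks is:

import torch

def dice_loss(pred_mask: torch.Tensor, gt_mask: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - 2|A ∩ B| / (|A| + |B|), computed on masks with values in [0, 1]."""
    inter = (pred_mask * gt_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)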
5. Combining and UV inverse mapping
The output of the pattern transfer branch is a binary segmentation mask Γ_m, whereas the output of the color branch is a texture map T_color; the output forms of the two branches are completely different, which reflects the fundamental difference between the two types of makeup and is why the method uses two independent branches to handle them.
In order to obtain the finally desired UV texture map, the pattern makeup of the reference image defined by the predicted segmentation mask is blended with the color-transferred output texture map:
T_res = T_ref ⊙ Γ_m + T_color ⊙ (1 - Γ_m)    (18);
where Γ_m denotes the pattern segmentation mask output by the pattern transfer branch, T_ref denotes the texture map of the reference image obtained from the UV mapping module, T_color denotes the makeup-transferred texture map output by the color transfer branch, ⊙ denotes element-wise multiplication, and T_res denotes the texture map that blends the outputs of the color transfer branch and the pattern transfer branch. Finally, the UV texture map is mapped to the real output image using the rendering function:
I_res = UV^{-1}(S_src, T_res)    (19).
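The blending of Eq. (18) and the final rendering of Eq. (19) reduce to a few lines; uv_unmap refers to the inverse-mapping sketch given earlier and the variable names are illustrative.

import torch

def blend_textures(t_ref: torch.Tensor, t_color: torch.Tensor, gamma_m: torch.Tensor) -> torch.Tensor:
    """T_res = T_ref ⊙ Γ_m + T_color ⊙ (1 - Γ_m)  (Eq. 18)."""
    return t_ref * gamma_m + t_color * (1.0 - gamma_m)

# i_res = uv_unmap(blend_textures(t_ref, t_color, gamma_m), s_src, out_shape)   # Eq. (19)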
In the invention, network training is divided into the following four parts, which are trained separately:
(1) Pre-train the face segmentation network.
(2) Pre-train the UV mapping network.
(3) Pre-train the pattern makeup segmentation network.
(4) Train the generative adversarial network of the color transfer branch.
In the invention, network inference comprises the following two processes:
Process one is the network inference process that transfers only the basic makeup of the three parts, namely eye makeup, lip makeup and base makeup; as shown in FIG. 5, it proceeds along the timeline as follows:
(1) Input the reference image into the face segmentation network to obtain the face semantic segmentation gray-scale map corresponding to the reference image.
(2) Input the source image, the reference image and the face semantic segmentation gray-scale map corresponding to the reference image into the UV mapping network, separate the position information and texture information of the images, and store the corresponding UV position maps and UV texture maps. Subsequent makeup transfer is performed on the UV texture maps, while the position information of the source image is kept for later use.
(3) Extract the makeup of the three parts, namely eye makeup, lip makeup and base makeup, by element-wise masking of the UV texture map of the reference image obtained in step (2) with the UV texture map of the face semantic segmentation gray-scale image of the reference image.
(4) Input the three parts of makeup obtained in step (3) into the makeup encoder of the color transfer branch for makeup encoding, input the UV texture map of the source image obtained in step (2) into the color transfer branch to encode the face identity information, and obtain the UV texture map with the reference makeup transferred onto the source image after the makeup transfer and decoding modules of the color transfer branch.
(5) Map the makeup-transferred UV texture map obtained in step (4) along the UV position map of the source image obtained in step (2) to restore it to a real two-dimensional image, which is the result image of makeup transfer.
Process two is the network inference process that transfers color makeup and pattern makeup simultaneously; as shown in FIG. 6, it proceeds along the timeline as follows:
(1) Input the reference image into the face segmentation network to obtain the corresponding face semantic segmentation gray-scale map.
(2) Input the source image, the reference image and the face semantic segmentation gray-scale map corresponding to the reference image into the UV mapping network, separate the position information and texture information of the images, and store the corresponding UV position maps and UV texture maps. Subsequent makeup transfer is performed on the UV texture maps, while the position information of the source image is kept for later use.
(3) Extract the makeup of the three parts, namely eye makeup, lip makeup and base makeup, by element-wise masking of the UV texture map of the reference image obtained in step (2) with the UV texture map of the face semantic segmentation gray-scale image of the reference image.
(4) Input the three parts of makeup obtained in step (3) into the makeup encoder of the color transfer branch for makeup encoding, input the UV texture map of the source image obtained in step (2) into the color transfer branch to encode the face identity information, and obtain the UV texture map with the reference makeup transferred onto the source image after the makeup transfer and decoding modules of the color transfer branch.
(5) Input the UV texture map of the reference image obtained in step (2) into the pattern segmentation network of the pattern transfer branch to obtain the segmentation mask corresponding to the pattern makeup.
(6) Multiply the pattern makeup segmentation mask obtained in step (5) element-wise with the UV texture map of the reference image to extract the pattern makeup.
(7) Invert the pattern makeup segmentation mask obtained in step (5), multiply it element-wise with the makeup-transferred UV texture map obtained in step (4), and then add the pattern makeup extracted in step (6) element-wise to obtain the complete UV texture map after makeup transfer.
(8) Map the UV texture map obtained in step (7) along the UV position map of the source image obtained in step (2) to restore it to a real two-dimensional image, which is the result image of makeup transfer.
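Putting the pieces together, the inference of process two can be summarised by the following sketch, which follows FIG. 6; every attribute of the assumed nets container and the helper names (extract_components, uv_unmap and so on from the earlier sketches) are illustrative rather than part of the original disclosure.

def makeup_transfer(src_img, ref_img, nets, transfer_pattern=True):
    """Inference pipeline sketch; set transfer_pattern=False for process one (FIG. 5)."""
    seg_ref = nets.face_seg(ref_img)                      # (1) reference segmentation map
    s_src, t_src = nets.uv(src_img)                       # (2) UV position / texture maps
    s_ref, t_ref = nets.uv(ref_img)
    t_seg = nets.uv_texture(seg_ref, s_ref)               #     segmentation map in UV space
    parts = extract_components(t_ref, t_seg)              # (3) lip / skin / eye makeup
    w = nets.style_encoder(parts)                         # (4) makeup style code
    t_color = nets.generator(t_src, w)                    #     colour-transferred texture
    if transfer_pattern:                                  # FIG. 6 only
        gamma_m = nets.pattern_seg(t_ref)                 # (5) pattern segmentation mask
        t_res = t_ref * gamma_m + t_color * (1 - gamma_m) # (6)-(7) blend in the pattern
    else:
        t_res = t_color
    return nets.uv_inverse(s_src, t_res)                  # (8) render the result image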
Makeup transfer results:
(1) Transfer of only the basic makeup of the three parts: eye makeup, lip makeup and base makeup.
The results of overall makeup transfer when both the source and reference images are frontal are shown in FIG. 7; each row is one experiment, the left column is the input source image, the middle the reference image, and the right column the result of transferring the makeup of the reference image onto the source image. The overall transfer effect is good: lip makeup, base makeup and eye makeup are all transferred well, but the eye makeup is slightly weaker. This is because the eye makeup is extracted with a rectangular box drawn around the eyes, so a faint rectangular outline can be seen in the transferred image; this problem needs to be addressed in future work.
The results of makeup transfer when there is a large pose difference between the reference image and the source image are shown in FIG. 8, where each row is one experiment. For each experiment, the leftmost column is the source image, the second the reference image, the third column the result of the algorithm of the present invention, and the fourth column the result of the CPM algorithm. By comparison, the CPM algorithm also transfers the shadowed part of the reference image, which should not be transferred, into the result image: the man's right cheek faces forward and should not be shadowed. The algorithm of the invention transfers the overall makeup better, which reflects the advantage of encoding local makeup. Therefore, when the pose difference is large, the algorithm of the invention performs better than CPM.
Because the algorithm of the invention extracts and encodes the three parts of makeup of the input image separately instead of transferring the makeup as a whole, it can transfer partial makeup from a reference image and can combine different partial makeup from multiple reference images, which the CPM algorithm cannot do. The results of partial makeup transfer are shown in FIG. 9, where the first two rows are experiments with a single reference image and the last row is an experiment with two reference images. In the first two rows, the first column is the source image, the second the reference image, and the third, fourth and fifth columns are the results of transferring only the lip makeup, base makeup and eye makeup of the reference image onto the source image, respectively. In the last row, the first column is the input source image, the second the first reference image, the third the second reference image, the fourth column the result of transferring the lip makeup of the first reference image combined with the other makeup of the second reference image onto the source image, and the fifth column the result of transferring the lip makeup of the second reference image combined with the other makeup of the first reference image onto the source image.
(2) Extension to pattern makeup transfer on the basis of (1).
In order to show the transfer effect of pattern makeup, reference images containing pattern makeup are selected for this part. The results of overall makeup transfer when both the source and reference images are frontal are shown in FIG. 10; each row is one experiment, the leftmost column is the input source image, the second column the input reference image, and the rightmost column the result of transferring the whole makeup of the reference image onto the source image. Both the three-part basic makeup (eye makeup, lip makeup and base makeup) and the pattern makeup are transferred well.
The results of makeup transfer when there is a large pose difference between the reference image and the source image are shown in FIG. 11, where each row is one experiment. For each experiment, the leftmost column is the source image, the second the reference image, the third column the result of the algorithm of the present invention, and the fourth column the result of the CPM algorithm. By comparison, the CPM algorithm again transfers the shadowed part of the reference image, which should not be transferred, into the result image, while the algorithm of the invention transfers the overall makeup better, reflecting the advantage of encoding local makeup; however, its eye makeup and lip makeup effects are not as good as those of the CPM algorithm.
The results of partial makeup transfer with pattern makeup are shown in FIG. 12, where the first two rows are experiments with a single reference image and the last row is an experiment with two reference images. In the first two rows, the first column is the input source image, the second the input reference image, and the third, fourth, fifth and sixth columns are the results of transferring only the lip makeup, base makeup, eye makeup and pattern makeup of the reference image onto the source image, respectively. In the last row, the first column is the input source image, the second the first reference image, the third the second reference image, the fourth column the result of transferring the lip makeup of the first reference image combined with the other makeup of the second reference image onto the source image, the fifth column the result of transferring the lip makeup of the second reference image together with the other makeup of the first reference image onto the source image, and the sixth column the result of transferring the lip makeup of the second reference image together with the pattern makeup and other makeup of the first reference image onto the source image. The results show that the algorithm of the invention can flexibly transfer partial makeup and flexibly combine partial makeup from multiple reference images, and can well meet users' personalized needs.

Claims (6)

1. A makeup transfer method based on a generative adversarial network, characterized in that it comprises the following steps:
step one, face segmentation
Inputting the RGB three-channel color face reference image into a face segmentation module to obtain a face semantic segmentation gray level image;
step two, UV mapping
Inputting a face source image, a face reference image and a face semantic segmentation gray image corresponding to the face reference image into a UV mapping module respectively, and separating position information and texture information of the image to obtain a corresponding UV position map S and a corresponding UV texture map T;
step three, cosmetic extraction
Extract the makeup of three parts, namely eye makeup, lip makeup and base makeup, according to the UV texture map of the face reference image obtained in step two and the UV texture map of the face semantic segmentation gray-scale image of the face reference image;
step four, color transfer
Step 4.1, establish a color transfer branch generator
the color transfer branch generator comprises an encoder 1, an encoder 2, a first bottleneck layer, a second bottleneck layer, a makeup transfer module and a decoder;
Step 4.2, input the eye makeup, lip makeup and base makeup obtained in step three into the encoder 2 respectively to extract the style features of each partial makeup and obtain the makeup style codes;
Step 4.3, input the UV texture map of the source image obtained in step two into the encoder 1 and the first bottleneck layer to extract the face identity features;
Step 4.4, fuse the makeup style codes obtained in step 4.2 into the face identity features using the makeup transfer module, and then decode with the decoder to obtain the UV texture map in which the reference makeup has been transferred onto the source image;
step five, UV inverse mapping
Map the UV texture map obtained in step 4.4 along the UV position map of the source image obtained in step two to restore it to a real two-dimensional image, which is the result image of makeup transfer.
2. The makeup transfer method based on a generative adversarial network according to claim 1, wherein in step one the face segmentation module segments the face into background, face, nose, glasses, left eye, right eye, left eyebrow, right eyebrow, left ear, right ear, mouth, upper lip, lower lip, hair, hat, earring, necklace, neck and clothing, and different parts are marked with gray values of different magnitudes in the face semantic segmentation gray-scale map.
3. The makeup migration method based on a generative adversarial network as claimed in claim 1, wherein the encoder 1 and the encoder 2 are each composed of two convolution blocks, a convolution block comprising a convolution layer, instance normalization and a ReLU activation function; the first bottleneck layer is composed of three residual blocks, a residual block being formed by serially connecting a convolution layer, an instance normalization layer, a ReLU activation function, a convolution layer and an instance normalization layer; the makeup transfer module introduces the StyleGAN coding mechanism and consists of a mapping module and a multilayer perceptron, wherein the mapping module consists of a pooling layer and a 1×1 convolution layer and maps the input features into a one-dimensional code z, and the multilayer perceptron maps the one-dimensional code z into a disentangled coding space to obtain a disentangled code w, which is used as a parameter of the second bottleneck layer; the second bottleneck layer is composed of three residual blocks with AdaIN, a residual block with AdaIN being formed by serially connecting a convolution layer, AdaIN adaptive instance normalization, a ReLU activation function, a convolution layer and AdaIN adaptive instance normalization; the decoder is composed of two upsampling convolution blocks, an upsampling convolution block consisting of upsampling, convolution, layer normalization and a ReLU activation function.
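A compact PyTorch sketch of the building blocks named in claim 3. Channel widths, the 256-dimensional codes, and the use of GroupNorm(1, ·) in place of layer normalization are assumptions made only to keep the example self-contained; the patent specifies the layer types, not these details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Sequential):
    """Convolution + instance normalization + ReLU (one of the two blocks in each encoder)."""
    def __init__(self, c_in, c_out, stride=2):
        super().__init__(nn.Conv2d(c_in, c_out, 3, stride, 1),
                         nn.InstanceNorm2d(c_out),
                         nn.ReLU(inplace=True))

class ResBlock(nn.Module):
    """conv - IN - ReLU - conv - IN with a skip connection (first bottleneck layer)."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(c, c, 3, 1, 1), nn.InstanceNorm2d(c),
                                  nn.ReLU(inplace=True),
                                  nn.Conv2d(c, c, 3, 1, 1), nn.InstanceNorm2d(c))
    def forward(self, x):
        return x + self.body(x)

class MappingModule(nn.Module):
    """Pooling + 1x1 convolution -> one-dimensional code z; MLP -> disentangled code w."""
    def __init__(self, c, z_dim=256, w_dim=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(c, z_dim, 1)
        self.mlp = nn.Sequential(nn.Linear(z_dim, w_dim), nn.ReLU(inplace=True),
                                 nn.Linear(w_dim, w_dim))
    def forward(self, style_feat):
        z = self.proj(self.pool(style_feat)).flatten(1)   # one-dimensional code z
        return self.mlp(z)                                 # disentangled code w

class AdaINResBlock(nn.Module):
    """conv - AdaIN - ReLU - conv - AdaIN residual block; w supplies the AdaIN parameters."""
    def __init__(self, c, w_dim=256):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c, 3, 1, 1)
        self.conv2 = nn.Conv2d(c, c, 3, 1, 1)
        self.affine1 = nn.Linear(w_dim, 2 * c)
        self.affine2 = nn.Linear(w_dim, 2 * c)
    @staticmethod
    def adain(x, style):
        gamma, beta = style.chunk(2, dim=1)                # per-channel scale and shift from w
        x = F.instance_norm(x)                             # normalize per instance and channel
        return x * (1 + gamma[..., None, None]) + beta[..., None, None]
    def forward(self, x, w):
        h = torch.relu(self.adain(self.conv1(x), self.affine1(w)))
        h = self.adain(self.conv2(h), self.affine2(w))
        return x + h

class UpBlock(nn.Sequential):
    """Upsampling + convolution + layer-style normalization + ReLU (decoder block)."""
    def __init__(self, c_in, c_out):
        super().__init__(nn.Upsample(scale_factor=2, mode='nearest'),
                         nn.Conv2d(c_in, c_out, 3, 1, 1),
                         nn.GroupNorm(1, c_out),  # one group: normalizes over (C, H, W) like layer norm
                         nn.ReLU(inplace=True))
```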
4. A makeup migration method based on a generative adversarial network, characterized in that the method comprises the following steps:
step one, human face segmentation
Inputting the RGB three-channel color face reference image into a face segmentation module to obtain a face semantic segmentation gray level image;
step two, UV mapping
Inputting the face source image, the face reference image and the face semantic segmentation gray map corresponding to the face reference image into a UV mapping module respectively, and separating the position information and texture information of each image to obtain a corresponding UV position map and UV texture map;
step three, cosmetic extraction
Extracting the makeup of three parts, namely eye makeup, lip makeup and base makeup, according to the UV texture map corresponding to the face reference image obtained in step two and the UV texture map corresponding to the face semantic segmentation gray map of the face reference image;
step four, color transfer
Step four-one, establishing a color transfer branch generator
The color transfer branch generator comprises an encoder 1, an encoder 2, a first bottleneck layer, a second bottleneck layer, a makeup transfer module and a decoder;
Step four-two, respectively inputting the eye makeup, lip makeup and base makeup obtained in step three into the encoder 2 to extract the style features of each part of the makeup, and obtaining the makeup style code;
Step four-three, inputting the UV texture map corresponding to the source image obtained in step two into the encoder 1 and the first bottleneck layer to extract the face identity features;
Step four-four, fusing the makeup style code obtained in step four-two into the face identity features using the makeup transfer module, and then decoding with the decoder to obtain a UV texture map in which the reference makeup has been transferred to the source image;
step five, pattern transfer
Step five-one, inputting the UV texture map corresponding to the reference image obtained in step two into the pattern segmentation network of the pattern transfer branch to obtain a face semantic segmentation gray map corresponding to the pattern makeup;
Step five-two, performing an element-wise AND of the face semantic segmentation gray map corresponding to the pattern makeup obtained in step five-one with the UV texture map of the reference image, and extracting the pattern makeup;
Step five-three, inverting the face semantic segmentation gray map corresponding to the pattern makeup obtained in step five-one, performing an element-wise AND of the inverted map with the UV texture map after makeup transfer obtained in step four-four, and then performing an element-wise addition with the pattern makeup obtained in step five-two, to obtain the complete UV texture map after makeup transfer;
step six, UV inverse mapping
Mapping the UV texture map obtained in step five along the UV position map of the source image obtained in step two to restore it into a real two-dimensional image, which is the result image of the makeup transfer.
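The mask arithmetic of steps five-two and five-three amounts to cut-out, hole-punch and paste operations. A small sketch under the assumption that the pattern segmentation map is (or is thresholded into) a binary mask, so that multiplication plays the role of the element-wise AND:

```python
import numpy as np

def apply_pattern_makeup(pattern_seg: np.ndarray,        # gray map from the pattern segmentation network
                         ref_tex: np.ndarray,            # UV texture map of the reference image
                         color_transferred_tex: np.ndarray) -> np.ndarray:
    """Sketch of steps five-two and five-three: extract the pattern, clear its region
    in the color-transferred texture, then add the pattern back in."""
    mask = (pattern_seg > 0).astype(ref_tex.dtype)[..., None]   # binary pattern mask
    pattern = ref_tex * mask                                      # step five-two: element-wise AND (masking)
    hole = color_transferred_tex * (1.0 - mask)                   # step five-three: inverted mask applied
    return hole + pattern                                         # element-wise addition -> complete texture
```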
5. The makeup migration method according to claim 4, wherein in step one the face segmentation module segments the face into different parts, and the different parts are marked with gray values of different magnitudes in the face semantic segmentation gray map.
6. The makeup migration method based on a generative adversarial network as claimed in claim 4, wherein the encoder 1 and the encoder 2 are each composed of two convolution blocks, a convolution block comprising a convolution layer, instance normalization and a ReLU activation function; the first bottleneck layer is composed of three residual blocks, a residual block being formed by serially connecting a convolution layer, an instance normalization layer, a ReLU activation function, a convolution layer and an instance normalization layer; the makeup transfer module introduces the StyleGAN coding mechanism and consists of a mapping module and a multilayer perceptron, wherein the mapping module consists of a pooling layer and a 1×1 convolution layer and maps the input features into a one-dimensional code z, and the multilayer perceptron maps the one-dimensional code z into a disentangled coding space to obtain a disentangled code w, which is used as a parameter of the second bottleneck layer; the second bottleneck layer is composed of three residual blocks with AdaIN, a residual block with AdaIN being formed by serially connecting a convolution layer, AdaIN adaptive instance normalization, a ReLU activation function, a convolution layer and AdaIN adaptive instance normalization; the decoder is composed of two upsampling convolution blocks, an upsampling convolution block consisting of upsampling, convolution, layer normalization and a ReLU activation function.
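Claim 6 mirrors the generator structure of claim 3. The step the claims describe only functionally is the UV inverse mapping (step five of claim 1, step six of claim 4). A nearest-neighbour sketch, under the assumption that the UV position map stores the (x, y) image coordinate of every UV texel (the patent does not fix this representation):

```python
import numpy as np

def uv_inverse_map(uv_texture: np.ndarray, uv_position: np.ndarray,
                   out_h: int, out_w: int) -> np.ndarray:
    """Hypothetical sketch of the UV inverse mapping: re-render the edited UV texture
    into image space by following the source UV position map."""
    out = np.zeros((out_h, out_w, uv_texture.shape[2]), dtype=uv_texture.dtype)
    xs = np.clip(uv_position[..., 0].round().astype(int), 0, out_w - 1)
    ys = np.clip(uv_position[..., 1].round().astype(int), 0, out_h - 1)
    out[ys, xs] = uv_texture      # nearest-neighbour splat; a real renderer would interpolate
    return out
```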
CN202210977447.9A 2022-08-15 2022-08-15 Makeup migration method based on generation of confrontation network Active CN115345773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210977447.9A CN115345773B (en) 2022-08-15 2022-08-15 Makeup migration method based on generation of confrontation network

Publications (2)

Publication Number Publication Date
CN115345773A CN115345773A (en) 2022-11-15
CN115345773B (en) 2023-02-17

Family

ID=83952073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210977447.9A Active CN115345773B (en) 2022-08-15 2022-08-15 Makeup migration method based on generation of confrontation network

Country Status (1)

Country Link
CN (1) CN115345773B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117475481B (en) * 2023-12-27 2024-03-01 四川师范大学 Domain migration-based night infrared image animal identification method and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107622472A (en) * 2017-09-12 2018-01-23 北京小米移动软件有限公司 Face dressing moving method and device
CN111950432A (en) * 2020-08-07 2020-11-17 武汉理工大学 Makeup style migration method and system based on regional style consistency
CN112541856A (en) * 2020-12-07 2021-03-23 重庆邮电大学 Medical image style migration method combining Markov field and Graham matrix characteristics
CN113538213A (en) * 2021-06-09 2021-10-22 华南师范大学 Data processing method, system and storage medium for makeup migration
CN113724265A (en) * 2021-07-19 2021-11-30 北京旷视科技有限公司 Skin color migration method and device, storage medium and electronic equipment
KR102377222B1 (en) * 2021-08-13 2022-03-23 주식회사 에이아이네이션 Artificial intelligence virtual makeup method and device using multi-angle image recognition processing technology
CN114359035A (en) * 2021-12-27 2022-04-15 中山大学 Human body style migration method, device and medium based on generation of confrontation network
CN114742693A (en) * 2022-03-15 2022-07-12 西北大学 Dressing migration method based on adaptive example normalization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Improved algorithm for face image style transfer; Guo Meiqin et al.; Journal of Shenzhen University (Science and Engineering); 2019-05-28 (No. 03); full text *

Also Published As

Publication number Publication date
CN115345773A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
Jetchev et al. The conditional analogy gan: Swapping fashion articles on people images
Nguyen et al. Lipstick ain't enough: beyond color matching for in-the-wild makeup transfer
CN108288072A (en) A kind of facial expression synthetic method based on generation confrontation network
Wang et al. A survey on face data augmentation
Cheng et al. Parametric modeling of 3D human body shape—A survey
CN103268623B (en) A kind of Static Human Face countenance synthesis method based on frequency-domain analysis
Yi et al. Line drawings for face portraits from photos using global and local structure based GANs
Singh et al. Neural style transfer: A critical review
Liu et al. Structure-guided arbitrary style transfer for artistic image and video
Liu et al. Psgan++: Robust detail-preserving makeup transfer and removal
CN115345773B (en) Makeup migration method based on generation of confrontation network
CN114581992A (en) Human face expression synthesis method and system based on pre-training StyleGAN
Liu et al. Image neural style transfer with preserving the salient regions
KR20230085931A (en) Method and system for extracting color from face images
Yuan et al. RAMT-GAN: Realistic and accurate makeup transfer with generative adversarial network
CN115496650A (en) Makeup migration method based on generation countermeasure network
Liu et al. Translate the facial regions you like using self-adaptive region translation
CN112991484B (en) Intelligent face editing method and device, storage medium and equipment
Kwolek et al. Recognition of JSL fingerspelling using deep convolutional neural networks
Sun et al. Local facial makeup transfer via disentangled representation
Nguyen-Phuoc et al. Alteredavatar: Stylizing dynamic 3d avatars with fast style adaptation
Li et al. Hybrid Transformers with Attention-guided Spatial Embeddings for Makeup Transfer and Removal
He et al. Text-based image style transfer and synthesis
Chen et al. Cantonese porcelain image generation using user-guided generative adversarial networks
Yuan et al. MuNeRF: Robust Makeup Transfer in Neural Radiance Fields

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant