CN114742693B - Dressing migration method based on self-adaptive instance normalization - Google Patents
- Publication number
- CN114742693B (application CN202210254916.4A)
- Authority
- CN
- China
- Prior art keywords
- makeup
- network
- migration
- dressing
- make
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention discloses a makeup migration method based on adaptive instance normalization, which comprises the following steps: step 1, constructing a data set; step 2, extracting plain-face feature maps and makeup feature maps; step 3, constructing a training network model for makeup migration based on adaptive instance normalization; step 4, inputting the makeup feature map set and the plain-face feature map set into the makeup network G1; step 5, the makeup network generates image 1; step 6, inputting generated image 1 and the corresponding makeup feature map into discriminator D1, which outputs a discrimination value; step 7, inputting generated image 1 into the makeup-removal network G2 to obtain generated image 2; step 8, inputting generated image 2 and the plain-face feature map into discriminator D2, which outputs a discrimination value; step 9, returning to step 5 until either the makeup feature map set or the plain-face feature map set is empty; and step 10, stopping once the preset number of iterations is reached, yielding the trained parameters. Experiments show that the method can be effectively applied to makeup migration tasks of different styles, and its migration effect on opera makeup is particularly good.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to a makeup migration method based on adaptive instance normalization.
Background
Makeup migration is a hot frontier research direction in computer vision. According to data from the National Bureau of Statistics, both the total volume and the import and export volume of cosmetics in China have risen steadily year by year over the past 10 years, and China has become the world's second-largest beauty and cosmetics consumer market. With China's industrial upgrading and the public's demand for beauty, makeup software is increasingly sought after. Meitu (美图秀秀) reported sales of 1.197 billion yuan in 2020, Douyin's one-click makeup effects quickly became popular nationwide, and more and more live-streaming software has added facial makeup functions. The one-click makeup of entertainment software, the virtual makeup try-on of e-commerce platforms, and the booming development of immersive traditional opera all rely on the development of makeup migration algorithms. A makeup migration algorithm must not only extract and re-fuse the makeup, but also take into account the semantics of different facial parts, lighting patterns, shadows, and so on. Compared with computer-aided makeup techniques, makeup migration offers stronger autonomy and randomness: the user can obtain and apply a desired makeup from any real face, which greatly improves the user experience and enriches the diversity of available makeup. Many researchers have studied the makeup migration task and achieved certain results.
Excellent makeup migration algorithms continue to emerge, and generative adversarial networks, owing to their image generation capability, have become the mainstream makeup migration framework. Li et al. first applied generative adversarial networks to makeup migration and proposed BeautyGAN, a dual input/output migration framework. The method introduces a pixel-level histogram loss to realize the migration, which markedly improves the results. However, the algorithm is demanding with respect to the makeup itself: it performs well on clean daily makeup but is not suitable for extreme makeup. Chen et al. improved BeautyGAN by introducing Glow, an invertible convolutional generative flow, to manipulate latent vectors and complete local makeup transfer. Such local migration still has problems, however: for makeup with large style differences, both color accuracy and training efficiency suffer. Jiang et al. proposed PSGAN to address the robustness of makeup migration; the method uses face detection to deform the makeup so that it fits the target face, enhancing robustness, but its migration results for extreme makeup are still not ideal.
The development of generative adversarial networks has accelerated the improvement and iteration of makeup migration methods, and makeup migration for non-daily styles has also made some progress. Hoshen et al. introduced a dedicated makeup dataset into a GAN for facial patterns such as tattoos, completing the migration of specific facial patterns; however, the algorithm shows a smearing phenomenon when migrating such makeup. Wang et al. proposed CA-GAN for color subdivision of makeup and introduced label-based learning of colors; the algorithm controls the color and shade of extreme makeup well, but for daily makeup it does not preserve facial texture adequately. Targeting ornaments such as tattoos and nose rings in European and American extreme makeup, Nguyen et al. introduced a color-migration and ornament training network and moved the makeup migration work into UV texture space; the algorithm overcomes the robustness problems caused by shadows and poses to a certain extent, but training remains difficult and the generated images still show partial distortion.
Disclosure of Invention
Aiming at the problems in the prior art that the difficulty of network training keeps increasing as the style difference grows and that the makeup migration effect is limited by the makeup style, a makeup migration algorithm based on adaptive instance normalization is provided.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A makeup migration method based on adaptive instance normalization specifically comprises the following steps:
Step 1, constructing a data set, wherein the data set is divided into two sets, plain-face images and makeup images, and the makeup images include daily makeup, European and American heavy makeup and opera makeup; the data set is divided into a training set and a test set;
Step 2, using VGG19 to extract the plain-face feature maps of the training-set plain-face images and the makeup feature maps of the training-set makeup images, respectively, the features covering the face, eyes and lips;
Step 3, constructing a training network model for makeup migration based on adaptive instance normalization, wherein the model comprises two generators and corresponding discriminators; the two generators serve as the makeup network G1 and the makeup-removal network G2 respectively; each generator comprises, connected in sequence, a multilayer perceptron, 4 combination modules each consisting of an AdaIN module and a residual layer, 2 upsampling layers and an output layer; in each combination module the output of the AdaIN module feeds the input of the residual layer, and the 2 upsampling layers are connected in series;
Step 4, inputting the makeup feature map set and the plain-face feature map set into the makeup network G1;
Step 5, the makeup network G1 randomly selects one makeup feature map and one plain-face feature map from the makeup feature map set and the plain-face feature map set respectively to form an image pair, and performs makeup processing on the pair to obtain generated image 1; the makeup feature map set and the plain-face feature map set are updated accordingly;
Step 6, inputting generated image 1 together with the corresponding makeup feature map into discriminator D1, which outputs a discrimination value to the makeup network G1 for guiding the correction of the parameters of G1;
Step 7, inputting generated image 1 into the makeup-removal network G2 for makeup removal to obtain generated image 2;
Step 8, inputting generated image 2 together with the corresponding plain-face feature map into discriminator D2, which outputs a discrimination value to the makeup-removal network G2 for guiding the correction of the parameters of G2;
Step 9, returning to step 5 until either the makeup feature map set or the plain-face feature map set is empty, whereupon the current iteration terminates; the parameters of the makeup network G1 and the makeup-removal network G2 after this iteration are saved;
Step 10, judging whether the preset number of iterations has been reached; if so, the saved parameters of the makeup network G1 and the makeup-removal network G2 are taken as the trained parameters; otherwise, the makeup feature map set and the plain-face feature map set are reset to those obtained in step 2, and the process returns to step 4.
Further, the residual layer in the generator in step 3 is composed of three 3×3 convolution layers; the output of the 1st convolution layer is multiplied by a weight δ and added to the output of the 2nd convolution layer, and the sum serves as the input of the third convolution layer. The residual layer expression is shown in the following formula:
L = δL_1 + L_2
where L_1 denotes the output of the 1st convolution layer, L_2 denotes the output of the 2nd convolution layer, and δ denotes the weight parameter.
Further, δ=0.3.
Further, the generator parameters are shown in the following table:
Further, the network parameters of the discriminator are shown in the following table:
Further, the loss function of the training network model for makeup migration based on adaptive instance normalization is as follows:
l_total = λ_adv·(l_D + l_G) + λ_global·l_global + λ_local·l_local + λ_cyc·l_cyc + λ_makeup·l_makeup
where (l_D + l_G) is the adversarial loss of the generator and discriminator, l_global is the global perceptual loss, l_local is the local perceptual loss, l_cyc is the cycle consistency loss, l_makeup is the makeup loss, and λ_adv, λ_global, λ_local, λ_cyc, λ_makeup are their respective weight coefficients;
Adversarial loss:
l_D = -E_{x~X}[log D_X(x)] - E_{y~Y}[log D_Y(y)] - E_{x~X,y~Y}[log((1 - D_X(G(y,x)))·(1 - D_Y(G(x,y))))]
l_G = -E_{x~X,y~Y}[log(D_X(G(y,x))·D_Y(G(x,y)))];
Global perceptual loss:
l_global = ||F_l(G(y,x)) - F_l(y)||_2 + ||F_l(G(x,y)) - F_l(x)||_2;
where F_l(·) denotes the features of the l-th layer of the VGG model and ||·||_2 denotes the L2 norm;
Local perceptual loss:
where M denotes the mask of a specific part of the face image;
Cycle consistency loss:
l_cyc = ||G(G(y,x), y) - y||_1 + ||G(G(x,y), x) - x||_1;
Makeup loss:
l_makeup = ||G(x,y) - HM(x,y)||_2 + ||G(y,x) - HM(y,x)||_2
where HM(·) denotes histogram matching; HM(x,y) has the makeup style of y while preserving the identity of x.
Further, λ_adv = 0.8, λ_global = 0.5, λ_local = 0.8, λ_cyc = 0.5, λ_makeup = 0.8.
Compared with the prior art, the invention has the following technical effects:
1. In order to accomplish makeup migration across different styles, the training set of the invention extends the traditional makeup data sets: it contains not only daily light-makeup images and European and American heavy-makeup images, but in particular also adds opera makeup images, laying a foundation for improving the algorithm's migration effect on extreme makeup.
2. The invention uses a VGG network to extract features from the plain-face images and the makeup images.
3. Experiments prove that the method has strong universality: it achieves a good migration effect on common makeup and is particularly effective for opera makeup migration; it alleviates the smearing and the incomplete preservation of facial texture that occur when migrating heavy makeup, and reduces partial makeup distortion.
4. AdaIN and residual blocks are introduced to optimize the network structure; specifically, AdaIN is used in the residual layers of the generator's decoder and actively learns the information of different makeup styles during training. The network autonomously and flexibly selects a suitable normalization mode, so that migration tasks for makeup of different styles are realized without modifying the overall structure or the related parameters of the model.
5. The skip parameter transmission between layers in the generator's residual network better preserves the pixel-level feature information of the source image, avoids the gradient-vanishing problem to a certain extent, and reduces the training difficulty of the model.
Drawings
FIG. 1 is a schematic diagram of the makeup migration method based on adaptive instance normalization of the present invention;
FIG. 2 shows the structure of the makeup migration network model based on adaptive instance normalization;
FIG. 3 shows the residual layer structure;
FIG. 4 shows a comparison of daily (light) makeup migration experiments on the MT dataset;
FIG. 5 shows a comparison of heavy makeup migration experiments on the MT dataset;
FIG. 6 shows an experimental comparison on the OM (Opera-Makeup) dataset.
In FIG. 4, FIG. 5 and FIG. 6, the first and second columns are the real face images and makeup images from the dataset, and the third, fourth and fifth columns are the makeup migration images generated by BeautyGAN, PSGAN and the algorithm proposed herein, respectively.
The invention is further explained below with reference to the drawings and the specific embodiments.
Detailed Description
1. The design idea of the invention
The invention must not only migrate the target makeup onto a face image but also preserve, as far as possible, the facial characteristics of the original face image. Because paired plain-face and makeup images are difficult to collect, the invention studies a makeup migration method based on adaptive instance normalization to realize the migration. The method specifically comprises the following steps:
Step 1, constructing a data set, which is divided into two sets, plain-face images and makeup images. The makeup images include daily makeup, European and American heavy makeup and opera makeup. The data set is divided into a training set and a test set.
Step 2, using VGG19 to extract the plain-face feature maps of the training-set plain-face images and the makeup feature maps of the training-set makeup images, respectively, the features covering the face, eyes and lips;
Step 3, constructing a training network model for makeup migration based on adaptive instance normalization, wherein the model comprises two generators and corresponding discriminators; the two generators serve as the makeup network G1 and the makeup-removal network G2 respectively;
Step 4, inputting the makeup feature map set and the plain-face feature map set into the makeup network G1;
Step 5, the makeup network G1 randomly selects one makeup feature map and one plain-face feature map from the makeup feature map set and the plain-face feature map set respectively to form an image pair, and performs makeup processing on the pair to obtain generated image 1; the makeup feature map set and the plain-face feature map set are updated accordingly;
Step 6, inputting generated image 1 together with the corresponding makeup feature map into discriminator D1; D1 judges whether the style of generated image 1 is consistent with the makeup images in the training set and outputs the discrimination value to the makeup network G1 for guiding the correction of its parameters;
Step 7, inputting generated image 1 into the makeup-removal network G2 for makeup removal to obtain generated image 2;
Step 8, inputting generated image 2 together with the corresponding plain-face feature map into discriminator D2, which judges whether the content features are preserved in place and outputs the discrimination value to the makeup-removal network G2 for guiding the correction of its parameters.
Step 9, returning to step 5 until either the makeup feature map set or the plain-face feature map set is empty, whereupon the current iteration terminates; the parameters of the makeup network G1 and the makeup-removal network G2 after this iteration are saved;
Step 10, judging whether the preset number of iterations (150 in the test) has been reached; if so, the saved parameters of the makeup network G1 and the makeup-removal network G2 are taken as the trained parameters; otherwise, the makeup feature map set and the plain-face feature map set are reset to those obtained in step 2, and the process returns to step 4.
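The ten steps above amount to a two-level loop: an outer loop over the preset number of iterations and an inner loop that exhausts the two feature map sets. The following Python sketch only illustrates that control flow under assumed interfaces (G1 and G2 are callables taking a content input and a reference, and step_fn stands in for the parameter update driven by D1 and D2); it is not the patented implementation.

```python
import random

def train(G1, G2, D1, D2, makeup_feats, plain_feats, step_fn, num_iters=150):
    """Control-flow sketch of steps 4-10; all signatures are illustrative assumptions."""
    for _ in range(num_iters):                       # step 10: preset number of iterations (150 in the test)
        makeup_pool = list(makeup_feats)             # step 4: feed in both feature map sets
        plain_pool = list(plain_feats)
        while makeup_pool and plain_pool:            # step 9: stop once either set is empty
            m = makeup_pool.pop(random.randrange(len(makeup_pool)))   # step 5: random pairing
            p = plain_pool.pop(random.randrange(len(plain_pool)))
            gen1 = G1(p, m)                          # step 5: generated image 1 (makeup applied)
            gen2 = G2(gen1, p)                       # step 7: generated image 2 (makeup removed)
            step_fn(p, m, gen1, gen2, D1, D2)        # steps 6 and 8: discriminator feedback drives one update
        # end of the inner loop = one iteration; the G1/G2 parameters are kept at this point (step 9)
```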
2. generators (make-up network G) 1 Make-up removal network G 2 )
As shown in FIG. 2, the generator comprises, connected in sequence, a multilayer perceptron, 4 combination modules each consisting of an AdaIN module and a residual layer, 2 upsampling layers and an output layer; in each combination module the output of the AdaIN module feeds the input of the residual layer, and the 2 upsampling layers are connected in series.
The makeup network G1 (the 1st generator) receives the makeup feature map and the plain-face feature map and injects the makeup features into the plain-face feature map to generate a virtual makeup image. In the makeup-removal process, the makeup-removal network G2 (the 2nd generator) first feeds the makeup vector into the multilayer perceptron, which maps the input makeup feature map onto an output vector of specified length; this output of the multilayer perceptron serves as the input of the AdaIN modules. The residual-layer input of the decoder passes through the AdaIN layer for adaptive instance normalization, and the decoder and the multilayer perceptron are trained with the corresponding discrimination values supplied by the discriminators, thereby realizing makeup migration.
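A minimal sketch of the multilayer perceptron's role described above, in PyTorch: pooling a VGG19 makeup feature map into a vector and mapping it to a fixed-length style code for the AdaIN modules. All dimensions and layer counts here are assumptions for illustration; the actual layer parameters are those of Table 1.

```python
import torch.nn as nn
import torch.nn.functional as F

class StyleMLP(nn.Module):
    """Maps the extracted makeup feature map to a fixed-length style vector (dimensions assumed)."""
    def __init__(self, in_channels=512, hidden_dim=256, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_channels, hidden_dim), nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, style_dim))

    def forward(self, makeup_feat):
        # makeup_feat: (B, C, H, W) VGG19 feature map; pool it to a C-dimensional vector first
        pooled = F.adaptive_avg_pool2d(makeup_feat, 1).flatten(start_dim=1)
        return self.net(pooled)   # style code consumed by the AdaIN modules
```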
The AdaIN module used here is an adaptive layer-instance normalization, i.e. a combination of layer normalization and instance normalization. In traditional unsupervised style transfer, batch normalization is generally used to average over the data; compared with an ordinary style conversion task, however, makeup migration places higher demands on the handling of facial detail. Face images are therefore usually processed with instance normalization, and mainstream makeup migration methods introduce instance normalization to process each group of faces independently, which strengthens the mutual independence between different makeup images. Yet this approach still produces distortion and smearing when the makeup style differs greatly from the source. Layer normalization, in contrast, controls the global information of the image more clearly; AdaIN combines the advantages of both to guide the learning of the migration network. During normalization, AdaIN adjusts its parameter values according to the style, so as to select the normalization better suited to the current makeup image.
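The following sketch shows one plausible form of such an adaptive layer-instance normalization (in the same spirit as the AdaLIN of U-GAT-IT): instance-normalized and layer-normalized activations are blended with a learnable ratio ρ and then modulated by the style-derived γ and β. The exact blending rule of the patent is not spelled out in the text, so this formulation is an assumption.

```python
import torch

def adaptive_layer_instance_norm(x, gamma, beta, rho, eps=1e-5):
    """Blend instance and layer normalization, then scale/shift with style parameters.
    x: (B, C, H, W); gamma, beta: (B, C) from the multilayer perceptron; rho in [0, 1]."""
    # instance statistics: per sample and per channel, over spatial positions only
    in_mean = x.mean(dim=(2, 3), keepdim=True)
    in_std = x.std(dim=(2, 3), keepdim=True) + eps
    # layer statistics: per sample, over all channels and spatial positions
    ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
    ln_std = x.std(dim=(1, 2, 3), keepdim=True) + eps
    x_in = (x - in_mean) / in_std            # instance normalization: keeps faces independent
    x_ln = (x - ln_mean) / ln_std            # layer normalization: keeps global image information
    x_hat = rho * x_in + (1.0 - rho) * x_ln  # adaptive blend of the two normalizations (assumed form)
    return gamma[..., None, None] * x_hat + beta[..., None, None]
```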
A residual layer is designed in the generator so that the generated image better preserves the facial information of the original plain-face image. As shown in FIG. 3, each residual layer consists of three 3×3 convolution layers; the output of the 1st convolution layer is multiplied by a weight δ and added to the output of the 2nd convolution layer, and the sum serves as the input of the third convolution layer. The residual layer expression is shown in formula (1):
L = δL_1 + L_2    (1)
where L_1 denotes the output of the 1st convolution layer, L_2 denotes the output of the 2nd convolution layer, and δ denotes the weight parameter (δ = 0.3 in the embodiment). The skip parameter transmission between layers in the residual network better preserves the pixel-level feature information of the source image, avoids the gradient-vanishing problem to a certain extent, and reduces the training difficulty of the model.
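A PyTorch-style sketch of this residual layer follows. The three 3×3 convolutions and the weighted skip L = δL_1 + L_2 come from formula (1) and FIG. 3; the activation functions and the equal channel counts are assumptions.

```python
import torch.nn as nn

class ResidualLayer(nn.Module):
    """Three 3x3 convolutions with a weighted skip, L = delta*L1 + L2, feeding the third convolution."""
    def __init__(self, channels, delta=0.3):   # delta = 0.3 as in the embodiment
        super().__init__()
        self.delta = delta
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)        # activation choice is an assumption

    def forward(self, x):
        l1 = self.act(self.conv1(x))            # L1: output of the 1st convolution layer
        l2 = self.act(self.conv2(l1))           # L2: output of the 2nd convolution layer
        l = self.delta * l1 + l2                # weighted skip connection, formula (1)
        return self.conv3(l)                    # the sum is the input of the third convolution layer
```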
Table 1 generator network detail parameters
3. Discriminators (discriminator D1 and discriminator D2)
The main purpose of the discriminators is to distinguish original images from generated (fake) images and, through the adversarial game with the generators, to push the generated output toward clearer images. The method adopts PatchGAN as the discriminator framework. The network detail parameters are shown in Table 2.
Table 2 Discriminator network detail parameters
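For reference, a generic PatchGAN-style discriminator sketch is given below: a stack of strided convolutions producing one real/fake score per image patch. The channel counts and depth are illustrative assumptions (the actual parameters are those of Table 2), and the conditioning on the reference feature map described in steps 6 and 8 is omitted here.

```python
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator sketch: outputs a map of per-patch real/fake probabilities."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, base * 8, 4, stride=1, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1))   # one score per receptive-field patch

    def forward(self, x):
        return self.net(x).sigmoid()   # probabilities, matching the log-loss terms in formulas (2)-(3)
```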
4. Loss function
A. Adversarial loss
The main purpose of the adversarial loss is to make the migrated image resemble the image style of the target domain as closely as possible, so that the discriminator has difficulty distinguishing real from fake. The adversarial losses of the generator and the discriminator in the method of the invention are:
l_D = -E_{x~X}[log D_X(x)] - E_{y~Y}[log D_Y(y)] - E_{x~X,y~Y}[log((1 - D_X(G(y,x)))·(1 - D_Y(G(x,y))))]    (2)
l_G = -E_{x~X,y~Y}[log(D_X(G(y,x))·D_Y(G(x,y)))]    (3)
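A hedged sketch of formulas (2) and (3) in PyTorch follows. It assumes D_X and D_Y output probabilities in (0, 1) and that G(a, b) renders face a with the makeup of b, so that G(y, x) belongs to domain X and G(x, y) to domain Y; the small eps and the detach of the generated images inside the discriminator loss are standard-practice additions, not taken from the patent.

```python
import torch

def adversarial_losses(D_X, D_Y, G, x, y, eps=1e-8):
    """Formulas (2) and (3): discriminator loss l_D and generator loss l_G (assumed interfaces)."""
    fake_x = G(y, x)   # y's face with x's makeup -> domain X
    fake_y = G(x, y)   # x's face with y's makeup -> domain Y
    l_D = -(torch.log(D_X(x) + eps).mean()
            + torch.log(D_Y(y) + eps).mean()
            + torch.log((1 - D_X(fake_x.detach())) * (1 - D_Y(fake_y.detach())) + eps).mean())
    l_G = -torch.log(D_X(fake_x) * D_Y(fake_y) + eps).mean()
    return l_D, l_G
```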
B. Global perceptual loss
Since the images come from two different domains, pixel-level constraints are not available. To ensure that facial identity is preserved between the input source image and the output transferred image, a perceptual loss is used to maintain global facial identity. The global perceptual loss is defined as follows:
l_global = ||F_l(G(y,x)) - F_l(y)||_2 + ||F_l(G(x,y)) - F_l(x)||_2    (4)
where F_l(·) denotes the features of the l-th layer of the VGG model and ||·||_2 denotes the L2 norm.
C. Local perceptual loss
In addition to the global perceptual loss, a local perceptual loss is introduced to further keep the non-transferred parts, such as teeth and eyebrows, unchanged. The local perceptual loss is shown in formula (5):
where M denotes the mask of a specific part of the face image.
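The sketch below covers formula (4) and an assumed masked form of formula (5), whose exact expression is not reproduced in the text above; F_l stands for the l-th layer VGG feature extractor, and the masks select the non-transferred facial parts.

```python
import torch

def perceptual_losses(F_l, G, x, y, mask_x, mask_y):
    """Global perceptual loss per formula (4); the masked local term is an assumed form of formula (5)."""
    g_yx = G(y, x)   # y's face carrying x's makeup: compared against y
    g_xy = G(x, y)   # x's face carrying y's makeup: compared against x
    l_global = (torch.norm(F_l(g_yx) - F_l(y), p=2)
                + torch.norm(F_l(g_xy) - F_l(x), p=2))
    # assumed local form: restrict the comparison to the masked, non-transferred regions
    l_local = (torch.norm(F_l(g_yx * mask_y) - F_l(y * mask_y), p=2)
               + torch.norm(F_l(g_xy * mask_x) - F_l(x * mask_x), p=2))
    return l_global, l_local
```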
D. Cycle consistency loss
For unsupervised learning from unpaired images, the invention uses the following cycle consistency loss:
l_cyc = ||G(G(y,x), y) - y||_1 + ||G(G(x,y), x) - x||_1    (6)
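A direct sketch of formula (6), again assuming the two-argument generator interface used above:

```python
import torch

def cycle_consistency_loss(G, x, y):
    """Formula (6): applying the other domain's makeup and then restoring the original should
    reproduce the input image (L1 norm)."""
    rec_y = G(G(y, x), y)   # y -> wears x's makeup -> restored with y's own makeup
    rec_x = G(G(x, y), x)
    return torch.norm(rec_y - y, p=1) + torch.norm(rec_x - x, p=1)
```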
E. Makeup loss
The makeup loss was proposed by Li et al.; it uses histogram matching (HM) to provide a transferred image as a pseudo ground truth. It consists of local histogram matching over 3 different facial regions: skin, lips and eyes. The makeup loss is defined as:
l_makeup = ||G(x,y) - HM(x,y)||_2 + ||G(y,x) - HM(y,x)||_2    (7)
where HM(·) denotes histogram matching; HM(x,y) has the makeup style of y while preserving the identity of x.
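A sketch of formula (7) under the assumption that a histogram_match(a, b) helper exists which recolours a toward b's makeup histograms region by region (skin, lips, eyes); treating its output as a fixed pseudo ground truth (the detach) is a standard-practice addition.

```python
import torch

def makeup_loss(G, x, y, histogram_match):
    """Formula (7): L2 distance to histogram-matched pseudo ground truths (helper name assumed)."""
    hm_xy = histogram_match(x, y).detach()   # x's identity with y's makeup colours
    hm_yx = histogram_match(y, x).detach()
    return (torch.norm(G(x, y) - hm_xy, p=2)
            + torch.norm(G(y, x) - hm_yx, p=2))
```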
F. Overall loss
The overall loss combines the adversarial loss, the global perceptual loss, the local perceptual loss, the cycle consistency loss and the makeup loss, as shown in formula (8):
l_total = λ_adv·(l_D + l_G) + λ_global·l_global + λ_local·l_local + λ_cyc·l_cyc + λ_makeup·l_makeup    (8)
where λ_adv, λ_global, λ_local, λ_cyc, λ_makeup are the weight coefficients of the different loss terms. In the embodiment the weight coefficients are λ_adv = 0.8, λ_global = 0.5, λ_local = 0.8, λ_cyc = 0.5 and λ_makeup = 0.8.
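Formula (8) with the embodiment's weights reads as a simple weighted sum; the sketch below just spells that out, with the weights as default arguments.

```python
def total_loss(l_D, l_G, l_global, l_local, l_cyc, l_makeup,
               lam_adv=0.8, lam_global=0.5, lam_local=0.8, lam_cyc=0.5, lam_makeup=0.8):
    """Formula (8): weighted combination of the individual loss terms."""
    return (lam_adv * (l_D + l_G)
            + lam_global * l_global
            + lam_local * l_local
            + lam_cyc * l_cyc
            + lam_makeup * l_makeup)
```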
In order to verify the feasibility and effectiveness of the method of the invention, the following experiments were carried out.
1. Experimental environment
The experimental environment is Windows 10 (version 20H2) with an NVIDIA RTX 3080 Ti graphics card (12 GB memory); all images are uniformly resized to 256×256. In each iteration the network updates the generator parameters and the discriminator parameters once each using the loss function. The loss function is optimized with the Adam optimizer, with the parameters set to α = 0.001, β_1 = 0.5 and β_2 = 0.999.
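A sketch of the corresponding optimizer setup in PyTorch follows, using the reported Adam settings; grouping both generators into one optimizer and both discriminators into another is an assumption of this sketch, made so that generator and discriminator parameters can each be updated once per iteration as described.

```python
import torch

def build_optimizers(G1, G2, D1, D2):
    """Adam with lr = 0.001, beta1 = 0.5, beta2 = 0.999, as reported in the experiments."""
    opt_G = torch.optim.Adam(list(G1.parameters()) + list(G2.parameters()),
                             lr=1e-3, betas=(0.5, 0.999))
    opt_D = torch.optim.Adam(list(D1.parameters()) + list(D2.parameters()),
                             lr=1e-3, betas=(0.5, 0.999))
    return opt_G, opt_D
```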
2. Data set
Most mainstream migration algorithms target modern makeup, while little research addresses the migration of classical opera makeup. Opera makeup differs from daily makeup and is mainly divided into the traditional role types Sheng, Dan, Jing and Chou; compared with daily makeup, its colors are finer and its variations richer. Because existing opera makeup data sets are scarce, the invention creates a brand-new makeup data set, the OM (Opera-Makeup) data set. The OM data set contains 876 opera makeup images and 832 plain-face images; the makeup images cover Peking opera, Qinqiang opera and other opera genres. The plain-face images come mainly from existing makeup data sets, while the makeup images come from search websites and opera video clips.
Considering daily makeup, extreme makeup and the multi-style migration of opera makeup, the data set is formed by mixing the classical MT data set with the OM data set. The data set is divided into two spatial-domain styles: a makeup image space and a plain-face image space. Together the two spaces provide 4000 training samples at 256×256 resolution and 400 test samples of the same size. The training set contains White and East Asian subjects and includes different makeup styles such as daily makeup, ancient-style makeup, smoky-eye makeup, Japanese/Korean-style makeup and opera makeup.
3. Comparative experiments and analysis
For comparison, the invention selects the classical BeautyGAN network and the PSGAN network, a makeup algorithm aimed at robustness. In addition, to verify the effectiveness of the proposed algorithm, light makeup, heavy makeup and opera makeup are each selected for migration tests; the experiments show that the makeup migration algorithm based on adaptive instance normalization can complete the migration task even when the makeup styles differ greatly.
Daily makeup is first selected to test the three algorithms, and the experimental results are shown in FIG. 4. BeautyGAN shows inconsistent brightness in makeup transfer, smearing appears when transferring makeup between different people, and in some cases the transfer effect is weak, for example the missing lipstick in the third group of images. Comparing the lipstick and the eye regions, PSGAN controls color more strongly than BeautyGAN and shows no partial migration failure, but it is slightly affected by shadow tones; note in the second and third groups of test results that the differing facial brightness between the makeup image and the plain-face image produces partial halos around the eyes of the generated images. Thanks to the autonomous learning of AdaIN and the introduction of Gaussian noise, the migration results of the method of the invention are smoother and the lighting is better controlled.
Next, several Japanese/Korean-style makeups and European and American makeups are selected for testing, and the experimental results are shown in FIG. 5. BeautyGAN performs better on light makeup images than on heavy makeup, but part of the makeup is still not transferred (second group of tests) and facial artifacts appear (third group of tests). Note that the third group of experiments uses extreme makeup: the method of the invention migrates the facial base color better, but compared with PSGAN the lipstick color is slightly smeared and artifacts also appear around the eyes.
Finally, frontal opera makeup is selected as the test item, as shown in FIG. 6. BeautyGAN cannot transfer extreme opera makeup: only the lips of the face are transferred, while the eye shadow, blush and base color fail to transfer. In transferring opera makeup, PSGAN controls the shapes of the makeup at different facial parts well, without smearing or distortion, but there are some errors in transferring the makeup colors (the eyes in the first group). Owing to its autonomous learning, the proposed algorithm successfully migrates the eye shadow, skin color, lipstick and other parts of the opera makeup; its migration of the opera style is smoother and more natural, and its grasp of opera makeup colors is superior to the other two algorithms.
Claims (7)
1. A makeup migration method based on adaptive instance normalization, characterized by comprising the following steps:
Step 1, constructing a data set, wherein the data set is divided into two sets, plain-face images and makeup images, and the makeup images include daily makeup, European and American heavy makeup and opera makeup; the data set is divided into a training set and a test set;
Step 2, using VGG19 to extract the plain-face feature maps of the training-set plain-face images and the makeup feature maps of the training-set makeup images, respectively, the features covering the face, eyes and lips;
Step 3, constructing a training network model for makeup migration based on adaptive instance normalization, wherein the model comprises two generators and corresponding discriminators; the two generators serve as the makeup network G1 and the makeup-removal network G2 respectively; each generator comprises, connected in sequence, a multilayer perceptron, 4 combination modules each consisting of an AdaIN module and a residual layer, 2 upsampling layers and an output layer; in each combination module the output of the AdaIN module feeds the input of the residual layer, and the 2 upsampling layers are connected in series;
Step 4, inputting the makeup feature map set and the plain-face feature map set into the makeup network G1;
Step 5, the makeup network G1 randomly selecting one makeup feature map and one plain-face feature map from the makeup feature map set and the plain-face feature map set respectively to form an image pair, and performing makeup processing on the pair to obtain generated image 1; updating the makeup feature map set and the plain-face feature map set accordingly;
Step 6, inputting generated image 1 together with the corresponding makeup feature map into discriminator D1, which outputs a discrimination value to the makeup network G1 for guiding the correction of the parameters of G1;
Step 7, inputting generated image 1 into the makeup-removal network G2 for makeup removal to obtain generated image 2;
Step 8, inputting generated image 2 together with the corresponding plain-face feature map into discriminator D2, which outputs a discrimination value to the makeup-removal network G2 for guiding the correction of the parameters of G2;
Step 9, returning to step 5 until either the makeup feature map set or the plain-face feature map set is empty, whereupon the current iteration terminates; saving the parameters of the makeup network G1 and the makeup-removal network G2 after this iteration;
Step 10, judging whether the preset number of iterations has been reached; if so, taking the saved parameters of the makeup network G1 and the makeup-removal network G2 as the trained parameters; otherwise, resetting the makeup feature map set and the plain-face feature map set to those obtained in step 2, and returning to step 4.
2. The makeup migration method based on adaptive instance normalization according to claim 1, wherein the residual layer in the generator in step 3 is composed of three 3×3 convolution layers; the output of the 1st convolution layer is multiplied by a weight δ and added to the output of the 2nd convolution layer, and the sum serves as the input of the third convolution layer; the residual layer expression is shown in the following formula:
L = δL_1 + L_2
where L_1 denotes the output of the 1st convolution layer, L_2 denotes the output of the 2nd convolution layer, and δ denotes the weight parameter.
3. The makeup migration method based on adaptive instance normalization according to claim 2, wherein δ = 0.3.
4. The makeup migration method based on adaptive instance normalization according to any one of claims 1 to 3, wherein the generator parameters are as shown in the following table:
5. The makeup migration method based on adaptive instance normalization according to claim 1, wherein the network parameters of the discriminator are as shown in the following table:
6. The makeup migration method based on adaptive instance normalization according to claim 1, wherein the loss function of the training network model for makeup migration based on adaptive instance normalization is as follows:
l_total = λ_adv·(l_D + l_G) + λ_global·l_global + λ_local·l_local + λ_cyc·l_cyc + λ_makeup·l_makeup
where (l_D + l_G) is the adversarial loss of the generator and discriminator, l_global is the global perceptual loss, l_local is the local perceptual loss, l_cyc is the cycle consistency loss, l_makeup is the makeup loss, and λ_adv, λ_global, λ_local, λ_cyc, λ_makeup are their respective weight coefficients;
Adversarial loss:
l_D = -E_{x~X}[log D_X(x)] - E_{y~Y}[log D_Y(y)] - E_{x~X,y~Y}[log((1 - D_X(G(y,x)))·(1 - D_Y(G(x,y))))]
l_G = -E_{x~X,y~Y}[log(D_X(G(y,x))·D_Y(G(x,y)))];
Global perceptual loss:
l_global = ||F_l(G(y,x)) - F_l(y)||_2 + ||F_l(G(x,y)) - F_l(x)||_2;
where F_l(·) denotes the features of the l-th layer of the VGG model and ||·||_2 denotes the L2 norm;
Local perceptual loss:
where M denotes the mask of a specific part of the face image;
Cycle consistency loss:
l_cyc = ||G(G(y,x), y) - y||_1 + ||G(G(x,y), x) - x||_1;
Makeup loss:
l_makeup = ||G(x,y) - HM(x,y)||_2 + ||G(y,x) - HM(y,x)||_2
where HM(·) denotes histogram matching; HM(x,y) has the makeup style of y while preserving the identity of x.
7. The makeup migration method based on adaptive instance normalization according to claim 6, wherein λ_adv = 0.8, λ_global = 0.5, λ_local = 0.8, λ_cyc = 0.5, λ_makeup = 0.8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210254916.4A CN114742693B (en) | 2022-03-15 | 2022-03-15 | Dressing migration method based on self-adaptive instance normalization |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210254916.4A CN114742693B (en) | 2022-03-15 | 2022-03-15 | Dressing migration method based on self-adaptive instance normalization |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114742693A (en) | 2022-07-12
CN114742693B (en) | 2024-02-27
Family
ID=82276557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210254916.4A Active CN114742693B (en) | 2022-03-15 | 2022-03-15 | Dressing migration method based on self-adaptive instance normalization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114742693B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115345773B (en) * | 2022-08-15 | 2023-02-17 | 哈尔滨工业大学(深圳) | Makeup migration method based on generation of confrontation network |
CN118469793B (en) * | 2024-07-11 | 2024-10-01 | 齐鲁工业大学(山东省科学院) | Image steganography method based on style migration |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538214A (en) * | 2021-06-09 | 2021-10-22 | 华南师范大学 | Method and system for controlling makeup migration and storage medium |
CN113947520A (en) * | 2021-10-14 | 2022-01-18 | 湖南大学 | Method for realizing face makeup conversion based on generation of confrontation network |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111739077B (en) * | 2020-06-15 | 2022-11-18 | 大连理工大学 | Monocular underwater image depth estimation and color correction method based on depth neural network |
- 2022-03-15 CN CN202210254916.4A patent/CN114742693B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113538214A (en) * | 2021-06-09 | 2021-10-22 | 华南师范大学 | Method and system for controlling makeup migration and storage medium |
CN113947520A (en) * | 2021-10-14 | 2022-01-18 | 湖南大学 | Method for realizing face makeup conversion based on generation of confrontation network |
Non-Patent Citations (1)
Title |
---|
Image color transfer based on densely connected generative adversarial networks; Wang Xiaoyu; Zhu Yifeng; Xi Jinyang; Wang Yao; Duan Jin; Chinese Journal of Liquid Crystals and Displays (液晶与显示); 2020-03-15 (Issue 03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN114742693A (en) | 2022-07-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |