CN110458750A - An unsupervised image style transfer method based on dual learning - Google Patents

An unsupervised image style transfer method based on dual learning

Info

Publication number
CN110458750A
CN110458750A (application CN201910552097.XA)
Authority
CN
China
Prior art keywords
style
image
network
loss
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910552097.XA
Other languages
Chinese (zh)
Other versions
CN110458750B (en)
Inventor
宋丹丹
李志凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Beijing Institute of Technology BIT
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Publication of CN110458750A publication Critical patent/CN110458750A/en
Application granted granted Critical
Publication of CN110458750B publication Critical patent/CN110458750B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06T3/04

Abstract

The present invention relates to an unsupervised image style transfer method based on dual learning, belonging to the field of computer vision. The method first preprocesses the training data and then designs the network structures of the generator and the discriminator. Next, loss functions are designed, and the generator and discriminator are trained with the training data and loss functions to obtain an unsupervised image style transfer network S_T: an aesthetic rating model is introduced to maximize the aesthetic-quality score of generated images; low-level pixel features and high-level semantic features of the image are used simultaneously as the dual consistency constraint for unsupervised training, with the weights of the two kinds of features adjusted dynamically; and a style-balancing technique adaptively adjusts the convergence speed of the model in the different style-transfer directions. Finally, S_T is applied to perform style transfer on an input image. Compared with existing methods, the present invention generates higher-quality target images and has good generality, while making the training process of the model more stable and the selection and design of the network structure more flexible.

Description

An unsupervised image style transfer method based on dual learning
Technical field
The present invention relates to an unsupervised image style transfer method based on dual learning, and in particular to a method that trains a generative adversarial network with multiple loss functions to perform unsupervised image style transfer, belonging to the technical field of computer vision.
Background technique
As the era of artificial intelligence deepens, image applications have sprung up in great numbers; a representative example is the filter function in various image-beautification apps, and the key technology behind filters is image style transfer.
Image style transfer refers to converting an original image into an image of another style while keeping the subject content of the image unchanged, for example converting between seasonal landscapes or between different painting styles. Neural-network-based unsupervised image style transfer means that the model is a neural network but training uses unlabeled data; it is usually implemented with a generative adversarial network. Unsupervised learning mainly addresses the lack of large numbers of annotated samples.
Some researchers have made preliminary attempts at unsupervised image style transfer, generally in the form of generative adversarial networks. However, style-transfer images obtained by methods based solely on adversarial networks often suffer from drawbacks such as heavy noise and local distortion.
Summary of the invention
The purpose of the present invention is to overcome the deficiencies of the prior art and propose an unsupervised image style transfer method based on dual learning, which can obtain clearer and more realistic style-transfer images.
The purpose of the present invention is achieved through the following technical solutions.
An unsupervised image style transfer method based on dual learning, comprising:
Step 1: preprocess the training data;
Prepare a certain number of images of two styles as training data; uniformly scale all images in the training dataset to a fixed size of m × n, where m and n are natural numbers;
Preferably, m=n=256.
Step 2: design the network structure model;
The network structure model includes five networks altogether: style transfer networks G_A and G_B, discriminator networks D_A and D_B, and an aesthetic scoring network N_s.
Here, G_A and G_B have the same network structure and are respectively used for image style transfer in the two directions; D_A and D_B have the same network structure and respectively judge whether an image of the corresponding style is real; N_s is a pretrained aesthetic rating model used as a plug-in of the whole network and does not itself participate in updates; the entire model consists of end-to-end trained deep convolutional neural networks.
For an A-style original image a_0, a B-style generated image b_1 is first produced by G_B, and an A-style reconstructed image a_2 is then produced by G_A; for a B-style original image b_0, an A-style generated image a_1 is first produced by G_A, and a B-style reconstructed image b_2 is then produced by G_B.
Step 3: design the loss functions for training the network;
Multiple loss functions are combined. The loss function of the network includes four parts: the adversarial loss L_adv, the aesthetic loss L_aes, the dual consistency loss L_dual, and the style balance loss L_style. The overall loss function Loss is:
Loss = L_adv + λ_1·L_aes + λ_2·L_dual + λ_3·L_style
where λ_1, λ_2, λ_3 respectively denote the weights of the aesthetic loss L_aes, the dual consistency loss L_dual, and the style balance loss L_style.
The adversarial loss L_adv uses the least-squares loss. For D_A and G_A it is respectively expressed as:
L_{D_A} = E_{a_0}[(D_A(a_0) − 1)^2] + E_{b_0}[(D_A(G_A(b_0)))^2]
L_{G_A} = E_{b_0}[(D_A(G_A(b_0)) − 1)^2]
For D_B and G_B it is respectively expressed as:
L_{D_B} = E_{b_0}[(D_B(b_0) − 1)^2] + E_{a_0}[(D_B(G_B(a_0)))^2]
L_{G_B} = E_{a_0}[(D_B(G_B(a_0)) − 1)^2]
where D_A(·) denotes the discrimination result of discriminator network D_A on an image, and D_B(·) denotes that of discriminator network D_B; G_A(·) denotes the result of converting an image through style transfer network G_A, and G_B(·) denotes the result of converting an image through style transfer network G_B; E_{a_0}[·] denotes the mathematical expectation over a_0, and E_{b_0}[·] denotes the mathematical expectation over b_0.
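As a sketch of the least-squares objectives above (assuming, as in the standard LSGAN formulation, that the discriminator outputs raw real-valued score maps), the two losses can be written as:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # Discriminator: push outputs on real images toward 1, on fakes toward 0.
    return np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # Generator: push the discriminator's outputs on generated images toward 1.
    return np.mean((d_fake - 1.0) ** 2)
```

Here, for the A direction, `d_real` would be D_A(a_0) and `d_fake` would be D_A(G_A(b_0)); the 32 × 32 patch-wise discriminator outputs described later are averaged naturally by `np.mean`.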
The aesthetic loss L_aes is computed by the aesthetic model and is expressed as:
L_aes = − Σ_{i=1}^{K} i · p_i
where K is a natural number, N_s outputs the probabilities of scores 1 to K, and p_i denotes the probability of score i; the loss is evaluated on the generated images. The aesthetic loss guides the training of the style transfer networks by maximizing the expected aesthetic score of generated images, thereby eliminating image noise and distortion.
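A minimal sketch of this score expectation, assuming the model outputs a probability vector over the K score bins:

```python
def expected_score(probs):
    # probs[i] is the predicted probability that the image scores i+1,
    # for scores 1..K where K = len(probs).
    return sum((i + 1) * p for i, p in enumerate(probs))

def aesthetic_loss(probs):
    # Maximizing the expected score is implemented by minimizing its negative.
    return -expected_score(probs)
```

For K = 10 (the NIMA setting used later in the embodiment), a uniform distribution gives an expected score of 5.5.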
The dual consistency loss L_dual uses low-level pixel features and high-level semantic features simultaneously and applies a first-order norm constraint (hereinafter the L_1 constraint), which ensures that the image after style transfer corresponds to the original image in content. It is expressed as:
L_dual = θ_p·L_p + θ_s·L_s
where L_p and L_s respectively denote the L_1 constraint on low-level pixel features and the L_1 constraint on high-level semantic features from the discriminator networks, and θ_p, θ_s dynamically adjust the weights of the pixel constraint and the semantic constraint.
The pixel constraint L_p is expressed as:
L_p = ||a_2 − a_0||_1 + ||b_2 − b_0||_1
The semantic constraint L_s is expressed as:
L_s = ||f_A(a_2) − f_A(a_0)||_1 + ||f_B(b_2) − f_B(b_0)||_1
where ||·||_1 denotes the L_1 constraint, and f_A(·), f_B(·) denote the semantic feature maps extracted by discriminator networks D_A and D_B.
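A sketch of the dual consistency term under these definitions; `feat` stands in for a hypothetical feature extractor taken from the discriminator:

```python
import numpy as np

def l1(x, y):
    # First-order (L1) norm constraint between two arrays.
    return np.abs(x - y).mean()

def dual_consistency(a0, a2, b0, b2, feat, theta_p=0.5, theta_s=0.5):
    # Pixel constraint on the reconstructions plus semantic constraint on
    # discriminator features, weighted by theta_p / theta_s.
    l_p = l1(a2, a0) + l1(b2, b0)
    l_s = l1(feat(a2), feat(a0)) + l1(feat(b2), feat(b0))
    return theta_p * l_p + theta_s * l_s
```

With perfect reconstructions (a_2 = a_0, b_2 = b_0) the loss is zero, as required by the content-preservation goal.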
The style balance loss L_style is mainly used to balance the training speed in the two directions, so that the model obtains a good result under joint training. For the style transfer networks it is expressed as:
L_style = max(L_{G_A}, L_{G_B})
where L_{G_A}, L_{G_B} respectively denote the adversarial losses of G_A and G_B.
For the discriminator networks it is expressed as:
L_style = max(L_{D_A}, L_{D_B})
where L_{D_A}, L_{D_B} respectively denote the adversarial losses of D_A and D_B.
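Combining the four terms, a sketch of the generator-side objective (the λ defaults shown are the settings given later in the embodiment, used here only as illustrative values):

```python
def style_balance(loss_g_a, loss_g_b):
    # Always the larger of the two directions' adversarial losses, so the
    # direction that is lagging behind receives the extra training push.
    return max(loss_g_a, loss_g_b)

def total_loss(adv_losses, l_aes, l_dual, lam1=0.5, lam2=10.0, lam3=1.0):
    # Loss = L_adv + lam1*L_aes + lam2*L_dual + lam3*L_style,
    # where adv_losses = (adversarial loss of G_A, adversarial loss of G_B).
    l_style = style_balance(*adv_losses)
    return sum(adv_losses) + lam1 * l_aes + lam2 * l_dual + lam3 * l_style
```

The same `style_balance` form applies to the discriminator side with the D_A / D_B adversarial losses.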
Step 4: train the network model of step 2 with the preprocessed training data of step 1 and the loss function of step 3 to obtain an unsupervised image style transfer network S_T.
Preferably, this step is realized by the following procedure:
Step1: initialize the model parameters, applying Gaussian-distribution initialization to the parameters of the style transfer networks G_A, G_B and the discriminator networks D_A, D_B, and start training with the preprocessed training dataset;
Step2: input the generated images into the discriminator networks D_A, D_B and compute the adversarial loss L_adv;
Step3: compute the dual consistency loss L_dual between the reconstructed images and the original images;
Step4: input the generated images, without scaling, directly into the aesthetic model N_s and compute the aesthetic loss L_aes;
Step5: compute the style balance loss L_style of the entire model;
Step6: evaluate the overall loss function Loss of step 3 to obtain the final loss, then back-propagate to compute gradients and update the parameter values of the style transfer networks and the discriminator networks, while keeping the parameter values of the aesthetic model unchanged throughout;
Step7: repeat Step2–Step6 until the loss function stabilizes.
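The Step1–Step7 procedure can be outlined as follows. The networks are stubbed with identity/mean functions purely so the control flow runs standalone; the real G and D are the deep CNNs of step 2, and the aesthetic term is stubbed to zero:

```python
import numpy as np

# Placeholder "networks" -- NOT the patent's CNNs, just stand-ins.
g_a = g_b = lambda x: x                 # style transfer stubs (identity)
d_a = d_b = lambda x: float(x.mean())   # discriminator stubs (mean score)

def train_step(a0, b0, lam1=0.5, lam2=10.0, lam3=1.0):
    b1, a1 = g_b(a0), g_a(b0)                                 # generated images
    a2, b2 = g_a(b1), g_b(a1)                                 # reconstructions
    l_adv = (d_a(a1) - 1) ** 2 + (d_b(b1) - 1) ** 2           # Step2
    l_dual = np.abs(a2 - a0).mean() + np.abs(b2 - b0).mean()  # Step3
    l_aes = 0.0                                               # Step4 (stubbed)
    l_style = max((d_a(a1) - 1) ** 2, (d_b(b1) - 1) ** 2)     # Step5
    return l_adv + lam1 * l_aes + lam2 * l_dual + lam3 * l_style  # Step6
```

Step7 would wrap `train_step` in a loop over the dataset until the returned loss stabilizes.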
For each dataset, after the above end-to-end unsupervised training, an unsupervised image style transfer model S_T is obtained.
Step 5: perform style transfer: input the image to be converted into the style transfer network S_T obtained in step 4 to obtain the image after style transfer.
Beneficial effects
Compared with the prior art, the method of the present invention has the following advantages:
The invention introduces an aesthetic rating model; by maximizing the aesthetic-quality score, image noise and distortion can be eliminated more effectively.
The invention redefines the dual consistency constraint and dynamically adjusts the weights of pixel features and semantic features, which accelerates the convergence of the style transfer model and produces higher-quality style-transfer images.
The invention uses a style-balancing technique that adaptively adjusts the convergence speed of the model in the different style-transfer directions, greatly improving the stability of the model and making the selection and design of the network structure more flexible.
The invention performs well on multiple datasets and has good generality.
Detailed description of the invention
Fig. 1 is the work flow diagram of the method for the present invention;
Fig. 2 is the overall network architecture diagram of the method for the present invention;
Fig. 3 is the common convolution unit CIR of the method for the present invention;
Fig. 4 is the residual block unit ResBlock of the method of the present invention;
Fig. 5 is the common transposed convolution unit DIR of the method of the present invention.
Specific embodiment
The present invention is described in detail below with a specific embodiment in conjunction with the accompanying drawings.
Embodiment
This embodiment describes the overall flow and network structure of the unsupervised image style transfer model.
An unsupervised image style transfer method based on dual learning, as shown in Fig. 1, comprising the following steps:
Step 1: preprocess the training data. High-resolution images obtained from public datasets serve as training data. The training set contains many images of different sizes; to simplify the network design and reduce computation, the aspect ratio of the original images is ignored and they are uniformly scaled to 284 × 284. To compensate for the limited amount of training data, a 256 × 256 region is randomly cropped from each scaled image, realizing data augmentation. This size is chosen for convenience of the multiple down-sampling operations inside the model: each down-sampling halves the image size, so an odd-sized image cannot be down-sampled cleanly, whereas 256 × 256 guarantees that the size never becomes odd after repeated down-sampling.
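A minimal numpy sketch of this preprocessing; nearest-neighbor resize is used purely for illustration (the patent does not specify the resampling method):

```python
import numpy as np

def resize_nearest(img, out_h=284, out_w=284):
    # Scale to a fixed size, ignoring the original aspect ratio (Step 1).
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def random_crop(img, size=256, rng=None):
    # Randomly crop a size x size region from the scaled image (augmentation).
    rng = rng or np.random.default_rng()
    top = int(rng.integers(0, img.shape[0] - size + 1))
    left = int(rng.integers(0, img.shape[1] - size + 1))
    return img[top:top + size, left:left + size]
```

Applied in sequence, any input image becomes a 256 × 256 crop of its 284 × 284 rescaling, and 256 halves cleanly through repeated down-sampling (256 → 128 → 64).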
Step 2: design the network structure model. As shown in Fig. 2, the inputs of the network are an A-style original image a_0 and a B-style original image b_0, and the outputs are the B-style generated image b_1 and the A-style generated image a_1 after style transfer. In the training stage, the A-style reconstructed image a_2 and the B-style reconstructed image b_2 are also used. During training, the generated images a_1 and b_1 are not only input into the discriminator networks D_A, D_B of their respective styles for adversarial training, but are also input into the aesthetic rating model N_s; by maximizing the aesthetic-quality score of N_s, the training of the style transfer networks G_A, G_B is guided.
Fig. 3, Fig. 4 and Fig. 5 respectively show the implementation details of the common convolution unit CIR, the residual block unit ResBlock and the common transposed convolution unit DIR, which mainly comprise convolution (Conv), instance normalization (InstanceNorm), transposed convolution (Deconv) and the ReLU activation function.
Table 1 and Table 2 respectively show the structures of the style transfer network and the discriminator network. The style transfer network mainly consists of a convolution module, a feature extraction module, an output module and a tanh activation layer (Tanh). The convolution module is a stack of several common convolution units CIR, mainly used for preliminary feature extraction and feature-map down-scaling; the feature extraction module is a stack of several same-scale residual blocks (ResBlock), mainly used for efficient feature extraction; the output module is a stack of several transposed convolution units DIR, mainly used for generating the image of the target style. K, M, N denote the convolution kernel size, the number of input channels and the number of output channels; H, W, C denote the height, width and number of channels of a feature map. The discriminator uses a patch-based network whose output is a 32 × 32 matrix, each element indicating whether the corresponding patch belongs to a real sample.
The aesthetic rating model uses the NIMA model based on MobileNet, with the number of score classes K = 10.
Table 1
Operation    Convolution kernel (K×K×M×N)    Input (H×W×C)    Output (H×W×C)
CIR 7×7×3×64 256×256×3 256×256×64
CIR 3×3×64×128 256×256×64 128×128×128
CIR 3×3×128×256 128×128×128 64×64×256
9×ResBlock 3×3×256×256 64×64×256 64×64×256
DIR 3×3×256×128 64×64×256 128×128×128
DIR 3×3×128×64 128×128×128 256×256×64
Conv 7×7×64×3 256×256×64 256×256×3
Tanh N/A 256×256×3 256×256×3
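The spatial sizes in Table 1 can be checked with standard convolution arithmetic. The stride and padding values below are assumptions inferred from the table's input/output sizes (they are not stated explicitly in the table):

```python
def conv_out(size, k, stride, pad):
    # Output spatial size of a convolution.
    return (size + 2 * pad - k) // stride + 1

def deconv_out(size, k, stride, pad, out_pad):
    # Output spatial size of a transposed convolution.
    return (size - 1) * stride - 2 * pad + k + out_pad

s = 256
s = conv_out(s, 7, 1, 3)       # CIR 7x7, stride 1      -> 256
s = conv_out(s, 3, 2, 1)       # CIR 3x3, stride 2      -> 128
s = conv_out(s, 3, 2, 1)       # CIR 3x3, stride 2      -> 64
# 9 x ResBlock keep the 64 x 64 scale unchanged
s = deconv_out(s, 3, 2, 1, 1)  # DIR, stride 2          -> 128
s = deconv_out(s, 3, 2, 1, 1)  # DIR, stride 2          -> 256
s = conv_out(s, 7, 1, 3)       # output Conv 7x7        -> 256
```

Under these assumed strides and paddings, the chain reproduces the 256 → 128 → 64 → 128 → 256 progression of Table 1.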
Table 2
Step 3: design the loss functions for training the network, including the adversarial loss L_adv, the aesthetic loss L_aes, the dual consistency loss L_dual and the style balance loss L_style. The overall loss function Loss is the weighted sum of the above four losses, namely:
Loss = L_adv + λ_1·L_aes + λ_2·L_dual + λ_3·L_style
where λ_1 increases linearly from 0.0 to 0.5, λ_2 is fixed at 10, and λ_3 is fixed at 1; when computing L_dual, the pixel constraint weight θ_p decreases linearly from 0.6 to 0.4 and the semantic constraint weight θ_s increases linearly from 0.4 to 0.6.
Specifically, the adversarial loss L_adv uses the least-squares loss and encourages generated images to be as close as possible to real images; it is the most basic loss function for the adversarial training of a generative adversarial network. The aesthetic loss L_aes maximizes the mathematical expectation of the aesthetic-quality score of generated images and is used to eliminate image noise and distortion. The dual consistency loss L_dual is the weighted L_1-norm constraint on the pixel features and semantic features of the original images and the reconstructed images, and guarantees that the image after style transfer corresponds to the original image in content. The style balance loss L_style always equals the larger of the A-style and B-style adversarial losses, and maintains the convergence speed of the two style-transfer directions through this additional update term.
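The linear schedules above can be sketched as follows; `progress` is a hypothetical helper argument running from 0 at the start of training to 1 at the end:

```python
def linear_schedule(start, end, progress):
    # Linearly interpolate between start and end as training progresses.
    progress = min(max(progress, 0.0), 1.0)
    return start + (end - start) * progress

def loss_weights(progress):
    lam1 = linear_schedule(0.0, 0.5, progress)     # aesthetic weight ramps up
    theta_p = linear_schedule(0.6, 0.4, progress)  # pixel constraint fades
    theta_s = linear_schedule(0.4, 0.6, progress)  # semantic constraint grows
    return lam1, theta_p, theta_s, 10.0, 1.0       # lam2, lam3 stay fixed
```

Ramping λ_1 up lets the adversarial and consistency terms stabilize before the aesthetic term starts to dominate, while the θ crossover shifts emphasis from pixels to semantics.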
Step 4: train the network model of step 2 end-to-end with the preprocessed training data of step 1 and the loss function of step 3. The specific steps are as follows:
Step1: initialize the model parameters, applying Gaussian-distribution initialization (mean 0, variance 0.01) to the parameters of networks G_A, G_B and networks D_A, D_B; a training dataset of two thousand images, scaled to the fixed size and randomly cropped, is used to train the unsupervised image style transfer network;
Step2: considering GPU memory consumption, input 1–2 images at a time into the style transfer network S_T, input the generated images into the discriminator networks D_A, D_B, and compute the adversarial loss L_adv;
Step3: compute the dual consistency loss L_dual between the reconstructed images and the original images;
Step4: input the generated images, without scaling, directly into the NIMA aesthetic model and compute the aesthetic loss L_aes;
Step5: compute the style balance loss L_style of the entire model;
Step6: evaluate the overall loss function Loss of step 3 to obtain the final loss, then back-propagate gradients with Adam and update the parameters of the style transfer networks and the discriminator networks; the first-moment coefficient of Adam is set to 0.5 and the second-moment coefficient to 0.99, while the parameters of the aesthetic model are kept unchanged throughout;
Step7: repeat Step2–Step6 until the loss function stabilizes.
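A numpy sketch of the Adam update with the stated moment coefficients; the learning rate is an assumption for illustration, as the patent does not specify one:

```python
import numpy as np

def adam_step(param, grad, state, lr=2e-4, beta1=0.5, beta2=0.99, eps=1e-8):
    # One Adam update; beta1/beta2 are the first/second moment coefficients,
    # set to 0.5 and 0.99 as in Step6. state holds m, v, and the step count t.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad ** 2
    m_hat = state["m"] / (1 - beta1 ** state["t"])   # bias correction
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)

# Demo: minimize f(p) = p^2 for a few steps; the gradient is 2p.
p, st = np.array(1.0), {"m": 0.0, "v": 0.0, "t": 0}
for _ in range(100):
    p = adam_step(p, 2 * p, st)
```

A low beta1 such as 0.5 is a common choice for GAN training, since it shortens the gradient memory and reduces oscillation between generator and discriminator updates.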
For each dataset, after the above end-to-end unsupervised training, an unsupervised image style transfer model S_T is obtained.
Step 5: perform style transfer: input the image to be converted into the style transfer network S_T obtained in step 4 to obtain the image after style transfer.
The style transfer network S_T of the present method achieves good transfer results on datasets such as apple2orange and summer2winter_yosemite; under the premise of keeping the subject content of the image unchanged, style transfer between images of different styles is realized through transformations of color, texture and the like.
In order to illustrate the content of the present invention and its implementation, this specification gives the above specific embodiment. However, those skilled in the art should understand that the present invention is not limited to the above preferred embodiment; any product of another form derived under the inspiration of the present invention, whatever variation is made in its shape or structure, falls within the protection scope of the present invention as long as its technical solution is identical or similar to that of the present application.

Claims (3)

1. An unsupervised image style transfer method based on dual learning, characterized by comprising the following steps:
Step 1: preprocess the training data;
Prepare a certain number of images of two styles as training data; uniformly scale all images in the training dataset to a fixed size of m × n, where m and n are natural numbers;
Step 2: design the network structure model;
The network structure model includes five networks altogether: style transfer networks G_A and G_B, discriminator networks D_A and D_B, and an aesthetic scoring network N_s;
wherein G_A and G_B have the same network structure and are respectively used for image style transfer in the two directions; D_A and D_B have the same network structure and respectively judge whether an image of the corresponding style is real; N_s is a pretrained aesthetic rating model used as a plug-in of the whole network and does not itself participate in updates; the entire model consists of end-to-end trained deep convolutional neural networks;
for an A-style original image a_0, a B-style generated image b_1 is first produced by G_B and an A-style reconstructed image a_2 is then produced by G_A; for a B-style original image b_0, an A-style generated image a_1 is first produced by G_A and a B-style reconstructed image b_2 is then produced by G_B;
Step 3: design the loss functions for training the network;
Multiple loss functions are combined; the loss function of the network includes four parts: the adversarial loss L_adv, the aesthetic loss L_aes, the dual consistency loss L_dual, and the style balance loss L_style; the overall loss function Loss is:
Loss = L_adv + λ_1·L_aes + λ_2·L_dual + λ_3·L_style
wherein λ_1, λ_2, λ_3 respectively denote the weights of the aesthetic loss L_aes, the dual consistency loss L_dual, and the style balance loss L_style;
The adversarial loss L_adv uses the least-squares loss; for D_A and G_A it is respectively expressed as:
L_{D_A} = E_{a_0}[(D_A(a_0) − 1)^2] + E_{b_0}[(D_A(G_A(b_0)))^2]
L_{G_A} = E_{b_0}[(D_A(G_A(b_0)) − 1)^2]
for D_B and G_B it is respectively expressed as:
L_{D_B} = E_{b_0}[(D_B(b_0) − 1)^2] + E_{a_0}[(D_B(G_B(a_0)))^2]
L_{G_B} = E_{a_0}[(D_B(G_B(a_0)) − 1)^2]
wherein D_A(·) denotes the discrimination result of discriminator network D_A on an image, and D_B(·) denotes that of discriminator network D_B; G_A(·) denotes the result of converting an image through style transfer network G_A, and G_B(·) denotes the result of converting an image through style transfer network G_B; E_{a_0}[·] denotes the mathematical expectation over a_0, and E_{b_0}[·] denotes the mathematical expectation over b_0;
The aesthetic loss L_aes is computed by the aesthetic model and is expressed as:
L_aes = − Σ_{i=1}^{K} i · p_i
wherein K is a natural number, N_s outputs the probabilities of scores 1 to K, and p_i denotes the probability of score i; the aesthetic loss guides the training of the style transfer networks by maximizing the expected aesthetic score of generated images, so as to eliminate image noise and distortion;
The dual consistency loss L_dual uses low-level pixel features and high-level semantic features simultaneously and applies a first-order norm constraint, i.e. an L_1 constraint, which ensures that the image after style transfer corresponds to the original image in content; it is expressed as:
L_dual = θ_p·L_p + θ_s·L_s
wherein L_p and L_s respectively denote the L_1 constraint on low-level pixel features and the L_1 constraint on high-level semantic features from the discriminator networks, and θ_p, θ_s dynamically adjust the weights of the pixel constraint and the semantic constraint;
the pixel constraint L_p is expressed as:
L_p = ||a_2 − a_0||_1 + ||b_2 − b_0||_1
the semantic constraint L_s is expressed as:
L_s = ||f_A(a_2) − f_A(a_0)||_1 + ||f_B(b_2) − f_B(b_0)||_1
wherein ||·||_1 denotes the L_1 constraint, and f_A(·), f_B(·) denote the semantic feature maps extracted by discriminator networks D_A and D_B;
The style balance loss L_style is mainly used for balancing the training speed in the two directions, so that the model obtains a good result under joint training; for the style transfer networks it is expressed as:
L_style = max(L_{G_A}, L_{G_B})
wherein L_{G_A}, L_{G_B} respectively denote the adversarial losses of G_A and G_B;
for the discriminator networks it is expressed as:
L_style = max(L_{D_A}, L_{D_B})
wherein L_{D_A}, L_{D_B} respectively denote the adversarial losses of D_A and D_B;
Step 4: train the network model of step 2 with the preprocessed training data of step 1 and the loss function of step 3 to obtain an unsupervised image style transfer network S_T;
Step 5: perform style transfer: input the image to be converted into the style transfer network S_T obtained in step 4 to obtain the image after style transfer.
2. The unsupervised image style transfer method based on dual learning according to claim 1, characterized in that m = n = 256.
3. The unsupervised image style transfer method based on dual learning according to claim 1 or 2, characterized in that the training process described in step 4 is as follows:
Step1: initialize the model parameters, applying Gaussian-distribution initialization to the parameters of the style transfer networks G_A, G_B and the discriminator networks D_A, D_B, and start training with the preprocessed training dataset;
Step2: input the generated images into the discriminator networks D_A, D_B and compute the adversarial loss L_adv;
Step3: compute the dual consistency loss L_dual between the reconstructed images and the original images;
Step4: input the generated images, without scaling, directly into the aesthetic model N_s and compute the aesthetic loss L_aes;
Step5: compute the style balance loss L_style of the entire model;
Step6: evaluate the overall loss function Loss of step 3 to obtain the final loss, then back-propagate to compute gradients and update the parameter values of the style transfer networks and the discriminator networks, while keeping the parameter values of the aesthetic model unchanged;
Step7: repeat Step2–Step6 until the loss function stabilizes.
CN201910552097.XA 2019-05-31 2019-06-25 Unsupervised image style migration method based on dual learning Active CN110458750B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910469536 2019-05-31
CN2019104695360 2019-05-31

Publications (2)

Publication Number Publication Date
CN110458750A true CN110458750A (en) 2019-11-15
CN110458750B CN110458750B (en) 2021-05-25

Family

ID=68480807

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910552097.XA Active CN110458750B (en) 2019-05-31 2019-06-25 Unsupervised image style migration method based on dual learning

Country Status (1)

Country Link
CN (1) CN110458750B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875794A * 2018-05-25 2018-11-23 National University of Defense Technology Image visibility detection method based on transfer learning
CN109345446A * 2018-09-18 2019-02-15 Xihua University An image style conversion algorithm based on dual learning
CN109345507A * 2018-08-24 2019-02-15 Hohai University A dam image crack detection method based on transfer learning


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HOSSEIN TALEBI ET AL.: ""NIMA: Neural Image Assessment"", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
ZILI YI ET AL.: ""DualGAN: Unsupervised Dual Learning for Image-to-Image Translation"", 《2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
ZHI Shuangshuang et al.: "Gait virtual sample generation method based on CNN and DLTL", Application Research of Computers *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111047507A (en) * 2019-11-29 2020-04-21 北京达佳互联信息技术有限公司 Training method of image generation model, image generation method and device
CN111047507B (en) * 2019-11-29 2024-03-26 北京达佳互联信息技术有限公司 Training method of image generation model, image generation method and device
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111127309B (en) * 2019-12-12 2023-08-11 杭州格像科技有限公司 Portrait style migration model training method, portrait style migration method and device
CN111161137A (en) * 2019-12-31 2020-05-15 四川大学 Multi-style Chinese painting flower generation method based on neural network
CN111275713B (en) * 2020-02-03 2022-04-12 武汉大学 Cross-domain semantic segmentation method based on countermeasure self-integration network
CN111275713A (en) * 2020-02-03 2020-06-12 武汉大学 Cross-domain semantic segmentation method based on countermeasure self-integration network
CN111429342A (en) * 2020-03-31 2020-07-17 河南理工大学 Photo style migration method based on style corpus constraint
CN111429342B (en) * 2020-03-31 2024-01-05 河南理工大学 Photo style migration method based on style corpus constraint
CN111597972A (en) * 2020-05-14 2020-08-28 南开大学 Makeup recommendation method based on ensemble learning
CN111597972B (en) * 2020-05-14 2022-08-12 南开大学 Makeup recommendation method based on ensemble learning
CN111739115A (en) * 2020-06-23 2020-10-02 中国科学院自动化研究所 Unsupervised human body posture migration method, system and device based on cycle consistency
CN111739115B (en) * 2020-06-23 2021-03-16 中国科学院自动化研究所 Unsupervised human body posture migration method, system and device based on cycle consistency
CN112418310A (en) * 2020-11-20 2021-02-26 第四范式(北京)技术有限公司 Text style transfer model training method and system and image generation method and system
CN112581360A (en) * 2020-12-30 2021-03-30 杭州电子科技大学 Multi-style image aesthetic quality enhancement method based on structural constraint
CN112581360B (en) * 2020-12-30 2024-04-09 杭州电子科技大学 Method for enhancing aesthetic quality of multi-style image based on structural constraint
CN113066114A (en) * 2021-03-10 2021-07-02 北京工业大学 Cartoon style transfer method based on Retinex model
CN113283444A (en) * 2021-03-30 2021-08-20 电子科技大学 Heterogeneous image migration method based on generation countermeasure network
CN113283444B (en) * 2021-03-30 2022-07-15 电子科技大学 Heterogeneous image migration method based on generation countermeasure network
CN113222114A (en) * 2021-04-22 2021-08-06 北京科技大学 Image data augmentation method and device
CN113222114B (en) * 2021-04-22 2023-08-15 北京科技大学 Image data augmentation method and device
CN113256513B (en) * 2021-05-10 2022-07-01 杭州格像科技有限公司 Face beautifying method and system based on adversarial neural network
CN113256513A (en) * 2021-05-10 2021-08-13 杭州格像科技有限公司 Face beautifying method and system based on adversarial neural network
CN113538218A (en) * 2021-07-14 2021-10-22 浙江大学 Weakly-paired image style transfer method based on pose self-supervised generative adversarial network
CN115641253A (en) * 2022-09-27 2023-01-24 南京栢拓视觉科技有限公司 Material neural style transfer method for improving content aesthetic quality
CN115641253B (en) * 2022-09-27 2024-02-20 南京栢拓视觉科技有限公司 Material neural style transfer method for improving content aesthetic quality

Also Published As

Publication number Publication date
CN110458750B (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN110458750A (en) An unsupervised image style transfer method based on dual learning
CN108304826A (en) Facial expression recognition method based on convolutional neural networks
CN110222722A (en) Interactive image stylization processing method, system, computing device and storage medium
CN101149787B (en) Fingerprint synthesis method and system based on orientation field model and Gabor filter
CN112085836A (en) Three-dimensional face reconstruction method based on graph convolution neural network
CN106056155A (en) Super-pixel segmentation method based on boundary information fusion
CN107240136B (en) Static image compression method based on deep learning model
CN105844635A (en) Sparse representation depth image reconstruction algorithm based on structure dictionary
CN106780713A (en) A three-dimensional face modeling method and system based on a single photo
CN109345446A (en) An image style transfer algorithm based on dual learning
CN116091886A (en) Semi-supervised target detection method and system based on teacher student model and strong and weak branches
CN110349085A (en) A single-image super-resolution feature enhancement method based on a generative adversarial network
CN101894263A (en) Computer-aided classification system and method for plant species based on level set and locality-sensitive discriminant mapping
Battiato et al. A Survey of Digital Mosaic Techniques.
CN101964055B (en) Natural scene type identification method simulating the visual perception mechanism
CN113935899A (en) Ship plate image super-resolution method based on semantic information and gradient supervision
CN101609557B (en) Texture image segmenting method based on reinforced airspace-transform domain statistical model
CN109285171A (en) An insulator hydrophobicity image segmentation device and method
CN112529774A (en) Remote sensing simulation image generation method based on cycleGAN
CN201707291U (en) Computer-aided classification system of plant species based on level set and locality-sensitive discriminant mapping
CN115661612A (en) General climate data downscaling method based on meta-transfer learning
CN115760552A (en) Face image makeup transfer method and system based on an image makeup transfer network
CN115471424A (en) Self-learning-based directed point cloud denoising method
CN108492365A (en) An adaptive leaf texture visual simulation method based on color grading
Zhou et al. Conditional generative adversarial networks for domain transfer: a survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant