CN107705242A - An image style transfer method combining deep learning and depth perception - Google Patents

An image style transfer method combining deep learning and depth perception

Info

Publication number
CN107705242A
CN107705242A (application CN201710596250.XA; granted as CN107705242B)
Authority
CN
China
Prior art keywords
style
artwork
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710596250.XA
Other languages
Chinese (zh)
Other versions
CN107705242B (en)
Inventor
叶武剑
卢俊羽
刘怡俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201710596250.XA priority Critical patent/CN107705242B/en
Publication of CN107705242A publication Critical patent/CN107705242A/en
Application granted granted Critical
Publication of CN107705242B publication Critical patent/CN107705242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The invention discloses an image style transfer method combining deep learning and depth perception, comprising the following steps: 1) the original image x is preprocessed by an image transformation network to generate an image y*; the generated image y* and the original image x are each fed into a trained first model to obtain the feature maps of each layer of the model, and the style and content loss values are computed; 2) the generated image y* and the original image x are each fed into a trained second model to obtain the depth estimation map of the model output layer, and the depth loss value is computed; 3) the above content, style and depth losses are combined into one linear function to compute the total loss between the generated image and the original; 4) the model parameters are optimized by an optimization algorithm; the previously generated stylized image is then passed through the model again to generate a new style image and compute new loss values, and the model parameters are updated until the model converges. The present invention preserves the depth information and sense of three-dimensional structure of the original image during style transfer, so that different styles blend into the original more naturally and stereoscopically, improving the quality of image stylization.

Description

An image style transfer method combining deep learning and depth perception
Technical field
The invention belongs to the fields of image processing and deep learning, and in particular relates to an image style transfer method that preserves the depth of field, object contours, and sense of distance of the original image.
Background technology
Deep neural networks are widely used across computer vision. Typical lower-level problems include image denoising and segmentation; there are also problems such as keyword-based retrieval, and more advanced ones such as object recognition. Besides supervised tasks trained on large amounts of labeled data (such as scene classification), deep neural networks can also solve many abstract problems for which no real training data exists. Image style transfer is one such problem: the style of an artistic picture is extracted and applied to the content of a target image, thereby realizing a transfer of style.
Neural Style Transfer (NST) uses convolutional neural networks to separate and recombine the content and style of pictures, combining the semantic content of a picture with different artistic styles to display different appeal. Although techniques for transferring the style of one image onto another have existed for nearly 15 years, using neural networks for this task is a recent development.
For example, in Document 1 (A Neural Algorithm of Artistic Style), the researchers Gatys, Ecker and Bethge describe a method that realizes image style transfer by iterating with a deep convolutional neural network (CNN). In the image style transfer technique of Document 1, the original content image and the style image are fed directly into a trained convolutional neural network (VGG19), feature maps are extracted from particular network layers, and the style and content losses are computed; the total loss is then iterated to a minimum to obtain the generated image.
For example, Document 2 (Perceptual Losses for Real-Time Style Transfer and Super-Resolution) describes using perceptual loss functions to train a feed-forward network for image transformation tasks. The trained feed-forward network can solve, in real time, the optimization problem posed by Gatys et al. Compared with the optimization-based method, the network gives similar results while being up to three orders of magnitude faster.
Summary of the invention
In view of the deficiencies of the prior art, the present invention incorporates a depth perception network (Hourglass3) and adds a depth loss to the target loss function, estimating the depth of field of both the original image and the generated style image. During style transfer, the generated image not only fuses the corresponding style and content naturally, but also keeps the near-far structural information of the original. The effect is especially pronounced when processing landscape paintings.
To achieve the above object, the proposed style transfer method comprises the following steps:
(1) The generated image y* and the original image x are each fed into a trained VGG16 model (http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7) to obtain the feature maps of each layer. An image passed through the VGG16 model produces many feature maps; the target content loss is obtained by comparing the feature map of the relu2_2 layer of the VGG16 network with that of the original image. The style of an image is represented by Gram matrices; the target style loss is obtained by comparing the feature maps of the relu1_2, relu2_2, relu3_3 and relu4_3 layers of the VGG16 network with those of the original.
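The Gram-matrix style representation and the feature-map comparisons of step (1) can be sketched as follows. This is a minimal NumPy illustration, not the patent's torch7/VGG16 implementation; the feature maps would in practice come from the named VGG16 layers, which are only referenced here in comments.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a (C, H, W) feature map, normalized by C*H*W."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def content_loss(feat_gen, feat_orig):
    """Normalized squared L2 distance between two feature maps
    (relu2_2 in the patent)."""
    return np.sum((feat_gen - feat_orig) ** 2) / feat_gen.size

def style_loss(feats_gen, feats_style):
    """Sum of squared Frobenius distances between Gram matrices over
    the selected layers (relu1_2 ... relu4_3 in the patent)."""
    return sum(np.sum((gram_matrix(a) - gram_matrix(b)) ** 2)
               for a, b in zip(feats_gen, feats_style))
```

Since the Gram matrix discards spatial arrangement and keeps only channel correlations, it captures texture-like "style" while the raw feature comparison captures content.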
(2) The generated image y* and the original image x are each fed into a trained Hourglass3 model (https://vllab.eecs.umich.edu/data/nips2016/hourglass3.tar.gz) to obtain the depth estimation map of the model output layer; the depth estimation maps of y* and x are then combined to obtain the depth loss value.
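Step (2) can be sketched in the same spirit: the depth estimation maps of y* and x are compared elementwise, in the same form as the content loss. This is a hedged illustration; `predict_depth` below is a hypothetical stand-in for the trained Hourglass3 network, not the real model.

```python
import numpy as np

def depth_loss(depth_gen, depth_orig):
    """Normalized squared L2 distance between the depth estimation maps
    of the generated image and the original."""
    return np.sum((depth_gen - depth_orig) ** 2) / depth_gen.size

def predict_depth(image):
    """Hypothetical stand-in for the Hourglass3 depth estimator: a real
    implementation would run the trained network; here we fake a depth
    map by averaging a (C, H, W) image over the channel axis."""
    return image.mean(axis=0)
```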
(3) The above content, style and depth losses are combined into one linear function to compute the total loss between the generated image and the original. Based on this total loss function, the descending gradient of the loss is computed by the optimization algorithm to minimize the total loss of the stylization model.
(4) Steps (1) to (3) are repeated. Each training image has size 256 × 256. The maximum number of training iterations is set to 40000, L-BFGS is chosen as the optimization algorithm, the learning rate is set to 1 × 10⁻³, and the batch_size is set to 4.
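The training schedule of step (4) can be sketched as a loop. Plain gradient descent stands in here for L-BFGS, which the patent actually specifies; the toy quadratic objective and all shapes are illustrative only.

```python
import numpy as np

def train(params, grad_fn, lr=1e-3, max_iters=40000, tol=1e-8):
    """Update parameters against a loss gradient until the maximum
    iteration count (40000 in the patent) or until the gradient norm
    indicates convergence."""
    for _ in range(max_iters):
        g = grad_fn(params)
        params = params - lr * g
        if np.linalg.norm(g) < tol:
            break
    return params

# Toy example: minimize ||p - 3||^2, whose gradient is 2*(p - 3).
p_final = train(np.zeros(4), lambda p: 2.0 * (p - 3.0))
```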
Technical features and beneficial effects of the present invention:
The present invention builds on the latest research results in image style transfer and uses the torch7 deep learning framework, ensuring the reliability of the style transfer algorithm. To further improve the rendering of the three-dimensional structure of landscape paintings, a depth loss computed with Hourglass3 is added to the target loss function of style transfer. The converged model can thus preserve the depth information and sense of three-dimensional structure of the original image to a certain degree.
In summary, the technical feature of the method of the invention is that the depth information and sense of three-dimensional structure of the original image are preserved during style transfer, so that different styles blend into the original more naturally and stereoscopically, improving the quality of image stylization.
Brief description of the drawings
Fig. 1 is the model structure of the present invention
Fig. 2 is the algorithm flow chart of the method of the invention
Fig. 3 is a comparison of stylization effects
Fig. 4 is a comparison of near and far depth of field
Detailed description of the embodiments
Embodiment 1
The invention provides an image style transfer method that can keep the near-far depth structure of the original image. The model uses the torch7 deep learning framework and comprises the following steps:
1) Apply a scale transformation to the input image x so that the input is kept at 1024*512 for convenient computation;
2) Set the parameters of the image transformation network fw to default values;
3) Pass the input image x through the image transformation network fw to obtain an initial y*;
4) Feed the generated image y* and the original x into the trained VGG16 model (http://cs.stanford.edu/people/jcjohns/fast-neural-style/models/vgg16.t7) to obtain the feature maps of each layer;
5) Obtain the target content loss by comparing the feature map of the relu2_2 layer of the VGG16 network with that of the original:

$$loss_{feat}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| \phi_i(\hat{y}) - \phi_i(y) \right\|_2^2$$

where N_i(\phi) denotes the normalization factor of the i-th layer (for a convolutional neural network its value equals the size of that layer's feature map) and \phi_i(y) denotes the feature map of input image y at the i-th convolutional layer of the network;
6) Obtain the target style loss by comparing the feature maps of the relu1_2, relu2_2, relu3_3 and relu4_3 layers of the VGG16 network with those of the original; the image style is represented by Gram matrices:

$$loss_{style}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| G_i^{\phi}(\hat{y}) - G_i^{\phi}(y_s) \right\|_2^2$$

where N_i(\phi) denotes the normalization factor of the i-th layer and G_i(y) denotes the Gram matrix of the feature map of input image y at the i-th convolutional layer;
7) Feed the generated image y* and the original x into the trained Hourglass3 model (https://vllab.eecs.umich.edu/data/nips2016/hourglass3.tar.gz) to obtain the depth estimation map of the model output layer;
8) Combine the depth estimation maps of y* and x to obtain the depth loss value;
9) Combine the above content, style and depth losses into one linear function and compute the total loss between the generated image and the original.
Based on this total loss function, the stylization model is converged by the L-BFGS optimization algorithm. The style image produced by the converged model preserves the artistic style of the style image, the content of the content image, and the near-far depth structure information.
10) Compute the descending gradient in the image transformation network fw from the feedback of the loss values, and adjust the fw parameters so that the total loss approaches its minimum;
11) Update the style Gram matrices of the original; the learning rate is set to 1 × 10⁻³;
12) Combine the style Gram matrices with the content feature maps of the original to obtain a new generated image;
13) Repeat the preceding steps for iterative training; the number of iterations is set to 40000;
14) After 40000 training iterations, the total loss of the model essentially converges to a global minimum; the generated style image not only keeps the depth and three-dimensional perspective of the original, but also preserves its details and sense of structure well.
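The loop of steps 3) to 13) can be condensed into the following toy sketch. All network calls are hypothetical stand-ins (a one-parameter linear map for fw, identity features for VGG16's relu2_2, a channel mean for Hourglass3), and gradient descent stands in for L-BFGS; none of this is the patented implementation.

```python
import numpy as np

def transform_fw(x, w):
    """Stand-in for the image transformation network fw,
    reduced to a single scalar parameter w."""
    return w * x

def depth(img):
    """Stand-in for the Hourglass3 depth estimation map of a
    (C, H, W) image: the channel mean."""
    return img.mean(axis=0)

def run_embodiment(x, w=0.5, lr=1e-3, iters=2000):
    """Condensed flow of Embodiment 1: generate y*, compare it and its
    depth map against the original x, and adjust the fw parameter down
    the gradient of the combined loss (identity features stand in for
    the VGG16 content features; the style term is omitted for brevity)."""
    for _ in range(iters):
        y = transform_fw(x, w)
        # analytic d/dw of mean((y-x)^2) + mean((depth(y)-depth(x))^2)
        g = (2 * np.mean((y - x) * x)
             + 2 * np.mean((depth(y) - depth(x)) * depth(x)))
        w -= lr * g
    return w
```

With these stand-ins the loss is minimized when fw reproduces the original, so w converges toward 1; in the real method the fixed point instead balances content, style and depth.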
Embodiment 2
The image style transfer method combining deep learning and depth perception of this embodiment comprises the following steps:
1) The image is preprocessed by the image transformation network to generate y*; the generated image y* and the original x are each fed into the trained VGG16 model to obtain the feature maps of each layer, and the style and content losses are computed;
2) The generated image y* and the original x are each fed into the trained Hourglass3 model to obtain the depth estimation map of the model output layer, and the depth loss is computed;
3) The above content, style and depth losses are combined into one linear function to compute the total loss between the generated image and the original;
4) Steps 1) to 3) are repeated, and the stylization model is converged by the L-BFGS optimization algorithm. The style image produced by the converged model preserves the artistic style of the style image, the content of the content image, and the near-far depth structure information.
Preferably, step 1) obtains the style and content features of the image with the VGG16 perceptual network, and comprises the following steps:
1) Obtain the target content loss by comparing the feature map of the relu2_2 layer of the VGG16 network with that of the original:

$$loss_{feat}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| \phi_i(\hat{y}) - \phi_i(y) \right\|_2^2$$

where N_i(\phi) denotes the normalization factor of the i-th layer (for a convolutional neural network its value equals the size of that layer's feature map) and \phi_i(y) denotes the feature map of input image y at the i-th convolutional layer;
2) Obtain the target style loss by comparing the feature maps of the relu1_2, relu2_2, relu3_3 and relu4_3 layers of the VGG16 network with those of the original; the image style is represented by Gram matrices:

$$loss_{style}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| G_i^{\phi}(\hat{y}) - G_i^{\phi}(y_s) \right\|_2^2$$

where G_i(y) denotes the Gram matrix of the feature map of input image y at the i-th convolutional layer;
Preferably, step 2) computes the depth loss between the generated style image and the original with a depth perception network, characterized in that the trained Hourglass3 model of Weifeng Chen et al. of the University of Michigan is introduced, and a depth loss function is defined to compute the depth loss between the input image x and the output image of the style transfer model. Ideally, the output image should have the same depth feature values as the input image x. In particular, the depth loss function can be defined in the same form as the content loss function;
Preferably, step 3) combines the content, style and depth losses into one linear function and computes the total loss between the generated image and the original, characterized in that a linear function is defined to combine these loss values, and the proportions occupied by style, content and depth structure can be adjusted by changing the weights;
Preferably, step 4) comprises:
(1) Compute the descending gradient in the image transformation network fw from the feedback of the loss values, and adjust the fw parameters so that the total loss approaches its minimum;
(2) Update the style Gram matrices of the original; the learning rate is set to 1 × 10⁻³;
(3) Combine the style Gram matrices with the content feature maps of the original to obtain a new generated image;
(4) After 40000 training iterations, the total loss of the model essentially converges to a global minimum; the generated style image not only keeps the depth and three-dimensional perspective of the original, but also preserves its details and sense of structure well.

Claims (5)

1. An image style transfer method combining deep learning and depth perception, characterized by comprising the following steps:
1) the original image x is preprocessed by an image transformation network to generate an image y*; the generated image y* and the original x are each fed into a first model and the style and content losses are computed;
2) the generated image y* and the original x are each fed into a second model to obtain the depth estimation map of the model output layer, and the depth loss is computed;
3) the total loss is computed from the above content, style and depth losses;
4) the model parameters are self-optimized by an optimization algorithm; the previously generated stylized image is then passed through the model again to generate a new style image and compute new losses, and the model parameters are updated until the model converges.
2. The image style transfer method combining deep learning and depth perception according to claim 1, characterized in that step 1) feeds the images into the first model, obtains the feature maps of each layer of the model and computes the style and content losses, and specifically comprises the following steps:
1) the target content loss is first obtained by comparing the feature maps output by the first model with those of the original:

$$loss_{feat}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| \phi_i(\hat{y}) - \phi_i(y) \right\|_2^2$$

where N_i(\phi) denotes the normalization factor of the i-th layer of the perceptual loss network \phi, \phi_i(y) denotes the feature map of input image y at the i-th convolutional layer of \phi, and I_\phi denotes the set of layers of network \phi;
2) the target style loss is obtained by comparing the feature maps output by the first model with those of the original:

$$loss_{style}(y,\hat{y}) = \sum_{i \in I_\phi} \frac{1}{N_i(\phi)} \left\| G_i^{\phi}(\hat{y}) - G_i^{\phi}(y_s) \right\|_2^2$$

where N_i(\phi), \phi_i(y) and I_\phi are as above; the image style is represented by Gram matrices, and G_i(y) denotes the Gram matrix of the feature map of input image y at the i-th convolutional layer.
3. The image style transfer method combining deep learning and depth perception according to claim 2, characterized in that step 2) specifically comprises computing the depth loss according to the following formula:

$$loss_{depth}(y,\hat{y}) = \sum_{i \in I_\delta} \frac{1}{N_i(\delta)} \left\| \delta_i(\hat{y}) - \delta_i(y) \right\|_2^2$$

where N_i(\delta) denotes the normalization factor of the i-th layer of the depth perception network \delta, \delta_i(y) denotes the feature map of input image y at the i-th convolutional layer of \delta, and I_\delta denotes the set of layers of network \delta.
4. The image style transfer method combining deep learning and depth perception according to claim 3, characterized in that step 3) computes the total loss between the generated image and the original as:

$$L(\hat{y},y) = \lambda_1\,loss_{content}(\hat{y},y) + \lambda_2\,loss_{style}(\hat{y},y) + \lambda_3\,loss_{depth}(\hat{y},y)$$

where \lambda_1, \lambda_2, \lambda_3 denote the weights of the content loss, style loss and depth loss, respectively.
5. The image style transfer method combining deep learning and depth perception according to claim 4, characterized in that step 4) specifically comprises the following steps:
(1) in the image transformation network fw, the descending gradient dω is computed by the L-BFGS optimization method from the loss values obtained in steps 1) to 3) of claim 1, and the fw parameters are adjusted to minimize the total loss; the learning rate is set to 1 × 10⁻³;
(2) the style Gram matrices are added to the content feature maps of the original and averaged to obtain a generated image; the generated image is then repeatedly passed through the perceptual loss network \phi and the depth perception network \delta to obtain new loss values and a stylized generated image;
(3) the descending gradient is computed from the new loss values to continue adjusting the fw parameters;
(4) the above steps are repeated; after approximately 40000 training iterations, the total loss of the model essentially converges to a global minimum.
CN201710596250.XA 2017-07-20 2017-07-20 Image stylized migration method combining deep learning and depth perception Active CN107705242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710596250.XA CN107705242B (en) 2017-07-20 2017-07-20 Image stylized migration method combining deep learning and depth perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710596250.XA CN107705242B (en) 2017-07-20 2017-07-20 Image stylized migration method combining deep learning and depth perception

Publications (2)

Publication Number Publication Date
CN107705242A (en) 2018-02-16
CN107705242B CN107705242B (en) 2021-12-17

Family

ID=61170732

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710596250.XA Active CN107705242B (en) 2017-07-20 2017-07-20 Image stylized migration method combining deep learning and depth perception

Country Status (1)

Country Link
CN (1) CN107705242B (en)

Cited By (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470320A (en) * 2018-02-24 2018-08-31 中山大学 A kind of image stylizing method and system based on CNN
CN108537776A (en) * 2018-03-12 2018-09-14 维沃移动通信有限公司 A kind of image Style Transfer model generating method and mobile terminal
CN108596830A (en) * 2018-04-28 2018-09-28 国信优易数据有限公司 A kind of image Style Transfer model training method and image Style Transfer method
CN108769644A (en) * 2018-06-06 2018-11-06 浙江大学 A kind of binocular animation style rendering intent based on deep learning
CN108765278A (en) * 2018-06-05 2018-11-06 Oppo广东移动通信有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN108846793A (en) * 2018-05-25 2018-11-20 深圳市商汤科技有限公司 Image processing method and terminal device based on image style transformation model
CN108924528A (en) * 2018-06-06 2018-11-30 浙江大学 A kind of binocular stylization real-time rendering method based on deep learning
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 A kind of generation method, device, equipment and the storage medium of stylization image
CN109064428A (en) * 2018-08-01 2018-12-21 Oppo广东移动通信有限公司 A kind of image denoising processing method, terminal device and computer readable storage medium
CN109191444A (en) * 2018-08-29 2019-01-11 广东工业大学 Video area based on depth residual error network removes altering detecting method and device
CN109345446A (en) * 2018-09-18 2019-02-15 西华大学 A kind of image style branching algorithm based on paired-associate learning
CN109447936A (en) * 2018-12-21 2019-03-08 江苏师范大学 A kind of infrared and visible light image fusion method
CN109447137A (en) * 2018-10-15 2019-03-08 聚时科技(上海)有限公司 A kind of image local Style Transfer method based on factoring
CN109949214A (en) * 2019-03-26 2019-06-28 湖北工业大学 A kind of image Style Transfer method and system
CN110084741A (en) * 2019-04-26 2019-08-02 衡阳师范学院 Image wind network moving method based on conspicuousness detection and depth convolutional neural networks
CN110166759A (en) * 2018-05-28 2019-08-23 腾讯科技(深圳)有限公司 The treating method and apparatus of image, storage medium, electronic device
CN110210347A (en) * 2019-05-21 2019-09-06 赵森 A kind of colored jacket layer paper-cut Intelligentized design method based on deep learning
CN110232392A (en) * 2018-03-05 2019-09-13 北京大学 Vision optimization method, optimization system, computer equipment and readable storage medium storing program for executing
CN110310221A (en) * 2019-06-14 2019-10-08 大连理工大学 A kind of multiple domain image Style Transfer method based on generation confrontation network
CN110335206A (en) * 2019-05-31 2019-10-15 平安科技(深圳)有限公司 Smart filter method, apparatus and computer readable storage medium
CN110458906A (en) * 2019-06-26 2019-11-15 重庆邮电大学 A kind of medical image color method based on depth color transfer
CN110490791A (en) * 2019-07-10 2019-11-22 西安理工大学 Dress ornament Graphic Arts generation method based on deep learning Style Transfer
CN110660018A (en) * 2018-09-13 2020-01-07 南京大学 Image-oriented non-uniform style migration method
CN110706151A (en) * 2018-09-13 2020-01-17 南京大学 Video-oriented non-uniform style migration method
CN110895795A (en) * 2018-09-13 2020-03-20 北京工商大学 Improved semantic image inpainting model method
CN110930295A (en) * 2019-10-25 2020-03-27 广东开放大学(广东理工职业学院) Image style migration method, system, device and storage medium
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111260548A (en) * 2018-11-30 2020-06-09 浙江宇视科技有限公司 Mapping method and device based on deep learning
CN111325681A (en) * 2020-01-20 2020-06-23 南京邮电大学 Image style migration method combining meta-learning mechanism and feature fusion
CN111353964A (en) * 2020-02-26 2020-06-30 福州大学 Structure-consistent stereo image style migration method based on convolutional neural network
CN111383165A (en) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 Image processing method, system and storage medium
CN111476708A (en) * 2020-04-03 2020-07-31 广州市百果园信息技术有限公司 Model generation method, model acquisition method, model generation device, model acquisition device, model generation equipment and storage medium
CN111768438A (en) * 2020-07-30 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111950608A (en) * 2020-06-12 2020-11-17 中国科学院大学 Domain self-adaptive object detection method based on contrast loss
CN112508815A (en) * 2020-12-09 2021-03-16 中国科学院深圳先进技术研究院 Model training method and device, electronic equipment and machine-readable storage medium
CN113240576A (en) * 2021-05-12 2021-08-10 北京达佳互联信息技术有限公司 Method and device for training style migration model, electronic equipment and storage medium
CN114792355A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114820908A (en) * 2022-06-24 2022-07-29 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN115249221A (en) * 2022-09-23 2022-10-28 阿里巴巴(中国)有限公司 Image processing method and device and cloud equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090310189A1 (en) * 2008-06-11 2009-12-17 Gallagher Andrew C Determining the orientation of scanned hardcopy medium
CN106228198A (en) * 2016-08-17 2016-12-14 广东工业大学 A super-resolution recognition method for medical CT images
US20160364625A1 (en) * 2015-06-10 2016-12-15 Adobe Systems Incorporated Automatically Selecting Example Stylized Images for Image Stylization Operations Based on Semantic Content
CN106709532A (en) * 2017-01-25 2017-05-24 京东方科技集团股份有限公司 Image processing method and device
CN106780367A (en) * 2016-11-28 2017-05-31 上海大学 HDR photo style transfer method based on dictionary learning
CN106886975A (en) * 2016-11-29 2017-06-23 华南理工大学 An image stylization method capable of running in real time
CN106952224A (en) * 2017-03-30 2017-07-14 电子科技大学 An image style transfer method based on convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JUSTIN JOHNSON: "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", Computer Vision – ECCV 2016: 14th European Conference, Amsterdam, The Netherlands *
刘怡俊 (Liu Yijun): "Research on Information Flow Control Based on Multi-Agent Systems in E-Commerce Systems", China Master's Theses Full-Text Database (Information Science and Technology) *
欠扁的小篮子: "Image Style Transfer Based on Deep Learning", Cnblogs blog post: HTTPS://WWW.CNBLOGS.COM/Z941030/P/7056814.HTML *

Cited By (63)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470320B (en) * 2018-02-24 2022-05-20 中山大学 Image stylization method and system based on CNN
CN108470320A (en) * 2018-02-24 2018-08-31 中山大学 An image stylization method and system based on CNN
CN110232392B (en) * 2018-03-05 2021-08-17 北京大学 Visual optimization method, optimization system, computer device and readable storage medium
CN110232392A (en) * 2018-03-05 2019-09-13 北京大学 Visual optimization method, optimization system, computer device and readable storage medium
CN108537776A (en) * 2018-03-12 2018-09-14 维沃移动通信有限公司 An image style transfer model generation method and mobile terminal
CN108596830A (en) * 2018-04-28 2018-09-28 国信优易数据有限公司 An image style transfer model training method and image style transfer method
CN108596830B (en) * 2018-04-28 2022-04-22 国信优易数据股份有限公司 Image style migration model training method and image style migration method
CN108846793A (en) * 2018-05-25 2018-11-20 深圳市商汤科技有限公司 Image processing method and terminal device based on image style transformation model
CN108846793B (en) * 2018-05-25 2022-04-22 深圳市商汤科技有限公司 Image processing method and terminal equipment based on image style conversion model
CN110166759B (en) * 2018-05-28 2021-10-15 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN110166759A (en) * 2018-05-28 2019-08-23 腾讯科技(深圳)有限公司 Image processing method and apparatus, storage medium, and electronic device
CN108765278A (en) * 2018-06-05 2018-11-06 Oppo广东移动通信有限公司 An image processing method, mobile terminal and computer readable storage medium
CN108765278B (en) * 2018-06-05 2023-04-07 Oppo广东移动通信有限公司 Image processing method, mobile terminal and computer readable storage medium
CN108924528A (en) * 2018-06-06 2018-11-30 浙江大学 A binocular stylization real-time rendering method based on deep learning
CN108769644A (en) * 2018-06-06 2018-11-06 浙江大学 A binocular animation style rendering method based on deep learning
CN108961349A (en) * 2018-06-29 2018-12-07 广东工业大学 Stylized image generation method, device, equipment and storage medium
CN109064428A (en) * 2018-08-01 2018-12-21 Oppo广东移动通信有限公司 An image denoising processing method, terminal device and computer readable storage medium
CN109191444A (en) * 2018-08-29 2019-01-11 广东工业大学 Video region removal tampering detection method and device based on deep residual network
CN110706151B (en) * 2018-09-13 2023-08-08 南京大学 Video-oriented non-uniform style migration method
CN110660018B (en) * 2018-09-13 2023-10-17 南京大学 Image-oriented non-uniform style migration method
CN110660018A (en) * 2018-09-13 2020-01-07 南京大学 Image-oriented non-uniform style migration method
CN110706151A (en) * 2018-09-13 2020-01-17 南京大学 Video-oriented non-uniform style migration method
CN110895795A (en) * 2018-09-13 2020-03-20 北京工商大学 Improved semantic image inpainting model method
CN109345446B (en) * 2018-09-18 2022-12-02 西华大学 Image style transfer algorithm based on dual learning
CN109345446A (en) * 2018-09-18 2019-02-15 西华大学 An image style transfer algorithm based on dual learning
CN109447137A (en) * 2018-10-15 2019-03-08 聚时科技(上海)有限公司 An image local style transfer method based on factorization
CN111260548A (en) * 2018-11-30 2020-06-09 浙江宇视科技有限公司 Mapping method and device based on deep learning
CN111260548B (en) * 2018-11-30 2023-07-21 浙江宇视科技有限公司 Mapping method and device based on deep learning
CN109447936A (en) * 2018-12-21 2019-03-08 江苏师范大学 An infrared and visible light image fusion method
CN111383165A (en) * 2018-12-29 2020-07-07 Tcl集团股份有限公司 Image processing method, system and storage medium
CN111383165B (en) * 2018-12-29 2024-04-16 Tcl科技集团股份有限公司 Image processing method, system and storage medium
CN109949214A (en) * 2019-03-26 2019-06-28 湖北工业大学 An image style transfer method and system
CN110084741A (en) * 2019-04-26 2019-08-02 衡阳师范学院 Image style transfer method based on saliency detection and deep convolutional neural networks
CN110210347B (en) * 2019-05-21 2021-03-23 赵森 Intelligent design method for colored layered paper-cut based on deep learning
CN110210347A (en) * 2019-05-21 2019-09-06 赵森 An intelligent design method for colored layered paper-cut based on deep learning
CN110335206B (en) * 2019-05-31 2023-06-09 平安科技(深圳)有限公司 Intelligent filter method, device and computer readable storage medium
CN110335206A (en) * 2019-05-31 2019-10-15 平安科技(深圳)有限公司 Smart filter method, apparatus and computer readable storage medium
CN110310221A (en) * 2019-06-14 2019-10-08 大连理工大学 A multi-domain image style transfer method based on generative adversarial networks
CN110310221B (en) * 2019-06-14 2022-09-20 大连理工大学 Multi-domain image style migration method based on generation countermeasure network
CN110458906B (en) * 2019-06-26 2024-03-15 广州大鱼创福科技有限公司 Medical image coloring method based on depth color migration
CN110458906A (en) * 2019-06-26 2019-11-15 重庆邮电大学 A medical image colorization method based on deep color transfer
CN110490791A (en) * 2019-07-10 2019-11-22 西安理工大学 Clothing pattern art generation method based on deep learning style transfer
CN110930295B (en) * 2019-10-25 2023-12-26 广东开放大学(广东理工职业学院) Image style migration method, system, device and storage medium
CN110930295A (en) * 2019-10-25 2020-03-27 广东开放大学(广东理工职业学院) Image style migration method, system, device and storage medium
CN111127309A (en) * 2019-12-12 2020-05-08 杭州格像科技有限公司 Portrait style transfer model training method, portrait style transfer method and device
CN111127309B (en) * 2019-12-12 2023-08-11 杭州格像科技有限公司 Portrait style migration model training method, portrait style migration method and device
CN111325681A (en) * 2020-01-20 2020-06-23 南京邮电大学 Image style migration method combining meta-learning mechanism and feature fusion
CN111325681B (en) * 2020-01-20 2022-10-11 南京邮电大学 Image style migration method combining meta-learning mechanism and feature fusion
CN111353964A (en) * 2020-02-26 2020-06-30 福州大学 Structure-consistent stereo image style migration method based on convolutional neural network
CN111353964B (en) * 2020-02-26 2022-07-08 福州大学 Structure-consistent stereo image style migration method based on convolutional neural network
CN111476708A (en) * 2020-04-03 2020-07-31 广州市百果园信息技术有限公司 Model generation method, model acquisition method, model generation device, model acquisition device, model generation equipment and storage medium
CN111950608B (en) * 2020-06-12 2021-05-04 中国科学院大学 Domain self-adaptive object detection method based on contrast loss
CN111950608A (en) * 2020-06-12 2020-11-17 中国科学院大学 Domain self-adaptive object detection method based on contrast loss
CN111768438A (en) * 2020-07-30 2020-10-13 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN111768438B (en) * 2020-07-30 2023-11-24 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN112508815A (en) * 2020-12-09 2021-03-16 中国科学院深圳先进技术研究院 Model training method and device, electronic equipment and machine-readable storage medium
CN113240576A (en) * 2021-05-12 2021-08-10 北京达佳互联信息技术有限公司 Method and device for training style migration model, electronic equipment and storage medium
CN113240576B (en) * 2021-05-12 2024-04-30 北京达佳互联信息技术有限公司 Training method and device for style migration model, electronic equipment and storage medium
CN114820908B (en) * 2022-06-24 2022-11-01 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114792355A (en) * 2022-06-24 2022-07-26 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114820908A (en) * 2022-06-24 2022-07-29 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN114792355B (en) * 2022-06-24 2023-02-24 北京百度网讯科技有限公司 Virtual image generation method and device, electronic equipment and storage medium
CN115249221A (en) * 2022-09-23 2022-10-28 阿里巴巴(中国)有限公司 Image processing method and device and cloud equipment

Also Published As

Publication number Publication date
CN107705242B (en) 2021-12-17

Similar Documents

Publication Publication Date Title
CN107705242A (en) An image stylization transfer method combining deep learning and depth perception
CN107358293A (en) A neural network training method and device
CN106408595A (en) Image rendering method based on neural network painting style learning
CN111402129A (en) Binocular stereo matching method based on joint up-sampling convolutional neural network
CN107464210A (en) An image style transfer method based on generative adversarial networks
CN108470320A (en) An image stylization method and system based on CNN
CN106952224A (en) An image style transfer method based on convolutional neural networks
CN109949214A (en) An image style transfer method and system
CN109977428A (en) An answer acquisition method and device
CN106548208A (en) A fast, intelligent stylization method for photographic images
CN111553968A (en) Method for reconstructing three-dimensional human body animation
CN106776540A (en) A liberalization document creation method
CN107886169A (en) A multi-scale convolution kernel method based on a text-to-image generative adversarial network model
CN109410251B (en) Target tracking method based on dense connection convolution network
CN108171649A (en) An image stylization method that preserves focus information
CN106651765A (en) A method for automatically generating thumbnails using a deep neural network
CN108763718A (en) Fast prediction method for flow field characteristic quantities when the flow object and operating conditions change
CN101833785A (en) Controllable dynamic shape interpolation method with physical third dimension
CN106203628A (en) An optimization method and system for enhancing the robustness of deep learning algorithms
CN107610062A (en) Fast identification and correction method for image geometric distortion based on BP neural network
CN104317195A (en) Improved extreme learning machine-based nonlinear inverse model control method
CN112819692A (en) Real-time arbitrary style transfer method based on dual attention modules
CN106411683A (en) Method and apparatus for determining key social information
Xu et al. Archimedean copula estimation of distribution algorithm based on artificial bee colony algorithm
Rossi et al. Computational Morphologies: Design Rules Between Organic Models and Responsive Architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant