CN108171649A - Image stylization method preserving focus information - Google Patents

Image stylization method preserving focus information

Info

Publication number
CN108171649A
CN108171649A (application CN201711292746.4A)
Authority
CN
China
Prior art keywords
image
network
loss
content
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711292746.4A
Other languages
Chinese (zh)
Other versions
CN108171649B (en)
Inventor
叶武剑
徐佐腾
刘怡俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201711292746.4A priority Critical patent/CN108171649B/en
Publication of CN108171649A publication Critical patent/CN108171649A/en
Application granted granted Critical
Publication of CN108171649B publication Critical patent/CN108171649B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an image stylization method that preserves focus information. A "focal position difference" term is added as a penalty to the conventional image stylization method: the sum of the perceptual loss and the focal loss is used as the total loss, and the weights of the image transform network are adjusted with the Adam algorithm to obtain an optimized network. When a picture is input to this optimized network, it generates an image that preserves the focus information of the original, and the style blends in more naturally. The present invention not only keeps the main semantic content of the original in the generated stylized image but also preserves the image's focus information, and it avoids the simple texture-overlay style transfer of earlier methods, so the result better highlights the subject of the original image.

Description

Image stylization method preserving focus information
Technical field
The present invention relates to the technical fields of image processing and deep learning, and more particularly to an image stylization method that preserves focus information.
Background technology
Existing methods generate images with a residual neural network and compute a perceptual loss by comparing the feature maps obtained when the generated image, the original image, and the style image are passed through a VGG network; the residual network is then trained by backpropagation so that it generates satisfactory pictures with a given style and content. Computing the perceptual loss requires two parts: one part compares the mid- and high-level VGG features of the original image and the generated picture, giving the content loss; the other part compares the low-level VGG features of the style picture and the generated picture, giving the style loss.
For example, Document 1 (Johnson J, Alahi A, Li F F. Perceptual Losses for Real-Time Style Transfer and Super-Resolution [M]. 2016.) discusses an image-difference measure called the "perceptual loss". Instead of directly comparing two pictures pixel by pixel, this method compares the features the pictures produce when passed through a neural network. High-dimensional texture information and shape-contour information of the images are compared in this way to compute the perceptual loss, and finally a neural network is trained that can add a given style to any image.
For example, Document 2 (Gatys L A, Ecker A S, Bethge M. A Neural Algorithm of Artistic Style [J]. Computer Science, 2015.) discusses using gradient descent to repeatedly modify a picture whose pixels are randomly initialized, so as to minimize the loss this picture incurs when passed through a trained neural network, finally obtaining an image that fuses the given style and content. The gradient-descent method of Document 2 uses a modified VGG-19 as the loss network: the information of the target content image and the style image is recorded in one forward pass; after each forward pass with the image being modified as input, the difference between this image and the targets is obtained, the loss and gradients are computed, and the target image is updated.
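The iterative scheme of Document 2 can be sketched in miniature as follows. This is a toy illustration, not the patent's method: `loss_grad` stands in for the gradient that the real method obtains from the VGG-based loss network, and all names are illustrative.

```python
import numpy as np

def stylize_by_descent(loss_grad, shape, steps=200, lr=0.1, seed=0):
    """Document 2's scheme in miniature: start from a randomly
    initialized image and repeatedly step every pixel against the
    loss gradient until the image approaches the desired result."""
    rng = np.random.default_rng(seed)
    img = rng.standard_normal(shape)   # each pixel randomly initialized
    for _ in range(steps):
        img -= lr * loss_grad(img)     # one gradient-descent update
    return img
```

With a quadratic stand-in loss pulling the image toward a target, the loop converges to that target, which mirrors why the real method needs hundreds of forward and backward passes per image.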
In the method for document 1, a residual error neural network is obtained using loss training is perceived, a kind of spy of the network bound Fixed image style.By the network inputs for needing to carry out style conversion to the network, after a propagated forward, can obtain Version to after the image stylization.But the stylized picture obtained in this way, it is to picture entirety, indifference Not, it without the stylization stressed, is simply superimposed in picture to be converted similar to by the texture information in target style figure.Style It is more general to change effect.
The image stylization process of Document 2 acts directly on the target image; generating one stylized image usually requires hundreds of forward and backward passes, with gradient descent repeatedly modifying every pixel of a randomly initialized image until it approaches the desired result. This method has the same drawback as Document 1: the stylization is global and undifferentiated, without emphasis. Moreover, since every stylized image requires many forward and backward passes, generating a stylized image with this method is time-consuming.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a stylization method that preserves image focus information, so that the stylized image does not lose the information the original intended to express and retains its focus and emphasis.
To achieve the above object, the technical solution provided by the present invention comprises the following steps:
S1. Build a residual neural network as the image transform network;
The residual neural network has 12 layers, including 5 residual blocks, each containing two convolutional layers with 3 × 3 kernels. It has strong expressive power and can record the information of the target style image. Only one style is specified: during training, images with many different contents are fed into the network to obtain their stylized versions, so that the image transform network is trained into a network that has recorded the target style and can stylize arbitrary content images.
S2. Feed the image to be processed into the image transform network to obtain a stylized image;
S3. Use a VGG network as the perceptual loss network: first input the target style image into this network to capture the target style information, then feed the image to be processed and the generated stylized image into the network and compute the perceptual loss;
This loss consists of two parts: one part, called the content loss, represents the difference between the generated image and the original image in terms of content contours; the other part, called the style loss, represents the difference between the generated image and the target style image in terms of tone and texture;
S4. Feed the generated stylized image and the original image separately into the focal loss network, compute the matrix products, and take the mean squared error between the two results as the focal loss;
The focal loss network used in this step is a residual neural network with an 18-layer structure, different in depth from the image transform network, and already trained. The weights of the network's final Softmax layer form a 1000 × 512 matrix; for each classification result, a 512-dimensional vector is obtained. Multiplying this vector with the activations of the network's last convolutional layer as a matrix product yields the network's implicit attention to particular parts of the image;
S5. Use the sum of the perceptual loss and the focal loss as the total loss, and adjust the weights of the image transform network with the Adam algorithm;
S6. Take an image from the training set and input it into the adjusted image transform network; repeat steps S2 to S5 until the maximum number of iterations is reached, obtaining the optimized network;
S7. Input the picture to be stylized into the optimized network to obtain a stylized image that preserves the focus information.
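Step S5 adjusts the transform network's weights with the Adam algorithm; one Adam update for a single weight can be sketched as follows. The hyperparameter defaults are Adam's customary ones, not values stated in the patent.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a weight of the image transform network
    (step S5). m and v are the running first/second moment estimates;
    t is the 1-based step count used for bias correction."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

In training, `grad` would be the gradient of the total loss (perceptual loss plus focal loss) with respect to that weight.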
Further, computing the perceptual loss in step S3 comprises the following steps:
S31. Select the feature maps of the four layers relu1_2, relu2_2, relu3_3 and relu4_3 of the perceptual loss network as style feature maps, and the feature map of the relu3_3 layer as the content feature map;
S32. Forward-propagate the target style image once through the perceptual loss network, and capture and save the style feature map of each layer as the target style feature maps for the training process;
S33. Read a picture from the dataset as the target content image, input it into the perceptual loss network, and capture and save its content feature map as the target content feature map for this training step;
S34. Input the picture read in step S33 into the image transform network to obtain the generated stylized image; input this image into the perceptual loss network to obtain its content feature map and style feature maps;
S35. Compute the mean squared error between the content feature map of the stylized image from step S34 and the target content feature map from step S33, as the content loss part of the perceptual loss;
S36. Compute the mean squared error between the style feature maps of the stylized image from step S34 and the target style feature maps from step S32, as the style loss part of the perceptual loss;
S37. Let the size of the j-th layer's feature map be C × H × W; the perceptual loss is computed with the following formula:
S38. Add up the losses of all layers to obtain the total perceptual loss.
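The formula referenced in S37 appears as an image in the original publication. Assuming the patent follows the perceptual-loss definition of Document 1 (Johnson et al.), which it cites — content loss as a mean squared error over a C × H × W feature map, style loss as a mean squared error between Gram matrices — steps S35–S38 can be sketched as follows; all function and variable names are illustrative.

```python
import numpy as np

def gram_matrix(feat):
    """Gram matrix of a C x H x W feature map: channel correlations
    that summarize tone/texture while discarding spatial layout,
    normalized by C*H*W as in Document 1."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(feat_gen, feat_target):
    """S35: mean squared error between content feature maps."""
    return np.mean((feat_gen - feat_target) ** 2)

def style_loss(feat_gen, feat_target):
    """S36: mean squared error between Gram matrices."""
    return np.mean((gram_matrix(feat_gen) - gram_matrix(feat_target)) ** 2)

def perceptual_loss(content_feat, style_feats, target_content, target_styles):
    """S38: content loss at the relu3_3 feature map plus style losses
    summed over the four chosen layers. The generated image's features
    come from step S34; the targets from steps S32-S33."""
    loss = content_loss(content_feat, target_content)
    for f_gen, f_tgt in zip(style_feats, target_styles):
        loss += style_loss(f_gen, f_tgt)
    return loss
```

In the full method the feature maps would come from the VGG perceptual loss network; here they are plain arrays so the arithmetic can be checked in isolation.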
Further, computing the focal loss in step S4 comprises the following steps:
S41. Extract the weights of the final Softmax layer of the focal loss network;
S42. Take a content image from the dataset and obtain the stylized image produced by passing it through the image transform network; then scale and normalize both the content image and the generated stylized image as preprocessing;
S43. Forward-propagate the preprocessed content image and stylized image once each through the focal loss network, and capture the classification result of each picture and the activations of its last convolutional layer;
S44. According to the index given by the classification result, extract the corresponding vector from the weight data of step S41 and multiply it with the activations of step S43 as a matrix product, obtaining the initial focus information of the content image and of the stylized image;
S45. Scale the initial focus information to the size of the content image and normalize it to the range 0 to 256, obtaining the focus localization maps of the content image and of the stylized image;
S46. Compute the difference between the focus localization maps of the content image and the stylized image to obtain the focal loss; the focal loss is computed with the following formula:
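Steps S41–S46 read like class activation mapping: weight the last convolutional layer's activations by the Softmax weights of the predicted class. Under that assumed interpretation (the formula in S46 appears as an image in the original), the focus map and focal loss can be sketched as follows; the patent's class-weight rows are 512-dimensional, but the sketch works for any channel count, and all names are illustrative.

```python
import numpy as np

def focus_map(softmax_weights, activations, class_idx):
    """Assumed reading of steps S44-S45: a class-activation-style
    'focus localization map'. softmax_weights has one row per class
    (512-dim in the patent); activations is the last conv layer's
    C x H x W output."""
    c, h, w = activations.shape
    w_vec = softmax_weights[class_idx]              # vector for the class
    cam = (w_vec @ activations.reshape(c, h * w)).reshape(h, w)
    cam = cam - cam.min()                           # normalize the map
    if cam.max() > 0:
        cam = cam / cam.max() * 255.0
    return cam

def focal_loss(map_content, map_stylized):
    """S46: mean squared difference between the two focus maps."""
    return np.mean((map_content - map_stylized) ** 2)
```

In the full method the two maps would come from the content image and the stylized image passed through the trained ResNet-18 focal loss network; identical maps give zero focal loss, so only focus shifts are penalized.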
Compared with the prior art, the principle of this solution is as follows:
Traditional image stylization methods only compute the perceptual loss and then train the residual neural network (the image transform network) by backpropagation, so that the image transform network generates satisfactory pictures with a given style and content. This solution adds the "focal position difference" as a penalty term to the traditional image stylization method: the sum of the perceptual loss and the focal loss is used as the total loss, and the weights of the image transform network are adjusted with the Adam algorithm to obtain the optimized network. When a picture is input to this optimized network, it generates a stylized image that neither changes the focus information of the original nor is a simple texture overlay.
Compared with the prior art, this solution has the following two advantages:
1. The generated stylized image still retains the main semantic content of the original image, preserving the image's focus information.
2. It avoids the simple texture-overlay style transfer of earlier methods, so the result better highlights the subject of the original image.
Description of the drawings
Fig. 1 is a structural diagram of the image stylization method preserving focus information of the present invention;
Fig. 2 is a structural diagram of the perceptual loss network in the present invention;
Fig. 3 is a comparison of the original image, the stylized image obtained by the method of Gatys et al. in Document 2, the stylized image obtained by the present invention, and the focus localization maps.
Specific embodiment
The invention will be further described below with reference to a specific embodiment:
As shown in Fig. 1, in the image stylization method preserving focus information described in this embodiment, X is the image to be stylized, different in each training iteration, and also the current target content image; Xs is the given, desired target style image; Y is the image produced by the current image transform network, which fuses the content of X with the style of Xs while keeping the focus information of X unchanged;
The steps are as follows:
S1. Build a residual neural network as the image transform network;
S2. Feed the image to be processed into the image transform network to obtain the stylized image;
S3. Use a VGG network as the perceptual loss network: first input the target style image into this network to capture the target style information, then feed the image to be processed and the generated stylized image into the network and compute the perceptual loss; computing the perceptual loss comprises the following steps:
S31. Select the feature maps of the four layers relu1_2, relu2_2, relu3_3 and relu4_3 of the perceptual loss network as style feature maps, and the feature map of the relu3_3 layer as the content feature map;
S32. Forward-propagate the target style image once through the perceptual loss network, and capture and save the style feature map of each layer as the target style feature maps for the training process;
S33. Read a picture from the dataset as the target content image, input it into the perceptual loss network, and capture and save its content feature map as the target content feature map for this training step;
S34. Input the picture read in step S33 into the image transform network to obtain the generated stylized image; input this image into the perceptual loss network to obtain its content feature map and style feature maps;
S35. Compute the mean squared error between the content feature map of the stylized image from step S34 and the target content feature map from step S33, as the content loss part of the perceptual loss;
S36. Compute the mean squared error between the style feature maps of the stylized image from step S34 and the target style feature maps from step S32, as the style loss part of the perceptual loss;
S37. Let the size of the j-th layer's feature map be C × H × W; the perceptual loss is computed with the following formula:
S38. Add up the losses of all layers to obtain the total perceptual loss;
S4. Feed the generated stylized image and the original image separately into the focal loss network, compute the matrix products, and take the mean squared error between the two results as the focal loss; computing the focal loss comprises the following steps:
S41. Use the ResNet-18 residual neural network as the focal loss network, and extract the weights of its final Softmax layer;
S42. Take a content image from the dataset and obtain the stylized image produced by passing it through the image transform network; then scale and normalize both the content image and the generated stylized image as preprocessing;
S43. Forward-propagate the preprocessed content image and stylized image once each through the focal loss network, and capture the classification result of each picture and the activations of its last convolutional layer;
S44. According to the index given by the classification result, extract the corresponding vector from the weight data of step S41 and multiply it with the activations of step S43 as a matrix product, obtaining the initial focus information of the content image and of the stylized image;
S45. Scale the initial focus information to the size of the content image and normalize it to the range 0 to 256, obtaining the focus localization maps of the content image and of the stylized image;
S46. Compute the difference between the focus localization maps of the content image and the stylized image to obtain the focal loss; the focal loss is computed with the following formula:
S5. Use the sum of the perceptual loss and the focal loss as the total loss, and adjust the weights of the image transform network with the Adam algorithm;
S6. Take an image from the training set and input it into the adjusted image transform network; repeat steps S2 to S5 until the maximum number of iterations is reached, obtaining the optimized network;
S7. Input the picture to be stylized into the optimized network to obtain a stylized image that preserves the focus information.
This embodiment solves problems of traditional image stylization: the style transfer is comparatively rigid and simplistic, the style does not blend well with the content of the original, and the focus information of the original content image is shifted or lost. The stylized result not only retains the main semantic information the content image intends to express but is also more natural, avoiding the simple texture-overlay style transfer of earlier methods, so the result better highlights the subject of the original image.
The embodiment described above is only a preferred embodiment of the invention, and the scope of implementation of the present invention is not limited thereby; therefore, all variations made according to the shapes and principles of the present invention shall be covered by the protection scope of the present invention.

Claims (3)

1. An image stylization method preserving focus information, characterized by comprising the following steps:
S1. Build a residual neural network as the image transform network;
S2. Feed the image to be processed into the image transform network to obtain a stylized image;
S3. Use a VGG network as the perceptual loss network: first input the target style image into this network to capture the target style information, then feed the image to be processed and the generated stylized image into the network and compute the perceptual loss;
S4. Feed the generated stylized image and the original image separately into the focal loss network, compute the matrix products, and take the mean squared error between the two results as the focal loss;
S5. Use the sum of the perceptual loss and the focal loss as the total loss, and adjust the weights of the image transform network with the Adam algorithm;
S6. Take an image from the training set and input it into the adjusted image transform network; repeat steps S2 to S5 until the maximum number of iterations is reached, obtaining the optimized network;
S7. Input the picture to be stylized into the optimized network to obtain a stylized image that preserves the focus information.
2. The image stylization method preserving focus information according to claim 1, characterized in that computing the perceptual loss in step S3 comprises the following steps:
S31. Select the feature maps of the four layers relu1_2, relu2_2, relu3_3 and relu4_3 of the perceptual loss network as style feature maps, and the feature map of the relu3_3 layer as the content feature map;
S32. Forward-propagate the target style image once through the perceptual loss network, and capture and save the style feature map of each layer as the target style feature maps for the training process;
S33. Read a picture from the dataset as the target content image, input it into the perceptual loss network, and capture and save its content feature map as the target content feature map for this training step;
S34. Input the picture read in step S33 into the image transform network to obtain the generated stylized image; input this image into the perceptual loss network to obtain its content feature map and style feature maps;
S35. Compute the mean squared error between the content feature map of the stylized image from step S34 and the target content feature map from step S33, as the content loss part of the perceptual loss;
S36. Compute the mean squared error between the style feature maps of the stylized image from step S34 and the target style feature maps from step S32, as the style loss part of the perceptual loss;
S37. Let the size of the j-th layer's feature map be C × H × W; the perceptual loss is computed with the following formula:
S38. Add up the losses of all layers to obtain the total perceptual loss.
3. The image stylization method preserving focus information according to claim 1, characterized in that computing the focal loss in step S4 comprises the following steps:
S41. Use the ResNet-18 residual neural network as the focal loss network, and extract the weights of its final Softmax layer;
S42. Take a content image from the dataset and obtain the stylized image produced by passing it through the image transform network; then scale and normalize both the content image and the generated stylized image as preprocessing;
S43. Forward-propagate the preprocessed content image and stylized image once each through the focal loss network, and capture the classification result of each picture and the activations of its last convolutional layer;
S44. According to the index given by the classification result, extract the corresponding vector from the weight data of step S41 and multiply it with the activations of step S43 as a matrix product, obtaining the initial focus information of the content image and of the stylized image;
S45. Scale the initial focus information to the size of the content image and normalize it to the range 0 to 256, obtaining the focus localization maps of the content image and of the stylized image;
S46. Compute the difference between the focus localization maps of the content image and the stylized image to obtain the focal loss; the focal loss is computed with the following formula:
CN201711292746.4A 2017-12-08 2017-12-08 Image stylization method for keeping focus information Active CN108171649B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711292746.4A CN108171649B (en) 2017-12-08 2017-12-08 Image stylization method for keeping focus information


Publications (2)

Publication Number Publication Date
CN108171649A (en) 2018-06-15
CN108171649B (en) 2021-08-17

Family

ID=62525490

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711292746.4A Active CN108171649B (en) 2017-12-08 2017-12-08 Image stylization method for keeping focus information

Country Status (1)

Country Link
CN (1) CN108171649B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006009257A1 (en) * 2004-07-23 2006-01-26 Matsushita Electric Industrial Co., Ltd. Image processing device and image processing method
US20090310863A1 (en) * 2008-06-11 2009-12-17 Gallagher Andrew C Finding image capture date of hardcopy medium
CN105913377A (en) * 2016-03-24 2016-08-31 南京大学 Image splicing method for reserving image correlation information
CN106952224A (en) * 2017-03-30 2017-07-14 电子科技大学 A kind of image style transfer method based on convolutional neural networks
CN107292875A (en) * 2017-06-29 2017-10-24 西安建筑科技大学 A kind of conspicuousness detection method based on global Local Feature Fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUSTIN JOHNSON et al.: "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", ECCV 2016: Computer Vision – ECCV 2016 *
周娟丽 et al.: "An improved image color transfer method" (一种改进的图像色彩迁徙方法), Computer Engineering (《计算机工程》) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109144641A (en) * 2018-08-14 2019-01-04 四川虹美智能科技有限公司 A kind of method and device showing image by refrigerator display screen
CN109345446A (en) * 2018-09-18 2019-02-15 西华大学 Image style transfer algorithm based on dual learning
CN109345446B (en) * 2018-09-18 2022-12-02 西华大学 Image style transfer algorithm based on dual learning
CN109559363A (en) * 2018-11-23 2019-04-02 网易(杭州)网络有限公司 Stylized processing method, device, medium and the electronic equipment of image
CN109559363B (en) * 2018-11-23 2023-05-23 杭州网易智企科技有限公司 Image stylization processing method and device, medium and electronic equipment
CN111860823A (en) * 2019-04-30 2020-10-30 北京市商汤科技开发有限公司 Neural network training method, neural network training device, neural network image processing method, neural network image processing device, neural network image processing equipment and storage medium
CN111860823B (en) * 2019-04-30 2024-06-11 北京市商汤科技开发有限公司 Neural network training method, neural network image processing method, neural network training device, neural network image processing equipment and storage medium
CN112700365A (en) * 2019-10-22 2021-04-23 财团法人工业技术研究院 Image conversion method and image conversion network
CN111160138A (en) * 2019-12-11 2020-05-15 杭州电子科技大学 Fast face exchange method based on convolutional neural network
WO2022204868A1 (en) * 2021-03-29 2022-10-06 深圳高性能医疗器械国家研究院有限公司 Method for correcting image artifacts on basis of multi-constraint convolutional neural network
CN113469923A (en) * 2021-05-28 2021-10-01 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113469923B (en) * 2021-05-28 2024-05-24 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108171649B (en) 2021-08-17

Similar Documents

Publication Publication Date Title
CN108171649A (en) A kind of image stylizing method for keeping focus information
CN110119780B (en) Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network
CN108399616B (en) Orthopedics disease lesion classification and classification method based on depth residual error network
CN109308679A (en) Image style conversion method and device, equipment, storage medium
CN107705242A (en) A kind of image stylization moving method of combination deep learning and depth perception
CN108765319A (en) A kind of image de-noising method based on generation confrontation network
CN111080511A (en) End-to-end face exchange method for high-resolution multi-feature extraction
CN110570377A (en) group normalization-based rapid image style migration method
CN104899921B (en) Single-view videos human body attitude restoration methods based on multi-modal own coding model
CN107358293A (en) A kind of neural network training method and device
CN106447626A (en) Blurred kernel dimension estimation method and system based on deep learning
WO2017021322A1 (en) Method and device for image synthesis
CN110363068B (en) High-resolution pedestrian image generation method based on multiscale circulation generation type countermeasure network
CN107590497A (en) Off-line Handwritten Chinese Recognition method based on depth convolutional neural networks
Zhang et al. Inkthetics: a comprehensive computational model for aesthetic evaluation of Chinese ink paintings
CN108924528B (en) Binocular stylized real-time rendering method based on deep learning
CN110458085A (en) Video behavior recognition methods based on attention enhancing three-dimensional space-time representative learning
CN110322529A (en) A method of it is painted based on deep learning aided art
CN114331830B (en) Super-resolution reconstruction method based on multi-scale residual error attention
CN111627101A (en) Three-dimensional human body reconstruction method based on graph convolution
CN114581560A (en) Attention mechanism-based multi-scale neural network infrared image colorizing method
CN111814891A (en) Medical image synthesis method, device and storage medium
Li et al. High-resolution network for photorealistic style transfer
Liu et al. Facial image inpainting using attention-based multi-level generative network
CN116645287A (en) Diffusion model-based image deblurring method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant