CN113496460A - Neural style migration method and system based on feature adjustment - Google Patents

Neural style migration method and system based on feature adjustment

Info

Publication number
CN113496460A
CN113496460A
Authority
CN
China
Prior art keywords
style
layer
content
layers
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010258460.XA
Other languages
Chinese (zh)
Other versions
CN113496460B (en)
Inventor
刘家瑛
汪文靖
许继征
张莉
王悦
郭宗明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Beijing Douyin Information Service Co Ltd
Original Assignee
Peking University
Beijing ByteDance Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University and Beijing ByteDance Technology Co Ltd
Priority to CN202010258460.XA
Publication of CN113496460A
Application granted
Publication of CN113496460B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a neural style migration method and system based on feature adjustment, belonging to the field of image and video stylization.

Description

Neural style migration method and system based on feature adjustment
Technical Field
The invention belongs to the field of image and video stylization, and in particular relates to a neural style migration method and system based on feature adjustment.
Background
Style migration aims to transfer the style of a specific image onto a given target image or video, enabling stylized images or videos to be generated in batches. In recent years, style migration has received increasing attention from academia.
Style migration methods can be divided into three categories. Block-matching methods establish a matching relation between blocks of the reference style image and blocks of the target content image, and realize style migration by transferring and fusing those blocks. Iteration-based deep learning methods extract features with a classification neural network pre-trained on a large dataset, use the mean square error of the features as the content loss term and the distance between feature distributions as the style loss term, and realize style migration by iteratively minimizing the loss function. Forward-pass deep learning methods design a module with a style migration function, train it on a large dataset, and require only a single forward pass of the parameters at inference time.
However, block-matching methods cannot transfer the semantic information of a style; iteration-based deep learning methods require a long iterative process and heavy computation; and existing forward-pass deep learning methods produce poor style transfer results, so none of them meets the requirements of practical applications.
Disclosure of Invention
To address these technical problems, the invention provides a neural style migration method and system based on feature adjustment. The method and system can transfer the artistic expression of a specified reference style image onto a target content image or video frame, stylizing the target and improving its subjective visual quality and artistic effect.
The technical solution adopted by the invention is as follows:
A neural style migration method based on feature adjustment comprises the following steps:
training a neural style migration network model using a training dataset; the neural style migration network model comprises a content encoder, a style encoder, a decorator, and a decoder; the content encoder and the style encoder each comprise a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the final convolutional layer; the decorator comprises a single-sample normalization layer and several style decoration modules; the decoder comprises alternating adaptive single-sample normalization layers and residual blocks, followed by a convolutional layer at the end;
inputting the target content image or video frame and the reference style image into the trained neural style migration network model, which performs the following steps:
inputting the target content image or video frame into the content encoder, which processes it into a content code;
inputting the reference style image into the style encoder, where the first convolutional layer and each convolutional layer preceded by max pooling output a style code;
inputting the content code into the decorator, while feeding a style code into each style decoration module;
and inputting the decorator's output into the decoder, where each adaptive single-sample normalization layer in turn receives the mean and variance of a style code from a different layer of the style encoder, producing the final style migration result image or video frame (a schematic sketch of this pipeline follows).
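For orientation only, the four-stage forward pass can be sketched in PyTorch with placeholder modules; the stand-in layers below are assumptions that merely illustrate the data flow, not the patent's reference implementation (concrete sketches of each sub-network follow in the detailed description):

import torch
import torch.nn as nn

# Placeholder sub-networks; the real structures are described in the embodiment.
E_c = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # content encoder
E_s = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # style encoder
M = nn.Conv2d(64, 64, 3, padding=1)                             # decorator
D = nn.Conv2d(64, 3, 3, padding=1)                              # decoder

def stylize(C: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    content_code = E_c(C)               # step 1: content code
    style_code = E_s(S)                 # step 2: style codes (the real model taps
                                        # the 1st conv layer and each pooled layer)
    decorated = M(content_code)         # step 3: decoration, style codes injected
    mean = style_code.mean(dim=(2, 3))  # step 4: statistics that the real decoder's
    var = style_code.var(dim=(2, 3))    # adaptive normalization layers consume
    _ = (mean, var)                     # ignored by this placeholder decoder
    return D(decorated)

O = stylize(torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256))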
Further, the training dataset includes a content dataset composed of photographic images and a style dataset composed of paintings; during training, the content dataset is input to the content encoder and the style dataset to the style encoder.
Furthermore, the content encoder and the style encoder each have 9 convolutional layers, with max-pooling downsampling before the 3rd, 5th, and 9th convolutional layers; the decorator has 3 style decoration modules; and the decoder has 4 adaptive single-sample normalization layers, 3 residual blocks, and 1 convolutional layer.
Furthermore, each style decoration module comprises at least one convolution kernel prediction unit, several convolutional layers, and linear rectification functions; the convolution kernel prediction unit comprises convolutional layers, a global pooling layer, a merging layer, and a fully-connected layer.
Further, each residual block includes 1 upsampling layer, 3 convolutional layers, 2 leaky rectification functions (Leaky ReLU), and 2 single-sample normalization layers.
Furthermore, after the upsampling layer of the residual block samples its input, the result is fed both to 1 convolutional layer and to a multi-layer structure in which the other 2 convolutional layers alternate with the 2 leaky rectification functions and 2 single-sample normalization layers; the two outputs are summed to form the block's output.
A neural style migration system based on feature adjustment comprises a neural style migration network model which, once trained, processes an input target content image or video frame and a reference style image to produce a style migration result image or video frame. The neural style migration network model comprises:
a content encoder, comprising a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the last convolutional layer; it processes the target content image into a content code;
a style encoder, comprising a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the last convolutional layer; it processes the reference style image, with the first convolutional layer and each max-pooled convolutional layer outputting a style code;
a decorator, comprising a single-sample normalization layer followed by several style decoration modules; the single-sample normalization layer receives the content code and passes its output to the style decoration modules, and each style decoration module combines the data from the preceding layer with a style code and outputs the result to the next layer;
a decoder, comprising alternating adaptive single-sample normalization layers and residual blocks plus a convolutional layer at the end; it processes the decorator's output, with the adaptive single-sample normalization layers receiving in turn the mean and variance of style codes from different layers of the style encoder, to produce the final style migration result image or video frame.
By adjusting the distribution of neural network features along both the spatial and the channel dimensions, the invention achieves high-quality style migration with only a single forward pass of the parameters. Compared with the prior art, the method achieves a better overall trade-off among preserving the semantic structure of the target content, expressing the reference style, and processing speed, and in particular improves the preservation of temporal continuity when the target content is a video.
Drawings
FIG. 1 is a block diagram of a neural style migration network used in an embodiment of the present invention.
FIG. 2 is a block diagram of a style decoration module used in an embodiment of the present invention.
FIGS. 3A-3C show a target content image, a reference style image, and the corresponding style migration result, respectively, according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned and other features and advantages of the invention more comprehensible, embodiments accompanied with figures are described in detail below. It should be noted that the specific numbers of layers, modules, and functions and the arrangement of certain layers given in the following examples are only a preferred implementation, and the invention is not limited thereto.
The embodiment discloses a neural style migration method based on feature adjustment. Taking style migration of a target content image as an example, the steps are as follows:
step 1: a plurality of photographic images and a plurality of pictorial images are collected, the photographic images forming a content data set and the pictorial images forming a style data set.
Step 2: build the neural style migration network model.
The network structure is shown in FIG. 1. The model is divided into four sub-networks: a content encoder E_c, a style encoder E_s, a decorator M, and a decoder D. The content encoder E_c consists of 9 consecutive convolutional layers, each followed by a linear rectification function (ReLU); downsampling with max pooling is applied before the 3rd, 5th, and 9th convolutional layers, and the output of the last layer is the content code L. The style encoder E_s has the same network structure as the content encoder E_c, but in addition to the final-layer output it also outputs the results of the 1st, 3rd, and 5th layers.
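For illustration, this encoder layout can be written as the following PyTorch sketch; the 3x3 kernels and channel widths are assumptions made for the example, since the patent does not fix them:

import torch
import torch.nn as nn

def build_encoder() -> nn.Sequential:
    # 9 conv layers, each followed by ReLU; max-pooling downsampling is applied
    # before the 3rd, 5th and 9th conv layers, matching the text above.
    chans = [3, 32, 32, 64, 64, 128, 128, 128, 128, 256]  # assumed widths
    layers = []
    for i in range(1, 10):                 # conv layers numbered 1..9 as in the text
        if i in (3, 5, 9):
            layers.append(nn.MaxPool2d(2))
        layers.append(nn.Conv2d(chans[i - 1], chans[i], 3, padding=1))
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

E_c = build_encoder()                      # content encoder
E_s = build_encoder()                      # style encoder: same structure; its
                                           # 1st/3rd/5th/9th layer outputs would be
                                           # tapped (e.g. with forward hooks)
L_code = E_c(torch.rand(1, 3, 256, 256))   # content code, here 1 x 256 x 32 x 32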
The decorator M consists of one single-sample normalization layer (Instance Normalization) and three style decoration modules; its input is the content code L. The structure of the style decoration module is shown in FIG. 2: in addition to the output of the previous layer, a style code is also fed in. The previous layer's output and the style code are passed to a convolution kernel prediction unit: each is sent through a convolutional layer and a global pooling layer in turn, the two results are merged, and the merged vector is fed into a fully-connected layer to produce a dynamic convolution kernel. The previous layer's output then passes through a module in which several convolutional layers, the dynamic convolutional layer, and linear rectification functions alternate, and the result is added to the original input to give the feature style decoration result.
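A minimal sketch of one style decoration module follows; a single dynamic convolution per module, 64 channels, and 3x3 kernels are assumptions for the example, none of which the text fixes:

import torch
import torch.nn as nn
import torch.nn.functional as F

class StyleDecoration(nn.Module):
    # Kernel prediction unit: the previous-layer output and the style code each
    # pass through a conv layer and global pooling; the results are merged and a
    # fully-connected layer maps them to a dynamic convolution kernel.
    def __init__(self, c: int = 64, k: int = 3):
        super().__init__()
        self.c, self.k = c, k
        self.pre = nn.Conv2d(c, c, 3, padding=1)         # plain conv before the dynamic conv
        self.feat_conv = nn.Conv2d(c, c, 3, padding=1)   # prediction branch: features
        self.style_conv = nn.Conv2d(c, c, 3, padding=1)  # prediction branch: style code
        self.fc = nn.Linear(2 * c, c * c * k * k)        # merged vector -> kernel weights

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        fx = self.feat_conv(x).mean(dim=(2, 3))          # conv + global average pooling
        fs = self.style_conv(style).mean(dim=(2, 3))
        w = self.fc(torch.cat([fx, fs], dim=1))          # merge, then fully-connected
        w = w.view(-1, self.c, self.c, self.k, self.k)   # per-sample dynamic kernel
        h = F.relu(self.pre(x))
        h = torch.cat([F.conv2d(h[i:i + 1], w[i], padding=self.k // 2)
                       for i in range(h.shape[0])])      # apply the predicted kernel
        return F.relu(h) + x                             # add back the original input

m = StyleDecoration()
y = m(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))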
The decoder D consists of 4 adaptive single-sample normalization layers (Adaptive Instance Normalization), 3 residual blocks, and 1 convolutional layer, arranged alternately. The 4 adaptive single-sample normalization layers receive, in turn, the mean and variance of style codes from different layers of the style encoder. Each residual block is composed of one nearest-neighbor upsampling layer, 3 convolutional layers, two leaky rectification functions (Leaky ReLU), and two single-sample normalization layers, as shown in FIG. 1. The upsampling layer of the residual block samples the input data and feeds the result both to 1 convolutional layer and to a multi-layer structure in which the other 2 convolutional layers alternate with the 2 leaky rectification functions and 2 single-sample normalization layers; the two outputs are summed to form the block's output. The final output of the decoder is the style migration result O.
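The adaptive normalization and the residual block can be sketched as follows; the AdaIN form below (instance-normalize, then scale and shift by the style statistics) and the channel widths are assumptions for the example:

import torch
import torch.nn as nn
import torch.nn.functional as F

def adain(x, mean, var, eps=1e-5):
    # Adaptive single-sample normalization: renormalize x to the per-channel
    # mean/variance taken from a style code of the style encoder.
    b, c = mean.shape
    return F.instance_norm(x) * (var + eps).sqrt().view(b, c, 1, 1) + mean.view(b, c, 1, 1)

class ResidualBlock(nn.Module):
    # 1 nearest-neighbor upsampling layer, 3 conv layers, 2 Leaky ReLUs and
    # 2 instance-norm layers: the upsampled input feeds one conv (skip path)
    # and the alternating conv/LeakyReLU/norm stack; the two results are summed.
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode='nearest')
        self.skip = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.2),
            nn.InstanceNorm2d(c_out),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.2),
            nn.InstanceNorm2d(c_out),
        )

    def forward(self, x):
        x = self.up(x)
        return self.skip(x) + self.body(x)

blk = ResidualBlock(256, 128)
h = adain(torch.rand(1, 256, 32, 32), torch.rand(1, 256), torch.rand(1, 256))
h = blk(h)                                   # -> 1 x 128 x 64 x 64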
And step 3: and training a neural style migration network model.
The overall loss function of the model is:

L = λ_style·L_style + λ_content·L_content + λ_recon·L_recon + λ_tv·L_tv + λ_temp·L_temp,

where λ_style, λ_content, λ_recon, λ_tv, and λ_temp are weight terms; typically λ_style is set to 20, λ_content to 1, λ_recon to 100, λ_tv to 10, and λ_temp to 150.
L_style is the style loss term:

L_style = Σ_l ( ||Mean(Φ_l(S)) - Mean(Φ_l(M(S,C)))||_2 + ||Var(Φ_l(S)) - Var(Φ_l(M(S,C)))||_2 ),

where Φ_l denotes the features of a pre-trained image classification model, Mean(·) is the mean of the features, and Var(·) is the variance of the features; l usually ranges over the RELU1_1, RELU2_1, RELU3_1, and RELU4_1 layers of the VGG19 model; S is the reference style image, C is the target content image, and M(S,C) is the style migration result produced by the style migration model M from inputs S and C.
L_content is the content loss term:

L_content = Σ_l ||Φ_l(C) - Φ_l(M(S,C))||_2,

where l usually takes the RELU4_1 layer of the VGG19 model.
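As an illustration of these two terms, the following sketch taps the RELU1_1/2_1/3_1/4_1 activations of torchvision's VGG19; the layer indices, the squared-error form of the norms, and the use of torchvision's pre-trained weights are implementation assumptions:

import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
TAPS = {1: 'relu1_1', 6: 'relu2_1', 11: 'relu3_1', 20: 'relu4_1'}  # assumed indices

def vgg_feats(x: torch.Tensor) -> dict:
    feats = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in TAPS:
            feats[TAPS[i]] = x
    return feats

def style_and_content_loss(S, C, O):
    # L_style matches the per-channel mean and variance of the VGG features of
    # the style image S and the output O; L_content compares the relu4_1
    # features of the content image C and the output O.
    fs, fc, fo = vgg_feats(S), vgg_feats(C), vgg_feats(O)
    l_style = sum(((fs[k].mean((2, 3)) - fo[k].mean((2, 3))) ** 2).sum() +
                  ((fs[k].var((2, 3)) - fo[k].var((2, 3))) ** 2).sum()
                  for k in TAPS.values())
    l_content = ((fc['relu4_1'] - fo['relu4_1']) ** 2).mean()
    return l_style, l_content

with torch.no_grad():
    ls, lc = style_and_content_loss(torch.rand(1, 3, 256, 256),
                                    torch.rand(1, 3, 256, 256),
                                    torch.rand(1, 3, 256, 256))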
L_recon is the reconstruction loss term:

L_recon = ||M(I, I_Gray) - I||,

where I is either a style image or a content image, and I_Gray is obtained by converting I to grayscale.
L_tv is the total variation loss term. Let Δ_X be the residual in the horizontal direction and Δ_Y the residual in the vertical direction; then:

L_tv = ||Δ_X(M(S,C)) + Δ_Y(M(S,C))||.
L_temp is the temporal continuity loss term:

L_temp = ||M(S, F(C) + Δ) - F(M(S,C))||,

where F is a random optical flow: a two-channel random noise map with the same height and width as C is generated and then Gaussian-blurred. F(C) warps the image C with the optical flow F: for each pixel of C at coordinates (w, h), with horizontal flow f_X and vertical flow f_Y, the new coordinate is (w + f_X, h + f_Y); the warping operation uses nearest-neighbor interpolation.
Step 4: the inference stage. A reference style image S (FIG. 3B) and a target content image or video frame C (FIG. 3A) are input, and the desired style migration result (FIG. 3C) is output.
The above embodiments are intended only to illustrate the technical solution of the invention, not to limit it; a person skilled in the art may modify the technical solution or substitute equivalents without departing from the spirit and scope of the invention, and the scope of protection of the invention shall be determined by the claims.

Claims (10)

1. A neural style migration method based on feature adjustment, characterized by comprising the following steps:
training a neural style migration network model using a training dataset; the neural style migration network model comprises a content encoder, a style encoder, a decorator, and a decoder; the content encoder and the style encoder each comprise a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the final convolutional layer; the decorator comprises a single-sample normalization layer and several style decoration modules; the decoder comprises alternating adaptive single-sample normalization layers and residual blocks, followed by a convolutional layer at the end;
inputting the target content image or video frame and the reference style image into the trained neural style migration network model, which performs the following steps:
inputting the target content image or video frame into the content encoder, which processes it into a content code;
inputting the reference style image into the style encoder, where the first convolutional layer and each convolutional layer preceded by max pooling output a style code;
inputting the content code into the decorator, while feeding a style code into each style decoration module;
and inputting the decorator's output into the decoder, where each adaptive single-sample normalization layer in turn receives the mean and variance of a style code from a different layer of the style encoder, producing the final style migration result image or video frame.
2. The method of claim 1, wherein the training dataset comprises a content dataset composed of photographic images and a style dataset composed of paintings; during training, the content dataset is input to the content encoder and the style dataset to the style encoder.
3. The method of claim 1, wherein the content encoder and the style encoder each have 9 convolutional layers, with max-pooling downsampling before the 3rd, 5th, and 9th convolutional layers; the decorator has 3 style decoration modules; and the decoder has 4 adaptive single-sample normalization layers, 3 residual blocks, and 1 convolutional layer.
4. The method of claim 1, wherein each style decoration module comprises a plurality of convolution kernel prediction units, a plurality of convolutional layers, and linear rectification functions, the convolution kernel prediction unit comprising convolutional layers, a global pooling layer, a merging layer, and a fully-connected layer.
5. The method of claim 1, wherein each residual block comprises 1 upsampling layer, 3 convolutional layers, 2 leaky rectification functions, and 2 single-sample normalization layers.
6. The method of claim 5, wherein the upsampling layer of the residual block samples the input data and feeds the result both to 1 convolutional layer and to a multi-layer structure in which the other 2 convolutional layers alternate with the 2 leaky rectification functions and 2 single-sample normalization layers, the two outputs being summed to form the block's output.
7. The method of claim 1, wherein the style code from the last convolutional layer of the style encoder is input to each style decoration module.
8. The method of claim 1, wherein the mean and variance of the style code from the final convolutional layer of the style encoder are input to the first adaptive single-sample normalization layer of the decoder, the mean and variance of the style code from the last max-pooled convolutional layer are input to the second adaptive single-sample normalization layer, and so on.
9. The method of claim 1, wherein, when training the neural style migration network model, the total loss function is:

L = λ_style·L_style + λ_content·L_content + λ_recon·L_recon + λ_tv·L_tv + λ_temp·L_temp,

where λ_style, λ_content, λ_recon, λ_tv, and λ_temp are weight terms, L_style is the style loss term, L_content is the content loss term, L_recon is the reconstruction loss term, L_tv is the total variation loss term, and L_temp is the temporal continuity loss term; wherein

L_style = Σ_l ( ||Mean(Φ_l(S)) - Mean(Φ_l(M(S,C)))||_2 + ||Var(Φ_l(S)) - Var(Φ_l(M(S,C)))||_2 ),

where Φ_l denotes the features of the pre-trained image classification model VGG19, Mean(·) is the mean of the features, Var(·) is the variance of the features, l takes the RELU1_1, RELU2_1, RELU3_1, and RELU4_1 layers, S is the reference style image, C is the target content image or video frame, and M(S,C) is the style migration result of the style migration model M given inputs S and C;

L_content = Σ_l ||Φ_l(C) - Φ_l(M(S,C))||_2,

where l takes the RELU4_1 layer of the pre-trained image classification model VGG19;

L_recon = ||M(I, I_Gray) - I||,

where M(I, I_Gray) is the style migration result obtained using I_Gray as the target content image or video frame, I is a content image or video frame or a style image, and I_Gray is obtained by converting I to grayscale;

L_tv = ||Δ_X(M(S,C)) + Δ_Y(M(S,C))||,

where Δ_X is the residual in the horizontal direction and Δ_Y the residual in the vertical direction;

L_temp = ||M(S, F(C) + Δ) - F(M(S,C))||,

where F is a random optical flow obtained by generating a two-channel random noise map with the same height and width as the target content image or video frame C and then applying Gaussian blur to it; F(C) warps C with the random optical flow F, and for each pixel of C at coordinates (w, h), with horizontal flow f_X and vertical flow f_Y, its new coordinate is (w + f_X, h + f_Y); the warping operation uses nearest-neighbor interpolation.
10. A neural style migration system based on feature adjustment, comprising a neural style migration network model which, once trained, processes an input target content image or video frame and a reference style image to produce a style migration result image or video frame; the neural style migration network model is characterized by comprising:
a content encoder, comprising a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the last convolutional layer, for processing the target content image or video frame into a content code;
a style encoder, comprising a number of consecutive convolutional layers, each convolutional layer followed by a linear rectification function, with max-pooling downsampling before certain non-adjacent intermediate convolutional layers and before the last convolutional layer, for processing the reference style image, the first convolutional layer and each max-pooled convolutional layer outputting a style code;
a decorator, comprising a single-sample normalization layer followed by several style decoration modules, the single-sample normalization layer receiving the content code and passing its output to the style decoration modules, and each style decoration module combining the data from the preceding layer with a style code and outputting the result to the next layer;
and a decoder, comprising alternating adaptive single-sample normalization layers and residual blocks plus a convolutional layer at the end, for processing the decorator's output, each adaptive single-sample normalization layer receiving in turn the mean and variance of a style code from a different layer of the style encoder, to produce the final style migration result image or video frame.
CN202010258460.XA 2020-04-03 2020-04-03 Neural style migration method and system based on feature adjustment Active CN113496460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010258460.XA CN113496460B (en) 2020-04-03 2020-04-03 Neural style migration method and system based on feature adjustment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010258460.XA CN113496460B (en) 2020-04-03 2020-04-03 Neural style migration method and system based on feature adjustment

Publications (2)

Publication Number Publication Date
CN113496460A true CN113496460A (en) 2021-10-12
CN113496460B CN113496460B (en) 2024-03-22

Family

ID=77995118

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010258460.XA Active CN113496460B (en) 2020-04-03 2020-04-03 Neural style migration method and system based on feature adjustment

Country Status (1)

Country Link
CN (1) CN113496460B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106651766A (en) * 2016-12-30 2017-05-10 深圳市唯特视科技有限公司 Image style migration method based on deep convolutional neural network
US20180357800A1 (en) * 2017-06-09 2018-12-13 Adobe Systems Incorporated Multimodal style-transfer network for applying style features from multi-resolution style exemplars to input images
CN107767328A (en) * 2017-10-13 2018-03-06 上海交通大学 The moving method and system of any style and content based on the generation of a small amount of sample
WO2019137131A1 (en) * 2018-01-10 2019-07-18 Oppo广东移动通信有限公司 Image processing method, apparatus, storage medium, and electronic device
CN108470320A (en) * 2018-02-24 2018-08-31 中山大学 A kind of image stylizing method and system based on CNN
CN109218134A (en) * 2018-09-27 2019-01-15 华东师范大学 A kind of Test cases technology system based on neural Style Transfer
CN109949214A (en) * 2019-03-26 2019-06-28 湖北工业大学 A kind of image Style Transfer method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周浩; 周先军; 邱书畅: "Image style transfer via downsampling iteration and super-resolution reconstruction" (下采样迭代和超分辨率重建的图像风格迁移), Journal of Hubei University of Technology (湖北工业大学学报), no. 01, 15 February 2020 (2020-02-15) *
杨文瀚; 刘家瑛; 夏思烽; 郭宗明: "Deep network super-resolution reconstruction with external data compensation" (数据外补偿的深度网络超分辨率重建), Journal of Software (软件学报), no. 04, 4 December 2017 (2017-12-04) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989102A (en) * 2021-10-19 2022-01-28 复旦大学 Rapid style migration method with high shape-preserving property
CN114692733A (en) * 2022-03-11 2022-07-01 华南理工大学 End-to-end video style migration method, system and storage medium for inhibiting time domain noise amplification

Also Published As

Publication number Publication date
CN113496460B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
Gai et al. New image denoising algorithm via improved deep convolutional neural network with perceptive loss
CN109033095B (en) Target transformation method based on attention mechanism
CN109919204B (en) Noise image-oriented deep learning clustering method
CN113658051A (en) Image defogging method and system based on cyclic generation countermeasure network
CN112307714B (en) Text style migration method based on dual-stage depth network
CN110045419A (en) A kind of perceptron residual error autoencoder network seismic data denoising method
CN110517329A (en) A kind of deep learning method for compressing image based on semantic analysis
CN112232485B (en) Cartoon style image conversion model training method, image generation method and device
CN115393396B (en) Unmanned aerial vehicle target tracking method based on mask pre-training
CN113496460A (en) Neural style migration method and system based on feature adjustment
CN116912257B (en) Concrete pavement crack identification method based on deep learning and storage medium
CN113392711A (en) Smoke semantic segmentation method and system based on high-level semantics and noise suppression
CN114723630A (en) Image deblurring method and system based on cavity double-residual multi-scale depth network
CN113706545A (en) Semi-supervised image segmentation method based on dual-branch nerve discrimination dimensionality reduction
CN114662788A (en) Seawater quality three-dimensional time-space sequence multi-parameter accurate prediction method and system
CN116740223A (en) Method for generating image based on text
CN109871790B (en) Video decoloring method based on hybrid neural network model
CN114692733A (en) End-to-end video style migration method, system and storage medium for inhibiting time domain noise amplification
CN112200752B (en) Multi-frame image deblurring system and method based on ER network
CN113962905A (en) Single image rain removing method based on multi-stage feature complementary network
CN111667401B (en) Multi-level gradient image style migration method and system
CN113949880B (en) Extremely-low-bit-rate man-machine collaborative image coding training method and coding and decoding method
CN116012266A (en) Image denoising method, system, equipment and storage medium
CN116630369A (en) Unmanned aerial vehicle target tracking method based on space-time memory network
CN113298719B (en) Feature separation learning-based super-resolution reconstruction method for low-resolution fuzzy face image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant