CN112488956A - Method for image restoration based on WGAN network - Google Patents

Method for image restoration based on WGAN network

Info

Publication number
CN112488956A
Authority
CN
China
Prior art keywords
image
network
restoration
wgan
generator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011464168.XA
Other languages
Chinese (zh)
Inventor
方巍
顾恩明
王伟清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202011464168.XA priority Critical patent/CN112488956A/en
Publication of CN112488956A publication Critical patent/CN112488956A/en
Pending legal-status Critical Current

Classifications

    • G06T5/77
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The invention relates to a method for image restoration based on a WGAN network. The method belongs to the field of DCGAN and WGAN model optimization, and comprises the following specific steps: 1. constructing a shallow convolutional network structure; 2. constructing a deep neural network; 3. optimizing the relevant parameters and the function algorithm; 4. code design; 5. performance evaluation. Evaluation tests carried out on the internationally used ImageNet data set show that the method obtains a high-quality repairing effect with better efficiency, indicating that it improves the definition and consistency of the repaired image to a certain extent.

Description

Method for image restoration based on WGAN network
Technical Field
The invention discloses an image inpainting improvement method based on deep learning (WGAN), and relates to the fields of DCGAN and WGAN model optimization. The problems addressed by the invention include: methods based on the Convolutional Neural Network (CNN) and DCGAN often generate boundary artifacts, distorted structures, and blurry textures inconsistent with the surrounding areas, and suffer from slow training and unstable training results.
Background
At present, the combination of image restoration and deep learning has become one of the research hotspots in computer vision. However, current approaches still have great limitations in solving the problem of context-based semantic image restoration. Due to limiting factors such as the memory constraints of algorithm models, training instability, and the lack of sample diversity, deep-learning-based image restoration algorithms still encounter the difficult problem of poor fusion between the restored region and the original image.
Research on Convolutional Neural Networks (CNN), among feedforward neural networks, shows that CNNs perform excellently in large-scale image processing because each artificial neuron responds only to part of the surrounding units within its receptive field. In recent years, CNN-based deep learning networks have proven their ability to capture abstract, high-level information of images. Meanwhile, research on texture synthesis and image style transfer shows that image features extracted by a trained CNN can be used as part of the objective function, so that a picture produced by a generative network can be made semantically closer to the target picture. Together with extensive research on the Generative Adversarial Network (GAN), it has been demonstrated that the visual quality of images produced by a generative network can be enhanced by adversarial training. Based on this background, deep-learning-based image restoration methods have recently been widely explored.
According to the type of method adopted, existing approaches can be classified into methods based on partial differential equations and variational models, sample-based restoration methods derived from texture synthesis, transform-domain image restoration methods, and hybrid methods; restoration based on deep learning is a new class of methods proposed in recent years. The partial differential, variational, transform-domain, and hybrid methods achieve good results in repairing small damaged regions; sample-based methods can achieve relatively good results on large damaged areas, especially when the region to be restored can be well represented by a known sample region. Although research on deep-learning-based restoration is still at an early stage, it inherits the defining property of deep learning: a stacked deep neural network containing many hidden layers can, through training on massive data, learn a mapping of the nonlinear, complex relationships among training samples, which is exactly what semantic, content-based image restoration requires, and it can sometimes achieve quite striking results on large-area restoration.
In image restoration, boundary consistency is generally prioritized when repairing small areas, while regional consistency is prioritized when repairing large areas. The basic idea of the former is diffusion-based repair using local information. However, experiments show that this idea is difficult to apply to large-area repair. This means that for large-scale repair, the invention must first consider the high-level semantic information of the repaired object; the region-mapping process therefore relies on a large amount of accumulated prior information. The method mainly solves the problem of embedding high-level semantic information in the image.
Disclosure of Invention
In view of the above problems, the present invention provides a method for image restoration based on a WGAN network, which uses the WGAN network to adjust the hidden-layer network, building on the previous implementation of image restoration using Deep Convolutional Generative Adversarial Networks (hereinafter DCGAN). In addition, the Adam algorithm, popular in recent years, is used in place of the traditional stochastic gradient descent algorithm to optimize training, further improving the efficiency and accuracy of image restoration.
The technical scheme of the invention is as follows: a method for image restoration based on a WGAN network comprises the following steps:
step (1.1), constructing a shallow convolutional network structure: extracting the spatial features of the collected images using dilated convolution, and inputting the obtained spatial features into a deep neural network;
step (1.2), constructing a deep neural network: training a deep neural network through reconstruction loss and GAN loss;
step (1.3), optimizing parameters and the function algorithm: using the ELU, whose derivative has no abrupt change at any point and which can produce negative outputs, as the activation function;
step (1.4), code design: until the generator converges, training the generator using the reconstruction loss;
step (1.5), performance test: performing image restoration on the damaged image to obtain the final restoration result.
Further, in the step (1.1), the specific operation method for constructing the shallow convolutional network structure is as follows:
firstly, an image is collected and input into the shallow neural network, and spatial features are extracted using dilated convolution; secondly, the obtained spatial features are input into the deep neural network for extraction and restoration, and the restored result is turned into the repaired image through a trained decoder;
wherein the specific operations of training the encoder include feed-forward propagation and overall parameter tuning: in feed-forward propagation, the image is input into the encoder to finally obtain the restored data; after feed-forward propagation is finished, the whole encoder is fine-tuned through the error back-propagation algorithm.
Further, in the step (1.3), the parameters and the function algorithm are optimized, wherein the activation function is specified as follows:

ELU(x) = x,              if x > 0
ELU(x) = α(e^x − 1),     if x ≤ 0
further, in step (1.4), the specific operation method of the code design is as follows:
firstly, a batch of pictures x is randomly sampled from the training data set;
then, a variable t is declared, with t obeying a uniform distribution on [0, 1];
then, masks m are generated for x, and the completed samples and their interpolations are formed,

x̃ = G(x ⊙ (1 − m), m)
x̂ = t · x + (1 − t) · x̃

and the discriminator is then updated according to the gradient of

D(x̃) − D(x) + λ ( ‖∇_{x̂} D(x̂)‖₂ − 1 )²;

secondly, a mask m is randomly generated for x, the generator G is updated, and the l1 loss and the two adversarial losses are computed; the above steps are repeated until the generator G converges;
finally, it is determined whether the generator G has converged; if so, proceed to step (1.5) for the performance test; if not, return to step (1.3), continue optimizing the parameters and the function algorithm, and determine through the designed code whether the generator G converges.
Further, in the step (1.5), the specific method for performance evaluation is as follows: firstly, 300 images are randomly selected from the ImageNet test data set, and a regular rectangular mask is generated for each image; then, image restoration is performed on the damaged images to obtain the final restoration results.
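As an illustration of the mask generation step described above, the following is a minimal sketch in Python (the function name, the use of NumPy, and the random hole-size sampling are assumptions for illustration; the patent specifies only the 128 × 128 cap on 256 × 256 images):

```python
import numpy as np

def random_rectangular_mask(height=256, width=256, max_hole=128):
    """Binary mask with one axis-aligned rectangular hole.

    1 marks missing (to-be-restored) pixels, 0 marks known pixels;
    the hole is capped at max_hole x max_hole, matching the 256x256
    test images with at most 128x128 holes used in the evaluation.
    """
    hole_h = np.random.randint(1, max_hole + 1)
    hole_w = np.random.randint(1, max_hole + 1)
    top = np.random.randint(0, height - hole_h + 1)
    left = np.random.randint(0, width - hole_w + 1)

    mask = np.zeros((height, width), dtype=np.float32)
    mask[top:top + hole_h, left:left + hole_w] = 1.0
    return mask
```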
The invention has the following beneficial effects: (1) the technical problem to be solved: the invention mainly addresses the structural distortion and blurred texture of image repair results at the repaired edges and content; (2) the corresponding technical scheme: by introducing a shallow network and a deep network, together with a loss function containing both a semantics-focused loss and a perception-focused loss, the repaired image becomes semantically and visually realistic and smooth; (3) the achievable effect: the repaired image obtained by the method is more natural and the repair results are more satisfactory. In the evaluation, 300 images are randomly selected from the ImageNet test data set, and a regular rectangular mask is generated for each image; each method is then run on the damaged images to obtain final results. In training, images with a resolution of 256 × 256 and a maximum hole size of 128 × 128 are used; the methods compared are all based on fully convolutional neural networks; all reported results are direct outputs of the trained models without any post-processing; the performance of the model is quantified using common evaluation metrics (L1, L2, PSNR and SSIM) computed in pixel space between the full generated image and the ground-truth image.
Drawings
FIG. 1 is a flow chart of the architecture of the present invention;
FIG. 2 is a schematic diagram of the basic structure of GAN in the present invention;
FIG. 3 is a schematic diagram of the basic WGAN structure of the invention;
FIG. 4 is a diagram of a discriminator network in the present invention;
FIG. 5 is a diagram illustrating the restoration results for rectangular missing regions in the present invention;
FIG. 6 is a schematic diagram comparing the ELU and ReLU function curves, which differ on the negative x half-axis;
FIG. 7 is a graph of a comprehensive evaluation comparison using ImageNet data sets in the present invention.
Detailed Description
In order to more clearly illustrate the technical solution of the present invention, the following detailed description is made with reference to the accompanying drawings:
Specifically, a method for image restoration based on a WGAN network, the flow diagram of which is shown in FIG. 1, comprises the following specific steps:
step (1.1), constructing a shallow convolutional network structure: extracting the spatial features of the collected images using dilated convolution, and inputting the obtained spatial features into a deep neural network;
step (1.2), constructing a deep neural network: training a deep neural network through reconstruction loss and GAN loss;
step (1.3), optimizing parameters and the function algorithm: using the ELU, whose derivative has no abrupt change at any point and which can produce negative outputs, as the activation function;
step (1.4), code design: until the generator converges, training the generator using the reconstruction loss;
step (1.5), performance test: performing image restoration on the damaged image to obtain the final restoration result.
Further, in the step (1.1), the specific operation method for constructing the shallow convolutional network structure is as follows:
firstly, an image is collected and input into the shallow neural network, and spatial features are extracted using dilated convolution; secondly, the obtained spatial features are input into the deep neural network for extraction and restoration, and the restored result is turned into the repaired image through a trained decoder;
wherein the specific operations of training the encoder include feed-forward propagation and overall parameter tuning: in feed-forward propagation, the image is input into the encoder to finally obtain the restored data; after feed-forward propagation is finished, the whole encoder is fine-tuned through the error back-propagation algorithm.
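For concreteness, the shallow feature-extraction stage can be sketched as follows in PyTorch; the channel widths, the dilation rates (2, 4), and the extra mask input channel are illustrative assumptions, since the patent does not specify the layer configuration:

```python
import torch.nn as nn

class ShallowDilatedEncoder(nn.Module):
    """Shallow convolutional stage built from dilated (hole) convolutions.

    Dilated kernels enlarge the receptive field without pooling, so
    spatial features can be extracted at full resolution and passed on
    to the deep network. Channel counts and dilation rates here are
    illustrative choices, not values taken from the patent.
    """
    def __init__(self, in_channels=4):  # assumed: RGB image + 1 mask channel
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=2, dilation=2),
            nn.ELU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4),
            nn.ELU(),
        )

    def forward(self, x):
        # Returns spatial features for the deep network; padding equals
        # dilation for 3x3 kernels, so spatial resolution is preserved.
        return self.body(x)
```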
Further, in the step (1.2), the specific method for constructing the deep neural network is as follows: the deep network is trained through the reconstruction loss and the GAN loss; because the deep network sees a more complete scene than the original image with its missing region, the deep encoder can learn a better feature representation than the shallow network; the two network structures are inspired by residual learning and deeply supervised learning; the invention adds the WGAN-GP loss to the deep network because the WGAN-GP loss outperforms the standard GAN loss in image generation tasks.
Further, in the step (1.3), the parameters and the function algorithm are optimized: unlike other GAN methods that use ReLU as the activation function, the invention uses the ELU; in contrast to ReLU, the derivative of the ELU has no abrupt change at any point, and the ELU can produce negative outputs; this pushes mean activations toward zero at a faster rate, which often allows more accurate results;
wherein the activation function is specified as follows:

ELU(x) = x,              if x > 0
ELU(x) = α(e^x − 1),     if x ≤ 0
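A minimal numeric sketch of this activation (assuming α = 1, the common default, which the patent does not state), showing that the ELU has no abrupt derivative change at x = 0 and produces negative outputs where ReLU is exactly zero:

```python
import math

def elu(x, alpha=1.0):
    # ELU(x) = x for x > 0, alpha * (e^x - 1) otherwise
    return x if x > 0 else alpha * (math.exp(x) - 1)

def relu(x):
    return max(0.0, x)

# On the negative half-axis the ELU saturates smoothly toward -alpha,
# while ReLU is exactly zero (a hard kink at x = 0):
for v in (-2.0, -0.5, 0.0, 0.5):
    print(f"x={v:+.1f}  elu={elu(v):+.4f}  relu={relu(v):+.4f}")
```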
Unlike prior image restoration networks, which rely only on DCGAN for adversarial supervision, the invention uses the WGAN-GP method, taking the WGAN-GP loss as the loss function of the image restoration task; the method performs better when used in combination with the EM distance;
the invention compares the generated data distribution with the actual data distribution by using Earth coordinate distance (Earth-Mover distance); the learning objective function constructed by applying Kantorovich-Rubinstein is as follows:
Figure BDA0002833562590000052
wherein; d*A set of functions representing a condition that satisfies the RippHitz continuity condition; pgRepresenting an implicitly defined distribution, Z representing an input to the generator;
the structure of the neural network is as follows:

max_w ( E_{x~Pr}[f_w(x)] − E_{z~p(z)}[f_w(g_θ(z))] )
This neural network is very similar to the discriminator in GAN, with only a few subtle differences: 1. the discriminator removes the sigmoid function in its last layer, and its objective function has no log term; 2. after each iteration update, the discriminator's parameters must be clipped to a fixed range (weight clipping);
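A minimal sketch of this weight-clipping step in PyTorch (the clipping constant c = 0.01 follows the original WGAN paper and is an assumption here; the gradient-penalty variant described next removes the need for it):

```python
import torch.nn as nn

def clip_critic_weights(critic: nn.Module, c: float = 0.01):
    """Original WGAN weight clipping: after each critic update, force
    every parameter into [-c, c] to crudely enforce the Lipschitz
    constraint. WGAN-GP replaces this step with a gradient penalty."""
    for p in critic.parameters():
        p.data.clamp_(-c, c)
```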
By introducing a gradient penalty

λ E_{x̂~P_{x̂}} [ ( ‖∇_{x̂} D(x̂)‖₂ − 1 )² ]

the WGAN approach can be improved, where x̂ is a point sampled on the straight line between a pair of points drawn from Pg and Pr; along this straight line, the gradient of the optimal critic D* has unit norm and points from the current generated sample toward the paired real sample.
In implementing the image restoration task, the invention introduces the gradient penalty only on the missing pixels.
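A minimal PyTorch sketch of this masked gradient penalty, restricted to the missing pixels as described above; the helper name, the tensor shapes, and the penalty weight λ = 10 (the value from Gulrajani et al., not stated in the patent) are illustrative assumptions:

```python
import torch

def masked_gradient_penalty(critic, real, fake, mask, lam=10.0):
    """WGAN-GP penalty computed only on missing (mask == 1) pixels.

    real, fake: (N, C, H, W) image batches; mask: (N, 1, H, W) with
    1 marking missing pixels. x_hat is sampled on the straight line
    between paired real and generated samples.
    """
    t = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (t * real + (1 - t) * fake).requires_grad_(True)

    grad = torch.autograd.grad(
        outputs=critic(x_hat).sum(), inputs=x_hat, create_graph=True
    )[0]
    grad = grad * mask  # penalize gradients on missing pixels only
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```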
Further, in step (1.4), the code design (the training algorithm) proceeds as follows:
firstly, a batch of pictures x is randomly sampled from the training data set;
then, a variable t is declared, with t obeying a uniform distribution on [0, 1];
then, masks m are generated for x, and the completed samples and their interpolations are formed,

x̃ = G(x ⊙ (1 − m), m)
x̂ = t · x + (1 − t) · x̃

and the discriminator is then updated according to the gradient of

D(x̃) − D(x) + λ ( ‖∇_{x̂} D(x̂)‖₂ − 1 )²;

secondly, a mask m is randomly generated for x, the generator G is updated, and the l1 loss and the two adversarial losses are computed; the above steps are repeated until the generator G converges;
finally, it is determined whether the generator G has converged; if so, proceed to step (1.5) for the performance test; if not, return to step (1.3), continue optimizing the parameters and the function algorithm, and determine through the designed code whether the generator G converges.
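A condensed sketch of one training iteration as described in these steps; the generator/critic interfaces, the l1 loss weight, and the use of a single critic (the patent computes two adversarial losses, one per network stage) are simplifying assumptions, and masked_gradient_penalty is the helper sketched earlier:

```python
import torch

def train_step(G, D, opt_g, opt_d, images, make_mask, l1_weight=1.0):
    """One WGAN-GP inpainting iteration: critic update, then generator.

    Assumed interfaces: G(masked, mask) -> completed image;
    D(image) -> critic score; make_mask(images) -> binary mask,
    1 = missing pixel. opt_g / opt_d would be Adam optimizers,
    as the patent uses Adam in place of plain SGD.
    """
    mask = make_mask(images)
    masked = images * (1 - mask)

    # Critic update: WGAN loss plus masked gradient penalty.
    with torch.no_grad():
        fake = G(masked, mask)
    d_loss = (D(fake).mean() - D(images).mean()
              + masked_gradient_penalty(D, images, fake, mask))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: l1 reconstruction loss plus adversarial loss.
    fake = G(masked, mask)
    g_loss = l1_weight * (fake - images).abs().mean() - D(fake).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```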
Further, in the step (1.5), the specific method for performance evaluation is as follows: firstly, 300 images are randomly selected from the ImageNet test data set, and a regular rectangular mask is generated for each image; then, image restoration is performed on the damaged images to obtain the final restoration results; in the model evaluation phase, the performance of the model is quantified by the common evaluation indexes (L1, L2, PSNR and SSIM), computed between the generated complete image and the real image;
specifically, in this training, the repaired images have a resolution of 256 × 256, and the area to be repaired does not exceed 128 × 128 (the maximum hole size); the methods compared are all based on fully convolutional neural networks; all results are direct outputs of the trained models without any post-processing; in the model evaluation stage, the commonly used image restoration indexes L1, L2, PSNR and SSIM are calculated between the restored complete image and the real complete image to quantify model performance; FIG. 7 shows the evaluation results; compared with other deep-learning-based methods, the model is superior on two of the indexes; this can be explained by the fact that the compared methods only aim to make the texture of the completed image realistic while ignoring the structure of the image; in addition, the method uses a contour-guidance model, which brings a certain improvement over the baseline without manual intervention, demonstrating the effectiveness of the contour-prediction idea.
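For reference, the four indexes can be computed as in the following sketch (the scikit-image function names are real; representing images as float arrays in [0, 1] is an assumption made here):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored, truth):
    """L1 / L2 errors, PSNR and SSIM between a restored image and its
    ground truth, both given as HxWx3 float arrays in [0, 1]."""
    l1 = np.abs(restored - truth).mean()
    l2 = ((restored - truth) ** 2).mean()
    psnr = peak_signal_noise_ratio(truth, restored, data_range=1.0)
    ssim = structural_similarity(truth, restored,
                                 channel_axis=-1, data_range=1.0)
    return l1, l2, psnr, ssim
```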
In the loss-function comparison experiment, an attempt was made to use the l2 loss instead of the perceptual loss; however, subsequent verification experiments showed that this leads to blurred restoration results; an attempt was also made to extract VGG19 features using different activation layers; the results show that ELU outperforms ReLU.
In the reconstruction-loss comparison, experiments were attempted without using the reconstruction loss; the result is that the edges of the repair become repetitive, and the experimental repair results show ghosting that makes the restoration unreliable; although the l1 reconstruction loss is an important component in image restoration, it does not by itself improve the semantic restoration result.
The specific embodiment is as follows:
The network has a certain approximation capability for high-dimensional complex mappings: it can effectively extract the semantics of an image to a certain extent, better correct contour-edge distortion, and repair blurred texture. For image restoration, accurately recovering the image semantics and clearly restoring the details requires that the image restoration network synthesize texture components in addition to capturing semantic components; the invention uses DCGAN technology to solve this problem.
The method can be applied to image restoration, and improved restoration results benefit several professional fields; for example, in the study and repair of historical works of art such as ancient paintings and the structural drawings of ancient buildings, professionals can use an image-repair tool to generate a reference restoration blueprint, which offers useful inspiration and reference for their actual repair work; meanwhile, in fields such as ancient paintings and ancient architectural drawings, the image characteristics, color types, and texture details follow certain regularities, and the sample data volume for a specific problem is generally small, so training samples need to be designed in a targeted manner; furthermore, the trained deep network structure can be modified and fine-tuned, improving the transfer capability of the network and its generalization on small data sets.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of embodiments of the present invention; other variations are possible within the scope of the invention; thus, by way of example, and not limitation, alternative configurations of embodiments of the invention may be considered consistent with the teachings of the present invention; accordingly, the embodiments of the invention are not limited to the embodiments explicitly described and depicted.

Claims (5)

1. A method for image restoration based on a WGAN network is characterized by comprising the following steps:
step (1.1), constructing a shallow convolutional network structure: extracting the spatial features of the collected images using dilated convolution, and inputting the obtained spatial features into a deep neural network;
step (1.2), constructing a deep neural network: training a deep neural network through reconstruction loss and GAN loss;
step (1.3), optimizing parameters and the function algorithm: using the ELU, whose derivative has no abrupt change at any point and which can produce negative outputs, as the activation function;
step (1.4), code design: until the generator converges, training the generator using the reconstruction loss;
step (1.5), performance test: performing image restoration on the damaged image to obtain the final restoration result.
2. The method for image restoration based on WGAN network of claim 1, wherein in the step (1.1), the specific operation method for constructing the shallow convolutional network structure is as follows:
firstly, an image is collected and input into the shallow neural network, and spatial features are extracted using dilated convolution; secondly, the obtained spatial features are input into the deep neural network for extraction and restoration, and the restored result is turned into the repaired image through a trained decoder;
wherein the specific operations of training the encoder include feed-forward propagation and overall parameter tuning: in feed-forward propagation, the image is input into the encoder to finally obtain the restored data; after feed-forward propagation is finished, the whole encoder is fine-tuned through the error back-propagation algorithm.
3. The method for image inpainting based on WGAN network of claim 1, wherein in the step (1.3), the parameters and the function algorithm are optimized, the activation function being specified as follows:

ELU(x) = x,              if x > 0
ELU(x) = α(e^x − 1),     if x ≤ 0
4. The method for image restoration based on WGAN network of claim 1, wherein in step (1.4), the specific operation method of the code design is as follows:
firstly, a batch of pictures x is randomly sampled from the training data set;
then, a variable t is declared, with t obeying a uniform distribution on [0, 1];
then, masks m are generated for x, and the completed samples and their interpolations are formed,

x̃ = G(x ⊙ (1 − m), m)
x̂ = t · x + (1 − t) · x̃

and the discriminator is then updated according to the gradient of

D(x̃) − D(x) + λ ( ‖∇_{x̂} D(x̂)‖₂ − 1 )²;

secondly, a mask m is randomly generated for x, the generator G is updated, and the l1 loss and the two adversarial losses are computed; the above steps are repeated until the generator G converges;
finally, it is determined whether the generator G has converged; if so, proceed to step (1.5) for the performance test; if not, return to step (1.3), continue optimizing the parameters and the function algorithm, and determine through the designed code whether the generator G converges.
5. The method for image restoration based on WGAN network of claim 1, wherein in the step (1.5), the specific method for performance evaluation is as follows: firstly, 300 images are randomly selected from the ImageNet test data set, and a regular rectangular mask is generated for each image; then, image restoration is performed on the damaged images to obtain the final restoration results.
CN202011464168.XA 2020-12-14 2020-12-14 Method for image restoration based on WGAN network Pending CN112488956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011464168.XA CN112488956A (en) 2020-12-14 2020-12-14 Method for image restoration based on WGAN network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011464168.XA CN112488956A (en) 2020-12-14 2020-12-14 Method for image restoration based on WGAN network

Publications (1)

Publication Number Publication Date
CN112488956A true CN112488956A (en) 2021-03-12

Family

ID=74916159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011464168.XA Pending CN112488956A (en) 2020-12-14 2020-12-14 Method for image restoration based on WGAN network

Country Status (1)

Country Link
CN (1) CN112488956A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325621A1 (en) * 2016-06-24 2019-10-24 Rensselaer Polytechnic Institute Tomographic image reconstruction via machine learning
US20180075581A1 (en) * 2016-09-15 2018-03-15 Twitter, Inc. Super resolution using a generative adversarial network
CN109191402A (en) * 2018-09-03 2019-01-11 武汉大学 The image repair method and system of neural network are generated based on confrontation
US20200111194A1 (en) * 2018-10-08 2020-04-09 Rensselaer Polytechnic Institute Ct super-resolution gan constrained by the identical, residual and cycle learning ensemble (gan-circle)
CN110570366A (en) * 2019-08-16 2019-12-13 西安理工大学 Image restoration method based on double-discrimination depth convolution generation type countermeasure network
CN111127449A (en) * 2019-12-25 2020-05-08 汕头大学 Automatic crack detection method based on encoder-decoder
CN111598789A (en) * 2020-04-08 2020-08-28 西安理工大学 Sparse color sensor image reconstruction method based on deep learning
CN111696036A (en) * 2020-05-25 2020-09-22 电子科技大学 Residual error neural network based on cavity convolution and two-stage image demosaicing method

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
FANG WEI et al.: "A method for improving CNN-based image recognition using DCGAN", Computers, Materials and Continua, vol. 57, no. 01, pages 167-178 *
FANG WEI et al.: "A new method of image restoration technology based on WGAN", Computer Systems Science and Engineering, vol. 41, no. 02, pages 689-698 *
ISHAAN GULRAJANI et al.: "Improved training of Wasserstein GANs", Advances in Neural Information Processing Systems, vol. 28, no. 05, page 5767 *
MARTIN ARJOVSKY et al.: "Wasserstein generative adversarial networks", International Conference on Machine Learning, PMLR, pages 214-223 *
YU JIAHUI et al.: "Generative image inpainting with contextual attention", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5505-5514 *
HA Wenquan et al.: "Image inpainting application based on WGAN" (in Chinese), Electronic Technology & Software Engineering, vol. 132, no. 10, page 51 *
LI Tiancheng et al.: "An image inpainting algorithm based on generative adversarial networks" (in Chinese), Computer Applications and Software, vol. 36, no. 12, pages 195-200 *

Similar Documents

Publication Publication Date Title
CN109584325B (en) Bidirectional colorizing method for animation image based on U-shaped period consistent countermeasure network
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
CN109903236B (en) Face image restoration method and device based on VAE-GAN and similar block search
CN110570366A (en) Image restoration method based on double-discrimination depth convolution generation type countermeasure network
CN112489164B (en) Image coloring method based on improved depth separable convolutional neural network
CN112686816A (en) Image completion method based on content attention mechanism and mask code prior
CN110895795A (en) Improved semantic image inpainting model method
CN114926553A (en) Three-dimensional scene consistency stylization method and system based on nerve radiation field
CN109345604B (en) Picture processing method, computer device and storage medium
CN113870128A (en) Digital mural image restoration method based on deep convolution impedance network
Chen et al. Dual-former: Hybrid self-attention transformer for efficient image restoration
CN112686817B (en) Image completion method based on uncertainty estimation
CN113592715A (en) Super-resolution image reconstruction method for small sample image set
CN112862946B (en) Gray rock core image three-dimensional reconstruction method for generating countermeasure network based on cascade condition
CN112634168A (en) Image restoration method combined with edge information
CN116863053A (en) Point cloud rendering enhancement method based on knowledge distillation
CN116416161A (en) Image restoration method for improving generation of countermeasure network
CN112488956A (en) Method for image restoration based on WGAN network
CN116051407A (en) Image restoration method
CN113378980B (en) Mask face shielding recovery method based on self-adaptive context attention mechanism
CN112907456B (en) Deep neural network image denoising method based on global smooth constraint prior model
Yang Super resolution using dual path connections
CN112329799A (en) Point cloud colorization algorithm
CN110689618A (en) Three-dimensional deformable object filling method based on multi-scale variational graph convolution
Wu et al. Semantic image inpainting based on generative adversarial networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination