CN112116535A - Image completion method based on parallel self-encoder - Google Patents

Publication number
CN112116535A
Authority
CN
China
Prior art keywords
model
damaged
image
pictures
encoder
Prior art date
Legal status
Granted
Application number
CN202010803512.7A
Other languages
Chinese (zh)
Other versions
CN112116535B (en)
Inventor
王进军
邓烨
李梦柳
辛晓萌
黄文丽
惠思奇
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN202010803512.7A priority Critical patent/CN112116535B/en
Publication of CN112116535A publication Critical patent/CN112116535A/en
Application granted granted Critical
Publication of CN112116535B publication Critical patent/CN112116535B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems


Abstract

The invention discloses an image completion method based on a parallel self-encoder, comprising the following steps: 1) according to the existing damaged image data, collect complete image data of a similar style to help train the model, where "similar style" means data close to the damaged images, and divide this data into a training set and a test set at a set ratio; 2) copy the training set and the test set from step 1), artificially damage the copies, and train the model with the damaged training-set pictures and their corresponding intact pictures; 3) repair the damaged test-set pictures with the model obtained in step 2), evaluate them against the corresponding real intact pictures, and fine-tune the model according to the remaining defects. The method enables the model to largely overcome problems such as inconsistent pixels at the completed region and poor detail texture in the generated image, and achieves pixel completion from scratch.

Description

Image completion method based on parallel self-encoder
Technical Field
The invention belongs to the field of image restoration in computer vision, and particularly relates to an image completion method based on a parallel self-encoder.
Background
The goal of image completion is to fill in missing or corrupted pixels of an image, which remains a challenging task in computer vision. Image completion is very widely used: beyond the most basic application of repairing damaged photographs, it can also remove unwanted objects from an image or fill in occluded parts. Traditional methods mostly repair by propagating image information from the known regions into the missing regions. Such methods cannot produce satisfactory results when the missing or damaged regions contain fine texture patterns, or when their correlation with the known regions is weak. With the development of deep convolutional neural networks, the proposal of the generative adversarial network has made "from scratch", high-quality image generation possible. Achieving such from-scratch completion at better quality is what those skilled in the art hope to attain.
Disclosure of Invention
The invention aims to provide an image completion method based on a parallel self-encoder. During training, not only the damage map but also its complement map is used as input, to improve the model's ability to extract features of the whole image. Secondly, noise is added to the obtained low-dimensional features to improve the robustness of the model, and the variational auto-encoder (VAE) framework is used to handle this noise, which makes the features smoother in feature space. With these improvements, the model can largely overcome problems such as inconsistent pixels at the completed region and poor detail texture in the generated image, and achieves pixel completion from scratch.
The invention is realized by adopting the following technical scheme:
an image completion method based on a parallel self-encoder comprises the following steps:
1) according to the existing damaged image data, collect complete image data of a similar style to help train the model, where similar style means data close to the damaged images, and divide this data into a training set and a test set at a set ratio;
2) copy the training set and the test set from step 1), artificially damage the copies, and train the model with the damaged training-set pictures and their corresponding intact pictures;
3) repair the damaged test-set pictures with the model obtained in step 2), evaluate them against the corresponding real intact pictures, and fine-tune the model according to the remaining defects.
The further improvement of the invention is that the specific implementation method of the step 1) is as follows:
firstly, intact image data similar in style to the target images to be repaired is collected to train the model, where similar style means that the damaged images and the collected images have similar content and colors.
The further improvement of the invention is that the specific implementation method of the step 2) is as follows:
firstly, after copying the obtained training set and test set, the copied data is artificially damaged in preparation for subsequent model training; the artificial damage either imitates the damage observed in the actual pictures, or applies random damage at a set damage ratio, i.e., damaged pixel area divided by whole-image size;
then the complement maps and damage maps in the training set are input into the model's self-encoder to obtain their respective low-dimensional features; to improve the robustness of the model, different Gaussian white noise of the same dimensionality is added to the low-dimensional features of the damage map and of the complement map;
the resulting low-dimensional features are input into the decoder of the model's self-encoder to obtain a preliminarily predicted repair map;
finally, the preliminary repair maps are placed into a generative adversarial network framework: the self-encoder serves as the generator, and discriminators are additionally introduced to adversarially optimize the preliminary repair results in the hope of obtaining better repairs; two discriminators are used to judge, respectively, the repair result from the damage map and the generation result from the complement map;
the loss function used in this process is divided into two parts, where, for the generator composed of the parallel encoders:

L_G = α1 L_adv^(G1) + α2 L_adv^(G2) + β1 L_MAE^(m) + β2 L_MAE^(c) + γ1 L_KL^(m) + γ2 L_KL^(c)

where L_adv^(G) is the generative adversarial loss function, L_MAE is the mean absolute error, L_KL is the Kullback-Leibler divergence, and α1, α2, β1, β2, γ1, γ2 are set hyper-parameters;

for the discriminator networks:

L_D = ε1 L_adv^(D1) + ε2 L_adv^(D2)

where L_adv^(D) is the discriminator adversarial loss function and ε1, ε2 are hyper-parameters.
The invention has at least the following beneficial technical effects:
the invention provides an image complementing method based on a parallel self-encoder, which is characterized in that in the training process of a model, a damaged graph and a graph (called a complementing graph hereinafter) formed by missing pixels of the damaged graph are used for training at the same time. The damage maps are encoded by a parallel encoder composed of a CNN (convolutional neural network) to obtain corresponding low-dimensional features. And then, Gaussian white noise is added to the low-dimensional features to improve the robustness of the model. Then, a decoder composed of CNN is used to perform preliminary patching on the low-dimensional features. Finally, the obtained preliminary image repairing result is put into a discriminator for generating a countermeasure network, and the performance is improved through a countermeasure loss function.
One of the key points of the completion image method of the present invention is that the damage map and its complement are used simultaneously in the training process, and the optimization function is used in the optimization process to make the feature distributions of the complement map and the damage map as close as possible. It is contemplated that the model may extract features that cover the entire image for any portion of the pixels of the image. Compared with a repair model which is generally trained by using a breakage graph alone, the method has better model robustness for irregular pixel breakage repair. The invention has the advantages that:
1. a parallel neural network image completion architecture is provided. In the training process, besides the common damage map, a complement map of the damage map is also used, and in fact, the complement process of the complement map can understand the self-reconstruction process of the image. Therefore, compared with other models, the invention is more robust to image breakage of different areas through parallel input and partial parameter sharing.
2. The invention adds noise in the process of extracting the characteristics, and processes by means of the thought of a variational self-encoder, so that the model is more robust to the extracted characteristics, and the characteristics are smoother in a characteristic space, thereby being more beneficial to subsequent decoding operation and the like.
Drawings
FIG. 1 is a flowchart of an image completion method based on a parallel auto-encoder according to the present invention.
Detailed Description
The invention is further described below with reference to the following figures and examples.
The invention provides an image completion method based on a parallel self-encoder, which comprises the following steps:
1) According to the existing damaged image data, collect complete image data of a similar style to help train the model, where similar style means data close to the damaged images, and divide this data into a training set and a test set at a set ratio. Specifically: first, intact image data of a style similar to the target images to be repaired is collected to train the model, where stylistic similarity means that the damaged images and the collected images have similar content and colors. For example, to repair a damaged portrait photo, some intact face images need to be collected; to repair street-view photos, street-view pictures with a similar architectural style need to be collected.
2) Copy the training set and the test set from step 1), artificially damage the copies, and train the model with the damaged training-set pictures and their corresponding intact pictures. Specifically: first, after copying the obtained training set and test set, the copied data is artificially damaged in preparation for subsequent model training. The artificial damage can imitate the damage observed in the actual pictures, or apply random damage at an artificially set damage ratio (damaged pixel area / whole-image size).
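The random-damage step described above can be sketched in numpy. The function name, the uniform-random choice of damaged positions, and the zero-fill convention are illustrative assumptions rather than details fixed by the patent:

```python
import numpy as np

def make_damage_mask(h, w, damage_ratio, rng=None):
    """Binary mask M with M == 1 at damaged pixels.

    `damage_ratio` is damaged-pixel area divided by whole-image size,
    matching the ratio described in the text. Damaged positions are
    drawn uniformly at random (an assumption; real damage may be
    structured).
    """
    rng = np.random.default_rng(rng)
    n_damaged = int(round(damage_ratio * h * w))
    mask = np.zeros(h * w, dtype=np.uint8)
    damaged_idx = rng.choice(h * w, size=n_damaged, replace=False)
    mask[damaged_idx] = 1
    return mask.reshape(h, w)

# Artificially damage a copy of an intact training picture.
image = np.random.default_rng(1).random((64, 64, 3))  # stand-in picture
M = make_damage_mask(64, 64, damage_ratio=0.25, rng=0)
damaged = image * (1 - M)[..., None]                  # damaged pixels zeroed
```

At a damage ratio of 0.25 on a 64x64 image, exactly a quarter of the pixel positions are marked as damaged.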
Then, the complement map (the image formed by the pixels that need to be completed) and the damage map from the training set are input into the model's self-encoder to obtain their respective low-dimensional features.
To improve the robustness of the model, different Gaussian white noise of the same dimensionality is added to the low-dimensional features of the damage map and of the complement map.
The resulting low-dimensional features are input into the decoder of the model's self-encoder to obtain a preliminarily predicted repair map.
Finally, the preliminary repair maps are placed into a generative adversarial network (GAN) framework: the self-encoder serves as the generator, and discriminators are additionally introduced to adversarially optimize the preliminary repair results in the hope of obtaining better repairs. Here, two discriminators are used to judge, respectively, the repair result from the damage map and the generation result from the complement map.
The loss function used in this process is divided into two parts, where, for a generator consisting of parallel encoders,
Figure BDA0002628264080000051
wherein
Figure BDA0002628264080000052
In order to generate the pair-wise loss-immunity function,
Figure BDA0002628264080000053
in order to average the absolute error of the signal,
Figure BDA0002628264080000054
is the Kullback-Leibler divergence. Alpha is alpha1、α2、β1、β2、γ1、γ2Is a set hyper-parameter.
In the case of a network of discriminators,
Figure BDA0002628264080000055
wherein
Figure BDA0002628264080000056
Is a function of the discrimination countermeasure loss. E is the same as1,∈2Is a hyper-parameter.
3) Repair the damaged test-set pictures with the model obtained in step 2), evaluate them against the corresponding real intact pictures, and fine-tune the model according to the remaining defects.
Examples
As shown in FIG. 1, suppose the damage map is I_m, the corresponding part that needs to be filled is I_g (the complement for short), and the complete image (obtained by combining the pixels of I_m and I_g) is I_gt. The damage map I_m and the complement map I_g are passed through the encoder to obtain the corresponding picture features, a noise vector randomly sampled from a standard Gaussian distribution is added to each of the features, and the results are input into the decoder part to make a rough prediction. Adversarial optimization is then performed through separate discriminator neural networks to improve the quality of the model. In addition, during model training there is a binary mask M whose value is 1 where a picture pixel is damaged and 0 where it is intact. During training, the damage map I_m (and likewise the complement I_g) can be obtained by element-wise multiplication of the mask (or its inverse) with the real image I_gt. The inventive model is based on a generative adversarial network; precisely, the input is a damage map with a supervision signal, and a repair map is desired for it. The main structure of the generative adversarial network is divided into two parts, the discriminator network and the generator network. The generator part is introduced first.
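Under the mask convention just stated (M = 1 at damaged pixels), both maps fall out of element-wise products with the real image. Which of M and 1 − M yields which map is an assumption here, chosen so that the two parts recombine into I_gt as the text requires:

```python
import numpy as np

# Toy real image I_gt and binary mask M (1 = damaged position),
# following the convention stated in the text.
rng = np.random.default_rng(0)
I_gt = rng.random((8, 8))
M = (rng.random((8, 8)) < 0.3).astype(float)

I_g = M * I_gt          # complement: the pixels that need to be filled in
I_m = (1 - M) * I_gt    # damage map: the surviving pixels

# Combining the two recovers the complete image, as the text describes.
assert np.allclose(I_m + I_g, I_gt)
```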
In principle the generator network can be regarded as a self-encoder (autoencoder): with input I_m it generates Î_g^(m), and Î_g^(m) is expected to be as close as possible to I_g. In current generator networks, most image-completion models are trained with a single I_m as the generator input. The invention is based on such a model but, in order to improve the model's completion ability and adapt to different missing pixels in each region of the image, uses not only I_m but also I_g as input, as shown in FIG. 1. First, I_m and I_g are input into encoders formed by neural networks to obtain the low-dimensional features of the pictures; notably, the parameters of the first 5 layers of the encoder networks for the damage map and the complement map are shared. After the low-dimensional features are obtained, white noise is added to the features obtained from the respective images (damage map and complement map). Specifically, taking the damage map I_m as an example, given its feature f_m, a sample ε_m drawn from the standard normal distribution N(0, I) is added to obtain a new feature z_m, defined as:

z_m = f_m + ε_m    (1)

A similar operation on the complement map yields its feature z_c. After z_m and z_c are obtained, they are put into decoders formed by neural networks; the decoders likewise share weights, the difference from the encoders being that the shared layers are the last few rather than the first few. Passing z_m and z_c through the decoders yields the results Î_g^(m) and Î_g^(c), respectively.
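Equation (1) above can be sketched directly; `add_noise` is a hypothetical helper name, and the unit-variance Gaussian follows the fixed-variance reading given in the text:

```python
import numpy as np

def add_noise(f, rng=None):
    """Equation (1): z = f + eps with eps ~ N(0, I).

    The encoder feature is treated as the mean of a Gaussian with
    fixed unit variance, matching the text's remark that this is a
    special case of the VAE reparameterization trick.
    """
    rng = np.random.default_rng(rng)
    return f + rng.standard_normal(f.shape)

f_m = np.zeros(128)        # stand-in low-dimensional feature
z_m = add_noise(f_m, rng=0)
```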
It should be noted that the operation of formula (1) can be regarded as a special case, with the variance fixed to 1, of the reparameterization trick in the variational auto-encoder (VAE); therefore this part of the inventive model can be put into the variational auto-encoder framework for optimization. Thus, for the result Î_g^(m) generated from the damage map I_m, the loss function L_m is used:

L_m = β1 ||Î_g^(m) − I_g||_1 + γ1 D_KL( N(f_m, I) || N(0, I) )    (2)

Similarly, for the complete map Î_g^(c) generated from the complement map, the loss function L_c is used:

L_c = β2 ||Î_g^(c) − I_g||_1 + γ2 D_KL( N(f_c, I) || N(0, I) )    (3)
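A minimal numpy sketch of a reconstruction loss of this shape (mean absolute error plus a KL term), under the assumption of the fixed-unit-variance case, where KL( N(f, I) || N(0, I) ) reduces to the closed form 0.5·||f||². The function names are illustrative:

```python
import numpy as np

def kl_to_standard_normal(f):
    """Closed-form KL( N(f, I) || N(0, I) ) with variance fixed to 1:
    each coordinate contributes 0.5 * f_i**2, so the total is 0.5 * ||f||^2."""
    return 0.5 * np.sum(f ** 2)

def reconstruction_loss(pred, target, f, beta, gamma):
    """Loss of the shape used above: beta * MAE + gamma * KL."""
    mae = np.mean(np.abs(pred - target))
    return beta * mae + gamma * kl_to_standard_normal(f)

pred = np.ones((4, 4))
target = np.zeros((4, 4))
f = np.ones(2)
loss = reconstruction_loss(pred, target, f, beta=20.0, gamma=1.0)
# MAE = 1, KL = 0.5 * 2 = 1, so loss = 20 * 1 + 1 * 1 = 21
```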
The discriminator networks are described next. In a generative adversarial network, the goal of the discriminator is to distinguish true from false, i.e., whether a sample (picture, video, etc.) comes from the generator or from the real data set. Likewise, in the image-completion model, the goal of the inventive model's discriminators is to distinguish whether an intact picture comes from the model's repair network or is a real original picture. But since the model has a parallel generator network, it outputs two results, Î_g^(m) and Î_g^(c); the inventive model therefore uses two different discriminator networks and feeds the two results into them separately (FIG. 1). Specifically, the result Î_g^(m) generated from the damage map and the real result I_g are judged by discriminator network D1; following the least-squares generative adversarial network (LSGAN), the optimization function assisting the generator network is

L_adv^(G1) = E[ (D1(Î_g^(m)) − 1)^2 ]    (4)

and the optimization function for discriminator network D1 is

L_adv^(D1) = (1/2) E[ (D1(I_g) − 1)^2 ] + (1/2) E[ D1(Î_g^(m))^2 ]    (5)

Similarly, the result Î_g^(c) generated from the complement map and the real result I_g are judged by discriminator network D2; the optimization function assisting the generator is

L_adv^(G2) = E[ (D2(Î_g^(c)) − 1)^2 ]    (6)

and the optimization function for discriminator network D2 is

L_adv^(D2) = (1/2) E[ (D2(I_g) − 1)^2 ] + (1/2) E[ D2(Î_g^(c))^2 ]    (7)
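The least-squares adversarial losses of this LSGAN form can be sketched as below; the discriminator outputs are stand-in score arrays, not a real network:

```python
import numpy as np

def lsgan_generator_loss(d_fake):
    """Least-squares GAN generator loss: E[ (D(x_fake) - 1)^2 ].
    Minimizing it pushes the discriminator's score on fakes toward 1."""
    return np.mean((d_fake - 1.0) ** 2)

def lsgan_discriminator_loss(d_real, d_fake):
    """Least-squares GAN discriminator loss:
    0.5 * E[ (D(x_real) - 1)^2 ] + 0.5 * E[ D(x_fake)^2 ]."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

# Stand-in discriminator outputs for a small batch (raw scores).
d_real = np.array([0.9, 1.1])
d_fake = np.array([0.2, -0.2])
g_loss = lsgan_generator_loss(d_fake)
d_loss = lsgan_discriminator_loss(d_real, d_fake)
```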
In general, the loss function of the whole repair model can be divided into two parts. One part is the optimization of the generator network; based on (2), (3), (4) and (6), the total loss function for optimizing the generator network is

L_G = α1 L_adv^(G1) + α2 L_adv^(G2) + L_m + L_c    (8)

and, based on (5) and (7), the total loss function for optimizing the discriminators is

L_D = ε1 L_adv^(D1) + ε2 L_adv^(D2)    (9)

where α1 = α2 = 0.5, β1 = β2 = 20, γ1 = γ2 = 1; they are all hyper-parameters.
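The total losses (8) and (9) with the stated hyper-parameter values can be sketched as follows. The values of ε1 and ε2 are not given in the text and are assumed to be 1 here; the grouping of the β and γ terms inside L_m and L_c follows the reconstruction above:

```python
# Hyper-parameters given in the text.
alpha1 = alpha2 = 0.5
beta1 = beta2 = 20.0
gamma1 = gamma2 = 1.0
eps1 = eps2 = 1.0   # assumed: the text does not state eps1, eps2

def total_generator_loss(adv_g1, adv_g2, L_m, L_c):
    """Total loss (8): adversarial terms weighted by alpha, plus the
    reconstruction losses L_m and L_c (which already carry beta and gamma)."""
    return alpha1 * adv_g1 + alpha2 * adv_g2 + L_m + L_c

def total_discriminator_loss(adv_d1, adv_d2):
    """Total loss (9): weighted sum of the two discriminator losses."""
    return eps1 * adv_d1 + eps2 * adv_d2
```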
The whole process of the invention is as follows:
(1) After the neural network model is built, 1 batch is randomly drawn from the data set; each batch contains equal numbers of damage maps, complement maps and real maps. The damage maps and complement maps are input into the generator network to obtain the results Î_g^(m) and Î_g^(c). These are then input into the discriminator networks D1 and D2, and the discriminator network parameters are optimized and updated by the back-propagation algorithm using equation (9), with the generator network parameters held fixed.
(2) The discriminator network parameters are held fixed, and the generator is updated according to equation (8) using the same back-propagation algorithm.
(3) Steps (1) and (2) are repeated until the network converges.
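The alternating optimization of steps (1)-(3) can be sketched as a skeleton. The update functions are stand-ins that only record the update order; real implementations would apply the back-propagation updates under equations (9) and (8):

```python
# Skeleton of the alternating training described in steps (1)-(3).
log = []

def update_discriminators(batch):
    """Step (1): update D1, D2 under equation (9); generator held fixed."""
    log.append("D")

def update_generator(batch):
    """Step (2): update the generator under equation (8); D1, D2 held fixed."""
    log.append("G")

def converged(step, max_steps=3):
    """Stand-in convergence test (fixed step budget for illustration)."""
    return step >= max_steps

step = 0
while not converged(step):
    batch = object()               # stand-in for one random batch of triplets
    update_discriminators(batch)   # step (1)
    update_generator(batch)        # step (2)
    step += 1                      # step (3): repeat until convergence
```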

Claims (3)

1. An image completion method based on a parallel self-encoder is characterized by comprising the following steps:
1) according to the existing damaged image data, collect complete image data of a similar style to help train the model, where similar style means data close to the damaged images, and divide this data into a training set and a test set at a set ratio;
2) copy the training set and the test set from step 1), artificially damage the copies, and train the model with the damaged training-set pictures and their corresponding intact pictures;
3) repair the damaged test-set pictures with the model obtained in step 2), evaluate them against the corresponding real intact pictures, and fine-tune the model according to the remaining defects.
2. The image completion method based on the parallel self-encoder as claimed in claim 1, wherein the step 1) is implemented as follows:
firstly, intact image data similar in style to the target images to be repaired is collected to train the model, where similar style means that the damaged images and the collected images have similar content and colors.
3. The image completion method based on the parallel self-encoder as claimed in claim 1, wherein the step 2) is implemented as follows:
firstly, after copying the obtained training set and test set, the copied data is artificially damaged in preparation for subsequent model training; the artificial damage either imitates the damage observed in the actual pictures, or applies random damage at a set damage ratio, i.e., damaged pixel area divided by whole-image size;
then the complement maps and damage maps in the training set are input into the model's self-encoder to obtain their respective low-dimensional features; to improve the robustness of the model, different Gaussian white noise of the same dimensionality is added to the low-dimensional features of the damage map and of the complement map;
the resulting low-dimensional features are input into the decoder of the model's self-encoder to obtain a preliminarily predicted repair map;
finally, the preliminary repair maps are placed into a generative adversarial network framework: the self-encoder serves as the generator, and discriminators are additionally introduced to adversarially optimize the preliminary repair results in the hope of obtaining better repairs; two discriminators are used to judge, respectively, the repair result from the damage map and the generation result from the complement map;
the loss function used in this process is divided into two parts, where, for the generator composed of the parallel encoders:

L_G = α1 L_adv^(G1) + α2 L_adv^(G2) + β1 L_MAE^(m) + β2 L_MAE^(c) + γ1 L_KL^(m) + γ2 L_KL^(c)

where L_adv^(G) is the generative adversarial loss function, L_MAE is the mean absolute error, L_KL is the Kullback-Leibler divergence, and α1, α2, β1, β2, γ1, γ2 are set hyper-parameters;

for the discriminator networks:

L_D = ε1 L_adv^(D1) + ε2 L_adv^(D2)

where L_adv^(D) is the discriminator adversarial loss function and ε1, ε2 are hyper-parameters.
CN202010803512.7A 2020-08-11 2020-08-11 Image completion method based on parallel self-encoder Active CN112116535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010803512.7A CN112116535B (en) 2020-08-11 2020-08-11 Image completion method based on parallel self-encoder

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010803512.7A CN112116535B (en) 2020-08-11 2020-08-11 Image completion method based on parallel self-encoder

Publications (2)

Publication Number Publication Date
CN112116535A 2020-12-22
CN112116535B 2022-08-16

Family

ID=73804853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010803512.7A Active CN112116535B (en) 2020-08-11 2020-08-11 Image completion method based on parallel self-encoder

Country Status (1)

Country Link
CN (1) CN112116535B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108520503A (en) * 2018-04-13 2018-09-11 湘潭大学 A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image
CN109191402A (en) * 2018-09-03 2019-01-11 武汉大学 The image repair method and system of neural network are generated based on confrontation
CN109801230A (en) * 2018-12-21 2019-05-24 河海大学 A kind of image repair method based on new encoder structure
US20200160176A1 (en) * 2018-11-16 2020-05-21 Royal Bank Of Canada System and method for generative model for stochastic point processes
CN111242874A (en) * 2020-02-11 2020-06-05 北京百度网讯科技有限公司 Image restoration method and device, electronic equipment and storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENRUI ZHANG et al.: "Stacking VAE and GAN for Context-aware Text-to-Image Generation", 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM) *
Wang Li et al.: "Completion method for road network traffic flow data based on generative adversarial networks", Journal of Transportation Systems Engineering and Information Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836573A (en) * 2020-12-24 2021-05-25 浙江大学 Lane line image enhancement and completion method based on confrontation generation network
CN113362242A (en) * 2021-06-03 2021-09-07 杭州电子科技大学 Image restoration method based on multi-feature fusion network
CN113362242B (en) * 2021-06-03 2022-11-04 杭州电子科技大学 Image restoration method based on multi-feature fusion network
CN114612425A (en) * 2022-03-09 2022-06-10 广东省科学院广州地理研究所 Natural environment restoration evaluation method, device and equipment based on neural network
CN114612425B (en) * 2022-03-09 2023-08-29 广东省科学院广州地理研究所 Natural environment restoration evaluation method, device and equipment based on neural network
CN116681604A (en) * 2023-04-24 2023-09-01 吉首大学 Qin simple text restoration method based on condition generation countermeasure network
CN116681604B (en) * 2023-04-24 2024-01-02 吉首大学 Qin simple text restoration method based on condition generation countermeasure network

Also Published As

Publication number Publication date
CN112116535B (en) 2022-08-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant