CN111161158B - Image restoration method based on generated network structure - Google Patents

Image restoration method based on generated network structure

Info

Publication number
CN111161158B
CN111161158B (application CN201911217769.8A)
Authority
CN
China
Prior art keywords
network
image
training
layer
discrimination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911217769.8A
Other languages
Chinese (zh)
Other versions
CN111161158A (en)
Inventor
王敏 (Wang Min)
林竹 (Lin Zhu)
岳炜翔 (Yue Weixiang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU
Priority to CN201911217769.8A
Publication of CN111161158A
Application granted
Publication of CN111161158B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image restoration method based on a generated network structure, comprising the following steps: inputting a complete three-channel image and an arbitrary missing image corresponding to it; preprocessing the images; deploying an SE-ResNet-based generation network and a discrimination network; feeding the missing image into the generation network to obtain a restored picture; updating the parameters of the generation network using the restored picture and the original picture; feeding the restored picture and the original picture simultaneously into the discrimination network to train it; jointly training the generation network and the discrimination network until the whole training set has been traversed several times, at which point the training stage ends; and randomly selecting missing images from the test set and passing them through the trained generation network to obtain restored images. By adding the SE-ResNet structure to the generator, the invention significantly reduces the parameter count, improves running speed, mitigates the vanishing-gradient phenomenon, and strengthens the reuse of network features, so that restoration takes less time and the restored images are clearer and more realistic.

Description

Image restoration method based on generation network structure
Technical Field
The invention belongs to the technical field of computer vision and image processing, and particularly relates to an image restoration method based on a generated network structure.
Background
Image restoration is an image processing technique that, without degrading image quality or its natural appearance, uses the undamaged information in an image to repair missing regions or remove specific content. The core challenge is to synthesize visually realistic and semantically reasonable pixels for the missing regions that remain consistent with the existing pixels. Image restoration has important practical value, with many applications in the preservation of artworks, the restoration of old photographs, image-based rendering, and computational photography.
Many image restoration methods exist today, among which those based on deep learning are the most effective. However, the large-scale networks designed by existing methods cannot fully extract and exploit image features, and the sharpness and fidelity of the generated images remain unsatisfactory.
Therefore, a new technical solution is required to solve this problem.
Disclosure of Invention
The invention aims to: overcome the above defects in the prior art by providing an image restoration method based on a generated network structure.
The technical scheme is as follows: to achieve the above object, the present invention provides an image restoration method based on a generated network structure, comprising the following steps:
S1: inputting a complete three-channel image and an arbitrary missing image corresponding to it;
S2: preprocessing the image obtained in step S1 and cropping it to a fixed size;
S3: deploying a generation network based on SE-ResNet and deploying a discrimination network;
S4: feeding the missing image processed in step S2 into the generation network to obtain a restored picture;
S5: updating the parameters of the generation network using the restored picture obtained in step S4 and the original picture;
S6: repeating step S5 until the whole training set has been traversed several times;
S7: feeding the restored picture obtained in step S4 and the original picture simultaneously into the discrimination network to train the discrimination network;
S8: repeating step S7 until the whole training set has been traversed several times;
S9: jointly training the generation network and the discrimination network until the whole training set has been traversed several times, at which point the training stage ends;
S10: randomly selecting missing images from the test set and obtaining restored images through the trained generation network.
Further, step S1 requires arbitrarily generating a mask M that has the same size as the original image and consists only of 0s and 1s; the product of the original image and this arbitrary mask M is the input missing image.
Further, the deployment of the generation network in step S3 is specifically:
A) the encoder part of the generation network first passes the input through a convolutional layer with a 5x5 kernel and stride 1, producing c output channels, and then through three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second contains 4, and the third contains 6, and each sub-residual block consists of two convolutional layers with 3x3 kernels; each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channels, so the final channel count is 4c; the features then pass through two convolutional layers with 3x3 kernels and stride 1 producing 4c output channels, and finally through four dilated convolutional layers with 3x3 kernels and stride 1, also producing 4c output channels, at which point the image has been reduced to 1/4 of its original size with 4c channels; the resulting feature map contains rich feature information and is decoded by the decoder;
B) the decoder part of the generation network first passes the features through a deconvolution layer with a 4x4 kernel, stride 2, and c/2 output channels, then through a convolutional layer with a 3x3 kernel, stride 1, and c/2 output channels; the next pair of layers is the same except that the deconvolution layer outputs c/4 channels and the following ordinary convolutional layer also outputs c/4, at which point the image has been restored to the same size as the original but with c/4 channels; it then passes through two convolutional layers with 3x3 kernels and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer. Every convolution above except the last is followed by BatchNorm and ReLU operations.
Further, the discrimination network in step S3 is divided into a local discrimination network and a global discrimination network, deployed as follows:
a) the local discrimination network judges whether the content generated in the missing region is real or fake; it consists of five convolutional layers and a fully connected layer, where the first five convolutional layers all have kernel size 5 and stride 2 with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each convolutional layer is followed by BatchNorm and ReLU operations; the fully connected layer outputs 1024 units followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network judges whether the global result is real or fake; it has the same structure as the local discrimination network and likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
Further, step S5 is performed by computing the L2 distance between the restored picture and the original picture as the reconstruction loss function of the generation network:
L_rec = ||G(x_0) - x||_2^2
where x is the original picture and G(x_0) is the restored picture, and the gradient update is performed with the AdaDelta optimizer.
Further, the training of the discrimination network in step S7 specifically comprises the following steps:
S7-1: fixing the parameters of the generation network, generating a random missing image, and feeding it into the trained generation network to obtain a restored image G(x_0);
S7-2: feeding two groups of image pairs into the discrimination network, where ⊙ denotes element-wise multiplication: the first group consists of the original image x and the composite image x⊙(1-M) + G(x_0)⊙M, which splices the undamaged part of the original image with the generated content in the missing region, i.e. the first group of inputs to the discrimination network is x and x⊙(1-M) + G(x_0)⊙M; the second group consists of the local part of the original picture M⊙x and the local part of the restored image M⊙G(x_0);
S7-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S7-2, yielding the two losses L_real and L_fake; the final loss of the discrimination network is (L_real + L_fake)·α/2, and the gradient update is performed with the AdaDelta optimizer.
Further, the specific steps of jointly training the generation network and the discrimination network in step S9 are as follows:
S9-1: training the discrimination network by the method of step S7;
S9-2: training the generation network: in addition to training by the method of step S5, the generation network is also trained jointly with the discrimination network by taking the negation of the loss L_fake obtained in step S7-3 as an auxiliary adversarial loss, i.e. L_adv = -L_fake, so that the loss function of the generator is L_G = L_rec + α·L_adv; the gradient update is performed with the AdaDelta optimizer, iterating through the whole training set several times.
The present invention trains a convolutional neural network consisting of an encoder and a decoder to predict the pixels of the missing part. The encoder compresses and extracts image features through layer-by-layer convolution, and the decoder restores the compressed image features and generates the pixels of the missing part. To obtain a clear restored image, the semantic features of the image must be fully learned.
Beneficial effects: compared with the prior art, adding the SE-ResNet structure to the generator significantly reduces the parameter count, improves running speed, mitigates the vanishing-gradient phenomenon, and strengthens the reuse of network features, so that restoration takes less time and the restored images are clearer and more realistic.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the generation network structure;
FIG. 3 is a schematic diagram of the discrimination network structure;
FIG. 4 shows test results on the test set.
Detailed Description
The invention is further elucidated with reference to the drawings and the embodiments.
In this embodiment, the image restoration method based on a generated network structure provided by the present invention is applied to face images, taking the CelebA face data set as an example; as shown in FIG. 1, the specific steps are as follows:
S1: inputting a complete three-channel image and an arbitrary missing image corresponding to it, where the missing image requires generating a mask M that has the same size as the original image and consists only of 0s and 1s; the product of the original image and this arbitrary mask M is the input missing image.
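As a concrete illustration of this step, the following is a minimal PyTorch sketch of mask and missing-image generation. The rectangular hole, its coordinates, and the convention that M equals 1 inside the missing region (matching the composite formula x⊙(1-M) + G(x_0)⊙M used in step S8-2 below) are assumptions for illustration, since the patent allows arbitrary 0/1 masks.

```python
import torch

def make_missing_image(x, hole=(64, 64, 96, 96)):
    """Build a binary mask M (1 inside the missing region) and the
    corresponding missing image x0 = x * (1 - M), which keeps only the
    undamaged pixels. The rectangular hole and its coordinates are a
    hypothetical example; the patent allows arbitrary 0/1 masks."""
    _, _, h, w = x.shape
    M = torch.zeros(1, 1, h, w)
    top, left, hh, ww = hole
    M[:, :, top:top + hh, left:left + ww] = 1.0
    return x * (1.0 - M), M

# usage sketch: x0, M = make_missing_image(torch.rand(8, 3, 256, 256))
```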
S2: preprocessing the image obtained in step S1 and cropping it to a fixed size;
S3: deploying a generation network G based on SE-ResNet, giving the generation network structure shown in FIG. 2; the specific deployment comprises the following steps A and B:
A) the encoder part of the generation network first passes the input through a convolutional layer with a 5x5 kernel and stride 1, producing c output channels, and then through three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second contains 4, and the third contains 6, and each sub-residual block consists of two convolutional layers with 3x3 kernels; each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channels, so the final channel count is 4c; the features then pass through two convolutional layers with 3x3 kernels and stride 1 producing 4c output channels, and finally through four dilated convolutional layers with 3x3 kernels and stride 1, also producing 4c output channels, at which point the image has been reduced to 1/4 of its original size with 4c channels; the resulting feature map contains rich feature information and is decoded by the decoder;
B) the decoder part of the generation network first passes the features through a deconvolution layer with a 4x4 kernel, stride 2, and c/2 output channels, then through a convolutional layer with a 3x3 kernel, stride 1, and c/2 output channels; the next pair of layers is the same except that the deconvolution layer outputs c/4 channels and the following ordinary convolutional layer also outputs c/4, at which point the image has been restored to the same size as the original but with c/4 channels; it then passes through two convolutional layers with 3x3 kernels and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer. Every convolution above except the last is followed by BatchNorm and ReLU operations.
In this embodiment, the SE-ResNet added to the generation network inserts an SE (Squeeze-and-Excitation) module into a ResNet residual block. Given an input x with c_1 feature channels, a series of ordinary transformations such as convolution yields features with c_2 channels, which are then recalibrated by three operations. First, the Squeeze operation compresses the features along the spatial dimension, turning each two-dimensional feature channel into a single real number; this number has a global receptive field, and the output dimension matches the number of input feature channels. It characterizes the global distribution of responses over the feature channels and allows even layers close to the input to obtain a global receptive field. Second, the Excitation operation, a mechanism similar to the gates in a recurrent neural network, generates a weight for each feature channel through learned parameters w that explicitly model the correlations between feature channels. Finally, the Reweight operation treats the output weights of the Excitation step as the importance of each feature channel after feature selection and multiplies them channel by channel into the earlier features, recalibrating the original features along the channel dimension.
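The following is a minimal PyTorch sketch of this recalibration and of one SE-based sub-residual block as described in step A above. The reduction ratio of 16 and the 1x1 projection on the skip path are common conventions assumed here, not details given by the patent.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: Squeeze (global average pooling), Excitation
    (two fully connected layers gated by a sigmoid), and Reweight
    (channel-wise rescaling of the input features)."""
    def __init__(self, channels, reduction=16):  # reduction=16 is the usual SE-Net default, assumed here
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))             # Squeeze: one real number per channel
        w = self.fc(s).view(b, c, 1, 1)    # Excitation: learned per-channel weights
        return x * w                       # Reweight: recalibrate channel responses

class SESubResBlock(nn.Module):
    """One sub-residual block of the encoder: two 3x3 convolutions followed
    by an SE module and a skip connection; the first convolution optionally
    downsamples (stride 2) and doubles the channels, as in the first
    sub-residual block of each stage."""
    def __init__(self, cin, cout, downsample=False):
        super().__init__()
        stride = 2 if downsample else 1
        self.body = nn.Sequential(
            nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, stride=1, padding=1),
            nn.BatchNorm2d(cout),
        )
        self.se = SEBlock(cout)
        # 1x1 projection on the skip path when the shape changes (an assumption;
        # the patent does not spell out the shortcut)
        self.skip = (nn.Identity() if cin == cout and not downsample
                     else nn.Conv2d(cin, cout, 1, stride=stride))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.se(self.body(x)) + self.skip(x))
```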
S4: deploying a discrimination network D, giving the discrimination network structure shown in FIG. 3; the discrimination network is divided into a local discrimination network and a global discrimination network, deployed as follows:
a) the local discrimination network judges whether the content generated in the missing region is real or fake; it consists of five convolutional layers and a fully connected layer, where the first five convolutional layers all have kernel size 5 and stride 2 with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each convolutional layer is followed by BatchNorm and ReLU operations; the fully connected layer outputs 1024 units followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network judges whether the global result is real or fake; it has the same structure as the local discrimination network and likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
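A minimal PyTorch sketch of this two-branch discriminator follows. The base channel count c=64, the 128-pixel input size, the padding, the choice to feed the local branch the full-resolution masked image M⊙x from step S8-2, and the final fully connected layer mapping the 2048-dimensional vector to a single sigmoid score are all assumptions, since the patent stops at the concatenation.

```python
import torch
import torch.nn as nn

def conv_bn_relu(cin, cout):
    # 5x5 convolution with stride 2, as in the first five layers of each branch
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=5, stride=2, padding=2),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class DiscBranch(nn.Module):
    """One branch: five 5x5 stride-2 convolutions with c, 2c, 4c, 8c, 8c
    output channels, then a fully connected layer to a 1024-dim vector."""
    def __init__(self, c, in_size):
        super().__init__()
        chans = [3, c, 2 * c, 4 * c, 8 * c, 8 * c]
        self.conv = nn.Sequential(*[conv_bn_relu(a, b) for a, b in zip(chans, chans[1:])])
        feat = in_size // 32  # five stride-2 layers halve the resolution five times
        self.fc = nn.Sequential(nn.Linear(8 * c * feat * feat, 1024), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

class Discriminator(nn.Module):
    """Global branch on the whole image, local branch on the masked image;
    their 1024-dim outputs are concatenated into a 2048-dim vector."""
    def __init__(self, c=64, global_size=128, local_size=128):
        super().__init__()
        self.global_branch = DiscBranch(c, global_size)
        self.local_branch = DiscBranch(c, local_size)
        self.head = nn.Sequential(nn.Linear(2048, 1), nn.Sigmoid())  # assumed scoring head

    def forward(self, x_global, x_local):
        v = torch.cat([self.global_branch(x_global), self.local_branch(x_local)], dim=1)
        return self.head(v)
```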
S5: feeding the missing image preprocessed in step S2 into the generation network to obtain a restored picture.
S6: updating the parameters of the generation network using the restored picture obtained in step S5 and the original picture: the L2 distance between the restored picture and the original picture is computed as the reconstruction loss function of the generation network,
L_rec = ||G(x_0) - x||_2^2
and the gradient update is performed with the AdaDelta optimizer.
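A minimal sketch of one such update follows; the mean-squared form of the L2 loss and the default AdaDelta hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def generator_pretrain_step(G, optimizer, x, x0):
    """One stage-one update of the generation network G: compute the
    reconstruction loss L_rec = ||G(x0) - x||_2^2 between the restored
    picture and the original, then update G with AdaDelta."""
    optimizer.zero_grad()
    loss = F.mse_loss(G(x0), x)  # mean-squared L2 distance
    loss.backward()
    optimizer.step()
    return loss.item()

# usage sketch: optimizer = torch.optim.Adadelta(G.parameters())
```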
S7: repeating step S6 until the entire training set is traversed several times;
S8: the restored picture obtained in step S5 and the original picture are simultaneously fed into the discrimination network to train the discrimination network; the specific training process is as follows:
S8-1: fixing the parameters of the generation network, generating a random missing image, and feeding it into the trained generation network to obtain a restored image G(x_0);
S8-2: feeding two groups of image pairs into the discrimination network, where ⊙ denotes element-wise multiplication: the first group consists of the original image x and the composite image x⊙(1-M) + G(x_0)⊙M, which splices the undamaged part of the original image with the generated content in the missing region, i.e. the first group of inputs to the discrimination network is x and x⊙(1-M) + G(x_0)⊙M; the second group consists of the local part of the original image M⊙x and the local part of the restored image M⊙G(x_0);
S8-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S8-2, yielding the two losses L_real and L_fake; the final loss of the discrimination network is (L_real + L_fake)·α/2, and the gradient update is performed with the AdaDelta optimizer.
S9: repeating step S8 until the entire training set is traversed several times;
S10: jointly training the generation network deployed in step S3 and the discrimination network deployed in step S4; the specific process is as follows:
S10-1: training the discrimination network by the method of step S8;
S10-2: training the generation network: in addition to training by the method of step S6, the generation network is also trained jointly with the discrimination network by taking the negation of the loss L_fake obtained in step S8-3 as an auxiliary adversarial loss, i.e. L_adv = -L_fake, so that the loss function of the generator is L_G = L_rec + α·L_adv; the gradient update is performed with the AdaDelta optimizer.
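A minimal sketch of one joint-stage generator update follows; the value of α and the small constant inside the logarithm are assumptions.

```python
import torch
import torch.nn.functional as F

def generator_joint_step(G, D, opt_g, x, M, alpha=0.0004, eps=1e-8):
    """One joint-stage update of G: L_G = L_rec + alpha * L_adv with
    L_adv = -L_fake = -log(1 - D(composite)); alpha=0.0004 is a
    placeholder value, not taken from the patent."""
    x0 = x * (1.0 - M)
    gx = G(x0)
    fake = x * (1.0 - M) + gx * M
    L_rec = F.mse_loss(gx, x)                  # reconstruction loss
    L_fake = torch.log(1.0 - D(fake, M * gx) + eps).mean()
    loss_g = L_rec + alpha * (-L_fake)         # L_adv = -L_fake
    opt_g.zero_grad()
    loss_g.backward()                          # only G's optimizer steps here
    opt_g.step()
    return loss_g.item()
```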
S11: the training of step S10 is repeated until the entire training set is traversed several times, and the training phase ends.
S12: randomly selecting missing images from the test set and obtaining restored images through the trained generation network.
In this embodiment, the above method produces the test results shown in FIG. 4, where the first and fourth rows are original images, the second and fifth rows are the images to be restored, and the third and sixth rows are the restored images; it can be seen that the restored images have good sharpness and, compared with the original images, good fidelity.

Claims (6)

1. An image restoration method based on a generated network structure, characterized by comprising the following steps:
S1: inputting a complete three-channel image and an arbitrary missing image corresponding to it;
S2: preprocessing the image obtained in step S1;
S3: deploying a generation network based on SE-ResNet and deploying a discrimination network;
S4: feeding the missing image processed in step S2 into the generation network to obtain a restored picture;
S5: updating the parameters of the generation network using the restored picture obtained in step S4 and the original picture;
S6: repeating step S5 until the whole training set has been traversed several times;
S7: feeding the restored picture obtained in step S4 and the original picture simultaneously into the discrimination network to train the discrimination network;
S8: repeating step S7 until the whole training set has been traversed several times;
S9: jointly training the generation network and the discrimination network until the whole training set has been traversed several times, at which point the training stage ends;
S10: randomly selecting missing images from the test set and obtaining restored images through the trained generation network.
2. The image restoration method based on a generated network structure according to claim 1, characterized in that the deployment of the generation network in step S3 is specifically:
A) the encoder part of the generation network first passes the input through a convolutional layer with a 5x5 kernel and stride 1, producing c output channels, and then through three stages of SE-ResNet-based residual blocks: the first stage contains 3 sub-residual blocks, the second contains 4, and the third contains 6, and each sub-residual block consists of two convolutional layers with 3x3 kernels; each stage downsamples the image in the first convolution of its first sub-residual block and doubles the channels, so the final channel count is 4c; the features then pass through two convolutional layers with 3x3 kernels and stride 1 producing 4c output channels, and finally through four dilated convolutional layers with 3x3 kernels and stride 1, also producing 4c output channels, at which point the image has been reduced to 1/4 of its original size with 4c channels;
B) the decoder part of the generation network first passes the features through a deconvolution layer with a 4x4 kernel, stride 2, and c/2 output channels, then through a convolutional layer with a 3x3 kernel, stride 1, and c/2 output channels; the next pair of layers is the same except that the deconvolution layer outputs c/4 channels and the following ordinary convolutional layer also outputs c/4, at which point the image has been restored to the same size as the original but with c/4 channels; it then passes through two convolutional layers with 3x3 kernels and stride 1, with c/8 and 3 output channels respectively, and finally through a sigmoid layer.
3. The image restoration method based on a generated network structure according to claim 1, characterized in that the discrimination network in step S3 is divided into a local discrimination network and a global discrimination network, deployed as follows:
a) the local discrimination network judges whether the content generated in the missing region is real or fake; it consists of five convolutional layers and a fully connected layer, where the first five convolutional layers all have kernel size 5 and stride 2 with output channel counts of c, 2c, 4c, 8c, and 8c in sequence, and each convolutional layer is followed by BatchNorm and ReLU operations; the fully connected layer outputs 1024 units followed by a ReLU layer, so the final output is a 1024-dimensional vector;
b) the global discrimination network judges whether the global result is real or fake; it has the same structure as the local discrimination network and likewise outputs a 1024-dimensional vector;
c) the two 1024-dimensional vectors are concatenated to obtain a 2048-dimensional vector.
4. The image restoration method based on a generated network structure according to claim 1, characterized in that in step S5 the L2 distance between the restored picture and the original picture is computed as the reconstruction loss function of the generation network:
L_rec = ||G(x_0) - x||_2^2
and the gradient update is performed with the AdaDelta optimizer.
5. The image restoration method based on a generated network structure according to claim 1, characterized in that the training of the discrimination network in step S7 specifically comprises the following steps:
S7-1: fixing the parameters of the generation network, generating a random missing image, and feeding it into the trained generation network to obtain a restored image G(x_0);
S7-2: feeding two groups of image pairs into the discrimination network, where ⊙ denotes element-wise multiplication: the first group consists of the original image x and the composite image x⊙(1-M) + G(x_0)⊙M, which splices the undamaged part of the original image with the generated content in the missing region, i.e. the first group of inputs to the discrimination network is x and x⊙(1-M) + G(x_0)⊙M; the second group consists of the local part of the original image M⊙x and the local part of the restored image M⊙G(x_0);
S7-3: constructing the loss function L_D = log D(g_1) + log(1 - D(g_2)), where g_1 and g_2 denote the two groups of inputs obtained in step S7-2, yielding the two losses L_real and L_fake; the final loss of the discrimination network is (L_real + L_fake)·α/2, and the gradient update is performed with the AdaDelta optimizer.
6. The image restoration method based on a generated network structure according to claim 4 or 5, characterized in that the specific steps of jointly training the generation network and the discrimination network in step S9 are as follows:
S9-1: training the discrimination network;
S9-2: training the generation network jointly with the discrimination network, taking the negation of the obtained loss L_fake as an auxiliary adversarial loss to train the generation network, i.e. L_adv = -L_fake, so that the loss function of the generator is L_G = L_rec + α·L_adv; the gradient update is performed with the AdaDelta optimizer, iterating through the whole training set several times.
CN201911217769.8A 2019-12-03 2019-12-03 Image restoration method based on generated network structure Active CN111161158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217769.8A CN111161158B (en) 2019-12-03 2019-12-03 Image restoration method based on generated network structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911217769.8A CN111161158B (en) 2019-12-03 2019-12-03 Image restoration method based on generated network structure

Publications (2)

Publication Number Publication Date
CN111161158A CN111161158A (en) 2020-05-15
CN111161158B (en) 2022-08-26

Family

ID=70556485

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217769.8A Active CN111161158B (en) 2019-12-03 2019-12-03 Image restoration method based on generated network structure

Country Status (1)

Country Link
CN (1) CN111161158B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612721B (en) * 2020-05-22 2023-09-22 哈尔滨工业大学(深圳) Image restoration model training method and device and satellite image restoration method and device
CN111899191B (en) * 2020-07-21 2024-01-26 武汉工程大学 Text image restoration method, device and storage medium
CN112465718B (en) * 2020-11-27 2022-07-08 东北大学秦皇岛分校 Two-stage image restoration method based on generation of countermeasure network
CN114331903B (en) * 2021-12-31 2023-05-12 电子科技大学 Image restoration method and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460746A (en) * 2018-04-10 2018-08-28 武汉大学 A kind of image repair method predicted based on structure and texture layer
CN109801230A (en) * 2018-12-21 2019-05-24 河海大学 A kind of image repair method based on new encoder structure
CN110458765A (en) * 2019-01-25 2019-11-15 西安电子科技大学 The method for enhancing image quality of convolutional network is kept based on perception

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10753997B2 (en) * 2017-08-10 2020-08-25 Siemens Healthcare Gmbh Image standardization using generative adversarial networks

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460746A (en) * 2018-04-10 2018-08-28 武汉大学 A kind of image repair method predicted based on structure and texture layer
CN109801230A (en) * 2018-12-21 2019-05-24 河海大学 A kind of image repair method based on new encoder structure
CN110458765A (en) * 2019-01-25 2019-11-15 西安电子科技大学 The method for enhancing image quality of convolutional network is kept based on perception

Also Published As

Publication number Publication date
CN111161158A (en) 2020-05-15

Similar Documents

Publication Publication Date Title
CN111161158B (en) Image restoration method based on generated network structure
CN111292264B (en) Image high dynamic range reconstruction method based on deep learning
CN110136062B (en) Super-resolution reconstruction method combining semantic segmentation
CN109920021B (en) Face sketch synthesis method based on regularized width learning network
CN113689517B (en) Image texture synthesis method and system for multi-scale channel attention network
CN111612708B (en) Image restoration method based on countermeasure generation network
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN112465718A (en) Two-stage image restoration method based on generation of countermeasure network
CN110895795A (en) Improved semantic image inpainting model method
CN112288632A (en) Single image super-resolution method and system based on simplified ESRGAN
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
Li et al. Single image super-resolution reconstruction based on fusion of internal and external features
An et al. RBDN: Residual bottleneck dense network for image super-resolution
CN114022506A (en) Image restoration method with edge prior fusion multi-head attention mechanism
CN116823647A (en) Image complement method based on fast Fourier transform and selective attention mechanism
CN115965844B (en) Multi-focus image fusion method based on visual saliency priori knowledge
CN114529450B (en) Face image super-resolution method based on improved depth iteration cooperative network
CN113205503B (en) Satellite coastal zone image quality evaluation method
CN112686822B (en) Image completion method based on stack generation countermeasure network
CN111814543B (en) Depth video object repairing and tampering detection method
CN115512100A (en) Point cloud segmentation method, device and medium based on multi-scale feature extraction and fusion
CN113034390A (en) Image restoration method and system based on wavelet prior attention
CN113298814A (en) Indoor scene image processing method based on progressive guidance fusion complementary network
Nie et al. Image restoration from patch-based compressed sensing measurement
Tian et al. A modeling method for face image deblurring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant