CN113689360A - Image restoration method based on a generative adversarial network - Google Patents

Image restoration method based on a generative adversarial network

Info

Publication number
CN113689360A
Authority
CN
China
Prior art keywords
network
image
module
convolution
adain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111160452.2A
Other languages
Chinese (zh)
Other versions
CN113689360B (en)
Inventor
史明光
李明娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei University of Technology
Original Assignee
Hefei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei University of Technology filed Critical Hefei University of Technology
Priority to CN202111160452.2A priority Critical patent/CN113689360B/en
Publication of CN113689360A publication Critical patent/CN113689360A/en
Application granted granted Critical
Publication of CN113689360B publication Critical patent/CN113689360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image restoration method based on a generative adversarial network, comprising the following steps: 1. preprocess the original images to obtain the images to be restored; 2. encode the image data set Z to be restored to obtain an encoded image data set; 3. construct a generative adversarial network consisting of a generator network and a discriminator network; 4. train the pre-constructed adversarial network model with the encoded image data set. By collecting the key features of the images to be restored and training the generator and discriminator models on them, the method effectively improves the efficiency of model optimization and the quality of the restored images.

Description

Image restoration method based on a generative adversarial network
Technical Field
The invention relates to the field of image restoration, and in particular to an image restoration method based on a generative adversarial network.
Background
Image restoration is an active topic in image processing and lies at the intersection of several disciplines, including pattern recognition and computer vision. It refers to processing damaged or missing local regions of an image according to specific requirements so as to restore the image's integrity. The repair process must respect principles such as similarity, structural and textural consistency, and structure priority. Restoration methods based on deep learning are a recent development: a deep neural network is trained on large amounts of data to learn a mapping of the nonlinear, complex relations among training samples. Researchers have proposed a variety of image restoration methods on this basis, which are widely applied to the protection of ancient paintings and cultural relics, medicine, the film industry, and other fields.
Current image restoration techniques repair the image to be restored with a preset algorithm, but the result often differs substantially from the original image, producing boundary artifacts and textures inconsistent with the surrounding area. Training is slow or unstable, heavily damaged images are prone to blurring, and the restoration results are unsatisfactory, which makes image restoration work inconvenient.
Disclosure of Invention
The invention aims to overcome the above deficiencies of the prior art by providing an image restoration method based on a generative adversarial network. By collecting the key features of the image to be restored and training a generator model and a discriminator model on them, the method effectively improves the efficiency of model optimization and the quality of the restored image.
In order to achieve this purpose, the invention adopts the following technical scheme:
An image restoration method based on a generative adversarial network according to the invention is characterized by comprising the following steps:
Step 1, preprocess the original images to obtain the images to be restored;
Step 1.1, extract m images from the original image set to form an image training set M = {x1, x2, …, xm}, where xm denotes the m-th image; for each image xm, after selecting its key feature region, randomly divide the non-key-feature region into fixed-size regions that serve as different damaged regions;
Step 1.2, compare the similarity of the neighborhood blocks around each damaged region and fill the damaged region with the most similar neighborhood block, thereby obtaining the image to be restored for the m-th image xm and hence the data set of images to be restored Z = {Z1, Z2, …, Zm};
Step 2, encode the image data set Z to be restored to obtain an encoded image data set Z′ = {Z′1, Z′2, …, Z′m}, where Z′m denotes the encoded data of the m-th image to be restored;
Step 3, construct a generative adversarial network consisting of a generator network and a discriminator network;
Step 3.1, rebuild the generator of the StyleGAN network and use it as the generator of the generative adversarial network;
The generator consists of a mapping network f and a synthesis network g;
The mapping network f is composed of a fully connected layers, each containing s neurons;
The encoded image data set Z′ = {Z′1, Z′2, …, Z′m} is normalized and fed into the mapping network f for spatial mapping, which outputs the intermediate mapping vectors W = {W1, W2, …, Wm}, where Wm denotes the m-th intermediate mapping vector;
The synthesis network consists of two composite sub-networks; the first is formed by connecting a first AdaIN module, a first convolution module, and a second AdaIN module in series, the output of each module serving as the input of the next;
The second is formed by connecting a second convolution module, a third AdaIN module, a third convolution module, and a fourth AdaIN module in series; the first, second, and third convolution modules each consist of an N × N convolution layer, a batch-normalization layer, and a LeakyReLU activation function, and random noise is added after each convolution module;
A constant tensor of dimension c × c × e serves as one input of the first AdaIN module of the first composite network, and the intermediate mapping vectors W = {W1, W2, …, Wm} serve as its other input; the first composite network then outputs a set of feature maps of dimension c × c × e;
This set is fed into the second convolution module of the second composite network; the convolution results pass through the third AdaIN module and the third convolution module in turn, and the fourth AdaIN module outputs the restored image data set of dimension f × f × e, G(Z) = {G(Z1), G(Z2), …, G(Zm)};
Step 3.2, construct the discriminator of the generative adversarial network;
The discriminator consists of b convolution modules, d fully connected layers, and a Sigmoid activation function connected in series; each convolution module contains a convolution layer with an N × N kernel and a LeakyReLU activation function, and every convolution module except the first is followed by a batch-normalization layer;
The restored image data set G(Z) = {G(Z1), G(Z2), …, G(Zm)} and the image training set M = {x1, x2, …, xm} are fed into the discriminator for processing; the Sigmoid activation function outputs true/false judgments that are fed back to the generator to accelerate training; after training, the restoration results passed through the discriminator are D(G(Z)) = {D(G(Z1)), D(G(Z2)), …, D(G(Zm))}, where D(G(Zm)) denotes the restoration result of the m-th image to be restored Zm;
Step 4, establish the adversarial objective function L_D shown in formula (1):

L_D = E_{x~P_data}[log D(x)] + E_{Z~P_Z}[log(1 − D(G(Z)))]   (1)

In formula (1), E denotes expectation; D(xm) denotes the output of the discriminator when the m-th original image xm is input; x~P_data denotes drawing an image x from the distribution P_data of the image training set M; Z~P_Z denotes drawing an image to be restored from the distribution P_Z of the data set Z to be restored;
Step 5, establish the generator objective function L_G shown in formula (2):

L_G = E_{Z~P_Z}[log(1 − D(G(Z)))] + λ·L_C   (2)

In formula (2), L_C denotes the content loss obtained by formula (3), and λ is a regularization parameter:

L_C = ||D(G(Zm)) − xm||²   (3)
Step 6, based on the adversarial objective function L_D and the generator objective function L_G, train the generative adversarial network with the encoded image data set Z′ = {Z′1, Z′2, …, Z′m} until the outputs of the discriminator are all true, thereby obtaining a trained image restoration model for restoring damaged images.
The image restoration method based on a generative adversarial network is further characterized in that every AdaIN module except the first takes as inputs the intermediate mapping vectors W = {W1, W2, …, Wm} and the output of the preceding module, and any AdaIN module computes its output AdaIN(xm) by formula (4):

AdaIN(xm) = ys,m · (xm − μ(xm)) / σ(xm) + yb,m   (4)

In formula (4), ys,m denotes the scaling factor of the m-th image xm, yb,m its bias factor, μ(xm) its mean, and σ(xm) its standard deviation; for the first AdaIN module, xm is the constant tensor.
Compared with the prior art, the invention has the following beneficial effects:
1. By collecting the key features of the image to be restored and filling damaged regions with similar blocks during preprocessing to form the training set, the method better preserves the details and semantics of the original image and facilitates model training;
2. The restored image produced by the generator model is used as the input of the discriminator model, and the output of the discriminator model is fed back to the generator model, so the two models of the adversarial network are continuously optimized against each other; the resulting optimized image restoration model effectively improves both the efficiency of model optimization and the quality of the restored images.
Drawings
FIG. 1 is a flow chart of the image restoration method of the present invention;
FIG. 2 is a schematic diagram of the generative adversarial network model of the present invention;
FIG. 3 is a schematic diagram of the generator of the adversarial network of the present invention.
Detailed Description
In this embodiment, as shown in FIG. 1, an image restoration method based on a generative adversarial network comprises the following steps:
Step 1, preprocess the original images to obtain the images to be restored;
Step 1.1, extract 6000 images from the original image set to form an image training set M = {x1, x2, …, xm}, where xm denotes the m-th image; for each image xm, select its key feature region, preserving as many image details as possible, and randomly divide the non-key-feature region into fixed-size regions that serve as different damaged regions;
Step 1.2, compare the similarity of the neighborhood blocks around each damaged region and fill the damaged region with the most similar neighborhood block; using similar surrounding blocks makes the structure and texture of the damaged region resemble the key feature region and reduces the loss; this yields the image to be restored for the m-th image xm and hence the data set of images to be restored Z = {Z1, Z2, …, Zm};
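Step 1.2 above admits a compact sketch: compare the ring of undamaged pixels around the damaged block with the ring around every candidate block (here by sum of squared differences) and copy in the most similar candidate. This is a minimal single-channel illustration; the function name, the ring width `pad`, and the SSD criterion are assumptions, since the patent specifies only "highest similarity".

```python
import numpy as np

def best_fill_block(img, mask, y0, x0, h, w):
    """Fill the damaged h x w block at (y0, x0) with the undamaged
    block whose surrounding context is most similar (illustrative
    sketch; names and criterion are not fixed by the patent)."""
    H, W = img.shape
    pad = 2  # width of the context ring compared around each block

    def ring(a, y, x):
        """Context window around block (y, x) with its interior zeroed,
        so only the surrounding ring contributes to the comparison."""
        yl, yh = max(y - pad, 0), min(y + h + pad, H)
        xl, xh = max(x - pad, 0), min(x + w + pad, W)
        window = a[yl:yh, xl:xh].astype(float).copy()
        window[y - yl:y - yl + h, x - xl:x - xl + w] = 0.0
        return window

    target = ring(img, y0, x0)
    best, best_ssd = None, np.inf
    for y in range(0, H - h + 1):
        for x in range(0, W - w + 1):
            if (y, x) == (y0, x0) or mask[y:y + h, x:x + w].any():
                continue  # skip the hole itself and damaged candidates
            cand = ring(img, y, x)
            if cand.shape != target.shape:
                continue  # clipped ring near the border, not comparable
            ssd = float(((cand - target) ** 2).sum())
            if ssd < best_ssd:
                best_ssd, best = ssd, (y, x)
    by, bx = best
    out = img.copy()
    out[y0:y0 + h, x0:x0 + w] = img[by:by + h, bx:bx + w]
    return out
```

For multi-channel images the same comparison would simply run on stacked channels.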
Step 2, encode the image data set Z to be restored to obtain an encoded image data set Z′ = {Z′1, Z′2, …, Z′m}, where Z′m denotes the encoded data of the m-th image to be restored;
Step 3, as shown in FIG. 2, construct a generative adversarial network consisting of a generator network and a discriminator network;
Step 3.1, rebuild the generator of the StyleGAN network and use it as the generator of the generative adversarial network;
As shown in FIG. 3, the generator consists of a mapping network f and a synthesis network g: the mapping network f controls the style of the generated image, and the synthesis network g generates the image, like a conventional generator;
The mapping network f is composed of 8 fully connected layers, each containing 128 neurons;
The encoded image data set Z′ = {Z′1, Z′2, …, Z′m} is normalized and fed into the mapping network f for spatial mapping, which outputs the intermediate mapping vectors W = {W1, W2, …, Wm}, where Wm denotes the m-th intermediate mapping vector;
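As a shape-level sketch, the mapping network f described above (8 fully connected layers of 128 neurons) can be written as a plain forward pass. The random weights, the LeakyReLU slope of 0.2, and the L2 normalization of the input code are illustrative assumptions; a real network would learn the weights:

```python
import numpy as np

def mapping_network(z, n_layers=8, width=128, seed=0):
    """Forward pass of the mapping network f: n_layers fully connected
    layers of `width` neurons each.  Weights are random and fixed by
    `seed` purely for illustration."""
    rng = np.random.default_rng(seed)
    h = z / (np.linalg.norm(z) + 1e-8)       # normalize the latent code
    for _ in range(n_layers):
        W = rng.standard_normal((width, h.shape[0])) / np.sqrt(h.shape[0])
        h = W @ h                            # fully connected layer
        h = np.where(h > 0.0, h, 0.2 * h)    # LeakyReLU activation
    return h                                 # intermediate mapping vector
```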
The synthesis network g consists of two composite sub-networks; the first is formed by connecting a first AdaIN module, a first convolution module, and a second AdaIN module in series, the output of each module serving as the input of the next;
The second is formed by connecting a second convolution module, a third AdaIN module, a third convolution module, and a fourth AdaIN module in series; the first, second, and third convolution modules each consist of a 3 × 3 convolution layer, a batch-normalization layer, and a LeakyReLU activation function, and random noise is added after each convolution module;
Among the AdaIN modules, every module except the first takes as inputs the intermediate mapping vectors W = {W1, W2, …, Wm} and the output of the preceding module, and any AdaIN module computes its output AdaIN(xm) by formula (1):

AdaIN(xm) = ys,m · (xm − μ(xm)) / σ(xm) + yb,m   (1)

In formula (1), ys,m denotes the scaling factor of the m-th image xm, yb,m its bias factor, μ(xm) its mean, and σ(xm) its standard deviation; for the first AdaIN module, xm is the constant tensor.
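Formula (1) can be implemented directly on a (C, H, W) feature map. Taking μ and σ per channel over the spatial axes is the standard AdaIN reading; the patent does not spell out the axes, so that choice is an assumption here:

```python
import numpy as np

def adain(x, y_s, y_b, eps=1e-8):
    """AdaIN(x_m) = y_s * (x_m - mu(x_m)) / sigma(x_m) + y_b.

    x is a (C, H, W) feature map; y_s and y_b are per-channel scaling
    and bias factors derived from the intermediate mapping vector."""
    mu = x.mean(axis=(1, 2), keepdims=True)      # per-channel mean
    sigma = x.std(axis=(1, 2), keepdims=True)    # per-channel std
    return y_s[:, None, None] * (x - mu) / (sigma + eps) + y_b[:, None, None]
```

After AdaIN, each channel of the output has mean y_b and standard deviation y_s, which is how the style factors steer the synthesis network.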
A constant tensor of dimension 64 × 64 × 128 serves as one input of the first AdaIN module of the first composite network, and the intermediate mapping vectors W = {W1, W2, …, Wm} serve as its other input; the first composite network then outputs a set of feature maps of dimension 64 × 64 × 128;
This set is fed into the second convolution module of the second composite network; the convolution results pass through the third AdaIN module and the third convolution module in turn, and the fourth AdaIN module outputs the restored image data set of dimension 128 × 128 × 128, G(Z) = {G(Z1), G(Z2), …, G(Zm)};
Step 3.2, construct the discriminator of the generative adversarial network;
The discriminator consists of 4 convolution modules, 1 fully connected layer, and a Sigmoid activation function connected in series; each convolution module contains a convolution layer with a 3 × 3 kernel and a LeakyReLU activation function, and the remaining 3 convolution modules after the first are each followed by a batch-normalization layer;
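A toy version of this discriminator data flow (4 conv modules with 3 × 3 kernels and LeakyReLU, then a fully connected layer and a Sigmoid) might look as follows; the random weights, stride 2, single channel, and omission of batch normalization are simplifications for illustration only:

```python
import numpy as np

def conv3x3(x, rng, stride=2):
    """One stand-in conv module: a valid 3x3 convolution with random
    weights followed by LeakyReLU.  Single channel, no batch norm."""
    k = rng.standard_normal((3, 3)) / 3.0
    H, W = x.shape
    out = np.array([[float((x[i:i + 3, j:j + 3] * k).sum())
                     for j in range(0, W - 2, stride)]
                    for i in range(0, H - 2, stride)])
    return np.where(out > 0.0, out, 0.2 * out)   # LeakyReLU

def discriminator(img, seed=1):
    """4 conv modules, then one fully connected layer and a Sigmoid,
    mirroring the data flow of step 3.2 at the shape level."""
    rng = np.random.default_rng(seed)
    h = img
    for _ in range(4):
        h = conv3x3(h, rng)                      # 64 -> 31 -> 15 -> 7 -> 3
    v = h.ravel()
    w = rng.standard_normal(v.size) / np.sqrt(v.size)
    return 1.0 / (1.0 + np.exp(-float(v @ w)))   # true/false probability
```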
The restored image data set G(Z) = {G(Z1), G(Z2), …, G(Zm)} and the image training set M = {x1, x2, …, xm} are fed into the discriminator for processing; the Sigmoid activation function outputs true/false judgments that are fed back to the generator to accelerate training; after training, the restoration results passed through the discriminator are D(G(Z)) = {D(G(Z1)), D(G(Z2)), …, D(G(Zm))}, where D(G(Zm)) denotes the restoration result of the m-th image to be restored Zm;
Step 4, establish the adversarial objective function L_D shown in formula (2):

L_D = E_{x~P_data}[log D(x)] + E_{Z~P_Z}[log(1 − D(G(Z)))]   (2)

In formula (2), E denotes expectation; D(xm) denotes the output of the discriminator when the m-th original image xm is input; x~P_data denotes drawing an image x from the distribution P_data of the image training set M; Z~P_Z denotes drawing an image to be restored from the distribution P_Z of the data set Z to be restored;
Step 5, establish the generator objective function L_G shown in formula (3):

L_G = E_{Z~P_Z}[log(1 − D(G(Z)))] + λ·L_C   (3)

In formula (3), L_C denotes the content loss obtained by formula (4), and λ is a regularization parameter:

L_C = ||D(G(Zm)) − xm||²   (4)
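The two objectives might be evaluated on a batch as follows. The sign convention (values to be maximized) and the placeholder λ = 0.1 are assumptions; note also that formula (4) as printed writes D(G(Zm)) − xm, while this sketch computes the pixelwise content loss ||G(Zm) − xm||², which is the common reading:

```python
import numpy as np

def content_loss(g_out, x_real):
    """L_C: squared difference between the restored image and the
    original (pixelwise reading of formula (4))."""
    return float(((g_out - x_real) ** 2).sum())

def discriminator_objective(d_real, d_fake):
    """L_D: E[log D(x)] + E[log(1 - D(G(Z)))], averaged over a batch
    of discriminator scores for real and restored images."""
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_objective(d_fake, g_out, x_real, lam=0.1):
    """L_G: the adversarial term plus lam * L_C; lam = 0.1 is an
    arbitrary placeholder for the regularization parameter."""
    return float(np.mean(np.log(1.0 - d_fake)) + lam * content_loss(g_out, x_real))
```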
Step 6, based on the adversarial objective function L_D and the generator objective function L_G, train the generative adversarial network with the encoded image data set Z′ = {Z′1, Z′2, …, Z′m} until the outputs of the discriminator are all true, thereby obtaining a trained image restoration model for restoring damaged images.
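The alternating schedule of step 6, one discriminator update followed by one generator update per iteration, can be illustrated on a one-dimensional toy problem: a scalar "generator" value and a logistic "discriminator". Everything about this toy, from the learning rate to the data distribution, is an assumption made purely to show the update schedule, not the convolutional networks above:

```python
import numpy as np

def train_toy_gan(steps=200, lr=0.05, seed=0):
    """Alternating GAN-style updates on scalars.  The generator is a
    single value g; the discriminator is d(v) = sigmoid(w * (v - c))."""
    rng = np.random.default_rng(seed)
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    g = -1.0            # generator's proposed "fake" value
    w, c = 1.0, 0.0     # discriminator parameters
    for _ in range(steps):
        x = 2.0 + 0.1 * rng.standard_normal()   # a "real" sample
        # discriminator step: gradient ascent on log d(x) + log(1 - d(g))
        dx, dg = sig(w * (x - c)), sig(w * (g - c))
        w += lr * ((1.0 - dx) * (x - c) - dg * (g - c))
        c += lr * (-(1.0 - dx) * w + dg * w)
        # generator step: gradient ascent on log d(g)
        dg = sig(w * (g - c))
        g += lr * (1.0 - dg) * w
    return g
```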
The restored image data are compared against the image data in the database and then stored in the corresponding field, which facilitates subsequent queries.
Experimental results:
To verify its effectiveness, the invention evaluates the restoration results on the test data set using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). PSNR measures the difference in pixel values, and larger values indicate less distortion. SSIM measures the similarity between the restoration result and the original image as a value between 0 and 1, and larger values indicate a smaller difference. The quantitative comparison is shown in Table 1:
TABLE 1
[Table 1: quantitative PSNR/SSIM comparison; reproduced only as an image in the original publication.]
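For reference, PSNR as used in the evaluation is 10·log10(peak² / MSE). A minimal implementation, assuming 8-bit images (peak = 255):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Larger values mean the restored image is closer to the original."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return float(10.0 * np.log10(peak ** 2 / mse))
```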
Although embodiments of the present invention have been shown and described, those skilled in the art will appreciate that various changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the invention; the scope of the invention is defined by the appended claims and their equivalents.

Claims (2)

1. An image restoration method based on a generative adversarial network, characterized by comprising the following steps:
Step 1, preprocess the original images to obtain the images to be restored;
Step 1.1, extract m images from the original image set to form an image training set M = {x1, x2, …, xm}, where xm denotes the m-th image; for each image xm, after selecting its key feature region, randomly divide the non-key-feature region into fixed-size regions that serve as different damaged regions;
Step 1.2, compare the similarity of the neighborhood blocks around each damaged region and fill the damaged region with the most similar neighborhood block, thereby obtaining the image to be restored for the m-th image xm and hence the data set of images to be restored Z = {Z1, Z2, …, Zm};
Step 2, encode the image data set Z to be restored to obtain an encoded image data set Z′ = {Z′1, Z′2, …, Z′m}, where Z′m denotes the encoded data of the m-th image to be restored;
Step 3, construct a generative adversarial network consisting of a generator network and a discriminator network;
Step 3.1, rebuild the generator of the StyleGAN network and use it as the generator of the generative adversarial network;
The generator consists of a mapping network f and a synthesis network g;
The mapping network f is composed of a fully connected layers, each containing s neurons;
The encoded image data set Z′ = {Z′1, Z′2, …, Z′m} is normalized and fed into the mapping network f for spatial mapping, which outputs the intermediate mapping vectors W = {W1, W2, …, Wm}, where Wm denotes the m-th intermediate mapping vector;
The synthesis network consists of two composite sub-networks; the first is formed by connecting a first AdaIN module, a first convolution module, and a second AdaIN module in series, the output of each module serving as the input of the next;
The second is formed by connecting a second convolution module, a third AdaIN module, a third convolution module, and a fourth AdaIN module in series; the first, second, and third convolution modules each consist of an N × N convolution layer, a batch-normalization layer, and a LeakyReLU activation function, and random noise is added after each convolution module;
A constant tensor of dimension c × c × e serves as one input of the first AdaIN module of the first composite network, and the intermediate mapping vectors W = {W1, W2, …, Wm} serve as its other input; the first composite network then outputs a set of feature maps of dimension c × c × e;
This set is fed into the second convolution module of the second composite network; the convolution results pass through the third AdaIN module and the third convolution module in turn, and the fourth AdaIN module outputs the restored image data set of dimension f × f × e, G(Z) = {G(Z1), G(Z2), …, G(Zm)};
Step 3.2, construct the discriminator of the generative adversarial network;
The discriminator consists of b convolution modules, d fully connected layers, and a Sigmoid activation function connected in series; each convolution module contains a convolution layer with an N × N kernel and a LeakyReLU activation function, and every convolution module except the first is followed by a batch-normalization layer;
The restored image data set G(Z) = {G(Z1), G(Z2), …, G(Zm)} and the image training set M = {x1, x2, …, xm} are fed into the discriminator for processing; the Sigmoid activation function outputs true/false judgments that are fed back to the generator to accelerate training; after training, the restoration results passed through the discriminator are D(G(Z)) = {D(G(Z1)), D(G(Z2)), …, D(G(Zm))}, where D(G(Zm)) denotes the restoration result of the m-th image to be restored Zm;
Step 4, establish the adversarial objective function L_D shown in formula (1):

L_D = E_{x~P_data}[log D(x)] + E_{Z~P_Z}[log(1 − D(G(Z)))]   (1)

In formula (1), E denotes expectation; D(xm) denotes the output of the discriminator when the m-th original image xm is input; x~P_data denotes drawing an image x from the distribution P_data of the image training set M; Z~P_Z denotes drawing an image to be restored from the distribution P_Z of the data set Z to be restored;
Step 5, establish the generator objective function L_G shown in formula (2):

L_G = E_{Z~P_Z}[log(1 − D(G(Z)))] + λ·L_C   (2)

In formula (2), L_C denotes the content loss obtained by formula (3), and λ is a regularization parameter:

L_C = ||D(G(Zm)) − xm||²   (3)
Step 6, based on the adversarial objective function L_D and the generator objective function L_G, train the generative adversarial network with the encoded image data set Z′ = {Z′1, Z′2, …, Z′m} until the outputs of the discriminator are all true, thereby obtaining a trained image restoration model for restoring damaged images.
2. The method as claimed in claim 1, characterized in that every AdaIN module except the first takes as inputs the intermediate mapping vectors W = {W1, W2, …, Wm} and the output of the preceding module, and any AdaIN module computes its output AdaIN(xm) by formula (4):

AdaIN(xm) = ys,m · (xm − μ(xm)) / σ(xm) + yb,m   (4)

In formula (4), ys,m denotes the scaling factor of the m-th image xm, yb,m its bias factor, μ(xm) its mean, and σ(xm) its standard deviation; for the first AdaIN module, xm is the constant tensor.
CN202111160452.2A 2021-09-30 2021-09-30 Image restoration method based on a generative adversarial network Active CN113689360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111160452.2A CN113689360B (en) 2021-09-30 2021-09-30 Image restoration method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111160452.2A CN113689360B (en) 2021-09-30 2021-09-30 Image restoration method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN113689360A true CN113689360A (en) 2021-11-23
CN113689360B CN113689360B (en) 2024-02-20

Family

ID=78587469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111160452.2A Active CN113689360B (en) Image restoration method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN113689360B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777848A (en) * 2023-06-06 2023-09-19 北京师范大学 Jade ware similarity analysis method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689499A (en) * 2019-09-27 2020-01-14 北京工业大学 Face image restoration method based on dense expansion convolution self-coding countermeasure network
CN111612708A (en) * 2020-05-06 2020-09-01 长沙理工大学 Image restoration method based on countermeasure generation network
WO2021056969A1 (en) * 2019-09-29 2021-04-01 中国科学院长春光学精密机械与物理研究所 Super-resolution image reconstruction method and device
US20210150678A1 (en) * 2019-11-15 2021-05-20 Zili Yi Very high-resolution image in-painting with neural networks
CN113112411A (en) * 2020-01-13 2021-07-13 南京信息工程大学 Human face image semantic restoration method based on multi-scale feature fusion


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI Tiancheng; HE Jia: "An Image Inpainting Algorithm Based on Generative Adversarial Networks", Computer Applications and Software, no. 12 *
WANG Kexin; WANG Li: "Image Inpainting Algorithm Based on Generative Adversarial Networks", Intelligent Computer and Applications, no. 04 *



Similar Documents

Publication Publication Date Title
Nazeri et al. Edgeconnect: Generative image inpainting with adversarial edge learning
CN110211045B (en) Super-resolution face image reconstruction method based on SRGAN network
CN111047541B (en) Image restoration method based on wavelet transformation attention model
CN113962893A (en) Face image restoration method based on multi-scale local self-attention generation countermeasure network
CN112686816A (en) Image completion method based on content attention mechanism and mask code prior
CN113298734B (en) Image restoration method and system based on mixed hole convolution
CN116645716B (en) Expression recognition method based on local features and global features
Zhu et al. PNEN: Pyramid non-local enhanced networks
CN111368734B (en) Micro expression recognition method based on normal expression assistance
CN112801914A (en) Two-stage image restoration method based on texture structure perception
Du et al. Blind image denoising via dynamic dual learning
CN111340189B (en) Space pyramid graph convolution network implementation method
Jang et al. Dual path denoising network for real photographic noise
CN110414516B (en) Single Chinese character recognition method based on deep learning
CN113689360B (en) Image restoration method based on generation countermeasure network
CN111598822A (en) Image fusion method based on GFRW and ISCM
Elkerdawy et al. Fine-grained vehicle classification with unsupervised parts co-occurrence learning
CN112686817B (en) Image completion method based on uncertainty estimation
CN113962905A (en) Single image rain removing method based on multi-stage feature complementary network
CN111275076B (en) Image significance detection method based on feature selection and feature fusion
Yu et al. MagConv: Mask-guided convolution for image inpainting
CN115131226A (en) Image restoration method based on wavelet tensor low-rank regularization
CN112634281A (en) Grid segmentation method based on graph convolution network
CN111292238A (en) Face image super-resolution reconstruction method based on orthogonal partial least squares
Zhou et al. Design of lightweight convolutional neural network based on dimensionality reduction module

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant