CN110689495B - Image restoration method for deep learning - Google Patents


Info

Publication number
CN110689495B
CN110689495B (application CN201910913818.5A)
Authority
CN
China
Prior art keywords: image, edge, network, real, loss function
Prior art date
Legal status: Active
Application number
CN201910913818.5A
Other languages: Chinese (zh)
Other versions: CN110689495A (en)
Inventor
万家山
Current Assignee
Anhui Institute of Information Engineering
Original Assignee
Anhui Institute of Information Engineering
Priority date
Filing date
Publication date
Application filed by Anhui Institute of Information Engineering
Priority to CN201910913818.5A
Publication of CN110689495A
Application granted
Publication of CN110689495B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image and vision processing and provides an image restoration method based on deep learning, comprising the following steps: predicting an edge map of the covered area of the original image using an edge generator; verifying whether the edge map predicted by the edge generator is real; performing image restoration and synthesis on the real edge map with an image completion network and discarding unreal edge maps; verifying whether the image restored and synthesized by the image completion network is real; and generating the real image while discarding unreal images. In this method, using the contextual content of the picture, the edge generator "hallucinates" edges for the missing region of the image (whether regular or irregular), and the image completion network fills the missing region using the hallucinated edges. Edge information for the missing part is thus obtained with a heuristic generative model and then fed, together with the image, into the restoration network as a prior for the missing region, so that the reconstructed image reproduces finer detail.

Description

Image restoration method for deep learning
Technical Field
The invention relates to the technical field of image and vision processing, and in particular to an image restoration method based on deep learning.
Background
Traditional graphics and vision research methods are based mainly on mathematical modeling; in recent years, however, deep learning has achieved excellent results in the visual field, and the research frontier of vision is now largely occupied by deep learning. Against this backdrop, more and more graphics researchers are turning to deep learning. Conventional image inpainting is handled either with diffusion-based approaches, which propagate local structure into the missing portion, or with exemplar-based approaches, which construct the missing portion one pixel at a time while maintaining consistency with the surrounding pixels. These methods fail when the missing part is large, so an additional mechanism is required to provide a plausible "imagination" of the missing content.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provide an image restoration method for deep learning.
In order to achieve the above object, the embodiments of the present invention provide the following technical solutions:
An image inpainting method based on deep learning comprises the following steps (an illustrative sketch of the overall flow follows the list):
Step S1: predicting an edge map of the covered area of the original image using an edge generator;
Step S2: verifying whether the edge map predicted by the edge generator is real;
Step S3: performing image restoration and synthesis on the real edge map with the image completion network and discarding unreal edge maps;
Step S4: verifying whether the image restored and synthesized by the image completion network is real;
Step S5: generating the real image and discarding unreal images.
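Purely for illustration, the five steps can be read as the following end-to-end sketch; the function and network names, the compositing rule, and the "realness" threshold below are assumptions introduced here, not details fixed by the invention:

```python
import torch

@torch.no_grad()
def inpaint(gray, masked_edges, masked_image, mask,
            edge_generator, edge_discriminator,
            completion_net, image_discriminator, threshold=0.5):
    """Hypothetical orchestration of steps S1-S5 (a sketch, not the patented design)."""
    # S1: predict (hallucinate) edges for the covered area.
    s_pred = edge_generator(gray, masked_edges, mask)
    # S2: keep the edge map only if the discriminator judges it real.
    if torch.sigmoid(edge_discriminator(s_pred)).mean() < threshold:
        return None  # unreal edge map is discarded
    # S3: restore and synthesize the image, guided by the composite edges.
    s_comp = masked_edges * (1 - mask) + s_pred * mask
    i_pred = completion_net(masked_image, s_comp)
    # S4/S5: generate the output only if the restored image is judged real.
    if torch.sigmoid(image_discriminator(i_pred)).mean() < threshold:
        return None  # unreal image is discarded
    return i_pred
```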
Further, to better implement the invention, the step of predicting the edge map of the covered area of the original image using the edge generator comprises:
the edge generator predicts the hallucinated edges of the original image as:

$$S_{pred} = G(I_{gray}, S_{gt})$$

where $S_{pred}$ denotes the hallucinated edge map of the image, $I_{gray}$ denotes the grayscale matrix of the original image, and $S_{gt}$ denotes the edge map of the original image.
Further, to better implement the invention, the step of verifying whether the edge map predicted by the edge generator is real comprises: verifying the predicted edge map with a first network training objective:

$$\min_{G}\max_{D} L_{G} = \min_{G}\left(\lambda_{adv}\,\max_{D} L_{adv} + \lambda_{FM}\, L_{FM}\right)$$

where $\lambda_{adv}$ and $\lambda_{FM}$ are regularization parameters;
the first network training objective combines an adversarial network loss function and a feature matching loss function.
Further, to better implement the invention, the adversarial network loss function is:

$$L_{adv} = \mathbb{E}_{(S_{gt},\, I_{gray})}\left[\log D(S_{gt}, I_{gray})\right] + \mathbb{E}_{I_{gray}}\left[\log\left(1 - D(S_{pred}, I_{gray})\right)\right]$$

and the feature matching loss function is:

$$L_{FM} = \mathbb{E}\left[\sum_{i=1}^{L}\frac{1}{N_i}\left\| D^{(i)}(S_{gt}) - D^{(i)}(S_{pred})\right\|_{1}\right]$$

where $L$ denotes the number of convolution layers, $N_i$ the number of elements in the $i$-th layer, and $D^{(i)}$ the activation of the $i$-th layer of the first network's discriminator.
Furthermore, to better implement the invention, the step in which the image completion network performs image restoration and synthesis on the real edge map and discards unreal edge maps comprises:
performing image restoration and synthesis on the real edge map:

$$I_{pred} = G'(S_{gt}, S_{comp})$$

where $I_{pred}$ denotes the restored and synthesized image, and $S_{comp}$ denotes the composite edge map formed by combining the real edges of the intact region of the original image with the hallucinated edges of the corrupted region.
Further, to better implement the invention, the step of verifying whether the image restored and synthesized by the image completion network is real comprises:
verifying the restored and synthesized image with a second network training objective:

$$\min_{G'}\max_{D'} L_{G'} = \min_{G'}\left(\lambda'_{adv}\,\max_{D'} L'_{adv} + \lambda_{p}\, L_{perc}\right)$$

The second network training objective combines an adversarial network loss function and a perceptual loss function.
Further, to better implement the invention, the second network's generator loss, combining these terms, is:

$$L_{G'} = \lambda'_{adv}\, L'_{adv} + \lambda_{p}\, L_{perc}$$

and the perceptual loss function is:

$$L_{perc} = \mathbb{E}\left[\sum_{i}\frac{1}{N_i}\left\|\phi_i(I_{gt}) - \phi_i(I_{pred})\right\|_{1}\right]$$

where $\phi_i$ denotes the activation (excitation) map of the $i$-th layer of the second network training objective.
Compared with the prior art, the invention has the following beneficial effects:
Traditional techniques cannot reconstruct a reasonable structure, so over-smoothing or blurred edges readily occur. In this method, using the contextual content of the picture, the edge generator "hallucinates" edges for the missing region of the image (whether regular or irregular), and the image completion network fills the missing region using the hallucinated edges. Edge information for the missing part is thus obtained with a heuristic generative model and then fed, together with the image, into the restoration network as a prior for the missing region, so that the reconstructed image reproduces finer detail.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of image restoration according to embodiment 2 of the present invention;
FIG. 2 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
Example 1:
the invention is realized by the following technical scheme, and provides an image restoration method for deep learning, which is applied to the fields of computer vision, biological authentication, multi-mode interaction, learning of the multi-mode interaction, and the like.
Deep neural network learning is applied: the missing content in the image is filled in by relying on the "hallucination" of a pre-trained neural network. With supervised image classification, the deep neural network has a specific label for each image and learns the mapping between images and labels through a series of basic operations. Image restoration is the process of filling in information in the incomplete region of an image; that is, a prior model and a data model of the image are established through deep neural network technology.
Following the overall idea of "from image to edge, then from edge back to image", the image restoration and synthesis of this method mainly comprise two parts: edge generation and image completion. The two parts are defined as follows (a module sketch follows the definitions):
Edge generator: generates "hallucinated edges" consistent with the real area of the image, outlining the overall structure of the image.
Image completion network: fills in the missing image area by combining the missing image module with the color and texture information from the rest of the image.
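Purely as an illustrative sketch, and not as the patented implementation, the two parts can be written as minimal PyTorch-style modules; the class names, layer counts, and channel widths are assumptions introduced here:

```python
import torch
import torch.nn as nn

class EdgeGenerator(nn.Module):
    """Sketch of G: hallucinates edges for the masked region from the
    grayscale image, the known edges, and the mask (each a 1-channel map)."""
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, features, 7, padding=3),
            nn.InstanceNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.InstanceNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, 1, 7, padding=3),
        )

    def forward(self, gray, edges, mask):
        x = torch.cat([gray, edges, mask], dim=1)  # stack inputs channel-wise
        return torch.sigmoid(self.net(x))          # S_pred in [0, 1]

class CompletionNetwork(nn.Module):
    """Sketch of G': fills the missing region of an RGB image guided by
    the 1-channel composite edge map S_comp."""
    def __init__(self, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, features, 7, padding=3),
            nn.InstanceNorm2d(features), nn.ReLU(inplace=True),
            nn.Conv2d(features, 3, 7, padding=3),
        )

    def forward(self, image, comp_edges):
        x = torch.cat([image, comp_edges], dim=1)
        return torch.sigmoid(self.net(x))          # I_pred
```

Real implementations of this kind of two-stage design typically use deeper encoder-decoder generators with residual blocks; the shallow stacks above only fix the interfaces.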
As shown in FIG. 2, the proposed image restoration method of the present invention is implemented mainly through the following steps:
step S1: an edge generator is used to predict an edge map of the masked area on the original image.
The edge generator predicts the hallucinated edges of the original image as:

$$S_{pred} = G(I_{gray}, S_{gt})$$

where $S_{pred}$ denotes the hallucinated edge map of the image, $I_{gray}$ denotes the grayscale matrix of the original image, and $S_{gt}$ denotes the edge map of the original image.
Step S2: verify whether the edge map predicted by the edge generator is real.
The predicted edge map is verified with the first network training objective:

$$\min_{G}\max_{D} L_{G} = \min_{G}\left(\lambda_{adv}\,\max_{D} L_{adv} + \lambda_{FM}\, L_{FM}\right)$$

where $\lambda_{adv}$ and $\lambda_{FM}$ are regularization parameters;
the first network training objective combines an adversarial network loss function and a feature matching loss function (a loss-computation sketch follows the definitions below).
The adversarial network loss function is:

$$L_{adv} = \mathbb{E}_{(S_{gt},\, I_{gray})}\left[\log D(S_{gt}, I_{gray})\right] + \mathbb{E}_{I_{gray}}\left[\log\left(1 - D(S_{pred}, I_{gray})\right)\right]$$

The feature matching loss function is:

$$L_{FM} = \mathbb{E}\left[\sum_{i=1}^{L}\frac{1}{N_i}\left\| D^{(i)}(S_{gt}) - D^{(i)}(S_{pred})\right\|_{1}\right]$$

where $L$ denotes the number of convolution layers, $N_i$ the number of elements in the $i$-th layer, and $D^{(i)}$ the activation of the $i$-th layer of the first network's discriminator.
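As a hedged illustration of how the two terms above might be computed (the binary cross-entropy form of the adversarial term, the discriminator feature list, and the weight values are assumptions; the patent does not fix an implementation):

```python
import torch
import torch.nn.functional as F

def adversarial_loss_g(d_fake_logits):
    # Generator side of L_adv: push the discriminator to score S_pred as real.
    return F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

def feature_matching_loss(feats_real, feats_fake):
    # L_FM: L1 distance between D's layer activations on S_gt and S_pred;
    # F.l1_loss averages over each layer's elements, playing the role of 1/N_i.
    return sum(F.l1_loss(ff, fr) for fr, ff in zip(feats_real, feats_fake))

lambda_adv, lambda_fm = 1.0, 10.0  # assumed regularization weights
# loss_g = lambda_adv * adversarial_loss_g(d_fake_logits) \
#        + lambda_fm * feature_matching_loss(feats_real, feats_fake)
```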
Step S3: the image completion network performs image restoration and synthesis on the real edge map and discards unreal edge maps.
Image restoration and synthesis are performed on the real edge map:

$$I_{pred} = G'(S_{gt}, S_{comp})$$

where $I_{pred}$ denotes the restored and synthesized image, and $S_{comp}$ denotes the composite edge map formed by combining the real edges of the intact region of the original image with the hallucinated edges of the corrupted region (a compositing sketch follows).
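One plausible compositing rule for $S_{comp}$, sketched under the assumption that the mask $M$ is nonzero on the corrupted region and 0 elsewhere (consistent with the mask described in Example 2):

```python
def composite_edges(edges_gt, edges_pred, mask):
    """S_comp: real edges where the image is intact, hallucinated edges
    where it is corrupted, i.e. S_gt * (1 - M) + S_pred * M."""
    return edges_gt * (1 - mask) + edges_pred * mask

# i_pred = completion_net(masked_image, composite_edges(s_gt, s_pred, mask))
```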
Step S4: verify whether the image restored and synthesized by the image completion network is real.
The restored and synthesized image is verified with the second network training objective:

$$\min_{G'}\max_{D'} L_{G'} = \min_{G'}\left(\lambda'_{adv}\,\max_{D'} L'_{adv} + \lambda_{p}\, L_{perc}\right)$$

The second network training objective combines an adversarial network loss function and a perceptual loss function.
The second network's generator loss, combining these terms, is:

$$L_{G'} = \lambda'_{adv}\, L'_{adv} + \lambda_{p}\, L_{perc}$$

The perceptual loss function is:

$$L_{perc} = \mathbb{E}\left[\sum_{i}\frac{1}{N_i}\left\|\phi_i(I_{gt}) - \phi_i(I_{pred})\right\|_{1}\right]$$

where $\phi_i$ denotes the activation (excitation) map of the $i$-th layer of the second network training objective (a perceptual-loss sketch follows).
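A minimal sketch of the perceptual loss, assuming, as is common practice though not stated here, that $\phi_i$ are activation maps of a pre-trained VGG19 network; the chosen layer indices are assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(torch.nn.Module):
    def __init__(self, layer_ids=(3, 8, 17, 26)):  # assumed ReLU layers of VGG19
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)  # phi is fixed, used only for features
        self.vgg, self.layer_ids = vgg, set(layer_ids)

    def forward(self, pred, target):
        loss, x, y = 0.0, pred, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layer_ids:
                # L1 distance between phi_i(I_pred) and phi_i(I_gt);
                # the mean reduction supplies the 1/N_i normalization.
                loss = loss + F.l1_loss(x, y)
        return loss
```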
Step S5: the real image is generated and unreal images are discarded.
The deep neural network achieves excellent classification performance and very high accuracy when trained on labeled images. A discriminatively pre-trained neural network is applied at the input layer to guide image reconstruction, and the output layer of the deep neural network is applied directly during image restoration.
The regularization strategy is as follows:
The total variation (TV) norm is a strategy for removing unwanted detail while preserving important details such as image edges; owing to its edge-preserving property, it is widely used as a regularization component in inverse problems such as denoising and super-resolution (an illustrative sketch follows).
The deep neural network correctly completes the shapes of objects in the image, and the combination of the network's "hallucination" with regularization accomplishes effective image recovery. In practical applications an image is usually corrupted by noise: dust or water droplets on the lens, scratches on an old photograph, artificial marks such as a mosaic overlaid on the image, or damage to part of the image itself. To recover a damaged picture as faithfully as possible, the color and structure at the edge of the damaged area are used to infer the missing information from whatever the picture retains, and the damaged area is then filled in to achieve image inpainting.
Example 2:
the original image was purposely marked for performance testing as shown in FIG. 1 (a); as shown in fig. 1 (b), the image is covered, the covering is to mask the image, the image is a single-channel image, the size of the image is consistent with that of the original image, and the pixel values of the other parts of the image except the part needing to be repaired are all 0; as shown in fig. 1 (c), the diffusion method applied to the masked image may cause edge loss, and the damaged image may not be reconstructed effectively; as shown in fig. 1 (d), the shape in the image can be correctly completed by using deep neural network learning on the masked image, effective image restoration is completed by combining the illusion of the visible deep neural network learning and regularization, and finally the repaired image is optimized, for example, the edge contour of an object in the image is perfected.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can easily conceive within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (3)

1. An image restoration method for deep learning, characterized in that it comprises the following steps:
Step S1: predicting an edge map of the covered area of the original image using an edge generator;
Step S2: verifying whether the edge map predicted by the edge generator is real;
the step of verifying whether the edge map predicted by the edge generator is real comprises: verifying the predicted edge map with a first network training objective:

$$\min_{G}\max_{D} L_{G} = \min_{G}\left(\lambda_{adv}\,\max_{D} L_{adv} + \lambda_{FM}\, L_{FM}\right)$$

wherein $\lambda_{adv}$ and $\lambda_{FM}$ are regularization parameters;
the first network training objective comprises a first adversarial network loss function and a feature matching loss function;
the first adversarial network loss function is:

$$L_{adv} = \mathbb{E}_{(S_{gt},\, I_{gray})}\left[\log D(S_{gt}, I_{gray})\right] + \mathbb{E}_{I_{gray}}\left[\log\left(1 - D(S_{pred}, I_{gray})\right)\right]$$

the feature matching loss function is:

$$L_{FM} = \mathbb{E}\left[\sum_{i=1}^{L}\frac{1}{N_i}\left\| D^{(i)}(S_{gt}) - D^{(i)}(S_{pred})\right\|_{1}\right]$$

wherein $L$ denotes the number of convolution layers, $N_i$ the number of elements in the $i$-th layer, and $D^{(i)}$ the $i$-th layer activation of the discriminator of the first network training objective;
Step S3: the image completion network performs image restoration and synthesis on the real edge map and discards unreal edge maps;
Step S4: verifying whether the image restored and synthesized by the image completion network is real;
the step of verifying whether the image restored and synthesized by the image completion network is real comprises:
verifying the restored and synthesized image with a second network training objective:

$$\min_{G'}\max_{D'} L_{G'} = \min_{G'}\left(\lambda'_{adv}\,\max_{D'} L'_{adv} + \lambda_{p}\, L_{perc}\right)$$

the second network training objective comprises a second adversarial network loss function and a perceptual loss function;
the second network's generator loss, combining these terms, is:

$$L_{G'} = \lambda'_{adv}\, L'_{adv} + \lambda_{p}\, L_{perc}$$

the perceptual loss function is:

$$L_{perc} = \mathbb{E}\left[\sum_{i}\frac{1}{N_i}\left\|\phi_i(I_{gt}) - \phi_i(I_{pred})\right\|_{1}\right]$$

wherein $\phi_i$ denotes the activation map of the $i$-th layer of the second network training objective;
Step S5: generating the real image and discarding unreal images.
2. The deep-learning image inpainting method according to claim 1, wherein the step of predicting the edge map of the covered area of the original image using the edge generator comprises:
the edge generator predicts the hallucinated edges of the original image as:

$$S_{pred} = G(I_{gray}, S_{gt})$$

wherein $S_{pred}$ denotes the hallucinated edge map of the image, $I_{gray}$ denotes the grayscale matrix of the original image, and $S_{gt}$ denotes the edge map of the original image.
3. The deep-learning image inpainting method according to claim 1, wherein the step in which the image completion network performs image restoration and synthesis on the real edge map and discards unreal edge maps comprises:
performing image restoration and synthesis on the real edge map:

$$I_{pred} = G'(S_{gt}, S_{comp})$$

wherein $I_{pred}$ denotes the restored and synthesized image, and $S_{comp}$ denotes the composite edge map formed by combining the real edges of the intact region of the original image with the hallucinated edges of the corrupted region.
CN201910913818.5A 2019-09-25 2019-09-25 Image restoration method for deep learning Active CN110689495B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910913818.5A CN110689495B (en) 2019-09-25 2019-09-25 Image restoration method for deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910913818.5A CN110689495B (en) 2019-09-25 2019-09-25 Image restoration method for deep learning

Publications (2)

Publication Number Publication Date
CN110689495A (en) 2020-01-14
CN110689495B (en) 2022-10-04

Family

ID=69110125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910913818.5A Active CN110689495B (en) 2019-09-25 2019-09-25 Image restoration method for deep learning

Country Status (1)

Country Link
CN (1) CN110689495B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111275646B (en) * 2020-01-20 2022-04-26 南开大学 Edge-preserving image smoothing method based on deep learning knowledge distillation technology
CN111553869B (en) * 2020-05-13 2021-04-06 北京航空航天大学 Method for complementing generated confrontation network image under space-based view angle
CN111476213A (en) * 2020-05-19 2020-07-31 武汉大势智慧科技有限公司 Method and device for filling covering area of shelter based on road image
CN111861901A (en) * 2020-06-05 2020-10-30 西安工程大学 Edge generation image restoration method based on GAN network
CN112184585B (en) * 2020-09-29 2024-03-29 中科方寸知微(南京)科技有限公司 Image completion method and system based on semantic edge fusion
CN113298733B (en) * 2021-06-09 2023-02-14 华南理工大学 Implicit edge prior based scale progressive image completion method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222628A (en) * 2019-06-03 2019-09-10 电子科技大学 A kind of face restorative procedure based on production confrontation network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10600185B2 (en) * 2017-03-08 2020-03-24 Siemens Healthcare Gmbh Automatic liver segmentation using adversarial image-to-image network

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110222628A (en) * 2019-06-03 2019-09-10 电子科技大学 A kind of face restorative procedure based on production confrontation network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Image Inpainting Based on Generative Adversarial Networks; Sun Quan et al.; Computer Science (《计算机科学》); 2018-12-15 (No. 12); full text *
Crack Image Inpainting Method Based on Generative Adversarial Networks; Hu Min et al.; Computer Applications and Software (《计算机应用与软件》); 2019-06-12 (No. 06); full text *

Also Published As

Publication number Publication date
CN110689495A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110689495B (en) Image restoration method for deep learning
CN111292264B (en) Image high dynamic range reconstruction method based on deep learning
Jiang et al. Unsupervised decomposition and correction network for low-light image enhancement
CN108460746B (en) Image restoration method based on structure and texture layered prediction
CN103400342A (en) Mixed color gradation mapping and compression coefficient-based high dynamic range image reconstruction method
CN110418139B (en) Video super-resolution restoration method, device, equipment and storage medium
US11887218B2 (en) Image optimization method, apparatus, device and storage medium
CN112184585A (en) Image completion method and system based on semantic edge fusion
Zheng et al. T-net: Deep stacked scale-iteration network for image dehazing
CN115829880A (en) Image restoration method based on context structure attention pyramid network
CN113256494A (en) Text image super-resolution method
CN115953311A (en) Image defogging method based on multi-scale feature representation of Transformer
CN114881879A (en) Underwater image enhancement method based on brightness compensation residual error network
CN114862697A (en) Face blind repairing method based on three-dimensional decomposition
Liu et al. Facial image inpainting using multi-level generative network
CN114862707A (en) Multi-scale feature recovery image enhancement method and device and storage medium
Zhang et al. Enhanced visual perception for underwater images based on multistage generative adversarial network
Huang et al. Underwater image enhancement via LBP‐based attention residual network
CN116109510A (en) Face image restoration method based on structure and texture dual generation
Kumar et al. Underwater image enhancement using deep learning
CN116051407A (en) Image restoration method
CN116958317A (en) Image restoration method and system combining edge information and appearance stream operation
CN114331894A (en) Face image restoration method based on potential feature reconstruction and mask perception
Zhu et al. HDRD-Net: High-resolution detail-recovering image deraining network
Jiang et al. Mask‐guided image person removal with data synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200114

Assignee: SUZHOU ESON ROBOT TECHNOLOGY CO.,LTD.

Assignor: ANHUI INSTITUTE OF INFORMATION TECHNOLOGY

Contract record no.: X2023980037918

Denomination of invention: A deep learning Inpainting method

Granted publication date: 20221004

License type: Common License

Record date: 20230718

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20200114

Assignee: Anhui Zairong Network Technology Co.,Ltd.

Assignor: ANHUI INSTITUTE OF INFORMATION TECHNOLOGY

Contract record no.: X2024980007059

Denomination of invention: A Deep Learning Image Restoration Method

Granted publication date: 20221004

License type: Common License

Record date: 20240618