CN110675339A - Image restoration method and system based on edge restoration and content restoration - Google Patents

Image restoration method and system based on edge restoration and content restoration Download PDF

Info

Publication number
CN110675339A
CN110675339A (application CN201910870523.4A)
Authority
CN
China
Prior art keywords
image
edge
generator
gray
defect image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910870523.4A
Other languages
Chinese (zh)
Inventor
秦茂玲
杨胜男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Normal University
Original Assignee
Shandong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Normal University filed Critical Shandong Normal University
Priority to CN201910870523.4A
Publication of CN110675339A
Legal status: Pending

Classifications

    • G06T5/77 Retouching; Inpainting; Scratch removal (G Physics > G06 Computing; Calculating or Counting > G06T Image data processing or generation, in general > G06T5/00 Image enhancement or restoration)
    • G06T5/70 Denoising; Smoothing (same hierarchy as above)
    • G06T2207/20081 Training; Learning (G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details)
    • G06T2207/20084 Artificial neural networks [ANN] (same hierarchy as above)

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides an image restoration method and system based on edge restoration and content restoration. An original defect image is preprocessed to obtain a gray defect image; the gray defect image is smoothed; an incomplete edge image and an image mask are extracted from the smoothed gray defect image; the image mask, the gray defect image, and the incomplete edge image are taken as inputs to an edge generator, which generates a complete edge structure map; the complete edge structure map and the original defect image are then taken as inputs to a content generator, which generates an image with the missing area filled in. Image restoration is thereby achieved; the method considers more factors than traditional methods, and experiments show that it is effective on real data sets.

Description

Image restoration method and system based on edge restoration and content restoration
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image inpainting method and system based on edge inpainting and content inpainting.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
Image inpainting fills in missing or blank regions of an image; it may also be used to remove unwanted objects from an image. Image completion is a significant research subject in computer vision and computer graphics. For an image with a lost region, the specific form of the original content is unknown, so the best that can be done is to generate pixels that are as plausible as possible to fill the missing area. Image restoration therefore amounts to analyzing an image according to human visual rules and then repairing the missing content.
Improvements in image restoration techniques depend mainly on the study of image models and human visual cognition. Related work in this field is very rich, but because of the complexity of image texture and semantics, filling in pictures remains a huge challenge. Conventional repair methods are based mainly on propagation or diffusion of pixels and on block matching, such as the BSCB algorithm, the Criminisi algorithm, and iterative repair algorithms based on analytical dictionaries. These are very effective for small regions, but they often produce obvious visual artifacts when the holes are large or the texture varies widely.
In the course of implementing the present disclosure, the inventors found that the following technical problems exist in the prior art:
with the continuous progress of deep learning methods, machine learning represented by deep learning is gradually sweeping through the entire field of graphics research. Researchers have found that when traditional physics-based model development encounters bottlenecks, machine learning approaches can help interpret these complex mathematical models; after all, image generation and processing can be better guided only by understanding the deep structure of images. Deep-learning-based image inpainting work has proliferated. Compared with non-learning approaches, these methods have the clear advantage of being able to learn and understand the semantics of the restored image. Examples include the original context encoder repair model, image repair models incorporating dilated convolution with global and local discriminators, and high-resolution repair models that split the network into a content generation network and a texture generation network. In some cases, however, these methods may still introduce discontinuities and artifacts when the scene is complex, or may generate images that are too smooth or blurred. This is because the output for a missing pixel necessarily depends on the input values supplied to the neural network, which can lead to phenomena such as color differences or blurring in the image.
Disclosure of Invention
In order to overcome the deficiencies of the prior art, the present disclosure provides an image inpainting method and system based on edge inpainting and content inpainting. The repair process is divided into repair of image edges and repair of image content; an image repair method based on partial convolution and edge repair is proposed, and a repair model is constructed that takes the edge structure of the image as a guide and integrates mask updating with the partial-convolution idea.
In a first aspect, the present disclosure provides an image inpainting method based on edge inpainting and content inpainting;
the image restoration method based on edge restoration and content restoration comprises the following steps:
preprocessing an original defect image to obtain a gray defect image;
smoothing the gray defect image;
extracting an incomplete edge image and an image mask from the smoothed gray defect image;
taking the image mask, the gray defect image, and the incomplete edge image as inputs to an edge generator, which generates a complete edge structure map;
and taking the complete edge structure map and the original defect image as inputs to a content generator, which generates an image with the missing area filled in.
In a second aspect, the present disclosure also provides an image inpainting system based on edge inpainting and content inpainting;
an image inpainting system based on edge inpainting and content inpainting, comprising:
a pre-processing module configured to preprocess an original defect image to obtain a gray defect image;
a smoothing module configured to smooth the gray defect image;
an edge image extraction module configured to extract an incomplete edge image and an image mask from the smoothed gray defect image;
a complete edge structure map generation module configured to take the image mask, the gray defect image, and the incomplete edge image as inputs to an edge generator, which generates a complete edge structure map;
a content filling module configured to take the complete edge structure map and the original defect image as inputs to a content generator, which generates an image with the missing area filled in.
In a third aspect, the present disclosure also provides an electronic device comprising a memory and a processor, and computer instructions stored on the memory and executed on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
Compared with the prior art, the beneficial effect of this disclosure is:
1. The invention provides a generative network dedicated to repairing the edges of damaged images. It operates in two stages: the first stage learns to output a rough edge structure, and the second stage is trained further to produce a more refined edge prediction. In experiments, the generated edge structure was used as a guide in image inpainting; compared with several deep-learning-based methods, the proposed method is clearly superior and produces complete results with fine, clear structure.
2. In terms of inpainting quality, the invention proposes performing edge completion and image completion step by step, and combines partial convolution in the forward-propagation part of the network, obtaining clearer, higher-quality inpainting results.
3. In terms of applicability and extensibility, the established model is suitable for a variety of natural images, can meet users' practical needs, improves restoration performance, and improves user satisfaction with image quality.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flow chart of a method of the first embodiment;
fig. 2(a)-2(f) compare the inpainting results of the present invention with those of other methods.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In the first embodiment, the present embodiment provides an image repairing method based on edge repairing and content repairing;
as shown in fig. 1, the image inpainting method based on edge inpainting and content inpainting includes:
s1: preprocessing an original defect image to obtain a gray defect image;
s2: smoothing the gray defect image;
s3: extracting an image mask and an incomplete edge image from the smoothed gray defect image;
s4: taking the image mask, the gray defect image, and the incomplete edge image as inputs to an edge generator, which generates a complete edge structure map;
s5: taking the complete edge structure map and the original defect image as inputs to a content generator, which generates an image with the missing area filled in.
Comparisons of the inpainting results of the present invention and other methods are shown in fig. 2(a)-fig. 2(f).
As one or more embodiments, the original defect image is preprocessed to obtain a gray defect image; the method comprises the following specific steps:
and cutting the original defect image into an image with 256 pixels by 256, and performing gray scale conversion on all the cut images to obtain a gray scale defect image.
As one or more embodiments, the smoothing of the gray defect image specifically comprises: performing Gaussian filtering on the gray defect image, and then smoothing again using median filtering.
It will be appreciated that the purpose of the smoothing process is to reduce noise.
It will be appreciated that a real image may contain a great deal of interference noise, of which impulse noise and Gaussian noise are the most common. The Gaussian filter used by the traditional Canny algorithm smooths Gaussian noise, but this filtering alone is limited. Mean filtering handles Gaussian noise well, and a median filter handles impulse noise better. Therefore, median filtering can be used in combination with mean filtering within the traditional Canny algorithm.
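The two-stage smoothing described above can be sketched as follows; the kernel size and sigma are illustrative assumptions, not values from the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized 2-D Gaussian kernel (illustrative defaults)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same' convolution with edge padding (for illustration only)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def median_filter(img, size=3):
    """Median filter: effective against impulse (salt-and-pepper) noise."""
    padded = np.pad(img, size // 2, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

def smooth(gray):
    # Gaussian filtering suppresses Gaussian noise; the follow-up median
    # filter removes residual impulse noise, as the text suggests.
    return median_filter(convolve2d(gray, gaussian_kernel()))
```

A production implementation would use an optimized library filter; the loops here only make the operations explicit.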
As one or more embodiments, the incomplete edge image is extracted from the smoothed gray defect image by using a Canny edge detection algorithm.
As one or more embodiments, the image mask is extracted from the smoothed gray scale defect image by subtracting the gray scale defect image from the incomplete edge image.
It should be understood that the dual thresholds of the conventional Canny algorithm are generally selected manually and are limited to a single image, so the dual-threshold automatic selection method proposed by Otsu is adopted to perform edge detection and connection.
It will be appreciated that the resulting segmentation map contains some noise, since the input image is corrupted by holes. Some of the holes may even be treated as protruding objects. To address this problem, we use a binary hole mask to remove regions of the segmentation map that may be mistaken for salient objects.
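The Otsu-based automatic threshold selection mentioned above can be sketched as follows; deriving the low Canny threshold as half the Otsu value is a common convention and an assumption here, not a detail stated in the patent:

```python
import numpy as np

def otsu_threshold(gray):
    """Select a global threshold by maximizing between-class variance (Otsu).

    Assumes `gray` holds integer intensities in [0, 255].
    """
    hist = np.bincount(gray.ravel().astype(int), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def canny_thresholds(gray):
    """Derive (low, high) hysteresis thresholds from the Otsu value."""
    high = otsu_threshold(gray)
    return high // 2, high
```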
In one or more embodiments, the image mask, the gray defect image, and the incomplete edge image are used as inputs to the edge generator, which generates a complete edge structure map; the edge generator is pre-trained, and the pre-training comprises:
constructing a first generator and a first discriminator;
constructing a first training set comprising: image masks for training, gray defect images for training, and incomplete edge images for training;
inputting the first training set into the first generator, which generates a complete edge structure map for training;
the first discriminator discriminating between the complete edge structure map generated by the first generator and the complete edge structure map of the ground-truth image, until it cannot distinguish real from fake;
the resulting first generator is the edge generator.
It should be appreciated that in the edge generator pre-training stage, instead of using the complete edge map, the incomplete edge image of the damaged image and the image mask are directly extracted as inputs to the generator, and the network is trained to generate the complete edge image. The ground truth image (i.e., the complete real image without missing regions) is then used as an additional condition, and the complete edge structure map of the ground truth together with the edge map generated by the first generator are used as inputs to the first discriminator to predict whether the edge map is real. Unlike a natural image, which has an understandable distribution in each local region, the distribution of pixels in an edge map is sparse and carries little information, so the first discriminator cannot judge whether the generated distribution is close to the true one. If only the paired edge maps (the generated complete edge map and the edge map of the ground-truth image) were input to the discriminator, the adversarial loss would be difficult to optimize and training would easily fail. Therefore, the ground truth image is used as an additional condition, and image-and-edge-map pairs are used as the input of the discriminator. With this arrangement, the generated edges must not only resemble the original complete edge map but also align with the edges of the ground-truth image.
In one or more embodiments, the complete edge structure map and the original defect image are used as inputs to the content generator, which generates an image in which the missing region is filled; the content generator is pre-trained, and the pre-training comprises:
constructing a second generator and a second discriminator;
constructing a second training set comprising: complete edge structure maps for training and original defect images for training;
inputting the second training set into the second generator, which generates images for training with the missing areas filled;
the second discriminator discriminating between the filled image generated by the second generator and the real filled image, until it cannot distinguish real from fake;
the resulting second generator is the content generator.
The first generator and the second generator are both U-net network models.
The first discriminator and the second discriminator are both full convolution PatchGAN discriminators.
Let $I_{gt}$ be the ground truth image (i.e., the complete real image without missing regions), and let $I_{edge}$ and $I_{gray}$ denote the complete edge map and the corresponding grayscale map of the ground truth image $I_{gt}$, respectively.
Training objectives for the edge network include the adversarial loss, the content loss, and the feature matching loss; the adversarial and feature matching terms are combined as:

$\min_{G_1}\max_{D_1} L_{G_1} = \min_{G_1}\Big(\lambda_{adv,1}\max_{D_1} L_{adv,1} + \lambda_{FM} L_{FM}\Big)$

where $\lambda_{adv,1}$ and $\lambda_{FM}$ are regularization parameters.
The adversarial loss is defined as:

$L_{adv,1} = \mathbb{E}_{(I_{edge}, I_{gray})}\big[\log D_1(I_{edge}, I_{gray})\big] + \mathbb{E}_{I_{gray}}\big[\log\big(1 - D_1(E_{pred}, I_{gray})\big)\big]$

where $E_{pred}$ denotes the complete edge map predicted by the first generator.
Since the elements in the edge map are sparse, a data imbalance problem arises, and it is difficult to determine the weight of each pixel. To solve this problem, an inherent property of the edge map is used: each pixel value can be interpreted as the probability that the corresponding pixel in the original image is a boundary pixel. Taking the edge map as a sample of a distribution, the distance to the ground-truth edge map is measured by computing the binary cross entropy at each location. The focal loss is then employed to balance the importance of each pixel. Because the main goal is to complete the missing part, more weight is placed on the pixels inside the hole. This loss is denoted the edge-filled content loss, where H is the hole mask and $L_e(x, y)$ is the binary cross-entropy loss function, with x and y the predicted probability and the ground-truth probability, respectively.
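A hedged sketch of the edge-filled content loss described above: per-pixel binary cross entropy modulated by a focal factor and re-weighted inside the hole mask H. The exact modulation and weighting scheme are not given in the text; `gamma` and `hole_weight` are illustrative choices:

```python
import numpy as np

def edge_content_loss(pred, target, hole_mask, gamma=2.0, hole_weight=2.0):
    """Focal-weighted binary cross entropy over edge probability maps.

    `pred` and `target` are probability maps in [0, 1]; `hole_mask` is 1
    inside the missing region. The focal factor (1 - p_t)^gamma counteracts
    the sparsity of edge pixels, and hole pixels receive extra weight.
    """
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    bce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    p_t = np.where(target == 1, pred, 1 - pred)  # probability of the true class
    focal = (1 - p_t) ** gamma * bce
    weight = np.where(hole_mask == 1, hole_weight, 1.0)
    return float(np.mean(weight * focal))
```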
The feature matching loss $L_{FM}$ compares the activation maps of the intermediate layers of the discriminator; forcing the generator to produce representations similar to those of the real image stabilizes the training process. It is defined as:

$L_{FM} = \mathbb{E}\Big[\sum_{i=1}^{L} \frac{1}{N_i}\,\big\| D_1^{(i)}(I_{edge}) - D_1^{(i)}(E_{pred}) \big\|_1\Big]$

where $L$ is the final convolution layer of the discriminator, $N_i$ is the number of elements in the i-th activation layer, and $D_1^{(i)}$ is the activation map of the i-th layer of the discriminator.
The content generator is trained using the completed edge map and RGB images with missing regions. The training loss includes an $\ell_1$ reconstruction loss, an adversarial loss, a perceptual loss, and a style loss. To ensure proper scaling, the $\ell_1$ loss is normalized by the mask size.
The adversarial loss is similar to that of the edge generation network and is defined as:

$L_{adv,2} = \mathbb{E}_{(I_{gt}, E_{comp})}\big[\log D_2(I_{gt}, E_{comp})\big] + \mathbb{E}_{E_{comp}}\big[\log\big(1 - D_2(I_{comp}, E_{comp})\big)\big]$

where $E_{comp}$ is the composite edge map and $I_{comp}$ is the image produced by the content generator.
The perceptual loss $L_{perc}$ penalizes results that are perceptually dissimilar to the labels by defining a distance metric between activation maps of a pre-trained network:

$L_{perc} = \mathbb{E}\Big[\sum_{i} \frac{1}{N_i}\,\big\| \phi_i(I_{gt}) - \phi_i(I_{comp}) \big\|_1\Big]$

where $\phi_i$ is the activation map of the i-th layer of the pre-trained network and $N_i$ its number of elements.
Given activation maps of size $C_j \times H_j \times W_j$, the style loss is computed as:

$L_{style} = \mathbb{E}_j\Big[\big\| G_j^{\phi}(I_{comp}) - G_j^{\phi}(I_{gt}) \big\|_1\Big]$

where $G_j^{\phi}$ is a $C_j \times C_j$ Gram matrix constructed from the activation map $\phi_j$.
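The Gram matrix and style loss can be sketched as follows; the 1/(C·H·W) normalization is a common convention assumed here, not a detail stated in the text:

```python
import numpy as np

def gram_matrix(feat):
    """C x C Gram matrix of a C x H x W activation map, normalized by C*H*W."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feats_pred, feats_gt):
    """L1 distance between Gram matrices, summed over layers."""
    return sum(np.abs(gram_matrix(p) - gram_matrix(g)).sum()
               for p, g in zip(feats_pred, feats_gt))
```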
The total loss function for this stage is:

$L_{G_2} = \lambda_{\ell_1} L_{\ell_1} + \lambda_{adv,2} L_{adv,2} + \lambda_{perc} L_{perc} + \lambda_{style} L_{style}$

where the $\lambda$ terms are weighting parameters.
In the first generator $G_1$, the mask M (a binary matrix in which 1 marks the missing part and 0 the known part) is used as a condition, together with the masked grayscale map $\tilde{I}_{gray} = I_{gray} \odot (1 - M)$ and its masked edge map $\tilde{I}_{edge} = I_{edge} \odot (1 - M)$, where $\odot$ denotes the Hadamard product. The first generator is used to predict the edge map of the missing region:

$E_{pred} = G_1(\tilde{I}_{gray}, \tilde{I}_{edge}, M)$
In the second generator $G_2$, the incomplete color image $\tilde{I}_{gt} = I_{gt} \odot (1 - M)$ is used as input, with the completed edge structure map as a condition. Combining the background region of the ground-truth edge map with the edges generated for the damaged region in the previous stage yields the composite edge map:

$E_{comp} = I_{edge} \odot (1 - M) + E_{pred} \odot M$

The second generator $G_2$ returns an image $I_{comp}$ in which the missing region has been filled:

$I_{comp} = G_2(\tilde{I}_{gt}, E_{comp})$
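The masking and edge-compositing steps of the two-stage pipeline can be sketched as follows, using the mask convention above (1 marks missing pixels):

```python
import numpy as np

def edge_stage_inputs(gray, edge, mask):
    """Inputs to the first generator: masked grayscale, masked edges, mask.

    The Hadamard product with (1 - mask) zeroes out the damaged region.
    """
    return gray * (1 - mask), edge * (1 - mask), mask

def composite_edges(edge_gt, edge_pred, mask):
    """Merge ground-truth background edges with predicted edges in the hole."""
    return edge_gt * (1 - mask) + edge_pred * mask
```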
The first discriminator D1 and the second discriminator D2 employ the same architecture, based on a 70 × 70 PatchGAN. Let Ck-s denote a 4 × 4 Convolution-SpectralNorm-LeakyReLU layer with k filters of stride s. The structure of the discriminator is C64-2, C128-2, C256-2, C512-1, C1-1. The score produced by the last convolution layer predicts whether each 70 × 70 overlapping image patch is real or fake. This design ensures that the content generated for the missing region is realistic and credible.
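The 70 × 70 patch size follows directly from the layer stack; a small receptive-field calculation confirms it:

```python
def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride) convolution layers.

    Working backwards from a single output unit: r <- (r - 1) * s + k.
    """
    r = 1
    for k, s in reversed(layers):
        r = (r - 1) * s + k
    return r

# C64-2, C128-2, C256-2, C512-1, C1-1: all 4 x 4 kernels, strides 2, 2, 2, 1, 1
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```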
The idea of partial convolution is introduced in the forward-propagation part of the network: stacked partial convolution operations and mask-update steps perform the image inpainting, combining a convolution with a mask-update mechanism.
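A minimal numpy sketch of one partial-convolution step with mask updating, following the general partial-convolution idea; the single-channel setting and loop-based convolution are simplifications for illustration:

```python
import numpy as np

def partial_conv(img, mask, kernel):
    """One partial-convolution step with mask update (sketch).

    Here mask == 1 marks *known* pixels (note this is inverted relative to
    the hole mask used elsewhere in this document). The convolution is
    computed only over valid pixels, re-normalized by the fraction of valid
    pixels under the window; the updated mask marks a pixel valid if any
    input pixel under the window was valid, so holes shrink layer by layer.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    img_p = np.pad(img, ((ph,), (pw,)))
    mask_p = np.pad(mask, ((ph,), (pw,)))
    out = np.zeros_like(img, dtype=float)
    new_mask = np.zeros_like(mask, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            m = mask_p[i:i + kh, j:j + kw]
            valid = m.sum()
            if valid > 0:
                x = img_p[i:i + kh, j:j + kw]
                # Re-normalize by the ratio of window size to valid pixels.
                out[i, j] = np.sum(x * m * kernel) * (kh * kw / valid)
                new_mask[i, j] = 1.0
    return out, new_mask
```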
Instead of using the complete edge map, the incomplete edge image of the damaged image and the mask are directly extracted as inputs to the generator, and the network is trained to generate the complete edge image. The ground truth image is used as an additional condition, and the ground-truth edge map together with the generated edge map serves as the input of the discriminator, which predicts whether the edge map is real.
TABLE 1 comparison table of repair indexes of the present invention and other methods
The second embodiment further provides an image restoration system based on edge restoration and content restoration;
an image inpainting system based on edge inpainting and content inpainting, comprising:
a pre-processing module configured to preprocess an original defect image to obtain a gray defect image;
a smoothing module configured to smooth the gray defect image;
an edge image extraction module configured to extract an incomplete edge image and an image mask from the smoothed gray defect image;
a complete edge structure map generation module configured to take the image mask, the gray defect image, and the incomplete edge image as inputs to an edge generator, which generates a complete edge structure map;
a content filling module configured to take the complete edge structure map and the original defect image as inputs to a content generator, which generates an image with the missing area filled in.
The present disclosure also provides an electronic device comprising a memory, a processor, and computer instructions stored in the memory and executed on the processor; when the computer instructions are executed by the processor, each operation of the above method is completed, which is not repeated here for brevity.
The electronic device may be a mobile terminal and a non-mobile terminal, the non-mobile terminal includes a desktop computer, and the mobile terminal includes a Smart Phone (such as an Android Phone and an IOS Phone), Smart glasses, a Smart watch, a Smart bracelet, a tablet computer, a notebook computer, a personal digital assistant, and other mobile internet devices capable of performing wireless communication.
It should be understood that in the present disclosure, the processor may be a central processing unit CPU, but may also be other general purpose processors, digital signal processors DSP, application specific integrated circuits ASIC, off-the-shelf programmable gate arrays FPGA or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may include both read-only memory and random access memory, and may provide instructions and data to the processor, and a portion of the memory may also include non-volatile random access memory. For example, the memory may also store device type information.
In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or instructions in the form of software. The steps of a method disclosed in connection with the present disclosure may be embodied directly in a hardware processor, or in a combination of the hardware and software modules within the processor. The software modules may be located in ram, flash, rom, prom, or eprom, registers, among other storage media as is well known in the art. The storage medium is located in a memory, and a processor reads information in the memory and completes the steps of the method in combination with hardware of the processor. To avoid repetition, it is not described in detail here. Those of ordinary skill in the art will appreciate that the various illustrative elements, i.e., algorithm steps, described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is merely a division of one logic function, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An image restoration method based on edge restoration and content restoration, characterized by comprising:
preprocessing an original defect image to obtain a gray defect image;
smoothing the gray defect image;
extracting an incomplete edge image and an image mask from the smoothed gray defect image;
taking the image mask, the gray defect image, and the incomplete edge image as inputs to an edge generator, the edge generator generating a complete edge structure map;
and taking the complete edge structure map and the original defect image as inputs to a content generator, the content generator generating an image with the missing area filled in.
2. The method of claim 1, wherein the preprocessing of the original defect image to obtain the gray defect image specifically comprises: cropping the original defect image to 256 × 256 pixels, and performing grayscale conversion on all cropped images to obtain gray defect images.
3. The method of claim 1, wherein the smoothing of the gray defect image specifically comprises: performing Gaussian filtering on the gray defect image, and then smoothing again using median filtering.
4. The method of claim 1, wherein the incomplete edge image is extracted from the smoothed gray-scale defect image using the Canny edge detection algorithm.
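In practice the Canny detector would come from a library (e.g. `cv2.Canny`). As a dependency-free illustration of the idea, the following Sobel-gradient threshold is a simplified stand-in: a full Canny additionally performs non-maximum suppression and hysteresis thresholding. The threshold value is an illustrative assumption.

```python
import numpy as np

def sobel_edges(gray, threshold=0.25):
    """Gradient-magnitude edge map: a simplified stand-in for Canny."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                    # Sobel y
    pad = np.pad(gray, 1, mode="edge")
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    for i in range(3):                       # explicit 3x3 correlation
        for j in range(3):
            window = pad[i:i + gray.shape[0], j:j + gray.shape[1]]
            gx += kx[i, j] * window
            gy += ky[i, j] * window
    magnitude = np.hypot(gx, gy)
    return (magnitude / magnitude.max() > threshold).astype(np.uint8)

# A vertical step edge should be detected along the step, not in flat regions.
step = np.hstack([np.zeros((8, 4)), np.ones((8, 4))])
print(sobel_edges(step)[4, 3], sobel_edges(step)[4, 0])  # 1 0
```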
5. The method of claim 1, wherein the image mask is extracted from the smoothed gray-scale defect image by subtracting the gray-scale defect image from the incomplete edge image.
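A minimal numpy reading of claim 5: the claim specifies only a subtraction, so thresholding the absolute difference into a binary {0, 1} mask is an illustrative interpretation, not the patent's exact formulation.

```python
import numpy as np

def extract_mask(edge_image, gray_defect, eps=1e-6):
    """Binary image mask from the difference between the incomplete edge
    image and the gray-scale defect image.

    The binarization step is an assumption; the claim specifies only
    the subtraction itself.
    """
    diff = edge_image.astype(float) - gray_defect.astype(float)
    return (np.abs(diff) > eps).astype(np.uint8)

edges = np.zeros((4, 4)); edges[1, 1] = 1.0   # one pixel differs
gray = np.zeros((4, 4))
mask = extract_mask(edges, gray)
print(mask.sum())  # 1
```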
6. The method of claim 1, wherein the image mask, the gray-scale defect image, and the incomplete edge image are used as inputs to the edge generator, the edge generator generating the complete edge structure map; wherein the edge generator is pretrained, the pretraining of the edge generator comprising:
constructing a first generator and a first discriminator;
constructing a first training set comprising: an image mask for training, a gray-scale defect image for training, and an incomplete edge image for training;
inputting the first training set into the first generator, the first generator generating a complete edge structure map for training;
the first discriminator discriminating between the complete edge structure map for training generated by the first generator and the complete edge structure map of a ground-truth image, until the first discriminator cannot distinguish real from fake;
the resulting first generator being the edge generator.
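The adversarial pretraining of claim 6 (and, symmetrically, of the content generator in claim 7) can be sketched as a standard GAN loop. Everything below is a structural sketch: `generator`, `discriminator`, and `update` are placeholders for real networks and an optimizer step, and the binary cross-entropy losses are one common choice that the claim does not specify.

```python
import numpy as np

def pretrain_edge_generator(generator, discriminator, update,
                            train_set, ground_truth_edges, epochs=3):
    """Adversarial pretraining loop: a structural sketch, not the
    patent's actual architecture or loss."""
    for _ in range(epochs):
        for (mask, gray, edges), real in zip(train_set, ground_truth_edges):
            fake = generator(mask, gray, edges)   # edge map for training
            d_real = discriminator(real)          # should approach 1
            d_fake = discriminator(fake)          # should approach 0
            # Standard GAN objectives (an illustrative choice of loss):
            d_loss = -np.log(d_real + 1e-8) - np.log(1 - d_fake + 1e-8)
            g_loss = -np.log(d_fake + 1e-8)
            update(d_loss, g_loss)                # optimizer step placeholder
    return generator  # the trained first generator is the edge generator

# Toy stand-ins so the skeleton runs: an identity "generator" and a
# constant "discriminator" in place of real networks.
losses = []
sample = (np.zeros((8, 8)),) * 3
edge_gen = pretrain_edge_generator(
    generator=lambda m, g, e: e,
    discriminator=lambda x: 0.5,
    update=lambda d, g: losses.append((d, g)),
    train_set=[sample],
    ground_truth_edges=[np.ones((8, 8))],
)
print(len(losses))  # 3
```

Training stops, per the claim, when the discriminator can no longer tell generated edge maps from ground-truth ones; in practice this corresponds to the discriminator output hovering near 0.5 for both.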
7. The method of claim 1, wherein the complete edge structure map and the original defect image are used as inputs to the content generator, the content generator generating the image in which the missing area is filled; wherein the content generator is pretrained, the pretraining of the content generator comprising:
constructing a second generator and a second discriminator;
constructing a second training set comprising: a complete edge structure map for training and an original defect image for training;
inputting the second training set into the second generator, the second generator generating an image for training in which the missing area is filled;
the second discriminator discriminating between the filled image for training generated by the second generator and a ground-truth image in which the missing area is genuinely filled, until the second discriminator cannot distinguish real from fake;
the resulting second generator being the content generator.
8. An image restoration system based on edge restoration and content restoration, comprising:
a preprocessing module configured to preprocess an original defect image to obtain a gray-scale defect image;
a smoothing module configured to smooth the gray-scale defect image;
an edge image extraction module configured to extract an incomplete edge image and an image mask from the smoothed gray-scale defect image;
a complete edge structure map generation module configured to take the image mask, the gray-scale defect image, and the incomplete edge image as inputs to an edge generator, the edge generator generating a complete edge structure map; and
a content filling module configured to take the complete edge structure map and the original defect image as inputs to a content generator, the content generator generating an image in which the missing area is filled.
9. An electronic device comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 7.
CN201910870523.4A 2019-09-16 2019-09-16 Image restoration method and system based on edge restoration and content restoration Pending CN110675339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910870523.4A CN110675339A (en) 2019-09-16 2019-09-16 Image restoration method and system based on edge restoration and content restoration


Publications (1)

Publication Number Publication Date
CN110675339A true CN110675339A (en) 2020-01-10

Family

ID=69076974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910870523.4A Pending CN110675339A (en) 2019-09-16 2019-09-16 Image restoration method and system based on edge restoration and content restoration

Country Status (1)

Country Link
CN (1) CN110675339A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101459843A (en) * 2008-12-31 2009-06-17 浙江师范大学 Method for precisely extracting broken content region in video sequence
CN105261051A (en) * 2015-09-25 2016-01-20 沈阳东软医疗系统有限公司 Method and apparatus for obtaining image mask
CN106023102A (en) * 2016-05-16 2016-10-12 西安电子科技大学 Image restoration method based on multi-scale structure block
CN107437252A (en) * 2017-08-04 2017-12-05 山东师范大学 Disaggregated model construction method and equipment for ARM region segmentation
CN108460746A (en) * 2018-04-10 2018-08-28 武汉大学 A kind of image repair method predicted based on structure and texture layer
CN108537753A (en) * 2018-04-10 2018-09-14 武汉大学 A kind of image repair method based on contextual feature space constraint


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
DAN ZHAO et al.: "Parallel Image Completion with Edge and Color Map", MDPI, 13 September 2019 (2019-09-13), pages 1-29 *
KAMYAR NAZERI et al.: "EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning", arXiv:1901.00212v2, 5 January 2019 (2019-01-05), pages 1-17, XP081010575 *
LIU Jianwei et al.: "Research Progress on Applications of Generative Adversarial Networks in Various Fields" (in Chinese), Acta Automatica Sinica, 25 June 2019 (2019-06-25), pages 1-38 *
ZENG Jiexian et al.: "Image Inpainting Based on Priority Improvement and Block Division" (in Chinese), Journal of Image and Graphics, no. 09, 16 September 2017 (2017-09-16) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113496470A (en) * 2020-04-02 2021-10-12 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN113496470B (en) * 2020-04-02 2024-04-09 北京达佳互联信息技术有限公司 Image processing method and device, electronic equipment and storage medium
CN111553869A (en) * 2020-05-13 2020-08-18 北京航空航天大学 Method for complementing generated confrontation network image under space-based view angle
CN111476213A (en) * 2020-05-19 2020-07-31 武汉大势智慧科技有限公司 Method and device for filling covering area of shelter based on road image
CN111861901A (en) * 2020-06-05 2020-10-30 西安工程大学 Edge generation image restoration method based on GAN network
CN112381725B (en) * 2020-10-16 2024-02-02 广东工业大学 Image restoration method and device based on depth convolution countermeasure generation network
CN112381725A (en) * 2020-10-16 2021-02-19 广东工业大学 Image restoration method and device based on deep convolution countermeasure generation network
CN112614066A (en) * 2020-12-23 2021-04-06 文思海辉智科科技有限公司 Image restoration method and device and electronic equipment
CN113487512A (en) * 2021-07-20 2021-10-08 陕西师范大学 Digital image restoration method and device based on edge information guidance
CN113487512B (en) * 2021-07-20 2024-07-02 陕西师范大学 Digital image restoration method and device based on edge information guidance
CN113744142A (en) * 2021-08-05 2021-12-03 南方科技大学 Image restoration method, electronic device and storage medium
CN113744142B (en) * 2021-08-05 2024-04-16 南方科技大学 Image restoration method, electronic device and storage medium
CN113674176B (en) * 2021-08-23 2024-04-16 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
CN113674176A (en) * 2021-08-23 2021-11-19 北京市商汤科技开发有限公司 Image restoration method and device, electronic equipment and storage medium
CN113781296A (en) * 2021-09-22 2021-12-10 亿图软件(湖南)有限公司 Image watercolor processing method and device, computer equipment and storage medium
CN113781296B (en) * 2021-09-22 2024-05-28 亿图软件(湖南)有限公司 Image watercolor processing method, device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110675339A (en) Image restoration method and system based on edge restoration and content restoration
KR102640237B1 (en) Image processing methods, apparatus, electronic devices, and computer-readable storage media
CN107330956B (en) Cartoon hand drawing unsupervised coloring method and device
CN112132156B (en) Image saliency target detection method and system based on multi-depth feature fusion
CN108171663B (en) Image filling system of convolutional neural network based on feature map nearest neighbor replacement
CN110516541B (en) Text positioning method and device, computer readable storage medium and computer equipment
CN113642390B (en) Street view image semantic segmentation method based on local attention network
CN111178211A (en) Image segmentation method and device, electronic equipment and readable storage medium
CN110148088B (en) Image processing method, image rain removing method, device, terminal and medium
CN111476213A (en) Method and device for filling covering area of shelter based on road image
CN112712472A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111179196B (en) Multi-resolution depth network image highlight removing method based on divide-and-conquer
CN112749578A (en) Remote sensing image automatic road extraction method based on deep convolutional neural network
CN113971644A (en) Image identification method and device based on data enhancement strategy selection
CN112801914A (en) Two-stage image restoration method based on texture structure perception
CN111860465A (en) Remote sensing image extraction method, device, equipment and storage medium based on super pixels
CN114841974A (en) Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium
CN116452511B (en) Intelligent identifying method, device and medium for surrounding rock level of tunnel face of drilling and blasting method
CN116798041A (en) Image recognition method and device and electronic equipment
CN110378852A (en) Image enchancing method, device, computer equipment and storage medium
CN110135379A (en) Tongue picture dividing method and device
CN115937121A (en) Non-reference image quality evaluation method and system based on multi-dimensional feature fusion
CN116051407A (en) Image restoration method
CN113160081A (en) Depth face image restoration method based on perception deblurring
CN115115537B (en) Image restoration method based on mask training

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110