CN111047522B - Image restoration method based on edge generation - Google Patents
- Publication number: CN111047522B (application CN201911082748.XA, CN201911082748A)
- Authority
- CN
- China
- Prior art keywords
- image
- edge
- network
- generation network
- convolutional layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/77
- G06N3/045: Computing arrangements based on biological models; neural networks; architecture; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06T7/13: Image analysis; segmentation; edge detection
- G06T7/181: Segmentation; edge detection involving edge growing or edge linking
- G06T7/44: Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
- G06T2207/10004: Image acquisition modality; still image; photographic image
- G06T2207/20081: Special algorithmic details; training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/20172, G06T2207/20192: Image enhancement details; edge enhancement; edge preservation
Abstract
The invention provides an image restoration method based on edge generation, which effectively solves two problems in image restoration: the restoration region being fixed and the generated image being blurred. The method comprises the following steps: generating a defect image and extracting its edge contour; constructing an edge generation network and a content generation network, wherein the content generation network adopts a U-Net structure; in the training phase, inputting the defect image and the extracted edge contour to train the edge generation network, and inputting the image edge features generated by the trained edge generation network, the texture information of the defect image extracted by the trained texture generation network, and the defect image to train the content generation network; in the repair phase, inputting the edge features of the image to be repaired generated by the edge generation network, the texture information of the image to be repaired extracted by the texture generation network, and the image to be repaired into the trained content generation network, so that the image is restored to its original appearance. The invention relates to the fields of artificial intelligence and image processing.
Description
Technical Field
The invention relates to the field of artificial intelligence and image processing, in particular to an image restoration method based on edge generation.
Background
Image restoration is an important branch of computer vision and a multidisciplinary problem spanning deep learning, image processing and related fields. It is a technique that uses information from the known regions of an image to remove unwanted content and restore missing pixels, and is currently widely applied to redundant-object removal, face restoration in criminal investigation, and biomedical imaging.
Image restoration methods in the prior art suffer from a fixed restoration region and a blurred generated image.
Disclosure of Invention
The invention aims to provide an image restoration method based on edge generation that solves the prior-art problems of a fixed restoration region and a blurred generated image.
In order to solve the above technical problem, an embodiment of the present invention provides an image inpainting method based on edge generation, including:
acquiring training set images, processing them to generate defect images, and extracting the edge contour of the defect image, wherein the training set images are real images;
constructing an edge generation network and a content generation network, wherein each generation network corresponds to a discrimination network used to judge the authenticity of the content it generates, and the content generation network adopts a U-Net structure;
in the training phase: inputting the defect image and the extracted edge contour to train the edge generation network, and inputting the image edge features generated by the trained edge generation network, the texture information of the defect image extracted by the trained texture generation network, and the defect image to train the content generation network;
in the repair phase: inputting the image to be repaired into the trained edge generation network and texture generation network, and inputting the image edge features generated by the edge generation network, the texture information extracted by the texture generation network, and the image to be repaired into the trained content generation network to restore the image to its original appearance.
Further, the acquiring of training set images, processing them to generate defect images, and extracting the edge contour of the defect image includes:
normalizing the training set image to generate an arbitrary defect image of a preset size, wherein the defect image is a grayscale image;
and extracting the edge contour of the defect image and labeling the edge contour.
Further, the edge generation network includes: a convolutional layer 1, a convolutional layer 2 connected to the convolutional layer 1, a convolutional layer 3 connected to the convolutional layer 2, eight residual blocks connected to the convolutional layer 3, a convolutional layer 4 connected to the eight residual blocks, a convolutional layer 5 connected to the convolutional layer 4, and a convolutional layer 6 connected to the convolutional layer 5;
wherein convolutional layers 1, 2 and 3 are used to downsample the input defect image and the extracted edge contour, and convolutional layers 4, 5 and 6 are used to upsample them.
Further, the texture generation network is a trained VGG-19 network, and includes: a convolutional layer 1, a convolutional layer 2 connected to the convolutional layer 1, a convolutional layer 3 connected to the convolutional layer 2, a convolutional layer 4 connected to the convolutional layer 3, a convolutional layer 5 connected to the convolutional layer 4, and a fully connected layer connected to the convolutional layer 5; wherein the features of convolutional layers 3 and 4 of the VGG-19 network are extracted jointly as the texture information of the image.
Furthermore, the compression channel of the U-Net structure employs an encoder and the expansion channel employs a decoder; U-Net is a convolutional neural network.
Further, during training, the loss function of the edge generation network is expressed as:

L_{G_1} = \lambda_{adv,1} L_{adv,1} + \lambda_{EF} L_{EF}

where L_{G_1} denotes the loss function of the edge generation network G_1; \lambda_{adv,1} and \lambda_{EF} are regularization parameters; L_{EF} denotes the edge feature loss; L_{adv,1} denotes the adversarial loss.

The edge feature loss is expressed as:

L_{EF} = E\left[ \sum_{i=1}^{N} \frac{1}{N_i} \left\| D_1^{(i)}(C_r) - D_1^{(i)}(C_p) \right\|_2^2 \right]

where N denotes the number of convolutional layers of the edge generation network; N_i denotes the i-th convolutional layer; C_r denotes the edge features of the real image; C_p = G_1(\tilde{I}_{gray}, \tilde{C}) denotes the edge features generated by the edge generation network G_1, with \tilde{I}_{gray} the grayscale map of the input defect image and \tilde{C} the edge contour of the input defect image; the norm term denotes the mean square error between the features of the discrimination network D_1 for the input real-image edge features C_r and the generated edge features C_p.

The adversarial loss is expressed as:

L_{adv,1} = E\left[ \log D_1(C_r, I_{gray}) \right] + E\left[ \log\left( 1 - D_1(C_p, I_{gray}) \right) \right]

where D_1(C_r, I_{gray}) denotes the discrimination result of network D_1 for the input C_r, with I_{gray} the grayscale map of the real image; D_1(C_p, I_{gray}) denotes the discrimination result for the input C_p.
Further, the loss function of the texture generation network is expressed as:

L_{G_2} = E_c(I_g, I_{gt})

where L_{G_2} denotes the loss function of the texture generation network G_2; I_g = G_2(\tilde{I}) denotes the texture information extracted by the texture generation network, with \tilde{I} the input defect image and I_{gt} the real image; E_c(I_g, I_{gt}) denotes the Euclidean distance between I_g and I_{gt}.
Further, during the training process, the loss function of the content generation network is expressed as:

L_{G_3} = \lambda_{adv,3} L_{adv,3} + \lambda_{l1} L_{l1}

where L_{G_3} denotes the loss function of the content generation network G_3; \lambda_{adv,3} and \lambda_{l1} are regularization parameters; L_{adv,3} denotes the adversarial loss; L_{l1} denotes the reconstruction loss.

The adversarial loss is expressed as:

L_{adv,3} = E\left[ \log D_3(I_{gt}, C_{and}) \right] + E\left[ \log\left( 1 - D_3(I_p, C_{and}) \right) \right]

where I_{gt} denotes the real image; C_{and}, an input of the content generation network, denotes the superposition of the edge generation network output C_p and the texture generation network output I_g; I_p = G_3(I_{gt}, C_{and}) denotes the prediction map generated by the content generation network; D_3(I_{gt}, C_{and}) denotes the discrimination result of network D_3 for the inputs I_{gt} and C_{and}; D_3(I_p, C_{and}) denotes the discrimination result for the prediction map generated by the content generation network.

The reconstruction loss L_{l1} denotes the difference between the image I_p generated by the content generation network and the real image I_{gt}, and is expressed as:

L_{l1} = \| I_p - I_{gt} \|_1
Further, during content generation network training, the discrimination network D_3 compares the prediction map generated by the content generation network with the real image, and the parameters \lambda_{adv,3} and \lambda_{l1} are updated until the loss function L_{G_3} exceeds a preset first threshold.
The technical scheme of the invention has the following beneficial effects:
In this scheme, during the repair phase, the content generation network with the U-Net structure can repair an arbitrary region of the defective image to be repaired according to the image edge features generated by the edge generation network and the texture information extracted by the texture generation network. This effectively solves the problems of a fixed repair region and a blurred generated image, and produces a repaired image closer to the real image.
Drawings
Fig. 1 is a schematic flowchart of an image inpainting method based on edge generation according to an embodiment of the present invention;
fig. 2 is a schematic frame diagram of an image inpainting method based on edge generation according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an edge generation network according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a texture generation network according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a content generation network according to an embodiment of the present invention.
Detailed Description
To make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
Aiming at the problems of a fixed restoration region and a blurred generated image, the invention provides an image restoration method based on edge generation.
Example one
As shown in fig. 1, an image inpainting method based on edge generation according to an embodiment of the present invention includes:
s101, acquiring a training set image, processing the image to generate a defect image, and extracting an edge contour of the defect image; wherein, the training set images are real images;
s102, constructing an edge generation network and a content generation network, wherein each generation network corresponds to a discrimination network used to judge the authenticity of the content it generates, and the content generation network adopts a U-Net structure;
s103, in the training phase: inputting the defect image and the extracted edge contour to train the edge generation network, and inputting the image edge features generated by the trained edge generation network, the texture information of the defect image extracted by the trained texture generation network, and the defect image to train the content generation network;
s104, in the repair phase: inputting the image to be repaired into the trained edge generation network and texture generation network, and inputting the image edge features generated by the edge generation network, the texture information extracted by the texture generation network, and the image to be repaired into the trained content generation network to restore the image to its original appearance.
According to the image restoration method based on edge generation, in the repair phase the content generation network with the U-Net structure can repair an arbitrary region of the damaged image to be repaired according to the image edge features generated by the edge generation network and the texture information extracted by the texture generation network. This effectively solves the problems of a fixed restoration region and a blurred generated image, and produces a restored image closer to the real image.
For better understanding of the image inpainting method based on edge generation provided by the embodiment of the present invention, the detailed description thereof, as shown in fig. 1 and fig. 2, may specifically include the following steps:
a1, acquiring a training image set, and preprocessing the training image set, specifically:
carrying out normalization processing on the training set images to generate any defect image with a preset size, wherein the defect image is a gray level image;
and extracting the edge contour of the defect image and labeling the edge contour.
In this embodiment, for example, first, a scale normalization process is performed on a training set image to generate a standard image with a pixel size of 256 × 256 and generate a grayscale image thereof; then, a defect image is generated through random simulation, and finally, the Canny edge detection algorithm is used for extracting the edge contour of the defect image and labeling the edge contour, for example, the edge contour of the defect image is drawn in black.
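The preprocessing in step A1 can be sketched as follows. This is a minimal pure-Python illustration on a small nested-list image: the rectangular mask is a simplified stand-in for the patent's random defect simulation, and the gradient-magnitude detector is a crude stand-in for the Canny edge detector actually used; function names and thresholds are illustrative, not from the patent.

```python
import random

def make_defect_image(gray, mask_h, mask_w, seed=0):
    """Zero out a random rectangular region of a grayscale image
    (nested lists with values in [0, 1]), simulating a defect.
    Returns the defect image and the binary defect mask."""
    rng = random.Random(seed)
    h, w = len(gray), len(gray[0])
    top = rng.randrange(0, h - mask_h + 1)
    left = rng.randrange(0, w - mask_w + 1)
    out = [row[:] for row in gray]
    mask = [[0.0] * w for _ in range(h)]
    for y in range(top, top + mask_h):
        for x in range(left, left + mask_w):
            out[y][x] = 0.0   # missing pixels
            mask[y][x] = 1.0  # 1 marks the defect region
    return out, mask

def edge_contour(gray, thresh=0.2):
    """Crude gradient-magnitude edge map (central differences).
    The patent uses the Canny detector; this only illustrates
    the kind of contour map the pipeline consumes."""
    h, w = len(gray), len(gray[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gray[y][x + 1] - gray[y][x - 1]
            gy = gray[y + 1][x] - gray[y - 1][x]
            if (gx * gx + gy * gy) ** 0.5 > thresh:
                edges[y][x] = 1.0
    return edges
```

In practice the normalization to 256 × 256 and the Canny extraction would be done with an image library; the data shapes and the mask/contour semantics are the point here.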
And A2, constructing an edge generation network and a content generation network, and acquiring a trained texture generation network.
In this embodiment, the edge generation network and the content generation network each correspond to a discrimination network. The discrimination network is equivalent to a binary classifier used to judge the authenticity of the content produced by the generation network; the generation network and the discrimination network reach an equilibrium through a mutual game, so that the results produced by the generation network can deceive the discrimination network.
In this embodiment, the generative adversarial network (GAN), composed of a generation network and a discrimination network, performs well in image restoration: through the game between the generation network and the discrimination network, a reasonable image structure can be generated, and the restoration result can be evaluated quickly and accurately.
In this embodiment, the edge generation network is applied to image restoration, and generating the edge features of the image preserves the integrity of its semantic information. Because the restoration result may still exhibit detail problems, such as local blurring and shadows, that seriously affect the visual quality of the repair, a texture generation network is introduced to extract the texture information of the image; the edge contour and the texture information are then input into the content generation network so that the restoration result is closer to the original image.
In this embodiment, fig. 3 is a schematic structural diagram of an edge generating network, where the edge generating network includes: a convolutional layer 1, a convolutional layer 2 connected to the convolutional layer 1, a convolutional layer 3 connected to the convolutional layer 2, eight residual blocks connected to the convolutional layer 3, a convolutional layer 4 connected to the eight residual blocks, a convolutional layer 5 connected to the convolutional layer 4, and a convolutional layer 6 connected to the convolutional layer 5; wherein, the convolution layers 1, 2 and 3 are used for down-sampling the input defect image and the extracted edge contour; the convolutional layers 4, 5, 6 are used to upsample the input defect image and the extracted edge profile.
In this embodiment, the edge generation network downsamples through three convolutional layers, passes the result through eight residual blocks, and finally upsamples through three convolutional layers. The feature map size at convolutional layer 1 is 256 × 256, at convolutional layer 2 is 128 × 128, and at convolutional layer 3 is 64 × 64; downsampling through three convolutional layers reduces the memory occupancy of the network. The feature map size at the eight residual blocks is 64 × 64; the residual blocks increase the depth of the network so that richer information can be extracted. Three further convolutional layers follow, with feature map sizes of 64 × 64 at convolutional layer 4, 128 × 128 at convolutional layer 5, and 256 × 256 at convolutional layer 6; after upsampling through these three layers, the image is restored to its original size and output.
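The resolution schedule described above can be checked with a small bookkeeping sketch. Layer names follow the numbering in the text; the function itself is illustrative and not part of the patent.

```python
def edge_generator_plan(input_size=256):
    """Spatial resolution at each stage of the edge generation
    network: three downsampling convolutions (256 -> 128 -> 64),
    eight residual blocks at 64 x 64 (depth without resizing),
    then three upsampling convolutions back to 256."""
    sizes = []
    s = input_size
    sizes.append(("conv1", s))           # 256 x 256
    s //= 2; sizes.append(("conv2", s))  # 128 x 128
    s //= 2; sizes.append(("conv3", s))  # 64 x 64
    for i in range(8):
        sizes.append((f"resblock{i+1}", s))  # stay at 64 x 64
    sizes.append(("conv4", s))           # 64 x 64
    s *= 2; sizes.append(("conv5", s))   # 128 x 128
    s *= 2; sizes.append(("conv6", s))   # 256 x 256
    return sizes
```

The output confirms the image is returned to its original size after the final convolution.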
In this embodiment, fig. 4 is a schematic structural diagram of the texture generation network, which uses a trained VGG-19 network and includes: a convolutional layer 1, a convolutional layer 2 connected to the convolutional layer 1, a convolutional layer 3 connected to the convolutional layer 2, a convolutional layer 4 connected to the convolutional layer 3, a convolutional layer 5 connected to the convolutional layer 4, and a fully connected layer connected to the convolutional layer 5. The feature map size at convolutional layer 1 is 256 × 256, at convolutional layer 2 is 128 × 128, at convolutional layer 3 is 64 × 64, at convolutional layer 4 is 32 × 32, and at convolutional layer 5 is 16 × 16. The VGG-19 network is a classifier, but this embodiment uses it only for feature extraction; to obtain a better texture effect, the features of convolutional layers 3 and 4 of the VGG-19 network are extracted jointly as the texture information of the image.
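Combining features from two depths requires bringing them to a common resolution. The sketch below shows one plausible way to merge the conv3 (64 × 64) and conv4 (32 × 32) feature maps: nearest-neighbour upsampling of the coarser maps followed by channel concatenation. The upsampling choice is an assumption; the patent does not state how the two resolutions are merged.

```python
def upsample_nn(fmap, factor):
    """Nearest-neighbour upsampling of a 2-D feature map (nested lists)."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in range(factor)]
        out.extend([wide[:] for _ in range(factor)])
    return out

def joint_texture_features(conv3_maps, conv4_maps):
    """Joint extraction of two layers' features: upsample the
    coarser conv4 maps x2 to conv3 resolution, then concatenate
    the two sets along the channel axis (here, list order)."""
    return conv3_maps + [upsample_nn(m, 2) for m in conv4_maps]
```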
In this embodiment, fig. 5 is a schematic structural diagram of the constructed content generation network, which adopts a U-Net structure shaped like the letter "U": the left-half compression channel employs an encoder and the right-half expansion channel employs a decoder. With this structure, the repair of a defect image in any region can be completed.
In the foregoing specific implementation of the image inpainting method based on edge generation, further, the content generation network employs a U-Net network, and a compression channel of the U-Net network employs an encoder and an expansion channel employs a decoder; wherein, the U-Net network is a convolution neural network.
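The encoder/decoder symmetry that makes U-Net skip connections work can be verified with a shape-bookkeeping sketch. The depth of 4 levels is an illustrative choice; the patent does not specify the number of levels.

```python
def unet_plan(size=256, depth=4):
    """Shape bookkeeping for a U-Net: the compression channel
    (encoder) halves resolution at each level, the expansion
    channel (decoder) doubles it back; at each decoder level the
    output resolution must match the corresponding encoder level
    so the skip connection can concatenate the two feature maps."""
    enc = [size // (2 ** i) for i in range(depth + 1)]  # 256..16
    dec = []
    s = enc[-1]
    for skip in reversed(enc[:-1]):
        s *= 2
        assert s == skip  # decoder output matches skip resolution
        dec.append(s)
    return enc, dec
```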
A3, training the edge generating network and the content generating network, which may specifically include the following steps:
a31, inputting a defect image and an extracted edge outline to train an edge generation network;
In this embodiment, the input of the edge generation network consists of the grayscale map of the defect image and the extracted edge contour, which reduces the computational load of the edge generation network. The edge generation network is used to generate and label the edge features (i.e., edge contours) of the defect image; for example, the generated edge features of the defect image are labeled in blue.
In this embodiment, the size of the defect image input to the edge generation network is 256 × 256 pixels, the convolution kernel size of the edge generation network is set to 5 × 5 pixels, and the stride is set to 2 pixels. During training, the loss function of the edge generation network consists of the adversarial loss L_{adv,1} and the edge feature loss L_{EF}, expressed as:

L_{G_1} = \lambda_{adv,1} L_{adv,1} + \lambda_{EF} L_{EF}

where L_{G_1} denotes the loss function of the edge generation network G_1; \lambda_{adv,1} and \lambda_{EF} are regularization parameters; L_{EF} denotes the edge feature loss, which keeps the generated edge features as close as possible to those of the real image; L_{adv,1} denotes the adversarial loss, which penalizes unrealistic images.
The edge feature loss is expressed as:

L_{EF} = E\left[ \sum_{i=1}^{N} \frac{1}{N_i} \left\| D_1^{(i)}(C_r) - D_1^{(i)}(C_p) \right\|_2^2 \right]

where N denotes the number of convolutional layers of the edge generation network; N_i denotes the i-th convolutional layer; C_r denotes the edge features of the real image; C_p = G_1(\tilde{I}_{gray}, \tilde{C}) denotes the edge features generated by the edge generation network G_1, with \tilde{I}_{gray} the grayscale map of the input defect image and \tilde{C} the edge contour of the input defect image; the norm term denotes the mean square error between the features of the discrimination network D_1 for the real-image edge features C_r and the generated edge features C_p.
The adversarial loss is obtained by gaming the edge features generated by the edge generation network against the real image, and is expressed as:

L_{adv,1} = E\left[ \log D_1(C_r, I_{gray}) \right] + E\left[ \log\left( 1 - D_1(C_p, I_{gray}) \right) \right]

where D_1(C_r, I_{gray}) denotes the discrimination result of network D_1 for the input C_r, with I_{gray} the grayscale map of the real image; D_1(C_p, I_{gray}) denotes the discrimination result for the input C_p.
In this embodiment, during edge generation network training, the discrimination network D_1 compares the output of the edge generation network with the real image, and the parameters \lambda_{adv,1} and \lambda_{EF} are updated until the loss function L_{G_1} exceeds a preset second threshold.
A32, acquiring texture information of the defect image extracted by the trained texture generation network;
In this embodiment, the input of the texture generation network is the defect image and the output is the extracted texture features; that is, the texture generation network extracts the texture information of the defect image so that the repaired image does not exhibit local blurring.
In this embodiment, the loss function of the texture generation network is expressed as:

L_{G_2} = E_c(I_g, I_{gt})

where L_{G_2} denotes the loss function of the texture generation network G_2; I_g = G_2(\tilde{I}) denotes the texture information extracted by the texture generation network, with \tilde{I} the input defect image and I_{gt} the real image; E_c(I_g, I_{gt}) denotes the Euclidean distance between I_g and I_{gt}: the smaller the distance, the closer the extracted texture information is to the real image.
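The Euclidean distance used as the texture loss can be written directly on flattened pixel vectors; this one-function sketch is illustrative.

```python
def texture_loss(i_g, i_gt):
    """L_G2 = E_c(I_g, I_gt): Euclidean (L2) distance between the
    extracted texture information and the real image, both given
    as flattened pixel lists."""
    return sum((a - b) ** 2 for a, b in zip(i_g, i_gt)) ** 0.5
```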
And A33, inputting the image edge characteristics generated by the trained edge generation network, the texture information of the defect image extracted by the trained texture generation network and the defect image to train the content generation network.
In this embodiment, the content generation network (U-Net) is trained with the outputs of step A31 and step A32; that is, the input of the content generation network consists of the image edges generated by the edge generation network, the texture information extracted by the texture generation network, and the defect image. The content generation network repairs the semantic content (edge features and texture information) of the defect image, making the generated result fuller. With the U-Net structure, the content generation network can repair an arbitrary region of the defect image to be repaired according to the edge features generated by the edge generation network and the texture information extracted by the texture generation network, effectively solving the problems of a fixed repair region and a blurred generated image, and producing a repaired image closer to the real image.
During training, the loss function of the content generation network consists of the adversarial loss L_{adv,3} and the reconstruction loss L_{l1}, expressed as:

L_{G_3} = \lambda_{adv,3} L_{adv,3} + \lambda_{l1} L_{l1}

where L_{G_3} denotes the loss function of the content generation network G_3; \lambda_{adv,3} and \lambda_{l1} are regularization parameters; L_{adv,3} denotes the adversarial loss; L_{l1} denotes the reconstruction loss.
The adversarial loss increases the realism of the generated result through a game between the prediction of the content generation network and the real image, and is expressed as:

L_{adv,3} = E\left[ \log D_3(I_{gt}, C_{and}) \right] + E\left[ \log\left( 1 - D_3(I_p, C_{and}) \right) \right]

where I_{gt} denotes the real image; C_{and}, an input of the content generation network, denotes the superposition of the edge generation network output C_p and the texture generation network output I_g; I_p = G_3(I_{gt}, C_{and}) denotes the prediction map generated by the content generation network; D_3(I_{gt}, C_{and}) denotes the discrimination result of network D_3 for the inputs I_{gt} and C_{and}; D_3(I_p, C_{and}) denotes the discrimination result for the prediction map generated by the content generation network.
The reconstruction loss L_{l1} denotes the difference between the image I_p generated by the content generation network and the real image I_{gt}, and is expressed as:

L_{l1} = \| I_p - I_{gt} \|_1
In this embodiment, during content generation network training, the discrimination network D_3 compares the prediction map generated by the content generation network with the real image, and the parameters \lambda_{adv,3} and \lambda_{l1} are updated until the loss function L_{G_3} exceeds a preset first threshold.
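The content network objective combines the same two-term adversarial loss with an L1 reconstruction term. The sketch below is illustrative; the lambda values stand in for the patent's regularization parameters.

```python
import math

def l1_loss(pred, real):
    """Reconstruction loss L_l1 = || I_p - I_gt ||_1: sum of
    absolute differences over flattened pixels."""
    return sum(abs(a - b) for a, b in zip(pred, real))

def content_generator_loss(d_real, d_fake, pred, real,
                           lam_adv=0.1, lam_l1=1.0):
    """L_G3 = lam_adv * L_adv,3 + lam_l1 * L_l1, with the
    adversarial term computed from scalar discriminator outputs
    in (0, 1). Lambda defaults are illustrative placeholders."""
    l_adv = -(math.log(d_real) + math.log(1.0 - d_fake))
    return lam_adv * l_adv + lam_l1 * l1_loss(pred, real)
```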
And A4, repairing a defect image to be repaired (called the image to be repaired for short).
In this embodiment, during the repair phase, the image to be repaired is input into the trained edge generation network and texture generation network; the image edge features generated by the edge generation network, the texture information extracted by the texture generation network, and the image to be repaired are then input into the trained content generation network, whose parameters \lambda_{adv,3} and \lambda_{l1} were obtained through the loss function L_{G_3}, and the image is restored to its original appearance.
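The repair-stage data flow of step A4 reduces to a short orchestration function. All four callables are stand-ins for the trained networks and the contour extractor; only the wiring comes from the patent.

```python
def repair(image, edge_net, texture_net, content_net, extract_contour):
    """Repair pipeline: the damaged image and its extracted contour
    go to the edge generation network, the image goes to the texture
    generation network, and the content generation network fuses
    edge features, texture information and the damaged image into
    the repaired result."""
    edges = edge_net(image, extract_contour(image))
    texture = texture_net(image)
    return content_net(image, edges, texture)
```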
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (7)
1. An image inpainting method based on edge generation is characterized by comprising the following steps:
acquiring a training set image, processing the image to generate a defect image, and extracting an edge contour of the defect image; wherein, the training set images are real images;
constructing an edge generating network and a content generating network, wherein the edge generating network and the content generating network respectively correspond to a distinguishing network which is used for distinguishing the authenticity of the generated content of the generating network, and the content generating network adopts a U-Net structure;
in the training phase: inputting the defective image and the extracted edge contour to train the edge generation network, and inputting the image edge features generated by the trained edge generation network, the texture information of the defective image extracted by the trained texture generation network, and the defective image to train the content generation network;
in the repairing stage, inputting an image to be repaired to a trained edge generation network and a texture generation network, and inputting image edge characteristics generated by the edge generation network, texture information extracted by the texture generation network and the image to be repaired to the trained content generation network to realize the original image repairing of the image;
the edge generation network includes: a convolutional layer 1, a convolutional layer 2 connected to the convolutional layer 1, a convolutional layer 3 connected to the convolutional layer 2, eight residual blocks connected to the convolutional layer 3, a convolutional layer 4 connected to the eight residual blocks, a convolutional layer 5 connected to the convolutional layer 4, and a convolutional layer 6 connected to the convolutional layer 5;
wherein the convolutional layers 1, 2 and 3 are used for down-sampling the input defect image and the extracted edge contour, and the convolutional layers 4, 5 and 6 are used for up-sampling the resulting feature maps;
the texture generation network is a pre-trained VGG-19 network and comprises: a convolutional layer 1, a convolutional layer 2 connected with the convolutional layer 1, a convolutional layer 3 connected with the convolutional layer 2, a convolutional layer 4 connected with the convolutional layer 3, a convolutional layer 5 connected with the convolutional layer 4, and a fully connected layer connected with the convolutional layer 5; wherein the features of the convolutional layer 3 and the convolutional layer 4 of the VGG-19 network are jointly extracted as the texture information of the image.
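As a sketch of the edge generation network described above, the following traces spatial resolution through its layers, assuming (the claims do not specify strides) that each of convolutional layers 1-3 halves the resolution and each of convolutional layers 4-6 doubles it, while the eight residual blocks preserve it:

```python
# Shape-tracking sketch of the edge generation network (strides assumed):
# three down-sampling convolutions, eight resolution-preserving residual
# blocks, then three up-sampling layers restoring the input resolution.

def edge_network_shapes(size):
    shapes = [size]
    for _ in range(3):      # convolutional layers 1-3: assumed stride-2 downsampling
        size //= 2
        shapes.append(size)
    for _ in range(8):      # eight residual blocks: resolution unchanged
        shapes.append(size)
    for _ in range(3):      # convolutional layers 4-6: assumed x2 upsampling
        size *= 2
        shapes.append(size)
    return shapes

shapes = edge_network_shapes(256)
print(shapes[0], min(shapes), shapes[-1])  # 256 32 256
```

Under these assumptions the output resolution matches the input, as required for an image-to-image edge generator.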
2. The image inpainting method based on edge generation as claimed in claim 1, wherein the obtaining of the training set image, the processing of the image, the generation of the defect image, and the extraction of the edge contour of the defect image comprise:
carrying out normalization processing on the training set images to generate an arbitrary defect image of a preset size, wherein the defect image is a grayscale image;
and extracting the edge contour of the defect image, and labeling the edge contour.
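A hypothetical sketch of this preprocessing step, with an invented rectangular mask (the claim only requires a defect image of a preset size; the mask shape, position, and grayscale weights here are assumptions):

```python
# Sketch of defect-image generation: convert an RGB training image to
# grayscale, then blank out a rectangle to simulate the missing region.

def to_gray(rgb_image):
    """Luminance grayscale from an RGB image given as nested [r, g, b] lists."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_image]

def make_defect(gray, top, left, h, w, fill=0.0):
    """Blank out an h-by-w rectangle to simulate a missing region."""
    out = [row[:] for row in gray]
    for y in range(top, top + h):
        for x in range(left, left + w):
            out[y][x] = fill
    return out

rgb = [[[1.0, 1.0, 1.0]] * 4 for _ in range(4)]   # 4x4 white image
defect = make_defect(to_gray(rgb), 1, 1, 2, 2)
print(sum(px == 0.0 for row in defect for px in row))  # 4
```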
3. The image inpainting method based on edge generation as claimed in claim 1, wherein the compression channel of the U-Net structure adopts an encoder, and the expansion channel adopts a decoder; wherein U-Net is a convolutional neural network.
4. The image inpainting method based on edge generation as claimed in claim 1, wherein, in the training process, the loss function of the edge generation network is expressed as:
L_G1 = λ_adv,1 · L_adv,1 + λ_EF · L_EF
wherein L_G1 represents the loss function of the edge generation network G_1; λ_adv,1 and λ_EF each represent a regularization parameter; L_EF represents the edge feature loss, and L_adv,1 represents the adversarial loss;
the edge feature loss is expressed as:
L_EF = Σ_{i=1}^{N} (1/N_i) E[ ||D_1^(i)(C_r) − D_1^(i)(C_p)||² ]
wherein N represents the number of convolutional layers of the edge generation network; N_i corresponds to the i-th convolutional layer; C_r represents the edge features of the real image; C_p represents the edge features generated by the edge generation network G_1 from the gray-scale map of the input defect image and the edge contour of the input defect image; and ||D_1^(i)(C_r) − D_1^(i)(C_p)||² is the mean square error between the response of the discrimination network D_1 to the input edge features C_r of the real image and its response to the edge features C_p generated by the edge generation network G_1;
the adversarial loss is expressed as:
L_adv,1 = E[log D_1(C_r)] + E[log(1 − D_1(C_p))].
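For illustration, the weighted objective of claim 4 combines the two losses as follows; the λ values below are invented for the example and are not taken from the patent:

```python
# Toy arithmetic for the edge network objective
# L_G1 = lambda_adv,1 * L_adv,1 + lambda_EF * L_EF
# (a weighted sum of adversarial and edge-feature losses).

def edge_generator_loss(l_adv, l_ef, lambda_adv=1.0, lambda_ef=10.0):
    """Weighted combination of adversarial and edge-feature losses.

    The default lambda values are assumptions, not values from the patent.
    """
    return lambda_adv * l_adv + lambda_ef * l_ef

print(edge_generator_loss(0.7, 0.05))  # 1.2
```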
5. The image inpainting method based on edge generation as claimed in claim 1, wherein the loss function of the texture generation network is expressed as:
L_G2 = E_c(I_g, I_gt)
wherein L_G2 represents the loss function of the texture generation network G_2; I_g represents the texture information extracted by the texture generation network from the input defective image; I_gt represents the real image; and E_c(I_g, I_gt) represents the Euclidean distance between I_g and I_gt.
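The Euclidean distance E_c used in claim 5 can be sketched over flattened feature vectors:

```python
# Sketch of the texture loss: the Euclidean (L2) distance between two
# equal-length feature vectors, as used for E_c(I_g, I_gt).

import math

def euclidean_distance(a, b):
    """L2 distance between two equal-length feature vectors."""
    assert len(a) == len(b), "feature vectors must have the same length"
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

I_g  = [0.0, 3.0, 0.0]   # extracted texture features (example values)
I_gt = [4.0, 0.0, 0.0]   # real-image features (example values)
print(euclidean_distance(I_g, I_gt))  # 5.0
```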
6. The image inpainting method based on edge generation as claimed in claim 4, wherein, in the training process, the loss function of the content generation network is expressed as:
L_G3 = λ_adv,3 · L_adv,3 + λ_l1 · L_l1
wherein L_G3 represents the loss function of the content generation network G_3; λ_adv,3 and λ_l1 each represent a regularization parameter; L_adv,3 represents the adversarial loss, and L_l1 represents the reconstruction loss;
the adversarial loss is expressed as:
L_adv,3 = E[log D_3(I_gt, C_and)] + E[log(1 − D_3(I_p, C_and))]
wherein I_gt represents the real image; C_and, the input of the content generation network, is the superposition of the edge generation network output C_p and the texture generation network output I_g; I_p represents the prediction image generated by the content generation network, I_p = G_3(I_gt, C_and); D_3(I_gt, C_and) represents the discrimination result of the discrimination network D_3 for the inputs I_gt and C_and; and D_3(I_p, C_and) represents the discrimination result of the discrimination network D_3 for the prediction image generated by the content generation network;
the reconstruction loss L_l1 represents the difference between the image I_p generated by the content generation network and the real image I_gt, and is expressed as:
L_l1 = ||I_p − I_gt||_1.
7. The image inpainting method based on edge generation as claimed in claim 6, wherein, in the training process of the content generation network, the discrimination network D_3 compares the prediction image generated by the content generation network with the real image, and the parameters λ_adv,3 and λ_l1 are updated until the loss function L_G3 is greater than a preset first threshold.
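A toy sketch of the stopping rule in claim 7; the parameter-update schedule and all numeric values are invented for illustration, since the claim specifies only that λ_adv,3 and λ_l1 are updated until the loss exceeds the preset first threshold:

```python
# Toy model of claim 7's stopping rule: keep updating the regularization
# parameters and recomputing L_G3 = lam_adv * L_adv + lam_l1 * L_l1 until
# the loss exceeds a preset first threshold. The update step is invented.

def train_content_params(l_adv, l_l1, threshold, step=0.1):
    lam_adv, lam_l1 = 0.0, 0.0
    loss = lam_adv * l_adv + lam_l1 * l_l1
    while loss <= threshold:       # stop once the loss passes the threshold
        lam_adv += step            # invented update rule for illustration
        lam_l1 += step
        loss = lam_adv * l_adv + lam_l1 * l_l1
    return lam_adv, lam_l1, loss

lam_a, lam_b, final = train_content_params(0.5, 0.5, threshold=0.9)
print(round(final, 2))  # 1.0
```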
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911082748.XA CN111047522B (en) | 2019-11-07 | 2019-11-07 | Image restoration method based on edge generation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111047522A CN111047522A (en) | 2020-04-21 |
CN111047522B true CN111047522B (en) | 2023-04-07 |
Family
ID=70231825
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460746A (en) * | 2018-04-10 | 2018-08-28 | 武汉大学 | A kind of image repair method predicted based on structure and texture layer |
CN108520503A (en) * | 2018-04-13 | 2018-09-11 | 湘潭大学 | A method of based on self-encoding encoder and generating confrontation network restoration face Incomplete image |
CN109255776A (en) * | 2018-07-23 | 2019-01-22 | 中国电力科学研究院有限公司 | A kind of transmission line of electricity split pin defect automatic identifying method |
CN110020996A (en) * | 2019-03-18 | 2019-07-16 | 浙江传媒学院 | A kind of image repair method based on Prior Knowledge Constraints, system and computer equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10503998B2 (en) * | 2016-11-07 | 2019-12-10 | Gracenote, Inc. | Recurrent deep neural network system for detecting overlays in images |
Non-Patent Citations (3)
Title |
---|
Zhang Guimei; Li Yanbing. Image inpainting with a fractional-order TV model combined with texture structure. Journal of Image and Graphics, No. 05, full text. *
Tian Qichuan; Tian Maoxin. Image restoration algorithm based on Chebyshev neural networks. Computer Engineering, No. 14, full text. *
Hu Min; Li Liangfu. Crack image inpainting method based on generative adversarial networks. Computer Applications and Software, No. 06, full text. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||