CN112686822B - Image completion method based on a stacked generative adversarial network - Google Patents

Image completion method based on a stacked generative adversarial network

Info

Publication number
CN112686822B
Authority
CN
China
Prior art keywords
image
network
completion
discriminator
loss function
Prior art date
Legal status
Active
Application number
CN202011607204.3A
Other languages
Chinese (zh)
Other versions
CN112686822A (en)
Inventor
任勇鹏
李孝杰
任红萍
史沧红
吴锡
吕建成
周激流
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date
Filing date
Publication date
Application filed by Chengdu University of Information Technology
Priority to CN202011607204.3A
Publication of CN112686822A
Application granted
Publication of CN112686822B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to an image completion method based on a stacked generative adversarial network. First, the masked image is cut into several image blocks so that the network can extract the features of the different blocks; then, the completed multi-block result is fed into the generator of the next layer to complete the image further; finally, the completion results of the different blocks are applied to the whole masked image to obtain the final completion output. Completion proceeds from coarse to fine and makes full use of the high-level semantic information extracted by the convolutional neural network, and an image-block discriminator judges the authenticity of the generated image against the original image. Experimental results show that the method can generate high-quality completion results for images with irregular masks, and the completion results are closer to the original images.

Description

Image completion method based on a stacked generative adversarial network
Technical Field
The invention relates to the field of image processing, and in particular to an image completion method based on a stacked generative adversarial network.
Background
The image completion task based on deep learning has developed rapidly in recent years, and its range of application continues to expand. Image completion is a basic task in the field of image processing; its difficulty lies in filling the missing regions with content that is natural, realistic, and semantically correct. Early image completion algorithms used nearest-neighbor search to find the most similar image blocks in the background region and fill the missing region with them, but such methods cannot access the high-level semantic information of the image and therefore cannot generate meaningful content. Other image completion algorithms aim to learn the distribution of the whole dataset and construct the missing content by training on large amounts of data, but the results particularly lack high-frequency image information, so the completed images are blurred and distorted and of poor quality. Subsequent deep-learning-based methods propose learning the underlying distribution of the image data, i.e., learning a function that maps the missing image to the real image. For example, a typical image completion method is based on the Generative Adversarial Network (GAN), which can map normally distributed noise to an image and thereby learn the distribution of real images. As a generative model, a GAN can learn complex data distributions in an unsupervised manner: its generator and discriminator networks are trained jointly with opposing objectives, the generator minimizing the objective function and the discriminator maximizing it. This adversarial training allows them to fit any data distribution, and once training succeeds, the generator has captured the true data distribution.
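For reference, the adversarial objective described above is the standard GAN minimax game from the GAN literature (the patent itself does not reproduce this equation):

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{data}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$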
The prior art has the following defects:
1. The repaired region of the output image appears blurred and unnatural
Some image completion methods use fully connected layers, but because channel-wise full connection limits the network's access to image semantic information, the final completion result is often blurred and unnatural. The fully connected architecture needs to be replaced to remove this bottleneck.
2. The training time and computational resource consumption of the network are large
Some existing image completion methods adopt dilated (atrous) convolution to enlarge the receptive field of the network so that it can extract image features better and ultimately improve the quality of the completed image. However, a large dilation rate increases the sparsity of the convolution kernel, which increases training time and the cost of computing resources and finally hurts the training efficiency of the network. Networks with low training time and low computational cost are needed to improve training efficiency.
3. The loss of local detail information during feature extraction is not considered
Existing image completion methods neglect, or only partially account for, the large amount of local detail information lost when the image is downsampled, so the network outputs blurred completion results. Some networks adopt residual blocks or similar structures to counter this loss, but the detail information is still underused and the output remains blurred. A network that fully exploits the local detail information of the image is needed to produce high-quality results.
Therefore, the improvement of the existing image completion algorithm is urgently needed, so that the image completion algorithm can generate a high-quality completion result.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image completion method based on a stacked generative adversarial network, comprising the following steps:
Step 1: collect and download the completion image datasets Places2 and Paris StreetView, and preprocess them;
Step 2: divide the completion image dataset into a training set, a validation set, and a test set in a specified proportion;
Step 3: train the constructed image completion network on the training set. The image completion network comprises a generator and a discriminator; the generator comprises three completion-network layers, each consisting of an encoder and a decoder, and the third layer is connected to the discriminator, so that the whole completion network forms a stacked hierarchy. The preprocessed training set is fed to the corresponding network layers for training, specifically:
Step 31: the quartered image to be completed I_m4 is fed into the first-layer completion network; after encoding by the first encoder, 4 feature-map blocks are output, spliced along the width dimension into 2 first feature maps, and sent to the first decoder, which outputs 2 first completion image blocks;
Step 32: the 2 first completion images are added element-wise to the corresponding 2 blocks of the halved image to be completed I_m2 and fed into the second encoder, which outputs 2 second feature maps; these are added to the 2 first feature maps, spliced along the height dimension into 1 second feature map, and input to the second decoder, which outputs a whole second completion image;
Step 33: the second completion image is added to the image to be completed I_m and fed into the third encoder; the resulting third feature map is added to the second feature map of the second-layer network and sent to the third decoder, which outputs the final completion image;
Step 34: the final completion image and the original image are input to the discriminator, which judges whether they are real or fake. When the discriminator can no longer distinguish the final completion image from the original image, the generator and discriminator networks have reached equilibrium and the generator has captured the true distribution of the image data;
Step 35: the image completion network is trained iteratively with the set batch size. In each batch the discriminator is trained first and its parameters are updated according to the adversarial loss function; the discriminator parameters are then frozen and the generator parameters are updated according to the reconstruction, content, and style loss functions, so that the whole network is trained alternately (a sketch of this alternating update follows step 37);
Step 36: judge whether the set validation iteration count has been reached; if so, validate the current model and save it, otherwise execute step 37;
Step 37: judge whether the set total number of iterations has been reached; if so, end training, otherwise repeat steps 31 to 36.
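A minimal PyTorch sketch of the alternating update in steps 34 and 35 follows. All names (gen, disc, losses) are illustrative, and the non-saturating softplus adversarial loss is an assumption; the patent does not specify the exact adversarial loss form or the optimizers.

```python
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, batch, losses):
    # batch: masked image I_m, its halves I_m2, quarters I_m4,
    # the ground-truth image I_gt, and the binary mask.
    i_m, i_m2, i_m4, i_gt, mask = batch

    # Step 35, part 1: update the discriminator on real vs. generated images.
    i_gen = gen(i_m, i_m2, i_m4).detach()
    d_loss = F.softplus(-disc(i_gt)).mean() + F.softplus(disc(i_gen)).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Step 35, part 2: freeze the discriminator, then update the generator
    # with the reconstruction/content/style terms plus the adversarial term.
    for p in disc.parameters():
        p.requires_grad_(False)
    i_gen = gen(i_m, i_m2, i_m4)
    g_loss = losses(i_gen, i_gt, mask) + F.softplus(-disc(i_gen)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    for p in disc.parameters():
        p.requires_grad_(True)
    return d_loss.item(), g_loss.item()
```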
According to a preferred embodiment, the loss functions of the image completion method comprise reconstruction loss functions for the missing region and the known region respectively, an adversarial loss function, a content loss function, and a style loss function, wherein
the reconstruction loss functions constrain the global structure of the generated image;
the adversarial loss function constrains the discriminator and improves its discrimination accuracy;
the content loss function reduces the distance between the features of the generated image and of the original image in a pre-trained VGG19 network, improving the quality of the generated image;
the style loss function computes the Gram matrix of the image features in the VGG network to capture the overall style of the image, and constraining the style difference between the generated image and the original image improves image quality.
According to a preferred embodiment, the preprocessing of the image completion method comprises:
step 11: first, uniformly resizing the images in the dataset to 256 × 256;
step 12: then normalizing all pixel values to [0, 1] and applying a mask to the normalized image to obtain the image to be completed I_m with missing central content;
step 13: cutting the image to be completed I_m into the quartered image I_m4 and the halved image I_m2, and using I_m, I_m4, and I_m2 as inputs to the completion network.
The invention has the following beneficial effects:
1. The generator and the discriminator are trained adversarially within a generative adversarial network that avoids fully connected layers, which solves the edge blurring caused by such layers, improves the restoration of edge details, and thereby improves the quality of the completed image.
2. The stacked hierarchical generator network completes the image according to multi-scale features. The stacked structure completes the image in stages, from coarse to fine, so the network can exploit the multi-scale features of the image as much as possible and achieve a better completion effect; on the other hand, extracting image features from small to large scales improves training time and efficiency.
3. The network is constrained by the adversarial, reconstruction, content, and style losses, which improves its performance; introducing the content and style losses brings the features of the generated image in the VGG network close to those of the corresponding original image, so that the final completion result is closer to the original image in overall content and style.
Drawings
FIG. 1 is a flow chart of a method of the image completion network of the present invention;
FIG. 2 is a diagram of an image completion network architecture according to the present invention;
FIG. 3 is a graph comparing the experimental results of the present invention; and
FIG. 4 is a graph comparing the effects of another example of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings in conjunction with the following detailed description. It should be understood that the description is intended to be exemplary only, and is not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
The invention mainly addresses blurred and unrealistic results in image completion. The image completion (image inpainting) task fills in images with missing content and is a common image editing task with a wide range of applications, such as removing unwanted foreground objects or repairing damaged photos. An image completion algorithm must acquire as much structural and semantic information of the image as possible so that the final completed content is natural, realistic, clear, and visually pleasing. The main problem of most existing image completion algorithms is that they acquire too little semantic and structural information, so the final result is blurred and distorted. Improvement of existing algorithms is therefore urgently needed so that they can generate high-quality completion results.
The following detailed description is made with reference to the accompanying drawings.
Fig. 1 is the flow chart of the image completion method of the present invention, which is described in detail below with reference to Fig. 1. The invention provides an image completion method based on a stacked generative adversarial network, comprising the following steps:
Step 1: collect and download the completion image datasets Places2 and Paris StreetView, and preprocess them. The preprocessing comprises:
Step 11: first, uniformly resize the images in the dataset to 256 × 256.
Step 12: then normalize all pixel values to [0, 1] and apply a mask to the normalized image to obtain the image to be completed I_m with missing central content.
Step 13: cut the image to be completed I_m into the quartered image I_m4 and the halved image I_m2, and use I_m, I_m4, and I_m2 as inputs to the completion network, as sketched below.
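A minimal NumPy sketch of steps 11 to 13, under stated assumptions: the function and variable names are illustrative, and although the patent does not state the split orientation explicitly, the width-then-height splicing in steps 31 and 32 implies top/bottom halves and four H/2 × W/2 quarters, which is what this sketch assumes.

```python
import numpy as np
from PIL import Image

def preprocess(image_path, mask):
    # Step 11: resize the image to 256 x 256.
    img = Image.open(image_path).convert("RGB").resize((256, 256))
    # Step 12: normalize pixel values to [0, 1] and apply the mask
    # (mask: float array of shape (256, 256, 1); 1 = known, 0 = missing)
    # to obtain the image to be completed I_m.
    img = np.asarray(img, dtype=np.float32) / 255.0
    i_m = img * mask

    h, w = i_m.shape[:2]
    # Step 13: halved blocks I_m2 (top/bottom) and quartered blocks I_m4.
    i_m2 = [i_m[:h // 2], i_m[h // 2:]]
    i_m4 = [i_m[:h // 2, :w // 2], i_m[:h // 2, w // 2:],
            i_m[h // 2:, :w // 2], i_m[h // 2:, w // 2:]]
    return i_m, i_m2, i_m4
```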
Step 2: divide the completion image dataset into a training set, a validation set, and a test set in a specified proportion.
Step 3: train the constructed image completion network on the training set. The generator comprises three completion-network layers, each consisting of an encoder and a decoder; the third layer is connected to the discriminator, and the whole completion network forms a stacked hierarchy. The preprocessed training set is fed to the corresponding network layers for training, specifically:
Step 31: the quartered image to be completed I_m4 is fed into the first-layer completion network; after encoding by the first encoder, 4 feature-map blocks are output, spliced along the width dimension into 2 first feature maps, and sent to the first decoder, which outputs 2 first completion image blocks.
Step 32: the 2 first completion images are added element-wise to the corresponding 2 blocks of the halved image to be completed I_m2 and fed into the second encoder, which outputs 2 second feature maps; these are added to the 2 first feature maps, spliced along the height dimension into 1 second feature map, and input to the second decoder, which outputs a whole second completion image.
Step 33: the second completion image is added to the image to be completed I_m and fed into the third encoder; the resulting third feature map is added to the second feature map of the second-layer network and sent to the third decoder, which outputs the final completion image (a sketch of this three-stage forward pass follows step 37).
Step 34: the final completion image and the original image are input to the discriminator, which judges whether they are real or fake. When the discriminator can no longer distinguish the final completion image from the original image, the generator and discriminator networks have reached equilibrium and the generator has captured the true distribution of the image data.
Step 35: the image completion network is trained iteratively with the set batch size. In each batch the discriminator is trained first and its parameters are updated according to the adversarial loss function; the discriminator parameters are then frozen and the generator parameters are updated according to the reconstruction, content, and style loss functions, so that the whole network is trained alternately.
Step 36: judge whether the set validation iteration count has been reached; if so, validate the current model and save it, otherwise execute step 37.
Step 37: judge whether the set total number of iterations has been reached; if so, end training, otherwise repeat steps 31 to 36.
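The three-stage forward pass of steps 31 to 33 can be sketched in PyTorch as follows. The encoder and decoder modules are passed in as black boxes because the patent does not specify their layer configurations; the tensor layout is (B, C, H, W), so splicing along the width is dim=3 and along the height is dim=2, and the quarter blocks are assumed ordered top-left, top-right, bottom-left, bottom-right.

```python
import torch
import torch.nn as nn

class StackedGenerator(nn.Module):
    def __init__(self, enc1, dec1, enc2, dec2, enc3, dec3):
        super().__init__()
        self.enc1, self.dec1 = enc1, dec1
        self.enc2, self.dec2 = enc2, dec2
        self.enc3, self.dec3 = enc3, dec3

    def forward(self, i_m, i_m2, i_m4):
        # Stage 1 (step 31): encode the 4 quarter blocks and splice
        # pairs along the width into 2 first feature maps.
        f4 = [self.enc1(q) for q in i_m4]
        f1 = [torch.cat(f4[0:2], dim=3), torch.cat(f4[2:4], dim=3)]
        out1 = [self.dec1(f) for f in f1]  # 2 first completion blocks

        # Stage 2 (step 32): add the first completions to the half blocks,
        # encode, add the first feature maps, and splice along the height.
        f2 = [self.enc2(o + h) for o, h in zip(out1, i_m2)]
        f2 = torch.cat([a + b for a, b in zip(f2, f1)], dim=2)
        out2 = self.dec2(f2)  # whole second completion image

        # Stage 3 (step 33): add the second completion to the masked image,
        # encode, add the second feature map, and decode the final image.
        f3 = self.enc3(out2 + i_m) + f2
        return self.dec3(f3)
```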
The loss function of the image completion method comprises reconstruction loss functions for the missing region and the known region respectively, an adversarial loss function, a content loss function, and a style loss function; the total loss is expressed as:
$L_{total} = \lambda_1 L_{hole} + \lambda_2 L_{valid} + \lambda_3 L_{adv} + \lambda_4 L_{perceptual} + \lambda_5 L_{style}$ (1)
In the experiments, the parameters are set to $\lambda_1 = 8.0$, $\lambda_2 = 1.0$, $\lambda_3 = 1.0$, $\lambda_4 = 0.1$, $\lambda_5 = 250.0$.
The reconstruction loss functions constrain the global structure of the generated image:
$L_{hole} = \|(1 - M) \odot (I_{gen} - I_{gt})\|_2$ (2)
$L_{valid} = \|M \odot (I_{gen} - I_{gt})\|_2$ (3)
where $L_{hole}$ denotes the reconstruction loss of the missing region, $L_{valid}$ the reconstruction loss of the known region, $M$ the mask, $I_{gen}$ the generated image, and $I_{gt}$ the original image.
The adversarial loss function constrains the discriminator and improves its discrimination accuracy.
the content loss function is used for reducing the distance between the generated image and the features of the original image in the pre-trained VGG19 network, so that the quality of the generated image is improved, and the mathematical expression is as follows:
$L_{perceptual} = \mathbb{E}\left[\|\Phi_i(I_{gen}) - \Phi_i(I_{gt})\|_1\right]$ (4)
where $\Phi_i(\cdot)$ denotes the activation of the $i$-th layer of the VGG network, $I_{gen}$ the generated image, and $I_{gt}$ the original image.
The style loss function computes the Gram matrix of the image features in the VGG network to capture the overall style of the image; constraining the style difference between the generated image and the original image improves image quality:
$L_{style} = \mathbb{E}\left[\|G(\Phi_i(I_{gen})) - G(\Phi_i(I_{gt}))\|_1\right]$ (5)
where $G(\cdot)$ denotes the Gram matrix of a feature map, $\Phi_i(\cdot)$ the activation of the $i$-th layer of the VGG network, $I_{gen}$ the generated image, and $I_{gt}$ the original image.
The image completion method further comprises testing the trained completion network: the network input image I_m and its block images I_m2 and I_m4 are processed according to the method of step 1, the first to third completion-network layers are run as in steps 31 to 33, and the decoder of the third-layer network outputs the test result.
Evaluations were performed on the natural image dataset Places2 and the Paris StreetView image dataset. Places2 was divided into 308,500 training images and 20,000 test images, and the original split of the Paris dataset was used, comprising 14,900 training images and 100 test images. In addition, all training and test images were resized to 256 × 256 and their pixel values normalized to [0, 1]. The inputs for training and testing are images with irregular missing regions. The method of the invention is also compared with two recent completion methods, partial convolution (PConv) and Coherent Semantic Attention (CSA), using the same irregular mask dataset.
First, the method of the invention was qualitatively compared with PConv and CSA. FIG. 3 shows test results of the different methods on the Paris StreetView dataset, where Ours denotes the experimental results without the content and style loss functions and Ours* denotes the results with the content and style loss functions added. In FIG. 3(b), black indicates the missing region of the input image. In FIG. 3(c), the results show that PConv does not repair the content and texture of the missing region well and generates blurred and distorted structures. In FIG. 3(d), CSA also performs poorly on the completed area, producing unexpected noise and blurred contours. In contrast, the model of the invention achieves a natural and realistic result in FIG. 3(e). To further improve the quality of the completed image, the content loss and style loss are added to the model; the visual effect is shown in FIG. 3(f). The experimental results show that the proposed model with content and style losses improves the naturalness and clarity of the generated image and finally produces high-quality completion results.
FIG. 4 compares the results of experiments on the natural image dataset Places2 with the prior-art methods and shows similar qualitative outcomes. Neither PConv nor CSA reconstructs natural, reasonable texture; for example, their completed buildings and backgrounds contain blurred content and unnatural structures. The completion results generated by the model of the invention (see FIG. 4(e, f)) are clearer and more natural than those in FIG. 4(c, d). In addition, the method resolves the blurring and distortion of the completed image and preserves the consistency of local details.
TABLE 1. Objective evaluation of experimental results (↑ higher is better, ↓ lower is better)

Method  IS↑     FID↓     PSNR↑    SSIM↑   L1 loss↓
PConv   2.6873  48.5692  26.9501  0.8116  4.7234
CSA     2.7548  43.3290  29.0037  0.8031  4.3020
Ours    2.8220  35.9758  31.9790  0.8456  2.9825
Ours*   2.8562  15.4625  33.6995  0.8919  2.6240
To further evaluate the effectiveness of the method, quantitative comparison experiments were also performed. Specifically, Table 1 shows the quantitative results of the different methods on the Paris StreetView dataset over 100 test images. The model of the invention achieves an IS of 2.8220 and an FID of 35.9758, demonstrating that its completion results are more diverse and clearer than those of the compared methods. In addition, the method achieves a good PSNR of 31.9790 and SSIM of 0.8456, indicating that the images generated by the completion method are of higher quality. A smaller L1 loss means the result is closer to the original image. After adding the content and style losses to the model, all indices improve markedly.
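For reference, the PSNR, SSIM, and L1 metrics reported in Table 1 can be computed as sketched below with scikit-image (an assumption; the patent does not name the tooling). IS and FID require a pre-trained Inception network and are omitted, and reporting L1 in percent is inferred from the magnitude of the table values.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(i_gen, i_gt):
    # i_gen, i_gt: float arrays in [0, 1], shape (H, W, 3).
    psnr = peak_signal_noise_ratio(i_gt, i_gen, data_range=1.0)
    ssim = structural_similarity(i_gt, i_gen, channel_axis=-1, data_range=1.0)
    l1 = np.abs(i_gen - i_gt).mean() * 100  # mean absolute error, in percent
    return psnr, ssim, l1
```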
It should be noted that the above-described embodiments are exemplary, and those skilled in the art, having the benefit of this disclosure, may devise various arrangements that fall within the scope of the invention. The specification and figures are illustrative only and do not limit the claims; the scope of the invention is defined by the claims and their equivalents.

Claims (3)

1. An image completion method based on a stacked generative adversarial network, the method comprising the steps of:
step 1: collecting and downloading the completion image datasets Places2 and Paris StreetView, and preprocessing them;
step 2: dividing the completion image dataset into a training set, a validation set, and a test set in a specified proportion;
step 3: training the constructed image completion network on the training set, wherein the image completion network comprises a generator and a discriminator, the generator comprises three completion-network layers, each completion-network layer comprises an encoder and a decoder, the third completion-network layer is connected to the discriminator, and the whole completion network forms a stacked hierarchy; the preprocessed training set is fed to the corresponding network layers for training, specifically:
step 31: feeding the quartered image to be completed I_m4 into the first-layer completion network; after encoding by the first encoder, outputting 4 feature-map blocks, splicing them along the width dimension into 2 first feature maps, and sending them to the first decoder, which outputs 2 first completion image blocks;
step 32: adding the 2 first completion images element-wise to the corresponding 2 blocks of the halved image to be completed I_m2 and feeding them into the second encoder, which outputs 2 second feature maps; adding these to the 2 first feature maps, splicing them along the height dimension into 1 second feature map, and inputting it to the second decoder, which outputs a whole second completion image;
step 33: adding the second completion image to the image to be completed I_m and feeding it into the third encoder; adding the resulting third feature map to the 1 second feature map of the second-layer network and sending it to the third decoder, which outputs the final completion image;
step 34: inputting the final completion image and the original image into the discriminator, which judges whether they are real or fake; when the discriminator cannot distinguish the final completion image from the original image, the generator and discriminator networks have reached equilibrium and the generator has captured the true distribution of the image data;
step 35: training the image completion network iteratively with the set batch size; in each batch, training the discriminator first and updating its parameters according to the adversarial loss function, then freezing the discriminator parameters and updating the generator parameters according to the reconstruction, content, and style loss functions, so that the whole network is trained alternately;
step 36: judging whether the set validation iteration count has been reached; if so, validating the current model and saving it, otherwise executing step 37;
step 37: judging whether the set total number of iterations has been reached; if so, ending training, otherwise repeating steps 31 to 36.
2. The image completion method according to claim 1, wherein the loss functions of the image completion method comprise reconstruction loss functions for the missing region and the known region respectively, an adversarial loss function, a content loss function, and a style loss function, wherein
the reconstruction loss functions constrain the global structure of the generated image;
the adversarial loss function constrains the discriminator and improves its discrimination accuracy;
the content loss function reduces the distance between the features of the generated image and of the original image in a pre-trained VGG19 network, improving the quality of the generated image;
the style loss function computes the Gram matrix of the image features in the VGG network to capture the overall style of the image, and constraining the style difference between the generated image and the original image improves image quality.
3. The image completion method according to claim 2, wherein the preprocessing comprises:
step 11: first, uniformly resizing the images in the dataset to 256 × 256;
step 12: then normalizing all pixel values to [0, 1] and applying a mask to the normalized image to obtain the image to be completed I_m with missing central content;
step 13: cutting the image to be completed I_m into the quartered image I_m4 and the halved image I_m2, and using I_m, I_m4, and I_m2 as inputs to the completion network.
CN202011607204.3A 2020-12-30 2020-12-30 Image completion method based on a stacked generative adversarial network Active CN112686822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011607204.3A CN112686822B (en) 2020-12-30 2020-12-30 Image completion method based on a stacked generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011607204.3A CN112686822B (en) 2020-12-30 2020-12-30 Image completion method based on a stacked generative adversarial network

Publications (2)

Publication Number Publication Date
CN112686822A CN112686822A (en) 2021-04-20
CN112686822B true CN112686822B (en) 2021-09-07

Family

ID=75454708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011607204.3A Active CN112686822B (en) 2020-12-30 2020-12-30 Image completion method based on a stacked generative adversarial network

Country Status (1)

Country Link
CN (1) CN112686822B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115639605B (en) * 2022-10-28 2024-05-28 中国地质大学(武汉) Automatic identification method and device for high-resolution fault based on deep learning
CN115984281B (en) * 2023-03-21 2023-06-20 中国海洋大学 Multi-task complement method of time sequence sea temperature image based on local specificity deepening

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110189278A * 2019-06-06 2019-08-30 上海大学 Binocular scene image inpainting method based on a generative adversarial network
CN110288537A * 2019-05-20 2019-09-27 湖南大学 Face image completion method based on a self-attention deep generative adversarial network
CN111915693A (en) * 2020-05-22 2020-11-10 中国科学院计算技术研究所 Sketch-based face image generation method and system

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10552714B2 (en) * 2018-03-16 2020-02-04 Ebay Inc. Generating a digital image using a generative adversarial network
CN108460830A (en) * 2018-05-09 2018-08-28 厦门美图之家科技有限公司 Image repair method, device and image processing equipment
KR102184755B1 (en) * 2018-05-31 2020-11-30 서울대학교 산학협력단 Apparatus and Method for Training Super Resolution Deep Neural Network
US10878575B2 (en) * 2019-04-15 2020-12-29 Adobe Inc. Foreground-aware image inpainting
CN110148085B (en) * 2019-04-22 2023-06-23 智慧眼科技股份有限公司 Face image super-resolution reconstruction method and computer readable storage medium
CN111275637B (en) * 2020-01-15 2024-01-30 北京工业大学 Attention model-based non-uniform motion blurred image self-adaptive restoration method
CN111626926A (en) * 2020-04-06 2020-09-04 温州大学 Intelligent texture image synthesis method based on GAN
CN111860570B (en) * 2020-06-03 2021-06-15 成都信息工程大学 Cloud particle image extraction and classification method
CN111681182A (en) * 2020-06-04 2020-09-18 Oppo广东移动通信有限公司 Picture restoration method and device, terminal equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288537A * 2019-05-20 2019-09-27 湖南大学 Face image completion method based on a self-attention deep generative adversarial network
CN110189278A * 2019-06-06 2019-08-30 上海大学 Binocular scene image inpainting method based on a generative adversarial network
CN111915693A (en) * 2020-05-22 2020-11-10 中国科学院计算技术研究所 Sketch-based face image generation method and system

Also Published As

Publication number Publication date
CN112686822A (en) 2021-04-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant