CN112001847A - Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model - Google Patents

Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model

Info

Publication number
CN112001847A
CN112001847A (application CN202010884014.XA)
Authority
CN
China
Prior art keywords
super-resolution
resolution image
image
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010884014.XA
Other languages
Chinese (zh)
Inventor
姜代红
张三友
孙天凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou University of Technology
Original Assignee
Xuzhou University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou University of Technology
Priority to CN202010884014.XA
Publication of CN112001847A
Withdrawn legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Abstract

A method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model, suitable for video processing. A relativistic generative adversarial super-resolution reconstruction model comprising a generator network and a discriminator network is established, and a total loss function is constructed. The generator network is trained with a back-propagation algorithm on a low-resolution image sample set and a high-resolution image sample set, and the discriminator network is then trained with the Adam algorithm. A low-resolution image is input into the trained generator network to generate a super-resolution image, which is then input into the trained discriminator network for judgment: if the discriminator network judges it to be true, the generated super-resolution picture is output; if it judges it to be false, the result is fed back to the generator network to regenerate the super-resolution image. The method has simple steps, high restoration quality and broad practical significance.

Description

Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model
Technical Field
The invention relates to a method for generating high-quality images, in particular to a method for generating high-quality images in video processing with a relativistic generative adversarial super-resolution reconstruction model.
Background
Super-resolution reconstruction of images improves spatial resolution through digital signal processing without changing the existing hardware; it is an ill-posed restoration problem. Thanks to the strong fitting capability of deep learning, super-resolution imaging methods have taken a leap forward, with applications ranging from surveillance imaging enhancement and remote sensing systems to target recognition and other computer vision scenarios.
Recently, super-resolution imaging based on convolutional neural networks (CNN) has significantly outperformed conventional methods. Most CNN-based super-resolution methods are trained with a pixel loss to improve the typical quantitative indices, peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Although the pixel loss is easy to optimize, it generally does not produce pleasing, realistic details consistent with human perception, and at high magnification it tends to cause distortion. The rapid development of generative adversarial networks (GAN) for producing realistic images offers a new approach to perceptual super-resolution imaging. By adopting perceptual and adversarial terms in the loss, GAN-based super-resolution greatly improves visual richness compared with CNN-based methods. However, GAN training suffers from vanishing gradients, difficult optimization, mode collapse and related issues, which limit GAN-based methods in the following ways. First, the original GAN for super-resolution imaging is extremely difficult to train because of its inherently unstable properties. Second, real high-resolution samples are not involved when training the generator, so the discriminator must remember all the attributes of real samples, creating a performance bottleneck in guiding the generator toward more realistic images. Third, the structural information of geometric textures cannot be fully preserved, because the loss function lacks texture-guided optimization. In addition, conventional measures including PSNR and SSIM have shortcomings: they are not suited to measuring the perceptual similarity accepted by human vision.
Disclosure of Invention
Purpose of the invention: in view of the above shortcomings of the prior art, a method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model is provided, so that the overall quality of the generated super-resolution image approaches that of a real high-resolution image sample.
To achieve the above technical object, the method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model comprises the following steps:
S1, collecting an image sample set for training, comprising a low-resolution image sample set and a high-resolution image sample set, where the image samples in the two sets correspond one-to-one, have identical content and differ only in resolution; the diversity of the training image samples is enhanced through random rotations of 90, 180 and 270 degrees and horizontal flipping;
S2, establishing a relativistic generative adversarial super-resolution reconstruction model as an extension of SRGAN, the model comprising two adversarial networks, a generator network and a discriminator network, and then constructing a total loss function comprising a loss function of the generator network and a loss function of the discriminator network;
S3, using the low-resolution image sample set as the input of the generator network and the high-resolution image sample set as its expected output, and training the generator network with a back-propagation algorithm, ensuring during training that the content of each low-resolution image sample matches the content of the high-resolution image sample used with it; when the number of generator training iterations reaches the upper limit, the trained generator network is obtained, and a low-resolution image fed into it is immediately mapped to a super-resolution image;
S4, taking a low-resolution image and a high-resolution image with the same picture content, drawn from the low-resolution and high-resolution image sample sets, as a positive example image pair, and taking a super-resolution image generated by the generator network together with the corresponding high-resolution image as a negative example image pair; using the positive and negative example image pairs as the training database, inputting them into the discriminator's adversarial network, and performing adversarial training of the neural network with the Adam algorithm to finally obtain the trained discriminator network;
S5, inputting a low-resolution image into the trained generator network to generate a super-resolution image, then inputting the super-resolution image into the trained discriminator network for judgment; if the discriminator network judges it to be true, the generated super-resolution picture is output, and if it judges it to be false, the result is fed back to the generator network to regenerate the super-resolution image.
The generator network comprises 5 sequentially connected residual blocks, each consisting of two convolution layers, two spectral normalization layers and an activation layer, arranged in the order convolution layer I, spectral normalization layer I, activation layer, convolution layer II, spectral normalization layer II; the activation layer uses the PReLU function, and each residual block carries a skip connection as in ResNet. Two convolution layers and an activation layer are arranged before and after the 5 residual blocks, respectively, to extract shallow image features; at the end of the network, two consecutive sub-pixel convolution layers perform super-resolution up-sampling of the image.
The generator network takes a low-resolution image from the image sample dataset as input; it first passes through a convolution layer with a 3x3 kernel and a LeakyReLU activation layer, then through the 5 residual blocks in sequence for deep feature extraction, generating a high-quality representation as close as possible to a real high-resolution sample, which is input into an Inception module; after multi-scale feature extraction, the output feature map of the Inception module yields the final output super-resolution image.
The discriminator network is used to find the relative difference between the super-resolution image and the original high-resolution image. The discriminator consists of 8 convolution layers whose extracted feature maps grow from 64 to 512, each followed by a LeakyReLU activation layer; finally, two dense layers are connected to return the probability that the original high-resolution image is more realistic than the generated super-resolution image.
The total loss function comprises a generator network loss function L_G and a discriminator network loss function L_D:

    L_G = α·L_con + β·L_fea + γ·L_tex + λ·L_adv^G
    L_D = L_adv^D

where L_con is the content loss function, L_fea the feature loss function, L_tex the texture loss function, L_adv^G the adversarial loss function of the generator model of the relativistic generative adversarial network, and L_adv^D the adversarial loss function of the discriminator model; α, β, γ and λ are the weights given to the content loss, feature loss, texture loss and adversarial loss in the total loss, the contribution factors that allow the proposed method to combine multiple loss functions.
The loss functions include a content loss function, a feature loss function, a texture loss function and an adversarial loss function, wherein:
The content loss measures the pixel-level content similarity between the generated image and the real sample. The Charbonnier loss is introduced as the content loss to preserve edge details; it provides pixel-space regularization for the loss optimization and helps improve quality:

    L_con = sqrt( ||I^SR - I^HR||^2 + ε^2 )

where I^SR is the super-resolution image, I^HR is the original high-resolution image, and ε is a constant close to 0 that controls the Charbonnier penalty term;
The feature loss measures the semantic, perceptual similarity between the generated image and the real sample. Super-resolution methods based on generative adversarial networks usually use feature maps extracted after activation; using the feature map before activation instead produces more accurate texture details. The feature loss is defined as:

    L_fea = (1 / (W_{i,j}·H_{i,j})) Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} - φ_{i,j}(I^SR)_{x,y} )^2

where φ_{i,j} denotes the feature map of the j-th convolution layer before the i-th pooling layer of a conventional VGG network, and W_{i,j} and H_{i,j} are the width and height of the feature map;
The texture loss measures the structural style similarity between the generated super-resolution image and the original high-resolution image sample, and drives the low-quality image visually as close as possible to the true texture style of the original high-resolution image. The texture loss is defined as:

    L_tex = || Gram(φ(I^SR)) - Gram(φ(I^HR)) ||^2,  with Gram(F) = F·F^T

where I^SR and I^HR are the generated super-resolution image and the true high-resolution sample respectively, φ is a feature layer of n feature maps of length m extracted from a pre-trained VGG network, and Gram(F) = F·F^T denotes the Gram matrix of a feature map F;
The discriminator network involves not only the true high-resolution image samples in the adversarial training but also the generated super-resolution images. The relativistic adversarial losses are expressed as follows:

    D_Ra(x_r, x_f) = σ( C(x_r) - E_{x_f}[ C(x_f) ] )
    D_Ra(x_f, x_r) = σ( C(x_f) - E_{x_r}[ C(x_r) ] )
    L_adv^D = -E_{x_r}[ log D_Ra(x_r, x_f) ] - E_{x_f}[ log( 1 - D_Ra(x_f, x_r) ) ]
    L_adv^G = -E_{x_r}[ log( 1 - D_Ra(x_r, x_f) ) ] - E_{x_f}[ log D_Ra(x_f, x_r) ]

where x_r ~ P and x_f ~ Q denote the data distributions of the real high-resolution samples and the generated super-resolution images respectively, C(·) is the raw (untransformed) output of the discriminator, σ is the sigmoid function, E denotes the mean, and D_Ra is the relativistic discriminator network.
Advantageous effects:
The invention adopts a relativistic discriminator network to improve the overall quality of the generated image, so that the quality of the generated super-resolution image approaches that of a real high-resolution sample. It also proposes a new combination of multiple loss functions, in which realistic texture details are enhanced to the greatest extent through a weighted sum of the content loss, feature loss, texture loss and adversarial loss. The steps are simple, the processing efficiency is high, and super-resolution images with realistic details can be restored from low-resolution images.
Description of the drawings:
FIG. 1 is a schematic diagram of a generator network of the present invention;
FIG. 2 is a schematic diagram of the discriminator network of the present invention;
FIG. 3 is a schematic diagram of the VGG network structure.
Detailed description of embodiments:
the embodiments of the present invention will be further explained with reference to the accompanying drawings:
A method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model, characterized by the following steps:
S1, collecting an image sample set for training, comprising a low-resolution image sample set and a high-resolution image sample set, where the image samples in the two sets correspond one-to-one, have identical content and differ only in resolution; the diversity of the training image samples is enhanced through random rotations of 90, 180 and 270 degrees and horizontal flipping;
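As an illustration of the S1 augmentation, the sketch below applies the same random rotation (90, 180 or 270 degrees) and horizontal flip to a matched low-/high-resolution pair. It is a minimal PyTorch example; the patent names no implementation, so the use of torch.rot90/torch.flip and the (C, H, W) tensor layout are assumptions.

```python
import random
import torch

def augment_pair(lr: torch.Tensor, hr: torch.Tensor):
    """Apply the same random 90/180/270-degree rotation and horizontal flip
    to a matched low-/high-resolution pair of (C, H, W) image tensors."""
    k = random.choice([0, 1, 2, 3])  # number of 90-degree rotations
    lr, hr = torch.rot90(lr, k, dims=(1, 2)), torch.rot90(hr, k, dims=(1, 2))
    if random.random() < 0.5:        # horizontal flip with probability 0.5
        lr, hr = torch.flip(lr, dims=(2,)), torch.flip(hr, dims=(2,))
    return lr, hr
```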
S2, establishing a relativistic generative adversarial super-resolution reconstruction model as an extension of SRGAN, the model comprising two adversarial networks, a generator network and a discriminator network.
As shown in FIG. 1, the generator network comprises 5 sequentially connected residual blocks, each consisting of two convolution layers, two spectral normalization layers and an activation layer, arranged in the order convolution layer I, spectral normalization layer I, activation layer, convolution layer II, spectral normalization layer II; the activation layer uses the PReLU function, and each residual block carries a skip connection as in ResNet. Two convolution layers and an activation layer are arranged before and after the 5 residual blocks, respectively, to extract shallow image features; at the end of the network, two consecutive sub-pixel convolution layers perform super-resolution up-sampling of the image.
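A minimal PyTorch sketch of one such residual block follows (convolution I, spectral normalization I, PReLU, convolution II, spectral normalization II, plus the ResNet-style skip connection). The channel width of 64 and the 3x3 kernels are assumptions not stated in the patent; in code, the spectral normalization "layers" become weight hooks on the convolutions.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class ResidualBlock(nn.Module):
    """Conv I -> SN I -> PReLU -> Conv II -> SN II, with a skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.act = nn.PReLU()
        self.conv2 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        out = self.act(self.conv1(x))
        out = self.conv2(out)
        return x + out  # ResNet-style skip connection
```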
The generator network takes a low-resolution image from the image sample dataset as input; it first passes through a convolution layer with a 3x3 kernel and a LeakyReLU activation layer, then through the 5 residual blocks in sequence for deep feature extraction, generating a high-quality representation as close as possible to a real high-resolution sample, which is input into an Inception module; after multi-scale feature extraction, the output feature map of the Inception module yields the final output super-resolution image.
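The overall generator might be sketched as below, reusing ResidualBlock from the previous sketch. The Inception-style module here is a simple multi-branch stand-in, since the patent does not give its exact layout; the branch widths and the overall x4 up-sampling (two x2 sub-pixel stages) are assumptions.

```python
class InceptionBlock(nn.Module):
    """Illustrative multi-scale module: parallel 1x1/3x3/5x5 branches (assumed layout)."""
    def __init__(self, channels: int = 64):  # channels must be divisible by 4
        super().__init__()
        self.b1 = nn.Conv2d(channels, channels // 2, 1)
        self.b3 = nn.Conv2d(channels, channels // 4, 3, padding=1)
        self.b5 = nn.Conv2d(channels, channels // 4, 5, padding=2)

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)

class Generator(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1),
                                  nn.LeakyReLU(0.2, inplace=True))
        self.body = nn.Sequential(*[ResidualBlock(channels) for _ in range(5)])
        self.inception = InceptionBlock(channels)
        # Two sub-pixel convolution layers: x4 up-sampling overall (two x2 stages).
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2),
            nn.Conv2d(channels, channels * 4, 3, padding=1), nn.PixelShuffle(2))
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, lr):
        feat = self.head(lr)                  # 3x3 conv + LeakyReLU
        feat = self.inception(self.body(feat))  # 5 residual blocks, then Inception
        return self.tail(self.upsample(feat))   # sub-pixel up-sampling to SR image
```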
The discriminator network is used to find the relative difference between the super-resolution image and the original high-resolution image. The discriminator comprises 8 convolution layers whose extracted feature maps grow from 64 to 512, each followed by a LeakyReLU activation layer; finally, two dense layers are connected to return the probability that the original high-resolution image is more realistic than the generated super-resolution image. Here, element-wise sum denotes element-wise addition; the relationship between the layers is shown in FIG. 2, where k denotes the convolution kernel size, n the number of feature maps, and s the convolution stride.
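A sketch of this discriminator: 8 convolution layers growing from 64 to 512 feature maps with LeakyReLU activations, then two dense layers returning the raw score C(x) used by the relativistic loss. The alternating strides, the adaptive pooling before the dense layers, and the hidden width of 1024 are assumptions.

```python
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        # 8 conv layers; feature maps grow 64 -> 512 (3x3 kernels, stride 1/2 alternating).
        for i, out_ch in enumerate([64, 64, 128, 128, 256, 256, 512, 512]):
            layers += [nn.Conv2d(in_ch, out_ch, 3, stride=1 + i % 2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(512, 1024), nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(1024, 1))  # raw score C(x); the sigmoid lives in the loss

    def forward(self, x):
        return self.classifier(self.features(x))
```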
A total loss function is then constructed, comprising a loss function of the generator network and a loss function of the discriminator network.
the total loss function includes a generator network loss function LGAnd discriminates the network loss function LD
Figure BDA0002655010680000051
Figure BDA0002655010680000052
In the formula: l isfeaAs a characteristic loss function, LconAs a function of content loss, LtexIn order to be a function of the texture loss,
Figure BDA0002655010680000053
a challenge loss function for generating a model for the relative generated challenge network,
Figure BDA0002655010680000054
Is the countermeasure loss function of the discriminant model; α, β, γ and are the weights given to the content loss, feature loss, texture loss and adversity loss, respectively, in the total loss, so that the proposed method can satisfy the contribution factors of a plurality of combinations of loss functions;
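Combining the terms, the generator objective might look like the sketch below. The helper losses (content_loss, feature_loss, texture_loss, adversarial_g_loss) are the ones sketched under the loss-function definitions later in this description, and the numeric weights are placeholders, not values from the patent.

```python
# Placeholder weights for α, β, γ, λ (the patent does not disclose values).
ALPHA, BETA, GAMMA, LAMBDA = 1.0, 1.0, 1.0, 5e-3

def generator_loss(sr, hr, disc):
    """L_G = α·L_con + β·L_fea + γ·L_tex + λ·L_adv^G."""
    return (ALPHA * content_loss(sr, hr)
            + BETA * feature_loss(sr, hr)
            + GAMMA * texture_loss(sr, hr)
            + LAMBDA * adversarial_g_loss(disc, hr, sr))
```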
S3, using the low-resolution image sample set as the input of the generator network and the high-resolution image sample set as its expected output, and training the generator network with a back-propagation algorithm, ensuring during training that the content of each low-resolution image sample matches the content of the high-resolution image sample used with it; when the number of generator training iterations reaches the upper limit, the trained generator network is obtained, and a low-resolution image fed into it is immediately mapped to a super-resolution image.
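The S3 stage can be sketched as a plain supervised loop in which only the generator is updated by back-propagation on matched pairs, using the content_loss helper sketched below. The Adam optimizer, iteration budget, and the hypothetical `loader` of (LR, HR) pairs are assumptions; the patent only specifies back-propagation and an upper limit on training iterations.

```python
def pretrain_generator(gen, loader, epochs: int = 10, lr: float = 1e-4):
    """S3: train the generator alone on matched LR/HR pairs via back-propagation."""
    opt = torch.optim.Adam(gen.parameters(), lr=lr)  # optimizer choice is an assumption
    for _ in range(epochs):                          # stop at the training upper limit
        for lr_img, hr_img in loader:                # pair contents always match
            loss = content_loss(gen(lr_img), hr_img)
            opt.zero_grad()
            loss.backward()
            opt.step()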
S4, taking a low-resolution image and a high-resolution image with the same picture content, drawn from the low-resolution and high-resolution image sample sets, as a positive example image pair, and taking a super-resolution image generated by the generator network together with the corresponding high-resolution image as a negative example image pair; using the positive and negative example image pairs as the training database, inputting them into the discriminator's adversarial network, and performing adversarial training of the neural network with the Adam algorithm to finally obtain the trained discriminator network.
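One alternating S4 update with Adam might look like this, continuing the earlier sketches; the update order (discriminator first, then generator) and the learning rates are assumptions.

```python
gen, disc = Generator(), Discriminator()             # from the sketches above
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)  # Adam, as the patent specifies
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def adversarial_step(lr_img, hr_img):
    """S4: one alternating discriminator/generator update."""
    sr = gen(lr_img)
    # Discriminator: positive pairs use the real HR image, negative pairs the
    # generated SR image (detached so that only D is updated here).
    d_loss = adversarial_d_loss(disc, hr_img, sr.detach())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: the weighted total loss L_G sketched above.
    g_loss = generator_loss(sr, hr_img, disc)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```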
S5, inputting a low-resolution image into the trained generator network to generate a super-resolution image, then inputting the super-resolution image into the trained discriminator network for judgment; if the discriminator network judges it to be true, the generated super-resolution picture is output, and if it judges it to be false, the result is fed back to the generator network to regenerate the super-resolution image.
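At inference time (S5) the generator produces the super-resolution image and the discriminator acts as an acceptance check. The sketch below mirrors the patent's feedback loop; the 0.5 threshold on the sigmoid score and the retry limit are assumptions, and with a deterministic generator the retry is nominal.

```python
@torch.no_grad()
def super_resolve(gen, disc, lr_img, max_tries: int = 5):
    """S5: generate an SR image; output it once D judges it to be true."""
    for _ in range(max_tries):
        sr = gen(lr_img)
        if torch.sigmoid(disc(sr)).mean() > 0.5:  # discriminator judges "true"
            return sr                             # output the generated SR picture
    return sr                                     # otherwise return the last attempt
```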
The loss functions further include a content loss function, a feature loss function, a texture loss function and an adversarial loss function, wherein:
The content loss measures the pixel-level content similarity between the generated image and the real sample. The Charbonnier loss is introduced as the content loss to preserve edge details; it provides pixel-space regularization for the loss optimization and helps improve quality:

    L_con = sqrt( ||I^SR - I^HR||^2 + ε^2 )

where I^SR is the super-resolution image, I^HR is the original high-resolution image, and ε is a constant close to 0 that controls the Charbonnier penalty term;
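A one-function sketch of the Charbonnier content loss; eps = 1e-6 is a common choice, not a value from the patent.

```python
def content_loss(sr, hr, eps: float = 1e-6):
    """Charbonnier loss: sqrt(||I_SR - I_HR||^2 + eps^2), averaged over pixels."""
    return torch.sqrt((sr - hr) ** 2 + eps ** 2).mean()
```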
The feature loss measures the semantic, perceptual similarity between the generated image and the real sample. Super-resolution methods based on generative adversarial networks usually use feature maps extracted after activation; using the feature map before activation instead produces more accurate texture details. The feature loss is defined as:

    L_fea = (1 / (W_{i,j}·H_{i,j})) Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} - φ_{i,j}(I^SR)_{x,y} )^2

where φ_{i,j} denotes the feature map of the j-th convolution layer before the i-th pooling layer of a conventional VGG network, and W_{i,j} and H_{i,j} are the width and height of the feature map;
The texture loss measures the structural style similarity between the generated super-resolution image and the original high-resolution image sample, and drives the low-quality image visually as close as possible to the true texture style of the original high-resolution image. The texture loss is defined as:

    L_tex = || Gram(φ(I^SR)) - Gram(φ(I^HR)) ||^2,  with Gram(F) = F·F^T

where I^SR and I^HR are the generated super-resolution image and the true high-resolution sample respectively, φ is a feature layer of n feature maps of length m extracted from a pre-trained VGG network (whose structure is shown in FIG. 3), and Gram(F) = F·F^T denotes the Gram matrix of a feature map F;
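A sketch of the Gram-matrix texture loss over the same VGG features; the 1/(n·m) normalization of the Gram matrix is a common convention and an assumption here.

```python
def gram(feat):
    """Gram(F) = F·F^T over n feature maps flattened to length m = H*W."""
    b, n, h, w = feat.shape
    f = feat.reshape(b, n, h * w)
    return f @ f.transpose(1, 2) / (n * h * w)  # normalization is an assumption

def texture_loss(sr, hr):
    """Match the Gram matrices of VGG feature maps of the SR and HR images."""
    return F.mse_loss(gram(_vgg(sr)), gram(_vgg(hr)))
```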
The discriminator network involves not only the true high-resolution image samples in the adversarial training but also the generated super-resolution images. The relativistic adversarial losses are expressed as follows:

    D_Ra(x_r, x_f) = σ( C(x_r) - E_{x_f}[ C(x_f) ] )
    D_Ra(x_f, x_r) = σ( C(x_f) - E_{x_r}[ C(x_r) ] )
    L_adv^D = -E_{x_r}[ log D_Ra(x_r, x_f) ] - E_{x_f}[ log( 1 - D_Ra(x_f, x_r) ) ]
    L_adv^G = -E_{x_r}[ log( 1 - D_Ra(x_r, x_f) ) ] - E_{x_f}[ log D_Ra(x_f, x_r) ]

where x_r ~ P and x_f ~ Q denote the data distributions of the real high-resolution samples and the generated super-resolution images respectively, C(·) is the raw (untransformed) output of the discriminator, σ is the sigmoid function, E denotes the mean, and D_Ra is the relativistic discriminator network.
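These two losses translate directly into code: the relative scores are the raw discriminator outputs shifted by the mean score of the opposite class, and the -log σ(·) terms are implemented with binary cross-entropy on logits. A minimal sketch, continuing the earlier ones:

```python
def adversarial_d_loss(disc, hr, sr):
    """L_adv^D: the real HR sample should score as more realistic than the fake."""
    c_real, c_fake = disc(hr), disc(sr)
    real_rel = c_real - c_fake.mean()  # logit of D_Ra(x_r, x_f)
    fake_rel = c_fake - c_real.mean()  # logit of D_Ra(x_f, x_r)
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def adversarial_g_loss(disc, hr, sr):
    """L_adv^G: the symmetric relativistic loss with the labels swapped."""
    c_real, c_fake = disc(hr), disc(sr)
    real_rel = c_real - c_fake.mean()
    fake_rel = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel))
            + F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel)))
```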

Claims (5)

1. A method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model, characterized by comprising the following steps:
S1, collecting an image sample set for training, comprising a low-resolution image sample set and a high-resolution image sample set, where the image samples in the two sets correspond one-to-one, have identical content and differ only in resolution; the diversity of the training image samples is enhanced through random rotations of 90, 180 and 270 degrees and horizontal flipping;
S2, establishing a relativistic generative adversarial super-resolution reconstruction model as an extension of SRGAN, the model comprising two adversarial networks, a generator network and a discriminator network, and then constructing a total loss function comprising a loss function of the generator network and a loss function of the discriminator network;
S3, using the low-resolution image sample set as the input of the generator network and the high-resolution image sample set as its expected output, and training the generator network with a back-propagation algorithm, ensuring during training that the content of each low-resolution image sample matches the content of the high-resolution image sample used with it; when the number of generator training iterations reaches the upper limit, the trained generator network is obtained, and a low-resolution image fed into it is immediately mapped to a super-resolution image;
S4, taking a low-resolution image and a high-resolution image with the same picture content, drawn from the low-resolution and high-resolution image sample sets, as a positive example image pair, and taking a super-resolution image generated by the generator network together with the corresponding high-resolution image as a negative example image pair; using the positive and negative example image pairs as the training database, inputting them into the discriminator's adversarial network, and performing adversarial training of the neural network with the Adam algorithm to finally obtain the trained discriminator network;
S5, inputting a low-resolution image into the trained generator network to generate a super-resolution image, then inputting the super-resolution image into the trained discriminator network for judgment; if the discriminator network judges it to be true, the generated super-resolution picture is output, and if it judges it to be false, the result is fed back to the generator network to regenerate the super-resolution image.
2. The method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model according to claim 1, characterized in that: the generator network comprises 5 sequentially connected residual blocks, each consisting of two convolution layers, two spectral normalization layers and an activation layer, arranged in the order convolution layer I, spectral normalization layer I, activation layer, convolution layer II, spectral normalization layer II; the activation layer uses the PReLU function, and each residual block carries a skip connection as in ResNet; two convolution layers and an activation layer are arranged before and after the 5 residual blocks, respectively, to extract shallow image features; at the end of the network, two consecutive sub-pixel convolution layers perform super-resolution up-sampling of the image;
the generator network takes a low-resolution image from the image sample dataset as input; it first passes through a convolution layer with a 3x3 kernel and a LeakyReLU activation layer, then through the 5 residual blocks in sequence for deep feature extraction, generating a high-quality representation as close as possible to a real high-resolution sample, which is input into an Inception module; after multi-scale feature extraction, the output feature map of the Inception module yields the final output super-resolution image.
3. The method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model according to claim 1, characterized in that: the discriminator network is used to find the relative difference between the super-resolution image and the original high-resolution image; the discriminator consists of 8 convolution layers whose extracted feature maps grow from 64 to 512, each followed by a LeakyReLU activation layer; finally, two dense layers are connected to return the probability that the original high-resolution image is more realistic than the generated super-resolution image.
4. The method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model according to claim 1, characterized in that: the total loss function comprises a generator network loss function L_G and a discriminator network loss function L_D:

    L_G = α·L_con + β·L_fea + γ·L_tex + λ·L_adv^G
    L_D = L_adv^D

where L_con is the content loss function, L_fea the feature loss function, L_tex the texture loss function, L_adv^G the adversarial loss function of the generator model of the relativistic generative adversarial network, and L_adv^D the adversarial loss function of the discriminator model; α, β, γ and λ are the weights given to the content loss, feature loss, texture loss and adversarial loss in the total loss, the contribution factors that allow the proposed method to combine multiple loss functions.
5. The method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model according to claim 4, characterized in that: the loss functions include a content loss function, a feature loss function, a texture loss function and an adversarial loss function, wherein:
the content loss measures the pixel-level content similarity between the generated image and the real sample; the Charbonnier loss is introduced as the content loss to preserve edge details, providing pixel-space regularization for the loss optimization and helping improve quality:

    L_con = sqrt( ||I^SR - I^HR||^2 + ε^2 )

where I^SR is the super-resolution image, I^HR is the original high-resolution image, and ε is a constant close to 0 that controls the Charbonnier penalty term;
the feature loss measures the semantic, perceptual similarity between the generated image and the real sample; super-resolution methods based on generative adversarial networks usually use feature maps extracted after activation, while using the feature map before activation instead produces more accurate texture details; the feature loss is defined as:

    L_fea = (1 / (W_{i,j}·H_{i,j})) Σ_{x=1..W_{i,j}} Σ_{y=1..H_{i,j}} ( φ_{i,j}(I^HR)_{x,y} - φ_{i,j}(I^SR)_{x,y} )^2

where φ_{i,j} denotes the feature map of the j-th convolution layer before the i-th pooling layer of a conventional VGG network, and W_{i,j} and H_{i,j} are the width and height of the feature map;
the texture loss measures the structural style similarity between the generated super-resolution image and the original high-resolution image sample, and drives the low-quality image visually as close as possible to the true texture style of the original high-resolution image; the texture loss is defined as:

    L_tex = || Gram(φ(I^SR)) - Gram(φ(I^HR)) ||^2,  with Gram(F) = F·F^T

where I^SR and I^HR are the generated super-resolution image and the true high-resolution sample respectively, φ is a feature layer of n feature maps of length m extracted from a pre-trained VGG network, and Gram(F) = F·F^T denotes the Gram matrix of a feature map F;
the discriminator network involves not only the true high-resolution image samples in the adversarial training but also the generated super-resolution images, and the relativistic adversarial losses are expressed as follows:

    D_Ra(x_r, x_f) = σ( C(x_r) - E_{x_f}[ C(x_f) ] )
    D_Ra(x_f, x_r) = σ( C(x_f) - E_{x_r}[ C(x_r) ] )
    L_adv^D = -E_{x_r}[ log D_Ra(x_r, x_f) ] - E_{x_f}[ log( 1 - D_Ra(x_f, x_r) ) ]
    L_adv^G = -E_{x_r}[ log( 1 - D_Ra(x_r, x_f) ) ] - E_{x_f}[ log D_Ra(x_f, x_r) ]

where x_r ~ P and x_f ~ Q denote the data distributions of the real high-resolution samples and the generated super-resolution images respectively, C(·) is the raw (untransformed) output of the discriminator, σ is the sigmoid function, E denotes the mean, and D_Ra is the relativistic discriminator network.
CN202010884014.XA 2020-08-28 2020-08-28 Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model Withdrawn CN112001847A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010884014.XA 2020-08-28 2020-08-28 Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010884014.XA 2020-08-28 2020-08-28 Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model

Publications (1)

Publication Number Publication Date
CN112001847A true CN112001847A (en) 2020-11-27

Family

ID=73464379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010884014.XA Withdrawn CN112001847A (en) 2020-08-28 2020-08-28 Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model

Country Status (1)

Country Link
CN (1) CN112001847A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190370608A1 (en) * 2018-05-31 2019-12-05 Seoul National University R&Db Foundation Apparatus and method for training facial locality super resolution deep neural network
CN109410239A (en) * 2018-11-07 2019-03-01 南京大学 A kind of text image super resolution ratio reconstruction method generating confrontation network based on condition
CN109509152A (en) * 2018-12-29 2019-03-22 大连海事大学 A kind of image super-resolution rebuilding method of the generation confrontation network based on Fusion Features
CN109993698A (en) * 2019-03-29 2019-07-09 西安工程大学 A kind of single image super-resolution texture Enhancement Method based on generation confrontation network
CN110189253A (en) * 2019-04-16 2019-08-30 浙江工业大学 A kind of image super-resolution rebuilding method generating confrontation network based on improvement
CN110136063A (en) * 2019-05-13 2019-08-16 南京信息工程大学 A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110570353A (en) * 2019-08-27 2019-12-13 天津大学 Dense connection generation countermeasure network single image super-resolution reconstruction method
CN111583109A (en) * 2020-04-23 2020-08-25 华南理工大学 Image super-resolution method based on generation countermeasure network

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381720A (en) * 2020-11-30 2021-02-19 黑龙江大学 Construction method of super-resolution convolutional neural network model
CN112561796A (en) * 2020-12-02 2021-03-26 西安电子科技大学 Laser point cloud super-resolution reconstruction method based on self-attention generation countermeasure network
CN112561796B (en) * 2020-12-02 2024-04-16 西安电子科技大学 Laser point cloud super-resolution reconstruction method based on self-attention generation countermeasure network
CN112561864B (en) * 2020-12-04 2024-03-29 深圳格瑞健康科技有限公司 Training method, system and storage medium for caries image classification model
CN112561864A (en) * 2020-12-04 2021-03-26 深圳格瑞健康管理有限公司 Method, system and storage medium for training caries image classification model
CN112749788A (en) * 2020-12-17 2021-05-04 郑州金惠计算机系统工程有限公司 Super-resolution picture model generation method and device, electronic equipment and storage medium
CN112598581A (en) * 2020-12-30 2021-04-02 中国科学院信息工程研究所 Training method of RDN super-resolution network and image generation method
CN112598581B (en) * 2020-12-30 2023-10-24 中国科学院信息工程研究所 Training method and image generation method of RDN super-resolution network
CN112837270A (en) * 2021-01-11 2021-05-25 成都圭目机器人有限公司 Synthetic method and network model of road surface image with semantic annotation
CN112837232A (en) * 2021-01-13 2021-05-25 山东省科学院海洋仪器仪表研究所 Underwater image enhancement and detail recovery method
CN112837232B (en) * 2021-01-13 2022-10-04 山东省科学院海洋仪器仪表研究所 Underwater image enhancement and detail recovery method
CN113077385A (en) * 2021-03-30 2021-07-06 上海大学 Video super-resolution method and system based on countermeasure generation network and edge enhancement
CN113269722A (en) * 2021-04-22 2021-08-17 北京邮电大学 Training method for generating countermeasure network and high-resolution image reconstruction method
CN113343705A (en) * 2021-04-26 2021-09-03 山东师范大学 Text semantic based detail preservation image generation method and system
CN113160057B (en) * 2021-04-27 2023-09-05 沈阳工业大学 RPGAN image super-resolution reconstruction method based on generation countermeasure network
CN113160057A (en) * 2021-04-27 2021-07-23 沈阳工业大学 RPGAN image super-resolution reconstruction method based on generation countermeasure network
CN113191949B (en) * 2021-04-28 2023-06-20 中南大学 Multi-scale super-resolution pathology image digitizing method, system and storage medium
CN113191949A (en) * 2021-04-28 2021-07-30 中南大学 Multi-scale super-resolution pathological image digitization method and system and storage medium
CN113139906A (en) * 2021-05-13 2021-07-20 平安国际智慧城市科技股份有限公司 Training method and device of generator and storage medium
CN113139906B (en) * 2021-05-13 2023-11-24 平安国际智慧城市科技股份有限公司 Training method and device for generator and storage medium
CN113409191A (en) * 2021-06-02 2021-09-17 广东工业大学 Lightweight image super-resolution method and system based on attention feedback mechanism
CN113222824A (en) * 2021-06-03 2021-08-06 北京理工大学 Infrared image super-resolution and small target detection method
CN113344110B (en) * 2021-06-26 2024-04-05 浙江理工大学 Fuzzy image classification method based on super-resolution reconstruction
CN113344110A (en) * 2021-06-26 2021-09-03 浙江理工大学 Fuzzy image classification method based on super-resolution reconstruction
CN113538247B (en) * 2021-08-12 2022-04-15 中国科学院空天信息创新研究院 Super-resolution generation and conditional countermeasure network remote sensing image sample generation method
CN113538247A (en) * 2021-08-12 2021-10-22 中国科学院空天信息创新研究院 Super-resolution generation and conditional countermeasure network remote sensing image sample generation method
CN113674191A (en) * 2021-08-23 2021-11-19 中国人民解放军国防科技大学 Weak light image enhancement method and device based on conditional countermeasure network
CN113744238A (en) * 2021-09-01 2021-12-03 南京工业大学 Method for establishing bullet trace database
CN113744238B (en) * 2021-09-01 2023-08-01 南京工业大学 Method for establishing bullet trace database
CN113837945A (en) * 2021-09-30 2021-12-24 福州大学 Display image quality optimization method and system based on super-resolution reconstruction
CN113837945B (en) * 2021-09-30 2023-08-04 福州大学 Display image quality optimization method and system based on super-resolution reconstruction
CN114063168A (en) * 2021-11-16 2022-02-18 电子科技大学 Artificial intelligence noise reduction method for seismic signals
CN114519679B (en) * 2022-02-21 2022-10-21 安徽大学 Intelligent SAR target image data enhancement method
CN114519679A (en) * 2022-02-21 2022-05-20 安徽大学 Intelligent SAR target image data enhancement method
CN115063293A (en) * 2022-05-31 2022-09-16 北京航空航天大学 Rock microscopic image super-resolution reconstruction method adopting generation of countermeasure network
CN115086670B (en) * 2022-06-13 2023-03-10 梧州学院 Low-bit-rate encoding and decoding method and system for high-definition microscopic video
CN115086670A (en) * 2022-06-13 2022-09-20 梧州学院 Low-bit-rate encoding and decoding method and system for high-definition microscopic video
CN115564652A (en) * 2022-09-30 2023-01-03 南京航空航天大学 Reconstruction method for image super-resolution
CN115564652B (en) * 2022-09-30 2023-12-01 南京航空航天大学 Reconstruction method for super-resolution of image
CN115880537B (en) * 2023-02-16 2023-05-09 江西财经大学 Method and system for evaluating image quality of countermeasure sample
CN115880537A (en) * 2023-02-16 2023-03-31 江西财经大学 Method and system for evaluating image quality of confrontation sample

Similar Documents

Publication Publication Date Title
CN112001847A (en) Method for generating high-quality images with a relativistic generative adversarial super-resolution reconstruction model
CN110570353B (en) Super-resolution reconstruction method for generating single image of countermeasure network by dense connection
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN110136063B (en) Single image super-resolution reconstruction method based on condition generation countermeasure network
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN113362223B (en) Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN111429355A (en) Image super-resolution reconstruction method based on generation countermeasure network
CN112037131A (en) Single-image super-resolution reconstruction method based on generation countermeasure network
CN110717857A (en) Super-resolution image reconstruction method and device
CN109214989B (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN110473142B (en) Single image super-resolution reconstruction method based on deep learning
CN111915490A (en) License plate image super-resolution reconstruction model and method based on multi-scale features
CN112949636B (en) License plate super-resolution recognition method, system and computer readable medium
Wei et al. Improving resolution of medical images with deep dense convolutional neural network
CN115546032B (en) Single-frame image super-resolution method based on feature fusion and attention mechanism
CN113538246B (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN116168067B (en) Supervised multi-modal light field depth estimation method based on deep learning
CN112950480A (en) Super-resolution reconstruction method integrating multiple receptive fields and dense residual attention
CN113096015B (en) Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network
CN112365405A (en) Unsupervised super-resolution reconstruction method based on generation countermeasure network
CN113344110B (en) Fuzzy image classification method based on super-resolution reconstruction
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
CN113129237B (en) Depth image deblurring method based on multi-scale fusion coding network
Yang et al. Super-resolution generative adversarial networks based on attention model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 2020-11-27)