CN115018705A - Image super-resolution method based on enhanced generation countermeasure network


Info

Publication number
CN115018705A
Authority
CN
China
Prior art keywords: network, image, convolution, resolution, parallel
Prior art date
Legal status
Granted
Application number
CN202210593247.3A
Other languages
Chinese (zh)
Other versions
CN115018705B (en)
Inventor
付煜波
杨欣
朱义天
李恒锐
周大可
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202210593247.3A
Publication of CN115018705A
Application granted
Publication of CN115018705B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems


Abstract

The invention discloses an image super-resolution algorithm based on an enhanced generative adversarial network. The method builds a generation network and a discrimination network, trains them against each other, and finally obtains an algorithm model that can effectively increase image resolution. In the generation network, a parallel-convolution densely connected residual block replaces the DenseBlock of the original algorithm, enlarging the receptive field of the image; channel attention and spatial attention mechanisms are appended after the parallel-convolution densely connected residual block, and the combination of the densely connected residual block and the attention mechanism serves as the basic unit of the generation network, strengthening the algorithm's ability to learn high-frequency image information and improving the super-resolution effect.

Description

Image super-resolution method based on enhanced generation countermeasure network
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an image super-resolution method based on an enhanced generation countermeasure network.
Background
Single-image super-resolution is a fundamental and active research topic aimed at increasing the resolution of an image. Since C. Dong et al. proposed the SRCNN model and first realized super-resolution with a convolutional network, the field has developed vigorously: new network structures and training strategies have continually improved super-resolution performance. SRCNN has only three convolutional layers, and such a shallow network struggles to extract deep image information; simply increasing the network depth, however, leads to network degradation even when regularization is used. He et al. proposed the residual network in 2015, adding skip connections that allow upper-layer information to bypass a layer's computation and feed directly into lower layers. This structure largely resolves the degradation problem and makes much deeper networks feasible.
In 2014, with the rise of deep learning, Ian Goodfellow et al. proposed the generative adversarial network (GAN), whose model consists of a generation network and a discrimination network. The generation network outputs a candidate result, which is fed into the discrimination network; the discrimination network scores the result to judge the training effect. The model follows the idea of a zero-sum game: the generation network is trained to produce results that can fool the discrimination network, while the discrimination network is trained to distinguish real data from generated results. Trained in opposition, the two networks progress together, and the quality of the generation network's output rises continually. In 2017, Ledig et al. applied adversarial training to the super-resolution field with the SRGAN algorithm, improving the loss function by introducing adversarial loss and perceptual loss. Earlier super-resolution research had concentrated on maximizing PSNR; the resulting images lose high-frequency detail, appear overly smooth, and lack sharpness, which conflicts with human visual perception. SRGAN instead emphasizes the realism of the generated result and its perceptual similarity to the original image, so its outputs accord better with subjective human judgment.
In 2018, Wang et al. proposed ESRGAN on the basis of SRGAN. Removing the BN layers reduces computation, helps suppress artifacts, and improves generalization; replacing the original residual block with a multi-layer densely connected residual network makes training more stable and easier; the discrimination network is improved with the RaGAN (relativistic average GAN) method, raising the realism and detail texture of reconstructed images; and the loss function is optimized to compute the perceptual loss on VGG feature values before activation, improving brightness consistency and sensitivity to high-frequency information.
Disclosure of Invention
The invention aims to provide, in view of the shortcomings of the prior art, an image super-resolution method based on an enhanced generation countermeasure network.
The technical scheme is as follows: an image super-resolution method based on an enhanced generation countermeasure network comprises the following steps:
1) based on the RRDB nested residual dense structure, realize multi-scale feature extraction with parallel convolution PC, add a CBAM attention module to improve the learning of high-frequency information, and construct the parallel-convolution densely connected residual block RR-PADB;
2) construct a super-resolution network model comprising a generation network and a discrimination network, and add the parallel-convolution densely connected residual blocks RR-PADB constructed in step 1) to the generation network;
3) input a low-resolution image into the generation network of step 2) to output a false high-resolution image; feed the false high-resolution image and the original image into the discrimination network, which judges the false high-resolution image: if it is judged false, the generation network continues training; if it is judged true, the generation network stops training and is the improved super-resolution model;
4) process images with the super-resolution model improved in step 3).
Preferably, the implementation process of constructing the parallel convolution dense connection residual block RR-PADB in step 1) is as follows:
the parallel convolution PC is expressed as:
P_o = f_{3×3}(P_i) + f_{5×5}(P_i)
where P_o is the output of the parallel convolution layer, P_i is the input of the parallel convolution layer, f_{3×3} denotes a 3×3 convolution calculation, and f_{5×5} denotes a 5×5 convolution calculation;
the parallel convolutions PC are composed into the parallel densely connected block PADB by means of dense connections and an added attention mechanism:
the first-layer output F_1 of the parallel convolution is:
F_1 = f_{3×3}(F_i) + f_{5×5}(F_i)
where F_i denotes the input of layer 1 and F_n (n = 1, 2, 3) denotes the result of the n-th parallel convolution layer, with:
F_2 = f_{3×3}[C_a(F_1, F_i)] + f_{5×5}[C_a(F_1, F_i)]
F_3 = f_{3×3}[C_a(F_2, F_1, F_i)] + f_{5×5}[C_a(F_2, F_1, F_i)]
where C_a denotes a feature concatenation operation;
features are then extracted once more through the dense connection and a 3×3 convolution:
F = f_{3×3}[C_a(F_3, F_2, F_1, F_i)]
where F is the feature map obtained by this feature extraction;
the feature map F is then combined with the CBAM attention module, which performs channel attention first and spatial attention second:
channel attention is computed first: F' = M_C(F) ⊙ F
spatial attention is computed next: F'' = M_S(F') ⊙ F'
where F'' is the result of the CBAM attention module, F' is the result of the channel attention step, M_C(F) denotes the weights obtained by channel attention, M_S(F') denotes the weights obtained by spatial attention, and ⊙ denotes element-wise multiplication;
the output of the complete parallel-convolution densely connected block is therefore expressed as:
F'' = M_S(F') ⊙ (M_C(F) ⊙ {f_{3×3}[C_a(F_3, F_2, F_1, F_i)]})
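Continuing the sketch above, the PADB can be assembled from three densely connected ParallelConv layers, a 3×3 fusion convolution, and a CBAM-style attention tail; the reduction ratio, pooling choices, and 7×7 spatial kernel follow the common CBAM design and are assumptions here, not values stated in the patent.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # M_C(F): channel weights from shared MLP over global avg- and max-pooling.
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        w = torch.sigmoid(avg + mx)[:, :, None, None]
        return w * x  # F' = M_C(F) ⊙ F

class SpatialAttention(nn.Module):
    # M_S(F'): spatial weights from a conv over channel-wise avg and max maps.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        w = torch.sigmoid(self.conv(pooled))
        return w * x  # F'' = M_S(F') ⊙ F'

class PADB(nn.Module):
    # Three densely connected ParallelConv layers, a 3x3 fusion conv over the
    # concatenation C_a(F_3, F_2, F_1, F_i), then channel and spatial attention.
    def __init__(self, ch):
        super().__init__()
        self.pc1 = ParallelConv(ch, ch)
        self.pc2 = ParallelConv(2 * ch, ch)
        self.pc3 = ParallelConv(3 * ch, ch)
        self.fuse = nn.Conv2d(4 * ch, ch, kernel_size=3, padding=1)
        self.ca, self.sa = ChannelAttention(ch), SpatialAttention()

    def forward(self, x):
        f1 = self.pc1(x)
        f2 = self.pc2(torch.cat([f1, x], dim=1))
        f3 = self.pc3(torch.cat([f2, f1, x], dim=1))
        f = self.fuse(torch.cat([f3, f2, f1, x], dim=1))
        return self.sa(self.ca(f))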
the parallel-convolution densely connected residual block RR-PADB is expressed as:
R_1 = β · D(R_i) + R_i
R_2 = β · D(R_1) + R_1
R_3 = β · D(R_2) + R_2
R_O = β · D(R_3) + R_i
where R_i denotes the input of the RR-PADB, R_n (n = 1, 2, 3) denotes the output after n PADB layers, R_O denotes the output of the RR-PADB, D denotes the parallel-convolution dense-connection (PADB) operation, and β is a residual scaling parameter.
Preferably, the generation network in step 2) comprises an initial convolution layer, sixteen parallel-convolution densely connected residual blocks RR-PADB in the middle, and two final convolution layers, using the Leaky ReLU function as the activation:
y_i = x_i,        x_i ≥ 0
y_i = a_i · x_i,  x_i < 0
where x_i is an input value, y_i is the corresponding output value, and a_i is a slope (proportional) parameter.
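Putting the pieces together, a generator laid out as described (one head convolution, sixteen RR-PADB blocks, two tail convolutions with Leaky ReLU) might look as follows; the channel width, the nearest-neighbor 4× upsampling stage, the global residual connection, and the slope a_i = 0.2 are assumptions for illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F_nn

class Generator(nn.Module):
    def __init__(self, ch=64, n_blocks=16, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[RRPADB(ch) for _ in range(n_blocks)])
        self.tail1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.tail2 = nn.Conv2d(ch, 3, 3, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)  # a_i = 0.2 assumed
        self.scale = scale

    def forward(self, lr):
        x = self.head(lr)
        x = x + self.body(x)  # long skip over the 16 RR-PADB blocks (assumed)
        x = F_nn.interpolate(x, scale_factor=self.scale, mode="nearest")
        return self.tail2(self.act(self.tail1(x)))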
The adversarial loss function of the generation network is built on the relativistic average discriminator:
D_Ra(x_r, x_f) = σ(C(x_r) - E_{x_f}[C(x_f)])
D_Ra(x_f, x_r) = σ(C(x_f) - E_{x_r}[C(x_r)])
L_G^Ra = -E_{x_r}[log(1 - D_Ra(x_r, x_f))] - E_{x_f}[log(D_Ra(x_f, x_r))]
where x_f denotes the generated image, x_r denotes the input high-resolution image, C(·) denotes the raw discriminator output, σ denotes the sigmoid function, D_Ra denotes the relativistic average discriminator, and E[·] denotes the mean.
Preferably, the discrimination network comprises an initial convolution layer with a Leaky ReLU activation function, seven intermediate convolution layers with BN and Leaky ReLU activation functions, and finally two fully connected layers with a Leaky ReLU activation function followed by a Sigmoid function, where the Sigmoid function is:
σ(x) = 1 / (1 + e^(-x))
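A sketch of a discriminator with this layout; the strides, channel widths, the pooling before the fully connected layers, and the 100-unit hidden layer are assumptions. The module returns the raw score; the patent's final Sigmoid corresponds to applying σ to this score, as the RaGAN losses above do internally via the logits form.

import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        layers = [nn.Conv2d(3, 64, 3, padding=1), nn.LeakyReLU(0.2, True)]
        channels = [64, 64, 128, 128, 256, 256, 512, 512]
        for i in range(7):  # seven conv-BN-LeakyReLU stages
            stride = 2 if i % 2 == 0 else 1  # halve resolution every other stage
            layers += [nn.Conv2d(channels[i], channels[i + 1], 3,
                                 stride=stride, padding=1),
                       nn.BatchNorm2d(channels[i + 1]),
                       nn.LeakyReLU(0.2, True)]
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(512, 100), nn.LeakyReLU(0.2, True),
                                nn.Linear(100, 1))

    def forward(self, x):
        h = self.pool(self.features(x)).flatten(1)
        return self.fc(h)  # raw score; sigma(score) is the real/fake probability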
the penalty function for the discrimination network is:
Figure BDA0003666501390000045
the overall loss function is:
Figure BDA0003666501390000046
wherein: l is 1 Is a content loss function; l is a radical of an alcohol percep As a function of perceptual loss;
Figure BDA0003666501390000047
m and n are balance parameters for the function of resisting loss;
Figure BDA0003666501390000048
L 1 =E[||G(x)-y|| 1 ]
wherein x is a low resolution image input by the network, y is an original image corresponding to the input image, G (x) is an image generated by the network,
Figure BDA0003666501390000049
for a non-linear transformation, E represents the mean.
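A sketch of the combined objective L_G = L_percep + m·L_G^Ra + n·L_1, computing the perceptual loss on VGG19 features before activation as the patent describes; the conv5_4 layer choice and the weights m and n are assumptions (the values shown follow ESRGAN's convention), and ImageNet input normalization is omitted for brevity.

import torch
import torch.nn as nn
import torchvision

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features
        # keep layers up to conv5_4, i.e. features *before* its ReLU activation
        self.vgg = nn.Sequential(*list(vgg.children())[:35]).eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def forward(self, sr, hr):
        return nn.functional.l1_loss(self.vgg(sr), self.vgg(hr))

def total_generator_loss(disc, percep, sr, hr, m=0.005, n=0.01):
    l_percep = percep(sr, hr)                 # L_percep
    l_adv = generator_adv_loss(disc, hr, sr)  # L_G^Ra (from the sketch above)
    l_content = nn.functional.l1_loss(sr, hr) # L_1 = E[||G(x) - y||_1]
    return l_percep + m * l_adv + n * l_content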
Beneficial effects: the invention provides an image super-resolution algorithm based on an enhanced generation countermeasure network; by improving and optimizing the model, the learning of high-frequency image information is strengthened to a certain extent, which further improves the image super-resolution effect.
Drawings
FIG. 1 is a diagram of the generation network structure of the present invention;
FIG. 2 is a diagram of a discriminating network structure according to the present invention;
FIG. 3 is a diagram of a parallel convolution dense concatenation block of the present invention;
FIG. 4 is a diagram of parallel convolution dense concatenation residual block of the present invention;
FIG. 5 is a schematic view of the attention mechanism of the present invention;
FIG. 6 is a diagram of the parallel convolution architecture of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings.
An image super-resolution method based on an enhanced generation countermeasure network comprises the following steps:
1) based on the RRDB nested residual dense structure, realize multi-scale feature extraction with parallel convolution PC, add a CBAM attention module to improve the learning of high-frequency information, and construct the parallel-convolution densely connected residual block RR-PADB;
2) construct a super-resolution network model comprising a generation network and a discrimination network, and add the parallel-convolution densely connected residual blocks RR-PADB constructed in step 1) to the generation network;
3) input a low-resolution image into the generation network of step 2) to output a false high-resolution image; feed the false high-resolution image and the original image into the discrimination network, which judges the false high-resolution image: if it is judged false, the generation network continues training; if it is judged true, the generation network stops training and is the improved super-resolution model;
4) process images with the super-resolution model improved in step 3).
The process of constructing the parallel-convolution densely connected residual block RR-PADB in step 1) is as follows:
the parallel convolution PC is expressed as:
P_o = f_{3×3}(P_i) + f_{5×5}(P_i)
where P_o is the output of the parallel convolution layer, P_i is the input of the parallel convolution layer, f_{3×3} denotes a 3×3 convolution calculation, and f_{5×5} denotes a 5×5 convolution calculation;
the parallel convolutions PC are composed into the parallel densely connected block PADB by means of dense connections and an added attention mechanism:
the first-layer output F_1 of the parallel convolution is:
F_1 = f_{3×3}(F_i) + f_{5×5}(F_i)
where F_i denotes the input of layer 1 and F_n (n = 1, 2, 3) denotes the result of the n-th parallel convolution layer, with:
F_2 = f_{3×3}[C_a(F_1, F_i)] + f_{5×5}[C_a(F_1, F_i)]
F_3 = f_{3×3}[C_a(F_2, F_1, F_i)] + f_{5×5}[C_a(F_2, F_1, F_i)]
where C_a denotes a feature concatenation operation;
features are then extracted once more through the dense connection and a 3×3 convolution:
F = f_{3×3}[C_a(F_3, F_2, F_1, F_i)]
where F is the feature map obtained by this feature extraction;
the feature map F is then combined with the CBAM attention module, which performs channel attention first and spatial attention second:
channel attention is computed first: F' = M_C(F) ⊙ F
spatial attention is computed next: F'' = M_S(F') ⊙ F'
where F'' is the result of the CBAM attention module, F' is the result of the channel attention step, M_C(F) denotes the weights obtained by channel attention, M_S(F') denotes the weights obtained by spatial attention, and ⊙ denotes element-wise multiplication;
the output of the complete parallel-convolution densely connected block is therefore expressed as:
F'' = M_S(F') ⊙ (M_C(F) ⊙ {f_{3×3}[C_a(F_3, F_2, F_1, F_i)]})
the parallel-convolution densely connected residual block RR-PADB is expressed as:
R_1 = β · D(R_i) + R_i
R_2 = β · D(R_1) + R_1
R_3 = β · D(R_2) + R_2
R_O = β · D(R_3) + R_i
where R_i denotes the input of the RR-PADB, R_n (n = 1, 2, 3) denotes the output after n PADB layers, R_O denotes the output of the RR-PADB, D denotes the parallel-convolution dense-connection (PADB) operation, and β is a residual scaling parameter.
The generation network in step 2) comprises an initial convolution layer, sixteen parallel-convolution densely connected residual blocks RR-PADB in the middle, and two final convolution layers, using the Leaky ReLU function as the activation:
y_i = x_i,        x_i ≥ 0
y_i = a_i · x_i,  x_i < 0
where x_i is an input value, y_i is the corresponding output value, and a_i is a slope (proportional) parameter.
The adversarial loss function of the generation network is built on the relativistic average discriminator:
D_Ra(x_r, x_f) = σ(C(x_r) - E_{x_f}[C(x_f)])
D_Ra(x_f, x_r) = σ(C(x_f) - E_{x_r}[C(x_r)])
L_G^Ra = -E_{x_r}[log(1 - D_Ra(x_r, x_f))] - E_{x_f}[log(D_Ra(x_f, x_r))]
where x_f denotes the generated image, x_r denotes the input high-resolution image, C(·) denotes the raw discriminator output, σ denotes the sigmoid function, D_Ra denotes the relativistic average discriminator, and E[·] denotes the mean.
The discrimination network comprises an initial convolution layer with a Leaky ReLU activation function, seven intermediate convolution layers with BN and Leaky ReLU activation functions, and finally two fully connected layers with a Leaky ReLU activation function followed by a Sigmoid function, where the Sigmoid function is:
σ(x) = 1 / (1 + e^(-x))
The adversarial loss function of the discrimination network is:
L_D^Ra = -E_{x_r}[log(D_Ra(x_r, x_f))] - E_{x_f}[log(1 - D_Ra(x_f, x_r))]
The overall loss function is:
L_G = L_percep + m · L_G^Ra + n · L_1
where L_1 is the content loss function, L_percep is the perceptual loss function, L_G^Ra is the adversarial loss function, and m and n are balance parameters;
L_percep = E[||φ(G(x)) - φ(y)||_1]
L_1 = E[||G(x) - y||_1]
where x is the low-resolution image input to the network, y is the original image corresponding to the input image, G(x) is the image generated by the network, φ is a non-linear (VGG feature) transformation, and E[·] denotes the mean.
The generation network of the model is shown in FIG. 1; it consists of parallel-convolution densely connected residual blocks (RR-PADB), convolution layers (Conv), upsampling layers, and Leaky ReLU functions. The RR-PADB structure, shown in FIG. 4, is formed by connecting parallel-convolution densely connected blocks (PADB) through a residual network; the PADB structure, shown in FIG. 3, is formed by densely connecting parallel convolution structures (PC) and appending a CBAM attention module; the parallel convolution structure is shown in FIG. 6, and the CBAM module structure is shown in FIG. 5.
The generation network produces a false high-resolution image from the input low-resolution image; the false high-resolution image and the original image are fed into the discrimination network, which judges the quality of the generation. If the result is judged false, the generation network continues training; if it is judged true, training stops, and the resulting generation network model is the final super-resolution model. Driven by the loss functions, the generation network and discrimination network are trained continuously until a reconstruction model that meets the requirements is obtained.
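The adversarial training procedure just described can be sketched as follows, reusing the modules and loss functions from the earlier sketches (Generator, Discriminator, PerceptualLoss, generator_adv_loss, total_generator_loss); the optimizer type, learning rate, and epoch count are assumptions.

import torch
import torch.nn.functional as F_nn

def discriminator_adv_loss(disc, real, fake):
    c_real, c_fake = disc(real), disc(fake.detach())
    d_rf = c_real - c_fake.mean()
    d_fr = c_fake - c_real.mean()
    # L_D^Ra = -E[log D_Ra(x_r, x_f)] - E[log(1 - D_Ra(x_f, x_r))]
    return (F_nn.binary_cross_entropy_with_logits(d_rf, torch.ones_like(d_rf))
            + F_nn.binary_cross_entropy_with_logits(d_fr, torch.zeros_like(d_fr)))

def train(gen, disc, loader, epochs=100, lr=1e-4, device="cuda"):
    gen, disc = gen.to(device), disc.to(device)
    percep = PerceptualLoss().to(device)
    opt_g = torch.optim.Adam(gen.parameters(), lr=lr)
    opt_d = torch.optim.Adam(disc.parameters(), lr=lr)
    for _ in range(epochs):
        for lr_img, hr_img in loader:
            lr_img, hr_img = lr_img.to(device), hr_img.to(device)
            sr = gen(lr_img)
            # discriminator step (generator gradients blocked via detach)
            opt_d.zero_grad()
            discriminator_adv_loss(disc, hr_img, sr).backward()
            opt_d.step()
            # generator step under the combined objective
            opt_g.zero_grad()
            total_generator_loss(disc, percep, sr, hr_img).backward()
            opt_g.step()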
Step 1: build the training set from 500 high-definition pictures.
Step 2: use 50 images as the verification set.
Step 3: input the training set into the adversarial network to train the network.
Step 4: verify the super-resolution performance of the network with 50 test-set images.
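A minimal sketch of the data preparation implied by steps 1 to 4: paired low/high-resolution patches generated by bicubic downsampling of the high-definition pictures. The patch size, 4× scale, and bicubic degradation are assumptions, and the directory name is hypothetical.

import os
from PIL import Image
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms.functional as TF

class SRDataset(Dataset):
    def __init__(self, root, patch=128, scale=4):
        self.paths = [os.path.join(root, f) for f in sorted(os.listdir(root))]
        self.patch, self.scale = patch, scale

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        hr = Image.open(self.paths[idx]).convert("RGB")
        hr = TF.center_crop(hr, self.patch)
        # synthesize the low-resolution input by bicubic downsampling
        lr = hr.resize((self.patch // self.scale,) * 2, Image.BICUBIC)
        return TF.to_tensor(lr), TF.to_tensor(hr)

# e.g. loader = DataLoader(SRDataset("train_hd_500"), batch_size=16, shuffle=True)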
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention.

Claims (4)

1. An image super-resolution method based on an enhanced generation countermeasure network, characterized by comprising the following steps:
1) based on the RRDB nested residual dense structure, realizing multi-scale feature extraction with parallel convolution PC, adding a CBAM attention module to improve the learning of high-frequency information, and constructing the parallel-convolution densely connected residual block RR-PADB;
2) constructing a super-resolution network model comprising a generation network and a discrimination network, and adding the parallel-convolution densely connected residual blocks RR-PADB constructed in step 1) to the generation network;
3) inputting a low-resolution image into the generation network of step 2) to output a false high-resolution image; feeding the false high-resolution image and the original image into the discrimination network, which judges the false high-resolution image: if it is judged false, the generation network continues training; if it is judged true, the generation network stops training and is the improved super-resolution model;
4) processing images with the super-resolution model improved in step 3).
2. The image super-resolution method based on the enhanced generative countermeasure network as claimed in claim 1, wherein the implementation process of constructing the parallel convolution dense connection residual block RR-PADB in step 1) is as follows:
the parallel convolution PC is expressed as:
P_o = f_{3×3}(P_i) + f_{5×5}(P_i)
wherein: P_o is the output of the parallel convolution layer, P_i is the input of the parallel convolution layer, f_{3×3} denotes a 3×3 convolution calculation, and f_{5×5} denotes a 5×5 convolution calculation;
the parallel convolutions PC are composed into the parallel densely connected block PADB by means of dense connections and an added attention mechanism:
the first-layer output F_1 of the parallel convolution is:
F_1 = f_{3×3}(F_i) + f_{5×5}(F_i)
wherein F_i denotes the input of layer 1 and F_n (n = 1, 2, 3) denotes the result of the n-th parallel convolution layer, with:
F_2 = f_{3×3}[C_a(F_1, F_i)] + f_{5×5}[C_a(F_1, F_i)]
F_3 = f_{3×3}[C_a(F_2, F_1, F_i)] + f_{5×5}[C_a(F_2, F_1, F_i)]
wherein: C_a denotes a feature concatenation operation;
features are then extracted once more through the dense connection and a 3×3 convolution:
F = f_{3×3}[C_a(F_3, F_2, F_1, F_i)]
wherein F is the feature map obtained by this feature extraction;
the feature map F is then combined with the CBAM attention module, which performs channel attention first and spatial attention second:
channel attention is computed first: F' = M_C(F) ⊙ F
spatial attention is computed next: F'' = M_S(F') ⊙ F'
wherein F'' is the result of the CBAM attention module, F' is the result of the channel attention step, M_C(F) denotes the weights obtained by channel attention, M_S(F') denotes the weights obtained by spatial attention, and ⊙ denotes element-wise multiplication;
the output of the complete parallel-convolution densely connected block is therefore expressed as:
F'' = M_S(F') ⊙ (M_C(F) ⊙ {f_{3×3}[C_a(F_3, F_2, F_1, F_i)]})
the parallel-convolution densely connected residual block RR-PADB is expressed as:
R_1 = β · D(R_i) + R_i
R_2 = β · D(R_1) + R_1
R_3 = β · D(R_2) + R_2
R_O = β · D(R_3) + R_i
wherein: R_i denotes the input of the RR-PADB, R_n (n = 1, 2, 3) denotes the output after n PADB layers, R_O denotes the output of the RR-PADB, D denotes the parallel-convolution dense-connection (PADB) operation, and β is a residual scaling parameter.
3. The image super-resolution method based on the enhanced generation countermeasure network as claimed in claim 2, wherein the generation network in step 2) comprises an initial convolution layer, sixteen parallel-convolution densely connected residual blocks RR-PADB in the middle, and two final convolution layers, using the Leaky ReLU function as the activation:
y_i = x_i,        x_i ≥ 0
y_i = a_i · x_i,  x_i < 0
wherein x_i is an input value, y_i is the corresponding output value, and a_i is a slope (proportional) parameter;
the adversarial loss function of the generation network is built on the relativistic average discriminator:
D_Ra(x_r, x_f) = σ(C(x_r) - E_{x_f}[C(x_f)])
D_Ra(x_f, x_r) = σ(C(x_f) - E_{x_r}[C(x_r)])
L_G^Ra = -E_{x_r}[log(1 - D_Ra(x_r, x_f))] - E_{x_f}[log(D_Ra(x_f, x_r))]
wherein x_f denotes the generated image, x_r denotes the input high-resolution image, C(·) denotes the raw discriminator output, σ denotes the sigmoid function, D_Ra denotes the relativistic average discriminator, and E[·] denotes the mean.
4. The image super-resolution method based on the enhanced generation countermeasure network as claimed in claim 3, wherein the discrimination network comprises an initial convolution layer with a Leaky ReLU activation function, seven intermediate convolution layers with BN and Leaky ReLU activation functions, and finally two fully connected layers with a Leaky ReLU activation function followed by a Sigmoid function, the Sigmoid function being:
σ(x) = 1 / (1 + e^(-x))
the adversarial loss function of the discrimination network is:
L_D^Ra = -E_{x_r}[log(D_Ra(x_r, x_f))] - E_{x_f}[log(1 - D_Ra(x_f, x_r))]
the overall loss function is:
L_G = L_percep + m · L_G^Ra + n · L_1
wherein: L_1 is the content loss function, L_percep is the perceptual loss function, L_G^Ra is the adversarial loss function, and m and n are balance parameters;
L_percep = E[||φ(G(x)) - φ(y)||_1]
L_1 = E[||G(x) - y||_1]
wherein x is the low-resolution image input to the network, y is the original image corresponding to the input image, G(x) is the image generated by the network, φ is a non-linear (VGG feature) transformation, and E[·] denotes taking the mean.
CN202210593247.3A (priority date 2022-05-27, filing date 2022-05-27): Image super-resolution method based on enhanced generation countermeasure network. Status: Active. Granted as CN115018705B.

Priority Applications (1)

CN202210593247.3A (priority date 2022-05-27, filing date 2022-05-27): Image super-resolution method based on enhanced generation countermeasure network

Publications (2)

CN115018705A (this application): published 2022-09-06
CN115018705B (grant): published 2024-06-14

Family

ID: 83070656

Family Applications (1)

CN202210593247.3A (filed 2022-05-27): Image super-resolution method based on enhanced generation countermeasure network (Active, granted as CN115018705B)

Country Status (1)

CN: CN115018705B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109829855A * 2019-01-23 2019-05-31 Nanjing University of Aeronautics and Astronautics Super-resolution reconstruction method based on fused multi-level feature maps
CN111161141A * 2019-11-26 2020-05-15 Xidian University Hyperspectral single-image super-resolution method based on adversarial learning with an inter-band attention mechanism
CN111047515A * 2019-12-29 2020-04-21 Lanzhou University of Technology Image super-resolution reconstruction method using a dilated convolutional neural network with an attention mechanism
CN113269675A * 2021-05-18 2021-08-17 Northeast Normal University Temporal super-resolution visualization method for time-varying data based on a deep learning model
CN113222825A * 2021-06-03 2021-08-06 Beijing Institute of Technology Infrared image super-resolution reconstruction method trained with visible-light images, and application
CN113592718A * 2021-08-12 2021-11-02 China University of Mining and Technology Mine image super-resolution reconstruction method and system based on a multi-scale residual network
CN114332625A * 2021-12-31 2022-04-12 Yunnan University Remote sensing image colorization and super-resolution method and system based on neural networks

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LONGBAO WANG et al.: "A CBAM-GAN-based method for super-resolution reconstruction of remote sensing image", IET Image Processing 18, 16 October 2023, pages 548-560 *
XIN YANG et al.: "Image super-resolution based on deep neural network of multiple attention mechanism", Journal of Visual Communication and Image Representation 75 (2021) 103019, 8 January 2021, pages 1-11 *
ZHU Chen et al.: "Research on super-resolution algorithm based on dual sparse model and non-local self-similarity constraint", Journal of Yunnan Minzu University (Natural Sciences Edition), vol. 29, no. 1, 19 January 2020, pages 59-64 *
WANG Ronggui et al.: "Image super-resolution reconstruction based on self-similar feature enhancement network structure", Opto-Electronic Engineering, vol. 49, no. 5, 25 May 2022, pages 38-53 *
DONG Meng; WU Ge; CAO Hongyu; JING Wenbo; YU Hongyang: "Video super-resolution reconstruction based on attention residual convolutional network", Journal of Changchun University of Science and Technology (Natural Science Edition), no. 01, 15 February 2020, pages 86-92 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115564652A * 2022-09-30 2023-01-03 Nanjing University of Aeronautics and Astronautics Reconstruction method for image super-resolution
CN115564652B * 2022-09-30 2023-12-01 Nanjing University of Aeronautics and Astronautics Reconstruction method for image super-resolution (grant)
CN115601792A * 2022-12-14 2023-01-13 Changchun University (CN) Cow face image enhancement method

Also Published As

CN115018705B (en), published 2024-06-14

Similar Documents

Publication Publication Date Title
CN113240580B (en) Lightweight image super-resolution reconstruction method based on multi-dimensional knowledge distillation
CN112507997B (en) Face super-resolution system based on multi-scale convolution and receptive field feature fusion
CN111681166B (en) Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit
CN115018705A (en) Image super-resolution method based on enhanced generation countermeasure network
CN110992270A (en) Multi-scale residual attention network image super-resolution reconstruction method based on attention
CN110175986B (en) Stereo image visual saliency detection method based on convolutional neural network
CN110197468A Single-image super-resolution reconstruction algorithm based on a multi-scale residual learning network
CN110033410A (en) Image reconstruction model training method, image super-resolution rebuilding method and device
CN111915490A (en) License plate image super-resolution reconstruction model and method based on multi-scale features
CN112862689A (en) Image super-resolution reconstruction method and system
CN113298718A (en) Single image super-resolution reconstruction method and system
CN110738663A (en) Double-domain adaptive module pyramid network and unsupervised domain adaptive image segmentation method
CN113723174B (en) Face image super-resolution restoration and reconstruction method and system based on generation countermeasure network
CN112801914A (en) Two-stage image restoration method based on texture structure perception
CN112907448A (en) Method, system, equipment and storage medium for super-resolution of any-ratio image
CN115829880A (en) Image restoration method based on context structure attention pyramid network
CN116664435A (en) Face restoration method based on multi-scale face analysis map integration
CN114694176A (en) Lightweight human body posture estimation method based on deep learning
CN113096015B (en) Image super-resolution reconstruction method based on progressive perception and ultra-lightweight network
CN114764754B (en) Occlusion face restoration method based on geometric perception priori guidance
CN116485654A (en) Lightweight single-image super-resolution reconstruction method combining convolutional neural network and transducer
CN114881858A (en) Lightweight binocular image super-resolution method based on multi-attention machine system fusion
CN114677282A (en) Image super-resolution reconstruction method and system
CN113538236A (en) Image super-resolution reconstruction method based on generation countermeasure network
CN113450249B (en) Video redirection method with aesthetic characteristics for different liquid crystal screen sizes

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant