CN114820302A - Improved image super-resolution algorithm based on residual dense CNN and edge enhancement - Google Patents

Improved image super-resolution algorithm based on residual dense CNN and edge enhancement

Info

Publication number
CN114820302A
Authority
CN
China
Prior art keywords
image
channel
resolution
residual dense
slf
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210288078.2A
Other languages
Chinese (zh)
Other versions
CN114820302B (en)
Inventor
陆绮荣
吴止境
卢子任
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology
Priority to CN202210288078.2A
Publication of CN114820302A
Application granted
Publication of CN114820302B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053 - Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 - Road transport of goods or passengers
    • Y02T 10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an improved image super-resolution reconstruction method based on a residual dense convolutional neural network and edge enhancement. For the chrominance channels, the picture is enlarged by bicubic interpolation and the edges are then sharpened with guided filtering to enhance the edge features. To deepen the model and better improve its performance, the luminance channel is processed by the proposed residual dense convolutional neural network, in which two residual dense blocks are stacked to better extract the edge features and detail information of that channel. This approach effectively preserves high-frequency details and provides better results than other approaches. The performance of the proposed algorithm is evaluated on different image data sets and compared with other methods; the experimental results also confirm that the proposed method is superior.

Description

Improved image super-resolution algorithm based on residual dense CNN and edge enhancement
Technical Field
The invention relates to the field of digital image processing, and in particular to the research and implementation of an image super-resolution reconstruction method.
Background
With the development of smartphones and the internet, images have become important information carriers in people's daily lives. An image is a record and expression of the objective world; its resolution directly reflects its definition and measures its information content. Single-image super-resolution (SISR) technology has broad application prospects and can be applied in fields such as video surveillance, target recognition, and medical image analysis.
Single-image super-resolution (SISR) refers to the ill-posed problem of estimating a high-resolution (HR) image from a low-resolution (LR) input image; the SISR problem has received considerable attention from the computer vision and image processing communities.
The SISR problem is particularly challenging and ill-posed because many different HR images can be down-sampled to the same LR image. Many methods have been proposed to solve it; they can be roughly classified into three types: interpolation-based methods, reconstruction-based methods, and learning-based methods. Because interpolation- and reconstruction-based methods perform poorly at high magnifications, current research focuses mainly on example-based methods that learn non-linear mappings between low-resolution and high-resolution images.
With the rapid development and successful application of deep learning techniques, many new SISR methods based on deep convolutional neural networks (CNNs) have been proposed and have achieved great success. Since the pioneering work SRCNN used a three-layer CNN to solve the SISR problem, many CNN-based super-resolution methods have followed. Kim et al. proposed VDSR, which increases the network depth to 20 layers and improves significantly on SRCNN, indicating that the quality of the generated image is closely related to the depth and scale of the model. Mao et al. designed a 30-layer convolutional auto-encoder with symmetric skip connections for SISR and image denoising. To reduce model complexity and training difficulty, DRCN employs recursive supervision and skip connections.
In general, super-resolution reconstruction can effectively improve the resolution and definition of an image at low cost, without replacing hardware equipment. As the technology develops, its fields of application are gradually expanding and its prospects are widening. Super-resolution reconstruction will have a profound influence on future scientific and technological development; further study of the technology is therefore very important.
Disclosure of Invention
The invention aims to overcome the defects of the existing method and provide an improved image super-resolution algorithm based on residual dense CNN and edge enhancement.
The method comprises the following specific steps:
The first step: an improved image super-resolution reconstruction method based on a residual dense convolutional neural network and edge enhancement, characterized by comprising the following steps:
(1) Aiming at research on deep-learning super-resolution reconstruction methods, a novel super-resolution algorithm is proposed on the basis of the convolutional neural network (CNN): an improved image super-resolution reconstruction method based on a Residual dense convolutional neural network and Edge enhancement (RECNN).
(2) The RECNN architecture mainly consists of three layers and performs a process comprising the following steps:
The low-resolution original image is first converted into the YUV color space, which decomposes the input image into a luminance channel (Y channel) and chrominance channels (U channel and V channel).
First layer: the image decomposed from the U channel is first enlarged by bicubic interpolation, and guided filtering is then applied to the enlarged U-channel image for edge enhancement, yielding an enlarged, edge-enhanced image.
Second layer: the image decomposed from the V channel is processed in the same way as in the first layer, yielding an enlarged, edge-enhanced image.
Third layer: the decomposed Y-channel image is passed through the improved residual dense convolutional neural network, where features are extracted by two residual dense blocks and then upscaled by sub-pixel convolution, yielding an enlarged, edge-enhanced image.
Finally, the results of the three channels are combined and converted back into RGB channels to obtain the super-resolution result image.
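For illustration, the three-layer pipeline above can be sketched as follows. The helper y_net stands in for the trained Y-channel network, cv2.ximgproc.guidedFilter (from opencv-contrib) is used for the guided-filtering step, and the detail-boost weights are illustrative assumptions rather than values fixed by the invention.

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np

def recnn_pipeline(lr_bgr: np.ndarray, scale: int, y_net) -> np.ndarray:
    """Sketch of the three-layer RECNN pipeline; lr_bgr is an 8-bit BGR image,
    y_net a trained callable mapping the LR Y channel to the HR Y channel."""
    yuv = cv2.cvtColor(lr_bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    hr_size = (lr_bgr.shape[1] * scale, lr_bgr.shape[0] * scale)

    chroma = []
    for c in (u, v):  # layers 1-2: bicubic enlargement + guided filtering
        c_hr = cv2.resize(c, hr_size, interpolation=cv2.INTER_CUBIC)
        base = cv2.ximgproc.guidedFilter(c_hr, c_hr, 4, 100.0)
        c_hr = cv2.addWeighted(c_hr, 1.5, base, -0.5, 0)  # boost edge detail
        chroma.append(c_hr)

    y_hr = y_net(y)  # layer 3: residual dense CNN + sub-pixel upsampling

    hr_yuv = cv2.merge([y_hr, chroma[0], chroma[1]])
    return cv2.cvtColor(hr_yuv, cv2.COLOR_YUV2BGR)
```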
The second step: the image super-resolution reconstruction method according to the first step, further comprising:
(1) First, the image in RGB color channels is converted into YUV channels. When the image is divided into chrominance channels (U and V channels) and a luminance channel (Y channel) using the YUV color space, the conversion of the low-resolution input image into the YUV channels is implemented using Equation 1.
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B    (1)
In Equation 1, Y represents the luminance channel of the image, U the blue-difference chrominance channel, and V the red-difference chrominance channel; R, G, and B represent the red, green, and blue component channels of the image. Equation 1 realizes the conversion of the image between the RGB channels and the YUV channels, i.e. the first step of the algorithm of this patent.
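A minimal sketch of the Equation 1 conversion, assuming the standard analog-YUV (BT.601) coefficients given above, since the original equation is reproduced only as an image in the publication:

```python
import numpy as np

def rgb_to_yuv(rgb: np.ndarray) -> np.ndarray:
    """Equation 1 applied pixel-wise; rgb has shape (H, W, 3), float in [0, 1]."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y
                  [-0.147, -0.289,  0.436],   # U
                  [ 0.615, -0.515, -0.100]])  # V
    return rgb @ m.T
```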
In Equation 2, G is the guide image and Y the output image. An image optimization objective function is constructed in combination with the noise-reduction model and then minimized to realize edge enhancement:
E(a_k, b_k) = Σ_{i∈w_k} ((a_k G_i + b_k - Y_i)^2 + ε a_k^2),  Y_i^GF = a_k G_i + b_k    (2)
where Y_i^GF is the filtered output at pixel i, E_k and D_k are the mean and variance of the guide image G in the local region w_k around pixel k, and ε is a regularization term that ensures the gradient is not too large.
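A minimal sketch of the guided-filter edge enhancement, assuming the self-guided form (G = Y) and an unsharp-style detail boost; the radius, ε, and gain values are illustrative assumptions:

```python
import cv2  # requires opencv-contrib-python for cv2.ximgproc
import numpy as np

def guided_filter_enhance(img: np.ndarray, radius: int = 4,
                          eps: float = 1e-2, gain: float = 0.5) -> np.ndarray:
    """Edge enhancement on a float32 channel in [0, 1]: a guided filter with
    the image as its own guide yields a smooth base layer, and the residual
    detail layer (which carries the edges) is amplified on top of it."""
    base = cv2.ximgproc.guidedFilter(img, img, radius, eps)
    detail = img - base                      # high-frequency edge residual
    return np.clip(img + gain * detail, 0.0, 1.0)
```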
The third step: the output of Equation (1) is taken as the input of the three-layer structure, and the three channels are processed separately. The chrominance channels are first enlarged by the bicubic interpolation of Equations (3-4) and then processed by the guided filtering of Equation (2) to obtain the edge-enhanced chrominance channel images, which serve as inputs for the final combination with the processed luminance channel.
Bicubic Interpolation is an interpolation method that uses the bicubic function as its basis function. The bicubic function is shown in Equation 3:
W(x) = (a+2)|x|^3 - (a+3)|x|^2 + 1,   for |x| ≤ 1
W(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a, for 1 < |x| < 2
W(x) = 0,                             otherwise    (3)
In Equation (3), the best effect is obtained when a = -0.5. Let the source image be l with size m × n, and the target image after K-times scaling be L with size M × N, that is, K = M/m. To obtain the value of each pixel (X, Y) in the target image L, the corresponding pixel point (x, y) must be found in the source image, and the 16 neighbouring pixel points around (x, y) are then used as parameters for calculating the target pixel point (X, Y). According to the proportion x/X = y/Y = 1/K, the coordinates of the pixel point in the source image l corresponding to (X, Y) are obtained as (x, y) = (X/K, Y/K) = (X·m/M, Y·n/N).
The point f is the position in the source image corresponding to the point (X, Y) of the target image L, and its coordinate is written f(x, y) = f(i + u, j + v), where (i, j) denotes a given pixel point in the image, u the horizontal distance and v the vertical distance between that fixed point and the point to be solved. The target pixel value is therefore interpolated by Equation (4):
f(x,y)=f(i+u,j+v)=A×B×C (4)
A = [ W(1+u)  W(u)  W(1-u)  W(2-u) ]

B = [ f(i-1, j-1)  f(i-1, j)  f(i-1, j+1)  f(i-1, j+2)
      f(i,   j-1)  f(i,   j)  f(i,   j+1)  f(i,   j+2)
      f(i+1, j-1)  f(i+1, j)  f(i+1, j+1)  f(i+1, j+2)
      f(i+2, j-1)  f(i+2, j)  f(i+2, j+1)  f(i+2, j+2) ]

C = [ W(1+v)  W(v)  W(1-v)  W(2-v) ]^T
where W is the Bicubic interpolation function shown above.
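A direct sketch of Equations (3) and (4), assuming a = -0.5 and interior pixels (border handling omitted):

```python
import numpy as np

def W(x: float, a: float = -0.5) -> float:
    """Bicubic basis function of Equation (3)."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_pixel(src: np.ndarray, x: float, y: float) -> float:
    """Evaluate f(x, y) = A x B x C (Equation 4) at fractional coordinates;
    valid for interior pixels only."""
    i, j = int(np.floor(x)), int(np.floor(y))
    u, v = x - i, y - j
    row = np.array([W(1 + u), W(u), W(1 - u), W(2 - u)])   # A
    col = np.array([W(1 + v), W(v), W(1 - v), W(2 - v)])   # C
    neigh = src[i - 1:i + 3, j - 1:j + 3]                  # B: 16 neighbours
    return float(row @ neigh @ col)
```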
The fourth step: in the luminance channel, I_lr and I_hr denote the input and output of the architecture, respectively. The first convolutional layer extracts features f_1 from the input low-resolution image I_lr:
f_1 = SLF_1(I_lr)    (5)
In the above formula, SLF (shallow layer features) denotes shallow feature extraction, and SLF_i(x) is the i-th convolution operation on x. Further feature extraction and residual operations are performed using f_1. SLF_2(x), the convolution operation of the second layer, is also computed; its output serves as the input of the residual dense network. If the number of residual dense blocks is n, the output f_n of the n-th residual dense block is calculated.
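A minimal sketch of this shallow feature extraction stage; the filter counts (256, then 64) follow the training settings given in the detailed description and are assumptions, not limitations of the method:

```python
import torch
import torch.nn as nn

slf1 = nn.Conv2d(1, 256, kernel_size=3, padding=1)    # f1 = SLF_1(I_lr)
slf2 = nn.Conv2d(256, 64, kernel_size=3, padding=1)   # second-layer convolution

i_lr = torch.rand(1, 1, 32, 32)   # one LR luminance patch
f1 = slf1(i_lr)
f2 = slf2(f1)                     # input to the residual dense network
```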
The fifth step: layered feature calculation is performed on the residual dense blocks to obtain a dense feature fusion that contains both low-resolution image features and residual dense block features, as shown in Equations (6-8).
f_2 = SLF_2(I_lr)    (6)
f_n = SLF_RDB,n(f_(n-1))    (7)
f_n = SLF_RDB,n(SLF_RDB,n-1(···(SLF_RDB,1(f_1))···))    (8)
Where RDB (Residual Dense Block) denotes the residual dense block and SLF_RDB,n the operation of the n-th RDB. SLF_RDB,n consists of several processes, including convolution and rectified linear units (ReLU).
The sixth step: in the fifth step, the residual dense block is designed with 3 convolutional layers, each followed by an activation function. The first convolutional layer is skip-connected to the second and third convolutional layers, and the second convolutional layer is skip-connected to the third. f_DFF is the dense feature fusion, i.e. the feature map obtained by applying the merging function SLF_DFF. After the features of the residual dense blocks have been computed in the low-resolution image and low-resolution space, a sub-pixel convolution is used to upscale the features to the high-resolution image.
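A sketch of one residual dense block with three convolutional layers and the skip links described above, followed by the sub-pixel upscaling stage; the growth rate, the 1×1 fusion layer standing in for SLF_DFF, and the ×2 scale are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Residual dense block with 3 convolutional layers: each layer receives
    the concatenation of all earlier outputs (the skip links above), and a
    1x1 convolution stands in for the merging function SLF_DFF."""
    def __init__(self, ch: int = 64, growth: int = 32):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(ch + growth, growth, 3, padding=1)
        self.conv3 = nn.Conv2d(ch + 2 * growth, growth, 3, padding=1)
        self.fuse = nn.Conv2d(ch + 3 * growth, ch, 1)  # dense feature fusion
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c1 = self.relu(self.conv1(x))
        c2 = self.relu(self.conv2(torch.cat([x, c1], dim=1)))
        c3 = self.relu(self.conv3(torch.cat([x, c1, c2], dim=1)))
        return x + self.fuse(torch.cat([x, c1, c2, c3], dim=1))  # local residual

upscale = nn.Sequential(              # sub-pixel convolution, x2 scale shown
    nn.Conv2d(64, 64 * 2 ** 2, 3, padding=1),
    nn.PixelShuffle(2),
    nn.Conv2d(64, 1, 3, padding=1),   # reconstruct the HR Y channel
)
```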
The seventh step: the individual YUV channel component maps are synthesized into the final high-resolution output result image using Equation 9.
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U    (9)
In Equation 9, Y represents the luminance channel of the image, U the blue-difference chrominance channel, and V the red-difference chrominance channel; R, G, and B represent the red, green, and blue component channels of the image. Equation 9 realizes the conversion of the image from the YUV channels to the RGB channels, i.e. the final step of the algorithm of this patent.
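A sketch of the Equation 9 inverse conversion, again assuming the standard analog-YUV coefficients, since the original equation is reproduced only as an image:

```python
import numpy as np

def yuv_to_rgb(yuv: np.ndarray) -> np.ndarray:
    """Equation 9 applied pixel-wise; inverse of the Equation 1 conversion."""
    m = np.array([[1.0,  0.000,  1.140],   # R
                  [1.0, -0.395, -0.581],   # G
                  [1.0,  2.032,  0.000]])  # B
    return np.clip(yuv @ m.T, 0.0, 1.0)
```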
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a general structural view of the present invention.
Fig. 3 is a diagram of a RECNN network structure.
FIG. 4 is a graph comparing results on a public data set with a classical algorithm.
FIG. 5 is a graph comparing results on a public data set with a classical algorithm.
Detailed Description
The experiments of the embodiment of the invention were completed on a Ubuntu Server 16.04 x64 system, using an NVIDIA GeForce GTX 1660 6G GPU for training. The PyTorch deep learning framework was adopted for both training and testing, and the Adam optimizer was used during training to update the weights of each layer. The DIV2K data set was used for model training; to obtain more training samples, the DIV2K images were augmented by mirroring and by rotations of 0°, 90°, 180°, and 270°, yielding 8 times the original number of training samples. To evaluate the proposed model, four common test data sets with different characteristics were used: Set5, Set14, BSD100, and Urban100.
During training, the DIV2K database was used. A different model was trained for each scale (×2, ×3, ×4), using 256 filters for the first convolutional layer and 64 filters for the rest of the model. Optimization used the Adam optimizer with β1 = 0.9, β2 = 0.999, and ε = 1e-8. The batch size was set to 16 and the learning rate to 1e-4. Training images were cropped into LR sub-images of 32 × 32 pixels. The input and output are both RGB images. All biases were set to zero.
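A training-step sketch matching the stated settings; the stand-in model and the L1 loss are assumptions, since the text does not specify the loss function:

```python
import torch
from torch import nn, optim

# Stand-in Y-channel network; the real model is the residual dense CNN above.
model = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 1, 3, padding=1))
optimizer = optim.Adam(model.parameters(), lr=1e-4,
                       betas=(0.9, 0.999), eps=1e-8)
criterion = nn.L1Loss()  # assumed; the loss is not named in the text

lr_patch = torch.rand(16, 1, 32, 32)  # batch of 16 LR sub-images of 32 x 32
hr_patch = torch.rand(16, 1, 32, 32)  # same size here: stand-in does no upscaling
optimizer.zero_grad()
loss = criterion(model(lr_patch), hr_patch)
loss.backward()
optimizer.step()
```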
For quantitative evaluation of the model, the Structural Similarity (SSIM, Equation 10) and the Peak Signal-to-Noise Ratio (PSNR, Equation 11) are used as quality evaluation criteria. These indices are commonly used quantitative evaluation indices in image SR technology.
SSIM(x, y) = ((2 u_x u_y + C_1)(2 σ_xy + C_2)) / ((u_x^2 + u_y^2 + C_1)(σ_x^2 + σ_y^2 + C_2))    (10)

PSNR = 10 log10(peak^2 / MSE)    (11)
In Equation 10, u_x and u_y denote the gray-level means of the sample image x and the reconstructed image y, σ_x^2 and σ_y^2 their variances, and σ_xy their covariance; C_1 and C_2 are small constants that prevent the denominator from being zero. The index takes values in [0, 1]: the higher the structural similarity, the smaller the distortion of the image and the more similar the reconstructed image is to the original, i.e. the better the image quality.
In Equation 11, peak denotes the maximum value a pixel can take (if the single-channel bit width of the pixel values is n, then peak = 2^n - 1) and MSE is the mean squared error between corresponding pixels of the two images. The peak signal-to-noise ratio is the most widely and generally used image quality evaluation index; it is based on the error between corresponding pixels, i.e. on error-sensitive image quality evaluation. The higher the PSNR, the higher the image quality.
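A sketch of the two evaluation indices; the single-window (global) form of SSIM is used for brevity, whereas practical implementations average over local windows:

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 255.0) -> float:
    """Equation 11: PSNR = 10 * log10(peak^2 / MSE)."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x: np.ndarray, y: np.ndarray,
                c1: float = (0.01 * 255) ** 2,
                c2: float = (0.03 * 255) ** 2) -> float:
    """Single-window form of Equation 10 over the whole image."""
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - ux) * (y - uy)).mean()
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))
```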
The results obtained are shown in table 1.
Table 1: public data set test result comparison table
Figure BDA0003559083450000074
Note that the result of thickening is the optimal result
Experiments on the public data sets show that the results of the proposed algorithm are superior to those of Bicubic, SRCNN, FSRCNN, VDSR, and DRCN, making it the best of the compared algorithms. This demonstrates that the improved algorithm model effectively raises image quality and is an effective method.

Claims (7)

1. An improved image super-resolution reconstruction method based on residual dense CNN and edge enhancement is characterized by comprising the following steps:
(1) aiming at research on deep-learning super-resolution reconstruction methods, a new super-resolution algorithm is provided based on a convolutional neural network algorithm: an improved image super-resolution reconstruction method based on a residual dense convolutional neural network and edge enhancement (RECNN);
(2) the RECNN architecture mainly consists of three layers and performs a process comprising the following steps:
firstly, converting a low-resolution original image into a YUV channel; decomposing an input image into a luminance channel (i.e., Y channel) and a chrominance channel (i.e., U channel and V channel) by using the YUV channel;
a first layer: firstly, carrying out bicubic interpolation method amplification on an image decomposed from a U channel, and then carrying out edge enhancement on the amplified U channel image by using guide filtering; obtaining an enlarged image after edge enhancement;
a second layer: obtaining an enlarged image after edge enhancement by using the same method as the first layer on the image decomposed from the V channel;
and a third layer: for the decomposed Y-channel image, an improved residual dense convolutional neural network is used; features are extracted by two residual dense blocks and then upscaled by sub-pixel convolution to obtain an enlarged, edge-enhanced image;
and finally, combining the results of the three channels, and converting the results into RGB channels to obtain a super-resolution result image.
2. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
(1) firstly, converting an image of an RGB color channel into a YUV channel, and when the image is divided into a chrominance channel (i.e., U, V channel) and a luminance channel (i.e., Y channel) by using the YUV channel, converting a low-resolution input image into the YUV channel is realized by using equation 1;
Y = 0.299R + 0.587G + 0.114B
U = -0.147R - 0.289G + 0.436B
V = 0.615R - 0.515G - 0.100B    (1)
in formula 1, Y represents the luminance channel of the image, U the blue-difference chrominance channel, and V the red-difference chrominance channel; R represents the red component channel of the image, G the green component channel, and B the blue component channel; the conversion of the image between the RGB channels and the YUV channels is realized through formula 1, i.e. the first step of the algorithm of this patent is realized;
in formula 2, G is the guide image and Y the output image; an image optimization objective function, E(a_k, b_k) = Σ_{i∈w_k} ((a_k G_i + b_k - Y_i)^2 + ε a_k^2), is constructed in combination with the noise-reduction model and then minimized to realize edge enhancement, with filtered output Y_i^GF = a_k G_i + b_k at pixel i; E_k and D_k are the mean and variance of the guide image G in the local region around pixel k, and ε is a regularization term that ensures the gradient is not too large.
3. The method for improving image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 2, further comprising:
taking the output of formula (1) in claim 2 as the input of the three-layer structure, the three channels being processed separately; firstly, the chrominance channels are enlarged by the bicubic interpolation of formulas (3-4), and the guided filtering of formula (2) is then applied to obtain the edge-enhanced chrominance channel maps, which serve as inputs for the final combination with the processed luminance channel;
bicubic interpolation is an interpolation method using the bicubic function as its basis function; the bicubic function is shown in equation 3:
W(x) = (a+2)|x|^3 - (a+3)|x|^2 + 1,   for |x| ≤ 1
W(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a, for 1 < |x| < 2
W(x) = 0,                             otherwise    (3)
in formula (3), the best effect is obtained when a = -0.5; let the source image be l with size m × n, and the target image after K-times scaling be L with size M × N, i.e. K = M/m; to obtain the value of each pixel (X, Y) in the target image L, the corresponding pixel point (x, y) must be found in the source image, and the 16 neighbouring pixel points around (x, y) are then used as parameters for calculating the target pixel point (X, Y); according to the proportion x/X = y/Y = 1/K, the coordinates of the pixel point in the source image l corresponding to (X, Y) are obtained as (x, y) = (X/K, Y/K) = (X·m/M, Y·n/N);
the point f is the position in the source image corresponding to the point (X, Y) of the target image L, and its coordinate is written f(x, y) = f(i + u, j + v), where (i, j) denotes a given pixel point in the image, u the horizontal distance and v the vertical distance between that fixed point and the point to be solved; the target pixel value is therefore interpolated by equation (4):
f(x,y)=f(i+u,j+v)=A×B×C (4)
A = [ W(1+u)  W(u)  W(1-u)  W(2-u) ]

B = [ f(i-1, j-1)  f(i-1, j)  f(i-1, j+1)  f(i-1, j+2)
      f(i,   j-1)  f(i,   j)  f(i,   j+1)  f(i,   j+2)
      f(i+1, j-1)  f(i+1, j)  f(i+1, j+1)  f(i+1, j+2)
      f(i+2, j-1)  f(i+2, j)  f(i+2, j+1)  f(i+2, j+2) ]

C = [ W(1+v)  W(v)  W(1-v)  W(2-v) ]^T
where W is the Bicubic interpolation function shown above.
4. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
in the luminance channel, I_lr and I_hr denote the input and output of the architecture, respectively; the first convolutional layer extracts features f_1 from the input low-resolution image I_lr:
f_1 = SLF_1(I_lr)    (5)
in the above formula, SLF (shallow layer features) denotes shallow feature extraction, and SLF_i(x) is the i-th convolution operation on x; further feature extraction and residual operations are performed using f_1; SLF_2(x), the convolution operation of the second layer, is also computed, and its output serves as the input of the residual dense network; if the number of residual dense blocks is n, the output f_n of the n-th residual dense block is calculated.
5. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 4, further comprising:
performing layered feature calculation on the residual dense blocks to obtain a dense feature fusion containing both low-resolution image features and residual dense block features, as shown in formulas (6-8);
f_2 = SLF_2(I_lr)    (6)
f_n = SLF_RDB,n(f_(n-1))    (7)
f_n = SLF_RDB,n(SLF_RDB,n-1(···(SLF_RDB,1(f_1))···))    (8)
where RDB (Residual Dense Block) denotes the residual dense block and SLF_RDB,n the operation of the n-th RDB; SLF_RDB,n consists of several processes, including convolution and rectified linear units (ReLU).
6. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 4, further comprising:
the residual dense block is designed with 3 convolutional layers, each followed by an activation function; the first convolutional layer is skip-connected to the second and third convolutional layers, and the second convolutional layer is skip-connected to the third; f_DFF is the dense feature fusion, i.e. the feature map obtained by applying the merging function SLF_DFF; after the features of the residual dense blocks have been computed in the low-resolution image and low-resolution space, a sub-pixel convolution is used to upscale the features to the high-resolution image.
7. The method for improving image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
synthesizing the component maps of all YUV channels into the final high-resolution output result image using formula 9;
R = Y + 1.140V
G = Y - 0.395U - 0.581V
B = Y + 2.032U    (9)
in formula 9, Y represents the luminance channel of the image, U the blue-difference chrominance channel, and V the red-difference chrominance channel; R represents the red component channel of the image, G the green component channel, and B the blue component channel.
CN202210288078.2A 2022-03-22 2022-03-22 Improved image super-resolution algorithm based on residual dense CNN and edge enhancement Active CN114820302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210288078.2A CN114820302B (en) 2022-03-22 2022-03-22 Improved image super-resolution algorithm based on residual dense CNN and edge enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210288078.2A CN114820302B (en) 2022-03-22 2022-03-22 Improved image super-resolution algorithm based on residual dense CNN and edge enhancement

Publications (2)

Publication Number Publication Date
CN114820302A 2022-07-29
CN114820302B 2024-05-24

Family

ID=82531695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210288078.2A Active CN114820302B (en) 2022-03-22 2022-03-22 Improved image super-resolution algorithm based on residual dense CNN and edge enhancement

Country Status (1)

Country Link
CN (1) CN114820302B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358575A (en) * 2017-06-08 2017-11-17 Tsinghua University (清华大学) A single-image super-resolution reconstruction method based on a deep residual network
AU2020100200A4 (en) * 2020-02-08 2020-06-11 Huang, Shuying DR Content-guide Residual Network for Image Super-Resolution
CN111640060A (en) * 2020-04-30 2020-09-08 南京理工大学 Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module
CN112150360A (en) * 2020-09-16 2020-12-29 北京工业大学 IVUS image super-resolution reconstruction method based on dense residual error network
CN112801904A (en) * 2021-02-01 2021-05-14 武汉大学 Hybrid degraded image enhancement method based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘月峰; 杨涵晰; 蔡爽; 张晨荣: "Single image super-resolution reconstruction method based on an improved convolutional neural network" (基于改进卷积神经网络的单幅图像超分辨率重建方法), Journal of Computer Applications (计算机应用), no. 05, 28 November 2018 (2018-11-28) *

Also Published As

Publication number Publication date
CN114820302B (en) 2024-05-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant