CN114820302A - Improved image super-resolution algorithm based on residual dense CNN and edge enhancement - Google Patents
Improved image super-resolution algorithm based on residual dense CNN and edge enhancement Download PDFInfo
- Publication number
- CN114820302A CN114820302A CN202210288078.2A CN202210288078A CN114820302A CN 114820302 A CN114820302 A CN 114820302A CN 202210288078 A CN202210288078 A CN 202210288078A CN 114820302 A CN114820302 A CN 114820302A
- Authority
- CN
- China
- Prior art keywords
- image
- channel
- resolution
- residual dense
- slf
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 claims abstract description 45
- 238000013527 convolutional neural network Methods 0.000 claims abstract description 21
- 238000001914 filtration Methods 0.000 claims abstract description 5
- 230000006870 function Effects 0.000 claims description 16
- 230000008569 process Effects 0.000 claims description 6
- 238000012545 processing Methods 0.000 claims description 5
- 238000006243 chemical reaction Methods 0.000 claims description 4
- 238000013135 deep learning Methods 0.000 claims description 4
- 238000000605 extraction Methods 0.000 claims description 4
- 230000004927 fusion Effects 0.000 claims description 4
- 238000011160 research Methods 0.000 claims description 4
- 230000003321 amplification Effects 0.000 claims description 3
- 238000005457 optimization Methods 0.000 claims description 3
- 230000004913 activation Effects 0.000 claims description 2
- 238000013528 artificial neural network Methods 0.000 claims description 2
- 238000004364 calculation method Methods 0.000 claims description 2
- 230000000694 effects Effects 0.000 claims description 2
- 230000009467 reduction Effects 0.000 claims description 2
- 238000005728 strengthening Methods 0.000 claims 1
- 230000002194 synthesizing effect Effects 0.000 claims 1
- 238000013459 approach Methods 0.000 abstract 2
- 238000012549 training Methods 0.000 description 10
- 238000005516 engineering process Methods 0.000 description 6
- 238000011161 development Methods 0.000 description 5
- 238000013441 quality evaluation Methods 0.000 description 3
- 238000012360 testing method Methods 0.000 description 3
- 238000002474 experimental method Methods 0.000 description 2
- 238000011158 quantitative evaluation Methods 0.000 description 2
- 239000000969 carrier Substances 0.000 description 1
- 230000007547 defect Effects 0.000 description 1
- 238000010586 diagram Methods 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 230000001575 pathological effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000008719 thickening Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an improved image super-resolution reconstruction method based on a residual dense convolutional neural network and edge enhancement. For the chrominance channels, the picture is first enlarged by bicubic interpolation and its edges are then sharpened with guided filtering, enhancing the edge features. For the luminance channel, to deepen the model and better improve its performance, the proposed residual dense convolutional neural network is used, stacking two residual dense blocks to better extract the edge features and detail information of that channel. This approach effectively preserves high-frequency details and yields better results than other approaches. The performance of the proposed algorithm is evaluated on different image data sets and compared with other methods; the experimental results confirm that the proposed method is superior.
Description
Technical Field
The invention relates to the field of digital image processing, in particular to research and implementation of an image super-resolution reconstruction method.
Background
With the development of smart phones and the internet, images have become important information carriers in people's daily lives. An image is a record and expression of the objective world; its resolution directly reflects its definition and measures its information content. Single-image super-resolution (SISR) technology has broad application prospects and can be applied to fields such as video surveillance, target recognition, and medical image analysis.
Single-image super-resolution refers to the ill-posed problem of estimating a high-resolution (HR) image from a low-resolution (LR) input image; the SISR problem has received considerable attention from the computer vision and image processing communities.
SISR is particularly challenging because many distinct HR images can be down-sampled to the same LR image. Many methods have been proposed to address it, falling roughly into three classes: interpolation-based, reconstruction-based, and learning-based. Because interpolation- and reconstruction-based methods perform poorly at high magnifications, current research focuses mainly on example-based methods that learn a non-linear mapping between low- and high-resolution images.
With the rapid development and successful application of deep learning, many new SISR methods based on deep convolutional neural networks (CNNs) have been proposed with great success. Since the pioneering work SRCNN used a three-layer CNN to solve the SISR problem, many CNN-based super-resolution methods have followed. Kim et al.'s VDSR increased the network depth to 20 layers, a significant improvement over SRCNN, showing that the quality of the generated image is closely tied to the depth and scale of the model. Mao et al. designed a 30-layer convolutional auto-encoder with symmetric skip connections for SISR and image denoising. To reduce model complexity and training difficulty, DRCN employs recursive supervision and skip connections.
In general, super-resolution reconstruction can effectively improve the resolution and definition of an image at low cost, without replacing hardware. As the technology develops, its application fields keep expanding and its prospects grow broader; it will have a profound effect on future scientific and technological development. Further study of this technology is therefore very important.
Disclosure of Invention
The invention aims to overcome the defects of existing methods and to provide an improved image super-resolution algorithm based on residual dense CNN and edge enhancement.
The method comprises the following specific steps:
the first step is as follows: an improved image super-resolution reconstruction method based on residual dense convolutional neural network and edge enhancement is characterized by comprising the following steps:
(1) Targeting deep-learning super-resolution reconstruction, a novel super-resolution algorithm is proposed on the basis of convolutional neural networks (CNNs): an improved image super-resolution reconstruction method based on a Residual dense convolutional neural network and Edge enhancement (RECNN).
(2) The RECNN architecture consists mainly of 3 layers, performing a process comprising the following steps:
The low-resolution original image is first converted to the YUV space, decomposing the input image into a luminance channel (Y channel) and chrominance channels (U channel and V channel).
First layer: the image decomposed from the U channel is first enlarged by bicubic interpolation, and the enlarged U-channel image is then edge-enhanced with guided filtering, yielding an enlarged, edge-enhanced image.
Second layer: the image decomposed from the V channel is processed with the same method as the first layer, yielding an enlarged, edge-enhanced image.
Third layer: the decomposed Y-channel image is fed to the improved residual dense convolutional neural network; features are extracted by two residual dense blocks and then upscaled by sub-pixel convolution, yielding an enlarged, edge-enhanced image.
Finally, the results of the three channels are combined and converted back to the RGB channels to obtain the super-resolution result image.
The second step: the image super-resolution reconstruction method according to the first step (1), further comprising:
(1) First, the image in the RGB color channels is converted to the YUV channels; the image is split into the chrominance channels (U, V) and the luminance channel (Y), and the conversion of the low-resolution input image to YUV is implemented using formula 1.
In formula 1, Y denotes the luminance channel of the image, U its blue-difference chrominance channel, and V its red-difference chrominance channel; R denotes the red component channel of the image, G the green component channel, and B the blue component channel. Formula 1 realizes the conversion of the image between the RGB and YUV channels, i.e., the first step of the present algorithm.
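Formula 1 itself is not reproduced in this text. As a minimal sketch, the conversion could use the common BT.601-style matrix (an assumption — the patent may specify different coefficients):

```python
import numpy as np

# BT.601-style RGB -> YUV matrix; the patent's "formula 1" is not shown
# in the text, so this particular matrix is an assumption.
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img):
    """img: (H, W, 3) float array in [0, 1] -> (Y, U, V) channel arrays."""
    yuv = img @ RGB2YUV.T
    return yuv[..., 0], yuv[..., 1], yuv[..., 2]

def yuv_to_rgb(y, u, v):
    """Inverse conversion (corresponding to the patent's formula 9)."""
    return np.stack([y, u, v], axis=-1) @ YUV2RGB.T
```

The inverse matrix gives the YUV-to-RGB conversion used in the final step, so the round trip recovers the original image up to floating-point error.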
In formula 2, G is the guide image and Y the output image; an image-optimization objective function is constructed from a noise-reduction model and then minimized to realize edge enhancement. Y_i^GF is the filtered output at pixel i; E_k and D_k are the mean and variance of the guide image G over the local window around pixel k, and ε is a regularization term that prevents the gradient from becoming too large.
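Formula 2 is not reproduced here. The E_k / D_k / ε description matches the standard gray-scale guided filter, sketched below; the window radius and ε defaults are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Gray-scale guided filter. `eps` is the regularization term the
    text calls epsilon; it keeps the per-window gain `a` from growing
    too large. Window mean/variance play the roles of E_k and D_k."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)          # E_k of the guide
    mean_s = uniform_filter(src, size)
    corr_gs = uniform_filter(guide * src, size)
    corr_gg = uniform_filter(guide * guide, size)
    var_g = corr_gg - mean_g * mean_g             # D_k of the guide
    cov_gs = corr_gs - mean_g * mean_s
    a = cov_gs / (var_g + eps)                    # per-window linear gain
    b = mean_s - a * mean_g                       # per-window offset
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```

Edge enhancement can then be obtained unsharp-mask style, e.g. `sharpened = src + alpha * (src - guided_filter(src, src))`, using the image as its own guide.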
The third step: the output of formula (1) in claim 2 is taken as the input of the three-layer structure, and the three channels are processed differently. The chrominance channels are first enlarged by the bicubic interpolation of formulas (3)-(4) and then processed by the guided filtering of formula (2), yielding edge-enhanced chrominance channel images that serve as inputs for the final combination with the processed luminance channel.
Bicubic Interpolation is an interpolation method that uses a bicubic function as its basis function. The bicubic kernel W is shown in formula 3:
In formula (3), a = −0.5 gives the best effect. Let the source image be l with size m × n, and the target image after K-times scaling be L with size M × N, i.e., K = M/m. To obtain each pixel value (X, Y) of the target image L, the corresponding pixel point must be found in the source image, and the 16 neighbouring pixel points around it are used as parameters for computing the target pixel. By proportionality, the coordinates of the point corresponding to (X, Y) in image l are (X/K, Y/K).
The point f is the position in the source image corresponding to the point (X, Y) of the target image L, with coordinates f(x, y) = f(i + u, j + v), where (i, j) is a given pixel point of the image, u is the horizontal distance and v the vertical distance between the fixed point and the point to be solved. The target pixel value is therefore interpolated by formula (4):
f(x, y) = f(i + u, j + v) = A × B × C (4)
where W is the bicubic interpolation function shown above.
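Formulas 3 and 4 are not reproduced in the text. A sketch of the standard bicubic (cubic convolution) kernel and the 16-neighbour interpolation the description implies, assuming a = −0.5, is:

```python
import numpy as np

def W(x, a=-0.5):
    """Cubic convolution kernel (the text's formula 3); a = -0.5 is the
    common choice."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_sample(img, x, y):
    """Interpolate img at real-valued (x, y) = (i+u, j+v) from the
    4 x 4 = 16 neighbouring pixels (the text's formula 4)."""
    i, j = int(np.floor(x)), int(np.floor(y))
    u, v = x - i, y - j
    val = 0.0
    for m in range(-1, 3):          # rows i-1 .. i+2
        for n in range(-1, 3):      # cols j-1 .. j+2
            px = img[np.clip(i + m, 0, img.shape[0] - 1),
                     np.clip(j + n, 0, img.shape[1] - 1)]
            val += px * W(m - u) * W(n - v)
    return val
```

At integer coordinates the kernel weights reduce to selecting the pixel itself, so the interpolation reproduces the source samples exactly.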
The fourth step: in the luminance channel, I_lr and I_hr denote the input and the output of the architecture, respectively. The first convolutional layer extracts a feature f_1 from the input low-resolution image I_lr:
f_1 = SLF_1(I_lr) (5)
In the formula above, SLF (shallow layer features) denotes shallow feature extraction, and SLF_i(x) is the i-th convolution operation on x. Further feature extraction and residual operations are performed on f_1. SLF_2(x), the convolution operation of the second layer, is also computed; its output serves as the input of the residual dense network. With n residual dense blocks, the output f_n of the n-th residual dense block is computed.
The fifth step: layered feature calculation over the residual dense blocks yields a dense feature fusion containing both the low-resolution image features and the residual-dense-block features, as given by formulas (6)-(8).
f_2 = SLF_2(I_lr) (6)
f_n = SLF_RDB,n(f_{n-1}) (7)
f_n = SLF_RDB,n(SLF_RDB,n-1(···(SLF_RDB,1(f_1))···)) (8)
Here RDB (Residual Dense Block) denotes a residual dense block, and SLF_RDB,n the operation of the n-th RDB; SLF_RDB,n consists of several processes, including convolution and rectified linear units (ReLUs).
The sixth step: in the fifth step, each residual dense block is designed with 3 convolutional layers, each followed by an activation function. The first convolutional layer is skip-connected to the second and third convolutional layers, and the second convolutional layer is skip-connected to the third. f_DFF is the dense feature fusion; the feature map is obtained by applying the merging function SLF_DFF. After computing the residual-dense-block features in low-resolution space, a sub-pixel convolution is used to enlarge the image to high resolution.
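As a hedged sketch of the sixth step, a 3-convolution residual dense block with dense skip connections, a 1×1 fusion convolution, and sub-pixel (PixelShuffle) upscaling might look as follows in PyTorch; the channel count and growth rate are assumptions not fixed by the text:

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """3 conv layers; each layer's input is the concatenation of all earlier
    feature maps (the skip links of the sixth step), and a 1x1 fusion conv
    (SLF_DFF-style) brings the channel count back down."""
    def __init__(self, ch=64, growth=32):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, growth, 3, padding=1)
        self.conv2 = nn.Conv2d(ch + growth, growth, 3, padding=1)
        self.conv3 = nn.Conv2d(ch + 2 * growth, growth, 3, padding=1)
        self.fuse = nn.Conv2d(ch + 3 * growth, ch, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        f1 = self.relu(self.conv1(x))
        f2 = self.relu(self.conv2(torch.cat([x, f1], 1)))
        f3 = self.relu(self.conv3(torch.cat([x, f1, f2], 1)))
        return x + self.fuse(torch.cat([x, f1, f2, f3], 1))  # local residual

class Upscale(nn.Module):
    """Sub-pixel convolution (PixelShuffle) used to enlarge the Y channel."""
    def __init__(self, ch=64, scale=2):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))
```

Stacking two such blocks, as the abstract describes, would then be `nn.Sequential(ResidualDenseBlock(), ResidualDenseBlock())`.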
The seventh step: the individual YUV channel component maps are synthesized into the final high-resolution output image using formula 9.
In formula 9, Y denotes the luminance channel of the image, U its blue-difference chrominance channel, and V its red-difference chrominance channel; R denotes the red component channel of the image, G the green component channel, and B the blue component channel. Formula 9 realizes the conversion of the image from the YUV channels back to the RGB channels, i.e., the final step of the present algorithm.
Drawings
FIG. 1 flow chart of the present invention
Fig. 2 is a general structural view of the present invention.
Fig. 3 is a diagram of a RECNN network structure.
FIG. 4 is a graph comparing results on a public data set with a classical algorithm.
FIG. 5 is a graph comparing results on a public data set with a classical algorithm.
Detailed Description
The experiments of this embodiment were completed on an Ubuntu Server 16.04 x64 system, with training on an NVIDIA GeForce GTX 1660 6G GPU. The PyTorch deep-learning framework was used for both training and testing, and the Adam optimizer updated the weights of each layer during training. The model was trained on the DIV2K data set; to obtain more training samples, the DIV2K images were augmented by mirroring and by rotations of 0°, 90°, 180°, and 270°, yielding 8× the original number of training samples. To evaluate the proposed model, four common test data sets with different characteristics were used: Set5, Set14, BSD100, and Urban100.
During training, the DIV2K database was used. A separate model was trained for each scale (×2, ×3, ×4), with 256 filters in the first convolutional layer and 64 filters in the rest of the model. Optimization used the Adam optimizer with β1 = 0.9, β2 = 0.999, and ε = 1e-8. The batch size was 16 and the learning rate 1e-4. Training images were cropped into 32 × 32-pixel LR sub-images. Both input and output are RGB images. All biases were set to zero.
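The stated hyper-parameters translate directly into a PyTorch optimizer setup. In the sketch below the one-layer model and the L1 loss are stand-ins (the text does not specify the loss function):

```python
import torch

# Hyper-parameters as stated in the text: Adam with beta1=0.9,
# beta2=0.999, eps=1e-8, learning rate 1e-4, batch size 16,
# 32x32 LR crops, RGB in and out.
model = torch.nn.Conv2d(3, 3, 3, padding=1)  # stand-in for the RECNN model
optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

# One training step on a random batch of 16 RGB crops of 32x32 pixels.
lr_batch = torch.randn(16, 3, 32, 32)
hr_batch = torch.randn(16, 3, 32, 32)        # stand-in target
loss = torch.nn.functional.l1_loss(model(lr_batch), hr_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```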
For quantitative evaluation of the model, the peak signal-to-noise ratio (PSNR, formula 11) is used as the quality evaluation criterion; it is the most commonly used quantitative index in image SR technology.
In formula 10, u_x and u_y denote the mean gray levels of the sample image x and the reconstructed image y, σ_x² and σ_y² their respective variances, and C_1 and C_2 small constants that prevent the denominator from being zero. This index takes values in [0, 1]; the higher the structural similarity, the smaller the distortion of the image, the closer the reconstructed image is to the original, and the better the image quality.
In formula 11, peak denotes the maximum value a pixel can take; for a single-channel bit width of n, peak = 2^n − 1. PSNR is the most widely and generally used image-quality evaluation index; it is based on the error between corresponding pixels, i.e., on error-sensitive quality evaluation. The higher the PSNR, the higher the image quality.
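Formulas 10 and 11 are not reproduced in the text. Standard definitions consistent with the descriptions above, with SSIM simplified to a single global window (real evaluations use a sliding window), are:

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio (formula 11); peak = 2**n - 1 for
    n-bit pixels."""
    mse = np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=255.0):
    """Single-window SSIM (formula 10) with the usual C1, C2 constants;
    a simplification of the windowed index described in the text."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    ux, uy = x.mean(), y.mean()            # the text's u_x, u_y
    vx, vy = x.var(), y.var()              # variances
    cov = ((x - ux) * (y - uy)).mean()
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))
```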
The results obtained are shown in table 1.
Table 1: public data set test result comparison table
Note: the bolded entries are the best results.
Experiments on the public data sets show that the proposed algorithm outperforms Bicubic, SRCNN, FSRCNN, VDSR, and DRCN, making it the best of the compared algorithms. This demonstrates that the improved model effectively raises image quality and that the method is effective.
Claims (7)
1. An improved image super-resolution reconstruction method based on residual dense CNN and edge enhancement is characterized by comprising the following steps:
(1) aiming at the research of the super-resolution reconstruction method of deep learning, a new super-resolution algorithm is provided based on a convolutional neural network algorithm: an improved image super-resolution reconstruction method based on residual dense convolutional neural network and edge enhancement;
(2) the RECNN architecture is mainly composed of 3 layers; performing a process comprising the steps of:
firstly, converting a low-resolution original image into a YUV channel; decomposing an input image into a luminance channel (i.e., Y channel) and a chrominance channel (i.e., U channel and V channel) by using the YUV channel;
a first layer: firstly, carrying out bicubic interpolation method amplification on an image decomposed from a U channel, and then carrying out edge enhancement on the amplified U channel image by using guide filtering; obtaining an enlarged image after edge enhancement;
a second layer: obtaining an enlarged image after edge enhancement by using the same method as the first layer on the image decomposed from the V channel;
and a third layer: for the decomposed Y-channel image, using an improved residual dense convolution neural network, extracting features through two residual dense blocks, and then performing sub-pixel convolution and upsampling to obtain an amplified edge enhanced image;
and finally, combining the results of the three channels, and converting the results into RGB channels to obtain a super-resolution result image.
2. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
(1) firstly, converting an image of an RGB color channel into a YUV channel, and when the image is divided into a chrominance channel (i.e., U, V channel) and a luminance channel (i.e., Y channel) by using the YUV channel, converting a low-resolution input image into the YUV channel is realized by using equation 1;
in formula 1, Y represents a luminance channel of an image, U represents a red chrominance channel of the image, and V represents a blue chrominance channel of the image; r represents a red component channel of the image, G represents a green component channel of the image, and B represents a blue component channel of the image; the conversion of the image in the RGB channel and the YUV channel can be realized through the formula 1, namely the first step of the algorithm of the patent is realized;
in formula 2, G is the guide image and Y is the output image, an image optimization objective function is constructed by combining with the noise reduction model, and then the objective function is minimized to realize edge enhancement, and Y is i GF Is its filtered output on pixel i; e k And D k Is the mean and variance of the pixel k in the local region of the guide image G, and epsilon is a regularization term to ensure that the gradient is not too large.
3. The method for improving image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 2, further comprising:
taking the output of formula (1) in claim 2 as the input of a three-layer structure, and respectively carrying out different processing on three channels; firstly, a chromaticity channel is amplified by a bi-cubic interpolation method of a formula (3-4), and then guided filtering processing is carried out by a formula (2) to obtain a chromaticity channel graph after edge strengthening; and as an input for final combination with the processed luminance channel;
bicubic Interpolation (Bicubic Interpolation) is an Interpolation method using Bicubic function as a basis function; the Bicubic function is shown in equation 4:
in the formula (3), when a is 0.5, the best effect can be obtained; setting a source image as L, the size of the source image as M multiplied by N, a target image after K times of zooming as L, and the size of the target image as M multiplied by N, namely K is M/M; in order to obtain the value of each pixel value (X, Y) in the target image L, a pixel point (X, Y) corresponding to the pixel value (X, Y) needs to be found in a source image, and then 16 adjacent pixel points around the pixel point (X, Y) are used as parameters for calculating the target pixel point (X, Y); according to the proportionThe coordinates of the corresponding pixel points of (X, Y) in the image l can be obtained as
The f point is the position of the target image L in the source image corresponding to the point (X, Y), and its coordinate is represented as f (X, Y) ═ f (i + u, j + v), where (i, j) represents a given pixel point in the image, u represents the distance between the fixed point and the point to be solved in the horizontal direction, and v represents the distance between the fixed point and the point to be solved in the vertical direction, so the target pixel value is interpolated by equation (4):
f(x,y)=f(i+u,j+v)=A×B×C (4)
where W is the Bicubic interpolation function shown above.
4. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
in the chrominance channel, I lr And I hr Respectively representing the inputs and outputs of the architecture; the first convolution layer is from the input low resolution image I lr Middle extracted feature f 1 ;
f 1 =SLF 1 (I lr ) (5)
In the above formula, SLF represents shallow feature extraction, SLF i (x) Is the ith convolution operation on x; using f 1 Performing further feature extraction and residual operation on the result; we also calculated the SLF among them 2 (x) Extracting convolution operation of a second layer for the features, wherein the output of the convolution operation is used as the input of the residual dense network; if the number of residual dense blocks is n, calculating the output f of the nth residual dense block n 。
5. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 4, further comprising:
performing layered feature calculation on a plurality of residual dense blocks to obtain dense feature fusion containing both low-resolution image features and residual dense block features, wherein the specific steps are shown as a formula (6-8);
f 2 =SLF 2 (I lr ) (6)
f n =SLF RDB,n (f n-1 ) (7)
f n =SLF RDB,n (SLF RDB,n-1 (…(SLF RDB,1 (f 1 ))…)) (8)
where RDB denotes residual dense Block, SLF RDB,n Represents the operation of the nth RDB; SLF RDB,n Consists of a number of processes including convolution and rectifying linear units (relus).
6. The method for improving the image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 4, further comprising:
the residual dense block is designed as 3 convolutional layers, each convolutional layer having an activation function; wherein the first convolution layer is jumped and linked to the second convolution layer and the third convolution base layer; the second convolutional layer jumps to the third convolutional layer; where f is DFF Is a dense feature fusion applying a merging function SLF DFF Then obtaining feature map; after computing the features of the residual dense block in the low resolution image and low resolution space, we used a sub-pixel convolution on the high resolution image to enlarge the image.
7. The method for improving image super-resolution reconstruction based on residual dense CNN and edge enhancement according to claim 1, further comprising:
synthesizing the component graphs of all YUV channels into a final high-resolution output result image by using a formula 9;
in formula 9, Y represents a luminance channel of an image, U represents a red chrominance channel of the image, and V represents a blue chrominance channel of the image; r represents the red component channel of the image, G represents the green component channel of the image, and B represents the blue component channel of the image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210288078.2A CN114820302B (en) | 2022-03-22 | 2022-03-22 | Improved image super-resolution algorithm based on residual dense CNN and edge enhancement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210288078.2A CN114820302B (en) | 2022-03-22 | 2022-03-22 | Improved image super-resolution algorithm based on residual dense CNN and edge enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114820302A true CN114820302A (en) | 2022-07-29 |
CN114820302B CN114820302B (en) | 2024-05-24 |
Family
ID=82531695
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210288078.2A Active CN114820302B (en) | 2022-03-22 | 2022-03-22 | Improved image super-resolution algorithm based on residual dense CNN and edge enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114820302B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358575A (en) * | 2017-06-08 | 2017-11-17 | 清华大学 | A kind of single image super resolution ratio reconstruction method based on depth residual error network |
AU2020100200A4 (en) * | 2020-02-08 | 2020-06-11 | Huang, Shuying DR | Content-guide Residual Network for Image Super-Resolution |
CN111640060A (en) * | 2020-04-30 | 2020-09-08 | 南京理工大学 | Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module |
CN112150360A (en) * | 2020-09-16 | 2020-12-29 | 北京工业大学 | IVUS image super-resolution reconstruction method based on dense residual error network |
CN112801904A (en) * | 2021-02-01 | 2021-05-14 | 武汉大学 | Hybrid degraded image enhancement method based on convolutional neural network |
-
2022
- 2022-03-22 CN CN202210288078.2A patent/CN114820302B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107358575A (en) * | 2017-06-08 | 2017-11-17 | 清华大学 | A kind of single image super resolution ratio reconstruction method based on depth residual error network |
AU2020100200A4 (en) * | 2020-02-08 | 2020-06-11 | Huang, Shuying DR | Content-guide Residual Network for Image Super-Resolution |
CN111640060A (en) * | 2020-04-30 | 2020-09-08 | 南京理工大学 | Single image super-resolution reconstruction method based on deep learning and multi-scale residual dense module |
CN112150360A (en) * | 2020-09-16 | 2020-12-29 | 北京工业大学 | IVUS image super-resolution reconstruction method based on dense residual error network |
CN112801904A (en) * | 2021-02-01 | 2021-05-14 | 武汉大学 | Hybrid degraded image enhancement method based on convolutional neural network |
Non-Patent Citations (1)
Title |
---|
LIU Yuefeng; YANG Hanxi; CAI Shuang; ZHANG Chenrong: "Single image super-resolution reconstruction method based on an improved convolutional neural network", Journal of Computer Applications, no. 05, 28 November 2018 (2018-11-28) *
Also Published As
Publication number | Publication date |
---|---|
CN114820302B (en) | 2024-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108734659B (en) | Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label | |
Anwar et al. | Densely residual laplacian super-resolution | |
Bashir et al. | A comprehensive review of deep learning-based single image super-resolution | |
CN108921786B (en) | Image super-resolution reconstruction method based on residual convolutional neural network | |
CN109819321B (en) | Video super-resolution enhancement method | |
CN112734646B (en) | Image super-resolution reconstruction method based on feature channel division | |
CN112801877B (en) | Super-resolution reconstruction method of video frame | |
CN107341765B (en) | Image super-resolution reconstruction method based on cartoon texture decomposition | |
CN103871041B (en) | The image super-resolution reconstructing method built based on cognitive regularization parameter | |
CN106709875A (en) | Compressed low-resolution image restoration method based on combined deep network | |
CN110136060B (en) | Image super-resolution reconstruction method based on shallow dense connection network | |
CN109949221B (en) | Image processing method and electronic equipment | |
CN112070702B (en) | Image super-resolution reconstruction system and method for multi-scale residual error characteristic discrimination enhancement | |
CN108416736B (en) | Image super-resolution reconstruction method based on secondary anchor point neighborhood regression | |
Cai et al. | TDPN: Texture and detail-preserving network for single image super-resolution | |
Cai et al. | HIPA: hierarchical patch transformer for single image super resolution | |
CN112508794B (en) | Medical image super-resolution reconstruction method and system | |
CN111951164A (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
CN115272068A (en) | Efficient interpolation method for image upsampling | |
Chen et al. | Underwater-image super-resolution via range-dependency learning of multiscale features | |
CN114820302B (en) | Improved image super-resolution algorithm based on residual dense CNN and edge enhancement | |
CN111899166A (en) | Medical hyperspectral microscopic image super-resolution reconstruction method based on deep learning | |
Cheng et al. | Adaptive feature denoising based deep convolutional network for single image super-resolution | |
CN113674154B (en) | Single image super-resolution reconstruction method and system based on generation countermeasure network | |
CN112348745B (en) | Video super-resolution reconstruction method based on residual convolutional network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |