CN111161146B - Coarse-to-fine single-image super-resolution reconstruction method - Google Patents
- Publication number: CN111161146B (application CN201911357050.4A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T3/4053 — Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4046 — Scaling of whole images or parts thereof using neural networks
- G06N3/045 — Neural network architecture: combinations of networks
Abstract
The invention belongs to the technical field of computer vision and provides a coarse-to-fine super-resolution reconstruction method for a single image. The method comprises two stages: a multi-context stage that extracts image context feature information in the low-resolution space, and a reconstruction enhancement stage that extracts and exploits feature information in the high-resolution space. The high-resolution images reconstructed by the method have good visual quality and perform well on image evaluation indices such as peak signal-to-noise ratio and structural similarity. At the same time, the time cost and hardware requirements of the method are low.
Description
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a super-resolution reconstruction method for a single image based on deep learning.
Background
The image resolution is the amount of information stored in an image and is an important index for measuring an image. A high-resolution image contains more detailed texture information and expresses that information more strongly. With the widespread use of electronic products with ever higher display resolutions, the demand for high-resolution images keeps growing. Single-image super-resolution reconstruction is the task of reconstructing a high-resolution image from a given low-resolution image. Image super-resolution reconstruction can be applied in many important fields. For example, in medical imaging (MRI and CT), super-resolution reconstruction can better help doctors determine the detailed condition of patients; in surveillance video, super-resolution reconstruction can help relevant personnel with identification tasks such as license plate recognition and face recognition. Deep learning has been successfully applied to image super-resolution reconstruction, and the currently popular approach is to use convolutional neural networks. According to the position and usage of upsampling in the network structure, super-resolution network designs can be divided into three main categories: front-end (pre-) upsampling networks, back-end (post-) upsampling networks, and progressive upsampling networks.
(1) A front-end upsampling super-resolution network generally first interpolates the low-resolution image directly to the target resolution using bicubic interpolation, and then uses a model such as a deep convolutional network to reconstruct high-quality detail information. Dong et al. were the first to successfully apply convolutional neural networks to image super-resolution reconstruction, using a simple convolutional neural network with a small number of layers to perform feature extraction, establish a nonlinear mapping, and reconstruct the high-resolution image, respectively. Kim et al. enabled the network to extract image context information better by increasing the network depth, while using residual learning and a larger learning rate to accelerate convergence. Kim et al. also proposed a deeply recursive convolutional neural network for image super-resolution reconstruction; increasing the recursion depth can improve performance without introducing new parameters for additional convolutions. These methods significantly reduce the difficulty of learning, but the fixed upsampling performed up front can introduce problems such as blurring and noise amplification, and because the network interpolates to the high-resolution space at the front end, the required memory and time consumption are far higher than those of other types of super-resolution networks.
(2) A back-end upsampling super-resolution network generally uses an end-to-end learnable upsampling layer in the last layer or layers of the network structure. Dong et al. improved their original network with respect to its time and computational cost: first, a deconvolution layer was added as the last layer; second, the dimensionality of the input feature maps was reduced while smaller convolution kernels were employed. Shi et al. introduced an efficient sub-pixel convolution layer that learns an array of upscaling filters to upsample the final low-resolution feature maps to the high-resolution output. EDSR, RDN and RCAN successively introduced residual blocks, dense connections and channel attention mechanisms into convolutional neural networks for super-resolution reconstruction. In a back-end upsampling network, most of the mapping is performed in the low-resolution space, so the computational and space complexity are markedly reduced and the training and testing speed markedly improved; this design is therefore adopted by most current mainstream super-resolution network frameworks. For more practical deployment, many networks whose core contribution is being lightweight have also been proposed.
(3) A progressive upsampling network mainly addresses large scaling factors: upsampling is not completed in one step; instead, intermediate reconstructed images are generated and serve as input to subsequent modules, in the manner of a Laplacian pyramid or cascaded CNNs. Lai et al. use a Laplacian pyramid structure to gradually increase the size of the reconstructed image. Such methods can reduce the difficulty of learning, especially at large scaling factors.
However, these algorithms inevitably face several problems. First, to extract low-resolution spatial information from an image, some algorithms tend to blindly increase the depth or width of the network, which increases the computational load. Second, the context features of an image are key information for reconstructing a high-resolution image, yet current algorithms cannot sufficiently and effectively extract and utilize this context feature information. Third, most algorithms directly perform an upsampling operation at the end of the network to reconstruct the final high-resolution image, which increases training difficulty at large scaling factors and makes it difficult to utilize information in the high-resolution space.
Disclosure of Invention
Aiming at the technical problems that context characteristic information of an image is difficult to effectively extract and utilize in the process of image super-resolution reconstruction, and the utilization rate of high-resolution spatial information is low, the invention designs a single-image super-resolution reconstruction algorithm based on a deep learning technology, and can generate a high-resolution image with rich details and clear textures for a given low-resolution image.
The technical scheme of the invention is as follows:
a coarse-to-fine super-resolution reconstruction method for single images is an end-to-end training process and specifically comprises the following two stages:
(1) A multi-context extraction stage:
(1.1) stage input:
the data set used at this stage is the DIV2K data set, which contains 800 training images, 100 validation images and 100 test images, and is a high-quality image data set for image restoration tasks; the input of this stage is the low-resolution images obtained by bicubic-interpolation downsampling of the DIV2K data set;
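The bicubic downsampling itself would typically be done with an image library (e.g. Pillow's `Image.resize(..., Image.BICUBIC)`). As a minimal NumPy sketch of the accompanying data-augmentation step described in the experimental procedure (random 90° rotation and flips), with the 48×48 patch size being an assumption:

```python
import numpy as np

def augment(lr: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random 90-degree rotation and horizontal/vertical flips, as used to
    enlarge the DIV2K training set (H x W x C array assumed)."""
    k = rng.integers(0, 4)          # number of counterclockwise 90-degree turns
    lr = np.rot90(lr, k, axes=(0, 1))
    if rng.integers(0, 2):          # random horizontal flip
        lr = lr[:, ::-1, :]
    if rng.integers(0, 2):          # random vertical flip
        lr = lr[::-1, :, :]
    return np.ascontiguousarray(lr)

rng = np.random.default_rng(0)
patch = np.zeros((48, 48, 3), dtype=np.float32)   # a hypothetical LR patch
out = augment(patch, rng)
print(out.shape)  # square patches keep their shape under 90-degree rotation
```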
(1.2) phase architecture:
the multi-context extraction stage is formed by combining three identical context extraction modules, and each context extraction module comprises six basic double-branch structure blocks; each branch of the double-branch structure block consists of two serially connected convolution layers with kernel size 3×3 and 64 output channels, and finally the outputs of the two branches are concatenated and fused by one convolution layer with kernel size 1×1 and 64 output channels; the context extraction module combines the six double-branch structure blocks through residual connections and dense connections: specifically, the six double-branch structure blocks are connected in series, and, to maximize information flow among all double-branch structure blocks in the network, every pair of double-branch structure blocks is additionally connected, so that each double-branch structure block receives the features of all double-branch structure blocks before it as input;
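A minimal PyTorch sketch of the basic double-branch structure block described above. Fig. 1 indicates the branches use dilated convolutions; the dilation rates (1 and 2) and the ReLU placement inside each branch are assumptions, as the claim text does not specify them:

```python
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    """Two branches of two serial 3x3/64 convolutions; branch outputs are
    concatenated and fused back to 64 channels by a 1x1 convolution."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1, dilation=1), nn.ReLU())
        self.branch2 = nn.Sequential(  # assumed dilation rate 2 per Fig. 1
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU())
        self.fuse = nn.Conv2d(2 * channels, channels, 1)  # series (concat) fusion

    def forward(self, x):
        return self.fuse(torch.cat([self.branch1(x), self.branch2(x)], dim=1))

x = torch.randn(1, 64, 24, 24)
y = DualBranchBlock()(x)
print(y.shape)  # torch.Size([1, 64, 24, 24])
```

The padding on each convolution keeps the spatial size unchanged, so the blocks can be densely connected without resizing.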
the input to the multi-context extraction stage is a low resolution image I LR The output is a high resolution image corresponding to the coarse feature magnificationLow resolution image I LR Input into three context extraction modules in a progressive manner, namely:
H 3 =f 3 (H 2 ),H 2 =f 2 (H 1 ),H 1 =f 1 (I LR ) (1)
wherein f is 1 、f 2 And f 3 Respectively represented as three context extraction modules, H 1 、H 2 And H 3 Respectively representing the output characteristic graphs of the three context extraction modules; followed by three context extraction modulesThe output characteristic graphs are fused in series, and are positioned in a convolution layer with the size of 1 multiplied by 1 and the output channel number of 64 on the other coreUnder the treatment of (3), a fused feature map H is obtained fusion :
Wherein [ 2 ], [ 2 ]]Representing a series operation on a feature map; finally, the feature map H is combined fusion Is subjected to deconvolution f up To obtain the coarse super-resolution image predicted at the stage
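The progressive module application, concatenation fusion, and deconvolution upsampling described above can be sketched as follows. Each context extraction module is stubbed as a single convolution; the ×2 scale, the kernel size of f_up, and the stub channel counts are assumptions for illustration:

```python
import torch
import torch.nn as nn

f1 = nn.Conv2d(3, 64, 3, padding=1)      # stand-in for context module f_1
f2 = nn.Conv2d(64, 64, 3, padding=1)     # stand-in for f_2
f3 = nn.Conv2d(64, 64, 3, padding=1)     # stand-in for f_3
fuse = nn.Conv2d(192, 64, 1)             # 1x1 conv, 64 output channels
f_up = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)  # deconvolution

i_lr = torch.randn(1, 3, 32, 32)                     # low-resolution input
h1 = f1(i_lr); h2 = f2(h1); h3 = f3(h2)              # progressive H1, H2, H3
h_fusion = fuse(torch.cat([h1, h2, h3], dim=1))      # series fusion
sr_coarse = f_up(h_fusion)                           # coarse SR prediction
print(sr_coarse.shape)  # torch.Size([1, 3, 64, 64])
```

With stride 2, kernel 4 and padding 1, the transposed convolution exactly doubles the spatial size: (32 − 1)·2 − 2·1 + 4 = 64.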
(2) Reconstruction enhancement stage:
(2.1) stage input:
the input to this stage comes from the output of the multi-context extraction stage, i.e. the coarse high-resolution image I_SR^coarse;

(2.2) phase architecture:

the reconstruction enhancement stage is a residual convolutional neural network whose input is the coarse high-resolution image I_SR^coarse and whose output is the fine super-resolution image I_SR^fine; the residual convolutional neural network comprises 5 convolution layers with kernel size 3×3 and 64 output channels and one convolution layer with kernel size 1×1 and 3 output channels, and a ReLU activation function is used after each convolution layer; the network as a whole uses a residual connection, fusing the residual image recovered at this stage with the input image; finally, a convolution layer with kernel size 1×1 and 3 output channels reconstructs the fine super-resolution image I_SR^fine
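A minimal PyTorch sketch of this enhancement network under the stated layer counts (five 3×3/64 convolutions, one 1×1/3 convolution, ReLU after each, a global residual connection, and a final 1×1/3 reconstruction layer); the exact placement of the residual addition is an assumption:

```python
import torch
import torch.nn as nn

class EnhanceNet(nn.Module):
    """Residual CNN refining a coarse HR image into a fine HR image."""
    def __init__(self):
        super().__init__()
        layers = [nn.Conv2d(3, 64, 3, padding=1), nn.ReLU()]
        for _ in range(4):                      # five 3x3/64 convs in total
            layers += [nn.Conv2d(64, 64, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(64, 3, 1), nn.ReLU()]  # 1x1 conv, 3 channels
        self.body = nn.Sequential(*layers)
        self.recon = nn.Conv2d(3, 3, 1)         # final 1x1 reconstruction conv

    def forward(self, coarse):
        # global residual connection: predicted residual + coarse input
        return self.recon(coarse + self.body(coarse))

coarse = torch.randn(1, 3, 64, 64)
fine = EnhanceNet()(coarse)
print(fine.shape)  # torch.Size([1, 3, 64, 64])
```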
(3) Loss function:
the two stages are trained end-to-end as a whole, using back propagation and stochastic gradient descent; for a batch of samples, the error between the network's prediction and the ground-truth result from the database is calculated according to formula (4), the gradient of the error is computed, and the network parameters are updated step by step along the gradient-descent direction by neural-network back propagation, iterating until convergence, where I_HR denotes the ground-truth image corresponding to the neural network input;
(4) Experimental procedure
First, the images in DIV2K are down-sampled by bicubic interpolation to obtain low-resolution images, and random 90° rotation and flipping operations are applied to the low-resolution images to increase the amount of training data. Meanwhile, the neural network models of the multi-context extraction stage and the reconstruction enhancement stage are built and connected in series. Then training data are fed to the network model to be trained in multi-threaded batches, and the error between the high-resolution image reconstructed by the neural network and the ground truth is calculated according to formula (4). Finally, the network parameters are iteratively updated by the gradient-descent optimizer ADAM according to the back-propagation method, with the two key ADAM parameters set to β1 = 0.9, β2 = 0.999.
The initial learning rate during training is set to 0.0001 and is reduced by half every 200 epochs, with an epoch total of 1000.
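The optimizer and learning-rate schedule just described (ADAM with β1 = 0.9, β2 = 0.999, initial learning rate 0.0001 halved every 200 epochs over 1000 epochs) can be sketched in PyTorch as follows; the placeholder model and the L1 loss are assumptions, and the patent's exact formula (4) loss is not reproduced:

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)          # placeholder network
opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=200, gamma=0.5)
loss_fn = torch.nn.L1Loss()                          # assumed reconstruction loss

for epoch in range(1000):
    # ... iterate over mini-batches: loss = loss_fn(model(lr), hr);
    # opt.zero_grad(); loss.backward(); opt.step() ...
    sched.step()                                     # halve lr every 200 epochs

print(opt.param_groups[0]["lr"])  # 1e-4 * 0.5**5 after 1000 epochs
```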
The invention has the beneficial effects that:
(1) Acquisition and utilization of image context information
Unlike methods that reconstruct super-resolution images by increasing the depth and width of the neural network, this method effectively extracts and utilizes image context information for super-resolution reconstruction through a compact structural design. This is embodied in two ways. First, this patent proposes a double-branch structure block; compared with a purely cascaded structure, the multi-branch structure block can utilize image feature information from different receptive fields, so the extracted image context information is more comprehensive. Second, this patent fuses multiple double-branch structures through residual connections and dense connections, which enlarges the receptive field while acquiring hierarchical feature information of the image, and thereby reconstructs a rough high-resolution image.
(2) Acquisition and utilization of high resolution spatial feature information
Unlike current deep-learning methods for image super-resolution reconstruction, this method attempts to optimize the rough high-resolution image using high-resolution spatial feature information once that rough image has been obtained, which brings a key advantage: other methods use only feature information from the low-resolution space and ignore high-resolution reconstruction information that could be used during reconstruction. This method extracts high-resolution spatial feature information with a simple and effective residual network, thereby refining the reconstructed high-resolution image.
(3) Time cost and hardware requirements
This patent directs its research toward image context information extraction and high-resolution spatial feature utilization, and provides a brand-new algorithmic scheme for the field of image super-resolution research. Time cost and hardware requirements are fully considered in the method: training a neural network for image super-resolution reconstruction currently tends to require several GPUs with strong computing power, i.e. substantial hardware and time costs, which the compact design of this method avoids.
Drawings
Fig. 1 is a diagram of the basic double-branch structure. Each branch of the double-branch structure comprises two serially connected dilated convolution layers with kernel size 3×3, which are then fused by one ordinary convolution layer with kernel size 1×1.
FIG. 2 is a diagram of a multi-context extraction module. The multi-context extraction module combines the six double-branch structures through residual connection and dense connection.
Fig. 3 is a diagram of the enhanced reconstruction network architecture. The enhancement network is a residual network containing 6 convolution layers with kernel size 3×3.
Fig. 4 is a diagram showing an overall configuration of the network. The whole network is divided into two stages: a multi-context extraction phase and an enhanced reconstruction phase.
Detailed Description
The present invention will be described in further detail with reference to specific embodiments, but the present invention is not limited to the specific embodiments.
A two-stage, coarse-to-fine single-image super-resolution reconstruction method comprises a network model training part and a model testing and evaluation part.
(1) Training network model
First, the images in DIV2K are down-sampled using bicubic interpolation to obtain low-resolution images, and operations such as rotation and flipping are applied to the low-resolution images to obtain training data. Meanwhile, the neural network model is built according to Fig. 4. Then training data are fed to the network model to be trained in multi-threaded batches, and the error between the high-resolution image reconstructed by the neural network and the ground truth is calculated according to formula (4). Finally, the network parameters are iteratively updated with the ADAM gradient-descent optimizer via back propagation until the number of iterations meets the requirement, completing the training of the network.
(2) Model testing and evaluation
This patent studies how to extract and utilize image context information and high-resolution spatial feature information, and on that basis provides a brand-new algorithmic scheme for the field of single-image super-resolution research. During testing, a low-resolution image requiring super-resolution reconstruction is prepared, the file path in the code and the path parameters of the trained model are modified, and the test code is then executed; after the code has run, MATLAB can be used for index evaluation of the output results.
The constructed model is evaluated mainly quantitatively and secondarily qualitatively. Qualitatively, human visual inspection of the results is the main criterion. Quantitatively, the reconstruction results are evaluated using the image evaluation indices PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). PSNR is based on the error between corresponding pixels and is the most common and most widely used objective image evaluation index. SSIM measures the structural similarity between two images. The PSNR and SSIM formulas are as follows:

PSNR(X, Y) = 10 · log10( (2^n − 1)^2 / MSE ),  MSE = (1/(H·W)) Σ_i Σ_j (X(i, j) − Y(i, j))^2

SSIM(X, Y) = (2·u_X·u_Y + C_1)(2·σ_XY + C_2) / ((u_X^2 + u_Y^2 + C_1)(σ_X^2 + σ_Y^2 + C_2))

where H and W denote the height and width of the image, X and Y denote the two images to be compared, n is the number of bits per sample value, and i and j index the horizontal and vertical coordinates of each pixel. u denotes the mean, σ the standard deviation, σ_XY the covariance, and C_1, C_2 and C_3 are three constants (C_3 appears in the decomposed luminance–contrast–structure form of SSIM, with C_3 = C_2/2 in the common simplification).
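Under the definitions above, PSNR and a simplified single-window SSIM can be computed as follows. Standard SSIM averages the statistic over local windows (as MATLAB's `ssim` or scikit-image's `structural_similarity` do); the global version here is for illustration only:

```python
import numpy as np

def psnr(x: np.ndarray, y: np.ndarray, n_bits: int = 8) -> float:
    """Peak signal-to-noise ratio in dB for images with n_bits per sample."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    peak = (2 ** n_bits - 1) ** 2
    return 10.0 * np.log10(peak / mse)

def ssim_global(x: np.ndarray, y: np.ndarray, n_bits: int = 8) -> float:
    """Global (single-window) SSIM with the usual C1/C2 constants."""
    L = 2 ** n_bits - 1
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(np.float64); y = y.astype(np.float64)
    ux, uy = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - ux) * (y - uy)).mean()
    return ((2 * ux * uy + c1) * (2 * cov + c2)) / \
           ((ux ** 2 + uy ** 2 + c1) * (vx + vy + c2))

a = np.zeros((8, 8)); b = np.full((8, 8), 255.0)
print(psnr(a, b))         # 0.0 dB: the error equals the peak value
print(ssim_global(a, a))  # 1.0 for identical images
```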
Claims (2)
1. A single image super-resolution reconstruction method from rough to fine is characterized in that the single image super-resolution reconstruction method is an end-to-end training process, and a network model to be trained specifically comprises the following two stages:
(1) A multi-context extraction stage:
(1.1) stage input:
the data set used at this stage is the DIV2K data set, which comprises 800 training images, 100 validation images and 100 test images and is a high-quality image data set for image restoration tasks; the input of this stage is the low-resolution images obtained by bicubic-interpolation downsampling of the DIV2K data set;
(1.2) phase architecture:
the multi-context extraction stage is formed by combining three identical context extraction modules, and each context extraction module comprises six basic double-branch structure blocks; each branch of the double-branch structure block consists of two serially connected convolution layers with kernel size 3×3 and 64 output channels, and finally the outputs of the two branches are concatenated and fused by one convolution layer with kernel size 1×1 and 64 output channels; the context extraction module combines the six double-branch structure blocks through residual connections and dense connections: specifically, the six double-branch structure blocks are connected in series, and, to maximize information flow among all double-branch structure blocks in the network, every pair of double-branch structure blocks is additionally connected, so that each double-branch structure block receives the features of all double-branch structure blocks before it as input;
the input to the multi-context extraction stage is a low-resolution image I_LR and the output is a coarse high-resolution image I_SR^coarse at the target magnification; the low-resolution image I_LR is fed through the three context extraction modules in a progressive manner, namely:

H_3 = f_3(H_2), H_2 = f_2(H_1), H_1 = f_1(I_LR)   (1)

where f_1, f_2 and f_3 denote the three context extraction modules and H_1, H_2 and H_3 denote their respective output feature maps; the output feature maps of the three context extraction modules are then concatenated and, processed by a further convolution layer with kernel size 1×1 and 64 output channels, yield the fused feature map H_fusion:

H_fusion = f_1×1([H_1, H_2, H_3])   (2)

where [·] denotes concatenation of feature maps; finally, the fused feature map H_fusion is passed through a deconvolution layer f_up to obtain the coarse super-resolution image predicted at this stage:

I_SR^coarse = f_up(H_fusion)   (3)
(2) Reconstruction enhancement stage:
(2.1) stage input:
the input to this stage comes from the output of the multi-context extraction stage, i.e. the coarse high-resolution image I_SR^coarse;
(2.2) phase architecture:
the reconstruction enhancement stage is a residual convolutional neural network whose input is the coarse high-resolution image I_SR^coarse and whose output is the fine super-resolution image I_SR^fine; the residual convolutional neural network comprises 5 convolution layers with kernel size 3×3 and 64 output channels and one convolution layer with kernel size 1×1 and 3 output channels, and a ReLU activation function is used after each convolution layer; the network as a whole uses a residual connection, fusing the residual image recovered at this stage with the input image; finally, a convolution layer with kernel size 1×1 and 3 output channels reconstructs the fine super-resolution image I_SR^fine;
(3) Loss function:
the two stages are trained end-to-end as a whole, using back propagation and stochastic gradient descent; for a batch of samples, the error between the network's prediction and the ground-truth result from the database is calculated according to formula (4), the gradient of the error is computed, and the network parameters are updated step by step along the gradient-descent direction by neural-network back propagation, iterating until convergence, where I_HR denotes the ground-truth image corresponding to the neural network input;
(4) Experimental procedure
First, the images in DIV2K are down-sampled by bicubic interpolation to obtain low-resolution images, and random 90° rotation and flipping operations are applied to the low-resolution images to increase the amount of training data. Meanwhile, the neural network models of the multi-context extraction stage and the reconstruction enhancement stage are built and connected in series. Then training data are fed to the network model to be trained in multi-threaded batches, and the error between the high-resolution image reconstructed by the neural network and the ground truth is calculated according to formula (4). Finally, the network parameters are iteratively updated by the gradient-descent optimizer ADAM according to the back-propagation method, with the two key ADAM parameters set to β1 = 0.9, β2 = 0.999.
2. The coarse-to-fine single-image super-resolution reconstruction method according to claim 1, wherein the initial learning rate during training is set to 0.0001 and reduced by half every 200 epochs, and the total number of epochs is 1000.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911357050.4A | 2019-12-25 | 2019-12-25 | Coarse-to-fine single-image super-resolution reconstruction method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111161146A | 2020-05-15 |
| CN111161146B | 2022-10-14 |
Family ID: 70558312
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911357050.4A (granted as CN111161146B, Active) | Coarse-to-fine single-image super-resolution reconstruction method | 2019-12-25 | 2019-12-25 |
Country Status (1)
| Country | Link |
|---|---|
| CN | CN111161146B |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111667444B * | 2020-05-29 | 2021-12-03 | Hubei University of Technology (湖北工业大学) | Image compressed sensing reconstruction method based on a multi-channel residual network |
| CN112070670B * | 2020-09-03 | 2022-05-10 | Wuhan Institute of Technology (武汉工程大学) | Face super-resolution method and system with a global-local separated attention mechanism |
| CN112116527B * | 2020-09-09 | 2024-02-23 | Hangzhou Innovation Institute of Beihang University (北京航空航天大学杭州创新研究院) | Image super-resolution method based on a cascade network framework, and cascade network |
| CN112686830B * | 2020-12-30 | 2023-07-25 | Taiyuan University of Science and Technology (太原科技大学) | Single-depth-map super-resolution method based on image decomposition |
| CN113256494B * | 2021-06-02 | 2022-11-11 | Tongji University (同济大学) | Text image super-resolution method |
| CN114066727A * | 2021-07-28 | 2022-02-18 | Huaqiao University (华侨大学) | Multi-stage progressive image super-resolution method |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104899835A * | 2015-04-28 | 2015-09-09 | Southwest University of Science and Technology (西南科技大学) | Super-resolution processing method for images based on blind blur estimation and anchored space mapping |
| CN106204489A * | 2016-07-12 | 2016-12-07 | Sichuan University (四川大学) | Single-image super-resolution reconstruction method combining deep learning and gradient transformation |
| CN109118432A * | 2018-09-26 | 2019-01-01 | Fujian Dishi Information Technology Co., Ltd. (福建帝视信息科技有限公司) | Image super-resolution reconstruction method based on a fast recurrent convolutional network |
| CN110232653A * | 2018-12-12 | 2019-09-13 | Qingdao Institute of Marine Technology, Tianjin University (天津大学青岛海洋技术研究院) | Fast lightweight dense residual network for super-resolution reconstruction |
Non-Patent Citations (1)
| Title |
|---|
| "Improved image super-resolution algorithm based on residual neural networks" (基于残差神经网络的图像超分辨率改进算法); Wang Yining et al.; Journal of Computer Applications (《计算机应用》); 2018-01-10 (No. 01); full text * |
Also Published As
| Publication Number | Publication Date |
|---|---|
| CN111161146A | 2020-05-15 |
Legal Events
| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |