CN110675321A - Super-resolution image reconstruction method based on progressive depth residual error network - Google Patents
Super-resolution image reconstruction method based on progressive depth residual error network
- Publication number: CN110675321A
- Application number: CN201910920959.XA
- Authority
- CN
- China
- Prior art keywords: image, residual error, depth residual, progressive depth, network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076: Super-resolution scaling using the original low-resolution images to iteratively correct the high-resolution images
- G06T3/4023: Scaling based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
- G06T3/4046: Scaling of whole images or parts thereof using neural networks
- G06T2207/20081: Training; Learning
- G06T2207/20084: Artificial neural networks [ANN]
(all within G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general; G06T3/40: Scaling of whole images or parts thereof; G06T2207/20: Special algorithmic details)
Abstract
The invention provides a super-resolution image reconstruction method based on a progressive depth residual error network, which mainly comprises the following steps: (1) selecting a training data set and a test data set, and performing rotation and scaling processing on the training data set images to expand the training data set; (2) carrying out down-sampling processing on the obtained training data set images; (3) respectively cutting the original training data set images and the low-resolution images from step 2 into image blocks; (4) taking the original image blocks and the low-resolution image blocks at the same positions in step 3 as high-resolution/low-resolution sample pairs to generate a training data set file in HDF5 format; (5) building a progressive depth residual error network; (6) training the progressive depth residual error network; (7) inputting a low-resolution image into the progressive depth residual error network model and outputting the reconstructed high-resolution image.
Description
Technical Field
The invention belongs to the technical field of image digital processing, and relates to a super-resolution image reconstruction method based on a progressive depth residual error network.
Background
With the continuous progress of image and video digital processing technology, higher-quality images are always desired. The factors that degrade image quality fall into two main categories: objective factors in the image-formation stage, such as inaccurate focusing, camera shake and object motion; and degradation during image transmission and storage, such as noise and under-sampling effects. An important index for evaluating image quality is image resolution: the higher the resolution, the higher the pixel density of the picture, the more pixels per unit area, the more detail information is provided, and the better the image quality.
Image super-resolution reconstruction is a technique for recovering a high-resolution image from a low-resolution image or image sequence. With the rapid development of science and technology, image super-resolution reconstruction is widely applied in many fields, such as city management, military reconnaissance and medical imaging. These application fields place ever higher requirements on super-resolution reconstruction, and how to reconstruct a high-resolution image with better quality remains a fundamental and urgent problem.
In recent years, as deep learning has shown great potential in the field of image processing, many researchers have proposed super-resolution image reconstruction methods based on deep learning. Dong et al. first applied convolutional neural networks to super-resolution reconstruction and proposed the Super-Resolution Convolutional Neural Network (SRCNN). Although its reconstruction effect is better than that of traditional methods, SRCNN uses only a 3-layer convolutional neural network, so it is difficult to extract deep detail information from the image, and the context information of the reconstructed image lacks correlation. To address this, Dong et al. also proposed the Fast Super-Resolution Convolutional Neural Network (FSRCNN), which deepens the network to 8 convolutional layers and performs the image up-sampling in the last layer of the network with a deconvolution operation instead of bicubic interpolation. Although the reconstruction effect of FSRCNN is improved over SRCNN, the deep information that an 8-layer convolutional network can extract is still limited. Later, Kim et al. proposed the Very Deep Super-Resolution network (VDSR), which deepens the network to 20 convolutional layers and applies a residual structure to the deep convolutional network, greatly improving the reconstruction effect. However, at a large scaling factor, a single up-sampling operation is likely to cause a large amount of information loss, making training difficult.
Disclosure of Invention
The invention aims to provide a super-resolution image reconstruction method based on a progressive depth residual error network which overcomes the loss of image detail information caused by prior methods performing only a single up-sampling of the reconstructed image, and which can still reconstruct a clear high-resolution image at a larger scaling factor.
Therefore, the invention adopts the following technical scheme:
a super-resolution image reconstruction method based on a progressive depth residual error network comprises the following steps:
Step 1: selecting a training data set and a test data set, and performing rotation and scaling processing on the training data set images to expand the training data set;
Step 2: carrying out 1/N down-sampling on the training data set images obtained in step 1, where N is the scaling factor;
Step 3: respectively cutting the original training data set images and the low-resolution images obtained in step 2 into image blocks of H × W and H/N × W/N pixels;
Step 4: taking the original image blocks and the low-resolution image blocks at the same positions in step 3 as high-resolution/low-resolution sample pairs to generate a training data set file in HDF5 format;
Step 5: building a progressive depth residual network
5.1 designing the jumper-connected residual block
The jumper-connected residual block consists of two residual units, an outer convolutional layer and a jumper connection; each residual unit consists of two inner convolutional layers, an activation function and a jumper connection; the two residual units and the outer convolutional layer are connected end to end, their output is multiplied by λ, and the input of the residual block is then combined with this scaled output through the jumper connection to form the output of the jumper-connected residual block;
5.2 setting the internal parameters of the jumper-connected residual block
Setting parameters including the number of convolution kernels, the sizes of the convolution kernels, filling, moving step length and an activation function;
5.3 constructing a deep residual network
Connecting 5 jumper-connected residual blocks end to end to form a deep residual network;
5.4 building a progressive depth residual network
The progressive depth residual error network is divided into 2 levels, each level completes super-resolution reconstruction of 2X scaling factors, and further realizes super-resolution reconstruction of 4X scaling factors; each level of progressive depth residual error network consists of a depth residual error network and a sub-pixel convolution layer, wherein in each level of progressive depth residual error network, the depth residual error network is used for extracting the characteristics of an input characteristic image, and then the sub-pixel convolution is used for up-sampling the extracted characteristics;
5.5 setting parameters of a progressive depth residual network
Setting parameters including the number of convolution kernels of the input convolution layer, the output convolution layer and the sub-pixel convolution layer, the size of the convolution kernels, the moving step length and filling;
step 6: training progressive depth residual network
6.1 constructing a mean square error function as a loss function;
6.2, updating parameters of the progressive depth residual error network through an optimization algorithm;
6.3 using the peak signal-to-noise ratio and the structural similarity as evaluation indexes to objectively evaluate the reconstruction performance of the progressive depth residual error network model;
6.4 setting the parameter value of λ of the residual block connected by the jumper, and λ is 0.1,0.2, …, 1;
6.5 initializing parameters of progressive depth residual network and setting training parameters
Initializing parameters in a progressive depth residual error network into Gaussian distribution with the mean value of 0 and the standard deviation of 0.001, and initializing and setting the deviation to be 0; setting a learning rate, iteration times and the number of batch training samples;
6.6 training progressive depth residual error network model
6.6.1 training the progressive depth residual error network model with the HDF5 training data set file generated in step 4, using the parameters set in step 6.5; if the network does not converge, repeating step 6.5 until the network converges;
6.6.2 continuing to train the progressive depth residual error network model; when the maximum number of iterations is reached, the training is finished; otherwise, repeating step 6.6.2 until the maximum number of iterations is reached;
6.7 testing of progressive depth residual network model
Using the test data set to test the progressive depth residual error network model obtained in the step 6.6, and recording the obtained peak signal-to-noise ratio and the structural similarity value; then returning to the step 6.4, setting different lambda values, continuously testing and recording the obtained peak signal-to-noise ratio and the structural similarity value; finally, comparing peak signal-to-noise ratios and structural similarity values obtained by using different lambda values, selecting the lambda value corresponding to the highest peak signal-to-noise ratio and structural similarity value as the lambda parameter value of the residual block connected by the jumper, and storing the trained progressive depth residual error network model;
Step 7: inputting the low-resolution image into the progressive depth residual error network model and outputting the reconstructed high-resolution image.
Further, in step 2, downsampling processing of the image is performed using a bicubic interpolation algorithm.
Further, in step 6.2, the optimization algorithm selects an Adam optimization algorithm.
Further, in step 6.3, the peak signal-to-noise ratio PSNR and the structural similarity SSIM are calculated according to formula (11) and formula (12):

PSNR = 10·log10( L^2 / ( (1/(M·N)) · Σ_{i=1}^{M} Σ_{j=1}^{N} ( f(i,j) - f̂(i,j) )^2 ) )   (11)

SSIM = ( (2·μ_f·μ_f̂ + C1) · (2·σ_ff̂ + C2) ) / ( (μ_f^2 + μ_f̂^2 + C1) · (σ_f^2 + σ_f̂^2 + C2) )   (12)

where M and N denote the size of the image, f denotes the true high-resolution image, f̂ denotes the reconstructed high-resolution image, μ_f and μ_f̂ denote the mean gray values of the true high-resolution image and the reconstructed image respectively, σ_f^2 and σ_f̂^2 denote their variances, σ_ff̂ denotes the covariance of the true high-resolution image and the reconstructed image, and C1 and C2 are constants with C1 = (k1·L)^2, C2 = (k2·L)^2, k1 = 0.01, k2 = 0.03, where L is the dynamic range of the pixel values.
The image super-resolution reconstruction method based on the progressive depth residual error network reconstructs a high-resolution image at 4× magnification by performing feature extraction and up-sampling on the low-resolution image twice. The invention overcomes the loss of image detail information caused by existing methods performing only a single up-sampling of the low-resolution image, and can still reconstruct a clear high-resolution image at a larger scaling factor.
The invention has the beneficial effects that:
(1) a residual block connected by a jumper wire is designed, and the residual block has a better effect in the characteristic extraction process;
(2) the invention designs a progressive depth residual error network, which performs feature extraction and up-sampling on the low-resolution image twice: feature extraction is carried out by the depth residual error network and up-sampling by sub-pixel convolution. Through this progressive reconstruction, it solves the problem of image detail information loss caused by the single up-sampling step of traditional methods.
Drawings
FIG. 1 is a schematic diagram of a jumper connection residual block structure according to the present invention;
FIG. 2 is a schematic diagram of a residual unit in FIG. 1;
FIG. 3 is a schematic diagram of a progressive depth residual network constructed in accordance with the present invention;
fig. 4 is a schematic structural diagram of the depth residual error network in fig. 3.
Detailed Description
The process of the invention is further illustrated by the following specific examples.
A super-resolution image reconstruction method based on a progressive depth residual error network comprises the following steps:
Step 1: selecting the T91 image data set and the BSD200 image data set as training data sets, and the Set5, Set14 and Urban100 image data sets as test data sets; rotating the training data set images by 90, 180 and 270 degrees and scaling them by factors of 0.9, 0.8, 0.7 and 0.6 to expand the training data set;
Step 2: performing 1/N down-sampling on the training data set images obtained in step 1 with the bicubic algorithm, where N is the scaling factor; the value of N is chosen according to the multiple to be reconstructed, generally 2 or 4;
Step 3: respectively cutting the original training data set images and the low-resolution images obtained in step 2 into image blocks of H × W and H/N × W/N pixels;
Step 4: taking the original image blocks and the low-resolution image blocks at the same positions in step 3 as high-resolution/low-resolution sample pairs to generate a training data set file in HDF5 format;
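The data-preparation steps above (down-sampling a high-resolution image and cutting aligned HR/LR patch pairs) can be sketched as follows. This is a minimal NumPy illustration, not the patent's code: for simplicity it substitutes N × N block averaging for the bicubic down-sampling named in step 2, and it omits the HDF5 write (which would typically use the h5py library); the names `downsample` and `make_pairs` and the 24-pixel patch size are illustrative assumptions.

```python
import numpy as np

def downsample(img, N):
    """Toy 1/N down-sampling by N x N block averaging.
    (The patent specifies bicubic interpolation; block averaging
    stands in here so the sketch needs only NumPy.)"""
    H, W = img.shape
    return img[:H - H % N, :W - W % N].reshape(H // N, N, W // N, N).mean(axis=(1, 3))

def make_pairs(hr_img, patch=24, N=4):
    """Cut an HR image and its 1/N LR version into aligned
    HR/LR patch pairs (steps 2-4 of the method)."""
    lr_img = downsample(hr_img, N)
    pairs = []
    for i in range(0, hr_img.shape[0] - patch + 1, patch):
        for j in range(0, hr_img.shape[1] - patch + 1, patch):
            hr_patch = hr_img[i:i + patch, j:j + patch]
            lr_patch = lr_img[i // N:(i + patch) // N, j // N:(j + patch) // N]
            pairs.append((hr_patch, lr_patch))
    return pairs

pairs = make_pairs(np.arange(48 * 48, dtype=float).reshape(48, 48))
```

Each returned pair holds an H × W HR patch and the H/N × W/N LR patch covering the same image region, which is exactly the sample-pair alignment required in step 4.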
Step 5: building a progressive depth residual network
5.1 designing the jumper-connected residual block
As shown in fig. 1, the jumper-connected residual block constructed by the invention consists of two residual units, an outer convolutional layer and a jumper connection. Each residual unit is formed by two inner convolutional layers connected in series with a ReLU activation function, plus a jumper connection; its structure is shown in FIG. 2. The two residual units and the outer convolutional layer are connected end to end, their output is multiplied by λ, and the input of the residual block is then combined with this scaled output through the jumper connection to form the output of the jumper-connected residual block;
5.2 setting the internal parameters of the jumper-connected residual block
Setting parameters including the number of convolution kernels, the sizes of the convolution kernels, filling, moving step length and an activation function; in this embodiment, in the convolution layer of the residual block and the convolution layer of the residual unit connected by the jumper, each convolution layer has 64 convolution kernels, the size of the convolution kernel is 3 × 3, the padding is 1, and the moving step length is 1; the activation function between two inner convolution layers of the residual unit is ReLU, and the convolution calculation process is as follows:
Y=W*X+B (1)
wherein X is the input of the convolutional layer, Y is the output of the convolutional layer, B is the bias, W is a filter of size 64 × 3 × 3 × 64, and "*" denotes the convolution operation;
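The wiring of sections 5.1 and 5.2 can be sketched numerically. The following is a hedged NumPy sketch, not the patent's implementation: it replaces the 64-kernel 3 × 3 convolutions with toy 1 × 1 channel-mixing convolutions so that only NumPy is needed, and it reads the λ connection as scaling the cascaded output by λ before adding the block input back (one plausible reading of section 5.1); all function names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, W, b):
    """Toy 1x1 'convolution': x has shape (H, W, C_in) and W has shape
    (C_in, C_out). The patent's layers use 64 kernels of size 3x3; the
    1x1 form keeps this sketch NumPy-only while preserving the wiring."""
    return x @ W + b

def residual_unit(x, W1, b1, W2, b2):
    """Residual unit: two inner conv layers with a ReLU between them,
    plus a jumper (skip) connection from input to output."""
    return x + conv1x1(relu(conv1x1(x, W1, b1)), W2, b2)

def residual_block(x, unit_params, Wo, bo, lam=0.1):
    """Jumper-connected residual block: the residual units and the outer
    conv layer are cascaded, the result is scaled by lambda, and the
    block input is added back through the jumper connection."""
    h = x
    for W1, b1, W2, b2 in unit_params:
        h = residual_unit(h, W1, b1, W2, b2)
    return x + lam * conv1x1(h, Wo, bo)
```

With identity weights and zero biases each residual unit simply doubles a positive input, which makes the skip-connection arithmetic easy to verify by hand.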
5.3 constructing a deep residual network
Connecting 5 jumper-connected residual blocks end to end to form a deep residual network, the structure of which is shown in FIG. 4;
5.4 building a progressive depth residual network
The progressive depth residual error network is divided into 2 levels, each level completes super-resolution reconstruction of 2X scaling factors, and further realizes super-resolution reconstruction of 4X scaling factors, and the structure of the progressive depth residual error network is shown in FIG. 3; each level of progressive depth residual error network consists of a depth residual error network and a sub-pixel convolution layer, wherein in each level of progressive depth residual error network, the depth residual error network is used for extracting the characteristics of an input characteristic image, and then the sub-pixel convolution is used for up-sampling the extracted characteristics;
5.5 setting parameters of a progressive depth residual network
Setting parameters including the number of convolution kernels of the input convolutional layer, the output convolutional layer and the sub-pixel convolutional layer, the convolution kernel sizes, the moving step length and the padding; in this embodiment, the input convolutional layer and the output convolutional layer of the progressive depth residual error network each have 64 convolution kernels of size 7 × 7, with padding 3 and moving step length 1; the sub-pixel convolutional layer has 256 convolution kernels of size 3 × 3, with padding 1 and moving step length 1. The calculation process of the sub-pixel convolution is as follows:
Y1=PS(W1*X1+B1) (2)
in the formula, X1 is the input of the sub-pixel convolution layer, Y1 is the output of the sub-pixel convolution layer, B1 is the bias, W1 is a filter of size 64 × 3 × 3 × 256, "*" denotes the convolution operation, and PS denotes the sub-pixel convolution operation, which rearranges a feature map of size H × W × c·r^2 into a feature map of size rH × rW × c;
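The PS rearrangement used by the sub-pixel convolution layer can be written directly in NumPy. The sketch below implements only the rearrangement of an H × W × c·r² feature map into an rH × rW × c map described by formula (2); the exact channel-interleaving order is a convention (the sketch picks one common choice), and in the real network the rearrangement follows a convolution, which is omitted here.

```python
import numpy as np

def pixel_shuffle(x, r):
    """PS operation of formula (2): rearrange a feature map of shape
    (H, W, c*r**2) into one of shape (r*H, r*W, c)."""
    H, W, cr2 = x.shape
    c = cr2 // (r * r)
    # split the channel axis into (c, r, r) ...
    x = x.reshape(H, W, c, r, r)
    # ... then interleave the two r-factors into the spatial axes
    x = x.transpose(0, 3, 1, 4, 2)   # shape (H, r, W, r, c)
    return x.reshape(H * r, W * r, c)
```

Every input value appears exactly once in the output: the operation is a pure rearrangement, which is why it performs up-sampling without discarding information.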
step 6: training progressive depth residual network
6.1, constructing the mean square error function as the loss function, and estimating the network parameters θ by minimizing the loss between the reconstructed images and the corresponding true high-resolution images; the mean square error function is expressed as:

L(θ) = (1/n) · Σ_{i=1}^{n} ‖X_i - Y_i‖^2   (3)

where n represents the number of training samples, L represents the mean square error function, X_i represents a true high-resolution image, and Y_i represents the corresponding reconstructed image;
6.2, updating the parameters of the progressive depth residual error network using the Adam optimization algorithm; the parameter update process of the Adam algorithm is represented as:

g_t = ∇_θ L(θ_{t-1})   (4)
m_t = u·m_{t-1} + (1 - u)·g_t   (5)
n_t = v·n_{t-1} + (1 - v)·g_t^2   (6)
m̂_t = m_t / (1 - u^t)   (7)
n̂_t = n_t / (1 - v^t)   (8)
Δθ_t = -η·m̂_t / (√(n̂_t) + ε)   (9)
θ_{t+1} = θ_t + Δθ_t   (10)

where g_t is the gradient of the mean square error function L(θ) with respect to θ, m_t is the first-order moment estimate of the gradient g_t, n_t is the second-order moment estimate of the gradient g_t, m̂_t is the bias-corrected m_t, n̂_t is the bias-corrected n_t, the exponential decay rates of the moment estimates are u = 0.9 and v = 0.99, η is the step size with value 0.001, ε is a constant with value 10^-8, Δθ_t is the computed update to θ_t, and θ_t is the value of θ at time step t; the sum of θ_t and Δθ_t gives θ_{t+1}.

When updating the network parameters with the Adam optimization algorithm, the parameter vector, the first-moment vector and the second-moment vector are first initialized; the loop then iteratively updates each quantity until the parameter θ converges. At each iteration the time step t is incremented by 1, the gradient of the objective function with respect to the parameter θ is evaluated, the biased first- and second-moment estimates are updated, their bias corrections are computed, and finally the model parameter θ is updated with the computed values;
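The Adam update of step 6.2 (moment updates, bias correction, and the parameter step) can be condensed into one function. This is a generic NumPy sketch of the standard Adam algorithm with the hyper-parameter values stated above (u = 0.9, v = 0.99, η = 0.001, ε = 10^-8), not code from the patent; the one-dimensional quadratic used to drive it is purely illustrative.

```python
import numpy as np

def adam_step(theta, grad, m, n, t, u=0.9, v=0.99, eta=0.001, eps=1e-8):
    """One Adam parameter update: moment estimates, bias correction,
    and the step applied to theta."""
    m = u * m + (1 - u) * grad              # first-moment estimate
    n = v * n + (1 - v) * grad ** 2         # second-moment estimate
    m_hat = m / (1 - u ** t)                # bias-corrected first moment
    n_hat = n / (1 - v ** t)                # bias-corrected second moment
    d_theta = -eta * m_hat / (np.sqrt(n_hat) + eps)
    return theta + d_theta, m, n

# drive the update on a toy objective L(theta) = theta**2, gradient 2*theta
theta, m, n = 3.0, 0.0, 0.0
for t in range(1, 5001):
    theta, m, n = adam_step(theta, 2.0 * theta, m, n, t)
```

Because the bias-corrected moment ratio is close to the gradient's sign while the gradient keeps a constant direction, the effective step size stays near η per iteration, so the parameter crawls from 3.0 toward the minimum at 0.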
6.3 using Peak Signal to Noise Ratio (PSNR) and Structural SIMilarity (SSIM) as evaluation indexes to objectively evaluate the reconstruction performance of the progressive depth residual error network model;
the calculation formulas of peak signal-to-noise ratio (PSNR) and Structural Similarity (SSIM) indexes are shown in formulas (11) and (12):
where M, N denotes the size of the image, f denotes the true high resolution image,expressed as reconstructed high resolution image, μfAndmean gray value, σ, expressed as true high resolution image and reconstructed image, respectivelyfAndrespectively expressed as trueThe variance of the high-resolution image and the reconstructed image,represented as the covariance of the true high-resolution image and the reconstructed image, C1And C2Is constant, and C1=(k1L)2,C2=(k2L)2,k1=0.01,k20.03, L is the dynamic range of the pixel value;
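Formulas (11) and (12) can be checked with a few lines of NumPy. Note this is an illustrative sketch: the PSNR follows formula (11) directly, while the SSIM here is the single-window "global" form of formula (12) computed over the whole image, whereas practical SSIM implementations average the statistic over local windows.

```python
import numpy as np

def psnr(f, g, L=255.0):
    """Peak signal-to-noise ratio of formula (11)."""
    mse = np.mean((f.astype(float) - g.astype(float)) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(f, g, L=255.0, k1=0.01, k2=0.03):
    """Global (single-window) structural similarity of formula (12)."""
    f = f.astype(float)
    g = g.astype(float)
    mu_f, mu_g = f.mean(), g.mean()
    var_f, var_g = f.var(), g.var()
    cov = ((f - mu_f) * (g - mu_g)).mean()
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_f * mu_g + C1) * (2 * cov + C2)) / (
        (mu_f ** 2 + mu_g ** 2 + C1) * (var_f + var_g + C2))
```

For identical images the SSIM is exactly 1; for two constant images differing by 10 gray levels the MSE is 100, giving PSNR = 10·log10(255²/100) ≈ 28.13 dB.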
6.4 setting the parameter value of λ of the residual block connected by the jumper, and λ is 0.1,0.2, …, 1;
6.5 initializing parameters of progressive depth residual network and setting training parameters
Initializing parameters in a progressive depth residual error network into Gaussian distribution with the mean value of 0 and the standard deviation of 0.001, and initializing and setting the deviation to be 0; setting a learning rate, iteration times and the number of batch training samples; in this embodiment, the learning rate is initially set to 0.0001, the iteration number epoch is initially set to 100, and the batch training sample number batchsize is initially set to 32;
6.6 training progressive depth residual error network model
6.6.1 training the progressive depth residual error network model with the HDF5 training data set file generated in step 4, using the parameters set in step 6.5; if the network does not converge, repeating step 6.5 until the network converges;
6.6.2 continuing to train the progressive depth residual error network model; when the maximum number of iterations is reached, the training is finished; otherwise, repeating step 6.6.2 until the maximum number of iterations is reached;
6.7 testing of progressive depth residual network model
Testing the network model obtained in the step 6.6 by using the test data set, and recording the obtained peak signal-to-noise ratio and the structural similarity value; then returning to the step 6.4, setting different lambda values, continuously testing and recording the obtained peak signal-to-noise ratio and the structural similarity value; finally, comparing peak signal-to-noise ratios and structural similarity values obtained by using different lambda values, selecting the lambda value corresponding to the highest peak signal-to-noise ratio and structural similarity value as the lambda parameter value of the residual block connected by the jumper, and storing the trained progressive depth residual error network model;
Step 7: inputting the low-resolution image into the progressive depth residual error network model and outputting the reconstructed high-resolution image.
Claims (4)
1. A super-resolution image reconstruction method based on a progressive depth residual error network is characterized by comprising the following steps:
Step 1: selecting a training data set and a test data set, and performing rotation and scaling processing on the training data set images to expand the training data set;
Step 2: carrying out 1/N down-sampling on the training data set images obtained in step 1, where N is the scaling factor;
Step 3: respectively cutting the original training data set images and the low-resolution images obtained in step 2 into image blocks of H × W and H/N × W/N pixels;
Step 4: taking the original image blocks and the low-resolution image blocks at the same positions in step 3 as high-resolution/low-resolution sample pairs to generate a training data set file in HDF5 format;
Step 5: building a progressive depth residual network
5.1 designing the jumper-connected residual block
The jumper-connected residual block consists of two residual units, an outer convolutional layer and a jumper connection; each residual unit consists of two inner convolutional layers, an activation function and a jumper connection; the two residual units and the outer convolutional layer are connected end to end, their output is multiplied by λ, and the input of the residual block is then combined with this scaled output through the jumper connection to form the output of the jumper-connected residual block;
5.2 setting the internal parameters of the jumper-connected residual block
Setting parameters including the number of convolution kernels, the sizes of the convolution kernels, filling, moving step length and an activation function;
5.3 constructing a deep residual network
Connecting 5 jumper-connected residual blocks end to end to form a deep residual network;
5.4 building a progressive depth residual network
The progressive depth residual error network is divided into 2 levels, each level completes super-resolution reconstruction of 2X scaling factors, and further realizes super-resolution reconstruction of 4X scaling factors; each level of progressive depth residual error network consists of a depth residual error network and a sub-pixel convolution layer, wherein in each level of progressive depth residual error network, the depth residual error network is used for extracting the characteristics of an input characteristic image, and then the sub-pixel convolution is used for up-sampling the extracted characteristics;
5.5 setting parameters of a progressive depth residual network
Setting parameters including the number of convolution kernels of the input convolution layer, the output convolution layer and the sub-pixel convolution layer, the size of the convolution kernels, the moving step length and filling;
step 6: training progressive depth residual network
6.1 constructing a mean square error function as a loss function;
6.2, updating parameters of the progressive depth residual error network through an optimization algorithm;
6.3 using the peak signal-to-noise ratio and the structural similarity as evaluation indexes to objectively evaluate the reconstruction performance of the progressive depth residual error network model;
6.4 setting the parameter value of λ of the residual block connected by the jumper, and λ is 0.1,0.2, …, 1;
6.5 initializing parameters of progressive depth residual network and setting training parameters
Initializing parameters in a progressive depth residual error network into Gaussian distribution with the mean value of 0 and the standard deviation of 0.001, and initializing and setting the deviation to be 0; setting a learning rate, iteration times and the number of batch training samples;
6.6 training progressive depth residual error network model
6.6.1 training the progressive depth residual error network model with the HDF5 training data set file generated in step 4, using the parameters set in step 6.5; if the network does not converge, repeating step 6.5 until the network converges;
6.6.2 continuing to train the progressive depth residual error network model; when the maximum number of iterations is reached, the training is finished; otherwise, repeating step 6.6.2 until the maximum number of iterations is reached;
6.7 testing the progressive deep residual network model
Test the progressive deep residual network model obtained in step 6.6 on the test data set and record the resulting peak signal-to-noise ratio and structural similarity values; then return to step 6.4, set a different λ value, and continue testing and recording the resulting peak signal-to-noise ratio and structural similarity values; finally, compare the peak signal-to-noise ratio and structural similarity values obtained with the different λ values, select the λ value corresponding to the highest peak signal-to-noise ratio and structural similarity as the λ parameter of the skip-connected residual block, and save the trained progressive deep residual network model;
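The λ sweep in steps 6.4 through 6.7 amounts to a grid search over the candidate values. A hypothetical sketch, where `train_and_evaluate` stands in for training the network at a given λ and measuring test PSNR and SSIM:

```python
def select_lambda(train_and_evaluate):
    """Train/test once per candidate lambda and keep the value whose
    test (PSNR, SSIM) pair is highest, as in steps 6.4-6.7."""
    candidates = [round(0.1 * i, 1) for i in range(1, 11)]  # 0.1 ... 1.0
    results = {lam: train_and_evaluate(lam) for lam in candidates}
    best = max(results, key=lambda lam: results[lam])  # compares (psnr, ssim)
    return best, results[best]
```

In practice `train_and_evaluate` would run steps 6.5 and 6.6 and return the measured scores on the test set.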
step 7: inputting the low-resolution image into the progressive deep residual network model and outputting the reconstructed high-resolution image.
2. The super-resolution image reconstruction method based on the progressive deep residual network of claim 1, wherein in step 2 a bicubic interpolation algorithm is used to down-sample the images.
3. The super-resolution image reconstruction method based on the progressive deep residual network of claim 1, wherein in step 6.2 the Adam optimization algorithm is used.
4. The super-resolution image reconstruction method based on the progressive deep residual network of claim 1, wherein in step 6.3 the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) are calculated by formula (11) and formula (12):

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(f(i,j)-\hat{f}(i,j)\bigr)^2}\right) \tag{11}$$

$$\mathrm{SSIM} = \frac{(2\mu_f\mu_{\hat{f}}+C_1)(2\sigma_{f\hat{f}}+C_2)}{(\mu_f^2+\mu_{\hat{f}}^2+C_1)(\sigma_f^2+\sigma_{\hat{f}}^2+C_2)} \tag{12}$$

where M, N denote the size of the image, $f$ denotes the true high-resolution image, $\hat{f}$ denotes the reconstructed high-resolution image, $\mu_f$ and $\mu_{\hat{f}}$ denote the mean gray values of the true high-resolution image and the reconstructed image respectively, $\sigma_f^2$ and $\sigma_{\hat{f}}^2$ denote the variances of the true high-resolution image and the reconstructed image respectively, $\sigma_{f\hat{f}}$ denotes the covariance of the true high-resolution image and the reconstructed image, and $C_1$ and $C_2$ are constants with $C_1=(k_1L)^2$, $C_2=(k_2L)^2$, $k_1=0.01$, $k_2=0.03$, and $L$ the dynamic range of the pixel values.
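A NumPy sketch of formulas (11) and (12). Note that, as in the claim, SSIM is computed here from global image statistics; practical SSIM implementations average the same expression over local windows. The default dynamic range L = 255 is an assumption for 8-bit images:

```python
import numpy as np

def psnr(f, f_hat, L=255.0):
    """Peak signal-to-noise ratio per formula (11)."""
    mse = np.mean((f - f_hat) ** 2)
    return 10.0 * np.log10(L ** 2 / mse)

def ssim_global(f, f_hat, L=255.0, k1=0.01, k2=0.03):
    """Structural similarity per formula (12), from global statistics."""
    C1, C2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_f, mu_g = f.mean(), f_hat.mean()
    var_f, var_g = f.var(), f_hat.var()
    cov = ((f - mu_f) * (f_hat - mu_g)).mean()
    return ((2 * mu_f * mu_g + C1) * (2 * cov + C2)) / (
        (mu_f ** 2 + mu_g ** 2 + C1) * (var_f + var_g + C2))
```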
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910920959.XA CN110675321A (en) | 2019-09-26 | 2019-09-26 | Super-resolution image reconstruction method based on progressive depth residual error network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110675321A true CN110675321A (en) | 2020-01-10 |
Family
ID=69079534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910920959.XA Pending CN110675321A (en) | 2019-09-26 | 2019-09-26 | Super-resolution image reconstruction method based on progressive depth residual error network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110675321A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108734659A (en) * | 2018-05-17 | 2018-11-02 | 华中科技大学 | A kind of sub-pix convolved image super resolution ratio reconstruction method based on multiple dimensioned label |
CN109978763A (en) * | 2019-03-01 | 2019-07-05 | 昆明理工大学 | A kind of image super-resolution rebuilding algorithm based on jump connection residual error network |
Non-Patent Citations (5)
Title |
---|
XINTAO WANG, KE YU, SHIXIANG WU, JINJIN GU, YIHAO LIU, CHAO DONG, CHEN CHANGE LOY, YU QIAO, XIAOOU TANG: "ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks", Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pages 1 - 23 * |
ZHAOYANG SONG, XIAOQIANG ZHAO, HONGMEI JIANG: "Gradual deep residual network for super-resolution", Multimedia Tools and Applications, vol. 80, pages 9765 - 9778, XP037406465, DOI: 10.1007/s11042-020-10152-9 * |
DAI QIANG, CHENG XI, WANG YONGMEI, NIU ZIWEI, LIU FEI: "Image super-resolution reconstruction based on a lightweight automatic residual scaling network", Journal of Computer Applications, vol. 40, no. 05, pages 1446 - 1452 * |
ZENG JIEXIAN, NI SHENLONG: "Improved convolutional neural network for single image super-resolution reconstruction", Computer Engineering and Applications, vol. 55, no. 13, pages 1 - 7 * |
WANG YINING, QIN PINLE, LI CHUANPENG, CUI YUHAO: "Improved image super-resolution algorithm based on residual neural network", Journal of Computer Applications, vol. 38, no. 1, pages 246 - 254 * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111407260A (en) * | 2020-03-30 | 2020-07-14 | 华南理工大学 | Electroencephalogram and electrocardio-based fatigue detection method with steering wheel embedded in electrocardio sensor |
CN112699844B (en) * | 2020-04-23 | 2023-06-20 | 华南理工大学 | Image super-resolution method based on multi-scale residual hierarchy close-coupled network |
CN112699844A (en) * | 2020-04-23 | 2021-04-23 | 华南理工大学 | Image super-resolution method based on multi-scale residual error level dense connection network |
CN111597945A (en) * | 2020-05-11 | 2020-08-28 | 济南博观智能科技有限公司 | Target detection method, device, equipment and medium |
CN111597945B (en) * | 2020-05-11 | 2023-08-18 | 济南博观智能科技有限公司 | Target detection method, device, equipment and medium |
CN111681168A (en) * | 2020-06-05 | 2020-09-18 | 杭州电子科技大学 | Low-resolution cell super-resolution reconstruction method based on parallel residual error network |
CN111681168B (en) * | 2020-06-05 | 2023-03-21 | 杭州电子科技大学 | Low-resolution cell super-resolution reconstruction method based on parallel residual error network |
CN111754403B (en) * | 2020-06-15 | 2022-08-12 | 南京邮电大学 | Image super-resolution reconstruction method based on residual learning |
CN111754403A (en) * | 2020-06-15 | 2020-10-09 | 南京邮电大学 | Image super-resolution reconstruction method based on residual learning |
CN111951164A (en) * | 2020-08-11 | 2020-11-17 | 哈尔滨理工大学 | Image super-resolution reconstruction network structure and image reconstruction effect analysis method |
CN111951164B (en) * | 2020-08-11 | 2023-06-16 | 哈尔滨理工大学 | Image super-resolution reconstruction network structure and image reconstruction effect analysis method |
CN112734645A (en) * | 2021-01-19 | 2021-04-30 | 青岛大学 | Light-weight image super-resolution reconstruction method based on characteristic distillation multiplexing |
CN112734645B (en) * | 2021-01-19 | 2023-11-03 | 青岛大学 | Lightweight image super-resolution reconstruction method based on feature distillation multiplexing |
CN113256496A (en) * | 2021-06-11 | 2021-08-13 | 四川省人工智能研究院(宜宾) | Lightweight progressive feature fusion image super-resolution system and method |
CN113421188A (en) * | 2021-06-18 | 2021-09-21 | 广东奥普特科技股份有限公司 | Method, system, device and storage medium for image equalization enhancement |
CN113421188B (en) * | 2021-06-18 | 2024-01-05 | 广东奥普特科技股份有限公司 | Method, system, device and storage medium for image equalization enhancement |
CN113538307A (en) * | 2021-06-21 | 2021-10-22 | 陕西师范大学 | Synthetic aperture imaging method based on multi-view super-resolution depth network |
CN113674151A (en) * | 2021-07-28 | 2021-11-19 | 南京航空航天大学 | Image super-resolution reconstruction method based on deep neural network |
CN113674185A (en) * | 2021-07-29 | 2021-11-19 | 昆明理工大学 | Weighted average image generation method based on fusion of multiple image generation technologies |
CN113674185B (en) * | 2021-07-29 | 2023-12-08 | 昆明理工大学 | Weighted average image generation method based on fusion of multiple image generation technologies |
CN116524199A (en) * | 2023-04-23 | 2023-08-01 | 江苏大学 | Image rain removing method and device based on PReNet progressive network |
CN116524199B (en) * | 2023-04-23 | 2024-03-08 | 江苏大学 | Image rain removing method and device based on PReNet progressive network |
CN116452696A (en) * | 2023-06-16 | 2023-07-18 | 山东省计算中心(国家超级计算济南中心) | Image compressed sensing reconstruction method and system based on double-domain feature sampling |
CN116452696B (en) * | 2023-06-16 | 2023-08-29 | 山东省计算中心(国家超级计算济南中心) | Image compressed sensing reconstruction method and system based on double-domain feature sampling |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675321A (en) | Super-resolution image reconstruction method based on progressive depth residual error network | |
CN111754403B (en) | Image super-resolution reconstruction method based on residual learning | |
CN109064396B (en) | Single image super-resolution reconstruction method based on deep component learning network | |
CN108550115B (en) | Image super-resolution reconstruction method | |
CN108122197B (en) | Image super-resolution reconstruction method based on deep learning | |
CN111192200A (en) | Image super-resolution reconstruction method based on fusion attention mechanism residual error network | |
CN111861884B (en) | Satellite cloud image super-resolution reconstruction method based on deep learning | |
CN103413286B (en) | United reestablishing method of high dynamic range and high-definition pictures based on learning | |
CN107784628B (en) | Super-resolution implementation method based on reconstruction optimization and deep neural network | |
CN111932461A (en) | Convolutional neural network-based self-learning image super-resolution reconstruction method and system | |
CN111127325B (en) | Satellite video super-resolution reconstruction method and system based on cyclic neural network | |
CN111681166A (en) | Image super-resolution reconstruction method of stacked attention mechanism coding and decoding unit | |
CN111861886B (en) | Image super-resolution reconstruction method based on multi-scale feedback network | |
CN109636721A (en) | Video super-resolution method based on confrontation study and attention mechanism | |
CN112288632A (en) | Single image super-resolution method and system based on simplified ESRGAN | |
CN113538234A (en) | Remote sensing image super-resolution reconstruction method based on lightweight generation model | |
CN111369466A (en) | Image distortion correction enhancement method of convolutional neural network based on deformable convolution | |
CN115496663A (en) | Video super-resolution reconstruction method based on D3D convolution intra-group fusion network | |
CN114463183A (en) | Image super-resolution method based on frequency domain and spatial domain | |
Yang et al. | A survey of super-resolution based on deep learning | |
CN115222592A (en) | Underwater image enhancement method based on super-resolution network and U-Net network and training method of network model | |
CN115526779A (en) | Infrared image super-resolution reconstruction method based on dynamic attention mechanism | |
CN109819256B (en) | Video compression sensing method based on feature sensing | |
CN110047038B (en) | Single-image super-resolution reconstruction method based on hierarchical progressive network | |
CN117132472B (en) | Forward-backward separable self-attention-based image super-resolution reconstruction method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||