CN106991646B - Image super-resolution method based on dense connection network - Google Patents
- Publication number
- CN106991646B (application CN201710193665.2A)
- Authority
- CN
- China
- Prior art keywords
- network
- image
- resolution
- layer
- dense
- Prior art date
- Legal status: Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses an image super-resolution method based on a dense connection network. By increasing the depth of the convolutional neural network and introducing a large number of skip connections into the deep network, the method effectively solves the vanishing-gradient problem during back-propagation in deep networks, optimizes the flow of information through the network, and improves the super-resolution reconstruction capability of the convolutional neural network. At the same time, the method effectively combines low-level features with high-level abstract features, reduces model parameters, and compresses the deep network model, thereby improving the efficiency of image super-resolution reconstruction. In addition, by introducing deep supervision, super-resolution images can be reconstructed at different depths of the network, which optimizes the training of the deep network and allows a suitable network depth to be selected at test time according to the computing capability of the test device. Finally, the invention trains on image sets of multiple magnifications, so the resulting model can perform image super-resolution at multiple scales without training a separate model for each magnification.
Description
Technical Field
The invention relates to the field of computer vision and artificial intelligence technology, in particular to an image super-resolution method based on a dense connection network.
Background
In the field of computer vision, most problems are now being addressed with deep neural networks, with wide success. In many computer vision tasks, such as face recognition, target detection and tracking, and image retrieval, algorithms using deep neural network models greatly outperform traditional algorithms. In the task of image super-resolution reconstruction, recent work has likewise begun to exploit the nonlinear feature representation capability of convolutional neural networks to improve reconstruction quality. A search of the prior art finds that the patents "An image super-resolution reconstruction method" (Chinese patent publication No. CN105976318A, published 2016.09.28) and "A convolutional neural network image super-resolution reconstruction method based on learning rate adaptation" (Chinese patent publication No. CN106228512A, published 2016.12.14) use deep learning for image super-resolution reconstruction and obtain better results than conventional interpolation methods. However, these patents adopt only a 3-layer convolutional neural network structure, so their nonlinear feature representation and image reconstruction capabilities are limited. The performance of recent neural network models such as AlexNet, VGG and ResNet has been greatly improved mainly by scaling up in width and depth. Therefore, researching and designing a deeper network model can greatly help improve the reconstruction performance of image super-resolution.
The simplest way to deepen a network model is to stack the basic building blocks (e.g., convolutional layers and activation layers) together. However, as networks become deeper, the difficulty of training and convergence increases accordingly. During training, gradient signals must be back-propagated from the topmost layer to the bottommost layer of the network in order to update the model parameters. For a traditional neural network model with only a few layers, convergence can be achieved this way. However, for a network model with tens of layers, by the time the gradient signal has propagated back to the lowest layers it has almost vanished, and the parameters of the lower layers cannot be updated and optimized effectively. Direct stacking therefore degrades algorithm performance. To train a deep network effectively, the VDSR algorithm presented at the international conference CVPR in 2016 adopted techniques such as gradient clipping and residual learning, so that a 20-layer convolutional neural network model could be optimized and converged effectively, greatly improving super-resolution reconstruction performance over previous network models (such as Chinese patents CN105976318A and CN106228512A). However, the VDSR algorithm still simply stacks convolutional and activation layers, which hinders the flow of gradient information and makes the optimization of even deeper networks difficult. Moreover, such simple stacking cannot make effective use of the features trained at each layer, and the network model parameters are enormous.
For example, the 20-layer network of the VDSR algorithm requires more than 700,000 model parameters, which not only makes optimization difficult but also increases the computational complexity of super-resolution reconstruction.
Recently proposed architectures such as the residual network ResNet and the dense network DenseNet attempt to solve the optimization problem of extremely deep networks by introducing skip connections. A large number of skip connections effectively shortens the path between the lower and upper layers of the network, optimizing the flow of information through the network and effectively solving the vanishing-gradient problem of deep networks. In addition, the dense network structure supports feature reuse, strengthens feature propagation, reduces model parameters, and lowers the computational complexity of the model. The invention makes full use of these advantages of dense networks, applies them to the image super-resolution task for the first time, and proposes the SRDenseNet algorithm, greatly improving the reconstruction performance of deep networks for image super-resolution. Furthermore, the proposed SRDenseNet algorithm is combined with deep supervision, so that the parameters of each layer of the network model converge more effectively and rapidly, accelerating training and further improving super-resolution reconstruction performance. Finally, the proposed algorithm integrates multi-scale information, so that the trained network model can effectively reconstruct at multiple super-resolution magnification factors.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image super-resolution method based on a dense connection network, which improves the image super-resolution reconstruction effect of a plurality of magnification factors, greatly reduces model parameters, effectively compresses a deep neural network model and improves the reconstruction efficiency of the image super-resolution.
The technical scheme adopted by the invention is as follows:
an image super-resolution method based on a dense connection network comprises the following steps:
a) generating a multi-scale image training set (I_LR, I_HR) according to different interpolation magnifications;
b) constructing a dense network module: the dense network module comprises n layers of network structures arranged sequentially along the transmission direction, where n is an integer greater than 1; each layer of the network structure comprises one convolutional layer and one activation layer, and the features obtained by the convolutions of the preceding layers are concatenated into every subsequent layer, so that the features of the convolutional layer of each layer are expressed as:
X_n = H_n([X_1, X_2, …, X_{n-1}])    (1)
where X_n is the feature of the convolutional layer of the n-th layer of the network structure, and [X_1, X_2, …, X_{n-1}] is the set of features of the convolutional layers of layers 1 through n-1. In this way, the features trained by the lower layers can be fed directly into the last layer of the module, effectively combining low-level network features with top-level abstract features;
c) building a convolutional neural network model, which comprises an input convolutional layer, an activation layer and L dense network modules arranged sequentially along the network transmission direction; the output of each dense network module is additionally connected to a convolutional layer serving as a reconstruction network;
d) selecting the image training set (I_LR, I_HR), inputting the low-resolution image I_LR and the high-resolution image I_HR, and comparing the reconstructed image of each dense network module's reconstruction network with the input high-resolution image to obtain multiple loss functions of the convolutional neural network, specifically expressed as:

L_i(w, b) = (1/2) · || I_HR − f_i(w, b, I_LR) ||²    (2)

where f_i(w, b, I_LR) is the prediction result of the i-th reconstruction network, and w and b are respectively the convolution kernel parameters and bias parameters of the neural network;
e) iteratively solving for the convolutional neural network model parameters w and b using the Adam optimization algorithm, forming a network mapping between the low-resolution image and the high-resolution image;
f) since a multi-scale training set is used, the network can be applied at different magnifications: the trained convolutional neural network model parameters w and b are used to reconstruct the input low-resolution image into a high-resolution image, and the corresponding quantization indexes PSNR and SSIM are calculated.
Further, the magnification factor used in the step a) includes 2 times, 3 times and 4 times, and forms a multi-scale training image set.
Further, the step a) generates low-resolution and high-resolution image sets at different interpolation magnifications using the ImageNet data set, forming a paired image training set (I_LR, I_HR).
Further, the low-resolution images and the high-resolution images in the image training set of the step a) are converted into a YCbCr space, and are trained by using a Y channel.
Further, the network structure of the dense network module in step b) has 8 layers, the convolution kernel size of the convolutional layers is 3 × 3, and the activation function of the activation layers is the rectified linear unit (ReLU) function.
Further, the number of output feature maps of the dense network module is controlled by introducing a feature growth rate k in the step b), and k × n feature maps are output at the nth layer in the dense network module.
Further, the number L of the dense network modules in step c) is 8.
Further, the Adam algorithm in step e) dynamically adjusts the learning rate of each parameter by using the first moment estimation and the second moment estimation of the gradient.
Further, the larger the two indexes PSNR and SSIM in step f), the smaller the difference between the reconstructed image and the original high resolution image.
By adopting the above technical scheme, the invention uses the latest dense-connection convolutional neural network techniques to effectively solve the vanishing-gradient problem of extremely deep networks, optimizes the flow of information among the network layers, improves the reconstruction quality of image super-resolution, effectively compresses the network model parameters, and improves reconstruction efficiency. The specific innovations are as follows: (1) First, the super-resolution algorithm of the invention uses multiple dense network modules for the first time. Within each module, every layer is connected to the other layers of the module, so that back-propagated information always has a direct path inside the module; this optimizes information flow through an extremely deep network and effectively solves the training problem of deep networks. (2) Second, efficient use of low-level features is achieved through the dense network structure. In deep networks, the lower feature layers generally capture edge information of an image, while the higher layers learn more abstract features. By stacking feature layers, the dense network structure lets the super-resolution reconstruction process make full use of both the edge information of the lower layers and the abstract features of the higher layers. (3) Because feature layers are reused, fewer new features need to be learned at each layer, which reduces model parameters, effectively shrinks the network model, and speeds up computation at test time.
(4) In addition, the invention introduces deep supervision: a reconstruction network is attached to each dense network module, which improves the reconstruction capability of each module and makes deeper network models trainable. Moreover, because the network can reconstruct a high-resolution image at different depths, the network depth can be chosen at the test device according to its computing capability. For example, on a computer with high-performance parallel GPU computation, the reconstruction result can be taken from a deeper dense network module, while on a mobile phone with limited computing capability it can be taken from the first or second dense network module. (5) Finally, the invention trains on images at multiple scales, so the trained network can perform super-resolution reconstruction at magnifications of 2 to 4 times without training a separate deep network model for each magnification.
Drawings
The invention is described in further detail below with reference to the accompanying drawings and the detailed description;
FIG. 1 is a schematic flow chart of an image super-resolution method based on a dense connection network according to the present invention;
FIG. 2 is a model structure diagram of an image super-resolution method based on a dense connection network according to the present invention;
FIG. 3 is a dense network configuration diagram of an image super-resolution method based on a dense connection network according to the present invention;
FIG. 4 is a low resolution image of an input convolutional neural network;
FIG. 5 is a high resolution reconstruction effect graph based on a conventional bicubic interpolation algorithm;
FIG. 6 is a high resolution reconstruction effect graph of the Aplus algorithm based on dictionary learning;
FIG. 7 is a reconstruction effect graph of SRCNN algorithm based on 3-layer convolutional neural network;
FIG. 8 is a graph of the reconstruction effectiveness of a VDSR algorithm based on a 20-layer convolutional neural network;
fig. 9 is a reconstruction effect diagram of the image super-resolution method based on the dense connection network according to the present invention.
Detailed Description
As shown in FIG. 1, the invention discloses an image super-resolution method based on a dense connection network, which makes full use of the advantages of the dense connection network, deepens a convolutional neural network model and improves the reconstruction effect of the image super-resolution. The method specifically comprises the following steps:
a) generating a multi-scale image training set (I_LR, I_HR) according to different interpolation magnifications; further, low-resolution and high-resolution image sets are generated using the ImageNet data set, forming the paired image set (I_LR, I_HR).
The invention randomly extracts 60,000 images I_HR from the ImageNet database, applies Gaussian blur, and interpolates them to the low-resolution space, with interpolation factors of 2, 3 and 4 using bicubic interpolation. The low-resolution images are then bicubically interpolated back to the high-resolution space to obtain the processed images I_LR, forming the image set (I_LR, I_HR). The invention further extracts matched sub-image pairs of size 61 × 61 from this image set and shuffles their storage order to form the final image training set. In addition, a given RGB image is converted to YCbCr space, and all super-resolution operations are trained on the Y channel.
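The Y-channel extraction and 61 × 61 patch cutting described above can be sketched as follows. This is a minimal NumPy sketch: the BT.601 full-range conversion is assumed for RGB-to-Y, and the helper names `rgb_to_y` and `extract_subimages` are illustrative, not from the patent.

```python
import numpy as np

def rgb_to_y(rgb):
    """Convert an HxWx3 RGB image (0-255 floats) to the Y channel of
    YCbCr (BT.601 full-range assumed), as used for training."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def extract_subimages(y, size=61, stride=61):
    """Cut a Y-channel image into size x size sub-images
    (the text uses 61 x 61 matched patches)."""
    patches = []
    h, w = y.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            patches.append(y[i:i + size, j:j + size])
    return patches

gray = np.full((122, 122, 3), 128.0)   # uniform gray test image
y = rgb_to_y(gray)
patches = extract_subimages(y)
print(len(patches), patches[0].shape)  # 4 patches of shape (61, 61)
```

The same patch extraction would be applied in lockstep to I_LR and I_HR so that the sub-images stay paired.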
b) constructing a dense network module: as shown in fig. 3, the dense network module comprises n layers of network structures arranged sequentially along the network transmission direction, where n is an integer greater than 1, and each layer comprises one convolutional layer and one activation layer. The features obtained by each layer's convolution are concatenated with those of all preceding layers, so that the features trained by the lower layers can be fed directly into the last layer of the module, effectively combining low-level network features with top-level abstract features. The features of the convolutional layer of each layer can be expressed as:
X_n = H_n([X_1, X_2, …, X_{n-1}])    (1)
where X_n is the feature of the n-th layer, and [X_1, X_2, …, X_{n-1}] is the set of features of layers 1 through n-1. As shown in fig. 3, all layers in the module are connected, so that gradient information can be transmitted directly from the top layer to the bottom layer during back-propagation, solving the vanishing-gradient problem caused by increasing network depth.
Specifically, the dense network module in this embodiment comprises 8 layers in total, each consisting of a convolutional layer and an activation layer, where the activation function is the rectified linear unit (ReLU) function and the convolution kernel size of all convolutional layers is 3 × 3.
Further, the number of feature maps output by H_n(·) is the feature growth rate k. Since the input of each layer is the concatenation of all previous layers' outputs, each layer need not output as many feature maps as in a conventional network; the feature growth rate k is used to control the number of channels of the network feature maps. Within a dense network module, the n-th layer outputs k × n feature maps. In the invention the feature growth rate k is set to 16, with 8 layers in each dense network module, so each dense network module outputs 128 feature maps.
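The concatenation pattern of equation (1) and the channel bookkeeping implied by the growth rate k can be illustrated with a toy NumPy dense block. This is only a sketch: random 1 × 1 channel mixing stands in for the trained 3 × 3 convolutions (an assumption for illustration), but the shapes follow the text, with k = 16 and 8 layers yielding 128 output feature maps.

```python
import numpy as np

def dense_block(x, n_layers=8, k=16, seed=0):
    """Toy dense block: each layer sees the concatenation of all
    previous feature maps [X_1, ..., X_{n-1}] and emits k new maps
    (random 1x1 mixing + ReLU stands in for conv 3x3 + ReLU)."""
    rng = np.random.default_rng(seed)
    features = [x]                                 # accumulated maps
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=0)     # dense concatenation
        w = rng.standard_normal((k, inp.shape[0]))
        out = np.maximum(0.0, np.einsum('ci,ihw->chw', w, inp))  # ReLU
        features.append(out)
    # maps produced inside the block: k * n_layers in total
    return np.concatenate(features[1:], axis=0)

x = np.ones((1, 8, 8))        # one input feature map, 8x8 spatial
y = dense_block(x)
print(y.shape)                # (128, 8, 8): k * n = 16 * 8 maps
```

Note how the input channel count of layer n grows as 1 + k·(n−1), which is exactly what makes each layer's own output budget small.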
c) building a convolutional neural network model comprising an input convolutional layer, an activation layer and L dense network modules arranged sequentially along the network transmission direction, as shown in fig. 2; a convolutional layer is connected after each dense network module to serve as a reconstruction network. Specifically, the number L of dense network modules in this embodiment is 8.
d) selecting the image training set (I_LR, I_HR), inputting the low-resolution image I_LR and the high-resolution image I_HR to the convolutional neural network model, and comparing the reconstructed image of each dense network module's reconstruction network with the input high-resolution image to obtain multiple loss functions of the convolutional neural network, specifically expressed as:

L_i(w, b) = (1/2) · || I_HR − f_i(w, b, I_LR) ||²    (2)

where f_i(w, b, I_LR) is the prediction result of the i-th reconstruction network, and w and b are respectively the convolution kernel parameters and bias parameters of the neural network. In addition, to accelerate the convergence of the deep network, the invention also adopts residual images: the network predicts the difference between the high-resolution image and the low-resolution image. The neural network can thus be trained on the high-frequency information lost in the low-resolution images, removing the redundant reconstruction of the low-frequency information in the images.
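The deeply supervised residual loss described above can be sketched in NumPy. A simple mean-squared-error per reconstruction branch is assumed here (an illustrative form, since only the qualitative description survives in the text), and `deep_supervision_loss` is a hypothetical helper name.

```python
import numpy as np

def deep_supervision_loss(predictions, i_lr, i_hr):
    """Sum of per-branch MSE losses. With residual learning, each
    reconstruction branch predicts the high-frequency residual
    I_HR - I_LR rather than the full image."""
    target = i_hr - i_lr                  # residual target
    losses = [np.mean((p - target) ** 2) for p in predictions]
    return sum(losses), losses

i_lr = np.zeros((4, 4))
i_hr = np.ones((4, 4))
preds = [np.full((4, 4), 0.5), np.ones((4, 4))]  # two branch outputs
total, per = deep_supervision_loss(preds, i_lr, i_hr)
print(round(total, 4))   # 0.25: branch losses are 0.25 and 0.0
```

Summing the branch losses is what gives every dense network module its own direct supervision signal.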
e) iteratively solving for the convolutional neural network model parameters w and b using the Adam optimization algorithm, forming a network mapping between the low-resolution image and the high-resolution image. The Adam algorithm dynamically adjusts the learning rate of each parameter using first- and second-moment estimates of the gradients; the first-moment decay rate β1 is set to 0.9. The initial learning rate is set to 0.0001, 16 samples are drawn at random for each forward pass, and the algorithm runs for 1,000,000 iterations.
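A single Adam update as described above (moment estimates with bias correction) can be sketched as follows. β1 = 0.9 and the learning rate 1e-4 follow the text; β2 = 0.999 and ε = 1e-8 are the usual defaults and are assumptions here.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: moving averages of the gradient (m) and its
    square (v) give each parameter an adaptive step size."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)          # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0])
m = v = np.zeros(1)
w, m, v = adam_step(w, np.array([2.0]), m, v, t=1)
print(w)   # first step moves by about lr regardless of gradient scale
```

The bias correction is what makes the very first steps well-scaled even though m and v start at zero.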
f) since a multi-scale training set is used, the network can be applied at different magnifications. The trained convolutional neural network model parameters w and b are used to reconstruct the input low-resolution image into a high-resolution image, and the corresponding quantization indexes PSNR and SSIM are calculated. The larger these two indexes are, the smaller the difference between the reconstructed image and the original high-resolution image.
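PSNR, the first quantization index mentioned, has a closed form that can be computed directly; an 8-bit peak value of 255 is assumed, and SSIM is omitted here as it is considerably more involved.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB; larger values mean the
    reconstruction is closer to the reference image."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 16.0)       # uniform error of 16 -> MSE = 256
print(round(psnr(a, b), 2))     # about 24.05 dB
```

In super-resolution evaluation, PSNR is conventionally computed on the Y channel only, matching the training setup described above.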
To verify the super-resolution reconstruction quality of the algorithm, the invention was tested on the common test set Set5 and compared with other algorithms. Figs. 4-9 show a super-resolution reconstruction example compared with several other algorithms: fig. 4 is the low-resolution image input to the neural network; fig. 5 is the high-resolution reconstruction of the conventional bicubic interpolation algorithm; fig. 6 is the high-resolution reconstruction of the Aplus algorithm based on dictionary learning; fig. 7 is the reconstruction of the SRCNN algorithm based on a 3-layer convolutional neural network; fig. 8 is the reconstruction of the VDSR algorithm based on a 20-layer convolutional neural network; and fig. 9 is the reconstruction of the SRDenseNet algorithm proposed by the invention. As the figures show, even when the input image is blurred, the proposed SRDenseNet algorithm reconstructs the details of the image well and produces a clearer result. Meanwhile, as the quantitative indexes in the table show, the reconstruction result of the SRDenseNet algorithm is closer to the original high-resolution image. In addition, although the SRDenseNet algorithm employs a deeper network with 65 layers in total, its model parameters are fewer than those of the 20-layer VDSR network: the 20-layer VDSR network requires more than 700,000 model parameters to be optimized, while the 65-layer SRDenseNet network of the invention requires about 500,000. The model can thus be compressed while the network is deepened, ensuring reconstruction efficiency while improving super-resolution reconstruction quality.
Table 1: quantitative indexes of several different algorithms on the Set5 test set.
Claims (9)
1. An image super-resolution method based on a dense connection network, characterized by comprising the following steps:
a) generating a multi-scale image training set (I_LR, I_HR) according to different interpolation magnifications;
b) constructing a dense network module: the dense network module comprises n layers of network structures arranged sequentially along the network transmission direction, where n is an integer greater than 1; each layer of the network structure comprises one convolutional layer and one activation layer, and the features obtained by the convolutions of the preceding layers are concatenated into every subsequent layer, so that the features of the convolutional layer of each layer are expressed as:
X_n = H_n([X_1, X_2, …, X_{n-1}])    (1)
where X_n is the feature of the convolutional layer of the n-th layer of the network structure, and [X_1, X_2, …, X_{n-1}] is the set of features of the convolutional layers of layers 1 through n-1;
c) building a convolutional neural network model, wherein the convolutional neural network model comprises an input convolutional layer, an activation layer and L dense network modules, which are sequentially arranged along the transmission direction; a convolution layer is respectively connected to the back of each dense network module to serve as a reconstruction network;
d) selecting an image training set (I)LR,IHR) As a training set, a low resolution image I is inputLRAnd high resolution image IHRThen, the reconstructed image of the reconstructed network of each dense network module is compared with the input image of the convolutional neural network model to obtain a plurality of loss functions of the convolutional neural network, which are specifically expressed as:
wherein f_i(w, b, I_LR) is the prediction result of the i-th reconstruction network, and w and b are respectively the convolution template parameters and the bias parameters of the neural network;
e) iteratively solving for the convolutional neural network model parameters w and b using the Adam optimization algorithm, thereby forming a network mapping between the low-resolution image and the high-resolution image;
f) reconstructing the input low-resolution image into a high-resolution image using the convolutional neural network model parameters w and b obtained by training, and computing the corresponding quantitative evaluation indexes PSNR and SSIM.
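Steps b) through d) of claim 1 can be sketched in numpy as follows. This is a hedged illustration, not the patented implementation: 1×1 channel-mixing matmuls stand in for the 3×3 convolutions, and the shapes, growth rate k, and random weights are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_module(x, weights):
    """Dense block (eq. 1): layer n convolves the concatenation of all
    earlier features, X_n = H_n([X_1, ..., X_{n-1}]), then applies ReLU.
    A 1x1 convolution (channel-mixing matmul) stands in for each 3x3 conv."""
    features = [x]
    for W in weights:                                  # W: (in_ch, k)
        cat = np.concatenate(features, axis=-1)        # [X_1, ..., X_{n-1}]
        features.append(np.maximum(cat @ W, 0.0))      # conv + ReLU
    return np.concatenate(features, axis=-1)

def branch_loss(pred, target):
    """Deep supervision (step d): each reconstruction branch gets its own
    L2 loss against the same high-resolution target."""
    return float(np.mean((pred - target) ** 2))

# Toy setup: 16x16 feature map, c0 input channels, growth rate k, n layers.
c0, k, n = 8, 4, 3
x = rng.standard_normal((16, 16, c0))
weights = [rng.standard_normal((c0 + i * k, k)) * 0.1 for i in range(n)]
out = dense_module(x, weights)
assert out.shape[-1] == c0 + n * k      # k new feature maps per layer

# One loss per reconstruction branch; training minimizes their sum.
target = rng.standard_normal((16, 16, 1))
branches = [rng.standard_normal((16, 16, 1)) for _ in range(n)]
total_loss = sum(branch_loss(b, target) for b in branches)
assert total_loss > 0.0
```

Note the key property of dense connectivity: each layer sees every earlier layer's output directly, so gradients reach early layers without vanishing, which is what allows the deeper models the description claims.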
2. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the magnification factors adopted in step a) comprise 2×, 3× and 4×, forming a multi-scale training image set.
3. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: step a) uses the ImageNet data set to generate low-resolution and high-resolution image sets at different interpolation magnification factors, forming a paired image training set (I_LR, I_HR).
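The pair generation of step a) can be sketched as follows; this is an assumption-laden illustration, where `make_pair` is a hypothetical helper and nearest-neighbour resampling stands in for the bicubic interpolation a real pipeline would use.

```python
import numpy as np

def make_pair(hr, scale):
    """Build a training pair (I_LR, I_HR): downsample the high-resolution
    image by `scale`, then upsample back to the original size so the two
    images align pixel-to-pixel.  Nearest-neighbour resampling stands in
    for bicubic interpolation here."""
    lr_small = hr[::scale, ::scale]                               # downsample
    lr = np.repeat(np.repeat(lr_small, scale, axis=0), scale, axis=1)
    return lr, hr

# A 12x12 toy image admits all three magnification factors (2, 3, 4).
hr = np.arange(144, dtype=float).reshape(12, 12)
training_set = {s: make_pair(hr, s) for s in (2, 3, 4)}
for s, (lr, target) in training_set.items():
    assert lr.shape == target.shape        # aligned (I_LR, I_HR) pair
```

Upsampling the low-resolution image back to full size before the network sees it is what lets one model handle all three scales: the network always maps same-sized inputs to same-sized outputs.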
4. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the low-resolution and high-resolution images of the image training set in step a) are converted into the YCbCr color space, and the algorithm is trained using the Y channel.
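A minimal sketch of the Y-channel extraction in claim 4. The patent does not specify the conversion coefficients; the ITU-R BT.601 luma weights used below are the conventional assumption.

```python
import numpy as np

def rgb_to_y(rgb):
    """Extract the luma (Y) channel from an RGB image with values in
    [0, 1], using ITU-R BT.601 coefficients (an assumed, conventional
    choice -- the patent only names the YCbCr space)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

white = np.ones((2, 2, 3))
assert np.allclose(rgb_to_y(white), 1.0)   # pure white has full luma
```

Training on Y alone is a common super-resolution shortcut: luma carries most perceived detail, while the Cb/Cr chroma channels can simply be upscaled by interpolation.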
5. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the network structure of the dense network module in step b) has 8 layers, the convolution kernel size of the convolution layers is 3 × 3, and the activation function of the activation layers is the rectified linear unit (ReLU) function.
6. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: in the step b), the number of output feature maps of the dense network module is controlled by introducing a feature growth rate k, and k × n feature maps are output at the nth layer in the dense network module.
7. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the number L of the dense network modules in the step c) is 8.
8. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the Adam algorithm in step e) dynamically adjusts the learning rate of each parameter using first-moment and second-moment estimates of the gradient.
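One Adam update as described in claim 8 can be sketched in numpy as follows. The hyperparameter values are the usual published defaults, assumed rather than taken from the patent.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: m and v are exponential moving averages of the
    gradient (first moment) and squared gradient (second moment); the
    bias-corrected ratio gives each parameter its own effective step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction, step count t >= 1
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
grad = np.array([0.5, -0.5])           # gradient of some loss at w
w2, m, v = adam_step(w, grad, m, v, t=1)
# Each coordinate moves opposite its gradient, with magnitude ~lr.
assert w2[0] < 1.0 and w2[1] > -2.0
```

The per-parameter scaling by sqrt(v_hat) is the "dynamic adjustment" the claim refers to: parameters with consistently large gradients get smaller effective steps, and vice versa.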
9. The image super-resolution method based on the dense connection network as claimed in claim 1, wherein: the larger the PSNR and SSIM indexes in step f) are, the smaller the difference between the reconstructed image and the original high-resolution image.
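The two indexes of claim 9 can be sketched as follows. PSNR is computed in its standard form; the SSIM shown is a simplified single-window variant (the usual SSIM averages this statistic over local windows, e.g. 11×11), so it is only an approximation of the index the patent refers to.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: higher means the reconstruction
    is closer to the reference (infinite for an exact match)."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(x, y, peak=1.0):
    """Single-window SSIM over the whole image -- a simplification of
    the locally windowed standard index."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
close, far = ref + 0.01, ref + 0.10                 # small vs large error
assert psnr(ref, close) > psnr(ref, far)            # larger PSNR = closer
assert ssim_global(ref, close) > ssim_global(ref, far)
```

Both assertions encode exactly the monotonicity that claim 9 states: a smaller difference from the original high-resolution image yields larger index values.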
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710193665.2A CN106991646B (en) | 2017-03-28 | 2017-03-28 | Image super-resolution method based on dense connection network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106991646A CN106991646A (en) | 2017-07-28 |
CN106991646B true CN106991646B (en) | 2020-05-26 |
Family
ID=59413032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710193665.2A Active CN106991646B (en) | 2017-03-28 | 2017-03-28 | Image super-resolution method based on dense connection network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106991646B (en) |
Families Citing this family (60)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11113800B2 (en) | 2017-01-18 | 2021-09-07 | Nvidia Corporation | Filtering image data using a neural network |
US11676247B2 (en) * | 2017-07-31 | 2023-06-13 | Institut Pasteur | Method, device, and computer program for improving the reconstruction of dense super-resolution images from diffraction-limited images acquired by single molecule localization microscopy |
CN107767413B (en) * | 2017-09-20 | 2020-02-18 | 华南理工大学 | Image depth estimation method based on convolutional neural network |
US10643306B2 (en) * | 2017-10-11 | 2020-05-05 | Qualcomm Incorporated | Image signal processor for processing images |
US11263782B2 (en) | 2017-10-11 | 2022-03-01 | Qualcomm Incorporated | Image signal processor for processing images |
CN107818302A (en) * | 2017-10-20 | 2018-03-20 | 中国科学院光电技术研究所 | Non-rigid multiple dimensioned object detecting method based on convolutional neural networks |
CN108022212B (en) * | 2017-11-24 | 2022-07-01 | 腾讯科技(深圳)有限公司 | High-resolution picture generation method, generation device and storage medium |
CN109949255B (en) * | 2017-12-20 | 2023-07-28 | 华为技术有限公司 | Image reconstruction method and device |
CN109949332B (en) * | 2017-12-20 | 2021-09-17 | 北京京东尚科信息技术有限公司 | Method and apparatus for processing image |
CN108182669A (en) * | 2018-01-02 | 2018-06-19 | 华南理工大学 | A kind of Super-Resolution method of the generation confrontation network based on multiple dimension of pictures |
CN108257105B (en) * | 2018-01-29 | 2021-04-20 | 南华大学 | Optical flow estimation and denoising joint learning depth network model for video image |
CN108427986A (en) * | 2018-02-26 | 2018-08-21 | 中车青岛四方机车车辆股份有限公司 | A kind of production line electrical fault prediction technique and device |
CN109903221B (en) | 2018-04-04 | 2023-08-22 | 华为技术有限公司 | Image super-division method and device |
CN108536144A (en) * | 2018-04-10 | 2018-09-14 | 上海理工大学 | A kind of paths planning method of fusion dense convolutional network and competition framework |
CN108615222A (en) * | 2018-04-17 | 2018-10-02 | 中国矿业大学 | A kind of depth convolutional network image super-resolution system based on multipair multi-connection |
CN108764287B (en) * | 2018-04-24 | 2021-11-16 | 东南大学 | Target detection method and system based on deep learning and packet convolution |
CN108805166B (en) * | 2018-05-03 | 2019-11-15 | 全球能源互联网研究院有限公司 | It is a kind of to establish image classification neural network model and image classification method, device |
CN108629737B (en) * | 2018-05-09 | 2022-11-18 | 复旦大学 | Method for improving JPEG format image space resolution |
CN108765290A (en) * | 2018-05-29 | 2018-11-06 | 天津大学 | A kind of super resolution ratio reconstruction method based on improved dense convolutional neural networks |
CN108765291A (en) * | 2018-05-29 | 2018-11-06 | 天津大学 | Super resolution ratio reconstruction method based on dense neural network and two-parameter loss function |
CN109035184A (en) * | 2018-06-08 | 2018-12-18 | 西北工业大学 | A kind of intensive connection method based on the deformable convolution of unit |
CN108830211A (en) * | 2018-06-11 | 2018-11-16 | 厦门中控智慧信息技术有限公司 | Face identification method and Related product based on deep learning |
CN109146784B (en) * | 2018-07-27 | 2020-11-20 | 徐州工程学院 | Image super-resolution reconstruction method based on multi-scale generation countermeasure network |
CN109146788B (en) * | 2018-08-16 | 2023-04-18 | 广州视源电子科技股份有限公司 | Super-resolution image reconstruction method and device based on deep learning |
CN109064405A (en) * | 2018-08-23 | 2018-12-21 | 武汉嫦娥医学抗衰机器人股份有限公司 | A kind of multi-scale image super-resolution method based on dual path network |
CN108897045A (en) * | 2018-08-28 | 2018-11-27 | 中国石油天然气股份有限公司 | Deep learning model training method and seismic data noise attenuation method, device and equipment |
CN109064407B (en) * | 2018-09-13 | 2023-05-05 | 武汉大学 | Dense connection network image super-resolution method based on multi-layer perceptron layers |
CN109345476A (en) * | 2018-09-19 | 2019-02-15 | 南昌工程学院 | High spectrum image super resolution ratio reconstruction method and device based on depth residual error network |
CN110956575B (en) | 2018-09-26 | 2022-04-12 | 京东方科技集团股份有限公司 | Method and device for converting image style and convolution neural network processor |
CN112868033A (en) * | 2018-10-01 | 2021-05-28 | 谷歌有限责任公司 | System and method for providing machine learning model with adjustable computational requirements |
CN109360152A (en) * | 2018-10-15 | 2019-02-19 | 天津大学 | 3 d medical images super resolution ratio reconstruction method based on dense convolutional neural networks |
CN109598197A (en) * | 2018-10-31 | 2019-04-09 | 大连大学 | The design method of hourglass model based on intensive link block |
CN109544457A (en) * | 2018-12-04 | 2019-03-29 | 电子科技大学 | Image super-resolution method, storage medium and terminal based on fine and close link neural network |
CN109767386A (en) * | 2018-12-22 | 2019-05-17 | 昆明理工大学 | A kind of rapid image super resolution ratio reconstruction method based on deep learning |
CN109741260B (en) * | 2018-12-29 | 2023-05-12 | 天津大学 | Efficient super-resolution method based on depth back projection network |
CN109919840A (en) * | 2019-01-21 | 2019-06-21 | 南京航空航天大学 | Image super-resolution rebuilding method based on dense feature converged network |
CN109816592B (en) * | 2019-01-26 | 2022-05-13 | 福州大学 | Single-frame image continuous scale super-resolution method based on convolutional neural network |
CN109978003A (en) * | 2019-02-21 | 2019-07-05 | 上海理工大学 | Image classification method based on intensive connection residual error network |
CN109949223B (en) * | 2019-02-25 | 2023-06-20 | 天津大学 | Image super-resolution reconstruction method based on deconvolution dense connection |
CN109903228B (en) * | 2019-02-28 | 2023-03-24 | 合肥工业大学 | Image super-resolution reconstruction method based on convolutional neural network |
CN109993109A (en) * | 2019-03-29 | 2019-07-09 | 成都信息工程大学 | Image character recognition method |
CN112116526B (en) * | 2019-06-19 | 2024-06-11 | 中国石油化工股份有限公司 | Super-resolution method of torch smoke image based on depth convolution neural network |
CN110415170B (en) * | 2019-06-24 | 2022-12-16 | 武汉大学 | Image super-resolution method based on multi-scale attention convolution neural network |
CN110378799B (en) * | 2019-07-16 | 2022-07-12 | 东北大学 | Alumina comprehensive production index decision method based on multi-scale deep convolution network |
CN110738231B (en) * | 2019-07-25 | 2022-12-27 | 太原理工大学 | Method for classifying mammary gland X-ray images by improving S-DNet neural network model |
CN110647934B (en) * | 2019-09-20 | 2022-04-08 | 北京百度网讯科技有限公司 | Training method and device for video super-resolution reconstruction model and electronic equipment |
CN110740350B (en) * | 2019-10-31 | 2021-12-21 | 北京金山云网络技术有限公司 | Image processing method, image processing device, terminal equipment and computer readable storage medium |
CN110827963A (en) * | 2019-11-06 | 2020-02-21 | 杭州迪英加科技有限公司 | Semantic segmentation method for pathological image and electronic equipment |
CN110782396B (en) * | 2019-11-25 | 2023-03-28 | 武汉大学 | Light-weight image super-resolution reconstruction network and reconstruction method |
EP3828809A1 (en) * | 2019-11-28 | 2021-06-02 | Samsung Electronics Co., Ltd. | Electronic apparatus and controlling method thereof |
CN111080688A (en) * | 2019-12-25 | 2020-04-28 | 左一帆 | Depth map enhancement method based on depth convolution neural network |
CN111179314B (en) * | 2019-12-30 | 2023-05-02 | 北京工业大学 | Target tracking method based on residual intensive twin network |
CN111402138A (en) * | 2020-03-24 | 2020-07-10 | 天津城建大学 | Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion |
CN111553861B (en) * | 2020-04-29 | 2023-11-24 | 苏州大学 | Image super-resolution reconstruction method, device, equipment and readable storage medium |
CN112150360A (en) * | 2020-09-16 | 2020-12-29 | 北京工业大学 | IVUS image super-resolution reconstruction method based on dense residual error network |
CN112907446B (en) * | 2021-02-07 | 2022-06-07 | 电子科技大学 | Image super-resolution reconstruction method based on packet connection network |
CN113222823B (en) * | 2021-06-02 | 2022-04-15 | 国网湖南省电力有限公司 | Hyperspectral image super-resolution method based on mixed attention network fusion |
CN113536971A (en) * | 2021-06-28 | 2021-10-22 | 中科苏州智能计算技术研究院 | Target detection method based on incremental learning |
CN113538235B (en) * | 2021-06-30 | 2024-01-09 | 北京百度网讯科技有限公司 | Training method and device for image processing model, electronic equipment and storage medium |
CN113409195A (en) * | 2021-07-06 | 2021-09-17 | 中国标准化研究院 | Image super-resolution reconstruction method based on improved deep convolutional neural network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016019484A1 (en) * | 2014-08-08 | 2016-02-11 | Xiaoou Tang | An apparatus and a method for providing super-resolution of a low-resolution image |
CN106204449A (en) * | 2016-07-06 | 2016-12-07 | 安徽工业大学 | A kind of single image super resolution ratio reconstruction method based on symmetrical degree of depth network |
CN106228512A (en) * | 2016-07-19 | 2016-12-14 | 北京工业大学 | Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method |
Non-Patent Citations (3)
Title |
---|
Research on Super-Resolution Reconstruction Algorithms for Zoom Sequence Images; Luo Mingwei et al.; Journal of Nanjing University (Natural Science); 20170131 (No. 1); pp. 165-172 *
Research on Adaptive Normalized Convolution Super-Resolution Reconstruction Algorithms; Wang Huilan et al.; Computer Engineering and Applications; 20160807 (No. 8); pp. 191-195 *
A Survey of Super-Resolution Algorithms; Pu Jian et al.; Journal of Shandong University (Engineering Science); 20090131 (No. 1); pp. 27-32 *
Also Published As
Publication number | Publication date |
---|---|
CN106991646A (en) | 2017-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106991646B (en) | Image super-resolution method based on dense connection network | |
CN113240580B (en) | Lightweight image super-resolution reconstruction method based on multi-dimensional knowledge distillation | |
Song et al. | Efficient residual dense block search for image super-resolution | |
CN108765296B (en) | Image super-resolution reconstruction method based on recursive residual attention network | |
CN109087273B (en) | Image restoration method, storage medium and system based on enhanced neural network | |
CN111932461A (en) | Convolutional neural network-based self-learning image super-resolution reconstruction method and system | |
Luo et al. | Lattice network for lightweight image restoration | |
CN110648292A (en) | High-noise image denoising method based on deep convolutional network | |
CN113538234A (en) | Remote sensing image super-resolution reconstruction method based on lightweight generation model | |
CN112699844A (en) | Image super-resolution method based on multi-scale residual error level dense connection network | |
Hui et al. | Two-stage convolutional network for image super-resolution | |
CN111986085A (en) | Image super-resolution method based on depth feedback attention network system | |
CN116168197A (en) | Image segmentation method based on Transformer segmentation network and regularization training | |
CN110288529B (en) | Single image super-resolution reconstruction method based on recursive local synthesis network | |
CN116091315A (en) | Face super-resolution reconstruction method based on progressive training and face semantic segmentation | |
CN116580184A (en) | YOLOv 7-based lightweight model | |
Wang et al. | Underwater image super-resolution using multi-stage information distillation networks | |
CN113379606B (en) | Face super-resolution method based on pre-training generation model | |
Jiang et al. | Toward pixel-level precision for binary super-resolution with mixed binary representation | |
Liu et al. | Facial image inpainting using multi-level generative network | |
CN116823610A (en) | Deep learning-based underwater image super-resolution generation method and system | |
CN116797456A (en) | Image super-resolution reconstruction method, system, device and storage medium | |
CN115660979A (en) | Attention mechanism-based double-discriminator image restoration method | |
CN113191947B (en) | Image super-resolution method and system | |
CN116152263A (en) | CM-MLP network-based medical image segmentation method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||