CN111861961A - Multi-scale residual error fusion model for single image super-resolution and restoration method thereof - Google Patents

Multi-scale residual error fusion model for single image super-resolution and restoration method thereof Download PDF

Info

Publication number
CN111861961A
CN111861961A CN202010726231.6A CN202010726231A CN111861961A CN 111861961 A CN111861961 A CN 111861961A CN 202010726231 A CN202010726231 A CN 202010726231A CN 111861961 A CN111861961 A CN 111861961A
Authority
CN
China
Prior art keywords
image
fusion
features
feature
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010726231.6A
Other languages
Chinese (zh)
Other versions
CN111861961B (en
Inventor
赵佰亭
胡锐
贾晓芬
郭永存
黄友锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui University of Science and Technology
Original Assignee
Anhui University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui University of Science and Technology filed Critical Anhui University of Science and Technology
Priority to CN202010726231.6A priority Critical patent/CN111861961B/en
Publication of CN111861961A publication Critical patent/CN111861961A/en
Application granted granted Critical
Publication of CN111861961B publication Critical patent/CN111861961B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention discloses a multi-scale residual fusion model for single image super-resolution and a restoration method thereof, comprising a feature extraction module, a nonlinear mapping module and a reconstruction module which are connected in sequence; the characteristic extraction module is used for extracting shallow features such as a plurality of lines, contours and the like of the low-resolution LR image, and the plurality of complementary shallow features can solve the problem that a single feature does not represent enough LR image; the nonlinear mapping module extracts high-frequency characteristics by establishing a nonlinear mapping relation between input and output and transmits the high-frequency characteristics to the reconstruction module by dense connection; and the reconstruction module is used for fusing the shallow feature and the LR image after further extracting features such as details, textures and the like from the high-frequency features which are connected and fused, so as to complete the reconstruction of the high-resolution HR image. The method is used for super-resolution reconstruction and restoration of a single image, improves the resolution of the image, can enhance the contour characteristics of the reconstructed image while ensuring the reconstruction efficiency, and obviously improves the image quality.

Description

Multi-scale residual error fusion model for single image super-resolution and restoration method thereof
Technical Field
The invention belongs to the technical field of image reconstruction, and relates to a single-image super-resolution multi-scale residual error fusion model and a restoration method thereof.
Background
Images serve as a propagation medium in which a large amount of information content is contained. The demand of high-resolution images on satellite remote sensing, public safety, unmanned driving, medical diagnosis and the like is increasing, the higher the resolution of the images is, the more information can be provided, and how to accurately utilize and extract the information in the images plays an indispensable role in the development of the future machine vision field in China. Due to the influence of the current imaging technology, cost limitation, external environment and the like, the resolution of the obtained image does not reach the standard in practical application, and the subsequent processing and further use are seriously influenced. Therefore, there is a need to develop effective solutions to improve the resolution of images, to obtain higher resolution and higher quality images.
The super-resolution reconstruction method of a single image is mainly divided into three categories: interpolation based, reconstruction based and learning based. Interpolation-based and reconstruction-based methods are simple and easy to implement, but actual physical parameters of the image are not considered, the image is only amplified in mathematical logic, the improvement of image quality and edge details and texture features is limited, and the reconstruction effect of the method cannot necessarily reach the required standard. With the development of science and technology, people begin to turn their eyes to learning-based methods, and their core idea is to obtain additional a priori knowledge by training other samples, so as to help restore the reconstruction of image details.
In recent years, deep learning technology is rapidly developed, and as one of learning-based algorithms in reconstruction, super-resolution reconstruction of images by using a neural network is researched and paid attention to. For example, Dong et al first applies deep learning knowledge to the reconstruction technology, and the proposed SRCNN avoids manual design of a feature extraction method, realizes learning of an image itself, and thus realizes image reconstruction, which is detailed in Dong, c.; loy, c.c.; he, k.; tang, X.image super-resolution using the concept of the network in Proceedings of the IEEE Conference on computer Vision and Pattern Recognition, Zurich, Switzerland, 6-12 September 2014; pp.184-199 ". Kim et al propose VDSR based on residual network concept, solving the gradient dispersion problem caused by deep networks by accumulation of feature maps, see "Kim, j., j.lee, k., and Lee, k.m., Accurate image super-resolution computing over compressive networks, in proc. In order to accelerate the convergence speed of the network and reduce the network parameters, Kim has proposed DRCN, which is described in detail in "Kim, j., and Lee, j.k., and Lee, k.m., deep-reliable communication network for image super-resolution, in proc. Tai et al propose DRRN, which further improves the reconstruction effect by combining a residual network and a cyclic network, as detailed in "Tai, y., Yang, j., and Liu, x., Image super-resolution video future reconstruction network, inproc. ieee conf.computer.vis.pattern recognition (CVPR), jul.2017, pp.3147-3155. Lai et al propose LapSRN, which combines the Laplacian pyramid of the traditional image algorithm with deep learning, and realizes the reconstruction of the image by constructing an upper and a lower layer branch structure, which are detailed in Lai, W.S., Huang, J.B., Ahuja, N., and Yang, M.H., deep Laplacian pyramid and acid super-resolution, InProc.IEEE Conf.computer.Vis.Pattern recognition (CVPR), Jul.2017, pp.624-632 ". Tai et al propose the deepest persistent memory network MemNet for image restoration with densely connecting stacked persistent memory and multiple memory blocks, detailed in "Tai, y., Yang, j., Liu, x., and Xu, c., MemNet: a persistent memory for image restoration, in proc.ieee int.conf.com.vis. (ICCV), oct.2017, pp.4549-4557.
From the above, the traditional method based on interpolation and reconstruction cannot meet various image super-resolution reconstruction work, while the learning-based method can reconstruct a high-resolution image, but a short plate exists in feature extraction, so that the subsequent reconstructed image has edge blurring and unobvious feature details, and the quality of the obtained reconstructed image still has an improved space.
With the development of science and technology, especially the new generation of scientific revolution represented by artificial intelligence, people are more utilizing machines to process various information. The high-resolution image is a key place for ensuring a visual machine to correctly process tasks, how to effectively restore an image of an imaging device, and how to improve the image resolution to enable a visual effect to be restored to a natural scene as soon as possible, so that information contained in the image is presented to the greatest extent, subsequent research and application are facilitated, and the problem to be solved is urgently solved at present.
Disclosure of Invention
The embodiment of the invention provides a multi-scale residual fusion model for single image super-resolution and a restoration method thereof, which are used for solving the problems of high-frequency detail loss, edge blurring and the like in the traditional reconstruction method.
The technical scheme adopted by the embodiment of the invention is that a multi-scale residual fusion model for single image super-resolution comprises a feature extraction module, a nonlinear mapping module and a reconstruction module which are sequentially connected;
the feature extraction module is used for extracting shallow features such as a plurality of lines and contours of the low-resolution LR image, and the plurality of complementary shallow features can solve the problem that a single feature does not represent enough LR image;
the nonlinear mapping module extracts high-frequency characteristics by establishing a nonlinear mapping relation between input and output and transmits the high-frequency characteristics to the reconstruction module by dense connection;
and the reconstruction module is used for fusing the shallow feature and the LR image after further extracting features such as details, textures and the like from the high-frequency features which are connected and fused, so as to complete the reconstruction of the high-resolution HR image.
The embodiment of the invention adopts another technical scheme that the restoration method of the single image super-resolution multi-scale residual fusion model comprises the following steps:
step S1, inputting an LR image into a feature extraction module of a multi-scale residual fusion model of single image super-resolution;
step S2, a feature extraction module performs feature extraction on the LR image to obtain shallow layer features;
s3, sending the shallow layer features into a nonlinear mapping module, and extracting 5-layer features through the nonlinear mapping module;
and S4, sending the 5 layers of features into a reconstruction module, connecting and fusing the 5 layers of features into a tensor through dense connection by the reconstruction module to obtain global features, carrying out three-level processing on the global features to obtain three-level features, and realizing reconstruction of the HR image by utilizing the three-level features.
The embodiment of the invention has the beneficial effects that a multi-scale residual fusion model of single image super-resolution and a restoration method thereof are provided, a feature extraction module is designed, convolution kernels with different sizes are used for completely extracting the image features of the input LR image and connecting the image features together to form multi-range context information, and the feature extraction module can realize the complementarity of different types of features. By cascading five cross-merge modules, a non-linear mapping module (NMM) is proposed, into which dense connections and local residual connections are integrated to achieve fusion of multi-level and multi-scale features, the NMM can obtain necessary high-frequency details to reconstruct texture details of the HR image. An HR image reconstruction process is established that combines external residual, global residual and sub-pixel convolution, global residual concatenation is used to merge low-level features extracted from the shallow layer with high-level features extracted from the deep layer, low-frequency information in LR images is combined with high-frequency information inferred from the network using external residual concatenation, and sub-pixel convolution is used at the last layer of the network to achieve image up-sampling. In addition, the image restoration part introduces the LR image into the last link of the HR image reconstruction by external residual connection, and enhances the correlation between the pixel points by utilizing the same information of the LR image and the reconstructed HR image, namely the similar topological structures of the LR image and the reconstructed HR image. On one hand, the reconstruction process can avoid the problem of image characteristic information loss caused by interpolation and amplification of a low-resolution image, and can improve the super-resolution reconstruction effect; on the other hand, the problem that relevance among pixel points is damaged in the periodic arrangement process of sub-pixel up-sampling convolution can be solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a multi-scale residual fusion model for single-image super-resolution according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a cross fusion module CM in a multi-scale residual fusion model for single image super-resolution according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a dual-channel residual fusion module RDM in the multi-scale residual fusion model for single-image super-resolution according to the embodiment of the present invention.
Fig. 4 is a comparison graph of reconstruction effect of the restoration method of the multi-scale residual fusion model for single image super-resolution according to the embodiment of the present invention and other algorithms on x 3 scale factor of "img _ 092" in Urban 100.
Fig. 5 is a comparison graph of reconstruction effects of the restoration method of the multi-scale residual fusion model for single image super-resolution according to the embodiment of the present invention and other algorithms on x 4 scale factors of "img _ 098" in Urban 100.
Fig. 6 is a comparison graph of the reconstruction effect of the multi-scale residual fusion model for single image super-resolution according to the embodiment of the invention, compared with other algorithms for carrying out x 4 scale factors on the beijing No. two satellite image.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The inventor researches and discovers that the existing single image super-resolution reconstruction method based on deep learning has poor reconstruction effect, and the reconstructed image mainly has the following defects: (1) the reconstructed image has blurred edges, unobvious profile details, poor overall visual effect and low image quality; (2) the existing deep learning model improves the reconstruction effect of the network by increasing the depth of the network, so that the network has the problem of gradient dispersion, and the reconstruction result has a larger difference from the actual result due to the fact that part of input low-feature information is lost in the reconstruction process. To address the defect, an embodiment of the present invention provides a multi-scale residual fusion model for super-resolution reconstruction of a single image, which has a structure as shown in fig. 1 and includes a feature extraction module, a nonlinear mapping module and a reconstruction module, which are connected in sequence, wherein the feature extraction module is configured to extract shallow features such as a plurality of lines and contours of a low-resolution LR image, and the plurality of complementary shallow features can solve the problem that a single feature does not sufficiently characterize the LR image. And the nonlinear mapping module extracts high-frequency characteristics by establishing a nonlinear mapping relation between input and output and transmits the high-frequency characteristics to the reconstruction module by dense connection. And the reconstruction module is used for fusing the shallow feature and the LR image after further extracting features such as details, textures and the like from the high-frequency features which are connected and fused, so as to complete the reconstruction of the high-resolution HR image.
The feature extraction module in the embodiment of the invention consists of a multi-scale extraction module and a feature processing module. The multi-scale extraction module is used for extracting characteristic information such as low-level contour details of objects with different sizes in the low-resolution images under different receptive fields to obtain a multi-scale characteristic diagram; and the characteristic processing module is used for adjusting the parameter quantity of the characteristic diagram output by the multi-scale extraction module and reducing the parameter quantity, so that the difficulty of network model training is reduced.
For the size of the convolution kernel, a large-scale convolution kernel has the capability of learning complex features but loses detail information, while a small-scale convolution kernel is easy to learn, can bring more abundant detail information, but has poor capability of learning complex features. The method has the advantages that the low-resolution images are processed by the convolution layers with various scales together to extract the features, the defect of insufficient feature information extracted by a single scale is complemented, and more detailed information is extracted from the low-resolution images to prepare for subsequent reconstruction work. In the embodiment of the invention, the feature extraction module comprises a multi-scale extraction module and a feature processing part. The multi-scale extraction module is divided into a multi-scale convolution extraction part and a fusion part, wherein the multi-scale convolution part is composed of three parallel convolution layers, and the convolution kernel size of the first layer is 3 multiplied by 3; the convolution kernel size of the second layer is 5 × 5; the convolution kernel size of the third layer is 9 × 9. And the fusion part superposes and connects the results of three layers of convolution in the multi-scale extraction module to generate a characteristic diagram which is used as the result of primary extraction of the multi-scale module. To avoid image size variation, all convolution steps are 1 in the present embodiment. The characteristic processing part is formed by two layers of convolution connected in sequence, and the convolution kernel size of the first layer of convolution layer is 1 multiplied by 1; the convolution kernel size of the second convolution layer is 3 x 3; the first layer of convolution is used for reducing network parameters of the feature diagram after fusion and reducing network complexity. The second layer of convolution processes the feature data in preparation for subsequent non-linear mapping. Thus, shallow features are obtained through the multi-scale extraction module.
The nonlinear mapping module consists of 5 cascaded cross fusion modules, and each cross fusion module is formed by sequentially connecting 3 merging operation structures. The merging operation structure is divided into an upper branch and a lower branch, and the upper branch and the lower branch are sequentially composed of convolution layers, an activation layer and a fusion layer, wherein the convolution kernel size of the convolution layer in the upper branch is 3 multiplied by 3, and the convolution kernel size of the convolution layer in the lower branch is 5 multiplied by 5; the active layers are all ReLU active functions; when processing, the input data of the upper and lower branches are fused with each other, the fused result is subsequently and respectively input into the upper and lower branches to be superposed and connected with the activated data, a group of characteristic diagrams are generated, and the characteristic diagrams are continuously input into the next cross-merge mapping. After 3 merging operation structures, a layer of convolution layer with convolution kernel size of 1 multiplied by 1 and step length of 1 is provided, network parameters of the merging operation mapping structure are adjusted, and then the network parameters are superposed and fused with input data of the structure to be output as a result of the module. In deep learning, the problem of gradient diffusion can occur due to the deepening of the network depth, and the flow of information and gradient of the whole network can be further improved by utilizing residual connection and dense connection.
Because different context information can be extracted from the convolution layers with different scales, the information can greatly improve the reconstruction effect for super-resolution reconstruction of images. Therefore, a cascaded cross fusion module is designed to realize fusion and complementation of multi-scale context information, wherein the large-scale convolution is responsible for extracting complex contours, and the small-scale convolution is responsible for extracting detail features. The local residual avoids the information from declining, and the deep extraction of the information is realized. Thereby constructing a nonlinear mapping process of the network.
The reconstruction module comprises a fusion layer, a global residual connection, an external residual connection and an up-sampling layer. The fusion connection densely connects output results of all cross fusion modules in the nonlinear mapping module and fuses the output results into a tensor. And then sequentially passing through a convolutional layer with the convolutional kernel size of 1 × 1 and a convolutional layer with the convolutional kernel size of 3 × 3, wherein the 1 × 1 convolutional layer is used for reducing the number of parameters in the network, the 3 × 3 convolutional layer further processes data, and the result is used as the input of subsequent global residual connection and is fused with the output result of the 3 × 3 convolutional layer in the feature extraction module to establish a residual mechanism. The result of the global residual error sequentially passes through three layers of convolutional layers, and the size of a convolutional kernel of the first layer of convolutional layers is 1 multiplied by 1; the convolution kernels of the two latter layers of convolution layers are both 3 multiplied by 3, external residual connection is established between the output result and the input low-resolution image, wherein the input low-resolution image is processed by the convolution layer with the convolution kernel size of 1 multiplied by 1, channel parameters are adjusted and then residual connection is carried out, and the number of channels of the input characteristic image, namely the characteristic image output by the pixel recombination module, is balanced by the general adjusting parameters, so that subsequent up-sampling can be normally carried out, and finally the RGB three-channel restored image can be output. And subsequently, the result is subjected to an up-sampling module of sub-pixel convolution to realize that the image is enlarged to a specified size, pixel recombination needs to obtain a high-resolution feature map by recombining a plurality of feature maps with low resolution, and in order to ensure that the operation can normally run, the operation has the definition requirement that r must be2The characteristic graph r is the magnification, and the number of the characteristic graphs is recombined every time when the pixel is madeThe amount of the images is reduced, and in order to ensure that the RGB three-channel restored images can be finally output, the number of feature maps output in each step needs to be balanced to ensure that the next operation can be normally carried out. And finally, inputting the result after the up-sampling into a convolution layer with the convolution kernel size of 3 multiplied by 3, and further optimizing and adjusting parameters to realize the reconstruction of a high-resolution image to obtain a reconstructed image Y.
Because the low-resolution images are different in content, the images contain different object information, and there may be relatively small or large objects. However, when convolution processing is performed, a small object can extract little feature information, and the size of the receptive field may also extract other peripheral irrelevant information, which eventually results in some object information being lost as the convolution processing operation is continuously performed. Therefore, a global residual error connection and an external residual error connection are established, the processed information result of the backward line is supplemented by introducing the original image characteristics, and the information of a small object can be presented on the finally output characteristic diagram.
The network model design based on deep learning focuses on the analysis understanding of image content according to application backgrounds such as image classification and image segmentation, the identification of objects which focus more on model design is favored, and the object information is separated from the whole to be identified and classified. The super-resolution reconstruction of the image is to pay attention to the nonlinear mapping relation between the low-resolution image and the high-resolution image, enhance the original weak outline characteristics by extracting the characteristic information of each position in the image, improve the details and the textures, and deduce all the missing high-frequency details by using the low-resolution image to be the key of the reconstruction. The embodiment of the invention provides a multi-scale residual error fusion model of single image super-resolution and a restoration method thereof, in order to fully extract characteristic information in a low-resolution image and reduce high-frequency details to the maximum extent. The method adopts a feature extraction module with convolution kernels of different sizes to extract a plurality of features from an input low-resolution image, and the features are connected in series and then sent to a nonlinear mapping module. The nonlinear mapping module is composed of five cross merging modules, and each module is formed by cascading three residual double-branch merging structures. Such an architecture may facilitate information integration for different branches. Dense connections and residual connections are integrated in the non-linear mapping module, improving the transmission and gradient of information. The nonlinear mapping module is responsible for extracting high-frequency features and sending the high-frequency features to the reconstruction module, and the reconstruction module generates a high-resolution image by adopting an improved sub-pixel sampling layer and combining an external residual error and a global residual error.
The embodiment of the invention provides a multi-scale residual fusion model for single image super-resolution and a restoration method thereof, as shown in figure 1, the multi-scale residual fusion model is carried out according to the following steps:
and step S1, inputting the LR image into a feature extraction module of a multi-scale residual fusion model of single image super-resolution, and sequentially passing through a multi-scale convolution and fusion part.
Step S2, the feature extraction module performs feature extraction on the LR image to obtain a shallow feature, i.e. X in fig. 10
The single scale convolution kernel extracts the bottom layer feature information, which results in more missing feature detail information. The convolution kernel modes of 3_5_9 of 3 × 3, 5 × 5 and 9 × 9 are adopted as convolution kernel scales in the feature extraction module, convolution layers of various scales are utilized to process the low-resolution image together, more detail information can be extracted from the low-resolution image, the feature information extracted by a single scale is complemented, and the low-resolution image detail recovery is facilitated. The feature extraction formula is as follows:
F1=H3×3(X);
F2=H5×5(X);
F3=H9×9(X);
F=[F1,F2,F3];
X0=H3×3(H1×1(F));
where X is the original low resolution image LR of the input, H represents the convolution operator, the subscript represents the size of the convolution kernel used in the layer, F1Is an extracted feature. Similarly, the LR images were convolved using 5 × 5 and 9 × 9 convolution kernels, respectively, to obtain the feature F2And F3. Superposing and fusing convolution results of three scales to obtain F, wherein [ · C]Represents a concat fusion. F, reducing feature dimensionality through 1X 1 convolution, avoiding overlarge training parameter quantity, being beneficial to improving the robust performance of the network, and then further extracting features by using 3X 3 pixel convolution to obtain finally extracted feature X0
S3, sending the shallow layer features into a nonlinear mapping module, and extracting 5-layer features through the nonlinear mapping module;
the nonlinear mapping module comprises 5 cascaded cross-merge modules (CMs). The structure of the CM is shown in figure 2, the CM is fused with dense connection, and is formed by cascading 3 residual double branch fusion structures (RDM), combining the output of the RDM, adjusting the dimension through convolution of a layer of 1 x 1 pixels, and improving information and gradient flow by means of Local Residual Connection (LRC).
The ReLU in CM is the key to implementing non-linear mapping, which helps the network model of the embodiments of the present invention learn the complex features of the input image. Since the convolutional layer is a linear filter with cross-correlation properties. The ReLU has nonlinear characteristics as an activation function of the convolutional layer, and can convert a plurality of input signals of one node into one output signal to realize nonlinear mapping of input and output characteristic images.
The RDM fuses two parallel branches through residual branch, and the structure is as shown in FIG. 3. The data are input into two parallel residual error branches, the upper branch comprises a 3 x 3 convolution layer and a ReLU active layer, the lower branch comprises a 5 x 5 convolution layer and a ReLU active layer, the two branches are connected by using local residual errors and fused with each other, and then the data are merged through concat, so that the fusion and complementation of multi-scale context information are realized. The RDM utilizes the local residual error to avoid the information from declining, and the deep extraction of the information is realized. The branches are connected through local residual errors, the middle 'add' represents a fused feature graph, the number of channels is not changed, and the subsequent 'concat' combines the feature graphs, so that the number of channels is increased.
Taking the jth mapping stage in the ith CM as an example, the mathematical model of RDM is given as:
Figure BDA0002601823860000081
Figure BDA0002601823860000082
wherein, i is 1,2,3,4,5, j is 1,2, 3;
Figure BDA0002601823860000083
and
Figure BDA0002601823860000084
representing inputs to the jth RDM upper and lower branches
Figure BDA0002601823860000085
And
Figure BDA0002601823860000086
performing 3 × 3 and 5 × 5 convolution respectively, and outputting activated by Relu;
Figure BDA0002601823860000087
and
Figure BDA0002601823860000088
satisfy the relation for the output of the jth RDM upper and lower branches
Figure BDA0002601823860000089
I denotes an identity matrix.
After 5 RDMs are cascaded, feature mapping results of upper and lower residual branches are merged
Figure BDA00026018238600000810
And
Figure BDA00026018238600000811
after the dimensionality is adjusted by adopting a layer of 1 × 1 convolution, Local Residual Connection (LRC) is introduced, shallow features are transferred to a high layer, and transfer of information flow is improved. The output of the i, i-1, … m CMs is:
Figure BDA00026018238600000812
wherein D isc(-) represents a mapping function for the fusion of the upper and lower branches "add".
For convenient representation, use
Figure BDA00026018238600000813
Represents input Xi-1And output XiThe mapping relationship between the outputs of the cascaded n CMs can obtain the following results:
Figure BDA00026018238600000814
wherein, X0The input of the first CM is the output of the feature extraction module, and the number n of cascaded CMs is 5.
S4, sending the 5 layers of features into a reconstruction module, connecting and fusing the 5 layers of features into a tensor through dense connection by the reconstruction module to obtain global features, carrying out three-level processing on the global features to obtain three-level features, and realizing reconstruction of an HR image by utilizing the three-level features
In the fusion layer, dense connection is introduced, the outputs of 5 CMs in the nonlinear mapping module are connected into a tensor, the concat is used for fusing the nonlinear mapping result, and the mathematical model is as follows:
XM=[X0,X1,...,Xn];
wherein, XMExtracting global features for fusing local features in all channel segmentation blocks of NMM, and sequentially convolving the global features by 1 × 1 and 3 × 3 to obtain fused primary features
Figure BDA0002601823860000091
Will be provided with
Figure BDA0002601823860000092
And feature F in FEM introduced with Global Residual (GRC)1Performing add fusion to obtain fused secondary features
Figure BDA0002601823860000093
Will be provided with
Figure BDA0002601823860000094
Then sequentially convolving by 5 multiplied by 5, 3 multiplied by 3 and 3 multiplied by 3, further extracting high-frequency characteristic information to obtain fused three-level characteristics
Figure BDA0002601823860000095
The relevant mathematical model is as follows:
Figure BDA0002601823860000096
Figure BDA0002601823860000097
Figure BDA0002601823860000098
in the last up-sampling, the sub-pixel up-sampling convolution does not need to carry out preprocessing on the input image, and the detail characteristics of the image can be greatly reserved. However, the periodic arrangement process easily destroys the relevance between the pixel points, so that the characteristic information cannot be fully utilized to improve the reconstruction effect. The LR image has much the same information as the reconstructed HR image and has a similar topology. LR is introduced into the final link of HR reconstruction by External Residual Connection (ERC), up-sampling is realized by adopting sub-pixel convolution, and the reconstruction of the HR image is completed by adjusting parameters through the final convolution layer.
As can be seen from FIG. 1, the LR image is convolved by 1 × 1 and then is compared with the HR characteristic data to be reconstructed
Figure BDA0002601823860000099
The method has the same characteristic dimension, and after the two adds are fused, the reconstruction of the HR image is realized through a sub-pixel up-sampling layer, and the mathematical model is as follows:
Figure BDA00026018238600000910
Figure BDA00026018238600000911
Figure BDA00026018238600000912
wherein the content of the first and second substances,
Figure BDA00026018238600000913
as a result of the 1X 1 convolution of the input LR image X, T is the image to be reconstructed,
Figure BDA00026018238600000914
for the upsampling result, SUC (. circle.) is the pixel reconstruction operation on the low resolution feature image, r is the scale factor for upsampling and enlarging, c represents the number of channels of the image (the values corresponding to color and grayscale images are 3 and 1, respectively), mod (x, r) and mod (y, r) represent the activation modes, based on r2Different sub-pixel positions in the LR maps are activated in the pixel recombination process, and pixel regions at the same position are extracted to form a region in the HR image Y.
The purpose of single image super resolution is to infer all missing high frequency details from the input Low Resolution (LR) image X, thereby obtaining a High Resolution (HR) image Y.
Given a training data set E ═ X(k),Y (k)1,2,3, | D |, where X ═ 1,2,3(k)And Y(k)Respectively representing a low resolution image and a high resolution image. The SISR reconstruction model is an end-to-end mapping that enables from LR images to HR images. In other words, the goal of the single image super-resolution reconstruction model of our embodiment of the invention is to learn a deductive model from the input LR image X(k)The HR image is deduced.
Figure BDA0002601823860000101
Where Θ is [ ω, b ] a network model parameter, ω is a weight matrix, and b is a bias. The model parameters Θ are determined by minimizing the loss between the reconstructed HR image and the true HR image. We define the loss function as being that,
Figure BDA0002601823860000103
the process of training the MSCM with the training set E is to minimize the loss, finding the optimal parameters for the model Θ. The structure of the MSCM model is shown in FIG. 1, and it is composed of a Feature Extraction Module (FEM), a non-linear mapping module (NMM) and a Reconstruction Module (RM). FEM is responsible for extracting shallow features of LR images and transmitting the shallow features to NMM, NMM is responsible for extracting high-frequency features and sending the high-frequency features to RM, and RM generates HR images by using improved sub-pixel sampling layers.
In order to verify the effectiveness of the super-resolution reconstruction of a single image according to the embodiment of the present invention, different scene images are selected as a test data set, which is similar to the Dong algorithm (Dong, c.; Loy, c.c.; He, k.; Tang, x.image super-resolution using depth dependent networks. in Proceedings of the ieee conference Computer Vision and Pattern Recognition, Zurich, Switzerland, 6-12 separator 2014; pp.184-199); the algorithm of Kim (Kim, j., j.le, k., and Lee, k.m., accurate-resolution using top connected networks, in proc. ieee conf.com.vis. pattern Recognit. (CVPR), jun.2016, pp.1646-1654); the algorithm of Kim (Kim, j., and Lee, j.k., and Lee, k.m., deep-iterative network for image super-resolution, in proc. ieee conf.com.vis.pattern Recognit. (CVPR), jun.2016, pp.1637-1645.); the algorithm of Tai (Tai, y., Yang, j., and Liu, x., Image super-resolution video deep reactive network, in proc. ieee conf.com.vis.pattern Recognit. (CVPR), jul.2017, pp.3147-3155.); the algorithm of Lai (Lai, w.s., Huang, j.b., Ahuja, n., and Yang, m.h., Deep Laplacian pyrad networks for fast and acid super-resolution, in proc.ieee conf.com.vis.pattern recognit. (CVPR), jul.2017, pp.624-632.); the results of Tai's algorithm (Tai, Y., Yang, J., Liu, X., and Xu, C., MemNet: a persistence network for image restoration, in Proc. IEEEInt. Conf. Comp. Vis. (ICCV), Oct.2017, pp.4549-4557.) and the experiments of the present invention were verified by comparative analysis in both principal and objective aspects.
As shown in fig. 4, the single-image super-resolution reconstruction method provided in the embodiment of the present invention and other algorithms perform a reconstruction experiment effect diagram of 3 times of the image of the high-rise building, and perform local contrast on the wall texture. Fig. 4(a) is an HR image corresponding to a tall building, fig. 4(b) is a graph of a reconstruction result of the srnn method of Dong, fig. 4(c) is a graph of a reconstruction effect of the VDSR method of Kim, fig. (d) is a graph of a reconstruction effect of the DRCN method of Kim, fig. 4(e) is a graph of a reconstruction effect of the laprn method of Lai, fig. 4(f) is a graph of a reconstruction effect of the DRRN method of Tai, fig. 4(g) is a graph of a reconstruction effect of the MemNet method of Tai, and fig. 4(h) is a graph of a reconstruction result of the method according to the embodiment of the present invention. It can be seen that the wall texture reconstructed by the method of the embodiment of the invention is closest to the HR image and has obvious contour, while the image texture reconstructed by other algorithms is disordered and the whole image is blurred. Therefore, the method of the embodiment of the invention effectively restores the edge details and the outline of the original high-resolution image and improves the contrast.
As shown in fig. 5, the single-image super-resolution reconstruction method provided in the embodiment of the present invention and other algorithms perform a reconstruction experiment effect diagram of enlarging the station room image by 4 times, and perform local comparison on the window. Fig. 4(a) is an HR image corresponding to a tall building, fig. 4(b) is a graph of a reconstruction result of the srnn method of Dong, fig. 4(c) is a graph of a reconstruction effect of the VDSR method of Kim, fig. (d) is a graph of a reconstruction effect of the DRCN method of Kim, fig. 4(e) is a graph of a reconstruction effect of the laprn method of Lai, fig. 4(f) is a graph of a reconstruction effect of the DRRN method of Tai, fig. 4(g) is a graph of a reconstruction effect of the MemNet method of Tai, and fig. 4(h) is a graph of a reconstruction result of the method according to the embodiment of the present invention. By comparing the local details of the window, we can find that the image reconstructed by the method of the embodiment of the present invention can obtain the most obvious window contour and can well recover the edge details, while other methods cannot effectively recover the contour. Therefore, the method of the embodiment of the invention effectively restores the edge details and the outline of the original high-resolution image and improves the contrast.
As shown in fig. 6, the reconstruction experiment effect diagram obtained by amplifying the image remotely sensed by the beijing satellite No. 2 by 4 times is obtained by the single-image super-resolution reconstruction method provided by the embodiment of the invention and other algorithms, and the local comparison is performed on the airplane. Fig. 4(a) is an HR image corresponding to a tall building, fig. 4(b) is a graph of a reconstruction result of the srnn method of Dong, fig. 4(c) is a graph of a reconstruction effect of the VDSR method of Kim, fig. (d) is a graph of a reconstruction effect of the DRCN method of Kim, fig. 4(e) is a graph of a reconstruction effect of the laprn method of Lai, fig. 4(f) is a graph of a reconstruction effect of the DRRN method of Tai, fig. 4(g) is a graph of a reconstruction effect of the MemNet method of Tai, and fig. 4(h) is a graph of a reconstruction result of the method according to the embodiment of the present invention. By comparing local details of the airplane, it can be found that the image edge features reconstructed by the method of the embodiment of the invention are most obvious and the recovery effect is best, while other methods cannot effectively recover the contour and have the problem of blurring. Therefore, the method of the embodiment of the invention effectively restores the edge details and the outline of the original high-resolution image and improves the sharpness.
In this embodiment, in order to avoid the deviation caused by qualitative analysis, two objective indexes of peak signal-to-noise ratio (PSNR) and Structural Similarity (SSIM) are used for quantitative evaluation, and reconstruction restoration comparisons with three different multiples of 2,3, and 4 times of amplification are performed on four data sets of Set5, Set14, BSD100, and Urban100, as shown in table 1:
TABLE 1 PSNR (dB)/SSIM result comparison data of different methods on different indexes
Figure BDA0002601823860000121
As can be seen from the data in table 1, both PSNR and SSIM of the present embodiment (Our) are greater than the SRCNN, VDSR, DRCN, DRRN, MemNet, and laprn methods. For PSNR and SSIM, the higher the value, the more similar the representation result is to the real image, and the higher the image quality. Table 1 explicitly indicates the average score of the test data for different image datasets under different criteria. Therefore, the method provided by the embodiment of the invention has a great improvement on the peak signal-to-noise ratio and the structural similarity of the reconstructed image, and is superior to other methods.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. The multi-scale residual fusion model for the single image super-resolution is characterized by comprising a feature extraction module, a nonlinear mapping module and a reconstruction module which are sequentially connected;
the feature extraction module is used for extracting shallow features such as a plurality of lines and contours of the low-resolution LR image, and the plurality of complementary shallow features can solve the problem that a single feature does not represent enough LR image;
the nonlinear mapping module extracts high-frequency characteristics by establishing a nonlinear mapping relation between input and output and transmits the high-frequency characteristics to the reconstruction module by dense connection;
and the reconstruction module is used for fusing the shallow feature and the LR image after further extracting features such as details, textures and the like from the high-frequency features which are connected and fused, so as to complete the reconstruction of the high-resolution HR image.
2. The multi-scale residual fusion model for single-image super-resolution according to claim 1, wherein the feature extraction module adopts a 3_5_9 convolution kernel mode of 3 × 3, 5 × 5 and 9 × 9, convolution kernels of three scales are all convolved with an LR image, all feature maps obtained by convolution are sent to a connection fusion operator to complete fusion, and then are sequentially subjected to convolution processing of 1 × 1 and 3 × 3 to obtain final shallow layer features;
all convolution kernels contained in the feature extraction module are 64 channels;
and the final shallow layer features obtained by the feature extraction module are sent to the input end of the nonlinear mapping module.
3. The multi-scale residual fusion model for single image super resolution according to any one of claims 1-2, wherein the nonlinear mapping module is formed by cascading 5 cross fusion modules CM, each CM is formed by cascading 3 dual-channel residual fusion modules RDM, and local residual connection is fused in the RDM;
the nonlinear mapping module inputs the shallow features extracted by the feature extraction module, namely the shallow features are input of the first CM.
4. The multi-scale residual fusion model for single-image super-resolution of claim 3, wherein the dual-channel residual fusion module RDM fuses two parallel branches through a residual branch, and sends the input data to two parallel residual branches, the upper branch comprises a 3 × 3 convolutional layer and a ReLU active layer, the lower branch comprises a 5 × 5 convolutional layer and a ReLU active layer, and the two branches merge the feature data through connection fusion after realizing weighted average fusion by using local residual connection, so as to realize the fusion and complementation of multi-scale context information;
the weighted average fusion in the dual-channel residual fusion module RDM represents a fusion characteristic diagram, and the number of channels is not changed;
the connection fusion in the two-channel residual fusion module RDM represents a merged feature graph, and the number of channels is increased.
5. The multi-scale residual fusion model for single image super-resolution according to any one of claims 1 to 4, wherein the reconstruction module comprises two parts of global feature fusion and image restoration which are connected in sequence;
the global feature fusion part connects the outputs of the 5 CM into a tensor by utilizing dense connection, and obtains a nonlinear mapping result through connection fusion, namely the global feature for reconstruction; the global features are sequentially convolved by 1 × 1 and 3 × 3 to obtain fused primary features; performing weighted average fusion on the primary features and a feature map obtained by convolution of 3 multiplied by 3 in a feature extraction module introduced by using global residual errors to obtain fused secondary features; the secondary features are sequentially convolved by 5 multiplied by 5, 3 multiplied by 3 and 3 multiplied by 3, and high-frequency feature information is further extracted to obtain fused tertiary features;
the image restoration part performs 1 × 1 convolution on an LR image, the LR image and the three-level features obtained by the global feature fusion part have the same feature dimension, the LR image and the three-level features are subjected to weighted average fusion to obtain an image to be reconstructed, pixels on the image to be reconstructed are periodically arranged, and then a 3 × 3 convolution adjustment parameter is performed to realize reconstruction of an HR image;
the image restoration part introduces an LR image into the last link of HR image reconstruction by external residual connection, and enhances the correlation between pixel points by using the same information of the LR image and the reconstructed HR image, namely the similar topological structures of the LR image and the reconstructed HR image.
6. The restoration method of the multi-scale residual fusion model for single image super-resolution according to claims 1-5, characterized by comprising the following steps:
step S1, inputting an LR image into a feature extraction module of a multi-scale residual fusion model of single image super-resolution;
step S2, a feature extraction module performs feature extraction on the LR image to obtain shallow layer features;
s3, sending the shallow layer feature into a nonlinear mapping module, and extracting 5-layer feature X through the nonlinear mapping modulei,i=1,…,5;
Step S4, 5-layer characteristic XiAnd i is 1, …, and 5 is sent to a reconstruction module, the reconstruction module fuses 5 layers of feature connection into a tensor through dense connection to obtain global features, the global features are subjected to three-level processing to obtain three-level features, and the three-level features are utilized to realize the reconstruction of the HR image.
7. The method for restoring the multi-scale residual fusion model at the super resolution of the single image according to claim 6, wherein the mathematical model of the feature extraction module in step S2 is as follows:
F1=H3×3(X);
F2=H5×5(X);
F3=H9×9(X);
F=[F1,F2,F3];
X0=H3×3(H1×1(F))
where X is the original low resolution LR image of the input, H represents the convolution operator, the subscript represents the size of the convolution kernel, F1、F2And F3Represents a feature map extracted by convolving X by 3 × 3, 5 × 5, and 9 × 9, respectively, [ F ]1,F2,F3]Represents a pair F1、F2And F3Performing a ligation fusion operation, F denotes ligation fusion F1、F2And F3The preliminary extracted feature, X, obtained thereafter0And (3) performing 1 × 1 and 3 × 3 convolution on the preliminarily extracted features F to finally obtain shallow features.
8. The method for restoring the multi-scale residual fusion model at the super resolution of the single image according to claim 6, wherein the non-linear mapping module in step S3 extracts 5 layers of features XiThe process of i ═ 1, …,5 is:
the nonlinear mapping module is cascaded with 5 CMs, each CM is formed by cascading 3 RDMs, and the mathematical model of the jth RDM mapping order in the ith CM is as follows:
Figure FDA0002601823850000031
Figure FDA0002601823850000032
wherein, i is 1,2, 5, j is 1,2,3,
Figure FDA0002601823850000033
and
Figure FDA0002601823850000034
representing inputs to the jth RDM upper and lower branches
Figure FDA0002601823850000035
And
Figure FDA0002601823850000036
after 3 x 3 and 5 x 5 convolutions, respectively, through the output of the ReLU activation,
Figure FDA0002601823850000037
and
Figure FDA0002601823850000038
satisfy the relation for the output of the jth RDM upper and lower branches
Figure FDA0002601823850000039
I represents an identity matrix;
after cascading 3 RDMs, combining feature mapping results of upper and lower residual branches
Figure FDA00026018238500000310
And
Figure FDA00026018238500000311
after adjusting the dimensionality by a layer of 1 × 1 convolution, the shallow features are transferred to the upper layers by local residual concatenation, the output of the ith CM is,
Figure FDA00026018238500000312
wherein D isc(-) represents a mapping function of weighted average fusion of the upper and lower branches, XiThe ith layer characteristics extracted by the nonlinear mapping module correspond to the output of the ith CM;
for convenient representation, use
Figure FDA00026018238500000313
Represents input Xi-1And output XiThe mapping relationship between the outputs of the nth CM is,
Figure FDA00026018238500000314
wherein, X0Is firstAnd inputting the CM, namely the shallow feature extracted by the feature extraction module.
9. The restoration method of the multi-scale residual fusion model for single image super resolution according to any one of claims 6 to 8, wherein the mathematical model of the reconstruction module in the step S4 is as follows:
$$X_M = \left[X_0, X_1, \ldots, X_n\right]$$
$$T_1 = H_{3\times3}\left(H_{1\times1}\left(X_M\right)\right)$$
$$T_2 = D_c\left(T_1, F_1\right)$$
$$T_3 = H_{3\times3}\left(H_{3\times3}\left(H_{5\times5}\left(T_2\right)\right)\right)$$
$$T = D_c\left(T_3, H_{1\times1}(X)\right)$$
$$Y = \mathrm{Suc}(T)$$
$$\mathrm{Suc}(T)_{x,\,y,\,c} = T_{\left\lfloor x/r \right\rfloor,\;\left\lfloor y/r \right\rfloor,\;c \cdot r^2 + r \cdot \mathrm{mod}(y, r) + \mathrm{mod}(x, r)}$$
where $[X_0, X_1, \ldots, X_n]$ represents the concatenation-fusion operation on $X_0, X_1, \ldots, X_n$, and $X_M$ is the global feature obtained after concatenating and fusing $X_0, X_1, \ldots, X_n$; $T_1$ is the first-level feature obtained by passing the global feature $X_M$ through 1×1 and 3×3 convolutions in sequence; $T_2$ is the second-level feature obtained by the weighted average of $T_1$ and the feature map $F_1$ in the FEM (feature extraction module), introduced through a global residual connection; $T_3$ is the third-level feature obtained by further extracting high-frequency feature information from $T_2$ through 5×5, 3×3 and 3×3 convolutions in sequence; $D_c(\cdot)$ represents the weighted-average fusion mapping function; $H_{1\times1}(X)$ is a 1×1 convolution of the input LR image $X$; $T$ is the image to be reconstructed; $\mathrm{Suc}(T)$ represents the reconstruction operation that arranges $T$ periodically, and $\mathrm{Suc}(T)_{x,y,c}$ represents the result of this recombination of $T$, where $x$ is the abscissa of a pixel of the HR image, $y$ is the ordinate of a pixel of the HR image, $r$ is the upscaling factor, and $c$ is the number of channels of the image (3 for a color image and 1 for a grayscale image); and $Y$ is the reconstructed HR image.
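Under the same assumptions as before (64 channels, equal $D_c$ weights) and an assumed upscaling factor $r = 2$, a sketch of the reconstruction module may look as follows; nn.PixelShuffle performs the periodic rearrangement denoted $\mathrm{Suc}(T)$ above.

```python
import torch
import torch.nn as nn

class Reconstruction(nn.Module):
    """Sketch of the claim-9 reconstruction module; channel widths and the
    equal D_c fusion weights are assumptions, not taken from the patent."""
    def __init__(self, n_feats=64, n_cm=5, scale=2, img_ch=3):
        super().__init__()
        # 1x1 then 3x3 convolution of the global feature X_M -> first-level T_1.
        self.level1 = nn.Sequential(
            nn.Conv2d((n_cm + 1) * n_feats, n_feats, 1),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )
        # 5x5, 3x3, 3x3 convolutions -> third-level feature T_3.
        self.level3 = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 5, padding=2),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            nn.Conv2d(n_feats, img_ch * scale ** 2, 3, padding=1),
        )
        self.skip = nn.Conv2d(img_ch, img_ch * scale ** 2, 1)  # H_1x1(X)
        self.shuffle = nn.PixelShuffle(scale)  # Suc(T): periodic rearrangement

    def forward(self, xm, f1, x):
        t1 = self.level1(xm)                         # T_1
        t2 = 0.5 * (t1 + f1)                         # T_2 = D_c(T_1, F_1)
        t = 0.5 * (self.level3(t2) + self.skip(x))   # T = D_c(T_3, H_1x1(X))
        return self.shuffle(t)                       # Y: reconstructed HR image
```

A complete forward pass would then compute $X_0$ and $F_1$ with the feature extraction module, pass $X_0$ through the five cascaded CMs, concatenate $[X_0, X_1, \ldots, X_5]$ into $X_M$, and call this module with $(X_M, F_1, X)$.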
CN202010726231.6A 2020-07-25 2020-07-25 Single image super-resolution multi-scale residual error fusion model and restoration method thereof Active CN111861961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010726231.6A CN111861961B (en) 2020-07-25 2020-07-25 Single image super-resolution multi-scale residual error fusion model and restoration method thereof

Publications (2)

Publication Number Publication Date
CN111861961A (en) 2020-10-30
CN111861961B (en) 2023-09-22

Family

ID=72950997

Country Status (1)

Country Link
CN (1) CN111861961B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017189870A1 (en) * 2016-04-27 2017-11-02 Massachusetts Institute Of Technology Stable nanoscale nucleic acid assemblies and methods thereof
KR20190040586A (en) * 2017-10-11 2019-04-19 인하대학교 산학협력단 Method and apparatus for reconstructing single image super-resolution based on artificial neural network
CN109961396A (en) * 2017-12-25 2019-07-02 中国科学院沈阳自动化研究所 A kind of image super-resolution rebuilding method based on convolutional neural networks
CN108537731A (en) * 2017-12-29 2018-09-14 西安电子科技大学 Image super-resolution rebuilding method based on compression multi-scale feature fusion network
CN108376386A (en) * 2018-03-23 2018-08-07 深圳天琴医疗科技有限公司 A kind of construction method and device of the super-resolution model of image
WO2019190017A1 (en) * 2018-03-26 Ajou University Industry-Academic Cooperation Foundation Residual network system for low resolution image correction
EP3637099A1 (en) * 2018-10-08 2020-04-15 Ecole Polytechnique Federale de Lausanne (EPFL) Image reconstruction method based on a trained non-linear mapping
US20200219323A1 (en) * 2019-01-04 2020-07-09 University Of Maryland, College Park Interactive mixed reality platform utilizing geotagged social media
CN109903223A (en) * 2019-01-14 2019-06-18 北京工商大学 A kind of image super-resolution method based on dense connection network and production confrontation network
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks
CN109978785A (en) * 2019-03-22 2019-07-05 中南民族大学 The image super-resolution reconfiguration system and its method of multiple recurrence Fusion Features
CN110276721A (en) * 2019-04-28 2019-09-24 天津大学 Image super-resolution rebuilding method based on cascade residual error convolutional neural networks
CN110969577A (en) * 2019-11-29 2020-04-07 北京交通大学 Video super-resolution reconstruction method based on deep double attention network
CN111192200A (en) * 2020-01-02 2020-05-22 南京邮电大学 Image super-resolution reconstruction method based on fusion attention mechanism residual error network
CN111402128A (en) * 2020-02-21 2020-07-10 华南理工大学 Image super-resolution reconstruction method based on multi-scale pyramid network

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
XIAOLE ZHAO et al.: "Channel Splitting Network for Single MR Image Super-Resolution", arXiv, pages 4-7 *
XIN JIN et al.: "Single image super-resolution with multi-level feature fusion recursive network", Neurocomputing, pages 166-173 *
YANTING HU et al.: "Single Image Super-Resolution via Cascaded Multi-Scale Cross Network", arXiv, pages 1-12 *
YING TAI et al.: "MemNet: A Persistent Memory Network for Image Restoration", 2017 IEEE International Conference on Computer Vision, pages 1-12 *
YING ZILU et al.: "Single image super-resolution reconstruction with multi-scale dense residual network", Journal of Image and Graphics, vol. 24, no. 3, pages 410-419 *
DUAN RAN et al.: "Image super-resolution reconstruction based on multi-scale feature mapping network", Journal of Zhejiang University (Engineering Science), vol. 53, no. 7, pages 1331-1339 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634136B (en) * 2020-12-24 2023-05-23 华南理工大学 Image super-resolution method and system based on image feature rapid stitching
CN112634136A (en) * 2020-12-24 2021-04-09 华南理工大学 Image super-resolution method and system based on image characteristic quick splicing
CN112712488A (en) * 2020-12-25 2021-04-27 北京航空航天大学 Remote sensing image super-resolution reconstruction method based on self-attention fusion
CN112712488B (en) * 2020-12-25 2022-11-15 北京航空航天大学 Remote sensing image super-resolution reconstruction method based on self-attention fusion
CN112801868B (en) * 2021-01-04 2022-11-11 青岛信芯微电子科技股份有限公司 Method for image super-resolution reconstruction, electronic device and storage medium
CN112801868A (en) * 2021-01-04 2021-05-14 青岛信芯微电子科技股份有限公司 Method for image super-resolution reconstruction, electronic device and storage medium
CN112766104A (en) * 2021-01-07 2021-05-07 湖北公众信息产业有限责任公司 Insurance new retail service platform
CN113139899A (en) * 2021-03-31 2021-07-20 桂林电子科技大学 Design method of high-quality light-weight super-resolution reconstruction network model
CN113240625A (en) * 2021-03-31 2021-08-10 辽宁华盾安全技术有限责任公司 Steel plate detection method based on deep learning, tail-to-tail early warning method, electronic device and computer storage medium
CN113163138B (en) * 2021-05-20 2023-01-17 苏州大学 High-resolution video restoration system and method based on bidirectional circulation network
CN113163138A (en) * 2021-05-20 2021-07-23 苏州大学 High-resolution video restoration system and method based on bidirectional circulation network
CN113256494B (en) * 2021-06-02 2022-11-11 同济大学 Text image super-resolution method
CN113256494A (en) * 2021-06-02 2021-08-13 同济大学 Text image super-resolution method
CN113362384A (en) * 2021-06-18 2021-09-07 安徽理工大学环境友好材料与职业健康研究院(芜湖) High-precision industrial part measurement algorithm of multi-channel sub-pixel convolution neural network
CN113628125A (en) * 2021-07-06 2021-11-09 武汉大学 Multi-infrared image enhancement method based on spatial parallax prior network
CN113628125B (en) * 2021-07-06 2023-08-15 武汉大学 Method for enhancing multiple infrared images based on space parallax priori network
CN113674156A (en) * 2021-09-06 2021-11-19 苏州大学 Method and system for reconstructing image super-resolution
WO2023040108A1 (en) * 2021-09-14 2023-03-23 浙江师范大学 Image super-resolution enlargement model and method
CN115206331A (en) * 2022-06-13 2022-10-18 华南理工大学 Voice super-resolution method based on tapered residual dense network
CN115206331B (en) * 2022-06-13 2024-04-05 华南理工大学 Voice super-resolution method based on conical residual dense network

Also Published As

Publication number Publication date
CN111861961B (en) 2023-09-22

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant