CN111402128A - Image super-resolution reconstruction method based on multi-scale pyramid network - Google Patents

Image super-resolution reconstruction method based on multi-scale pyramid network

Info

Publication number
CN111402128A
CN111402128A
Authority
CN
China
Prior art keywords
layer
image
resolution
features
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010108622.1A
Other languages
Chinese (zh)
Inventor
史景伦
杨鹏
梁可弘
陈学斌
林阳城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Menghui Robot Co ltd
South China University of Technology SCUT
Original Assignee
Guangzhou Menghui Robot Co ltd
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Menghui Robot Co ltd, South China University of Technology SCUT filed Critical Guangzhou Menghui Robot Co ltd
Priority to CN202010108622.1A priority Critical patent/CN111402128A/en
Publication of CN111402128A publication Critical patent/CN111402128A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4076 Scaling based on super-resolution, using the original low-resolution images to iteratively correct the high-resolution images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image super-resolution reconstruction method based on a multi-scale pyramid network, comprising the following steps: S1, extracting shallow features from the input image; S2, performing feature fusion and feature enhancement on the shallow features through K multi-scale residual modules to obtain richer deep features; S3, upsampling the deep features with a transposed convolution; S4, reconstructing the image through residual learning; and S5, taking the reconstructed image as the output of the current pyramid level and simultaneously as the input of the next pyramid level, and repeating steps S1-S4 to obtain an image of still higher resolution. The invention uses multi-scale residual modules to fuse features and obtain richer representations, and uses a Laplacian pyramid network to progressively upsample and reconstruct the high-resolution image. In this way, images with richer detail and higher quality can be reconstructed.

Description

Image super-resolution reconstruction method based on multi-scale pyramid network
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to an image super-resolution reconstruction method based on a multi-scale pyramid network.
Background
With the development of information technology, the number of images on networks keeps growing, and images serve as a primary medium through which people perceive the world, appearing in a wide range of scenarios. Image quality matters in many fields, from medical imaging and satellite remote sensing to everyday cameras and mobile phones, and people's requirements for image quality are increasingly high. Improving image resolution is therefore of great significance in real life.
Image super-resolution reconstruction aims to recover a high-resolution image from one or more low-resolution images, and has become one of the research hotspots in computer vision in recent years. Current super-resolution reconstruction algorithms fall into two categories: interpolation-based and learning-based. Interpolation-based algorithms are simple and fast, but cannot meet people's growing demands for image quality. Learning-based super-resolution methods learn priors from additional training samples to reduce the ill-posedness of the super-resolution problem and achieve better results, for example methods based on sparse coding or on neighborhood embedding. However, these methods only solve for sparse coding coefficients or learn an embedding space on the primary feature space of the image, so the sparsity and manifold assumptions are difficult to satisfy strictly, which directly degrades reconstruction quality. With the rapid development of deep learning, researchers have widely applied deep learning algorithms to image super-resolution reconstruction and obtained results superior to interpolation. However, mainstream methods rest on the premise that deeper networks reconstruct better, and as network depth grows, gradient vanishing and network degradation remain problems; moreover, most methods reach the target size through a single upsampling step, so the quality of the reconstructed high-resolution image still needs improvement.
Disclosure of Invention
The invention aims to solve the defects in the prior art and provides an image super-resolution reconstruction method based on a multi-scale pyramid network.
The purpose of the invention can be achieved by adopting the following technical scheme:
an image super-resolution reconstruction method based on a multi-scale pyramid network comprises the following steps:
S1, extracting shallow features from the input image;
S2, performing feature fusion and feature enhancement on the shallow features through K multi-scale residual modules to obtain deep features;
S3, upsampling the deep features using a transposed convolution;
S4, reconstructing the image through residual learning;
S5, taking the reconstructed image as the output of the current pyramid network and simultaneously as the input of the next pyramid network, and repeating the training of steps S1 to S4 to obtain an image of higher resolution.
Further, the step S1 is as follows:
Shallow features are extracted from the input low-resolution image using one 3×3 convolutional layer followed by a nonlinear activation unit:
F_0 = σ(W_1 * I_LR)   (1)
where I_LR denotes the input low-resolution image, σ denotes the ReLU nonlinear activation function, W_1 denotes the convolution kernel of the 3×3 convolutional layer, and F_0 denotes the features extracted by the convolutional layer.
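To make step S1 concrete, a hypothetical PyTorch sketch of the shallow feature extractor is given below; the 64-channel width and the dummy input size are illustrative assumptions, not values fixed by the patent.

```python
import torch
import torch.nn as nn

# Step S1 (Eq. 1): one 3x3 convolutional layer followed by a ReLU unit.
# The 64 output channels are an assumed hyperparameter for illustration.
shallow_extractor = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),  # W_1; padding keeps spatial size
    nn.ReLU(inplace=True),                       # sigma, the nonlinear activation
)

I_LR = torch.randn(1, 3, 32, 32)   # dummy low-resolution input image
F0 = shallow_extractor(I_LR)       # F_0 = sigma(W_1 * I_LR)
```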
Further, each multi-scale residual module in step S2 comprises a feature enhancement unit, a compression unit and a residual learning unit. The feature enhancement unit comprises two 3×3 convolutional layers, each followed by a nonlinear activation unit, and two 5×5 convolutional layers, each followed by a nonlinear activation unit; the compression unit comprises one 1×1 convolutional layer. Compared with a single-scale convolution kernel, convolution kernels of different sizes can extract features at different scales, so the filters can extract and learn richer image information.
Further, the step S2 is as follows:
First, the shallow features extracted in step S1 pass through the feature enhancement unit to obtain two different features; the compression unit then fuses these two features; the fused features are further learned by one convolutional layer; and finally the result is added to the shallow features to form a residual block. The calculation process is expressed as follows:
T1 = σ(W^1_3×3 * F_m-1)   (2)
T2 = σ(W^2_3×3 * T1)   (3)
P1 = σ(W^1_5×5 * F_m-1)   (4)
P2 = σ(W^2_5×5 * P1)   (5)
M = W_1×1 * [T2, P2]   (6)
B = σ(W * M)   (7)
F_m = B + F_m-1   (8)
where T1 is the feature after the first 3×3 convolutional layer, T2 the feature after the second 3×3 convolutional layer, P1 the feature after the first 5×5 convolutional layer, P2 the feature after the second 5×5 convolutional layer, σ denotes the ReLU nonlinear activation function, W^1_3×3 and W^2_3×3 denote the convolution kernels of the first and second 3×3 convolutional layers, W^1_5×5 and W^2_5×5 denote the convolution kernels of the first and second 5×5 convolutional layers, W_1×1 denotes the convolution kernel of the 1×1 convolutional layer, W denotes the convolution kernel of the final learning layer, [·] denotes the feature fusion (concatenation) function, M denotes the feature after fusion by the 1×1 convolutional layer, B denotes the feature obtained by the final learning layer, and F_m-1 and F_m denote the input and output of the m-th multi-scale residual module, respectively.
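A minimal sketch of one multi-scale residual module implementing Eqs. (2)-(8) is shown below, assuming PyTorch; the channel width (64) and the 3×3 kernel of the final learning layer W are assumptions, since the patent does not fix these hyperparameters.

```python
import torch
import torch.nn as nn

class MultiScaleResidualModule(nn.Module):
    """One multi-scale residual module: feature enhancement (a two-layer 3x3
    branch and a two-layer 5x5 branch), 1x1 compression, a final learning
    layer, and a residual connection."""

    def __init__(self, channels=64):
        super().__init__()
        self.conv3_1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv3_2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv5_1 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv5_2 = nn.Conv2d(channels, channels, 5, padding=2)
        self.compress = nn.Conv2d(2 * channels, channels, 1)      # 1x1 compression unit
        self.learn = nn.Conv2d(channels, channels, 3, padding=1)  # final learning layer W
        self.relu = nn.ReLU(inplace=True)

    def forward(self, f_prev):
        t1 = self.relu(self.conv3_1(f_prev))           # Eq. (2)
        t2 = self.relu(self.conv3_2(t1))               # Eq. (3)
        p1 = self.relu(self.conv5_1(f_prev))           # Eq. (4)
        p2 = self.relu(self.conv5_2(p1))               # Eq. (5)
        m = self.compress(torch.cat([t2, p2], dim=1))  # Eq. (6): fuse [T2, P2]
        b = self.relu(self.learn(m))                   # Eq. (7)
        return b + f_prev                              # Eq. (8): residual connection

module = MultiScaleResidualModule()
x = torch.randn(1, 64, 32, 32)
y = module(x)
```

Stacking K such modules (K = 2 in the embodiment) would yield the deep features F_K fed to the upsampling stage.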
Further, the step S3 is as follows:
The deep features output by the K multi-scale residual modules are upsampled by one transposed convolutional layer to obtain a high-resolution image:
I_HR_conv = f_deconv(F_K)   (9)
where I_HR_conv is the upsampled high-resolution image, f_deconv is the upsampling (transposed convolution) operation, and F_K is the output of the K-th multi-scale residual module.
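Step S3 can be sketched with a single transposed convolutional layer, assuming PyTorch; the kernel size, stride, padding and channel counts below are assumptions chosen so that one layer exactly doubles the spatial resolution.

```python
import torch
import torch.nn as nn

# Step S3 (Eq. 9): one transposed convolutional layer upsamples the deep
# features F_K. Kernel size 4, stride 2 and padding 1 are assumed values:
# output size = (H - 1) * stride - 2 * padding + kernel = 2H for these choices.
f_deconv = nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1)

F_K = torch.randn(1, 64, 32, 32)   # deep features from the K-th residual module
I_HR_conv = f_deconv(F_K)          # upsampled high-resolution image
```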
Further, the step S4 is as follows:
First, bicubic interpolation is applied to the low-resolution image to obtain a high-resolution image I_HR_bicu; then I_HR_bicu is added to the upsampled high-resolution image I_HR_conv to obtain the image I_HR whose spatial resolution is magnified by a factor of two:
I_HR = I_HR_bicu + I_HR_conv   (10)
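The residual reconstruction of step S4 (Eq. 10) can be sketched as follows, assuming PyTorch; the bicubically interpolated image supplies the coarse content, while a random tensor stands in for the transposed-convolution branch, which in the real network supplies the learned residual detail.

```python
import torch
import torch.nn.functional as F

# Step S4 (Eq. 10): bicubic interpolation plus the learned residual branch.
I_LR = torch.rand(1, 3, 32, 32)                  # dummy low-resolution image
I_HR_bicu = F.interpolate(I_LR, scale_factor=2,
                          mode="bicubic", align_corners=False)
I_HR_conv = torch.randn(1, 3, 64, 64)            # stand-in for the network branch output
I_HR = I_HR_bicu + I_HR_conv                     # I_HR = I_HR_bicu + I_HR_conv
```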
further, compared with a network in which an image with a specified size is obtained by only one-time upsampling, the pyramid network can be used for gradual upsampling, so that the training difficulty of the network (especially the training of large-scale factors) can be reduced, and a picture with higher quality can be obtained. The pyramid network comprises N levels in total, if the input low-resolution image is a low-resolution image with the down-sampling rate of 1/S times, and S is an up-sampling scale factor, N is log2S; each stage reconstructs an image output by a previous stage into a high-resolution image of the stage.
Compared with the prior art, the invention has the following advantages and effects:
according to the invention, the multi-scale residual error module is adopted to extract various characteristics from the image, the characteristics are reinforced by fusing the characteristics, so that the extracted characteristics are richer, and the image is gradually up-sampled and reconstructed in a pyramid network mode, so that the high-resolution image quality is higher.
Drawings
FIG. 1 is a schematic diagram of an image super-resolution reconstruction method based on a multi-scale pyramid network disclosed in the present invention;
fig. 2 is a multi-scale residual module framework diagram in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
As shown in fig. 1, the present embodiment provides an image super-resolution reconstruction method based on a multi-scale pyramid network, which fuses and reinforces the extracted features through multi-scale residual modules and progressively upsamples in a pyramid network to reconstruct the image step by step. The method specifically comprises the following steps:
s1, shallow feature extraction is carried out on the input image, and the method specifically comprises the following steps:
Shallow features are extracted from the input low-resolution image using one 3×3 convolutional layer followed by a nonlinear activation unit:
F_0 = σ(W_1 * I_LR)   (1)
where I_LR denotes the input low-resolution image, σ denotes the ReLU nonlinear activation function, W_1 denotes the convolution kernel of the 3×3 convolutional layer, and F_0 denotes the features extracted by the convolutional layer.
S2, performing feature fusion and feature enhancement on the shallow features through K multi-scale residual modules to obtain deep features, specifically:
The multi-scale residual module extracts features at different scales with convolution kernels of different sizes, so the filters can extract and learn richer image information. As shown in fig. 1, feeding the shallow features into K multi-scale residual modules enhances them into richer, deeper features; K = 2 in this embodiment, but the value of K does not limit the technical solution of the invention.
As shown in fig. 2, each multi-scale residual module comprises a feature enhancement unit, a compression unit and residual learning. The feature enhancement unit comprises two 3×3 convolutional layers, each followed by a nonlinear activation unit, and two 5×5 convolutional layers, each followed by a nonlinear activation unit; the compression unit consists of one 1×1 convolutional layer; and the use of residual learning makes the network easier to optimize.
First, the shallow features extracted in step S1 pass through the feature enhancement unit to obtain two different features; the compression unit then fuses these two features; the fused features are further learned by one convolutional layer; and finally the result is added to the shallow features to form a residual block. The expressions are as follows:
T1 = σ(W^1_3×3 * F_m-1)   (2)
T2 = σ(W^2_3×3 * T1)   (3)
P1 = σ(W^1_5×5 * F_m-1)   (4)
P2 = σ(W^2_5×5 * P1)   (5)
M = W_1×1 * [T2, P2]   (6)
B = σ(W * M)   (7)
F_m = B + F_m-1   (8)
where T1 is the feature after the first 3×3 convolutional layer, T2 the feature after the second 3×3 convolutional layer, P1 the feature after the first 5×5 convolutional layer, P2 the feature after the second 5×5 convolutional layer, σ denotes the ReLU nonlinear activation function, W^1_3×3 and W^2_3×3 denote the convolution kernels of the first and second 3×3 convolutional layers, W^1_5×5 and W^2_5×5 denote the convolution kernels of the first and second 5×5 convolutional layers, W_1×1 denotes the convolution kernel of the 1×1 convolutional layer, W denotes the convolution kernel of the final learning layer, [·] denotes the feature fusion (concatenation) function, M denotes the feature after fusion by the 1×1 convolutional layer, B denotes the feature obtained by the final learning layer, and F_m-1 and F_m denote the input and output of the m-th multi-scale residual module, respectively. In this embodiment, m ∈ {1, 2}.
S3, the deep features output by the two multi-scale residual modules are upsampled by one transposed convolutional layer to obtain a high-resolution image:
I_HR_conv = f_deconv(F_2)   (9)
where I_HR_conv is the upsampled high-resolution image, f_deconv is the upsampling (transposed convolution) operation, and F_2 is the output of the second multi-scale residual module.
S4, reconstructing the image by residual learning, specifically:
First, bicubic interpolation is applied to the low-resolution image to obtain a high-resolution image I_HR_bicu; then I_HR_bicu is added to the upsampled high-resolution image I_HR_conv to obtain the image I_HR whose spatial resolution is magnified by a factor of two:
I_HR = I_HR_bicu + I_HR_conv   (10)
S5, the reconstructed image serves as the output of the current pyramid level and as the input of the next pyramid level, and training with steps S1 to S4 continues to obtain an image of higher resolution.
The pyramid network comprises N levels in total: if the input low-resolution image is downsampled by a factor of 1/S, where S is the upsampling scale factor, then N = log₂(S). Each level reconstructs the image output by the previous level into the high-resolution image of that level.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. An image super-resolution reconstruction method based on a multi-scale pyramid network, characterized by comprising the following steps:
S1, extracting shallow features from the input image;
S2, performing feature fusion and feature enhancement on the shallow features through K multi-scale residual modules to obtain deep features;
S3, upsampling the deep features using a transposed convolution;
S4, reconstructing the image through residual learning;
S5, taking the reconstructed image as the output of the current pyramid network and simultaneously as the input of the next pyramid network, and repeating the training of steps S1 to S4 to obtain an image of higher resolution.
2. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in claim 1, wherein the step S1 is as follows:
Shallow features are extracted from the input low-resolution image using one 3×3 convolutional layer followed by a nonlinear activation unit:
F_0 = σ(W_1 * I_LR)   (1)
where I_LR denotes the input low-resolution image, σ denotes the ReLU nonlinear activation function, W_1 denotes the convolution kernel of the 3×3 convolutional layer, and F_0 denotes the features extracted by the convolutional layer.
3. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in claim 1, wherein each multi-scale residual module comprises a feature enhancement unit, a compression unit and residual learning, wherein the feature enhancement unit comprises two 3×3 convolutional layers, each followed by a nonlinear activation unit, and two 5×5 convolutional layers, each followed by a nonlinear activation unit, and the compression unit comprises one 1×1 convolutional layer.
4. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in claim 3, wherein the step S2 is as follows:
First, the shallow features extracted in step S1 pass through the feature enhancement unit to obtain two different features; the compression unit then fuses these two features; the fused features are further learned by one convolutional layer; and finally the result is added to the shallow features to form a residual block. The calculation process is expressed as follows:
T1 = σ(W^1_3×3 * F_m-1)   (2)
T2 = σ(W^2_3×3 * T1)   (3)
P1 = σ(W^1_5×5 * F_m-1)   (4)
P2 = σ(W^2_5×5 * P1)   (5)
M = W_1×1 * [T2, P2]   (6)
B = σ(W * M)   (7)
F_m = B + F_m-1   (8)
where T1 is the feature after the first 3×3 convolutional layer, T2 the feature after the second 3×3 convolutional layer, P1 the feature after the first 5×5 convolutional layer, P2 the feature after the second 5×5 convolutional layer, σ denotes the ReLU nonlinear activation function, W^1_3×3 and W^2_3×3 denote the convolution kernels of the first and second 3×3 convolutional layers, W^1_5×5 and W^2_5×5 denote the convolution kernels of the first and second 5×5 convolutional layers, W_1×1 denotes the convolution kernel of the 1×1 convolutional layer, W denotes the convolution kernel of the final learning layer, [·] denotes the feature fusion (concatenation) function, M denotes the feature after fusion by the 1×1 convolutional layer, B denotes the feature obtained by the final learning layer, and F_m-1 and F_m denote the input and output of the m-th multi-scale residual module, respectively.
5. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in claim 1, wherein the step S3 is as follows:
The deep features output by the K multi-scale residual modules are upsampled by one transposed convolutional layer to obtain a high-resolution image:
I_HR_conv = f_deconv(F_K)   (9)
where I_HR_conv is the upsampled high-resolution image, f_deconv is the upsampling operation, and F_K is the output of the K-th multi-scale residual module.
6. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in claim 1, wherein the step S4 is as follows:
First, bicubic interpolation is applied to the low-resolution image to obtain a high-resolution image I_HR_bicu; then I_HR_bicu is added to the upsampled high-resolution image I_HR_conv to obtain the image I_HR whose spatial resolution is magnified by a factor of two:
I_HR = I_HR_bicu + I_HR_conv   (10)
7. The image super-resolution reconstruction method based on the multi-scale pyramid network as claimed in any one of claims 1 to 6, wherein the pyramid network comprises N levels in total; if the input low-resolution image is downsampled by a factor of 1/S, where S is the upsampling scale factor, then N = log₂(S); and each level reconstructs the image output by the previous level into the high-resolution image of that level.
CN202010108622.1A 2020-02-21 2020-02-21 Image super-resolution reconstruction method based on multi-scale pyramid network Pending CN111402128A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010108622.1A CN111402128A (en) 2020-02-21 2020-02-21 Image super-resolution reconstruction method based on multi-scale pyramid network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010108622.1A CN111402128A (en) 2020-02-21 2020-02-21 Image super-resolution reconstruction method based on multi-scale pyramid network

Publications (1)

Publication Number Publication Date
CN111402128A true CN111402128A (en) 2020-07-10

Family

ID=71430440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010108622.1A Pending CN111402128A (en) 2020-02-21 2020-02-21 Image super-resolution reconstruction method based on multi-scale pyramid network

Country Status (1)

Country Link
CN (1) CN111402128A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170293825A1 (en) * 2016-04-08 2017-10-12 Wuhan University Method and system for reconstructing super-resolution image
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109767386A (en) * 2018-12-22 2019-05-17 昆明理工大学 A kind of rapid image super resolution ratio reconstruction method based on deep learning
CN109829855A (en) * 2019-01-23 2019-05-31 南京航空航天大学 A kind of super resolution ratio reconstruction method based on fusion multi-level features figure
CN109993701A (en) * 2019-04-09 2019-07-09 福州大学 A method of the depth map super-resolution rebuilding based on pyramid structure
CN110197468A (en) * 2019-06-06 2019-09-03 天津工业大学 A kind of single image Super-resolution Reconstruction algorithm based on multiple dimensioned residual error learning network
CN110473144A (en) * 2019-08-07 2019-11-19 南京信息工程大学 A kind of image super-resolution rebuilding method based on laplacian pyramid network


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEI-SHENG LAI et al.: "Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
应自炉 et al.: "Single-image super-resolution reconstruction with a multi-scale dense residual network", Journal of Image and Graphics (《中国图象图形学报》) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111861961A (en) * 2020-07-25 2020-10-30 安徽理工大学 Multi-scale residual fusion model for single-image super-resolution and restoration method thereof
CN111861961B (en) * 2020-07-25 2023-09-22 安徽理工大学 Multi-scale residual fusion model for single-image super-resolution and restoration method thereof
CN112070702A (en) * 2020-09-14 2020-12-11 中南民族大学 Image super-resolution reconstruction system and method with multi-scale residual feature discrimination enhancement
CN112070702B (en) * 2020-09-14 2023-10-03 中南民族大学 Image super-resolution reconstruction system and method with multi-scale residual feature discrimination enhancement
CN112634136A (en) * 2020-12-24 2021-04-09 华南理工大学 Image super-resolution method and system based on fast image-feature stitching
CN112634136B (en) * 2020-12-24 2023-05-23 华南理工大学 Image super-resolution method and system based on fast image-feature stitching
CN113034381A (en) * 2021-02-08 2021-06-25 浙江大学 Single-image denoising method and device based on a dilated kernel prediction network
CN113034381B (en) * 2021-02-08 2022-06-21 浙江大学 Single-image denoising method and device based on a dilated kernel prediction network
CN113222821A (en) * 2021-05-24 2021-08-06 南京航空航天大学 Image super-resolution processing method for annular target detection
CN113421187A (en) * 2021-06-10 2021-09-21 山东师范大学 Super-resolution reconstruction method, system, storage medium and equipment
CN113421187B (en) * 2021-06-10 2023-01-03 山东师范大学 Super-resolution reconstruction method, system, storage medium and equipment
CN113378972A (en) * 2021-06-28 2021-09-10 成都恒创新星科技有限公司 License plate recognition method and system in complex scenes
CN113378972B (en) * 2021-06-28 2024-03-22 成都恒创新星科技有限公司 License plate recognition method and system in complex scenes
CN113989800A (en) * 2021-12-29 2022-01-28 南京南数数据运筹科学研究院有限公司 Auxiliary intestinal-plexus identification method based on an improved progressive residual network
CN116385280A (en) * 2023-01-09 2023-07-04 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method
CN116385280B (en) * 2023-01-09 2024-01-23 爱芯元智半导体(上海)有限公司 Image noise reduction system and method and noise reduction neural network training method

Similar Documents

Publication Publication Date Title
CN111402128A (en) Image super-resolution reconstruction method based on multi-scale pyramid network
CN109903226B (en) Image super-resolution reconstruction method based on symmetric residual convolution neural network
CN113096017B (en) Image super-resolution reconstruction method based on depth coordinate attention network model
CN110992270A (en) Image super-resolution reconstruction method based on a multi-scale residual attention network
CN108475415B (en) Method and system for image processing
CN112750082B (en) Face super-resolution method and system based on a fused attention mechanism
CN111369440B (en) Model training and image super-resolution processing method, device, terminal and storage medium
CN110348487B (en) Hyperspectral image compression method and device based on deep learning
CN113139907A (en) Generation method, system, device and storage medium for visual resolution enhancement
CN110689483B (en) Image super-resolution reconstruction method based on depth residual error network and storage medium
CN109146788A (en) Super-resolution image reconstruction method and device based on deep learning
CN111768340B (en) Super-resolution image reconstruction method and system based on dense multipath network
CN111652804B (en) Super-resolution reconstruction method based on a dilated convolution pyramid and a bottleneck network
CN110349087B (en) RGB-D image high-quality grid generation method based on adaptive convolution
CN115953303B (en) Multi-scale image compressed sensing reconstruction method and system combining channel attention
CN112700460B (en) Image segmentation method and system
CN114331831A (en) Light-weight single-image super-resolution reconstruction method
CN116433914A (en) Two-dimensional medical image segmentation method and system
CN113554058A (en) Method, system, device and storage medium for enhancing resolution of visual target image
CN104899835A (en) Image super-resolution processing method based on blind blur estimation and anchored space mapping
CN116664397B (en) TransSR-Net structured image super-resolution reconstruction method
CN111654621B (en) Dual-focus camera continuous digital zooming method based on convolutional neural network model
CN112949636A (en) License plate super-resolution identification method and system and computer readable medium
Hui et al. Two-stage convolutional network for image super-resolution
CN115936992A (en) Garbage-image super-resolution method and system based on a lightweight Transformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710