CN111080516B - Super-resolution image reconstruction method based on self-sample enhancement


Publication number
CN111080516B
CN111080516B
Authority
CN
China
Prior art keywords
resolution
image
self
design
features
Prior art date
Legal status
Active
Application number
CN201911170154.4A
Other languages
Chinese (zh)
Other versions
CN111080516A (en)
Inventor
曹飞龙
张磊
张清华
Current Assignee
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology
Priority to CN201911170154.4A
Publication of CN111080516A
Application granted
Publication of CN111080516B
Legal status: Active

Classifications

    • G06T 3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G06N 3/045: Combinations of networks
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06T 7/40: Analysis of texture

Abstract

The invention discloses a super-resolution image reconstruction method based on self-sample enhancement. Two convolutional neural networks are designed on an external training set to extract the non-design features of low-resolution and high-resolution images, respectively. Anchored neighborhood regression and the least squares method are then used to estimate the mapping relation between the low-resolution and high-resolution non-design features and to connect the two networks. On the output of the connected network, a residual neural network is designed to improve the expression of image features. A self-sample training set is constructed from the output of the residual network; using the self-similarity information of the image combined with external samples, anchored neighborhood regression and the least squares method are again used to estimate the mapping relation between the feature-expression-enhanced image and the high-resolution image, and the high-resolution image is reconstructed with the obtained mapping relation. The invention combines the advantages of deep learning and self-learning, avoiding the loss of image detail and reconstructing complex image structures.

Description

Super-resolution image reconstruction method based on self-sample enhancement
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a super-resolution image reconstruction method based on self-sample enhancement.
Background
Super-resolution image reconstruction is an important research topic in the field of digital image processing; its main purpose is to improve image quality to meet the requirements of practical applications. To date, researchers have extensively studied reconstruction algorithms based on interpolation, on statistical models, and on sample learning, with numerous results. However, existing reconstruction methods still have drawbacks, such as poor reconstruction of complex textures, and there is room for improvement.
The early interpolation-based reconstruction algorithms are very fast to run and implement, but the reconstructed images are blurred and image details are not well expressed. The development of compressed sensing theory in 2006 provided a new line of thought for super-resolution image reconstruction. On this basis, Yang et al. proposed a reconstruction algorithm based on sparse representation, which learns an overcomplete dictionary and represents high-resolution image information with coefficients that are as sparse as possible. Although it improves the super-resolution result, its computational complexity is high and image edge degradation remains. In 2014, Timofte proposed a reconstruction algorithm based on improved anchored neighborhood regression, building on earlier results; it improves the efficiency of model solving through collaborative representation, but does not consider the structural information of the image. With the advent of deep learning, Dong et al. proposed in 2014 an image super-resolution reconstruction algorithm based on a convolutional neural network, which learns the mapping relation between low-resolution and high-resolution images with a three-layer convolutional network. Compared with the linear mappings studied in prior work, the nonlinear mapping learned by this algorithm yields a markedly better reconstruction. However, training a convolutional network rests on a huge number of learning samples, and the network cannot achieve an ideal effect without enough training samples.
Disclosure of Invention
In view of the above, the present invention provides a super-resolution image reconstruction method based on self-sample enhancement for high-resolution image reconstruction, which is beneficial to rendering complex textures.
In order to achieve the above object, the present invention provides the following technical solution: a super-resolution image reconstruction method based on self-sample enhancement, comprising the following steps: first, extract the non-design features of the low-resolution and high-resolution images with independent convolutional networks; then enhance the output image of the feature extraction network with a residual network, build a self-similar training set, and learn the mapping relation between the enhanced reconstructed image containing self-samples and the high-resolution image; finally, estimate the reconstructed high-resolution image.
Optionally, the method comprises the following more specific steps:
a. giving an external training set, determining an amplification factor u, designing two independent convolutional neural networks, and extracting the non-design features of the low-resolution and high-resolution images respectively;
b. according to the non-design features obtained in step a, extracting p×p patches on the low-resolution and high-resolution feature maps respectively, and writing them in vector form to construct a sample set; training an overcomplete joint dictionary, searching the sample set for a number of neighbor samples of each low-resolution dictionary atom, estimating the mapping relation between the low-resolution and high-resolution non-design features by anchored neighborhood regression and the least squares method, and connecting the two non-design feature extraction networks;
c. designing a residual convolutional network according to the output of the two convolutional networks connected in step b, and enhancing the feature expression of the output images of the two connected non-design feature extraction networks to obtain an enhanced reconstructed image;
d. downsampling the low-resolution image to be reconstructed by a factor of u, determining a sampling factor s for the downsampled image, and constructing multi-scale self-similar images; applying the low-resolution non-design feature extraction network trained in step a and the mapping relation from step b to the constructed self-similar images to obtain the connected-network output images, and applying step c to the output images to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set;
e. extracting p×p patches from the enhanced reconstructed images and the high-resolution images respectively, using the self-similar sample set obtained in step d combined with the external sample set from step c; determining feature extraction operators, extracting features of the enhanced-reconstructed-image and high-resolution-image patches respectively, and estimating a mapping relation incorporating self-sample learning by anchored neighborhood regression and the least squares algorithm; and estimating the finally reconstructed high-resolution image from the obtained mapping relation.
Optionally, in step a, the output of the non-design feature extraction network is approximately equal to its input; the network comprises three convolution layers with kernels of sizes 64×9×9, 32×1×1 and 1×5×5 respectively; the network learning and optimization can be expressed as

$$\min_{\Theta}\ \sum_{i}\left\|F_{o}\left(I^{(i)};\Theta\right)-I^{(i)}\right\|_{2}^{2},$$

where I denotes an input image, F_l the output of the l-th convolution layer, F_o the output of the output layer o, and Θ the network parameters. Parameter updates iterate with the error back-propagation algorithm, and each training image yields a total of 32 non-design feature maps.
Optionally, in step b, the low-resolution feature map used for collecting patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, while the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, with patch size p = 3u; overcomplete dictionaries for the different feature maps are learned by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping relation is

$$F_{k,j}=\arg\min_{F}\ \left\|N_{h}^{k,j}-F\,N_{l}^{k,j}\right\|_{2}^{2}+\lambda_{u}\left\|F\right\|_{2}^{2}=N_{h}^{k,j}\left(N_{l}^{k,j}\right)^{T}\left(N_{l}^{k,j}\left(N_{l}^{k,j}\right)^{T}+\lambda_{u}I\right)^{-1},$$

where k denotes the dictionary index of the high-resolution non-design feature map, j the atom index of each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of corresponding high-resolution non-design features, and λ_u the regularization coefficient.
Optionally, in step c, the residual network comprises three convolution layers with kernels of sizes 64×9×9, 32×1×1 and 1×5×5 respectively; the network learning and optimization can be expressed as

$$\min_{\Theta}\ \sum_{i}\left\|R_{o}\left(I_{z}^{(i)};\Theta\right)+I_{z}^{(i)}-I_{ori}^{(i)}\right\|_{2}^{2},$$

where I_z denotes an output image of the connected network, i.e. an input image of the residual network, l the convolution-layer index, o the output-layer index, and I_ori the original high-resolution image; parameter updates iterate with the error back-propagation algorithm.
Optionally, in step d, the sampling factor s is 0.98 and the multi-scale parameter is 20.
Optionally, in step e, the patch size p = 3u; the feature extraction operators for the enhanced reconstructed image are first-order and second-order differential operators in the horizontal and vertical directions: the patches of the enhanced reconstructed image are convolved with them and written in vector form, and principal component analysis reduces the dimension while removing redundant information from the data; high-resolution feature extraction subtracts the corresponding enhanced-reconstructed-image information from the high-resolution patch. From the extracted feature samples, an overcomplete dictionary of size 1024 is trained, 2048 nearest-neighbor samples of each enhanced-reconstructed-image dictionary atom are determined, and the mapping relation between the enhanced-reconstructed-image features and the high-resolution image features is estimated by anchored neighborhood regression and the least squares method:

$$F_{j}=\arg\min_{F}\ \left\|N_{h}^{j}-F\,N_{c}^{j}\right\|_{2}^{2}+\lambda_{c}\left\|F\right\|_{2}^{2}=N_{h}^{j}\left(N_{c}^{j}\right)^{T}\left(N_{c}^{j}\left(N_{c}^{j}\right)^{T}+\lambda_{c}I\right)^{-1},$$

where j denotes the atom index of the enhanced-reconstructed-image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced-reconstructed-image features, $N_{h}^{j}$ the sample set of corresponding high-resolution features, and λ_c the regularization coefficient.
Compared with the prior art, the invention has the beneficial effects that:
and extracting non-design features of the image by using the deep convolution network, and learning a mapping relation between the low-resolution non-design features and the high-resolution non-design features by using anchor point neighbor regression. And the nonlinear mapping relation between the output of the residual network learning characteristic extraction network and the high-resolution image is utilized to better express the image characteristic. And finally, combining the multi-scale self-similar images, and exploring a reconstruction method of the image containing the self-sample information. According to the invention, the non-design features of the image are firstly extracted through the convolution network, and then the high-resolution image with rich details is reconstructed by means of the structure and texture information contained in the self-similar image, so that the reconstruction effect and accuracy are improved.
Drawings
FIG. 1 is a schematic flow diagram of a connection between two non-design feature extraction networks;
FIG. 2 is a schematic flow chart of constructing a self-similar training set;
FIG. 3a is a low resolution image at magnification of 2;
FIG. 3b is a high resolution image reconstructed according to the present invention at a magnification of 2;
FIG. 4a is a low resolution image at magnification of 3;
FIG. 4b is a high resolution image reconstructed according to the present invention at a magnification of 3;
FIG. 5 is a schematic diagram of the overall flow of the super-resolution image reconstruction algorithm according to the present invention;
FIG. 6 is an example of image non-design features and design features.
Detailed Description
The method is based on image non-design feature learning and self-similar sample learning: a deep convolutional network extracts the non-design features of the image, and anchored neighborhood regression connects the mapping relation between the low-resolution and high-resolution non-design features. A residual network then learns the nonlinear mapping relation between the output of the feature extraction network and the high-resolution image, enhancing the reconstruction. Finally, a multi-scale self-similar training set is constructed, anchored neighborhood regression learns the linear mapping relation between the enhanced reconstructed image containing self-samples and the high-resolution image, and the high-resolution image is reconstructed.
The basic idea of the invention is as follows:
firstly, designing two independent convolutional neural networks and extracting the non-design features of the low-resolution and high-resolution images, respectively;
secondly, estimating the mapping relation between the low-resolution and high-resolution non-design features by anchored neighborhood regression and the least squares method;
thirdly, designing a residual network to enhance the output of the non-design feature extraction networks;
and finally, establishing a multi-scale self-similar training set and learning the mapping relation between the enhanced reconstructed image and the high-resolution image.
The invention is further illustrated below with reference to examples.
The invention provides a super-resolution image reconstruction method based on self-sample enhancement, which comprises the following steps:
a. and (3) determining an amplification coefficient u by giving an external training set, designing two independent convolutional neural networks, and respectively extracting non-design features of the low-resolution image and the high-resolution image.
The output of the non-design feature extraction network is approximately equal to its input. The network comprises three convolution layers with kernels of sizes 64×9×9, 32×1×1 and 1×5×5 respectively; the network learning and optimization can be expressed as

$$\min_{\Theta}\ \sum_{i}\left\|F_{o}\left(I^{(i)};\Theta\right)-I^{(i)}\right\|_{2}^{2},$$

where I denotes an input image, F_l the output of the l-th convolution layer, F_o the output of the output layer o, and Θ the network parameters. Parameter updates iterate with the error back-propagation algorithm, and each training image yields a total of 32 non-design feature maps.
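The layer sizes above pin down the architecture but not the training details. Below is a minimal PyTorch sketch of such a feature extraction network; the ReLU activations, the size-preserving zero padding, and the optimizer settings are assumptions not fixed by the text.

```python
# A minimal sketch of the non-design feature extraction network:
# three convolution layers (64 9x9, 32 1x1, 1 5x5 kernels) trained so
# that the output approximates the input.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)  # 32 non-design feature maps
        self.conv3 = nn.Conv2d(32, 1, kernel_size=5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = self.relu(self.conv2(self.relu(self.conv1(x))))
        return self.conv3(feats), feats  # reconstruction and the 32 feature maps

def train_step(net, optimizer, image):
    """One iteration of error back-propagation on ||F_o(I) - I||^2."""
    recon, _ = net(image)
    loss = nn.functional.mse_loss(recon, image)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With this reading, the 32 outputs of the middle layer are the non-design feature maps collected in step b.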
b. According to the non-design features obtained in step a, extract p×p patches on the low-resolution and high-resolution feature maps respectively, and write them in vector form to construct a sample set; train an overcomplete joint dictionary, search the sample set for a number of neighbor samples of each low-resolution dictionary atom, estimate the mapping relation between the low-resolution and high-resolution non-design features by anchored neighborhood regression and the least squares method, and connect the two non-design feature extraction networks.
The low-resolution feature map used for collecting patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, while the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, with patch size p = 3u. Overcomplete dictionaries of size 1024 for the different feature maps are learned by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping relation is

$$F_{k,j}=\arg\min_{F}\ \left\|N_{h}^{k,j}-F\,N_{l}^{k,j}\right\|_{2}^{2}+\lambda_{u}\left\|F\right\|_{2}^{2}=N_{h}^{k,j}\left(N_{l}^{k,j}\right)^{T}\left(N_{l}^{k,j}\left(N_{l}^{k,j}\right)^{T}+\lambda_{u}I\right)^{-1},$$

where k denotes the dictionary index of the high-resolution non-design feature map, j the atom index of each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of corresponding high-resolution non-design features, and λ_u the regularization coefficient.
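As a concrete reading of this step, the following NumPy sketch precomputes, for each atom of the (given) K-SVD low-resolution dictionary, the ridge-regularized least-squares projection from its 2048 nearest-neighbor sample pairs, and applies it to a new feature at reconstruction time; the correlation-based neighbor search and the default value of lam are assumptions.

```python
# Anchored neighborhood regression: one ridge projection per dictionary atom.
import numpy as np

def anchor_projection(N_l, N_h, lam):
    """N_l: (d_l, n) low-res neighbor samples; N_h: (d_h, n) high-res
    counterparts. Returns the (d_h, d_l) mapping F for this atom."""
    d_l = N_l.shape[0]
    return N_h @ N_l.T @ np.linalg.inv(N_l @ N_l.T + lam * np.eye(d_l))

def precompute_mappings(D_lr, samples_l, samples_h, lam=0.1, k=2048):
    """D_lr: (d_l, K) low-res dictionary; samples_l / samples_h: paired
    (d, n) feature samples. Returns one projection matrix per atom."""
    sims = D_lr.T @ samples_l                   # (K, n) atom/sample similarity
    mappings = []
    for j in range(D_lr.shape[1]):
        idx = np.argsort(-sims[j])[:k]          # k nearest neighbors of atom j
        mappings.append(anchor_projection(samples_l[:, idx], samples_h[:, idx], lam))
    return mappings

def apply_mapping(D_lr, mappings, feat):
    """Map one low-res feature vector through its nearest anchor atom."""
    j = int(np.argmax(D_lr.T @ feat))
    return mappings[j] @ feat
```

Precomputing the projections offline is what makes anchored neighborhood regression fast at reconstruction time: applying a mapping reduces to one nearest-atom lookup and one matrix-vector product.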
The process of connecting the two non-design feature extraction networks is shown in FIG. 1.
c. Design a residual convolutional network according to the output of the two convolutional networks connected in step b, and enhance the feature expression of the output images of the two connected non-design feature extraction networks to obtain an enhanced reconstructed image.
The residual network comprises three convolution layers with kernels of sizes 64×9×9, 32×1×1 and 1×5×5 respectively; the network learning and optimization can be expressed as

$$\min_{\Theta}\ \sum_{i}\left\|R_{o}\left(I_{z}^{(i)};\Theta\right)+I_{z}^{(i)}-I_{ori}^{(i)}\right\|_{2}^{2},$$

where I_z denotes an output image of the connected network, i.e. an input image of the residual network, l the convolution-layer index, o the output-layer index, and I_ori the original high-resolution image; parameter updates iterate with the error back-propagation algorithm.
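A minimal PyTorch sketch of this enhancement step follows. The global skip connection, where the network predicts a residual that is added back to the connected-network output I_z, is our reading of the objective above; activations and padding are again assumptions.

```python
# Residual enhancement: predict a correction to I_z and compare the
# corrected image with the original high-resolution image.
import torch
import torch.nn as nn

class ResidualNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, i_z):
        return i_z + self.body(i_z)  # enhanced reconstructed image

def residual_loss(net, i_z, hr):
    # || R_o(I_z) + I_z - I_ori ||^2, minimized by back-propagation
    return nn.functional.mse_loss(net(i_z), hr)
```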
d. Downsample the low-resolution image to be reconstructed by a factor of u, determine a sampling factor s for the downsampled image, and construct multi-scale self-similar images; apply the low-resolution non-design feature extraction network trained in step a and the mapping relation from step b to the constructed self-similar images to obtain the connected-network output images, and apply step c to the output images to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set.
FIG. 2 is a schematic flow chart of constructing the self-similar training set; the sampling factor s is 0.98 and the multi-scale parameter is 20.
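The multi-scale construction can be sketched as follows, with cubic-spline resampling via scipy.ndimage.zoom standing in, as an assumption, for whatever interpolation the implementation uses: the input is first downsampled by the amplification factor u and then rescaled by successive powers of s = 0.98.

```python
# Build the multi-scale self-similar image set from a single input.
import numpy as np
from scipy.ndimage import zoom

def self_similar_pyramid(lr_image, u, s=0.98, n_scales=20):
    base = zoom(lr_image, 1.0 / u, order=3)             # downsample by u
    return [zoom(base, s ** n, order=3) for n in range(1, n_scales + 1)]
```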
e. Extract p×p patches from the enhanced reconstructed images and the high-resolution images respectively, using the self-similar sample set obtained in step d combined with the external sample set from step c; determine feature extraction operators, extract features of the enhanced-reconstructed-image and high-resolution-image patches respectively, and estimate a mapping relation incorporating self-sample learning by anchored neighborhood regression and the least squares algorithm; estimate the finally reconstructed high-resolution image from the obtained mapping relation.
The patch size is p = 3u. The feature extraction operators for the enhanced reconstructed image are first-order and second-order differential operators in the horizontal and vertical directions: the patches of the enhanced reconstructed image are convolved with them and written in vector form, and principal component analysis (PCA) reduces the dimension while removing redundant information from the data; high-resolution feature extraction subtracts the corresponding enhanced-reconstructed-image information from the high-resolution patch. From the extracted feature samples, an overcomplete dictionary of size 1024 is trained, 2048 nearest-neighbor samples of each enhanced-reconstructed-image dictionary atom are determined, and the mapping relation between the enhanced-reconstructed-image features and the high-resolution image features is estimated by anchored neighborhood regression and the least squares method:

$$F_{j}=\arg\min_{F}\ \left\|N_{h}^{j}-F\,N_{c}^{j}\right\|_{2}^{2}+\lambda_{c}\left\|F\right\|_{2}^{2}=N_{h}^{j}\left(N_{c}^{j}\right)^{T}\left(N_{c}^{j}\left(N_{c}^{j}\right)^{T}+\lambda_{c}I\right)^{-1},$$

where j denotes the atom index of the enhanced-reconstructed-image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced-reconstructed-image features, $N_{h}^{j}$ the sample set of corresponding high-resolution features, and λ_c the regularization coefficient.
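The gradient-plus-PCA feature pipeline of this step can be sketched as below; the exact filter taps ([-1, 0, 1] and [1, 0, -2, 0, 1], in the style of ANR/A+), the non-overlapping patch grid, and the retained variance are illustrative assumptions.

```python
# First- and second-order differential features followed by PCA reduction.
import numpy as np
from scipy.ndimage import correlate1d

def gradient_features(img, p):
    g1 = np.array([-1.0, 0.0, 1.0])              # first-order operator
    g2 = np.array([1.0, 0.0, -2.0, 0.0, 1.0])    # second-order operator
    # apply each operator along the vertical (0) and horizontal (1) axes
    maps = [correlate1d(img, f, axis=ax) for f in (g1, g2) for ax in (0, 1)]
    h, w = img.shape
    rows = []
    for y in range(0, h - p + 1, p):
        for x in range(0, w - p + 1, p):
            rows.append(np.concatenate([m[y:y + p, x:x + p].ravel() for m in maps]))
    return np.asarray(rows)                      # one row per p x p patch

def pca_reduce(feats, var_kept=0.999):
    centered = feats - feats.mean(axis=0)
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), var_kept)) + 1
    return centered @ vt[:k].T                   # reduced-dimension features
```

The reduced features then feed the same anchored neighborhood regression machinery sketched after step b, now over the self-similar sample set combined with the external samples.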
FIG. 3a shows a low resolution image at a magnification of 2 and FIG. 3b the high resolution image reconstructed by the present invention; FIG. 4a shows a low resolution image at a magnification of 3 and FIG. 4b the high resolution image reconstructed by the present invention.
Although the embodiments are described and illustrated separately above and share some techniques in common, it will be apparent to those skilled in the art that the embodiments may be substituted for or integrated with one another; for matter not explicitly described in one embodiment, reference may be made to another embodiment in which it is described.
The above-described embodiments do not limit the scope of the present invention. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the above embodiments should be included in the scope of the present invention.

Claims (4)

1. A super-resolution image reconstruction method based on self-sample enhancement, characterized by comprising the following steps:
firstly, extracting the non-design features of the low-resolution and high-resolution images respectively with independent convolutional networks; then enhancing the output image of the feature extraction network with a residual network, building a self-similar training set, and learning the mapping relation between the enhanced reconstructed image containing self-samples and the high-resolution image; and finally estimating the reconstructed high-resolution image;
the method comprises the following steps of:
a. giving an external training set, determining an amplification factor u, designing two independent convolutional neural networks, and extracting the non-design features of the low-resolution and high-resolution images respectively;
b. according to the non-design features obtained in step a, extracting p×p patches on the low-resolution and high-resolution feature maps respectively, and writing them in vector form to construct a sample set; training an overcomplete joint dictionary, searching the sample set for a number of neighbor samples of each low-resolution dictionary atom, estimating the mapping relation between the low-resolution and high-resolution non-design features by anchored neighborhood regression and the least squares method, and connecting the two non-design feature extraction networks;
c. designing a residual convolutional network according to the output of the two convolutional networks connected in step b, and enhancing the feature expression of the output images of the two connected non-design feature extraction networks to obtain an enhanced reconstructed image;
d. downsampling the low-resolution image to be reconstructed by a factor of u, determining a sampling factor s for the downsampled image, and constructing multi-scale self-similar images; applying the low-resolution non-design feature extraction network trained in step a and the mapping relation from step b to the constructed self-similar images to obtain the connected-network output images, and applying step c to the output images to obtain the corresponding enhanced reconstructed images, forming a self-similar sample set;
e. extracting p×p patches from the enhanced reconstructed images and the high-resolution images respectively, using the self-similar sample set obtained in step d combined with the external sample set from step c; determining feature extraction operators, extracting features of the enhanced-reconstructed-image and high-resolution-image patches respectively, and estimating a mapping relation incorporating self-sample learning by anchored neighborhood regression and the least squares algorithm; and estimating the finally reconstructed high-resolution image from the obtained mapping relation.
2. The self-sample enhancement based super-resolution image reconstruction method according to claim 1, wherein: in step b, the low-resolution feature map used for collecting patches is the sum of the 32 feature maps extracted by the corresponding non-design feature extraction network, while the high-resolution feature maps are the 32 feature maps extracted by the corresponding network, with patch size p = 3u; overcomplete dictionaries for the different feature maps are learned by the K-SVD method, the number of nearest-neighbor samples for each low-resolution dictionary atom is 2048, and the mapping relation is

$$F_{k,j}=\arg\min_{F}\ \left\|N_{h}^{k,j}-F\,N_{l}^{k,j}\right\|_{2}^{2}+\lambda_{u}\left\|F\right\|_{2}^{2}=N_{h}^{k,j}\left(N_{l}^{k,j}\right)^{T}\left(N_{l}^{k,j}\left(N_{l}^{k,j}\right)^{T}+\lambda_{u}I\right)^{-1},$$

where k denotes the dictionary index of the high-resolution non-design feature map, j the atom index of each low-resolution dictionary, $N_{l}^{k,j}$ the sample set of nearest-neighbor low-resolution non-design features, $N_{h}^{k,j}$ the sample set of corresponding high-resolution non-design features, and λ_u the regularization coefficient.
3. The self-sample enhancement based super-resolution image reconstruction method according to claim 2, wherein: in step d, the sampling factor is 0.98 and the multi-scale parameter is 20.
4. The self-sample enhancement based super-resolution image reconstruction method according to claim 1, wherein: in step e, the patch size p = 3u; the feature extraction operators for the enhanced reconstructed image are first-order and second-order differential operators in the horizontal and vertical directions, the patches of the enhanced reconstructed image are convolved with them and written in vector form, and principal component analysis reduces the dimension while removing redundant information from the data; high-resolution feature extraction subtracts the corresponding enhanced-reconstructed-image information from the high-resolution patch; from the extracted feature samples, an overcomplete dictionary of size 1024 is trained, 2048 nearest-neighbor samples of each enhanced-reconstructed-image dictionary atom are determined, and the mapping relation between the enhanced-reconstructed-image features and the high-resolution image features is estimated by anchored neighborhood regression and the least squares method:

$$F_{j}=\arg\min_{F}\ \left\|N_{h}^{j}-F\,N_{c}^{j}\right\|_{2}^{2}+\lambda_{c}\left\|F\right\|_{2}^{2}=N_{h}^{j}\left(N_{c}^{j}\right)^{T}\left(N_{c}^{j}\left(N_{c}^{j}\right)^{T}+\lambda_{c}I\right)^{-1},$$

where j denotes the atom index of the enhanced-reconstructed-image dictionary, $N_{c}^{j}$ the sample set of nearest-neighbor enhanced-reconstructed-image features, $N_{h}^{j}$ the sample set of corresponding high-resolution features, and λ_c the regularization coefficient.
CN201911170154.4A 2019-11-26 2019-11-26 Super-resolution image reconstruction method based on self-sample enhancement Active CN111080516B (en)

Priority Applications (1)

Application Number: CN201911170154.4A | Priority Date: 2019-11-26 | Filing Date: 2019-11-26 | Publication: CN111080516B (en) | Title: Super-resolution image reconstruction method based on self-sample enhancement


Publications (2)

Publication Number Publication Date
CN111080516A CN111080516A (en) 2020-04-28
CN111080516B (en) 2023-04-28

Family

ID=70311665

Family Applications (1)

Application Number: CN201911170154.4A | Status: Active | Priority Date: 2019-11-26 | Filing Date: 2019-11-26 | Title: Super-resolution image reconstruction method based on self-sample enhancement

Country Status (1)

Country Link
CN (1) CN111080516B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628108B (en) * 2021-07-05 2023-10-27 上海交通大学 Image super-resolution method and system based on discrete representation learning and terminal
CN116228786B (en) * 2023-05-10 2023-08-08 青岛市中心医院 Prostate MRI image enhancement segmentation method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018120329A1 (en) * 2016-12-28 2018-07-05 深圳市华星光电技术有限公司 Single-frame super-resolution reconstruction method and device based on sparse domain reconstruction
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109741256A (en) * 2018-12-13 2019-05-10 西安电子科技大学 Image super-resolution rebuilding method based on rarefaction representation and deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105931179B (en) * 2016-04-08 2018-10-26 武汉大学 A kind of image super-resolution method and system of joint sparse expression and deep learning


Also Published As

Publication number Publication date
CN111080516A (en) 2020-04-28

Similar Documents

Publication Publication Date Title
CN109087258B (en) Deep learning-based image rain removing method and device
CN111462013B (en) Single-image rain removing method based on structured residual learning
CN110415199B (en) Multispectral remote sensing image fusion method and device based on residual learning
CN110689483B (en) Image super-resolution reconstruction method based on depth residual error network and storage medium
CN109523470B (en) Depth image super-resolution reconstruction method and system
CN112991472B (en) Image compressed sensing reconstruction method based on residual error dense threshold network
CN111401247B (en) Portrait segmentation method based on cascade convolution neural network
CN111127354B (en) Single-image rain removing method based on multi-scale dictionary learning
CN111080516B (en) Super-resolution image reconstruction method based on self-sample enhancement
CN112329801B (en) Convolutional neural network non-local information construction method
CN111402138A (en) Image super-resolution reconstruction method of supervised convolutional neural network based on multi-scale feature extraction fusion
CN111861884A (en) Satellite cloud image super-resolution reconstruction method based on deep learning
CN112241939A (en) Light-weight rain removing method based on multi-scale and non-local
CN111815516B (en) Super-resolution reconstruction method for weak supervision infrared remote sensing image
CN110288529B (en) Single image super-resolution reconstruction method based on recursive local synthesis network
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
CN109272450B (en) Image super-resolution method based on convolutional neural network
CN107622476B (en) Image Super-resolution processing method based on generative probabilistic model
CN107451961A (en) The restoration methods of picture rich in detail under several fuzzy noise images
CN113793267B (en) Self-supervision single remote sensing image super-resolution method based on cross-dimension attention mechanism
AU2021104479A4 (en) Text recognition method and system based on decoupled attention mechanism
CN112907456B (en) Deep neural network image denoising method based on global smooth constraint prior model
CN115082307A (en) Image super-resolution method based on fractional order differential equation
CN114529450A (en) Face image super-resolution method based on improved depth iterative cooperative network
Wang et al. Lightweight non-local network for image super-resolution

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant