CN109919840A - Image super-resolution rebuilding method based on dense feature converged network - Google Patents
- Publication number
- CN109919840A CN109919840A CN201910052202.3A CN201910052202A CN109919840A CN 109919840 A CN109919840 A CN 109919840A CN 201910052202 A CN201910052202 A CN 201910052202A CN 109919840 A CN109919840 A CN 109919840A
- Authority
- CN
- China
- Legal status
- Pending
Landscapes
- Image Processing (AREA)
Abstract
The invention discloses an image super-resolution reconstruction method based on a dense feature fusion network, comprising the following steps: 1) data preprocessing; 2) establishing an image super-resolution reconstruction model; 3) feeding the image to be processed into the model to obtain a high-resolution image. The image super-resolution reconstruction model comprises a coarse feature extraction network, a dense feature fusion network and an image reconstruction network. The coarse feature extraction network extracts coarse image features from the low-resolution color image; the dense feature fusion network extracts high-order image features from the coarse features; the image reconstruction network adds the coarse features to the high-order features to obtain dense image features, and then reconstructs the dense features into a high-resolution image. The method effectively reduces the noise introduced by traditional interpolation-based upscaling in super-resolution algorithms, recovers more high-frequency information so that high-resolution image details are restored, and improves the accuracy of super-resolution reconstruction.
Description
Technical field
The present invention relates to an image super-resolution reconstruction method based on a dense feature fusion network, and belongs to the field of image processing.
Background art
The image super-resolution problem, in particular single image super-resolution (SISR, Single Image Super-Resolution), is a classical problem in computer vision. Its goal is to recover a visually satisfactory high-resolution (HR, High Resolution) image from a single low-resolution (LR, Low Resolution) image produced by an inexpensive imaging system under limited environmental conditions. Since super-resolution reconstruction adds little hardware cost, it is widely applied in scenarios such as video surveillance, medical imaging and face recognition.
Methods for the SISR problem can generally be divided into three classes: interpolation-based methods, reconstruction-based methods and learning-based methods. Among these, super-resolution methods based on deep learning use convolutional neural networks to learn the mapping from low-resolution to high-resolution images directly, end to end. Dong et al. were the first to apply a convolutional neural network to the SISR problem and achieved a clear performance improvement. Kim et al. then constructed VDSR, a deep convolutional network of up to 20 layers, introducing residual learning and a large learning rate to simplify model training, and combining gradient clipping with residual learning to avoid gradient vanishing and explosion. Subsequently, Kim et al. proposed DRCN, which introduces recursive convolutional layers to keep the parameter count of a deep network under control while still improving model performance.
Although these deep-learning-based super-resolution methods improve reconstruction quality and algorithmic efficiency to some extent, shortcomings remain. First, to reach the target spatial resolution, existing methods interpolate the low-resolution input image up to the target size as a preprocessing step; this increases the computational load of a deep neural network and loses some of the detail in the original low-resolution image. Second, the reconstructed images are often blurry or over-smoothed, with insufficient detail recovery and even artifacts. Finally, these methods improve performance only by adding depth, which yields small gains and makes the network difficult to train.
Summary of the invention
Object of the invention: in view of the deficiencies of the prior art, an image super-resolution reconstruction method based on a dense feature fusion network is proposed.
To achieve the above object, the technical solution adopted by the present invention is as follows:
An image super-resolution reconstruction method based on a dense feature fusion network comprises the following steps:
1) normalizing the original images, downscaling them by bicubic interpolation at different magnification ratios to obtain corresponding low-resolution images, and constructing the training data set from pairs of low-resolution color images and original images;
2) establishing the image super-resolution reconstruction model, dividing the data set obtained in step 1) into a training set and a test set, training the image super-resolution reconstruction model on the training set, and testing it on the test set;
3) normalizing the image to be processed and feeding it into the image super-resolution reconstruction model to obtain a high-resolution image.
Preferably, the image super-resolution reconstruction model described in step 2) comprises a coarse feature extraction network, a dense feature fusion network and an image reconstruction network. The coarse feature extraction network extracts coarse image features from the low-resolution color images obtained in step 1); the dense feature fusion network extracts high-order image features from the coarse features; the image reconstruction network adds the coarse features to the high-order features to obtain dense image features, and then reconstructs the dense features into a high-resolution image.
Preferably, the coarse feature extraction network consists of a single convolutional layer; the low-resolution image is convolved by this layer to obtain the coarse image features F_0:

F_0 = W_coarse × Img_lr    (1)

where W_coarse denotes the parameters of the convolutional layer of the coarse feature extraction network, and Img_lr denotes the preprocessed low-resolution image.
Preferably, the dense feature fusion network is formed by cascading several feature fusion modules, each comprising a global feature fusion unit and a feature extraction unit. Assuming there are M feature fusion modules in total, the output of the global feature fusion unit in the n-th module (n ≤ M) is:

fusion_n = [[F_0, F_1, …, F_{n-2}], F_{n-1}]    (2)

where F_{n-1} is the output feature of the (n-1)-th feature fusion module and F_0 is the output of the coarse feature extraction network. The feature extraction unit then outputs:

F_n = W_2 × σ(W_1 × fusion_n)    (3)

where W_1 and W_2 are convolutional layer weights and σ is the ReLU activation function. Finally, the output of the dense feature fusion network is added to the output of the coarse feature extraction network to obtain the dense image features F_dense.
Preferably, the image reconstruction network comprises two convolutional layers and a sub-pixel arrangement layer; the dense image features pass in turn through a convolutional layer, the sub-pixel arrangement layer and a further convolutional layer to produce the target high-resolution image. The sub-pixel arrangement layer performs a periodic rearrangement: it takes a tensor of dimension H × W × (C·r²), periodically rearranges its elements along the channel dimension, and transforms it into a tensor of dimension rH × rW × C, realizing the spatial upscaling, where H, W and C denote the height, width and channel count of the tensor and r denotes the magnification ratio. The reconstruction network is expressed as:

Img_SR = W_rec2 × PS(W_rec1 × F_dense)    (4)

where W_rec1 and W_rec2 are the weights of the two convolutional layers in the reconstruction network, PS denotes the function of the sub-pixel arrangement layer, and Img_SR is the target high-resolution image output by the reconstruction network.
Beneficial effects: the present invention learns features directly from the low-resolution image and reconstructs the high-resolution image directly from those features, restoring rich image detail. By fully exploiting intermediate-layer features it improves the quality and accuracy of super-resolution reconstruction, effectively reduces the noise introduced by traditional interpolation-based upscaling, and recovers more high-frequency information so that high-resolution image details are restored.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the embodiment;
Fig. 2 shows the structure of the dense feature fusion network model;
Fig. 3 shows the feature fusion module.
Specific embodiment
The present invention is further explained below with reference to the embodiment and the drawings, which do not limit the invention.
The embodiment of the present invention, shown in Fig. 1, comprises the following steps:
1) Data preprocessing: the pixel values of the original color images are normalized to [0, 1], and the images are downscaled by interpolation at different magnification ratios, generating one data set per magnification ratio for the subsequent model training.
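As a concrete illustration of this preprocessing step, the sketch below normalizes a color image to [0, 1] and builds one LR/HR training pair. The patent specifies bicubic downscaling; a dependency-free r × r block average stands in for it here so the sketch needs only NumPy, and the function names are illustrative, not from the patent.

```python
import numpy as np

def normalize(img_uint8):
    # Step 1 of the patent: map 8-bit pixel values into [0, 1].
    return img_uint8.astype(np.float32) / 255.0

def make_lr_hr_pair(hr, r):
    # The patent downscales by bicubic interpolation at ratio r; a simple
    # r x r block average stands in here to keep the sketch dependency-free.
    h, w, c = hr.shape
    h, w = h - h % r, w - w % r          # crop so dimensions divide by r
    hr = hr[:h, :w]
    lr = hr.reshape(h // r, r, w // r, r, c).mean(axis=(1, 3))
    return lr, hr

hr = normalize(np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8))
lr, hr = make_lr_hr_pair(hr, r=3)
```

In practice one such pair set is generated per magnification ratio (2, 3, 4), matching the per-ratio models trained later.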
2) Establishing the reconstruction model: as shown in Fig. 2, the reconstruction model comprises a coarse feature extraction network, a dense feature fusion network and an image reconstruction network. The coarse feature extraction network consists of one convolutional layer and extracts the coarse features F_0, with 32 feature channels, directly from the original low-resolution image:

F_0 = W_coarse × Img_lr    (1)

where W_coarse denotes the parameters of the convolutional layer of the coarse feature extraction network and Img_lr denotes the preprocessed low-resolution image.
The dense feature fusion network consists of 32 cascaded feature fusion modules, each outputting 32 feature channels. The image reconstruction network consists of two convolutional layers and one sub-pixel arrangement layer. All convolutional layers in the reconstruction model use 3×3 kernels with zero padding on their inputs, so that the output has the same spatial resolution as the input.
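A minimal NumPy sketch of such a 3×3 zero-padded convolution follows (Eq. 1 is one such layer with 32 output channels). Bias terms and any deep-learning framework are omitted, and the function name is illustrative.

```python
import numpy as np

def conv3x3(x, w):
    # x: (H, W, C_in) feature map; w: (3, 3, C_in, C_out) kernel.
    # Zero padding keeps the output at the input's spatial resolution,
    # as the reconstruction model requires for every convolution.
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[3]), dtype=x.dtype)
    for i in range(3):
        for j in range(3):
            out += xp[i:i + H, j:j + W] @ w[i, j]
    return out

# Eq. (1): coarse features F0 from a low-resolution RGB image.
img_lr = np.random.rand(16, 16, 3).astype(np.float32)
w_coarse = (np.random.randn(3, 3, 3, 32) * 0.1).astype(np.float32)
f0 = conv3x3(img_lr, w_coarse)
```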
3) Each feature fusion module in the dense feature fusion network consists of two parts, a global feature fusion unit and a feature extraction unit, as shown in Fig. 3. The network contains 32 feature fusion modules in total; the output of the global feature fusion unit in the n-th module is given by formula (2):

fusion_n = [[F_0, F_1, …, F_{n-2}], F_{n-1}]    (2)

where F_{n-1} is the 32-channel output feature of the (n-1)-th feature fusion module and F_0 is the output of the coarse feature extraction network. The feature extraction unit consists of two cascaded convolutional layers, and its output is given by formula (3):

F_n = W_2 × σ(W_1 × fusion_n)    (3)

where W_1 and W_2 are the weights of the two convolutional layers and σ is the ReLU activation function. The feature extraction unit not only extracts high-order features but also adaptively determines how much of the previous global information should be retained.
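The per-module computation of formulas (2) and (3) can be sketched in NumPy as follows. For brevity, 1×1 convolutions (plain per-pixel matrix products) stand in for the patent's two 3×3 layers, and only 4 of the 32 modules are cascaded; all names are illustrative.

```python
import numpy as np

def global_fusion(feats):
    # Eq. (2): concatenate F0 and all earlier module outputs F1..F_{n-1}
    # along the channel axis.
    return np.concatenate(feats, axis=-1)

def feature_extraction(fused, w1, w2):
    # Eq. (3): F_n = W2 x relu(W1 x fusion_n); 1x1 convolutions stand in
    # for the patent's 3x3 layers.
    return np.maximum(fused @ w1, 0.0) @ w2

C = 32                                   # 32 feature channels per module
rng = np.random.default_rng(0)
feats = [rng.random((8, 8, C), dtype=np.float32)]   # F0 from coarse network
for n in range(1, 5):                    # the patent cascades 32 modules; 4 here
    fused = global_fusion(feats)         # channel width grows to n * C
    w1 = rng.standard_normal((fused.shape[-1], C), dtype=np.float32) * 0.1
    w2 = rng.standard_normal((C, C), dtype=np.float32) * 0.1
    feats.append(feature_extraction(fused, w1, w2))

f_dense = feats[-1] + feats[0]           # residual add with coarse features
```

Note how the fusion input widens linearly with the module index while every module output stays at 32 channels, which is what keeps the parameter count of the deep network under control.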
4) Image reconstruction network: the output feature of step 3) is added to the coarse feature F_0 to give the dense image features F_dense. These pass through one convolutional layer, after which the sub-pixel arrangement layer rearranges the output features into a high-resolution feature map, and a final convolutional layer produces the target high-resolution color image. The output channel count of the first convolutional layer depends on the magnification ratio: for ratio 2 it outputs 32×2×2 = 128 channels, and for ratio 3 it outputs 32×3×3 = 288 channels. For ratio 4, two convolution/sub-pixel-arrangement stages are cascaded, each convolution outputting 128 channels. The last convolutional layer outputs 3 channels, the channel count of a color image, producing the final target high-resolution image.
The reconstruction network is expressed as:

Img_SR = W_rec2 × PS(W_rec1 × F_dense)    (4)

where W_rec1 and W_rec2 are the weights of the two convolutional layers in the reconstruction network, PS denotes the function of the sub-pixel arrangement layer, and Img_SR is the target high-resolution image output by the reconstruction network.
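The sub-pixel arrangement layer PS can be sketched as a pure reshape/transpose in NumPy. The exact channel-grouping convention below is an assumption, since the text only fixes the H × W × (C·r²) → rH × rW × C mapping.

```python
import numpy as np

def pixel_shuffle(x, r):
    # PS in Eq. (4): rearrange a (H, W, C*r^2) tensor into (r*H, r*W, C)
    # by distributing each group of r*r channels over an r x r spatial block.
    H, W, Cr2 = x.shape
    C = Cr2 // (r * r)
    x = x.reshape(H, W, r, r, C)
    x = x.transpose(0, 2, 1, 3, 4)       # (H, r, W, r, C)
    return x.reshape(H * r, W * r, C)

# Magnification ratio 2: 128 = 32 * 2 * 2 channels in, 32 channels out.
f = np.random.rand(8, 8, 128).astype(np.float32)
up = pixel_shuffle(f, r=2)
```

Because the operation is a rearrangement rather than an interpolation, no new pixel values are invented at this stage; all upscaled detail comes from the learned convolutional features.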
5) Model training: patch pairs are cropped from the low-resolution and high-resolution color images for training, with random data augmentation applied during training, such as horizontal flips, vertical flips and 90-degree rotations. The learning rate is initialized to 0.0001 and halved after every 200,000 iterations; the model is trained with an absolute error (L1) loss, accelerated on a GPU, until convergence. A separate model can be trained for each magnification ratio.
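The stated training hyperparameters translate directly into code; a minimal sketch follows (function names are illustrative, and the surrounding optimizer and data-loading loop are omitted).

```python
import numpy as np

def learning_rate(step, base=1e-4):
    # Step 5 of the patent: lr starts at 0.0001 and is halved after
    # every 200,000 iterations.
    return base * 0.5 ** (step // 200_000)

def l1_loss(pred, target):
    # "Absolute error loss": mean absolute difference between the
    # reconstructed and the ground-truth high-resolution image.
    return float(np.mean(np.abs(pred - target)))
```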
6) Image testing: the low-resolution image to be processed is normalized and fed into the model to obtain the high-resolution image.
The above is only a preferred embodiment of the present invention. It should be pointed out that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.
Claims (5)
1. An image super-resolution reconstruction method based on a dense feature fusion network, characterized by comprising the following steps:
1) normalizing the original images, downscaling them by bicubic interpolation at different magnification ratios to obtain corresponding low-resolution images, and constructing a data set from pairs of low-resolution color images and original images;
2) establishing an image super-resolution reconstruction model, dividing the data set obtained in step 1) into a training set and a test set, training the image super-resolution reconstruction model on the training set, and testing it on the test set;
3) normalizing the image to be processed and feeding it into the image super-resolution reconstruction model to obtain a high-resolution image.
2. The image super-resolution reconstruction method based on a dense feature fusion network according to claim 1, characterized in that the image super-resolution reconstruction model described in step 2) comprises a coarse feature extraction network, a dense feature fusion network and an image reconstruction network; the coarse feature extraction network extracts coarse image features from the low-resolution color images obtained in step 1); the dense feature fusion network extracts high-order image features from the coarse image features; the image reconstruction network adds the coarse image features to the high-order image features to obtain dense image features, and then reconstructs the dense image features into a high-resolution image.
3. The image super-resolution reconstruction method based on a dense feature fusion network according to claim 2, characterized in that the coarse feature extraction network consists of a single convolutional layer, and the low-resolution image is convolved by this layer to obtain the coarse image features F_0, expressed as follows:

F_0 = W_coarse × Img_lr    (1)

where W_coarse denotes the parameters of the convolutional layer of the coarse feature extraction network and Img_lr denotes the low-resolution image.
4. The image super-resolution reconstruction method based on a dense feature fusion network according to claim 2, characterized in that the dense feature fusion network is formed by cascading several feature fusion modules, each comprising a global feature fusion unit and a feature extraction unit; assuming there are M feature fusion modules in total, the output of the global feature fusion unit in the n-th module (n ≤ M) is:

fusion_n = [[F_0, F_1, …, F_{n-2}], F_{n-1}]    (2)

where F_{n-1} is the output feature of the (n-1)-th feature fusion module and F_0 is the output of the coarse feature extraction network; the feature extraction unit then outputs:

F_n = W_2 × σ(W_1 × fusion_n)    (3)

where W_1 and W_2 are convolutional layer weights and σ is the ReLU activation function; finally, the output of the dense feature fusion network is added to the output of the coarse feature extraction network to obtain the dense image features F_dense.
5. The image super-resolution reconstruction method based on a dense feature fusion network according to claim 2, characterized in that the image reconstruction network comprises two convolutional layers and a sub-pixel arrangement layer; the dense image features pass in turn through a convolutional layer, the sub-pixel arrangement layer and a further convolutional layer to obtain the target high-resolution image; the sub-pixel arrangement layer performs a periodic rearrangement: it takes a tensor of dimension H × W × (C·r²), periodically rearranges its elements along the channel dimension, and finally transforms it into a tensor of dimension rH × rW × C, realizing the spatial upscaling, where H, W and C respectively denote the height, width and channel count of the tensor and r denotes the magnification ratio; the reconstruction network is expressed as:

Img_SR = W_rec2 × PS(W_rec1 × F_dense)    (4)

where W_rec1 and W_rec2 are the weights of the two convolutional layers in the image reconstruction network, PS denotes the function of the sub-pixel arrangement layer, and Img_SR is the target high-resolution image output by the reconstruction network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910052202.3A CN109919840A (en) | 2019-01-21 | 2019-01-21 | Image super-resolution rebuilding method based on dense feature converged network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109919840A (en) | 2019-06-21
Family
ID=66960565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910052202.3A Pending CN109919840A (en) | 2019-01-21 | 2019-01-21 | Image super-resolution rebuilding method based on dense feature converged network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109919840A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106991646A (en) * | 2017-03-28 | 2017-07-28 | 福建帝视信息科技有限公司 | A kind of image super-resolution method based on intensive connection network |
CN107610194A (en) * | 2017-08-14 | 2018-01-19 | 成都大学 | MRI super resolution ratio reconstruction method based on Multiscale Fusion CNN |
CN108765296A (en) * | 2018-06-12 | 2018-11-06 | 桂林电子科技大学 | A kind of image super-resolution rebuilding method based on recurrence residual error attention network |
CN108765291A (en) * | 2018-05-29 | 2018-11-06 | 天津大学 | Super resolution ratio reconstruction method based on dense neural network and two-parameter loss function |
CN109214985A (en) * | 2018-05-16 | 2019-01-15 | 长沙理工大学 | The intensive residual error network of recurrence for image super-resolution reconstruct |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110443755A (en) * | 2019-08-07 | 2019-11-12 | 杭州智团信息技术有限公司 | A method of the image super-resolution based on low-and high-frequency semaphore |
CN112446824A (en) * | 2019-08-28 | 2021-03-05 | 新华三技术有限公司 | Image reconstruction method and device |
CN112446824B (en) * | 2019-08-28 | 2024-02-27 | 新华三技术有限公司 | Image reconstruction method and device |
CN111091521A (en) * | 2019-12-05 | 2020-05-01 | 腾讯科技(深圳)有限公司 | Image processing method and device, electronic equipment and computer readable storage medium |
CN111223161A (en) * | 2020-01-02 | 2020-06-02 | 京东数字科技控股有限公司 | Image reconstruction method and device and storage medium |
CN111223161B (en) * | 2020-01-02 | 2024-04-12 | 京东科技控股股份有限公司 | Image reconstruction method, device and storage medium |
CN111461983B (en) * | 2020-03-31 | 2023-09-19 | 华中科技大学鄂州工业技术研究院 | Image super-resolution reconstruction model and method based on different frequency information |
CN111461983A (en) * | 2020-03-31 | 2020-07-28 | 华中科技大学鄂州工业技术研究院 | Image super-resolution reconstruction model and method based on different frequency information |
CN111476802A (en) * | 2020-04-09 | 2020-07-31 | 山东财经大学 | Medical image segmentation and tumor detection method and device based on dense convolution model and readable storage medium |
CN111476802B (en) * | 2020-04-09 | 2022-10-11 | 山东财经大学 | Medical image segmentation and tumor detection method, equipment and readable storage medium |
CN111681296B (en) * | 2020-05-09 | 2024-03-22 | 上海联影智能医疗科技有限公司 | Image reconstruction method, image reconstruction device, computer equipment and storage medium |
CN111681296A (en) * | 2020-05-09 | 2020-09-18 | 上海联影智能医疗科技有限公司 | Image reconstruction method and device, computer equipment and storage medium |
CN112801868B (en) * | 2021-01-04 | 2022-11-11 | 青岛信芯微电子科技股份有限公司 | Method for image super-resolution reconstruction, electronic device and storage medium |
CN112801868A (en) * | 2021-01-04 | 2021-05-14 | 青岛信芯微电子科技股份有限公司 | Method for image super-resolution reconstruction, electronic device and storage medium |
CN113570505B (en) * | 2021-09-24 | 2022-01-04 | 中国石油大学(华东) | Shale three-dimensional super-resolution digital core grading reconstruction method and system |
CN113570505A (en) * | 2021-09-24 | 2021-10-29 | 中国石油大学(华东) | Shale three-dimensional super-resolution digital core grading reconstruction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190621 ||