CN110211038A - Super-resolution reconstruction method based on a Dirac residual deep neural network - Google Patents

Super-resolution reconstruction method based on a Dirac residual deep neural network

Info

Publication number
CN110211038A
CN110211038A (application CN201910354259.9A)
Authority
CN
China
Prior art keywords
Dirac
network
residual
convolution
deep neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910354259.9A
Other languages
Chinese (zh)
Inventor
Yang Xin
Xie Tangxin
Zhu Chen
Zhou Dake
Li Xiaochuan
Li Zhiqiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201910354259.9A priority Critical patent/CN110211038A/en
Publication of CN110211038A publication Critical patent/CN110211038A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Abstract

The invention discloses a super-resolution reconstruction method based on a Dirac residual deep neural network. The network takes a low-resolution (LR) picture as input, learns image features through multiple Dirac-parameterized residual blocks, and reconstructs the high-resolution (HR) picture using sub-pixel convolution. The network is divided into an upper and a lower part: the upper part extracts the high-frequency features of the LR image through a deep Dirac residual network and then reconstructs by sub-pixel convolution; the lower part reconstructs directly by applying sub-pixel convolution to the low-frequency features of the LR image. The two reconstruction branches are then combined to output the reconstructed HR image. The invention improves the residual layer with weight parameters, reduces the convolution feature dimension before the ReLU activation function, and increases the convolution feature dimension after the activation function. At the same time, the convolution features of the input image are combined with the features learned by the residual network for reconstruction, yielding a better SR result at the same network depth.

Description

Super-resolution reconstruction method based on a Dirac residual deep neural network
Technical field
The present invention relates to a super-resolution reconstruction method based on a Dirac residual deep neural network, and belongs to the technical field of image reconstruction.
Background technique
Super-resolution reconstruction (SR) refers to estimating a high-resolution (HR) image or video from one or more observed low-resolution (LR) images of the same scene. Using digital image processing and machine learning, SR breaks through the hardware limitations of the imaging system to obtain images with higher spatial resolution and more detail, and is currently an efficient and inexpensive technical means of acquiring high-definition images.
By input/output mode, SR can be divided into three classes: single-input single-output (SISO), multiple-input single-output (MISO), and multiple-input multiple-output (MIMO), where MIMO corresponds to video super-resolution. More coarsely, it can be divided into two classes: single-image super-resolution (SISR) and multi-image or multi-frame super-resolution. SISR estimates an HR image from a single given LR image when the original image cannot be obtained; with different methods, one low-resolution image can be reconstructed into different high-resolution images.
Common SISR approaches at present are analytic interpolation, reconstruction-based methods, and learning-based methods. Frequently used analytic interpolations include linear interpolation and bicubic interpolation. Such methods compute each missing pixel of the HR image as an average of known pixels; they work well in smooth regions but poorly in discontinuous regions such as edges, leading to ringing and blur. Reconstruction-based methods usually require specific prior information (in forms such as distributions or energy functions) while constraining the target HR image; this can be realized in several ways, such as sharpening edge details, regularization, and deconvolution. Machine-learning-based methods include dictionary-learning methods, which reconstruct by learning the correspondence between LR and HR image-pixel blocks, and methods that learn self-similarity within the same image, such as neighbor embedding.
Recently proposed SR methods are mainly based on deep learning. Such methods construct a deep convolutional network model and train it on a large number of images, reconstructing by learning prior knowledge from LR image blocks and HR image blocks. Typical methods include SRCNN, VDSR, and EDSR.
SRCNN was the first method to apply deep learning to SR. Proposed by Chao Dong in 2014, it learns an end-to-end mapping directly between the interpolated LR input image and the corresponding HR image. In 2016, Chao Dong introduced a deconvolution layer in the proposed FSRCNN, increasing the convolution feature dimension and reconstructing from the original LR image.
The ESPCN network proposed by Wenzhe Shi et al. in 2016 introduced the sub-pixel convolution layer, which learns a series of convolution features for the final reconstruction layer. This technique has since been used in most SR methods.
VDSR and DRCN were proposed by Jiwon Kim et al. They enlarge the network's receptive field by using a very deep convolutional network. VDSR uses a skip connection to exploit the low-frequency information shared by the LR and HR images, so the network only needs to learn the high-frequency residual between HR and LR.
Tong et al. proposed SRDenseNet by introducing dense skip connections. Feeding each layer's features to every subsequent layer within a dense block mitigates the vanishing-gradient problem.
EDSR, proposed by Bee Lim et al., builds on SRResNet: it increases the convolution feature dimension, removes the BN (batch normalization) layers, scales the output of each residual block to address training instability, and reconstructs using sub-pixel convolution.
WDSR, proposed by JiaHui Yu et al., improves on EDSR by increasing the residual block's convolution feature dimension before the activation function and reducing it after, and by introducing WN (weight normalization) in the convolutions. A skip connection applies a separate sub-pixel convolution to the input LR image, which is combined with the feature sub-pixel layer to complete the image reconstruction.
Networks such as EDSR and WDSR build their deep neural networks from residual blocks. Each residual block consists of two convolutional layers with an activation function between them; the output of the second convolution is added to the input of the first convolution to form the block's output. As network depth increases, the gain in reconstruction quality diminishes.
Summary of the invention
The technical problem to be solved by the present invention is to provide a super-resolution reconstruction method based on a Dirac residual deep neural network, which learns image features through multiple Dirac-parameterized residual blocks and reconstructs the high-resolution image using sub-pixel convolution, so as to obtain a better SR result at the same network depth.
The present invention adopts the following technical scheme to solve the above technical problem:
A super-resolution reconstruction method based on a Dirac residual deep neural network comprises the following steps:
Step 1: construct a deep neural network based on Dirac residuals. The network comprises an upper part and a lower part. The upper part comprises a feature-extraction network, multiple Dirac-parameterized residual blocks, and a reconstruction network; the feature-extraction network comprises, in order, a convolutional layer and an activation-function layer; each Dirac-parameterized residual block comprises, in order, a first Dirac convolutional layer, an activation-function layer, and a second Dirac convolutional layer; the reconstruction network comprises, in order, a convolutional layer and a sub-pixel reconstruction layer. The input of the feature-extraction network is the original low-resolution image; the output of the feature-extraction network is the input of the first Dirac-parameterized residual block; the output of each Dirac-parameterized residual block is the input of the next; and the output of the last Dirac-parameterized residual block is the input of the reconstruction network. The lower part comprises a global skip-connection network consisting, in order, of a convolutional layer and a sub-pixel reconstruction layer, with the original low-resolution image as the input of the lower part's convolutional layer.
The output of the first or second Dirac convolutional layer is:

y = Ŵ ⊙ x, where Ŵ = diag(α)·I + diag(β)·W_norm

where y denotes the output of the first or second Dirac convolutional layer, x denotes its input, ⊙ is the convolution operation, Ŵ denotes the merged convolution-kernel parameter matrix, W_norm is the convolution operation parameter, I is the skip-operation parameter, and diag(α) and diag(β) are weight parameters.
Step 2: down-sample the original high-resolution images to obtain low-resolution training images, and use the low-resolution training images to train the constructed deep neural network based on Dirac residuals, obtaining a trained network.
Step 3: reconstruct the input original low-resolution image to be processed using the trained network. The upper part of the network extracts the high-frequency features of the original low-resolution image, the lower part extracts its low-frequency features, and the two are combined to output the reconstructed super-resolution image.
As a preferred solution of the present invention, the convolutional layer in the feature-extraction network of step 1 has M × N convolution kernels, each of size 3 × 3, and the activation function in the feature-extraction network is ReLU.
As a preferred solution of the present invention, in each Dirac-parameterized residual block of step 1, the first Dirac convolutional layer has M × N convolution kernels, each of size 3 × 3, and the second Dirac convolutional layer has M convolution kernels, each of size 3 × 3; the activation function in the Dirac-parameterized residual block is ReLU.
As a preferred solution of the present invention, the deep neural network based on Dirac residuals constructed in step 2 is trained using Adam as the back-propagation optimizer.
As a preferred solution of the present invention, when training the deep neural network based on Dirac residuals constructed in step 2, an L2 loss function is used:

Loss = (1/n) Σᵢ ‖H⁽ⁱ⁾ − Ĥ⁽ⁱ⁾‖²

where Loss denotes the loss function, H⁽ⁱ⁾ denotes an original high-resolution image block, and Ĥ⁽ⁱ⁾ denotes the high-resolution image block reconstructed from the corresponding low-resolution image.
Compared with the prior art, the above technical scheme has the following technical effects:
The present invention improves the residual layer with weight parameters, reduces the convolution feature dimension before the activation function (ReLU), and increases the convolution feature dimension after the activation function. At the same time, the convolution features of the input image are combined with the features learned by the residual network for reconstruction, so that a better reconstruction result is obtained at the same network depth.
Detailed description of the invention
Fig. 1 is a flow diagram of the super-resolution reconstruction method based on a Dirac residual deep neural network according to the present invention.
Specific embodiment
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings. The embodiments described with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting the claims.
For the single-image super-resolution problem, a single low-resolution image is taken as the input of the Dirac residual deep neural network, and the super-resolution reconstructed image is estimated. Fig. 1 shows the flow of the super-resolution reconstruction method based on a Dirac residual deep neural network. The network is divided into an upper and a lower part. The upper part obtains the high-frequency features of the LR image through the deep Dirac residual network and then reconstructs by sub-pixel convolution. The lower part reconstructs directly by applying sub-pixel convolution to the low-frequency features of the LR image. The two reconstruction branches are then combined to output the reconstructed HR image.
The upper part comprises a feature-extraction layer, multiple Dirac-parameterized residual blocks, and a network reconstruction layer. The feature-extraction layer, consisting of a convolutional layer and a ReLU activation layer, extracts high-frequency image features from the low-resolution input image as the input of the Dirac residual blocks. The input of the first Dirac residual block is obtained by convolving the LR image; the input of every subsequent Dirac residual block is the output of the previous residual block. Each Dirac residual block comprises two Dirac convolutional layers and one ReLU activation layer. The network reconstruction layer comprises a convolutional layer and a sub-pixel reconstruction layer; reconstruction uses both the input low-resolution image features and the image features learned by the residual network.
The Dirac deep neural network uses Dirac-parameterized residual blocks: the skip connection of a traditional residual network is absorbed into the parameterization, so that the weights of the residual-block output and the skip-connection output can be changed through network training.
The lower part comprises a global skip-connection layer that directly uses the low-resolution image; it consists of a convolutional layer and a sub-pixel reconstruction layer. The input of the upper part's network reconstruction layer is the output of the last Dirac residual block; the input of the lower part's sub-pixel reconstruction layer is the convolved LR image features. The outputs of the two sub-pixel reconstruction layers are then combined as the output of the neural network.
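The sub-pixel reconstruction layer used in both branches can be illustrated by the channel-to-space rearrangement it performs. The following is a minimal NumPy sketch; the function name and the channels-first layout are choices of this example, not taken from the patent:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature tensor into a (C, H*r, W*r) image.

    Each group of r*r channels supplies the r x r sub-pixel grid of one
    output channel, which is how sub-pixel convolution upscales by r.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)    # split channels into (c, r_row, r_col)
    x = x.transpose(0, 3, 1, 4, 2)  # interleave: (c, h, r_row, w, r_col)
    return x.reshape(c, h * r, w * r)

# A 4-channel 2x2 feature map becomes one 4x4 image for x2 upscaling.
feats = np.arange(16, dtype=np.float64).reshape(4, 2, 2)
img = pixel_shuffle(feats, 2)
assert img.shape == (1, 4, 4)
```

In the network described here, the upper branch would apply this rearrangement to the learned residual features and the lower branch to the convolved LR features, and the two upscaled outputs are then combined.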
The output of any Dirac convolutional layer in any residual block is:

y = Ŵ ⊙ x, where Ŵ = diag(α)·I + diag(β)·W_norm

Here y is the output of the Dirac convolution, x is the input feature map of the convolutional layer, and ⊙ is the convolution operation. Ŵ is the merged convolution parameter matrix (it subsumes both the convolution operation and the skip operation), W_norm is the convolution operation parameter, I is the skip-operation parameter, and diag(α) and diag(β) are weight parameters. I is the identity parameter matrix derived from the convolution window, also called the Dirac delta transform: any input x passed through this I matrix is output as x itself. diag(α) and diag(β) are trainable vector parameters that control the weights of W_norm and I. If diag(α) approaches 0, the effect of the identity matrix I shrinks and the skip connection is weakened; the convolution operation W_norm then dominates.
Embodiment:
(1) Dataset preparation. The training set is the training split of the public DIV2K dataset. The test set comprises Set5, Set14, B100, Urban100, and the DIV2K test split. For each HR image, the corresponding LR images (×2, ×3, ×4) are used as input.
(2) The network takes images of size 48 × 48 × 3 as input; these are cropped from the dataset pictures. Since each training iteration feeds more than one picture, let n denote the batch size; the input is then the tensor [n, 48, 48, 3], and the corresponding output tensors are: ×2: [n, 96, 96, 3], ×3: [n, 144, 144, 3], ×4: [n, 192, 192, 3].
(3) The input images are 24-bit RGB images; each pixel carries three-dimensional R, G, B information, each dimension represented by a value from 0 to 255. After input, the images are preprocessed so that each dimension lies in −127 to 128; concretely, the mean value 127 is subtracted from each input batch tensor.
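The centring in step (3) amounts to one subtraction (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def preprocess(batch):
    """Shift 8-bit RGB values from [0, 255] to [-127, 128] by subtracting
    the fixed offset 127 from every input batch tensor."""
    return batch.astype(np.float32) - 127.0

batch = np.zeros((2, 48, 48, 3), dtype=np.uint8)
batch[0, 0, 0, 0] = 255
out = preprocess(batch)
assert out.min() == -127.0 and out.max() == 128.0
```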
(4) As shown in Fig. 1, the input tensor first passes through a convolutional layer for feature extraction. This layer has M × N convolution kernels, each of size 3 × 3, with ReLU as the activation function.
(5) The Dirac residual network part, shown in the upper half of Fig. 1, contains several Dirac residual blocks. Each residual block is expressed by two Dirac convolutions and one ReLU activation function. The first Dirac convolutional layer has M × N convolution kernels of size 3 × 3 and is followed by the activation function; the second Dirac convolutional layer has M convolution kernels of size 3 × 3, without an activation function. Here M = 64 and N = 4 may be taken.
(6) Each Dirac convolution can be expressed as y = Ŵ ⊙ x, where y is the output of the Dirac convolution, x is the input feature, and ⊙ is the convolution operation. Ŵ is the convolution-kernel parameter matrix, Ŵ = diag(α)·I + diag(β)·W_norm, where W_norm is the convolution operation parameter, I is the skip-operation parameter, and diag(α) and diag(β) are weight parameters.
(7) The reconstruction layer reconstructs using sub-pixel convolution. The final HR image is output by combining the high-frequency and low-frequency reconstruction results.
(8) Training the ×n SR network: each high-resolution image block H⁽ⁱ⁾ (of size l × w) is down-sampled to obtain an image block L⁽ⁱ⁾ (of size (l/n) × (w/n)). L⁽ⁱ⁾ is fed to the neural network as input to obtain the image block Ĥ⁽ⁱ⁾. The network uses the L2 loss function Loss = (1/n) Σᵢ ‖H⁽ⁱ⁾ − Ĥ⁽ⁱ⁾‖², where H⁽ⁱ⁾ denotes the original high-resolution image block and Ĥ⁽ⁱ⁾ the high-resolution image block reconstructed from the corresponding low-resolution image.
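The training-pair construction and loss of step (8) can be sketched as follows. The patent does not name its down-sampling filter, so n × n average pooling stands in for it here, and the loss is computed as a mean squared error; both choices and the function names are assumptions of this example:

```python
import numpy as np

def downsample(hr, n):
    """Reduce an (l, w) HR block to (l/n, w/n) by n x n average pooling
    (a stand-in for whatever down-sampling filter is actually used)."""
    l, w = hr.shape
    return hr.reshape(l // n, n, w // n, n).mean(axis=(1, 3))

def l2_loss(hr_blocks, sr_blocks):
    """Mean squared error between original HR blocks H(i) and their
    network reconstructions H_hat(i)."""
    return np.mean([(h - s) ** 2 for h, s in zip(hr_blocks, sr_blocks)])

hr = np.arange(16, dtype=np.float64).reshape(4, 4)
lr = downsample(hr, 2)                  # (2, 2) training input
assert lr[0, 0] == (0 + 1 + 4 + 5) / 4  # top-left 2x2 average
assert l2_loss([hr], [hr]) == 0.0       # perfect reconstruction
```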
(9) The network is trained with Adam as the back-propagation optimizer. The initial learning rate is set to 0.0001; the learning rate is halved whenever the test loss remains stable for 10 consecutive evaluations, and training stops when the learning rate falls below 0.000001.
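The schedule in step (9) can be replayed offline as a pure function of the observed validation losses. The stability tolerance, names, and plateau definition are assumptions of this sketch:

```python
def lr_schedule(val_losses, lr0=1e-4, patience=10, floor=1e-6, tol=1e-8):
    """Halve the learning rate whenever the loss has stayed stable for
    `patience` consecutive checks; stop once the rate falls below `floor`."""
    lr, stable, history = lr0, 0, []
    for prev, cur in zip(val_losses, val_losses[1:]):
        stable = stable + 1 if abs(cur - prev) < tol else 0
        if stable >= patience:
            lr, stable = lr / 2.0, 0
        if lr < floor:
            break  # training stops here
        history.append(lr)
    return history

# Twelve identical losses trip the 10-check plateau exactly once.
lrs = lr_schedule([1.0] * 12)
assert lrs[0] == 1e-4 and lrs[-1] == 5e-5
```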
(10) After training is complete, all parameters in the network model except diag(α) and diag(β) are set to non-trainable; the trained network model parameters are loaded, and the diag(α) and diag(β) of each Dirac convolution are trained again to raise the feature-utilization rate of each Dirac convolution layer and obtain a better reconstruction result. The initial learning rate is set to 0.000001; the learning rate is halved whenever the test loss remains stable for 10 consecutive evaluations, and training stops when the learning rate falls below 0.0000001.
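Step (10)'s second stage freezes everything except the per-layer diag(α)/diag(β) vectors. With parameters addressed by name, the trainability decision might look like the following; the dotted parameter-naming scheme is hypothetical:

```python
def trainable_mask(param_names):
    """Second training stage: keep only the Dirac weight vectors alpha
    and beta trainable; freeze all other loaded model parameters."""
    return {name: name.endswith(('alpha', 'beta')) for name in param_names}

mask = trainable_mask(['block1.w_norm', 'block1.alpha', 'block1.beta'])
assert mask['block1.alpha'] and mask['block1.beta']
assert not mask['block1.w_norm']
```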
The above embodiments merely illustrate the technical idea of the present invention and do not limit its scope of protection; any change made on the basis of the technical scheme according to the technical idea provided by the invention falls within the scope of protection of the present invention.

Claims (5)

1. A super-resolution reconstruction method based on a Dirac residual deep neural network, characterized by comprising the following steps:
Step 1: construct a deep neural network based on Dirac residuals, the network comprising an upper part and a lower part. The upper part comprises a feature-extraction network, multiple Dirac-parameterized residual blocks, and a reconstruction network; the feature-extraction network comprises, in order, a convolutional layer and an activation-function layer; each Dirac-parameterized residual block comprises, in order, a first Dirac convolutional layer, an activation-function layer, and a second Dirac convolutional layer; the reconstruction network comprises, in order, a convolutional layer and a sub-pixel reconstruction layer. The input of the feature-extraction network is the original low-resolution image; the output of the feature-extraction network is the input of the first Dirac-parameterized residual block; the output of each Dirac-parameterized residual block is the input of the next; and the output of the last Dirac-parameterized residual block is the input of the reconstruction network. The lower part comprises a global skip-connection network consisting, in order, of a convolutional layer and a sub-pixel reconstruction layer, with the original low-resolution image as the input of the lower part's convolutional layer.
The output of the first or second Dirac convolutional layer is:

y = Ŵ ⊙ x, where Ŵ = diag(α)·I + diag(β)·W_norm

where y denotes the output of the first or second Dirac convolutional layer, x denotes its input, ⊙ is the convolution operation, Ŵ denotes the convolution-kernel parameter matrix, W_norm is the convolution operation parameter, I is the skip-operation parameter, and diag(α) and diag(β) are weight parameters.
Step 2: down-sample the original high-resolution images to obtain low-resolution training images, and use the low-resolution training images to train the constructed deep neural network based on Dirac residuals, obtaining a trained network.
Step 3: reconstruct the input original low-resolution image to be processed using the trained network. The upper part of the network obtains the high-frequency features of the original low-resolution image, the lower part obtains its low-frequency features, and the two are combined to output the reconstructed super-resolution image.
2. The super-resolution reconstruction method based on a Dirac residual deep neural network according to claim 1, characterized in that the convolutional layer in the feature-extraction network of step 1 has M × N convolution kernels, each of size 3 × 3, and the activation function in the feature-extraction network is ReLU.
3. The super-resolution reconstruction method based on a Dirac residual deep neural network according to claim 1, characterized in that, in each Dirac-parameterized residual block of step 1, the first Dirac convolutional layer has M × N convolution kernels, each of size 3 × 3, and the second Dirac convolutional layer has M convolution kernels, each of size 3 × 3; the activation function in the Dirac-parameterized residual block is ReLU.
4. The super-resolution reconstruction method based on a Dirac residual deep neural network according to claim 1, characterized in that the deep neural network based on Dirac residuals constructed in step 2 is trained using Adam as the back-propagation optimizer.
5. The super-resolution reconstruction method based on a Dirac residual deep neural network according to claim 1, characterized in that, when the deep neural network based on Dirac residuals constructed in step 2 is trained, an L2 loss function is used:

Loss = (1/n) Σᵢ ‖H⁽ⁱ⁾ − Ĥ⁽ⁱ⁾‖²

where Loss denotes the loss function, H⁽ⁱ⁾ denotes an original high-resolution image block, and Ĥ⁽ⁱ⁾ denotes the high-resolution image block reconstructed from the corresponding low-resolution image.
CN201910354259.9A 2019-04-29 2019-04-29 Super-resolution reconstruction method based on a Dirac residual deep neural network Pending CN110211038A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910354259.9A CN110211038A (en) 2019-04-29 2019-04-29 Super-resolution reconstruction method based on a Dirac residual deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910354259.9A CN110211038A (en) 2019-04-29 2019-04-29 Super-resolution reconstruction method based on a Dirac residual deep neural network

Publications (1)

Publication Number Publication Date
CN110211038A true CN110211038A (en) 2019-09-06

Family

ID=67786680

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910354259.9A Pending CN110211038A (en) 2019-04-29 2019-04-29 Super-resolution reconstruction method based on a Dirac residual deep neural network

Country Status (1)

Country Link
CN (1) CN110211038A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675320A (en) * 2019-09-24 2020-01-10 南京工程学院 Method for sharpening target image under spatial parameter change and complex scene
CN111260558A (en) * 2020-01-22 2020-06-09 武汉大学 Image super-resolution network model with variable magnification
CN111402142A (en) * 2020-03-25 2020-07-10 中国计量大学 Single image super-resolution reconstruction method based on depth recursive convolutional network
CN111402140A (en) * 2020-03-25 2020-07-10 中国计量大学 Single image super-resolution reconstruction system and method
CN111402140B (en) * 2020-03-25 2023-08-22 中国计量大学 Single image super-resolution reconstruction system and method
CN111598842A (en) * 2020-04-24 2020-08-28 云南电网有限责任公司电力科学研究院 Method and system for generating model of insulator defect sample and storage medium
CN111986092A (en) * 2020-09-07 2020-11-24 山东交通学院 Image super-resolution reconstruction method and system based on dual networks
CN111986092B (en) * 2020-09-07 2023-05-05 山东交通学院 Dual-network-based image super-resolution reconstruction method and system
CN113096017A (en) * 2021-04-14 2021-07-09 南京林业大学 Image super-resolution reconstruction method based on depth coordinate attention network model
CN113362384A (en) * 2021-06-18 2021-09-07 安徽理工大学环境友好材料与职业健康研究院(芜湖) High-precision industrial part measurement algorithm of multi-channel sub-pixel convolution neural network
CN113409195A (en) * 2021-07-06 2021-09-17 中国标准化研究院 Image super-resolution reconstruction method based on improved deep convolutional neural network
CN113793263A (en) * 2021-08-23 2021-12-14 电子科技大学 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution
CN113793263B (en) * 2021-08-23 2023-04-07 电子科技大学 Parallel residual error network high-resolution image reconstruction method for multi-scale cavity convolution

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104285390A (en) * 2012-05-14 2015-01-14 汤姆逊许可公司 Method and apparatus for compressing and decompressing a higher order ambisonics signal representation
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JIAHUI YU et al.: "Wide Activation for Efficient and Accurate Image Super-Resolution", https://arxiv.org/abs/1808.08718v2, pages 1-10 *
SERGEY ZAGORUYKO et al.: "DiracNets: Training Very Deep Neural Networks Without Skip-Connections", https://arxiv.org/pdf/1706.00388v2.pdf, 30 June 2017, pages 1-8 *


Similar Documents

Publication Publication Date Title
CN110211038A (en) Super-resolution reconstruction method based on a Dirac residual deep neural network
US11537873B2 (en) Processing method and system for convolutional neural network, and storage medium
CN106683067B (en) Deep learning super-resolution reconstruction method based on residual sub-images
CN109671023B (en) Face image super-resolution secondary reconstruction method
CN111028150B (en) Rapid space-time residual attention video super-resolution reconstruction method
CN109903228A (en) A kind of image super-resolution rebuilding method based on convolutional neural networks
CN108921786A (en) Image super-resolution reconstructing method based on residual error convolutional neural networks
CN107464217B (en) Image processing method and device
CN103871041B (en) The image super-resolution reconstructing method built based on cognitive regularization parameter
CN107123094B (en) Video denoising method mixing Poisson, Gaussian and impulse noise
CN108961186A (en) A kind of old film reparation recasting method based on deep learning
CN110136067B (en) Real-time image generation method for super-resolution B-mode ultrasound image
CN111696033B (en) Real image super-resolution model and method based on angular point guided cascade hourglass network structure learning
CN109919840A (en) Image super-resolution rebuilding method based on dense feature converged network
CN109949224A (en) A kind of method and device of the connection grade super-resolution rebuilding based on deep learning
CN110084745A (en) Image super-resolution rebuilding method based on dense convolutional neural networks in parallel
CN111696035A (en) Multi-frame image super-resolution reconstruction method based on optical flow motion estimation algorithm
CN111476745A (en) Multi-branch network and method for motion blur super-resolution
Liu et al. MMDM: Multi-frame and multi-scale for image demoiréing
CN107845064A (en) Image Super-resolution Reconstruction method based on active sampling and gauss hybrid models
Wang et al. Image super-resolution using a improved generative adversarial network
CN115953294A (en) Single-image super-resolution reconstruction method based on shallow channel separation and aggregation
CN113610707B (en) Video super-resolution method based on time attention and cyclic feedback network
CN112785502B (en) Light field image super-resolution method of hybrid camera based on texture migration
CN110177229B (en) Video conversion method based on multi-task counterstudy, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination