CN110047038A - Single-image super-resolution reconstruction method based on a hierarchical progressive network - Google Patents
- Publication number: CN110047038A
- Application number: CN201910146330.4A
- Authority
- CN
- China
- Prior art keywords
- resolution
- super
- level
- image
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4007—Interpolation-based scaling, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4053—Super resolution, i.e. output image resolution higher than sensor resolution
Abstract
The invention provides a single-image super-resolution reconstruction method based on a hierarchical progressive network. Its main components are a feature extraction branch, an image reconstruction branch, the hierarchical progressive network structure, and a loss function. The method decomposes a high-magnification super-resolution task into multiple subtasks, each of which is completed independently by one super-resolution unit network; the unit networks are cascaded to form the whole network. The invention allows a single trained model to perform super-resolution reconstruction of an image at several magnification factors.
Description
Technical field
The present invention relates to the technical field of image super-resolution, and more particularly to a single-image super-resolution reconstruction method based on a hierarchical progressive network.
Background technique
Image super-resolution refers to the technique of recovering, on the basis of the original hardware, a high-resolution image with richer detail from a single low-resolution image or from a sequence of low-resolution images with sub-pixel offsets. The restored image reveals latent detail and hidden structure and has improved visual quality. Image super-resolution plays an important role in fields such as medical imaging, security surveillance, audio-visual entertainment, and satellite remote sensing.
At present, most single-image super-resolution reconstruction techniques are learning-based, including methods based on over-complete sparse dictionary learning and methods based on deep convolutional neural networks. These methods achieve good reconstruction quality for specific scenes and low magnification factors, but they still fail to produce satisfactory results for widely distributed scenes, especially under high-magnification super-resolution tasks. On the other hand, for multi-scale super-resolution, most existing methods learn a separate mapping from low-resolution to high-resolution images for each scale, which considerably increases the training cost.
Summary of the invention
The purpose of the present invention is to provide a single-image super-resolution reconstruction method that overcomes the poor reconstruction quality of existing methods under high-magnification super-resolution tasks and their difficulty in completing multi-scale super-resolution in a single pass.
The technical solution of the invention is a single-image super-resolution reconstruction method based on a hierarchical progressive network, characterized by comprising the following steps:
(1) A feature extraction branch, comprising a feature extraction convolutional layer, a non-linear mapping module, and an up-sampling layer, wherein the convolution kernel size is 3 × 3, the number of output feature maps is 160, and the convolution stride is 1; the up-sampling layer is realized by a transposed convolutional layer with kernel size 3 × 3 and the up-sampling factor set to 2;
(2) An image reconstruction branch, comprising a local residual structure, N recursive block structures, and a residual prediction convolutional layer; the local residual structure maps the input to the output according to formula (1):

y = F(x, {W_i}) + x

where x is the input, y is the desired output, and F(x, {W_i}) denotes the latent mapping to be learned;
The recursive block structure comprises a dense connection block and a transition layer; in the dense connection block, each layer receives the outputs of all preceding layers as input and passes its own output to all subsequent layers; the transition layer has kernel size 1 × 1, 160 output feature maps, and stride 1;

the residual prediction convolutional layer has kernel size 3 × 3, 160 output feature maps, and stride 1;
(3) The hierarchical progressive network structure;
(4) A loss function, expressed by the following formula:

loss(θ) = (1/N) Σ ρ((I_H − U(I_L)) − I_r)

where I_r is the residual image obtained through the mapping F, I_SR is the super-resolution reconstructed image, θ denotes the network parameters, the sum runs over the N images of a training batch, U(I_L) is the interpolated up-sampling of the low-resolution image, and I_H − U(I_L) is the ground-truth residual; ρ is the Charbonnier penalty function, defined as ρ(x) = √(x² + ε²), where ε takes the empirical value 10⁻³.
Further, in the dense connection block the network structure of a single layer comprises a convolutional layer I, a batch normalization layer, a non-linear activation function, and a convolutional layer II;

the convolutional layer I has kernel size 1 × 1, stride 1, and padding 1; the batch normalization layer is expressed by the following formula:

x̂ = (x − E(x)) / √(Var(x))

where E(·) and Var(·) denote the mean and variance operators, respectively;

the non-linear activation function is the ReLU activation function, expressed as:

f(z) = max(0, z);

the convolutional layer II has kernel size 1 × 1, stride 1, and padding 1.
Further, the up-sampling layer performs interpolated up-sampling of the input low-resolution image using the bicubic interpolation method.
Further, in the hierarchical progressive network structure, the super-resolution tasks of different magnification factors share the same feature extraction branch structure.
Further, in the hierarchical progressive network structure, under the 4× and 8× super-resolution tasks, the input of the feature extraction branch is the up-sampled feature map output by the previous level.
Beneficial effects: the invention proposes a hierarchical progressive network structure for single-image super-resolution reconstruction. The structure decomposes a high-magnification super-resolution task into multiple subtasks, each of which is completed independently by one super-resolution unit network; the unit networks are cascaded to form the whole network. The invention employs skip-connection structures such as local residuals and dense connections to improve the efficiency of information flow and avoid vanishing gradients.
Brief description of the drawings
Fig. 1 is the hierarchical progressive network structure for single-image super-resolution according to the present invention;

Fig. 2 is a detailed structural diagram of the 2× image super-resolution task in the network structure of the invention;

Fig. 3 is a structural diagram of the recursive block in the network structure of the invention;

Fig. 4 is a diagram of the single-layer structure in the dense connection structure of the invention.
Specific embodiment
The present invention is further illustrated below with reference to the drawings and specific embodiments.
Fig. 1 shows the hierarchical progressive network structure used by the present invention for single-image super-resolution. The proposed hierarchical progressive neural network can perform super-resolution reconstruction of a single picture at magnification s, where s = 2, 4, or 8. The network consists of a group of cascaded up-sampling units, and the task of each unit is to perform 2× super-resolution on the image. Each up-sampling unit mainly comprises a feature extraction branch and an image reconstruction branch.
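As an illustration, the cascading of 2× units can be sketched as follows. This is a minimal NumPy sketch, not the patent's implementation: each learned unit is replaced by a nearest-neighbour up-sampling stub, and the names `upsample_unit` and `hierarchical_sr` are assumptions.

```python
import numpy as np

def upsample_unit(img):
    # Stand-in for one 2x super-resolution unit. In the patent each unit is
    # a learned network (feature extraction branch + reconstruction branch);
    # here nearest-neighbour duplication only models the 2x size change.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def hierarchical_sr(lr_img, scale):
    # Cascade log2(scale) identical 2x units, as in the hierarchical
    # progressive structure: the same chain serves s = 2, 4 and 8.
    assert scale in (2, 4, 8)
    out = lr_img
    for _ in range(int(round(np.log2(scale)))):
        out = upsample_unit(out)
    return out
```

Because the 4× and 8× outputs pass through the same leading units as the 2× output, one forward pass of the full cascade yields all three scales.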
The feature extraction branch of each up-sampling unit is shown in Fig. 2. It comprises a feature extraction convolutional layer, a non-linear mapping module, and an up-sampling layer.

Feature extraction layer: for the 2× super-resolution task, the feature extraction layer is a convolutional layer with kernel size 3 × 3, 160 output feature maps, and stride 1; for the 4× and 8× super-resolution tasks, the feature extraction layer takes the output of the previous super-resolution unit.
The non-linear mapping module comprises a local residual structure, N recursive block structures, and a residual prediction convolutional layer.
Local residual structure: let the input be x and the desired output be y; the mapping from input to output can be expressed by formula (1):

y = F(x, {W_i}) + x   (1)

where F(x, {W_i}) denotes the complex latent mapping to be learned.
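The local residual computation of formula (1) can be sketched directly; this is illustrative only, with `F` standing in for the learned mapping F(x, {W_i}):

```python
import numpy as np

def local_residual(x, F):
    # Formula (1): y = F(x) + x. Only the residual mapping F is learned;
    # the identity path carries the input through unchanged, which eases
    # gradient flow in deep networks.
    return F(x) + x
```

For example, with a toy mapping F(v) = 0.1·v, the block outputs 1.1·x, the input plus the small learned correction.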
The recursive block structure, shown in Fig. 3, comprises a dense connection block and a transition layer, specifically:

Dense connection block: each layer receives the outputs of all preceding layers as input and passes its own output to all subsequent layers; that is, the input of each layer is the superposed information flow of all preceding layers' outputs. The network structure of a single layer is shown in Fig. 4:
(1) Convolutional layer I: kernel size 1 × 1, stride 1, padding 1;

(2) Batch normalization (BN) layer, expressed by formula (2):

x̂ = (x − E(x)) / √(Var(x))   (2)

where E(·) and Var(·) denote the mean and variance operators, respectively;

(3) Non-linear activation function: the ReLU activation function, given by formula (3):

f(z) = max(0, z)   (3)

(4) Convolutional layer II: kernel size 1 × 1, stride 1, padding 1.
Transition layer: kernel size 1 × 1, 160 output feature maps, stride 1.
Residual prediction convolutional layer: its role is to integrate the feature maps extracted by the dense connection structure into a residual map, so that it can subsequently be added to the input data. It is a convolutional layer with kernel size 3 × 3, 160 output feature maps, and stride 1.
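The channel bookkeeping of the dense connection block can be illustrated with a NumPy sketch. This is an assumption-laden toy, not the patent's network: the conv-I/BN/ReLU/conv-II stack of each single layer is replaced by a fixed random projection plus ReLU, and the names `dense_block` and `growth` are illustrative.

```python
import numpy as np

def dense_block(x, num_layers=4, growth=32):
    # Each layer sees the concatenation of the block input and all earlier
    # layer outputs, and contributes `growth` new feature channels.
    rng = np.random.default_rng(0)
    feats = [x]
    for _ in range(num_layers):
        inp = np.concatenate(feats, axis=0)               # all preceding outputs
        w = rng.standard_normal((growth, inp.shape[0]))   # 1x1-conv-like mixing
        out = np.maximum(w @ inp.reshape(inp.shape[0], -1), 0.0)  # "ReLU"
        feats.append(out.reshape(growth, *x.shape[1:]))
    # In the patent, a 1x1 transition layer would now compress the
    # accumulated channels back to 160.
    return np.concatenate(feats, axis=0)
```

Starting from 160 channels, four layers of growth 32 leave 160 + 4 × 32 = 288 channels before the transition layer, which shows why the 1 × 1 transition convolution is needed.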
The up-sampling layer uses a transposed convolutional layer: kernel size 3 × 3, with the up-sampling factor set to 2.
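Assuming standard transposed-convolution semantics, one can check with the usual output-size formula that a 3 × 3 kernel with stride 2 yields an exact 2× output. The padding and output-padding values below are assumptions; the patent does not state them.

```python
def transposed_conv_out(n, kernel=3, stride=2, padding=1, output_padding=1):
    # Standard transposed-convolution output length:
    # out = (n - 1) * stride - 2 * padding + kernel + output_padding.
    # With kernel=3, stride=2, padding=1, output_padding=1 this is exactly 2n,
    # matching the 2x up-sampling factor required by each unit.
    return (n - 1) * stride - 2 * padding + kernel + output_padding
```

For instance, a 24-pixel side becomes 48 pixels, and an odd side such as 7 becomes 14.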
The image reconstruction branch performs interpolated up-sampling of the input low-resolution image using the bicubic interpolation method, with the up-sampling ratio chosen to match the reconstruction task. Every super-resolution task takes its input from the original low-resolution image: 2×, 4×, and 8× bicubic interpolation respectively yield interpolated images of the same size as the predicted residual images. At the end of the network, the images from the two branches are added pixel by pixel to obtain the final super-resolution reconstructed image.
The purpose of image super-resolution is to find a mapping function F such that the super-resolution image F(I_L) obtained from the low-resolution image I_L is as consistent as possible with the original high-resolution image I_H. Here the residual image obtained through the mapping F is denoted I_r, the super-resolution reconstructed image is I_SR, and the network parameters are θ. The loss function used by the present invention can be expressed by formula (4):

loss(θ) = (1/N) Σ ρ((I_H − U(I_L)) − I_r)   (4)

where the sum runs over the N images of a training batch, U(I_L) is the interpolated up-sampling of the low-resolution image, and I_H − U(I_L) is the ground-truth residual. ρ is the Charbonnier penalty function, defined as ρ(x) = √(x² + ε²), where ε takes the empirical value 10⁻³.
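Formula (4) translates directly into NumPy; the function names below are illustrative:

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    # Charbonnier penalty: a smooth, differentiable approximation of |x|,
    # with the empirical eps = 1e-3 from the patent.
    return np.sqrt(x ** 2 + eps ** 2)

def residual_loss(hr, lr_up, residual):
    # Formula (4): mean Charbonnier penalty between the ground-truth
    # residual (I_H - U(I_L)) and the predicted residual I_r.
    return np.mean(charbonnier((hr - lr_up) - residual))
```

A perfect residual prediction gives ρ(0) = ε = 10⁻³, the floor of this loss, while any prediction error increases it smoothly.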
Compared with traditional deep neural network algorithms for image super-resolution, the hierarchical progressive structure proposed by the present invention can generate multi-scale predicted images in a single forward pass. At the same time, local residuals and dense connections are used to integrate the information flow, avoiding the vanishing-gradient problem and allowing a deeper network. In this embodiment, the image super-resolution method of the invention is evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM); higher PSNR and SSIM values are generally taken to indicate better image quality. Tested on the Set5, Set14, and BSD100 data sets, the image super-resolution reconstruction results of the present invention are higher by 0.25 dB to 2.35 dB in PSNR and by 0.003 to 0.053 in SSIM on each test set.
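The PSNR metric used in this evaluation follows its standard definition (not specific to the patent); the sketch below assumes images with values in [0, peak]:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    # Higher values mean the test image is closer to the reference.
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For a uniform error of 0.1 on a [0, 1] image, the MSE is 0.01 and the PSNR is 20 dB, which gives a sense of scale for the 0.25 dB to 2.35 dB gains reported above.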
The invention proposes a hierarchical progressive network structure for single-image super-resolution reconstruction. The structure decomposes a high-magnification super-resolution task into multiple subtasks, each of which is completed independently by one super-resolution unit network; the super-resolution unit networks are cascaded to form the whole network. In the training stage the network can be trained for the 2×, 4×, and 8× up-sampling factors simultaneously, and in the test stage the same trained model can perform super-resolution reconstruction at all three magnification factors. At the same time, the invention employs skip-connection structures such as local residuals and dense connections to improve the efficiency of information flow and avoid vanishing gradients.
Claims (5)
1. A single-image super-resolution reconstruction method based on a hierarchical progressive network, characterized by comprising the following steps:
(1) a feature extraction branch, comprising a feature extraction convolutional layer, a non-linear mapping module, and an up-sampling layer, wherein the convolution kernel size is 3 × 3, the number of output feature maps is 160, and the convolution stride is 1, the up-sampling layer being realized by a transposed convolutional layer with kernel size 3 × 3 and the up-sampling factor set to 2;
(2) an image reconstruction branch, comprising a local residual structure, N recursive block structures, and a residual prediction convolutional layer, the local residual structure mapping the input to the output according to formula (1):

y = F(x, {W_i}) + x

wherein x is the input, y is the desired output, and F(x, {W_i}) denotes the latent mapping to be learned;
the recursive block structure comprises a dense connection block and a transition layer, wherein in the dense connection block each layer receives the outputs of all preceding layers as input and passes its own output to all subsequent layers, the transition layer having kernel size 1 × 1, 160 output feature maps, and stride 1;

the residual prediction convolutional layer has kernel size 3 × 3, 160 output feature maps, and stride 1;
(3) the hierarchical progressive network structure;
(4) a loss function, expressed by the following formula:

loss(θ) = (1/N) Σ ρ((I_H − U(I_L)) − I_r)

wherein I_r is the residual image obtained through the mapping F, I_SR is the super-resolution reconstructed image, θ denotes the network parameters, the sum runs over the N images of a training batch, U(I_L) is the interpolated up-sampling of the low-resolution image, and I_H − U(I_L) is the ground-truth residual; ρ is the Charbonnier penalty function, defined as ρ(x) = √(x² + ε²), with ε taking the empirical value 10⁻³.
2. The single-image super-resolution reconstruction method based on a hierarchical progressive network according to claim 1, characterized in that the network structure of a single layer in the dense connection block comprises a convolutional layer I, a batch normalization layer, a non-linear activation function, and a convolutional layer II,

the convolutional layer I having kernel size 1 × 1, stride 1, and padding 1; the batch normalization layer being expressed by the following formula:

x̂ = (x − E(x)) / √(Var(x))

wherein E(·) and Var(·) denote the mean and variance operators, respectively;

the non-linear activation function being the ReLU activation function:

f(z) = max(0, z);

the convolutional layer II having kernel size 1 × 1, stride 1, and padding 1.
3. The single-image super-resolution reconstruction method based on a hierarchical progressive network according to claim 1, characterized in that the up-sampling layer performs interpolated up-sampling of the input low-resolution image using the bicubic interpolation method.
4. The single-image super-resolution reconstruction method based on a hierarchical progressive network according to claim 1, characterized in that in the hierarchical progressive network structure the super-resolution tasks of different magnification factors share the same feature extraction branch structure.
5. The single-image super-resolution reconstruction method based on a hierarchical progressive network according to claim 4, characterized in that in the hierarchical progressive network structure, under the 4× and 8× super-resolution tasks, the input of the feature extraction branch is the up-sampled feature map output by the previous level.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910146330.4A CN110047038B (en) | 2019-02-27 | 2019-02-27 | Single-image super-resolution reconstruction method based on hierarchical progressive network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110047038A true CN110047038A (en) | 2019-07-23 |
CN110047038B CN110047038B (en) | 2022-11-04 |
Family
ID=67274284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910146330.4A Active CN110047038B (en) | 2019-02-27 | 2019-02-27 | Single-image super-resolution reconstruction method based on hierarchical progressive network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110047038B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107657586A (en) * | 2017-10-13 | 2018-02-02 | 深圳市唯特视科技有限公司 | A kind of single photo super-resolution Enhancement Method based on depth residual error network |
CN109118432A (en) * | 2018-09-26 | 2019-01-01 | 福建帝视信息科技有限公司 | A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111260552A (en) * | 2020-01-09 | 2020-06-09 | 复旦大学 | Image super-resolution method based on progressive learning |
CN111260552B (en) * | 2020-01-09 | 2023-05-30 | 复旦大学 | Progressive learning-based image super-resolution method |
CN113538307A (en) * | 2021-06-21 | 2021-10-22 | 陕西师范大学 | Synthetic aperture imaging method based on multi-view super-resolution depth network |
CN113538307B (en) * | 2021-06-21 | 2023-06-20 | 陕西师范大学 | Synthetic aperture imaging method based on multi-view super-resolution depth network |
CN113610706A (en) * | 2021-07-19 | 2021-11-05 | 河南大学 | Fuzzy monitoring image super-resolution reconstruction method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN110047038B (en) | 2022-11-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110276721A (en) | Image super-resolution rebuilding method based on cascade residual error convolutional neural networks | |
CN111861961B (en) | Single image super-resolution multi-scale residual error fusion model and restoration method thereof | |
CN109886871A (en) | The image super-resolution method merged based on channel attention mechanism and multilayer feature | |
CN109862370A (en) | Video super-resolution processing method and processing device | |
CN109671023A (en) | A kind of secondary method for reconstructing of face image super-resolution | |
CN102142137B (en) | High-resolution dictionary based sparse representation image super-resolution reconstruction method | |
CN108765296A (en) | A kind of image super-resolution rebuilding method based on recurrence residual error attention network | |
CN110310227A (en) | A kind of image super-resolution rebuilding method decomposed based on high and low frequency information | |
CN110119780A (en) | Based on the hyperspectral image super-resolution reconstruction method for generating confrontation network | |
CN106683067A (en) | Deep learning super-resolution reconstruction method based on residual sub-images | |
CN108647775A (en) | Super-resolution image reconstruction method based on full convolutional neural networks single image | |
CN107369189A (en) | The medical image super resolution ratio reconstruction method of feature based loss | |
CN110047038A (en) | A kind of single image super-resolution reconstruction method based on the progressive network of level | |
CN110060204B (en) | Single image super-resolution method based on reversible network | |
CN109118432A (en) | A kind of image super-resolution rebuilding method based on Rapid Circulation convolutional network | |
CN107464216A (en) | A kind of medical image ultra-resolution ratio reconstructing method based on multilayer convolutional neural networks | |
CN106157244A (en) | A kind of QR Code Image Super-resolution Reconstruction method based on rarefaction representation | |
CN110322402A (en) | Medical image super resolution ratio reconstruction method based on dense mixing attention network | |
CN112837224A (en) | Super-resolution image reconstruction method based on convolutional neural network | |
CN111507462A (en) | End-to-end three-dimensional medical image super-resolution reconstruction method and system | |
CN110490804A (en) | A method of based on the generation super resolution image for generating confrontation network | |
CN110533591A (en) | Super resolution image reconstruction method based on codec structure | |
CN110349085A (en) | A kind of single image super-resolution feature Enhancement Method based on generation confrontation network | |
CN113506222A (en) | Multi-mode image super-resolution method based on convolutional neural network | |
CN110136067A (en) | A kind of real-time imaging generation method for super-resolution B ultrasound image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||