CN109064396A - Single-image super-resolution reconstruction method based on a deep component learning network - Google Patents
- Publication number: CN109064396A
- Application number: CN201810666177.3A
- Authority
- CN
- China
- Prior art keywords: resolution, component, image, network, low
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
- G06T3/4023: Decimation- or insertion-based scaling, e.g. pixel or line decimation
- G06T3/4046: Scaling the whole image or part thereof using neural networks
- Y02T10/40: Engine management systems
Abstract
The invention discloses a single-image super-resolution reconstruction method based on a deep component learning network, comprising: augmenting the training sample images and performing patch extraction and degradation to obtain corresponding high- and low-resolution training sets; constructing a deep network with a component learning structure, which first performs a global component decomposition of the input low-resolution image and then uses the extracted residual component to predict its counterpart image in the high-resolution space; iteratively training the constructed deep component network on the training set with mini-batch stochastic gradient descent and the back-propagation algorithm to obtain a weight-optimized model; reconstructing low-resolution images with the trained component network; and restoring the reconstruction result to the original color space to obtain the final super-resolution output. The method of the invention not only improves the quality of the reconstructed super-resolution image but also increases the running speed of the model.
Description
Technical field
The invention belongs to the field of image processing and relates to a single-image super-resolution reconstruction method based on a deep component learning network.
Background art
The spatial resolution of a digital image is an important measure of image quality. The higher the resolution, the clearer the image and the stronger its ability to present detail. In practice, however, factors such as the low physical resolution of the imaging system, a large distance to the target, or a harsh shooting environment often yield images of poor quality, from which the required fine details are difficult to obtain. This brings more difficulty to subsequent image processing and analysis and hinders an accurate understanding of the objective information contained in the image.
The most straightforward solution to this problem is to raise the physical resolution of the imaging device, that is, to replace the camera with a higher-resolution one. Unfortunately, high-resolution cameras are technically complex and expensive, so it is difficult to upgrade existing equipment on a large scale in the short term. Moreover, the capability of imaging devices is constrained by current sensor manufacturing processes and by the objective laws of optical diffraction. Meanwhile, in harsh imaging environments, even a high-quality camera may fail to deliver satisfactory images. It is therefore imperative to seek a more cost-effective image-resolution enhancement scheme from the perspective of image processing. Against exactly this background, single-image super-resolution came into being. Without changing the intrinsic physical resolution of the imaging system, the technique processes a single captured low-resolution image to obtain a high-resolution one. Its advantage is that it can not only break through the limits of existing imaging systems but also take into account factors of the image degradation process such as downsampling, blur, and noise; thus, while improving spatial resolution, it can also substantially improve the quality of the reconstructed image, which is of great significance for subsequent image analysis, pattern recognition, and so on. For these reasons, super-resolution reconstruction is very widely applied and is currently one of the most popular research topics in image processing. In current super-resolution reconstruction techniques, however, image quality and runtime performance are generally hard to reconcile; improving overall system performance while guaranteeing high image quality has therefore become an urgent problem.
Summary of the invention
To solve the above problems, the invention discloses a single-image super-resolution reconstruction method based on a deep component learning network that effectively improves the quality of the reconstructed image and the overall performance.
In order to achieve the above object, the invention provides the following technical scheme:
A single-image super-resolution reconstruction method based on a deep component learning network comprises the following steps:
Step 1: Build the training set
First, transformations are applied to the existing sample images in the training set to increase its capacity and diversity. Patch extraction and degradation are then performed on these sample images to obtain high-resolution images X_i and corresponding low-resolution images Y_i, which form the training set {(X_i, Y_i)}_{i=1}^N, where N denotes the capacity of the training set;
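As an illustrative sketch of step 1, the following NumPy code augments the images, cuts high-resolution patches X_i, and degrades them into low-resolution patches Y_i. The patch size (48), scale factor (4), and the simple block-average degradation operator are assumptions for demonstration only; the disclosure does not fix these values at this point.

```python
import numpy as np

def build_training_set(images, patch=48, scale=4):
    """Augment images, cut HR patches X_i, and degrade them to LR patches Y_i."""
    pairs = []
    for img in images:
        # Augmentation: rotations and a horizontal flip enlarge the sample pool.
        variants = [img, np.rot90(img), np.rot90(img, 2), np.fliplr(img)]
        for v in variants:
            H, W = v.shape
            for r in range(0, H - patch + 1, patch):
                for c in range(0, W - patch + 1, patch):
                    X = v[r:r + patch, c:c + patch]            # HR patch
                    # Degradation: block averaging stands in for the
                    # unspecified blur + downsampling operator.
                    Y = X.reshape(patch // scale, scale,
                                  patch // scale, scale).mean(axis=(1, 3))
                    pairs.append((X, Y))
    return pairs
```

Each pair (X, Y) then serves as one supervised training example, with Y the network input and X the target.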
Step 2: Initialize the deep component network, which first performs a global component decomposition of the input low-resolution image and then uses the extracted residual component to predict its counterpart image in the high-resolution space, specifically comprising:
Step 2.1: The component decomposition and representation module performs a global decomposition and re-representation of the original low-resolution input Y of the network;
Defining the smooth component of the original input image as S_Y and the residual component as R_Y, the input is expressed as
Y = S_Y + R_Y
The component decomposition and representation module then extracts the smooth component from the original input with a convolutional sparse coding technique, a process expressed by the following formula:
Z = argmin_Z (1/2)||f*Z − Y||² + φ||h*Z||² + ψ||v*Z||²    (1)
where the symbol "*" denotes discrete convolution; Z is the sparse feature map; f, h, and v are the smoothing filter, the horizontal gradient operator, and the vertical gradient operator, respectively; and φ and ψ are the corresponding weights;
Formula (1) is solved efficiently in the Fourier domain:
Z = F⁻¹( (conj(F(f)) ∘ F(Y)) / (conj(F(f)) ∘ F(f) + φ·conj(F(h)) ∘ F(h) + ψ·conj(F(v)) ∘ F(v)) )    (2)
where F and F⁻¹ denote the fast Fourier transform and its inverse; F(Y), F(f), F(h), and F(v) are the Fourier transforms of Y, f, h, and v; conj(·) denotes the complex conjugate (shown in the original as a horizontal bar over the symbol); the symbol "∘" denotes element-wise matrix multiplication; likewise, the division in the above formula is performed element-wise;
After the sparse feature map Z is obtained, the smooth component of the low-resolution image is expressed as S_Y = f*Z, and the residual component as R_Y = Y − S_Y;
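With the filter definitions given further below (f a 3 × 3 averaging filter, h = [−1, 1], v its transpose, φ = ψ = 30), the closed-form Fourier-domain decomposition of step 2.1 can be sketched in NumPy as follows. The function name and the circular boundary handling implied by the FFT are illustrative choices, not part of the disclosure.

```python
import numpy as np

def decompose(Y, phi=30.0, psi=30.0):
    """Split a grayscale image Y into a smooth component S and a residual
    component R via the closed-form convolutional-sparse-coding solution
    in the Fourier domain, so that Y = S + R exactly."""
    H, W = Y.shape
    # Filters zero-padded to the image size; the FFT implies circular boundaries.
    f = np.zeros((H, W)); f[:3, :3] = 1.0 / 9.0      # 3x3 averaging filter
    h = np.zeros((H, W)); h[0, :2] = [-1.0, 1.0]     # horizontal gradient operator
    v = np.zeros((H, W)); v[:2, 0] = [-1.0, 1.0]     # vertical gradient operator

    Ff, Fh, Fv, FY = (np.fft.fft2(x) for x in (f, h, v, Y))
    # Element-wise solution of formula (1):
    #   conj(Ff).FY / (|Ff|^2 + phi |Fh|^2 + psi |Fv|^2)
    FZ = (np.conj(Ff) * FY) / (np.conj(Ff) * Ff
                               + phi * np.conj(Fh) * Fh
                               + psi * np.conj(Fv) * Fv)
    S = np.real(np.fft.ifft2(Ff * FZ))   # smooth component S = f * Z
    R = Y - S                            # residual component
    return S, R
```

By construction R = Y − S, so the two components sum back exactly to the input; the residual retains the edges and textures that the subsequent sub-network learns to map to high resolution.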
Step 2.2: The feature extraction and mapping module maps the two components obtained in step 2.1 into the corresponding high-resolution space by means of deep learning;
The smooth component S_Y is upsampled to the corresponding target high-resolution size with conventional bicubic interpolation;
The residual component R_Y is processed by a convolutional neural network with a deep structure, used as a sub-network, to obtain the representation of the low-resolution residual component in the high-resolution space;
Step 2.3: A composition module synthesizes the mapped high-resolution components into the output of the network;
Step 3: Train the deep component network
The network initialized in step 2 is trained on the training set {(X_i, Y_i)}_{i=1}^N constructed in step 1 to optimize all trainable parameters Θ of the network, with the mean squared error as the loss function:
L(Θ) = (1/N) Σ_{i=1}^N ||F(Y_i; Θ) − X_i||²
where L denotes the loss function of the network and F denotes the function that the whole network applies to the input Y.
The weights are then optimized with mini-batch stochastic gradient descent and error back-propagation, finally yielding the trained, optimized network;
Step 4: Reconstruct the low-resolution image with the trained network
First, a color low-resolution image I_l of size N1 × N2 × 3 is read in, where the positive integers N1 and N2 denote the numbers of rows and columns of the image pixel matrix and 3 is the number of color channels;
The input image is then converted from the RGB color space to the YCbCr color space, where R, G, and B denote the red, green, and blue components of the image before conversion, and Y, Cb, and Cr denote the luminance component and the blue- and red-difference chroma components after conversion; the size of the converted image is unchanged, remaining N1 × N2 × 3;
The two chroma channels of the converted low-resolution image are then each interpolated with the bicubic algorithm so that their size after interpolation matches that of the high-resolution image to be reconstructed, and the luminance component of the low-resolution image is fed into the network trained in step 3 to obtain the high-resolution prediction of the luminance component;
Step 5: Convert the super-resolution reconstruction result obtained in step 4 from the YCbCr color space back to the RGB color space, and take the converted color image as the final output.
Further, the parameters in formula (1) in step 2.1 take the following values:
f is a 3 × 3 2-D filter with all elements equal to 1/9; h is a 1-D convolution kernel equal to [−1, 1]; v is a 1-D convolution kernel equal to the transpose of h; φ and ψ are both set to 30.
Further, the convolutional neural network with a deep structure in step 2.2 comprises four stages:
(1) feature extraction: 1 convolutional layer extracts features directly from the low-resolution residual component;
(2) feature propagation: 17 convolutional layers propagate the extracted feature maps;
(3) upsampling: 1 deconvolutional layer upsamples the feature maps;
(4) reconstruction: 1 convolutional layer performs the reconstruction operation on the upsampled feature maps, yielding the representation of the low-resolution residual component in the high-resolution space.
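For illustration, the four stages above can be enumerated programmatically. The kernel and channel sizes (64 filters of 3 × 3, a 6 × 6 deconvolution) follow the layer configuration given in the detailed description; the helper functions themselves are a sketch, not part of the disclosure.

```python
def component_subnet_layers(in_ch=1, feat=64, depth=17, scale=4):
    """Enumerate the four stages of the residual sub-network:
    1 extraction conv, 17 propagation convs, 1 deconv upsampler (stride = scale),
    and 1 reconstruction conv. Entries are (name, in_channels, out_channels, kernel)."""
    layers = [("conv_extract", in_ch, feat, 3)]
    layers += [("conv_map%d" % (i + 1), feat, feat, 3) for i in range(depth)]
    layers += [("deconv_up", feat, feat, 6),
               ("conv_reconstruct", feat, in_ch, 3)]
    return layers

def param_count(layers):
    # Weights k*k*cin*cout plus one bias per output channel (same formula
    # applies to a transposed convolution).
    return sum(k * k * cin * cout + cout for _, cin, cout, k in layers)
```

Counting parameters this way makes the depth of the sub-network concrete: the 17 propagation layers dominate the roughly 0.78 million parameters of the sketch.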
Further, during forward propagation the convolutional neural network zero-pads its input before each convolution.
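A minimal NumPy sketch of such a "same"-size convolution illustrates the zero-padding rule (a naive sliding window, computing the cross-correlation conventional in CNNs; real implementations use optimized convolution routines):

```python
import numpy as np

def conv2d_same(x, k):
    """Apply kernel k to 2-D array x, zero-padding the input beforehand so
    that the output keeps the spatial size of the input."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))          # zero padding before convolution
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out
```

Because every layer preserves the spatial size this way, all feature maps in the sub-network keep consistent pixel dimensions until the deconvolutional layer enlarges them.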
Further, in step 2.3 the synthesis step is completed simply by a parameter-free element-wise addition layer.
Further, in step 3 the parameters related to back-propagation are assigned as follows: the batch size is set to 64, the momentum parameter to 0.9, and the weight decay to 0.0001; a variable learning-rate strategy is used, with the initial learning rate set to 0.01 and decayed to 10% of its previous value whenever the error stagnates.
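The update rule implied by these hyperparameters (SGD with momentum, weight decay, and stagnation-triggered learning-rate decay) can be sketched generically in NumPy as follows. The `patience` threshold that operationalizes "when the error stagnates" is an assumption, since the disclosure does not quantify it.

```python
import numpy as np

class MomentumSGD:
    """SGD with momentum, weight decay, and stagnation-triggered LR decay,
    using the hyperparameters given in the text."""
    def __init__(self, lr=0.01, momentum=0.9, weight_decay=1e-4, patience=3):
        self.lr, self.mu, self.wd = lr, momentum, weight_decay
        self.patience, self.best, self.stall = patience, np.inf, 0
        self.v = None

    def step(self, w, grad):
        g = grad + self.wd * w                       # weight-decay term
        self.v = self.lr * g if self.v is None else self.mu * self.v + self.lr * g
        return w - self.v

    def on_epoch_end(self, loss):
        # Decay the learning rate to 10% when the error stagnates.
        if loss < self.best - 1e-8:
            self.best, self.stall = loss, 0
        else:
            self.stall += 1
            if self.stall >= self.patience:
                self.lr *= 0.1
                self.stall = 0
```

On a simple quadratic objective this optimizer converges rapidly, and three non-improving epochs in a row trigger one tenfold learning-rate cut.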
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention uses a single-image super-resolution reconstruction method based on a deep component learning network: it first performs a global component decomposition of the input low-resolution image and then learns a mapping for each of the two resulting components separately. Compared with existing deep-learning-based super-resolution reconstruction methods, the core idea of the invention is to use the residual component extracted from the low-resolution input image to predict its high-resolution counterpart. This learning scheme not only facilitates and accelerates the training of the network, but also reduces the response intensity of the convolutional layers, making the predicted residual more accurate and improving the overall performance of the model.
2. In addition, the learning scheme of the invention requires no additional interpolation preprocessing of the input image. The benefit is that the input image remains in the low-resolution space, which reduces the computational load of the subsequent feature extraction and mapping operations. Furthermore, the upsampling of the feature maps is performed by a set of learnable convolution kernels that are optimized and updated together with the training of the network, so the method outperforms current models that upsample with traditional interpolation.
3. The method of the invention not only improves the quality of the reconstructed super-resolution image but also increases the running speed of the model.
Brief description of the drawings
Fig. 1 is a structural diagram of the single-image super-resolution reconstruction method based on a deep component learning network provided by the invention.
Fig. 2 is a visual comparison of one group of experimental results of the different methods, where (1) is the original high-resolution image; (2) is the reconstruction result of the method in reference [5]; (3) is the reconstruction result of the method in reference [6]; (4) is the reconstruction result of the method in reference [7]; (5) is the reconstruction result of the method in reference [8]; and (6) is the result after super-resolution reconstruction by the method of the invention.
References
[1] Bevilacqua M, Roumy A, Guillemot C, et al. Low-complexity single-image super-resolution based on nonnegative neighbor embedding[C]//British Machine Vision Conference (BMVC). 2012.
[2] Zeyde R, Elad M, Protter M. On single image scale-up using sparse-representations[J]. Curves and Surfaces, 2012: 711-730.
[3] Martin D, Fowlkes C, Tal D, et al. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics[C]//Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on. IEEE, 2001, 2: 416-423.
[4] Huang J B, Singh A, Ahuja N. Single image super-resolution from transformed self-exemplars[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 5197-5206.
[5] Gu S, Zuo W, Xie Q, et al. Convolutional sparse coding for image super-resolution[C]//Proceedings of the IEEE International Conference on Computer Vision. 2015: 1823-1831.
[6] Dong C, Loy C C, He K, et al. Image super-resolution using deep convolutional networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016, 38(2): 295-307.
[7] Liu D, Wang Z, Wen B, et al. Robust single image super-resolution via deep networks with sparse prior[J]. IEEE Transactions on Image Processing, 2016, 25(7): 3194-3207.
[8] Kim J, Kwon Lee J, Mu Lee K. Accurate image super-resolution using very deep convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1646-1654.
Specific embodiments
The technical solution provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following specific embodiments are only illustrative of the invention and are not intended to limit its scope.
The structure of the single-image super-resolution reconstruction method based on a deep component learning network provided by the invention is shown in Fig. 1; the method specifically comprises the following steps:
Step 1: Build the training set. First, operations such as rotation, flipping, and scale transformation are applied to the existing sample images in the training set to increase its capacity and diversity. Patch extraction and degradation are then performed on these sample images to obtain high-resolution images X_i and corresponding low-resolution images Y_i, which form the training set {(X_i, Y_i)}_{i=1}^N, where N denotes the capacity of the training set.
Step 2: Initialize the deep component network. The deep component network designed by the invention comprises the following three main modules: (1) the component decomposition and representation module; (2) the feature extraction and mapping module; (3) the composition module. Each module is described in detail below.
Step 2.1: Component decomposition and representation module. This module performs a global decomposition and re-representation of the original low-resolution input Y of the network. If the smooth component of the original input image is defined as S_Y and the residual component as R_Y, then the input can be expressed as follows:
Y = S_Y + R_Y
The component decomposition and representation module then extracts the smooth component from the original input with a convolutional sparse coding technique, a process that can be expressed by the following formula:
Z = argmin_Z (1/2)||f*Z − Y||² + φ||h*Z||² + ψ||v*Z||²    (1)
where the symbol "*" denotes discrete convolution; Z is the sparse feature map; f, h, and v are the smoothing filter, the horizontal gradient operator, and the vertical gradient operator, respectively; and φ and ψ are the corresponding weights. These parameters are configured according to the following rules: f is defined as a 3 × 3 2-D filter with all elements equal to 1/9; h is a 1-D convolution kernel equal to [−1, 1]; v is a 1-D convolution kernel equal to the transpose of h; φ and ψ are both set to 30.
With these clearly defined filters, formula (1) can be solved efficiently in the Fourier domain:
Z = F⁻¹( (conj(F(f)) ∘ F(Y)) / (conj(F(f)) ∘ F(f) + φ·conj(F(h)) ∘ F(h) + ψ·conj(F(v)) ∘ F(v)) )    (2)
where F and F⁻¹ denote the fast Fourier transform and its inverse; F(Y), F(f), F(h), and F(v) are the Fourier transforms of Y, f, h, and v; conj(·) denotes the complex conjugate; the symbol "∘" denotes element-wise matrix multiplication; likewise, the division in the above formula is performed element-wise.
After the sparse feature map Z is obtained, the smooth component of the low-resolution image can be expressed as S_Y = f*Z, and the residual component as R_Y = Y − S_Y.
Step 2.2: Feature extraction and mapping module. The task of this module is to map the two components obtained in step 2.1 into the corresponding high-resolution space in a deep-learning manner. The smooth component S_Y is considered first: since it contains hardly any high-frequency information, the invention upsamples it directly to the corresponding target high-resolution size with conventional bicubic interpolation.
The residual component R_Y is then considered: the invention processes it with a convolutional neural network with a deep structure, used as a sub-network. The network comprises four parts: (1) feature extraction, in which 1 convolutional layer extracts features directly from the low-resolution residual component; (2) feature propagation, in which 17 convolutional layers propagate the extracted feature maps; (3) upsampling, in which 1 deconvolutional layer upsamples the feature maps; (4) reconstruction, in which 1 convolutional layer performs the reconstruction operation on the upsampled feature maps, yielding the representation of the low-resolution residual component in the high-resolution space.
The configuration of the other intermediate layers of the network can typically be expressed as follows: all convolutional layers contain 64 convolution kernels of size 3 × 3; the deconvolutional layer contains 64 convolution kernels of size 6 × 6; there are no pooling layers between the convolutional layers; and, to introduce nonlinearity, the rectified linear unit ReLU (Rectified Linear Unit) is used as the activation function of the convolutional layers, in a pre-activation configuration. In addition, during forward propagation the input is zero-padded before each convolution so that the pixel dimensions of all feature maps remain consistent.
Step 2.3: Composition module. This module synthesizes the mapped high-resolution components into the output of the network. In consideration of computational efficiency and other factors, the synthesis is completed simply by a parameter-free element-wise addition layer.
Step 3: Train the deep component network. On the training set, the constructed deep component network is iteratively trained with mini-batch stochastic gradient descent and the back-propagation algorithm to obtain the weight-optimized model.
This step trains the network initialized in step 2 on the training set {(X_i, Y_i)}_{i=1}^N constructed in step 1, optimizing all trainable parameters Θ of the network. The invention uses the mean squared error as the loss function of the network:
L(Θ) = (1/N) Σ_{i=1}^N ||F(Y_i; Θ) − X_i||²
where L denotes the loss function of the network and F denotes the function that the whole network applies to the input Y.
The weights are then optimized with mini-batch stochastic gradient descent and error back-propagation so as to minimize the loss function. The parameters related to back-propagation can typically be assigned as follows: the batch size is set to 64, the momentum parameter to 0.9, and the weight decay to 0.0001. A variable learning-rate strategy is used: the initial learning rate is set to 0.01 and decays to 10% of its previous value whenever the error stagnates. Finally, the trained, optimized network is obtained.
Step 4: Reconstruct the low-resolution image with the trained network. First, a color low-resolution image I_l of size N1 × N2 × 3 is read in, where the positive integers N1 and N2 denote the numbers of rows and columns of the image pixel matrix and 3 is the number of color channels. The input image is then converted from the RGB color space to the YCbCr color space, where R, G, and B denote the red, green, and blue components of the image before conversion, and Y, Cb, and Cr denote the luminance component and the blue- and red-difference chroma components after conversion. The size of the converted image is unchanged, remaining N1 × N2 × 3.
The two chroma channels of the converted low-resolution image are then each interpolated with the bicubic algorithm so that their size after interpolation matches that of the high-resolution image to be reconstructed, and the luminance component of the low-resolution image is fed into the network trained in step 3 to obtain the high-resolution prediction of the luminance component.
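The conversion formula itself is not spelled out in the text; a common concrete choice consistent with the description is the ITU-R BT.601 mapping (as used, for example, by MATLAB's rgb2ycbcr), sketched here for R, G, B values in [0, 1]:

```python
import numpy as np

# BT.601 analysis matrix for R, G, B in [0, 1]; Y lands in [16, 235],
# Cb and Cr in [16, 240].
M = np.array([[ 65.481, 128.553,  24.966],
              [-37.797, -74.203, 112.000],
              [112.000, -93.786, -18.214]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(img):
    """img: H x W x 3 RGB array in [0, 1] -> H x W x 3 YCbCr array."""
    return img @ M.T + OFFSET

def ycbcr_to_rgb(img):
    """Inverse mapping back to RGB, used for the final output of step 5."""
    return (img - OFFSET) @ np.linalg.inv(M).T
```

Only the Y channel is fed through the trained network; Cb and Cr are upsampled by bicubic interpolation, after which the three channels are converted back to RGB for the final output.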
Step 5: Convert the super-resolution reconstruction result obtained in step 4 from the YCbCr color space back to the RGB color space, and take the converted color image as the final output.
To verify the effectiveness of the method of the invention, comparative experiments were carried out on four standard image test sets (Set5 [1], Set14 [2], BSD100 [3], Urban100 [4]). In the experiments, the super-resolution method based on the deep component learning network proposed by the invention was applied to the images in each test set, with a magnification factor of 4. For comparison, four current mainstream super-resolution reconstruction methods were also included. Table 1 gives the mean performance indicators of all tested methods in each case, and Fig. 2 shows a visual comparison of one group of experimental results. Fig. 2 (1) shows the original high-resolution image named Butterfly in test set Set5, Fig. 2 (2) to Fig. 2 (5) show the reconstruction effects of the methods of references [5] to [8], respectively, and Fig. 2 (6) shows the corresponding result after reconstruction with the proposed super-resolution method. The experimental results show that the proposed method greatly improves the reconstruction quality of single images, both in subjective visual effect and in objective performance evaluation indicators, and is clearly superior to several representative current mainstream methods.
Table 1: Quantitative comparison of the different algorithms on the standard test sets
The technical means disclosed in the embodiments of the invention are not limited to those disclosed above, but also include technical solutions consisting of any combination of the above technical features. It should be pointed out that, for those skilled in the art, various improvements and modifications may be made without departing from the principle of the invention, and such improvements and modifications are also regarded as falling within the protection scope of the invention.
Claims (6)
1. a kind of single image super resolution ratio reconstruction method of depth ingredient learning network, which comprises the steps of:
Step 1: building training set
Existing sample image is concentrated to carry out map function to increase the capacity of training sample and multiplicity training image first
Property;Then region extraction and degeneration are carried out to these sample images, obtains high-definition picture XiAnd it is corresponding low
Image in different resolution Yi, and with this composing training collectionWherein N indicates the capacity of training set;
Step 2: initialization depth first carries out global ingredient breakdown to the low-resolution image of input at subnetwork, recycle from
The residual error ingredient prediction of middle extraction its in the correspondence image of high resolution space, specifically include:
Step 2.1: using ingredient breakdown and representation module be used for the original low-resolution input Y to network carry out global decomposition and
Again operation is indicated;
It defines being smoothed in original input picture and is divided into SY, residual error ingredient is RY, input following indicate:
Y=SY+RY
Then, ingredient breakdown and representation module using convolution sparse coding technology from being originally inputted middle its smooth ingredient of extraction, this
One process such as following formula indicates:
In formula, symbol " * " indicates discrete convolution operation;Z is sparse features figure;F, h, v are smoothing filter, horizontal gradient respectively
Operator, vertical gradient operator;φ withIt is corresponding weight respectively;
Formula (1) is efficiently solved in Fourier:
In formula,WithIndicate Fast Fourier Transform (FFT) and its inverse transformation;Y, f, h are respectively indicated, v's
Fourier transformation;Horizontal line on letter character indicates its complex conjugate;By element multiplication between symbol " " representing matrix;Equally
Ground, the division in above formula is also to be operated by element;
After obtaining sparse features figure Z, the smooth ingredient of low-resolution image is expressed as SY=f*Z, residual error ingredient then can be RY
=Y-SY;
Step 2.2: being reflected two kinds of ingredients obtained in step 2.1 by the way of feature extraction and mapping block deep learning
It is mapped in corresponding high resolution space;
Using traditional bicubic interpolation method by smooth ingredient SYIt is upsampled to corresponding target high-resolution size;
Residual error ingredient R is handled as sub-network using the convolutional neural networks with deep structureYObtain low-resolution residual error at
Divide the expression in high resolution space;
Step 2.3: the high-resolution ingredient after mapping is synthesized to the output of network using composite module;
Step 3: training depth is at subnetwork
Utilize the training set constructed in step 1Network after initializing in step 2 is trained, is optimized in network
All can training parameter Θ;Loss function using mean square error as network:
In formula, L indicates that the loss function of network, F indicate that whole network acts on the function of input Y;
Then, weight is optimized and revised using the backpropagation of batch processing stochastic gradient descent method and error, finally, is trained
Network after optimization;
Step 4: utilizing trained network reconnection low-resolution image
Firstly, reading in a width size is N1×N2× 3 colored low-resolution image Il, wherein N1And N2For positive integer, difference table
Show the line number and columns of the low-resolution image picture element matrix, 3 indicate Color Channel number;
Then the image of input is transformed into the color space YCbCr by RGB color space:
In formula, R, G, B respectively indicate red component of the image in the color space RBG before conversion, green component and blue point
Amount;And Y, Cb, Cr then indicate the luminance component after converted in the color space YCbCr, blue and red deviation chromatic component;Turn
The size for changing rear image is constant, is still N1×N2×3;
Then, interpolation is carried out respectively using two color channels of the bicubic interpolation algorithm to the low-resolution image after conversion,
Size and the expected high-definition picture for needing to rebuild after interpolation is in the same size;And it is the luminance component of low-resolution image is defeated
Enter in trained network, to obtain the high-resolution prediction output of luminance component in step 3;
Step 5: convert the super-resolution reconstruction result obtained in Step 4 from the YCbCr color space back to the RGB color space, and take the converted color image as the final output.
2. The single-image super-resolution reconstruction method based on a deep component learning network according to claim 1, characterized in that the parameter values in formula (1) of said step 2.1 are as follows:
f is a 3 × 3 2-D filter with all elements equal to 1/9; h is a 1-D convolution kernel equal to [-1, 1]; v is a 1-D convolution kernel equal to the transpose of h; φ and the second threshold parameter are both set to 30.
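The fixed (non-trainable) kernels of claim 2 can be written out concretely as follows; `apply_mean_filter` is an illustrative helper, not part of the patent:

```python
# Claim 2's fixed kernels, written out explicitly.
f = [[1 / 9] * 3 for _ in range(3)]  # 3x3 mean filter, all elements 1/9
h = [-1, 1]                          # 1-D horizontal gradient kernel
v = [[-1], [1]]                      # transpose of h: vertical gradient

def apply_mean_filter(patch):
    """Response of the 3x3 mean filter f at the center of one
    3x3 image patch."""
    return sum(f[i][j] * patch[i][j] for i in range(3) for j in range(3))

# A constant patch is unchanged by averaging.
print(apply_mean_filter([[9, 9, 9]] * 3))  # ≈ 9.0
```

The mean filter extracts the smooth component, while h and v respond to horizontal and vertical intensity changes, which is consistent with a smooth/residual decomposition of the input.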
3. The single-image super-resolution reconstruction method based on a deep component learning network according to claim 1, characterized in that the convolutional neural network with a deep structure in said step 2.2 comprises four parts:
(1) feature extraction: features are extracted directly from the low-resolution residual component with 1 convolutional layer;
(2) feature propagation: the extracted feature maps are passed through 17 convolutional layers;
(3) up-sampling: the feature maps are up-sampled with 1 transposed-convolution (deconvolution) layer;
(4) reconstruction: a reconstruction operation is applied to the up-sampled feature maps with 1 convolutional layer, giving the representation of the low-resolution residual component in the high-resolution space.
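The four parts above can be traced as a feature-map shape walkthrough. This is illustrative only: the 3 × 3 kernel size, the transposed-convolution parameters, and the 2x upscaling factor are assumptions, since this text does not specify them (only the layer counts and the zero padding of claim 4 are given):

```python
def conv2d_shape(h, w, kernel=3, pad=1, stride=1):
    """Output height/width of a zero-padded convolution (claim 4 pads
    the input before each convolution, so a 3x3 conv keeps the size)."""
    return ((h + 2 * pad - kernel) // stride + 1,
            (w + 2 * pad - kernel) // stride + 1)

def deconv2d_shape(h, w, kernel=4, pad=1, stride=2):
    """Output height/width of a transposed convolution (the 1-layer
    up-sampler); kernel/stride here are assumed values chosen to give
    a 2x upscale."""
    return ((h - 1) * stride - 2 * pad + kernel,
            (w - 1) * stride - 2 * pad + kernel)

def component_subnetwork_shapes(h, w):
    """Trace feature-map sizes through the four parts of claim 3."""
    shapes = [(h, w)]
    for _ in range(1 + 17):          # feature extraction + propagation
        shapes.append(conv2d_shape(*shapes[-1]))
    shapes.append(deconv2d_shape(*shapes[-1]))   # up-sampling
    shapes.append(conv2d_shape(*shapes[-1]))     # reconstruction
    return shapes

# A 64x64 low-resolution residual component: the 18 padded convolutions
# preserve 64x64, then the transposed convolution doubles it to 128x128.
print(component_subnetwork_shapes(64, 64)[-1])  # (128, 128)
```

Keeping the up-sampling inside the network (rather than interpolating the residual beforehand) is what allows the 18 convolutional layers to operate at the cheap low-resolution scale.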
4. The single-image super-resolution reconstruction method based on a deep component learning network according to claim 3, characterized in that, during forward propagation, the convolutional neural network zero-pads the input before each convolution.
5. The single-image super-resolution reconstruction method based on a deep component learning network according to claim 1, characterized in that the synthesis step in said step 2.3 is completed by a parameter-free layer that performs simple element-wise addition.
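Claim 5's parameter-free composite module amounts to a few lines; flat lists stand in here for the high-resolution component tensors:

```python
def synthesize(base_component, residual_component):
    """Claim 5's parameter-free composite module: the mapped
    high-resolution components are combined by simple element-wise
    addition (no trainable weights)."""
    assert len(base_component) == len(residual_component)
    return [b + r for b, r in zip(base_component, residual_component)]

# Smooth base plus detail residual gives the reconstructed output.
print(synthesize([10, 20, 30], [1, -2, 3]))  # [11, 18, 33]
```

Because the layer has no parameters, it adds nothing to Θ in Step 3; all learning capacity stays in the component sub-networks.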
6. The single-image super-resolution reconstruction method based on a deep component learning network according to claim 1, characterized in that in said step 3 the parameters related to back-propagation are assigned as follows: the batch size is set to 64, the momentum parameter is set to 0.9, and the weight decay is set to 0.0001; a variable learning-rate strategy is used, with the initial learning rate set to 0.01, and when the error stagnates the learning rate decays to 10% of its previous value.
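A minimal sketch of claim 6's learning-rate rule follows. The plateau test (no improvement over the last `patience` recorded errors) is an assumed interpretation of "when the error stagnates", and the batch size (64), momentum (0.9), and weight decay (0.0001) would be passed to the optimizer separately:

```python
def next_learning_rate(current_lr, error_history, patience=3):
    """Claim 6's variable learning-rate rule: when the error stagnates,
    decay the rate to 10% of its previous value.  Stagnation is modeled
    here, as an assumption, as no improvement over the last `patience`
    recorded errors."""
    if len(error_history) <= patience:
        return current_lr
    recent_best = min(error_history[-patience:])
    earlier_best = min(error_history[:-patience])
    if recent_best >= earlier_best:      # no progress: decay to 10%
        return current_lr * 0.1
    return current_lr

lr = 0.01                                 # initial learning rate (claim 6)
errors = [1.0, 0.8, 0.7, 0.7, 0.7, 0.7]   # training error has plateaued
lr = next_learning_rate(lr, errors)       # decays to 10% of 0.01
print(lr)
```

Calling this once per epoch reproduces the claimed schedule: the rate stays at 0.01 while the error keeps falling and drops by a factor of 10 at each plateau.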
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810666177.3A CN109064396B (en) | 2018-06-22 | 2018-06-22 | Single image super-resolution reconstruction method based on deep component learning network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109064396A true CN109064396A (en) | 2018-12-21 |
CN109064396B CN109064396B (en) | 2023-04-07 |
Family
ID=64821544
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810666177.3A Active CN109064396B (en) | 2018-06-22 | 2018-06-22 | Single image super-resolution reconstruction method based on deep component learning network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109064396B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106683067A (en) * | 2017-01-20 | 2017-05-17 | 福建帝视信息科技有限公司 | Deep learning super-resolution reconstruction method based on residual sub-images |
US20180137603A1 (en) * | 2016-11-07 | 2018-05-17 | Umbo Cv Inc. | Method and system for providing high resolution image through super-resolution reconstruction |
Non-Patent Citations (1)
Title |
---|
Lu Xiaobo et al.: "Super-Resolution Reconstruction of Moving Vehicle License Plate Images" (in English), Journal of Southeast University (English Edition) *
Cited By (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903226B (en) * | 2019-01-30 | 2023-08-15 | 天津城建大学 | Image super-resolution reconstruction method based on symmetric residual convolution neural network |
CN109903226A (en) * | 2019-01-30 | 2019-06-18 | 天津城建大学 | Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks |
CN109978763A (en) * | 2019-03-01 | 2019-07-05 | 昆明理工大学 | Image super-resolution reconstruction algorithm based on skip-connection residual network |
CN109978764B (en) * | 2019-03-11 | 2021-03-02 | 厦门美图之家科技有限公司 | Image processing method and computing device |
CN109949225A (en) * | 2019-03-11 | 2019-06-28 | 厦门美图之家科技有限公司 | Image processing method and computing device |
CN109978764A (en) * | 2019-03-11 | 2019-07-05 | 厦门美图之家科技有限公司 | Image processing method and computing device |
CN110033410A (en) * | 2019-03-28 | 2019-07-19 | 华中科技大学 | Image reconstruction model training method, image super-resolution rebuilding method and device |
CN110033410B (en) * | 2019-03-28 | 2020-08-04 | 华中科技大学 | Image reconstruction model training method, image super-resolution reconstruction method and device |
CN110009565A (en) * | 2019-04-04 | 2019-07-12 | 武汉大学 | Super-resolution image reconstruction method based on lightweight network |
CN110288524A (en) * | 2019-05-09 | 2019-09-27 | 广东启迪图卫科技股份有限公司 | Deep-learning super-resolution method based on enhanced up-sampling and a discriminative fusion mechanism |
CN110246083A (en) * | 2019-05-10 | 2019-09-17 | 杭州电子科技大学 | Fluorescence microscope image super-resolution imaging method |
CN110246083B (en) * | 2019-05-10 | 2023-02-24 | 杭州电子科技大学 | Fluorescence microscopic image super-resolution imaging method |
CN110136061B (en) * | 2019-05-10 | 2023-02-28 | 电子科技大学中山学院 | Resolution improving method and system based on depth convolution prediction and interpolation |
CN110136061A (en) * | 2019-05-10 | 2019-08-16 | 电子科技大学中山学院 | Resolution improving method and system based on depth convolution prediction and interpolation |
CN110223288A (en) * | 2019-06-17 | 2019-09-10 | 华东交通大学 | Multi-component content prediction method and system for the rare-earth extraction process |
CN112150354A (en) * | 2019-06-26 | 2020-12-29 | 四川大学 | Single image super-resolution method combining contour enhancement and denoising statistical prior |
CN110310227B (en) * | 2019-06-27 | 2020-09-08 | 电子科技大学 | Image super-resolution reconstruction method based on high-low frequency information decomposition |
CN110310227A (en) * | 2019-06-27 | 2019-10-08 | 电子科技大学 | Image super-resolution reconstruction method based on high- and low-frequency information decomposition |
CN110570351A (en) * | 2019-08-01 | 2019-12-13 | 武汉大学 | Image super-resolution reconstruction method based on convolution sparse coding |
CN110570351B (en) * | 2019-08-01 | 2021-05-25 | 武汉大学 | Image super-resolution reconstruction method based on convolution sparse coding |
CN110766609B (en) * | 2019-08-29 | 2023-02-10 | 王少熙 | Depth-of-field map super-resolution reconstruction method for ToF camera |
CN110766609A (en) * | 2019-08-29 | 2020-02-07 | 王少熙 | Depth-of-field map super-resolution reconstruction method for ToF camera |
CN110992265B (en) * | 2019-12-02 | 2023-10-20 | 北京数码视讯科技股份有限公司 | Image processing method and model, training method of model and electronic equipment |
CN111127317A (en) * | 2019-12-02 | 2020-05-08 | 深圳供电局有限公司 | Image super-resolution reconstruction method and device, storage medium and computer equipment |
CN111127317B (en) * | 2019-12-02 | 2023-07-25 | 深圳供电局有限公司 | Image super-resolution reconstruction method, device, storage medium and computer equipment |
CN110992265A (en) * | 2019-12-02 | 2020-04-10 | 北京数码视讯科技股份有限公司 | Image processing method and model, model training method and electronic equipment |
CN111353940A (en) * | 2020-03-31 | 2020-06-30 | 成都信息工程大学 | Image super-resolution reconstruction method based on deep learning iterative up-down sampling |
CN111784571A (en) * | 2020-04-13 | 2020-10-16 | 北京京东尚科信息技术有限公司 | Method and device for improving image resolution |
CN111551988A (en) * | 2020-04-23 | 2020-08-18 | 中国地质大学(武汉) | Seismic data anti-alias interpolation method combining deep learning and prediction filtering |
CN111551988B (en) * | 2020-04-23 | 2021-06-25 | 中国地质大学(武汉) | Seismic data anti-alias interpolation method combining deep learning and prediction filtering |
CN111640061A (en) * | 2020-05-12 | 2020-09-08 | 哈尔滨工业大学 | Self-adaptive image super-resolution system |
CN111929723A (en) * | 2020-07-15 | 2020-11-13 | 清华大学 | Velocity model super-resolution method under seismic data constraint based on multi-task learning |
CN114331853B (en) * | 2020-09-30 | 2023-05-12 | 四川大学 | Single image restoration iteration framework based on target vector updating module |
US11948279B2 (en) | 2020-11-23 | 2024-04-02 | Samsung Electronics Co., Ltd. | Method and device for joint denoising and demosaicing using neural network |
CN112308781A (en) * | 2020-11-23 | 2021-02-02 | 中国科学院深圳先进技术研究院 | Single image three-dimensional super-resolution reconstruction method based on deep learning |
CN113096017A (en) * | 2021-04-14 | 2021-07-09 | 南京林业大学 | Image super-resolution reconstruction method based on depth coordinate attention network model |
CN113096017B (en) * | 2021-04-14 | 2022-01-25 | 南京林业大学 | Image super-resolution reconstruction method based on depth coordinate attention network model |
CN113379602A (en) * | 2021-06-08 | 2021-09-10 | 中国科学技术大学 | Light field super-resolution enhancement method by using zero sample learning |
CN113379602B (en) * | 2021-06-08 | 2024-02-27 | 中国科学技术大学 | Light field super-resolution enhancement method using zero sample learning |
CN113538307A (en) * | 2021-06-21 | 2021-10-22 | 陕西师范大学 | Synthetic aperture imaging method based on multi-view super-resolution depth network |
TWI818491B (en) * | 2021-12-16 | 2023-10-11 | 聯發科技股份有限公司 | Method for image refinement and system thereof |
Also Published As
Publication number | Publication date |
---|---|
CN109064396B (en) | 2023-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109064396A (en) | Single-image super-resolution reconstruction method based on a deep component learning network | |
CN110119780B (en) | Hyper-spectral image super-resolution reconstruction method based on generation countermeasure network | |
Hui et al. | Fast and accurate single image super-resolution via information distillation network | |
CN108734659B (en) | Sub-pixel convolution image super-resolution reconstruction method based on multi-scale label | |
Liu et al. | A spectral grouping and attention-driven residual dense network for hyperspectral image super-resolution | |
CN107123089B (en) | Remote sensing image super-resolution reconstruction method and system based on depth convolution network | |
CN108830813A (en) | Image super-resolution enhancement method based on knowledge distillation | |
CN108805814B (en) | Image super-resolution reconstruction method based on multi-band deep convolutional neural network | |
CN103093444B (en) | Image super-resolution reconstruction method based on self-similarity and structural information constraint | |
CN110930342B (en) | Depth map super-resolution reconstruction network construction method based on color map guidance | |
CN109741260A (en) | Efficient super-resolution method based on a deep back-projection network | |
CN113191953B (en) | Transformer-based face image super-resolution method | |
CN108921786A (en) | Image super-resolution reconstruction method based on residual convolutional neural network | |
CN109035267B (en) | Image target matting method based on deep learning | |
CN112435191B (en) | Low-illumination image enhancement method based on fusion of multiple neural network structures | |
CN112381711B (en) | Training and quick super-resolution reconstruction method for light field image reconstruction model | |
CN111951164B (en) | Image super-resolution reconstruction network structure and image reconstruction effect analysis method | |
CN110322402A (en) | Medical image super-resolution reconstruction method based on a dense mixed attention network | |
Hu et al. | Hyperspectral image super resolution based on multiscale feature fusion and aggregation network with 3-D convolution | |
CN113920043A (en) | Double-current remote sensing image fusion method based on residual channel attention mechanism | |
CN110533591A (en) | Super-resolution image reconstruction method based on an encoder-decoder structure | |
CN115311184A (en) | Remote sensing image fusion method and system based on semi-supervised deep neural network | |
Liu et al. | Frequency separation-based multi-scale cascading residual block network for image super resolution | |
CN117058367A (en) | Semantic segmentation method and device for high-resolution remote sensing image building | |
CN116703725A (en) | Method for super-resolution of real-world text images using a multi-feature-aware dual-branch network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||