CN109544457A - Image super-resolution method, storage medium and terminal based on a densely connected neural network - Google Patents

Image super-resolution method, storage medium and terminal based on a densely connected neural network

Info

Publication number
CN109544457A
Authority
CN
China
Prior art keywords
image
resolution
dense link
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811474661.2A
Other languages
Chinese (zh)
Inventor
匡平
马霆松
王豪爽
郭雯霞
陈鹏
彭亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN201811474661.2A
Publication of CN109544457A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent

Abstract

The invention discloses an image super-resolution method, storage medium and terminal based on a densely connected neural network. The method includes: image preprocessing; feature extraction: building a densely connected neural network, feeding the low-resolution image Input into the entrance of the densely connected neural network, and extracting the feature information contained in Input through the network's computation; predicting the super-resolution image and updating the network parameters: upsampling/deconvolving the feature-extracted image to obtain the predicted image Predict, computing the error value between the predicted image Predict and the ground-truth image Label, and back-propagating to update the parameters of the densely connected neural network; and super-resolution reconstruction. The invention significantly improves the ability of a deep neural network to extract low-frequency and high-frequency image features, improves the image super-resolution effect and the amount of information a picture can provide, and is therefore applicable to fields that expect to obtain high-resolution images offering more detail.

Description

Image super-resolution method, storage medium and terminal based on a densely connected neural network
Technical field
The present invention relates to the field of medical image and satellite image processing, and more particularly to an image super-resolution method, storage medium and terminal based on a densely connected neural network.
Background art
Super-resolution (SR, Super-Resolution) refers to reconstructing a corresponding high-resolution image from an observed low-resolution image, and it has important application value in fields such as surveillance equipment, satellite imagery and medical imaging. SR can be divided into two classes: reconstructing a high-resolution image from multiple low-resolution images, and reconstructing a high-resolution image from a single low-resolution image. Deep-learning-based SR is mainly based on single-image reconstruction, i.e. Single Image Super-Resolution (SISR).
SISR is an inverse problem: for a given low-resolution image there may exist many different high-resolution images corresponding to it, so a prior is usually added as a regularization constraint when solving for the high-resolution image. In traditional methods, this prior information can be learned from examples of paired low- and high-resolution images. Deep-learning-based SR instead uses a neural network to directly learn an end-to-end mapping from low-resolution to high-resolution images.
A number of deep-learning-based approaches have already been proposed. For example, the Super-Resolution Convolutional Neural Network (SRCNN), proposed by researchers in 2016, was one of the earlier convolutional neural networks used for SR. Its structure is very simple, using only three convolutional layers. For a low-resolution image, the method first upscales it to the target size with bicubic interpolation, then applies a nonlinear mapping through the three-layer convolutional network, and outputs the result as the high-resolution image. The authors interpret the three-layer structure as the three steps of traditional SR methods: patch extraction and representation, nonlinear feature mapping, and final reconstruction.
SRCNN has few layers and a small receptive field. DRCN (Deeply-Recursive Convolutional Network for Image Super-Resolution) proposed using more convolutional layers to enlarge the receptive field (41x41) and, to avoid an excessive number of parameters, adopted a recursive structure. Similar to SRCNN, the network is divided into three modules: the first is an embedding network, equivalent to feature extraction; the second is an inference network, equivalent to the nonlinear transformation of features; and the third is a reconstruction network, which produces the final result from the feature maps. The inference network is a recursive network, i.e. the data passes through the same layer multiple times. DRCN achieves a better super-resolution effect than SRCNN, but the time needed to generate an image also increases dramatically.
SRGAN (Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network) applies a generative adversarial network (GAN) to the SR problem. Its starting point is that traditional methods generally handle small upscaling factors; when the upscaling factor is 4 or more, the result easily looks overly smooth and lacks realism in some details. SRGAN therefore uses a GAN to generate the details in the image. The super-resolution images generated by SRGAN are more realistic and vivid than those of other deep-neural-network-based methods, but a series of problems follow. First, the fidelity of the image is not high: measured quantitatively with PSNR and SSIM, the super-resolution effect of SRGAN is not very strong. Second, the SRGAN network structure is complex, which means a higher training cost, more difficult training, and less stable super-resolution results.
Summary of the invention
It is an object of the present invention to overcome the deficiencies of the prior art by providing an image super-resolution method, storage medium and terminal based on a densely connected neural network, solving the problems that prior-art networks are complex and slow.
The object of the present invention is achieved through the following technical solution: an image super-resolution method based on a densely connected neural network, comprising the following steps:
Image preprocessing: randomly crop the training images to obtain the corresponding high-resolution images Label, apply image enhancement to Label, and downsample the enhanced Label to generate the low-resolution images Input;
Feature extraction: build a densely connected neural network, feed the low-resolution image Input into the entrance of the densely connected neural network, and extract the feature information contained in Input through the network's computation;
Predicting the super-resolution image and updating the network parameters: upsample/deconvolve the feature-extracted image to obtain a predicted image Predict whose resolution reaches the expected high resolution; compute the error value between the predicted image Predict and the ground-truth image Label, and back-propagate to update the parameters of the densely connected neural network;
Super-resolution reconstruction: cut the image to be super-resolved in order, feed the Patches obtained after cutting into the densely connected neural network for super-resolution to compute the predicted value Output of each Patch, and splice the outputs of the Patches in order to obtain the final super-resolution picture.
Further, the densely connected neural network includes N ordinary convolutional layers, N-1 activation function layers, N-2 densely connected convolution groups and 1 upsampling/deconvolution layer; each densely connected convolution group contains M sequentially connected densely connected convolutional layers, and every densely connected convolutional layer in a group is linked with all the other densely connected convolutional layers in that group;
between adjacent pairs of the first N-1 sequentially connected ordinary convolutional layers, an activation function layer and a densely connected convolution group are linked in sequence, and the output of the (N-1)-th ordinary convolutional layer is further followed in sequence by the (N-1)-th activation function layer, the N-th ordinary convolutional layer and the upsampling/deconvolution layer.
Further, in downsampling the enhanced Label to generate the low-resolution image Input, the downsampling is computed as follows:
for an image I of size M*N, downsampling by a factor of s yields an image of size (M/s)*(N/s), where s is a common divisor of M and N; when the image is treated in matrix form, each s*s window of the original image becomes one pixel whose value is the mean of all pixels in the window:

P_k = \frac{1}{s^2} \sum_{I_i \in \mathrm{win}(k)} I_i

where P_k denotes the value of the single pixel left by the k-th s*s image fragment after downsampling, win(k) denotes that s*s image fragment (k indexes the fragments, its range being determined by the image size and by s), and I_i denotes a pixel inside the s*s image fragment.
Further, the formula for extracting the feature information contained in Input is as follows:

F_l = \max(0, W_l \times F_{l-1})

where W_l denotes the weights of the l-th convolutional layer and F_l denotes the feature maps output by the l-th convolutional layer; hence the l-th convolutional layer works with k × (l-1) + k_0 feature maps, where k_0 is the number of channels of the input and k is the feature growth step of the densely connected convolutional layers.
Further, the upsampling/deconvolution of the feature-extracted image, which obtains a predicted image Predict whose resolution reaches the expected high resolution, is computed as follows:

C_l(y) = \frac{\lambda}{2} \sum_{i=1}^{I} \sum_{c=1}^{K_{l-1}} \left\| \sum_{k=1}^{K_l} g_{k,c}^{\,l} \left( z_{k,i}^{\,l} \ast f_{k,c}^{\,l} \right) - z_{c,i}^{\,l-1} \right\|_2^2 + \sum_{i=1}^{I} \sum_{k=1}^{K_l} \left| z_{k,i}^{\,l} \right|^p

where the output of the l-th layer is regarded as the input of the (l+1)-th layer; g_{k,c}^{l} indicates the connection between feature map k of layer l and feature map c of layer l-1, being 1 if they are connected and 0 otherwise; C_l(y) is the objective function, and the goal of the optimization is to drive its value toward 0; λ is a coefficient constant; I denotes the number of input images; K_l denotes the number of convolution feature maps of layer l; z_k^{l} is the k-th feature map of layer l; f_{k,c}^{l} is the corresponding convolution kernel; z_c^{l-1} is the c-th feature map of layer l-1; and p is a hyperparameter whose value is adjusted according to the performance of the network and is generally set between (0, 1).
Further, the error value between the predicted image Predict and the ground-truth image Label is computed with a mean squared error loss function as follows:

L(W, B) = \frac{1}{n} \sum_{i=1}^{n} \left\| X_i - Y_i \right\|^2

where X_i is the predicted image Predict, Y_i is the ground-truth image Label, the mapping F that produces X_i is the function the densely connected neural network needs to learn, whose parameters include the weights W and biases B, and n is the number of training samples.
Further, after the error value is computed with the mean squared error loss function, the error value is a three-dimensional matrix whose slices represent the R, G and B channels;
the error values of the different channels are multiplied by different weights, with the G channel receiving the highest weight; the weights come from the conversion formula from a colour image to a grayscale image.
Further, back-propagating to update the parameters of the densely connected neural network includes updating the weights W and biases B, with the following sub-steps:
after forward propagation, the network obtains a predicted high-resolution picture EHR (Estimated High Resolution Image); at this point there is still a considerable gap between the EHR and the true high-resolution picture THR (True High Resolution Image); the gap between EHR and THR is computed by the mean squared error loss function, yielding a value called the loss value ERROR;
since the training objective of the network is to reduce the loss value as far as possible, the influence that each weight W and bias B in the network has on the error value ERROR is obtained by taking the partial derivative of ERROR with respect to each W and B; this yields an update value UpDate for each W and B, and adding this UpDate to the corresponding W or B updates them; said updating, i.e. adding the value of UpDate to the original parameter, makes the loss Error of the final output of the network decrease;
finally the updated weights are used to recompute the forward propagation, iterating continuously; by repeatedly back-propagating and updating the parameters, the output error value Error of the network keeps decreasing, the gap between EHR and THR becomes smaller and smaller, and the quality of the super-resolution pictures generated by the network becomes higher and higher.
The present invention also provides a storage medium on which computer instructions are stored; when the computer instructions are run, the steps of the above image super-resolution method based on a densely connected neural network are executed.
The present invention also provides a terminal, including a memory and a processor, the memory storing computer instructions that can run on the processor; when the processor runs the computer instructions, the steps of the above image super-resolution method based on a densely connected neural network are executed.
The beneficial effects of the present invention are:
(1) The method of the invention can significantly improve the ability of a deep neural network to extract low-frequency and high-frequency image features, improve the image super-resolution effect, and increase the amount of information a picture provides; it is therefore applicable to fields that expect to obtain high-resolution images offering more detail, such as medical and satellite imaging, where a good image super-resolution effect can be obtained. The storage medium and terminal provided by the invention solve the corresponding technical problems as well.
(2) Since every convolutional layer of a densely connected convolution group is linked with the convolutional layers of the other layers, the output of an earlier convolutional layer can be transferred directly to later convolutional layers; that is, the result of an earlier convolution is handed directly to later convolutional layers, which avoids repeated extraction of the same features and guarantees that the network can extract more diverse features.
In addition, because the front and back of a densely connected convolution group are linked, the connections between earlier and later layers are shortened as much as possible: a later layer is linked directly with all the convolutional layers before it, so the distance between layers is shortened, and when gradients are computed during back-propagation the vanishing-gradient problem is effectively contained. The parameters between these densely connected convolutional layers can also be shared, greatly reducing the number of network parameters and accelerating training; the dense-link structure likewise prevents the vanishing-gradient problem during training and stabilizes the training of the network.
Thus the network can improve training speed and stability while improving its feature-extraction ability, which is a very good network construction technique.
(3) Using LeakyReLU also stabilizes the training of the network and prevents the gradient from vanishing.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is a schematic diagram of the connections of the densely connected neural network.
Specific embodiment
The technical solution of the present invention is described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only to facilitate and simplify the description of the present invention and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore shall not be construed as limiting the invention. Moreover, "first" and "second" are used only for descriptive purposes and shall not be understood as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise clearly specified and limited, the terms "installation", "connected" and "connection" shall be understood broadly; for example, they may be a fixed connection, a detachable connection or an integral connection; they may be a mechanical connection or an electrical connection; they may be a direct connection, an indirect connection through an intermediary, or an internal connection between two elements. For a person of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific situation.
In addition, the technical features involved in the different embodiments of the invention described below can be combined with each other as long as they do not conflict.
Embodiment 1
The image super-resolution method based on a densely connected neural network provided in this embodiment has important application value in fields such as surveillance equipment, satellite imagery and medical imaging. The method reconstructs a corresponding high-resolution image from an observed low-resolution image; it is a reconstruction method based on a single low-resolution image, and it solves the problems that prior-art networks are complex and slow.
As shown in Fig. 1, the method includes the following steps:
A: image preprocessing. This embodiment needs an image dataset as the training set of the network. Through this step, the input images read by the network are of size 28x28, and the corresponding high-resolution images of twice that size, i.e. 56x56, are generated. Specifically, this step mainly includes:
A1: randomly crop the training images to obtain the corresponding high-resolution images Label.
Since a larger image requires more computation and more time to produce its super-resolution result, we train on small fragments cut from the training pictures, which speeds up the training of the neural network while guaranteeing the diversity of the training data and improving the effect of the network's super-resolution reconstruction.
In detail, in this step the training images are cropped directly, and this embodiment uses random cropping rather than splitting the image in order. The benefit of doing so is to obtain more data of different types, which is also the first step of our data enhancement.
Three parameters need to be known in A1, namely p, S and h, which respectively represent the size of the input picture, the upscaling factor of the image, and the size of the Label corresponding to the input picture, where:
h = p × S
The value of S must be an integer and generally does not exceed 4; in addition, the value of p is usually set to a multiple of 2 (for example, in this embodiment p = 28 and S = 2, hence h = 56).
Specifically, after cropping, a series of 56x56 image blocks are obtained, which are also the Labels used during our network training.
A2: apply image enhancement to Label. Preferably, in this embodiment, the image enhancement methods used in this step include random flipping, random brightness and contrast adjustment.
Since the pictures collected from the web or elsewhere are limited after all, the neural network can only learn well when supported by a diverse and very large dataset. This embodiment therefore uses a series of image transformations to expand the training dataset 10-fold or even 100-fold, so that the network can be trained adequately; the data enhancement methods mainly involved are random flipping, random brightness and contrast adjustment of the image, and so on.
By comparison, the image enhancement of the prior art is simple rotation and enlargement of pictures, and the diversity of the data finally obtained is not as good as with the method of this embodiment.
A3: downsample the enhanced Label to generate the low-resolution image Input.
Specifically, in this step the 56x56 image blocks are downsampled to obtain 28x28 image blocks, which serve as the input of the network.
The prior art downsamples and then upsamples the training image to obtain a low-resolution image of the same size as the training image, and finally crops the training image and the low-resolution image separately; that is, the whole process crops the images in the dataset twice, wasting a great deal of time, whereas the method of this embodiment only needs to crop the dataset once, saving time.
In addition, in this embodiment the downsampling used to generate the low-resolution image Input from the enhanced Label is computed with bicubic interpolation, and through downsampling each Label yields one low-resolution Patch as the input Input of the neural network.
The general relation is: for an image I of size M*N, downsampling by a factor of s yields an image of size (M/s)*(N/s), where s is a common divisor of M and N; when the image is treated in matrix form, each s*s window of the original image becomes one pixel whose value is the mean of all pixels in the window:

P_k = \frac{1}{s^2} \sum_{I_i \in \mathrm{win}(k)} I_i

where P_k denotes the value of the single pixel left by the k-th s*s image fragment after downsampling, win(k) denotes that s*s image fragment (k indexes the fragments, its range being determined by the image size and by s), and I_i denotes a pixel inside the s*s image fragment.
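To make step A concrete, the sketch below is a minimal Python illustration and an assumption of this description, not part of the patented method: PIL and NumPy are used, the 56x56 patch size and scale factor S = 2 follow the embodiment, the jitter ranges are arbitrary, and the helper names `make_training_pair` and `mean_downsample` are hypothetical. The first helper performs the random crop, enhancement and bicubic downsampling; the second only illustrates the windowed-mean relation P_k above.

```python
# Minimal sketch of preprocessing step A (assumed PIL/NumPy code, not the
# patented implementation; patch size 56 and scale S = 2 per the embodiment).
import random
import numpy as np
from PIL import Image, ImageEnhance

def make_training_pair(img, label_size=56, scale=2):
    # A1: random crop of the high-resolution Label patch.
    x = random.randint(0, img.width - label_size)
    y = random.randint(0, img.height - label_size)
    label = img.crop((x, y, x + label_size, y + label_size))
    # A2: image enhancement - random flip, random brightness and contrast.
    if random.random() < 0.5:
        label = label.transpose(Image.FLIP_LEFT_RIGHT)
    label = ImageEnhance.Brightness(label).enhance(random.uniform(0.8, 1.2))
    label = ImageEnhance.Contrast(label).enhance(random.uniform(0.8, 1.2))
    # A3: bicubic downsampling by the factor S gives the low-resolution Input.
    lr = label_size // scale
    inp = label.resize((lr, lr), Image.BICUBIC)
    return inp, label

def mean_downsample(channel, s):
    # Windowed-mean relation: each s*s window win(k) collapses to one pixel
    # P_k equal to the mean of the pixels inside the window.
    channel = np.asarray(channel, dtype=np.float64)
    h, w = channel.shape
    return channel.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
```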
B: feature extraction. This step mainly includes:
B1: build the densely connected neural network.
Preferably, in this embodiment, the densely connected neural network includes N ordinary convolutional layers, N-1 activation function layers, N-2 densely connected convolution groups and 1 upsampling/deconvolution layer; each densely connected convolution group contains M sequentially connected densely connected convolutional layers, and every densely connected convolutional layer in a group is linked with all the other densely connected convolutional layers in that group;
between adjacent pairs of the first N-1 sequentially connected ordinary convolutional layers, an activation function layer and a densely connected convolution group are linked in sequence, and the output of the (N-1)-th ordinary convolutional layer is further followed in sequence by the (N-1)-th activation function layer, the N-th ordinary convolutional layer and the upsampling/deconvolution layer.
Specifically, as shown in Fig. 2, the densely connected neural network includes 7 ordinary convolutional layers, 5 activation function layers, 5 densely connected convolution groups and 1 upsampling/deconvolution layer, where the ordinary convolutional layers comprise six 3x3 convolutional layers and one 1x1 convolutional layer, the activation function layers use the LeakyReLU activation function, and each densely connected convolution group contains four sequentially connected 3x3 densely connected convolutional layers, every one of which is linked with the other densely connected convolutional layers of that group.
Using LeakyReLU also stabilizes the training of the network and prevents the gradient from vanishing.
We use a convolutional network of more than 20 layers, of which 20 (5*4) convolutional layers use the densely linked construction; within such a dense-link group, each convolutional layer has a direct connection to all the convolutional layers after it.
The benefit of doing so is to maintain communication across the densely connected convolution group: since every convolutional layer of the group is linked with the convolutional layers of the other layers, the output of an earlier convolutional layer can be transferred directly to later convolutional layers; that is, the result of an earlier convolution is handed directly to later convolutional layers, which avoids repeated extraction of the same features and guarantees that the network can extract more diverse features.
In addition, because the front and back of a densely connected convolution group are linked, the connections between earlier and later layers are shortened as much as possible: a later layer is linked directly with all the convolutional layers before it, so the distance between layers is shortened, and when gradients are computed during the back-propagation of step C2 the vanishing-gradient problem is effectively contained. The parameters between these densely connected convolutional layers can also be shared, greatly reducing the number of network parameters and accelerating training; the dense-link structure likewise prevents the vanishing-gradient problem during training and stabilizes the training of the network.
Thus the network can improve training speed and stability while improving its feature-extraction ability, which is a very good network construction technique; a minimal sketch of one such group is given below.
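The following PyTorch sketch of one densely connected convolution group is an illustration only, not the patent's reference implementation: the module name, the LeakyReLU slope and the padding choice are assumptions of this sketch, while the four 3x3 layers and the growth rate of 12 follow the embodiment.

```python
import torch
import torch.nn as nn

class DenseLinkGroup(nn.Module):
    """One densely connected convolution group: num_layers 3x3 convolutions,
    where layer l receives the concatenation of the group input and all
    previous outputs, i.e. k0 + k*(l-1) feature maps."""

    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels + l * growth_rate, growth_rate,
                          kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for l in range(num_layers)
        ])

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees the outputs of every earlier layer in the group.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```

With a growth rate of 12 and four layers, the group emits its input plus 48 additional feature maps, matching the k × (l-1) + k_0 bookkeeping described in step B2 below.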
B2: feed the low-resolution image Input into the entrance of the densely connected neural network, and extract the feature information contained in Input through the network's computation.
Specifically, the input low-resolution picture Input passes through all the convolutional layers built in step B1; each convolutional layer obtains, through its convolution operations, the image information and features it contains, and the computation of each convolutional layer can be expressed as:

F_l = \max(0, W_l \times F_{l-1})

where W_l denotes the weights of the l-th convolutional layer and F_l denotes the feature maps output by the l-th convolutional layer. In the network of this embodiment the convolution kernel size of every convolutional layer is set to 3 × 3, and the feature growth step k of the densely connected convolutional layers is set to 12; hence the l-th convolutional layer in a group works with k × (l-1) + k_0 feature maps, where k_0 is the number of channels of its input (for example, the fourth densely connected layer of a group works with k_0 + 3 × 12 = k_0 + 36 feature maps).
C: predict the super-resolution image and update the network parameters. This step mainly includes:
C1: upsample/deconvolve the feature-extracted image to obtain a predicted image Predict whose resolution reaches the expected high resolution.
After the convolutional part at the front of the network, a feature map containing the low-frequency and high-frequency features of the input image is obtained; a deconvolution operation is applied to this feature map to produce an image at the final target resolution as the final output of the network. This is the high-resolution prediction Predict that the network predicts for the low-resolution input Input.
The deconvolution layer is very similar to hierarchical convolutional sparse coding (Hierarchical Convolution Sparse Coding); the difference is that sparse coding decomposes the image by matrix multiplication, whereas the deconvolution network uses matrix convolution. Similar to the alternating optimization of basis images and combination coefficients during training in sparse coding, each training pass of the deconvolution also needs to alternately optimize the feature filters and the feature maps. The specific function realized is:

C_l(y) = \frac{\lambda}{2} \sum_{i=1}^{I} \sum_{c=1}^{K_{l-1}} \left\| \sum_{k=1}^{K_l} g_{k,c}^{\,l} \left( z_{k,i}^{\,l} \ast f_{k,c}^{\,l} \right) - z_{c,i}^{\,l-1} \right\|_2^2 + \sum_{i=1}^{I} \sum_{k=1}^{K_l} \left| z_{k,i}^{\,l} \right|^p

where the output of the l-th layer is regarded as the input of the (l+1)-th layer; g_{k,c}^{l} indicates the connection between feature map k of layer l and feature map c of layer l-1, being 1 if they are connected and 0 otherwise; C_l(y) is the objective function, and the goal of the optimization is to drive its value toward 0; λ is a coefficient constant; I denotes the number of input images; K_l denotes the number of convolution feature maps of layer l; z_k^{l} is the k-th feature map of layer l; f_{k,c}^{l} is the corresponding convolution kernel; z_c^{l-1} is the c-th feature map of layer l-1; and p is a hyperparameter whose value is adjusted according to the performance of the network and is generally set between (0, 1).
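As a sketch of the upsampling/deconvolution of step C1, the snippet below uses an assumed transposed-convolution realization; the 64 input channels are an arbitrary placeholder, and the patent itself only requires that the feature map be upsampled/deconvolved to the target resolution.

```python
import torch
import torch.nn as nn

# A stride-2 transposed convolution doubles the spatial resolution, e.g. the
# 28x28 feature map from the dense-link part becomes the 56x56 prediction Predict.
deconv = nn.ConvTranspose2d(in_channels=64, out_channels=3,
                            kernel_size=4, stride=2, padding=1)

features = torch.randn(1, 64, 28, 28)   # feature map with low- and high-frequency features
predict = deconv(features)              # shape: (1, 3, 56, 56)
```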
C2: compute the error value between the predicted image Predict and the ground-truth image Label, and back-propagate to update the parameters of the densely connected neural network.
By computing the error value between the final network output Predict and the Label corresponding to the input picture, the parameters involved in the computation are updated through the back-propagation algorithm.
Specifically, the error value between the predicted image Predict and the ground-truth image Label is computed with a mean squared error loss function as follows:

L(W, B) = \frac{1}{n} \sum_{i=1}^{n} \left\| X_i - Y_i \right\|^2

where X_i is the predicted image Predict, Y_i is the ground-truth image Label, the mapping F that produces X_i is the function the densely connected neural network needs to learn, whose parameters include the weights W and biases B (namely the parameters of the densely connected neural network that are updated by back-propagation in step C2), and n is the number of training samples.
In addition, after the error value is computed with the mean squared error loss function, the error value is a three-dimensional matrix whose slices represent the R, G and B channels; the error values of the different channels are multiplied by different weights, with the G channel receiving the highest weight; the weights come from the conversion formula from a colour image to a grayscale image.
A small modification is therefore made when computing the mean squared error in this embodiment: the errors of the different channels are multiplied by different weights, the weights corresponding to B, G and R being 0.11448, 0.58661 and 0.29891 respectively. These three values come from the conversion formula from a colour image to a grayscale image; since the human eye is most sensitive to green, the weight of the G component is the highest. Doing so makes the reconstructed image friendlier in colour and prevents colour cast.
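A minimal sketch of the channel-weighted mean squared error described above is given below as an assumption-laden illustration (PyTorch code; the image tensors are taken to be in B, G, R channel order so that the listed weights line up, and the helper name `weighted_mse` is hypothetical).

```python
import torch

# Per-channel weights from the colour-to-grayscale conversion, in B, G, R order.
BGR_WEIGHTS = torch.tensor([0.11448, 0.58661, 0.29891]).view(1, 3, 1, 1)

def weighted_mse(predict, label):
    # Squared error per pixel and channel, weighted so that the G channel,
    # to which the human eye is most sensitive, contributes the most.
    err = (predict - label) ** 2
    return (err * BGR_WEIGHTS).mean()
```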
As for the back-propagation part, the back-propagation algorithm first propagates the error back to the hidden-layer neurons and adjusts the connection weights from the hidden layer to the output layer and the thresholds of the output-layer neurons; then, according to the mean squared error of the hidden-layer neurons, it adjusts the connection weights from the input layer to the hidden layer and the thresholds of the hidden-layer neurons.
Specifically, back-propagating to update the parameters of the densely connected neural network includes updating the weights W and biases B, with the following sub-steps:
after forward propagation, the network obtains a predicted high-resolution picture EHR (Estimated High Resolution Image); at this point there is still a considerable gap between the EHR and the true high-resolution picture THR (True High Resolution Image); the gap between EHR and THR is computed by the mean squared error loss function, yielding a value called the loss value ERROR;
since the training objective of the network is to reduce the loss value as far as possible, the influence that each weight W and bias B in the network has on the error value ERROR is obtained by taking the partial derivative of ERROR with respect to each W and B; this yields an update value UpDate for each W and B, and adding this UpDate to the corresponding W or B updates them; said updating, i.e. adding the value of UpDate to the original parameter, makes the loss Error of the final output of the network decrease;
finally the updated weights are used to recompute the forward propagation, iterating continuously; by repeatedly back-propagating and updating the parameters, the output error value Error of the network keeps decreasing, the gap between EHR and THR becomes smaller and smaller, and the quality of the super-resolution pictures generated by the network becomes higher and higher.
The specific back-propagation algorithm is as follows, with training set D = {(x_k, y_k)}_{k=1}^{m} and learning rate η:
1. Randomly initialize all weights W and biases B of the network in the range (0, 1);
2. Repeat
3.   For all (x_k, y_k) ∈ D do
4.     Compute the predicted image output ŷ_k of the current sample with the current parameters;
5.     Compute the gradient term g_j of the output-layer neurons;
6.     Compute the gradient term e_h of the hidden-layer neurons;
7.     Update the weights and biases;
8.   End for
9. Until the stopping condition is reached
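In a modern framework the listing above amounts to an ordinary gradient-descent training loop; the hedged PyTorch sketch below is an assumption of this description (model, data loader, epoch count and learning rate are placeholders, automatic differentiation stands in for the hand-written gradient terms g_j and e_h, and `weighted_mse` is the loss helper sketched earlier).

```python
import torch

def train(model, loader, epochs=100, lr=1e-4):
    # Weights W and biases B are randomly initialized when the model is built.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):                      # Repeat ... until stop condition
        for inp, label in loader:                # For all (x_k, y_k) in D
            predict = model(inp)                 # forward propagation -> EHR
            loss = weighted_mse(predict, label)  # loss value ERROR between EHR and THR
            optimizer.zero_grad()
            loss.backward()                      # partial derivatives w.r.t. every W and B
            optimizer.step()                     # add the update UpDate to W and B
    return model
```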
D: super-resolution reconstruction. After training is completed by steps A, B and C, step D performs the super-resolution reconstruction. This step mainly includes:
cutting the image to be super-resolved in order, feeding the Patches obtained after cutting into the densely connected neural network for super-resolution to compute the predicted value Output of each Patch, and splicing the outputs of the Patches in order to obtain the final super-resolution picture; a minimal sketch follows.
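The sketch below illustrates the patch-wise reconstruction of step D under stated assumptions (PyTorch code with a hypothetical `super_resolve` helper): it tiles the low-resolution image into non-overlapping 28x28 patches, runs each through the network, and splices the 2x-scale outputs back in order; handling of borders that do not divide evenly is omitted.

```python
import torch

def super_resolve(model, image, patch=28, scale=2):
    # image: (3, H, W) tensor whose H and W are divisible by the patch size.
    _, h, w = image.shape
    out = torch.zeros(3, h * scale, w * scale)
    with torch.no_grad():
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                tile = image[:, y:y + patch, x:x + patch].unsqueeze(0)
                pred = model(tile).squeeze(0)  # predicted value Output for this Patch
                out[:, y * scale:(y + patch) * scale,
                       x * scale:(x + patch) * scale] = pred
    return out
```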
In addition, it should be noted that each step of the image super-resolution method based on a densely connected neural network provided by this embodiment of the present invention can correspond to a related software module, i.e. the method can be replaced by a corresponding system; for the explanation of the relevant parts, reference is made to the detailed description of the corresponding parts of the method provided in embodiment 1 of the present invention, which is not repeated here. Furthermore, the parts of the above technical solution whose realization principle is consistent with the corresponding technical solution in the prior art are not described in detail, in order to avoid excessive repetition.
Embodiment 2
Based on the realization of embodiment 1, this embodiment also provides a storage medium on which computer instructions are stored; when the computer instructions are run, the steps of the image super-resolution method based on a densely connected neural network described in embodiment 1 are executed.
Based on this understanding, the technical solution of this embodiment, or in other words the part that contributes to the prior art or a part of the technical solution, can essentially be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and other media that can store program code.
Embodiment 3
Based on the realization of embodiment 1, this embodiment also provides a terminal, including a memory and a processor, the memory storing computer instructions that can run on the processor; when the processor runs the computer instructions, the steps of the image super-resolution method based on a densely connected neural network described in embodiment 1 are executed.
Each functional unit in the embodiments provided by the present invention may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit.
Obviously, the above embodiments are merely examples for the sake of clear description and do not limit the embodiments. For a person of ordinary skill in the art, other variations or changes in different forms can also be made on the basis of the above description. It is neither necessary nor possible to exhaust all the embodiments here, and obvious variations or changes derived therefrom still fall within the protection scope of the invention.

Claims (10)

1. An image super-resolution method based on a densely connected neural network, characterized by comprising the following steps:
image preprocessing: randomly cropping the training images to obtain the corresponding high-resolution images Label, applying image enhancement to Label, and downsampling the enhanced Label to generate the low-resolution images Input;
feature extraction: building a densely connected neural network, feeding the low-resolution image Input into the entrance of the densely connected neural network, and extracting the feature information contained in Input through the network's computation;
predicting the super-resolution image and updating the network parameters: upsampling/deconvolving the feature-extracted image to obtain a predicted image Predict whose resolution reaches the expected high resolution; computing the error value between the predicted image Predict and the ground-truth image Label, and back-propagating to update the parameters of the densely connected neural network;
super-resolution reconstruction: cutting the image to be super-resolved in order, feeding the Patches obtained after cutting into the densely connected neural network for super-resolution to compute the predicted value Output of each Patch, and splicing the outputs of the Patches in order to obtain the final super-resolution picture.
2. The image super-resolution method based on a densely connected neural network according to claim 1, characterized in that: the densely connected neural network includes N ordinary convolutional layers, N-1 activation function layers, N-2 densely connected convolution groups and 1 upsampling/deconvolution layer; each densely connected convolution group contains M sequentially connected densely connected convolutional layers, and every densely connected convolutional layer in a group is linked with all the other densely connected convolutional layers in that group;
between adjacent pairs of the first N-1 sequentially connected ordinary convolutional layers, an activation function layer and a densely connected convolution group are linked in sequence, and the output of the (N-1)-th ordinary convolutional layer is further followed in sequence by the (N-1)-th activation function layer, the N-th ordinary convolutional layer and the upsampling/deconvolution layer.
3. The image super-resolution method based on a densely connected neural network according to claim 1, characterized in that: in downsampling the enhanced Label to generate the low-resolution image Input, the principle of the downsampling is described as follows:
for an image I of size M*N, downsampling by a factor of s yields an image of size (M/s)*(N/s), where s is a common divisor of M and N; when the image is treated in matrix form, each s*s window of the original image becomes one pixel whose value is the mean of all pixels in the window:

P_k = \frac{1}{s^2} \sum_{I_i \in \mathrm{win}(k)} I_i

where P_k denotes the value of the single pixel left by the k-th s*s image fragment after downsampling, win(k) denotes that s*s image fragment (k indexes the fragments, its range being determined by the image size and by s), and I_i denotes a pixel inside the s*s image fragment.
4. The image super-resolution method based on a densely connected neural network according to claim 2, characterized in that: the formula for extracting the feature information contained in Input is as follows:

F_l = \max(0, W_l \times F_{l-1})

where W_l denotes the weights of the l-th convolutional layer and F_l denotes the feature maps output by the l-th convolutional layer; hence the l-th convolutional layer works with k × (l-1) + k_0 feature maps, where k_0 is the number of channels of the input and k is the feature growth step of the densely connected convolutional layers.
5. The image super-resolution method based on a densely connected neural network according to claim 2, characterized in that: the upsampling/deconvolution of the feature-extracted image, which obtains a predicted image Predict whose resolution reaches the expected high resolution, is computed as follows:

C_l(y) = \frac{\lambda}{2} \sum_{i=1}^{I} \sum_{c=1}^{K_{l-1}} \left\| \sum_{k=1}^{K_l} g_{k,c}^{\,l} \left( z_{k,i}^{\,l} \ast f_{k,c}^{\,l} \right) - z_{c,i}^{\,l-1} \right\|_2^2 + \sum_{i=1}^{I} \sum_{k=1}^{K_l} \left| z_{k,i}^{\,l} \right|^p

where the output of the l-th layer is regarded as the input of the (l+1)-th layer; g_{k,c}^{l} indicates the connection between feature map k of layer l and feature map c of layer l-1, being 1 if they are connected and 0 otherwise; C_l(y) is the objective function, and the goal of the optimization is to drive its value toward 0; λ is a coefficient constant; I denotes the number of input images; K_l denotes the number of convolution feature maps of layer l; z_k^{l} is the k-th feature map of layer l; f_{k,c}^{l} is the corresponding convolution kernel; z_c^{l-1} is the c-th feature map of layer l-1; and p is a hyperparameter whose value is adjusted according to the performance of the network and is generally set between (0, 1).
6. The image super-resolution method based on a densely connected neural network according to claim 1, characterized in that: the error value between the predicted image Predict and the ground-truth image Label is computed with a mean squared error loss function as follows:

L(W, B) = \frac{1}{n} \sum_{i=1}^{n} \left\| X_i - Y_i \right\|^2

where X_i is the predicted image Predict, Y_i is the ground-truth image Label, the mapping F that produces X_i is the function the densely connected neural network needs to learn, whose parameters include the weights W and biases B, and n is the number of training samples.
7. The image super-resolution method based on a densely connected neural network according to claim 6, characterized in that: after the error value is computed with the mean squared error loss function, the error value is a three-dimensional matrix whose slices represent the R, G and B channels;
the error values of the different channels are multiplied by different weights, with the G channel receiving the highest weight; the weights come from the conversion formula from a colour image to a grayscale image.
8. The image super-resolution method based on a densely connected neural network according to claim 6, characterized in that: back-propagating to update the parameters of the densely connected neural network includes updating the weights W and biases B, with the following sub-steps:
after forward propagation, the network obtains a predicted high-resolution picture EHR (Estimated High Resolution Image); at this point there is still a considerable gap between the EHR and the true high-resolution picture THR (True High Resolution Image); the gap between EHR and THR is computed by the mean squared error loss function, yielding a value called the loss value ERROR;
since the training objective of the network is to reduce the loss value as far as possible, the influence that each weight W and bias B in the network has on the error value ERROR is obtained by taking the partial derivative of ERROR with respect to each W and B; this yields an update value UpDate for each W and B, and adding this UpDate to the corresponding W or B updates them; said updating, i.e. adding the value of UpDate to the original parameter, makes the loss Error of the final output of the network decrease;
finally the updated weights are used to recompute the forward propagation, iterating continuously; by repeatedly back-propagating and updating the parameters, the output error value Error of the network keeps decreasing, the gap between EHR and THR becomes smaller and smaller, and the quality of the super-resolution pictures generated by the network becomes higher and higher.
9. A storage medium on which computer instructions are stored, characterized in that: when the computer instructions are run, the steps of the image super-resolution method based on a densely connected neural network according to any one of claims 1 to 8 are executed.
10. A terminal, including a memory and a processor, the memory storing computer instructions that can run on the processor, characterized in that: when the processor runs the computer instructions, the steps of the image super-resolution method based on a densely connected neural network according to any one of claims 1 to 8 are executed.
CN201811474661.2A 2018-12-04 2018-12-04 Image super-resolution method, storage medium and terminal based on densely connected neural network Pending CN109544457A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811474661.2A CN109544457A (en) 2018-12-04 2018-12-04 Image super-resolution method, storage medium and terminal based on densely connected neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811474661.2A CN109544457A (en) 2018-12-04 2018-12-04 Image super-resolution method, storage medium and terminal based on densely connected neural network

Publications (1)

Publication Number Publication Date
CN109544457A true CN109544457A (en) 2019-03-29

Family

ID=65853665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811474661.2A Pending CN109544457A (en) Image super-resolution method, storage medium and terminal based on densely connected neural network

Country Status (1)

Country Link
CN (1) CN109544457A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111251A (en) * 2019-04-22 2019-08-09 电子科技大学 A kind of combination depth supervision encodes certainly and perceives the image super-resolution rebuilding method of iterative backprojection
CN110189336A (en) * 2019-05-30 2019-08-30 上海极链网络科技有限公司 Image generating method, system, server and storage medium
CN111476740A (en) * 2020-04-28 2020-07-31 北京大米未来科技有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN111881920A (en) * 2020-07-16 2020-11-03 深圳力维智联技术有限公司 Network adaptation method of large-resolution image and neural network training device
CN112016507A (en) * 2020-09-07 2020-12-01 平安科技(深圳)有限公司 Super-resolution-based vehicle detection method, device, equipment and storage medium
CN112084908A (en) * 2020-08-28 2020-12-15 广州汽车集团股份有限公司 Image processing method and system and storage medium
WO2020252764A1 (en) * 2019-06-21 2020-12-24 Intel Corporation Adaptive deep learning model for noisy image super-resolution
CN112233041A (en) * 2020-11-05 2021-01-15 Oppo广东移动通信有限公司 Image beautifying processing method and device, storage medium and electronic equipment
CN112767252A (en) * 2021-01-26 2021-05-07 电子科技大学 Image super-resolution reconstruction method based on convolutional neural network
CN112927354A (en) * 2021-02-25 2021-06-08 电子科技大学 Three-dimensional reconstruction method, system, storage medium and terminal based on example segmentation
CN114612309A (en) * 2022-05-12 2022-06-10 电子科技大学 Full-on-chip dynamic reconfigurable super-resolution device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991646A (en) * 2017-03-28 2017-07-28 福建帝视信息科技有限公司 A kind of image super-resolution method based on intensive connection network
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN108134937A (en) * 2017-12-21 2018-06-08 西北工业大学 A kind of compression domain conspicuousness detection method based on HEVC

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991646A (en) * 2017-03-28 2017-07-28 福建帝视信息科技有限公司 A kind of image super-resolution method based on intensive connection network
CN107154023A (en) * 2017-05-17 2017-09-12 电子科技大学 Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN108134937A (en) * 2017-12-21 2018-06-08 西北工业大学 A kind of compression domain conspicuousness detection method based on HEVC

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
MATTHEW D. ZEILER et al.: "Deconvolutional networks", 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition *
PING KUANG et al.: "Image super-resolution with densely connected convolutional networks", Applied Intelligence *
YIN Baocai et al.: "A Review of Deep Learning Research", Journal of Beijing University of Technology *
LI Li: "Research on Face Image Recognition Methods Based on Sparse Representation", China Master's Theses Full-text Database, Information Science and Technology *
YANG Zihao: "Research on Object Recognition Algorithms Based on Region Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
残月飞雪: "Image upsampling (upsampling) and downsampling (subsampled)", https://blog.csdn.net/majinlei121/article/details/46742339 *
SU Xin: "Research on Network Traffic Analysis and Malicious Behavior Detection of Android Mobile Applications", 31 October 2016 *
QIU Tingting: "FPGA-Based Limb Motion Detection and Implementation", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110111251B (en) * 2019-04-22 2023-04-28 电子科技大学 Image super-resolution reconstruction method combining depth supervision self-coding and perception iterative back projection
CN110111251A (en) * 2019-04-22 2019-08-09 电子科技大学 A kind of combination depth supervision encodes certainly and perceives the image super-resolution rebuilding method of iterative backprojection
CN110189336A (en) * 2019-05-30 2019-08-30 上海极链网络科技有限公司 Image generating method, system, server and storage medium
WO2020252764A1 (en) * 2019-06-21 2020-12-24 Intel Corporation Adaptive deep learning model for noisy image super-resolution
CN111476740A (en) * 2020-04-28 2020-07-31 北京大米未来科技有限公司 Image processing method, image processing apparatus, storage medium, and electronic device
CN111476740B (en) * 2020-04-28 2023-10-31 北京大米未来科技有限公司 Image processing method, device, storage medium and electronic equipment
CN111881920A (en) * 2020-07-16 2020-11-03 深圳力维智联技术有限公司 Network adaptation method of large-resolution image and neural network training device
CN111881920B (en) * 2020-07-16 2024-04-09 深圳力维智联技术有限公司 Network adaptation method of large-resolution image and neural network training device
CN112084908A (en) * 2020-08-28 2020-12-15 广州汽车集团股份有限公司 Image processing method and system and storage medium
CN112016507A (en) * 2020-09-07 2020-12-01 平安科技(深圳)有限公司 Super-resolution-based vehicle detection method, device, equipment and storage medium
CN112016507B (en) * 2020-09-07 2023-10-31 平安科技(深圳)有限公司 Super-resolution-based vehicle detection method, device, equipment and storage medium
CN112233041A (en) * 2020-11-05 2021-01-15 Oppo广东移动通信有限公司 Image beautifying processing method and device, storage medium and electronic equipment
CN112767252A (en) * 2021-01-26 2021-05-07 电子科技大学 Image super-resolution reconstruction method based on convolutional neural network
CN112927354A (en) * 2021-02-25 2021-06-08 电子科技大学 Three-dimensional reconstruction method, system, storage medium and terminal based on example segmentation
CN114612309A (en) * 2022-05-12 2022-06-10 电子科技大学 Full-on-chip dynamic reconfigurable super-resolution device
CN114612309B (en) * 2022-05-12 2022-10-14 电子科技大学 Full-on-chip dynamic reconfigurable super-resolution device

Similar Documents

Publication Publication Date Title
CN109544457A (en) Image super-resolution method, storage medium and terminal based on densely connected neural network
CN105069825B (en) Image super-resolution rebuilding method based on depth confidence network
CN109035142B (en) Satellite image super-resolution method combining countermeasure network with aerial image prior
CN109903221A (en) Image oversubscription method and device
CN107369189A (en) The medical image super resolution ratio reconstruction method of feature based loss
CN106910161A (en) A kind of single image super resolution ratio reconstruction method based on depth convolutional neural networks
DE102018117813A1 (en) Timely data reconstruction with an external recurrent neural network
CN108830913B (en) Semantic level line draft coloring method based on user color guidance
CN107886510A (en) A kind of prostate MRI dividing methods based on three-dimensional full convolutional neural networks
CN106981080A (en) Night unmanned vehicle scene depth method of estimation based on infrared image and radar data
CN109862370A (en) Video super-resolution processing method and processing device
CN109345476A (en) High spectrum image super resolution ratio reconstruction method and device based on depth residual error network
CN108921789A (en) Super-resolution image reconstruction method based on recurrence residual error network
CN109214989A (en) Single image super resolution ratio reconstruction method based on Orientation Features prediction priori
CN109961396A (en) A kind of image super-resolution rebuilding method based on convolutional neural networks
Chen et al. Single image super-resolution using deep CNN with dense skip connections and inception-resnet
CN109785279B (en) Image fusion reconstruction method based on deep learning
CN114897780B (en) MIP sequence-based mesenteric artery blood vessel reconstruction method
CN113177882A (en) Single-frame image super-resolution processing method based on diffusion model
CN109949223A (en) Image super-resolution reconstructing method based on the dense connection of deconvolution
CN112215755A (en) Image super-resolution reconstruction method based on back projection attention network
CN105550989A (en) Image super-resolution method based on nonlocal Gaussian process regression
CN112837224A (en) Super-resolution image reconstruction method based on convolutional neural network
CN112669248A (en) Hyperspectral and panchromatic image fusion method based on CNN and Laplacian pyramid
CN112580473A (en) Motion feature fused video super-resolution reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190329