CN112991181A - Image super-resolution reconstruction method based on reaction diffusion equation - Google Patents


Info

Publication number: CN112991181A (granted as CN112991181B)
Application number: CN202110346999.5A
Authority: CN (China)
Prior art keywords: resolution, image, reaction diffusion, module, feature
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 蒲晓峰, 张乐飞
Current and original assignee: Wuhan University (WHU)
Application filed by Wuhan University (WHU)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007: Scaling based on interpolation, e.g. bilinear interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the field of computer vision and relates to an image super-resolution reconstruction method based on a reaction-diffusion equation. The method proposes a new reaction diffusion module and a cascaded deep framework. The reaction diffusion module embeds the reaction-diffusion equation into the deep model: the parameters needed to solve the equation are generated by the deep model's learning, which guides the generation of local patterns in the image super-resolution reconstruction task and reduces the difficulty of that task. The cascaded deep framework chains several feature transformation and reaction diffusion modules, so that the super-resolution task is divided into small parts completed by parts of the model at different depths, which reduces the difficulty of training. Finally, a new deep super-resolution model is built that greatly reduces the parameter count and the depth of the model without reducing super-resolution performance, lowering the barrier to applying deep super-resolution models in practice.

Description

Image super-resolution reconstruction method based on reaction diffusion equation
Technical Field
The invention belongs to the field of computer vision and relates to an image super-resolution reconstruction method, in particular to a cascaded deep image super-resolution network based on a reaction-diffusion equation.
Background
Image super-resolution reconstruction aims to recover the detail information lost when a low-resolution image is degraded. It has great practical significance in fields such as image classification, image segmentation, and object detection. Super-resolution is an ill-posed problem, and traditional methods such as interpolation, sparse coding, and point-spread modeling do not generalize well to large data sets. Deep learning algorithms can effectively extract latent information shared by images in big data and have greatly improved the performance of image super-resolution algorithms.
At present, deep-learning-based image super-resolution algorithms are studied extensively and deeply; researchers have designed various complex deep network structures for the super-resolution task and greatly improved its performance. However, existing methods lack a guiding mechanism for the pattern-generation part of the super-resolution task and rely solely on the powerful learning capacity of the deep model to complete it.
This leaves the existing deep models with the following problems:
1. the number of parameters is large, which makes training the model difficult;
2. the model is deep, which makes it time-consuming to use;
3. the complex structure of the model makes its practical application very difficult.
It is therefore necessary to design a targeted solution for the pattern-generation mechanism of the image super-resolution problem and to reduce the complexity of deep super-resolution models.
Disclosure of Invention
To overcome these problems, the invention embeds the reaction-diffusion equation into the deep model to provide guidance for the generation of local patterns during image super-resolution, and designs a cascaded deep super-resolution framework around it, which greatly reduces the parameter count and the depth of the model and thus the difficulty of applying it.
The technical scheme adopted by the invention is a cascaded deep model framework based on the reaction-diffusion equation, comprising the following steps.
An image super-resolution reconstruction method based on a reaction diffusion equation is characterized by comprising the following steps:
establishing an image super-resolution reconstruction model, which specifically comprises the following steps:
step 1.1: collecting a plurality of high-resolution images;
step 1.2: using bicubic interpolation to degrade each high-resolution image by several set downsampling factors to obtain low-resolution images; defining each (low-resolution image, high-resolution image it was degraded from) pair as an image sample, gathering all image samples into an image super-resolution data set, and normalizing the data (dividing by the maximum value of the image data, typically 255, so that the value range changes from [0, data maximum] to [0, 1]) for ease of processing;
step 1.3: dividing the data set obtained in step 1.2 into two parts according to the resolution of the high-resolution image; the first part contains the high-resolution images of large size (height × width not less than 1000 × 2000 or 2000 × 1000), which are cropped into small image blocks to obtain (low-resolution block, high-resolution block) pairs as the training set; the second part contains the high-resolution images of smaller size (height × width below 1000 × 2000 or 2000 × 1000), whose (low-resolution image, high-resolution image) pairs are used directly as the test set;
step 1.4: constructing a cascade deep network based on a reaction diffusion equation; the network comprises:
a feature extraction module, which extracts initial features from the low-resolution image with one convolution layer;
a feature transformation module: composed of 8 basic modules and one long-skip layer, each basic module being composed of 4 residual modules and one short-skip layer; the long- and short-skip connections strengthen the fusion of shallow and deep features during the transformation and reduce the difficulty of training a deep network;
a reaction diffusion module: first enlarges the scale of the features with a deconvolution layer, then feeds the transformed features and the estimated super-resolution image together into a reaction-diffusion process, which guides the generation of the local patterns;
step 1.5: updating the parameters of the network with the training set: initializing the network parameters with Kaiming initialization, using the L1 loss, and computing gradients and updating parameters with the Adam optimizer; after a set number of iterations the model converges, yielding the established image super-resolution reconstruction model;
the method for reconstructing the super-resolution image by adopting the established image super-resolution reconstruction model specifically comprises the following steps:
step 2.1, collecting a low-resolution image to be reconstructed, and normalizing the image to obtain processable low-resolution image data;
step 2.2, inputting the low-resolution image data into the established super-resolution model: the feature extraction module extracts features, the feature transformation modules transform them, and the reaction diffusion modules reconstruct to obtain the output of the model;
step 2.3, multiplying the output of the model by 255 to restore the data to the original range [0, 255], obtaining the final image super-resolution reconstruction result.
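Steps 1.1 through 1.3 and the normalization in steps 2.1 and 2.3 can be sketched in a few lines of numpy. This is a minimal illustration, not the patent's implementation: simple s × s block averaging stands in for the bicubic degradation named in step 1.2, and all array sizes are illustrative.

```python
import numpy as np

def degrade(hr, s):
    """Downsample an HR image of shape (H, W, 3) by an integer factor s.
    Plain s-by-s block averaging stands in for the bicubic kernel."""
    h, w = hr.shape[0] // s, hr.shape[1] // s
    return hr[:h * s, :w * s].reshape(h, s, w, s, 3).mean(axis=(1, 3))

def normalize(img):
    """Divide by the data maximum (255) so values move from [0, 255] to [0, 1]."""
    return np.asarray(img, dtype=np.float64) / 255.0

hr = np.random.randint(0, 256, size=(8, 12, 3))   # one toy high-resolution image
lr = degrade(hr, 2)                               # its low-resolution counterpart
sample = (normalize(lr), normalize(hr))           # one (LR, HR) image sample
```

Reversing the normalization, as in step 2.3, is just multiplying the model output by 255.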
The image super-resolution reconstruction method based on the reaction diffusion equation is characterized in that:
the feature extraction module f_e consists of one convolution layer f_conv (the convolution f_conv, the basic unit of the deep model, is determined by a convolution kernel w ∈ R^(c_o × c_i × k × k) and a bias term b ∈ R^(c_o), where c_i is the number of channels of the input data, c_o the number of output channels, and k the convolution kernel size, typically 3; by setting the stride to 1 and zero-padding the edges, the invention lets f_conv keep the spatial size of the input data unchanged). The input is the low-resolution image x_LR ∈ R^(3×h×w), i.e. image data with the 3 channels of color RGB (red, green, blue); h and w are the height and width of the input low-resolution image, so the input data is a stack of 3 matrices (3 input channels), each of size h × w. The output is the initial feature F_0 ∈ R^(64×h×w), a stack of 64 matrices (64 output channels), each storing one aspect of the image data. The feature is determined by the parameters of this convolution layer, w_0 ∈ R^(64×3×3×3) and b_0 ∈ R^(64), i.e. F_0 = f_e(x_LR) = f_conv(w_0, b_0, x_LR), with w_0, b_0 determined according to step 1.5.
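The shape-preserving property of f_conv can be checked with a short numpy sketch. This is an illustrative implementation under the stated settings (stride 1, zero-padded edges, k = 3); the loop-based code favors clarity over speed, and the random weights are placeholders for the learned w_0, b_0.

```python
import numpy as np

def f_conv(w, b, x):
    """Stride-1 convolution with zero-padded edges, so the spatial size of
    the input is kept. w: (c_out, c_in, k, k); b: (c_out,); x: (c_in, h, w)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))   # zero padding of the edges
    h, wd = x.shape[1], x.shape[2]
    out = np.empty((c_out, h, wd))
    for o in range(c_out):
        acc = np.full((h, wd), b[o])           # start from the bias term
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    acc += w[o, i, dy, dx] * xp[i, dy:dy + h, dx:dx + wd]
        out[o] = acc
    return out

rng = np.random.default_rng(0)
x_lr = rng.random((3, 8, 8))                   # toy RGB low-resolution input
w0 = 0.1 * rng.normal(size=(64, 3, 3, 3))      # placeholder for learned w_0
b0 = np.zeros(64)                              # placeholder for learned b_0
f0 = f_conv(w0, b0, x_lr)                      # initial feature F_0, (64, h, w)
```

The output keeps the h × w spatial size of the input, exactly as the stride-1, zero-padded setting requires.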
In the image super-resolution reconstruction method based on the reaction diffusion equation, the feature transformation module f_t consists of 8 basic modules f_bm1, f_bm2, f_bm3, f_bm4, f_bm5, f_bm6, f_bm7, f_bm8 and one long-skip layer f_conv. The input is a data feature F_(i-1) ∈ R^(64×h×w), i = 1, 2, 3, 4, and the output is the transformed feature F_i ∈ R^(64×h×w). The input features are transformed by the 8 basic modules in turn, then one convolution extracts new features that are added to the input features, i.e.:
F_i = F_(i-1) + f_conv(w_l, b_l, f_bm8(f_bm7(f_bm6(f_bm5(f_bm4(f_bm3(f_bm2(f_bm1(F_(i-1))))))))));
a basic module f_bm consists of 4 residual modules f_res1, f_res2, f_res3, f_res4 and one short-skip layer f_conv; the input is a data feature F_in ∈ R^(64×h×w) and the output is the transformed feature F_out ∈ R^(64×h×w); analogous to the structure above,
F_out = F_in + f_conv(w_s, b_s, f_res4(f_res3(f_res2(f_res1(F_in)))));
a residual module f_res has the structure "convolution layer - ReLU activation function - convolution layer", computed as
F_out = F_in + f_conv(w_res2, b_res2, f_ReLU(f_conv(w_res1, b_res1, F_in))),
where f_ReLU(F) = max(0, F) keeps only the entries of F larger than 0 and sets the entries smaller than 0 to 0;
the parameters w, b of all convolution functions in the feature transformation module are determined according to step 1.5.
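The residual module's "conv - ReLU - conv plus identity shortcut" structure can be sketched as follows. This is only an illustration: 1 × 1 convolutions (a per-pixel channel mix) stand in for the module's 3 × 3 convolution layers to keep the code short, and the random weights are placeholders for learned parameters.

```python
import numpy as np

def f_relu(x):
    """f_ReLU(F) = max(0, F): keep positive entries, zero out the rest."""
    return np.maximum(0.0, x)

def conv1x1(w, b, x):
    """1x1 convolution: mixes the 64 channels at each pixel independently.
    Stands in for the 3x3 layers of the patent's residual module."""
    return np.einsum('oi,ihw->ohw', w, x) + b[:, None, None]

def f_res(w1, b1, w2, b2, x):
    """Residual module: F_out = F_in + conv(ReLU(conv(F_in)))."""
    return x + conv1x1(w2, b2, f_relu(conv1x1(w1, b1, x)))

rng = np.random.default_rng(1)
f_in = rng.random((64, 4, 4))
w1, b1 = 0.01 * rng.normal(size=(64, 64)), np.zeros(64)
w2, b2 = 0.01 * rng.normal(size=(64, 64)), np.zeros(64)
f_out = f_res(w1, b1, w2, b2, f_in)   # same (64, h, w) shape as the input
```

Note the design property the skip connection gives: if the second convolution is zero, the module reduces to the identity, which is what makes deep stacks of such modules easy to train.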
In the image super-resolution reconstruction method based on the reaction diffusion equation, the reaction diffusion module f_rd consists of 8 convolution layers and 1 deconvolution layer. Its inputs are the transformed feature F_i ∈ R^(64×h×w) according to claim 3 and the estimated super-resolution result SR_rough ∈ R^(3×H×W); its output is the super-resolution result SR_fine ∈ R^(3×H×W) guided by the reaction-diffusion equation. If the current module is the 1st reaction diffusion module, SR_rough is obtained by enlarging the low-resolution image x_LR with the parameter-free bicubic interpolation f_bi at magnification s (s is the enlargement factor of the super-resolution reconstruction; H and W are the height and width of the output super-resolution image, computed from the input height h and width w as H = sh, W = sw); if the current module is not the 1st reaction diffusion module, SR_rough is the output of the previous reaction diffusion module. The specific working steps of the reaction diffusion module are:
a. use deconvolution to enlarge the feature F_i ∈ R^(64×h×w) to the scale of the high-resolution image, obtaining F_new ∈ R^(64×H×W), H = sh, W = sw, where s is the enlargement factor of the super-resolution reconstruction: F_new = f_deconv(w_de, b_de, F_i). (Deconvolution, also called transposed convolution, is a variant of the convolution f_conv; an ordinary convolution does not change the spatial size, i.e. width and height, of the input data, y_out = f_conv(w, b, x_in). To obtain an enlarged output, the transposed convolution first enlarges the input by a specific factor, for example 2, by padding zeros inside the data, and then applies an ordinary convolution to the enlarged data to obtain the output; the whole operation is written f_deconv.)
b. use convolution operations to generate the parameters V, c_0, c_1, c_2, c_3, d_0, d_2, d_3 ∈ R^(3×H×W) required to solve the reaction-diffusion equation:
[reaction-diffusion equation; given in the original as an image]
The reaction-diffusion equation defines how two components U, V ∈ R^(3×H×W) vary with time t; U_t, V_t ∈ R^(3×H×W) are the derivatives of U and V with respect to t, and ΔU, ΔV are the results of applying the Laplace operator to U and V. V = f_conv(w_v, b_v, F_new) expresses the V component of the reaction-diffusion equation; c_0 = f_conv(w_c0, b_c0, F_new), c_1 = f_conv(w_c1, b_c1, F_new), c_2 = f_conv(w_c2, b_c2, F_new), c_3 = f_conv(w_c3, b_c3, F_new), d_0 = f_conv(w_d0, b_d0, F_new), d_2 = f_conv(w_d2, b_d2, F_new), d_3 = f_conv(w_d3, b_d3, F_new) are the parameters required by the equation; they determine its local solution, i.e. they define the locally generated patterns.
c. take the predicted image SR_rough as the initial value U_0 of the U component and V = f_conv(w_v, b_v, F_new) as the initial value V_0 of the V component, and solve the equation by Euler iteration:
U_n = U_(n-1) + dt · U_t, V_n = V_(n-1) + dt · V_t, n = 1, 2, 3, 4,
where n is the iteration round and the time step dt is taken as 1. The output of the reaction diffusion module is SR_fine = U_4.
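The Euler-iteration mechanics of step c can be sketched in numpy. The patent's own reaction terms appear in the original only as an image, so the classic Gray-Scott reaction-diffusion system is used here purely as a stand-in to show the discrete Laplacian and the four explicit Euler steps; the diffusion and reaction coefficients below are illustrative assumptions, not the patent's learned parameters.

```python
import numpy as np

def laplacian(u):
    """5-point Laplacian (the delta-U / delta-V terms), periodic borders."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)

def euler_solve(u, v, steps=4, dt=1.0, du=0.2, dv=0.1, feed=0.04, kill=0.06):
    """Explicit Euler: U_n = U_{n-1} + dt * U_t (likewise V), n = 1..4,
    dt = 1 as in the module. Gray-Scott reaction terms are a stand-in
    for the patent's equation, which is not reproduced here."""
    for _ in range(steps):
        uvv = u * v * v
        u_t = du * laplacian(u) - uvv + feed * (1.0 - u)
        v_t = dv * laplacian(v) + uvv - (feed + kill) * v
        u, v = u + dt * u_t, v + dt * v_t
    return u

rng = np.random.default_rng(2)
u0 = np.ones((16, 16))            # U_0: plays the role of SR_rough
v0 = 0.1 * rng.random((16, 16))   # V_0: in the patent, predicted by a convolution
sr_fine = euler_solve(u0, v0)     # the module outputs U_4
```

With only four fixed Euler steps the solve is cheap and fully differentiable, which is what allows the surrounding convolutions to learn the equation's parameters end to end.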
In the image super-resolution reconstruction method based on the reaction diffusion equation, the cascaded deep network based on the reaction-diffusion equation is denoted f_net. f_net consists of: 1 feature extraction module f_e, 4 feature transformation modules f_t1, f_t2, f_t3, f_t4, and 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4. The input is a low-resolution image x_LR and the output is the super-resolution result SR. The data are processed as follows:
a. the feature extraction module f_e extracts the initial feature F_0 from the input image data x_LR: F_0 = f_e(x_LR);
b. the 4 feature transformation modules f_t1, f_t2, f_t3, f_t4 transform the features in turn, producing F_1, F_2, F_3, F_4, where the output F_i of the i-th (i = 1, 2, 3, 4) feature transformation module is obtained by transforming the previous feature: F_i = f_ti(F_(i-1));
c. the 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4 use the features F_1, F_2, F_3, F_4 in turn for image super-resolution reconstruction. First the parameter-free bicubic interpolation f_bi upsamples the low-resolution image x_LR by the magnification factor to obtain the initial estimate SR_0 = f_bi(x_LR); then the super-resolution reconstruction result of the low-resolution image is obtained through the guidance of the 4 reaction diffusion modules in sequence, where the output SR_i of the i-th (i = 1, 2, 3, 4) reaction diffusion module is reconstructed from the previous result SR_(i-1) and the output F_i of the i-th feature transformation module: SR_i = f_rdi(SR_(i-1), F_i). The output SR_4 of the 4th reaction diffusion module is the final image super-resolution result SR, i.e.
SR = f_net(x_LR) = f_rd4(f_rd3(f_rd2(f_rd1(f_bi(x_LR), F_1), F_2), F_3), F_4),
where F_1 = f_t1(f_e(x_LR)), F_2 = f_t2(F_1), F_3 = f_t3(F_2), F_4 = f_t4(F_3).
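The cascade's dataflow, SR_i = f_rd_i(SR_(i-1), F_i) with F_i = f_t_i(F_(i-1)), can be traced with toy modules. Every function below is a deliberately trivial stand-in (nearest-neighbour repetition for f_bi, constant shifts for the real learned modules); only the wiring between the four stages reflects the patent.

```python
import numpy as np

def f_bi(x, s=2):
    """Parameter-free upsampling of (3, h, w) by factor s; nearest-neighbour
    repetition stands in for bicubic interpolation."""
    return np.repeat(np.repeat(x, s, axis=1), s, axis=2)

def f_t(feat):
    """Toy feature transformation module (the real one is 8 basic modules)."""
    return feat + 0.01

def f_rd(sr_prev, feat, s=2):
    """Toy reaction diffusion module: refines SR_{i-1} using feature F_i,
    brought to high-resolution scale as the deconvolution layer would."""
    return sr_prev + 0.1 * f_bi(feat, s)

x_lr = np.random.random((3, 4, 4))
feat = x_lr.copy()                  # toy initial feature F_0 = f_e(x_LR)
sr = f_bi(x_lr)                     # SR_0 = f_bi(x_LR)
for _ in range(4):                  # the four cascaded (f_t, f_rd) stages
    feat = f_t(feat)                # F_i = f_t_i(F_{i-1})
    sr = f_rd(sr, feat)             # SR_i = f_rd_i(SR_{i-1}, F_i)
```

Each stage only refines the previous estimate, so every depth of the network is responsible for a small part of the overall task, which is the point of the cascade.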
The invention has the beneficial effects that:
(1) The invention provides a reaction diffusion module based on the reaction-diffusion equation, which gives effective guidance for the generation of local patterns during super-resolution reconstruction and reduces the difficulty of the super-resolution task.
(2) The invention builds a cascaded deep model framework from these modules, dividing the super-resolution reconstruction into 4 sub-stages handled by parts of the network at different depths, which reduces the difficulty of training the model.
(3) The invention constructs a cascaded deep super-resolution model on this basis, greatly reducing the parameter count of the model, the depth of the model, and the difficulty of applying it, without reducing super-resolution performance.
Drawings
FIG. 1 shows the deep-image super-resolution reconstruction framework of an embodiment of the present invention.
FIG. 2 shows the structure of the deep super-resolution network in an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention provides a depth image super-resolution method based on a reaction diffusion equation, which comprises the following steps:
Step 1: acquire a large number of high-resolution color RGB images (HR) {y_i}, i = 1, ..., n, with y_i ∈ [0, 255]^(3 × H_i × W_i), where y_i denotes a high-resolution image, n the number of high-resolution images, and i indexes the different images; [0, 255] is the value range of the image data, and H_i × W_i is the spatial size (height × width) of each high-resolution image y_i.
Step 2: use the bicubic method (in the invention, MATLAB's imresize function with its default parameters) to degrade the high-resolution images by factors of 2, 3, and 4, obtaining low-resolution images (LR) x_is, where x_is denotes the low-resolution image obtained by reducing the high-resolution image y_i by the factor s; s is the scaling factor of the super-resolution reconstruction and takes values in {2, 3, 4}; h_is × w_is is the spatial size of x_is, where h_is = H_i / s and w_is = W_i / s. The low-resolution images x_is and the high-resolution images y_i together form the image super-resolution data set {(x_is, y_i)}. The data are normalized for processing by dividing by the maximum image value 255, so that the value range changes from [0, 255] to [0, 1].
Step 3: divide the data set obtained in step 2 into two parts according to the resolution of the high-resolution images. The first part contains the high-resolution images of larger size (height × width not less than 1000 × 2000 or 2000 × 1000); these images are cropped into smaller image blocks (x_iks, y_iks), k = 1, ..., m_is, which serve as the training set, where x_iks denotes the k-th small block cut from the low-resolution image x_is, y_iks the corresponding high-resolution image block, and m_is the total number of blocks cropped from the i-th low-resolution image x_is. The second part contains the high-resolution images of smaller size (height × width below 1000 × 2000 or 2000 × 1000); these images are used directly as the test set.
Step 4: construct the cascaded deep network f_net based on the reaction-diffusion equation (FIG. 2). f_net consists of: 1 feature extraction module f_e, 4 feature transformation modules f_t1, f_t2, f_t3, f_t4, and 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4. The input is a low-resolution image x_LR and the output is the super-resolution result SR. The data are processed as follows:
a. the feature extraction module f_e extracts the initial feature F_0 from the input image data x_LR: F_0 = f_e(x_LR);
b. the 4 feature transformation modules f_t1, f_t2, f_t3, f_t4 transform the features in turn, producing F_1, F_2, F_3, F_4, where the output F_i of the i-th (i = 1, 2, 3, 4) feature transformation module is obtained by transforming the previous feature: F_i = f_ti(F_(i-1));
c. the 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4 use the features F_1, F_2, F_3, F_4 in turn for image super-resolution reconstruction. First the parameter-free bicubic interpolation f_bi upsamples the low-resolution image x_LR by the magnification factor to obtain the initial estimate SR_0 = f_bi(x_LR); then the super-resolution reconstruction result of the low-resolution image is obtained through the guidance of the 4 reaction diffusion modules in sequence, where the output SR_i of the i-th (i = 1, 2, 3, 4) reaction diffusion module is reconstructed from the previous result SR_(i-1) and the output F_i of the i-th feature transformation module: SR_i = f_rdi(SR_(i-1), F_i). The output SR_4 of the 4th reaction diffusion module is the final image super-resolution result SR, i.e.
SR = f_net(x_LR) = f_rd4(f_rd3(f_rd2(f_rd1(f_bi(x_LR), F_1), F_2), F_3), F_4),
where F_1 = f_t1(f_e(x_LR)), F_2 = f_t2(F_1), F_3 = f_t3(F_2), F_4 = f_t4(F_3).
Step 5: update the parameters of the network using the training set. The parameters are initialized in the Kaiming manner, from a normal distribution with mean 0 and standard deviation 0.0035. The L1 loss is L1(SR_iks, y_iks) = ||SR_iks - y_iks||_1, where || ||_1 averages the absolute values of all elements and SR_iks is the output of the deep super-resolution model f_net for the low-resolution image block x_iks. For each parameter update, the scaling factor s is fixed and 16 samples are randomly selected from the training set; the sum of their L1 losses, loss = Σ_iks L1(SR_iks, y_iks), is used to compute the gradient. Writing w_all ∈ R^N for the parameters of f_net, with N the total number of parameters, the gradient is g = ∂loss / ∂w_all. The invention updates the parameters with Adam (the adaptive moment estimation optimizer):
m_k = β_1 · m_(k-1) + (1 - β_1) · g_k, v_k = β_2 · v_(k-1) + (1 - β_2) · g_k²,
w_all ← w_all - lr · m̂_k / (√(v̂_k) + ε), with m̂_k = m_k / (1 - β_1^k) and v̂_k = v_k / (1 - β_2^k),
where k denotes the iteration round and m_k, v_k ∈ R^N are the first-order and second-order statistics of the gradient at the k-th iteration, with β_1 = 0.5 and β_2 = 0.7. After 1,000,000 iterations the model converges and can be used to complete the image super-resolution reconstruction task.
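One Adam update on an L1 objective can be sketched on a toy one-parameter model. The moment coefficients β_1 = 0.5, β_2 = 0.7 are the ones quoted above; the learning rate, the toy model sr = w · x, and the iteration count are assumptions made only for illustration.

```python
import numpy as np

def l1_loss(sr, y):
    """L1(SR, y) = || SR - y ||_1: mean of absolute element differences."""
    return np.abs(sr - y).mean()

def adam_step(w, g, m, v, k, lr=0.05, b1=0.5, b2=0.7, eps=1e-8):
    """One Adam update with the betas quoted in the patent (0.5, 0.7);
    the learning rate here is an illustrative assumption."""
    m = b1 * m + (1.0 - b1) * g            # first-order statistic m_k
    v = b2 * v + (1.0 - b2) * g * g        # second-order statistic v_k
    m_hat = m / (1.0 - b1 ** k)            # bias correction
    v_hat = v / (1.0 - b2 ** k)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy one-parameter "model" sr = w * x trained toward y = 2x with the L1 loss.
rng = np.random.default_rng(3)
x = rng.random(100)
y = 2.0 * x
w, m, v = 0.0, 0.0, 0.0
for k in range(1, 201):
    g = np.sign(w * x - y) @ x / x.size    # subgradient of the mean L1 loss
    w, m, v = adam_step(w, g, m, v, k)
```

The L1 subgradient is simply the sign of the residual, which is why L1 training is robust to large per-pixel errors; Adam's normalization then keeps the step size stable regardless of the gradient's scale.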
Step 6: collect a low-resolution image to be reconstructed and normalize it to obtain processable low-resolution image data. Input the low-resolution image data into the established super-resolution model: the feature extraction module extracts features, the feature transformation modules transform them, and the reaction diffusion modules reconstruct to obtain the output of the model. Multiply the output of the model by 255 to restore the data to the original range [0, 255], obtaining the final image super-resolution reconstruction result.
The specific implementation steps of the image super-resolution reconstruction method of the invention are described above. The whole process covers data acquisition, data preprocessing, deep network construction, and image super-resolution reconstruction.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (5)

1. An image super-resolution reconstruction method based on a reaction diffusion equation is characterized by comprising the following steps:
establishing an image super-resolution reconstruction model, which specifically comprises the following steps:
step 1.1: collecting a plurality of high-resolution images;
step 1.2: using bicubic interpolation to degrade each high-resolution image by several set downsampling factors to obtain low-resolution images; defining each (low-resolution image, high-resolution image it was degraded from) pair as an image sample, gathering all image samples into an image super-resolution data set, and normalizing the data (dividing by the maximum value of the image data, typically 255, so that the value range changes from [0, data maximum] to [0, 1]) for ease of processing;
step 1.3: dividing the data set obtained in step 1.2 into two parts according to the resolution of the high-resolution image and a set resolution threshold; the first part, whose high-resolution images exceed the threshold, is cropped into small image blocks to obtain (low-resolution block, high-resolution block) pairs as the training set; the second part, whose high-resolution images fall below the threshold, provides (low-resolution image, high-resolution image) samples used directly, without cropping, as the test set;
step 1.4: constructing a cascade deep network based on a reaction diffusion equation; the network comprises:
a feature extraction module, which extracts initial features from the low-resolution image with one convolution layer;
a feature transformation module: composed of 8 basic modules and one long-skip layer, each basic module being composed of 4 residual modules and one short-skip layer; the long- and short-skip connections strengthen the fusion of shallow and deep features during the transformation and reduce the difficulty of training a deep network;
a reaction diffusion module: first enlarges the scale of the features with a deconvolution layer, then feeds the transformed features and the estimated super-resolution image together into a reaction-diffusion process, which guides the generation of the local patterns;
step 1.5: updating the parameters of the network with the training set: initializing the network parameters with Kaiming initialization, using the L1 loss, and computing gradients and updating parameters with the Adam optimizer; after a set number of iterations the model converges, yielding the established image super-resolution reconstruction model;
the method for reconstructing the super-resolution image by adopting the established image super-resolution reconstruction model specifically comprises the following steps:
step 2.1, collecting a low-resolution image to be reconstructed, and normalizing the image to obtain processable low-resolution image data;
step 2.2, inputting the low-resolution image data into the established super-resolution model: the feature extraction module extracts features, the feature transformation modules transform them, and the reaction diffusion modules reconstruct to obtain the output of the model;
step 2.3, multiplying the output of the model by 255 to restore the data to the original range [0, 255], obtaining the final image super-resolution reconstruction result.
2. The image super-resolution reconstruction method based on the reaction diffusion equation according to claim 1, wherein: the feature extraction module f_e consists of 1 convolution layer f_conv. The convolution f_conv is the basic unit of the depth model and is determined by a convolution kernel w ∈ R^(c_o×c_i×k×k) and a bias term b ∈ R^(c_o), where c_i denotes the number of channels of the input data, c_o the number of output channels, and k the convolution kernel size. The input is a low-resolution image x_LR ∈ R^(3×h×w): 3 is the number of channels of the input color RGB (red, green, blue) image data, and h and w are respectively the height and width of the input low-resolution image; that is, the input data is a stack of 3 matrices, each of dimension h×w. The output is the initial feature F_0 ∈ R^(64×h×w); that is, the initial feature is a stack of 64 matrices (the number of output channels is 64), each storing one aspect of the features of the image data. The feature is determined by the parameters of this convolution layer, w_0 ∈ R^(64×3×3×3) and b_0 ∈ R^(64): F_0 = f_e(x_LR) = f_conv(w_0, b_0, x_LR), with w_0, b_0 determined according to step 1.5.
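As an illustrative sketch, f_conv can be written directly in NumPy ("convolution" here is the cross-correlation used in deep learning). The naive loop version below is for clarity only, not speed, and the random inputs are stand-ins for real data and trained weights.

```python
import numpy as np

def f_conv(w, b, x):
    # "same" 2-D convolution: w (c_o, c_i, k, k), b (c_o,), x (c_i, h, w)
    # -> output (c_o, h, w); spatial size is preserved by zero padding
    c_o, c_i, k, _ = w.shape
    _, h, wd = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.empty((c_o, h, wd))
    for o in range(c_o):
        acc = np.zeros((h, wd))
        for i in range(c_i):
            for u in range(k):
                for v in range(k):
                    acc += w[o, i, u, v] * xp[i, u:u + h, v:v + wd]
        out[o] = acc + b[o]
    return out

rng = np.random.default_rng(0)
x_lr = rng.normal(size=(3, 6, 7))          # stand-in 3-channel input x_LR
w0 = 0.1 * rng.normal(size=(64, 3, 3, 3))  # kernel w_0 as in claim 2
F0 = f_conv(w0, np.zeros(64), x_lr)        # F_0 = f_conv(w_0, b_0, x_LR)
```

The output F0 has shape (64, h, w), matching the claim's F_0 ∈ R^(64×h×w).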
3. The image super-resolution reconstruction method based on the reaction diffusion equation according to claim 1, wherein: the feature transformation module f_t is composed of 8 basic modules f_bm1, f_bm2, f_bm3, f_bm4, f_bm5, f_bm6, f_bm7, f_bm8 connected with a long skip layer f_conv. The input is the data feature F_(i-1) ∈ R^(64×h×w), i = 1, 2, 3, 4, and the output is the transformed feature F_i ∈ R^(64×h×w). The input feature is transformed by the 8 basic modules in sequence, and a convolution then extracts new features that are added to the input feature to obtain the transformed feature, namely:
F_i = F_(i-1) + f_conv(w_l, b_l, f_bm8(f_bm7(f_bm6(f_bm5(f_bm4(f_bm3(f_bm2(f_bm1(F_(i-1))))))))));
the basic module f_bm is composed of 4 residual modules f_res1, f_res2, f_res3, f_res4 connected with a short skip layer f_conv. The input is the data feature F_in ∈ R^(64×h×w) and the output is the transformed feature F_out ∈ R^(64×h×w), with a structure analogous to the above, namely F_out = F_in + f_conv(w_s, b_s, f_res4(f_res3(f_res2(f_res1(F_in)))));
the residual module f_res has the structure "convolution layer - ReLU activation function - convolution layer" and is computed as
F_out = F_in + f_conv(w_res2, b_res2, f_ReLU(f_conv(w_res1, b_res1, F_in))),
where f_ReLU(F) = max(0, F), i.e. only the entries of F greater than 0 are retained and entries less than 0 are set to 0;
the parameters w, b of all convolution functions in the feature transformation module are determined according to step 1.5 of claim 1.
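The nesting of skip connections in claim 3 can be sketched as NumPy control flow. This is a structural sketch only: 1×1 convolutions stand in for the real 3×3 layers, random weights stand in for trained ones, and the channel count is reduced from 64 to 8 to keep the demo tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_wb(c):
    # small random 1x1-conv weights; illustrative stand-in for learned params
    return 0.1 * rng.normal(size=(c, c)), np.zeros(c)

def relu(x):
    return np.maximum(0.0, x)

def conv1x1(wb, x):
    # stand-in for f_conv: a 1x1 convolution mixes channels via a matmul
    w, b = wb
    return np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]

def residual_module(p, x):
    # f_res: "conv - ReLU - conv" with an identity shortcut
    return x + conv1x1(p[1], relu(conv1x1(p[0], x)))

def basic_module(p, x):
    # f_bm: 4 residual modules plus a short skip convolution
    y = x
    for rp in p["res"]:
        y = residual_module(rp, y)
    return x + conv1x1(p["short"], y)

def feature_transform(p, x):
    # f_t: 8 basic modules in sequence plus a long skip convolution:
    # F_i = F_{i-1} + f_conv(w_l, b_l, f_bm8(...f_bm1(F_{i-1})...))
    y = x
    for bp in p["bm"]:
        y = basic_module(bp, y)
    return x + conv1x1(p["long"], y)

c = 8  # reduced channel count (the patent uses 64)
params = {
    "bm": [{"res": [(rand_wb(c), rand_wb(c)) for _ in range(4)],
            "short": rand_wb(c)} for _ in range(8)],
    "long": rand_wb(c),
}
F0 = rng.normal(size=(c, 6, 6))
F1 = feature_transform(params, F0)
```

The long and short additions (`x + ...`) are exactly the skip connections that claim 1 says ease the training of the deep network.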
4. The image super-resolution reconstruction method based on the reaction diffusion equation according to claim 1, wherein: the reaction diffusion module f_rd consists of 8 convolution layers and 1 deconvolution layer. The input is the transformed feature F_i ∈ R^(64×h×w) according to claim 3 together with an estimated super-resolution result SR_rough ∈ R^(3×H×W), and the output is the super-resolution result SR_fine ∈ R^(3×H×W) guided by the reaction diffusion equation. If the current module is the 1st reaction diffusion module, SR_rough is obtained by enlarging the low-resolution image x_LR with the parameter-free bicubic interpolation method f_bi at magnification s (s is the amplification factor of the super-resolution reconstruction; H and W are respectively the height and width of the output super-resolution image, computed from the height h and width w of the input low-resolution image as H = sh, W = sw); if the current module is not the 1st reaction diffusion module, SR_rough is the output of the previous reaction diffusion module. The specific working steps of the reaction diffusion module comprise:
a. a deconvolution, also called transposed convolution, is used; it is a variant of the convolution f_conv. An ordinary convolution does not change the spatial dimensions, width w and height h, of its input x_in ∈ R^(64×h×w), i.e. the output y_out = f_conv(w, b, x_in) ∈ R^(64×h×w). To obtain an enlarged output, the transposed convolution first enlarges the input data according to the amplification factor s of the image super-resolution (s ≥ 2, s a positive integer) by filling zeros inside the data, giving enlarged data x̃ ∈ R^(64×H×W); a convolution then transforms x̃ to obtain the deconvolution output F_new ∈ R^(64×H×W). The deconvolution thus enlarges the low-resolution image features to the scale of the high-resolution image features, facilitating subsequent processing;
b. a convolution operation is used to generate the parameters V, c_0, c_1, c_2, c_3, d_0, d_2, d_3 ∈ R^(3×H×W) required to solve the reaction diffusion equation [the equation itself is given as an image in the original filing: a system for U_t and V_t in terms of ΔU, ΔV and these parameters]. The reaction diffusion equation defines how two components U, V ∈ R^(3×H×W) vary with time t; U_t, V_t ∈ R^(3×H×W) are the derivatives of U and V with respect to t, and ΔU, ΔV are the results of transforming U and V with the Laplace operator.
V = f_conv(w_v, b_v, F_new) expresses the V component of the reaction diffusion equation;
c_0 = f_conv(w_c0, b_c0, F_new), c_1 = f_conv(w_c1, b_c1, F_new), c_2 = f_conv(w_c2, b_c2, F_new), c_3 = f_conv(w_c3, b_c3, F_new), d_0 = f_conv(w_d0, b_d0, F_new), d_2 = f_conv(w_d2, b_d2, F_new), d_3 = f_conv(w_d3, b_d3, F_new) are the parameters required by the reaction diffusion equation; they determine its local solutions, i.e. define the locally generated patterns;
c. the predicted image SR_rough serves as the initial value U_0 of the U component of the reaction diffusion equation, and V = f_conv(w_v, b_v, F_new) as the initial value V_0 of the V component; the equations are solved by Euler iteration:
U_n = U_(n-1) + dt · U_t, V_n = V_(n-1) + dt · V_t, n = 1, 2, 3, 4,
where n is the iteration index and the time step dt is taken as 1; the output of the reaction diffusion module is SR_fine = U_4.
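Steps a and c of the module can be sketched in NumPy. The zero-filling upsampling of the transposed convolution and the explicit Euler iteration are standard; since the exact reaction terms appear only as an equation image in the original filing, the right-hand sides U_t and V_t are passed in as callables rather than hard-coded, and the boundary handling (replicated borders) is an assumption.

```python
import numpy as np

def zero_interleave(x, s):
    # step a: enlarge (c, h, w) -> (c, s*h, s*w) by inserting zeros between
    # pixels; a transposed convolution is this upsampling followed by a conv
    c, h, w = x.shape
    up = np.zeros((c, s * h, s * w), dtype=x.dtype)
    up[:, ::s, ::s] = x
    return up

def laplacian(u):
    # 5-point discrete Laplace operator, replicated (Neumann) borders assumed
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def euler_solve(u0, v0, rhs_u, rhs_v, dt=1.0, steps=4):
    # step c: explicit Euler iteration
    # U_n = U_{n-1} + dt * U_t,  V_n = V_{n-1} + dt * V_t
    u, v = u0, v0
    for _ in range(steps):
        du = rhs_u(u, v)
        dv = rhs_v(u, v)
        u = u + dt * du
        v = v + dt * dv
    return u  # SR_fine = U_4
```

In the patent, `u0` would be SR_rough, `v0` the convolution-generated V component, and `rhs_u`/`rhs_v` would combine the Laplacian terms with the convolution-generated coefficients c_0..c_3, d_0, d_2, d_3.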
5. The image super-resolution reconstruction method based on the reaction diffusion equation according to claim 1, wherein: the cascaded deep network based on the reaction diffusion equation is denoted f_net and is composed of 1 feature extraction module f_e, 4 feature transformation modules f_t1, f_t2, f_t3, f_t4, and 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4. The input is the low-resolution image x_LR and the output is the super-resolution result SR; the data processing steps are as follows:
a. the feature extraction module f_e extracts the initial feature F_0 from the input processed image data x_LR: F_0 = f_e(x_LR);
b. the 4 feature transformation modules f_t1, f_t2, f_t3, f_t4 perform feature transformation in sequence to obtain F_1, F_2, F_3, F_4, where the output F_i of the i-th (i = 1, 2, 3, 4) feature transformation module is obtained by transforming the feature F_(i-1) of the previous stage with the module f_ti: F_i = f_ti(F_(i-1));
c. the 4 reaction diffusion modules f_rd1, f_rd2, f_rd3, f_rd4 use the resulting features F_1, F_2, F_3, F_4 in sequence for image super-resolution reconstruction. First, the parameter-free bicubic interpolation method f_bi upsamples the low-resolution image x_LR by the magnification factor to obtain the initial super-resolution estimate SR_0 = f_bi(x_LR). The super-resolution reconstruction of the low-resolution image is then obtained under the guidance of the 4 reaction diffusion modules in turn, where the output SR_i of the i-th (i = 1, 2, 3, 4) reaction diffusion module is reconstructed by the module f_rdi from the super-resolution result SR_(i-1) of the previous stage and the output F_i of the i-th feature transformation module: SR_i = f_rdi(SR_(i-1), F_i). The output SR_4 of the 4th reaction diffusion module is the final image super-resolution result SR, i.e.
SR = f_net(x_LR) = f_rd4(f_rd3(f_rd2(f_rd1(f_bi(x_LR), F_1), F_2), F_3), F_4),
where F_1 = f_t1(f_e(x_LR)), F_2 = f_t2(F_1), F_3 = f_t3(F_2), F_4 = f_t4(F_3).
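The cascade of claim 5 reduces to alternating feature transforms and reaction diffusion refinements. A control-flow sketch follows, with the modules passed in as callables; the dummy modules and the nearest-neighbour stand-in for the bicubic f_bi in the demo are assumptions for illustration only.

```python
import numpy as np

def cascaded_sr(x_lr, f_e, f_ts, f_rds, f_bi):
    # claim 5: SR = f_rd4(f_rd3(f_rd2(f_rd1(f_bi(x_LR), F1), F2), F3), F4)
    # with F1 = f_t1(f_e(x_LR)) and F_i = f_ti(F_{i-1})
    feat = f_e(x_lr)           # F_0: initial feature
    sr = f_bi(x_lr)            # SR_0: parameter-free upsampling
    for f_t, f_rd in zip(f_ts, f_rds):
        feat = f_t(feat)       # F_i = f_ti(F_{i-1})
        sr = f_rd(sr, feat)    # SR_i = f_rdi(SR_{i-1}, F_i)
    return sr                  # SR = SR_4

# demo with dummy modules and x2 nearest-neighbour upsampling as f_bi
s = 2
x_lr = np.random.default_rng(0).normal(size=(3, 8, 8))
nearest = lambda x: x.repeat(s, axis=1).repeat(s, axis=2)
sr = cascaded_sr(
    x_lr,
    f_e=lambda x: x,               # dummy feature extractor
    f_ts=[lambda f: f] * 4,        # dummy feature transforms
    f_rds=[lambda sr, f: sr] * 4,  # dummy reaction diffusion modules
    f_bi=nearest,
)
```

With real modules, each f_rdi would refine SR_(i-1) at the high-resolution scale while the features keep flowing through the transformation branch at the low-resolution scale.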
Publications (2)

Publication Number Publication Date
CN112991181A 2021-06-18
CN112991181B 2023-03-24

