CN111932461A - Convolutional neural network-based self-learning image super-resolution reconstruction method and system - Google Patents

Convolutional neural network-based self-learning image super-resolution reconstruction method and system

Info

Publication number
CN111932461A
Authority
CN
China
Prior art keywords
image
neural network
convolutional neural
reconstructed
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010802461.6A
Other languages
Chinese (zh)
Other versions
CN111932461B (en)
Inventor
徐健
高艳
范九伦
赵凤
赵小强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Posts and Telecommunications
Original Assignee
Xian University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Posts and Telecommunications filed Critical Xian University of Posts and Telecommunications
Priority to CN202010802461.6A priority Critical patent/CN111932461B/en
Publication of CN111932461A publication Critical patent/CN111932461A/en
Application granted granted Critical
Publication of CN111932461B publication Critical patent/CN111932461B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G06N3/084 - Backpropagation, e.g. using gradient descent
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046 - Scaling of whole images or parts thereof using neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a self-learning image super-resolution reconstruction method and system based on a convolutional neural network. The method comprises the following steps: step 1, obtaining training samples of an image to be reconstructed; step 2, constructing a convolutional neural network, the convolutional neural network comprising a feature extraction unit, a feature enhancement unit, a residual unit and a reconstruction unit; step 3, training the convolutional neural network constructed in step 2 on the training samples obtained in step 1 to obtain a trained reconstruction convolutional neural network; and step 4, performing super-resolution reconstruction of the image to be reconstructed with the reconstruction convolutional neural network trained in step 3. The method can effectively alleviate the shortage of training samples in self-learning algorithms and avoid network overfitting; at the same time, a high-resolution image with a higher peak signal-to-noise ratio and better visual quality can be obtained.

Description

Convolutional neural network-based self-learning image super-resolution reconstruction method and system
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a self-learning image super-resolution reconstruction method and system based on a convolutional neural network.
Background
With the rapid development of social intelligence and informatization, images have become an important way for people to acquire information and have very important application value in fields such as monitoring equipment, satellite image remote sensing, video restoration and medical imaging. Because of their high pixel density, high-resolution images can provide more of the detailed information that matters for digital image processing; however, limitations of imaging equipment, illumination and other conditions often mean that the acquired images have low resolution. Effectively improving the quality of the captured image has therefore become a critical task in image processing, and image super-resolution reconstruction is currently one of the main means of improving image resolution.
From the perspective of the algorithm model, existing image super-resolution algorithms fall into three categories: interpolation-based, reconstruction-based and learning-based. Interpolation-based algorithms are the most widely used, and reconstruction-based and learning-based algorithms are often combined with them. The basic idea of reconstruction-based super-resolution is to recover a high-resolution image through the inverse of the degradation model; total variation regularization is a popular approach in this class, but the traditional total variation regularization algorithm produces a large number of artifacts at edges, which seriously degrades the visual quality of the high-resolution image. Learning-based algorithms can in turn be divided into two categories: external learning and self-learning. External learning algorithms have two non-negligible disadvantages: first, they take a significant amount of time to train the model; second, parameters trained for one magnification can generally only be used for that magnification, and other magnifications require training additional sets of parameters. Self-learning algorithms do not depend on an external database; the learning and training process can be completed using the information of the image itself, which effectively avoids the drawbacks of external learning.
At present, most image super-resolution methods with good performance use a convolutional neural network for feature extraction and image reconstruction. Shocher et al. successfully combined a convolutional neural network with a self-learning algorithm, realizing super-resolution reconstruction from a single image and obtaining good reconstruction results. Because structural self-similarity exists among the image blocks within a single image, this property can provide a certain number of samples for a self-learning algorithm. However, self-learning algorithms still have shortcomings: first, the training samples are insufficient; second, the samples generated by exploiting the multi-scale self-similarity of the image can cause the network to overfit. How to construct a convolutional network adapted to the self-learning algorithm is therefore the main problem addressed by the present method.
In summary, conventional self-learning image super-resolution methods generally suffer from insufficient training samples and networks that overfit easily, so a new self-learning image super-resolution reconstruction method based on a convolutional neural network is urgently needed.
Disclosure of Invention
The invention aims to provide a self-learning image super-resolution reconstruction method and system based on a convolutional neural network, so as to solve one or more of the above technical problems. The method can effectively alleviate the shortage of training samples in self-learning algorithms and avoid network overfitting; at the same time, a high-resolution image with a higher peak signal-to-noise ratio and better visual quality can be obtained.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a self-learning image super-resolution reconstruction method based on a convolutional neural network, which comprises the following steps:
step 1, obtaining training samples of an image to be reconstructed; the training samples include high- and low-resolution training sample pairs;
step 2, constructing a convolutional neural network; the convolutional neural network includes:
a feature extraction unit comprising a convolutional layer; the input of the convolutional layer is the image to be reconstructed, and its output is a feature map;
a feature enhancement unit comprising a plurality of convolutional layers; the input of the first of these convolutional layers is the feature map extracted by the feature extraction unit, and the input of each remaining layer is the output of the previous layer; the outputs of the convolutional layers are all feature maps;
a residual unit comprising a plurality of convolutional layers; the residual unit is provided with:
a long skip connection for connecting the image to be reconstructed with the result obtained by the convolutional neural network; and
a short skip connection for transferring the output of each convolutional layer of the feature enhancement unit to the corresponding convolutional layer of the residual unit;
step 3, training the convolutional neural network constructed in step 2 on the training samples obtained in step 1 to obtain a trained reconstruction convolutional neural network;
and step 4, performing super-resolution reconstruction of the image to be reconstructed based on the reconstruction convolutional neural network trained in step 3.
The further improvement of the present invention is that, in step 1, the step of obtaining the training sample of the image to be reconstructed specifically includes:
step 1.1, down-sampling the image I to be reconstructed at different ratios to obtain, in addition to the image itself, a number of down-sampled versions I_n, n ∈ Z+, which together form the initial samples;
step 1.2, expanding the initial samples obtained in step 1.1 to obtain the training samples of the image to be reconstructed;
wherein the expansion is performed as follows:
I_e = f(I_n, A, M);
in the formula, I_e is the expanded image sample, f is the enhancement operation performed on the sample set I_n, A denotes rotating the image by different angles, and M denotes mirror-flipping the image.
A further improvement of the invention is that, in step 2, in the feature extraction unit of the constructed convolutional neural network, the convolution kernel size is 3 × 3 and a [3, 3, 3, 64] filter is selected.
A further improvement of the invention is that, in step 2, the feature enhancement unit of the constructed convolutional neural network linearly stacks the extracted features, and the stacking process is expressed as:
F_{n+1} = a*F_n + (1-a)*F_{n-1}
where F_n denotes the image features extracted at the current layer, F_{n-1} denotes the output features of the previous layer, F_{n+1} denotes the input of the layer following the current layer, n denotes the index of the hidden layer, and a is a product factor determined through extensive experiments;
and an activation layer is added for enhancement after the linear stacking operation is completed, expressed as:
F_{n+1} = R(C(F_n, F_{n-1}))
where C denotes the linear stacking operation and R denotes the activation operation.
The invention has the further improvement that the value of n is 2, 3, 4 and 5; and a is 0.6.
A further improvement of the invention is that, in step 2, in the residual unit of the constructed convolutional neural network,
the expression for the long skip connection is:
I_output = I_input + F_final
where I_input and I_output denote the input and output of the convolutional neural network, respectively, and F_final denotes what is learned by the last layer of the network during training.
A further improvement of the invention is that, in step 2, in the residual unit of the constructed convolutional neural network,
the expression for the short skip connection is:
F_{p+1} = F_p + F_{q-p}
where F denotes a feature extraction operation, F_{p+1} refers to the input of layer p+1, F_p and F_{q-p} are the outputs of the respective hidden layers, and p and q each denote layer indices of the network.
A further improvement of the invention is that the method also comprises the following step:
and 5, taking the super-resolution reconstruction image obtained in the step 4 as an image to be reconstructed, and repeating the steps 1 to 4.
The invention has the further improvement that the step 5 specifically comprises the following steps:
feeding the super-resolution reconstructed image obtained in step 4 back to the input through a back-propagation algorithm, and repeating steps 1 to 4; during the repetition, the mean square error is used as the loss function, the parameters are adjusted according to the loss function, and the iteration is repeated to obtain the final target image with the best super-resolution effect.
The invention discloses a self-learning image super-resolution reconstruction system based on a convolutional neural network, which comprises the following components:
the sample acquisition module is used for acquiring a training sample of an image to be reconstructed; the training sample includes: training sample pairs with high and low resolutions;
a convolutional neural network, comprising:
a feature extraction unit comprising: laminating the layers in a roll; the input of the convolution layer is an image to be reconstructed, and the output is a characteristic diagram;
a feature enhancement unit comprising: a plurality of convolutional layers; the input of the first layer in the plurality of convolutional layers is a feature map extracted by the feature extraction unit, and the input of the rest layers is the output of the previous layer; the outputs of the plurality of convolutional layers are all characteristic graphs;
a residual unit comprising: a plurality of convolutional layers; the residual error unit is provided with:
the long jump connection is used for connecting the image to be reconstructed with the result obtained by the convolutional neural network;
a short jump connection for transferring the output of each convolution layer of the feature enhancement unit to each convolution layer of the residual error unit, respectively;
the training reconstruction module is used for training the convolutional neural network according to the obtained training sample to obtain a trained reconstructed neural network; and performing super-resolution reconstruction on the image to be reconstructed based on the trained reconstruction convolutional neural network.
Compared with the prior art, the invention has the following beneficial effects:
the self-learning image super-resolution reconstruction method based on the convolutional neural network realizes self super-resolution reconstruction by using a single image. The invention builds a lightweight convolutional neural network, and the network comprises a feature extraction unit, a feature enhancement unit, a residual error unit and a reconstruction unit. In the training process, an input image is downsampled to form a high-low resolution image pair, and the image pair is used for training a network, so that richer high-low resolution details can be obtained, more high-frequency details can be recovered, and the loss of image details can be avoided; the invention aims to obtain more high-resolution details, so a residual error unit is added in the invention, the loss of image information in the training process is avoided, and a result with better reconstruction effect can be obtained. By combining the operations, a high-resolution image with higher peak signal-to-noise ratio and better visual effect can be obtained.
In the invention, the sample enhancement is carried out on the single image, so that the network under-fitting can be avoided.
In the invention, a convolutional neural network is built and the image is reconstructed by learning the mapping relationship between pairs of high- and low-resolution training samples. Because the training data are obtained from the image itself, the image exhibits self-similarity and the data distribution is concentrated; unlike networks that depend on external training, which require a large number of high- and low-resolution sample pairs with some degree of variety, the network can converge quickly and its structure can remain relatively simple, with no need to construct complex units to learn the mapping between high- and low-resolution sample pairs. Extensive prior experiments show that a single image contains abundant intrinsic information; the invention exploits the recurrence of this information to build a relatively lightweight, simple network that adapts to the particular characteristics of each image and obtains good super-resolution results by extracting feature information. The invention also constructs a residual unit: since information loss is inevitable as the image is passed between convolutional layers, the residual unit can be added to compensate for the loss of image information.
Ample evidence shows that low-resolution images contain abundant low-frequency detail, yet the feature extraction stage of the invention performs only shallow feature extraction on the image. If, as in previous deep network structures, these shallow features were simply passed to the later hidden layers for further feature extraction, much important detail would clearly be lost, and neither the inherent deep information of the image nor the power of a deep learning network would be fully exploited. The invention therefore introduces a feature enhancement unit to extract the deep features of the image. The image features extracted by the previous layer are stacked with the features extracted by the current layer; because the two feature maps contain much repeated information, a great deal of time would be spent learning this repeated information if no redundancy-removal operation were applied to them. The invention mainly performs deep extraction on the shallow features extracted by the front layers of the network without introducing redundant parameters, and experiments show that the convergence speed of the network is improved and better super-resolution results are obtained.
The invention can recover the high-resolution image with better visual effect, and the high-resolution image has wide application in work and life. For example: the method has important application values in the fields of monitoring equipment, satellite image remote sensing, digital high definition, microscopic imaging, video coding communication, video restoration, medical images and the like. The high-resolution image can provide more important detail information for digital image processing due to high pixel density, and lays a good foundation for image post-processing. In conclusion, the invention has wider application range and great significance.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
FIG. 1 is a schematic flow chart of a self-learning image super-resolution reconstruction method based on a convolutional neural network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the long skip connection used in the residual unit of the network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the short skip connection used in the residual unit of the network according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of the constructed network according to an embodiment of the present invention;
FIG. 5 is a schematic comparison of the super-resolution results of the method of the present invention and various other methods on the face image of the embodiment;
FIG. 6 is a schematic comparison of the super-resolution results of the method of the present invention and various other methods on building image 1 of the embodiment;
FIG. 7 is a schematic comparison of the super-resolution results of the method of the present invention and various other methods on building image 2 of the embodiment.
Detailed Description
In order to make the purpose, technical effect and technical solution of the embodiments of the present invention clearer, the following clearly and completely describes the technical solution of the embodiments of the present invention with reference to the drawings in the embodiments of the present invention; it is to be understood that the described embodiments are only some of the embodiments of the present invention. Other embodiments, which can be derived by one of ordinary skill in the art from the disclosed embodiments without inventive faculty, are intended to be within the scope of the invention.
Referring to fig. 1 to 4, a self-learning image super-resolution reconstruction method based on a convolutional neural network according to an embodiment of the present invention includes the following specific steps:
step 1, selecting a proper sample enhancement mode to enhance and expand a training sample;
step 2, building a convolutional neural network and reconstructing the image by learning the mapping relationship between pairs of high- and low-resolution training samples. Because the training data are obtained from the image itself, the image exhibits self-similarity and the data distribution is concentrated; unlike networks that depend on external training, which require a large number of high- and low-resolution sample pairs with some degree of variety, the network can converge quickly and its structure can remain relatively simple, with no need to construct complex units to learn the mapping between high- and low-resolution sample pairs. Extensive prior experiments show that a single image contains abundant intrinsic information; the invention exploits the recurrence of this information to build a relatively lightweight, simple network that adapts to the particular characteristics of each image and obtains good super-resolution results by extracting feature information.
In the embodiment of the invention, the specific steps of the step 1 comprise:
step 1.1, inputting a low-resolution image I;
step 1.2, because a single image is reconstructed, no external samples are used and the training samples are limited; the invention first down-samples the input low-resolution image I at different ratios to obtain the image together with several down-sampled versions I_n (n ∈ Z+), and these images are used as the input data for training the network;
step 1.3, although step 1.2 expands the number of samples, the training samples are still far from sufficient and can easily cause under-fitting of the network, so the method also applies enhancement operations such as rotation and mirroring to the training samples to expand them; the specific expansion in the embodiment of the invention is given by the following formula:
I_e = f(I_n, A, M);
where I_e is the expanded image sample, f is the enhancement operation performed on the sample set I_n, A denotes rotating the image by different angles, and M denotes mirror-flipping the image.
In the embodiment of the invention, the step 2 comprises the following specific steps:
step 2.1, constructing a feature extraction unit, and performing shallow extraction on the features of the image by using the convolutional layer;
step 2.2, constructing a feature enhancement unit to perform deep extraction of the image features obtained in step 2.1;
step 2.3, constructing a residual unit: information loss is inevitable when the image is passed between convolutional layers, so a residual unit can be added to compensate for the loss of image information;
step 2.4, constructing an image reconstruction unit, which inputs the test image into the trained network for super-resolution reconstruction.
In the embodiment of the present invention, the specific steps of step 2.1 include:
(1) inputting a low-resolution image I;
(2) designing the feature extraction layer: because feature extraction is performed directly on the down-sampled low-resolution image, and in order to avoid losing information while keeping the computation under control, the convolution kernel size is chosen as 3 × 3; the input image is a three-channel RGB image, so a [3, 3, 3, 64] filter is selected to extract the shallow features of the image.
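A minimal sketch of this shallow feature-extraction unit is shown below, written in PyTorch (the patent names no framework, so the framework and class name are assumptions). The single 3 × 3 convolution maps the 3-channel RGB input to 64 feature maps, matching the [3, 3, 3, 64] filter described above.

```python
import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    """Shallow feature extraction: one 3x3 convolution, 3 input channels -> 64 feature maps."""
    def __init__(self):
        super().__init__()
        # padding=1 keeps the spatial size of the low-resolution input unchanged
        self.conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)

    def forward(self, x):      # x: (N, 3, H, W) low-resolution RGB image
        return self.conv(x)    # returns (N, 64, H, W) shallow feature maps
```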
In the embodiment of the present invention, the specific steps of step 2.2 include:
(1) removing redundancy from the features extracted in step 2.1. Ample evidence shows that low-resolution images contain abundant low-frequency detail, yet the feature extraction stage of the invention performs only shallow feature extraction on the image. If, as in previous deep network structures, these shallow features were simply passed to the later hidden layers for further feature extraction, much important detail would clearly be lost, and neither the inherent deep information of the image nor the power of a deep learning network would be fully exploited. The invention therefore introduces an enhancement unit to extract the deep features of the image. The image features extracted by the previous layer are therefore stacked with the features extracted by the current layer; because the two feature maps contain much repeated information, a great deal of time would be spent learning repeated information if no redundancy-removal operation were applied to them.
In the embodiment of the present invention, the stacking is linear, and the stacking process is as follows:
F_{n+1} = a*F_n + (1-a)*F_{n-1}
where F_n denotes the image features extracted at the current layer, F_{n-1} denotes the output features of the previous layer, and F_{n+1} denotes the input of the layer following the current layer; n denotes the index of the hidden layer (n = 2, ..., 5), and to control the depth of the network only 4 layers are selected for enhancement; a is a product factor determined through extensive experiments: when a is 0.6, the network performance and the image reconstruction effect are best.
(2) After the linear stacking operation is completed, the invention adds an activation layer to better fit the relationship between the image features extracted by the two hidden layers.
In the embodiment of the present invention, the whole enhancement process is expressed as follows:
F_{n+1} = R(C(F_n, F_{n-1}))
where C denotes the linear stacking operation and R denotes the activation operation. The invention mainly performs deep extraction on the shallow features extracted by the front layers of the network without introducing redundant parameters, and experiments show that the convergence speed of the network is improved and better super-resolution results are obtained.
In the embodiment of the present invention, the specific steps of step 2.3 include:
(1) In order to make full use of the intrinsic information of the image, the invention also introduces a residual unit. In this unit the invention uses a long skip connection and short skip connections. Because the low-resolution image contains abundant low-frequency information that can be used directly for super-resolution, the invention uses a long skip connection to connect the input low-frequency image with the details obtained by the convolutional neural network, as shown in the following formula:
I_output = I_input + F_final
Since the invention builds an end-to-end network structure, I_input and I_output denote the input and output of the network, respectively, and F_final denotes what is learned by the last layer of the network during training, which contains many valuable high-frequency details;
(2) Unlike previous residual learning, the present invention uses several short skip connections to transmit the outputs of the front layers of the network to the later layers; this recursive pattern can be expressed as follows:
F_{p+1} = F_p + F_{q-p}
where F denotes the operation of extracting features; it should be noted that F_{p+1} refers to the input of layer p+1, while F_p and F_{q-p} are the outputs of the respective hidden layers; p and q both denote layer indices of the network, with q referring to the total number of layers; in order not to increase the complexity of the network, the invention also restricts the range of p to (1, ..., q/2-1).
In the embodiment of the present invention, the specific steps of step 2.4 include:
(1) inputting a low-resolution image I;
(2) down-sampling the low-resolution image I input in (1) to obtain a down-sampled image I↓s, inputting this down-sampled image into the network to extract features and learn parameters, and obtaining the super-resolution reconstruction HI↓s corresponding to I↓s;
(3) feeding the reconstructed image HI↓s obtained in (2) back to the input through the back-propagation (BP) algorithm and repeating all of the above steps; during this process the mean squared error (MSE) is used as the loss function, the parameters are adjusted according to the loss function, and the iteration is repeated to finally obtain the target image I↑s with the best super-resolution effect.
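A minimal PyTorch sketch of this self-learning training loop is given below: the training pair is built from the input image itself (I↓s → I), the network is optimized with an MSE loss and back-propagation, and the trained network is then applied to the input image to produce I↑s. The optimizer, iteration count and the use of bicubic pre-upsampling to match spatial sizes are assumptions; the patent specifies only the MSE loss and back-propagation.

```python
import torch
import torch.nn.functional as F

def self_learn_super_resolve(net, lr_image, scale=2, steps=1000, learning_rate=1e-3):
    """net: the reconstruction CNN; lr_image: (1, 3, H, W) low-resolution tensor I."""
    optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
    # Build the training pair from the image itself: input I downsampled, target I
    down = F.interpolate(lr_image, scale_factor=1.0 / scale, mode='bicubic',
                         align_corners=False)
    inp = F.interpolate(down, size=lr_image.shape[-2:], mode='bicubic',
                        align_corners=False)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.mse_loss(net(inp), lr_image)   # mean squared error loss
        loss.backward()                          # back-propagation
        optimizer.step()
    # Apply the trained network to the original image to obtain the HR result
    up = F.interpolate(lr_image, scale_factor=scale, mode='bicubic',
                       align_corners=False)
    with torch.no_grad():
        return net(up)
```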
In summary, the method of the embodiment of the present invention improves the resolution of a single image by self-learning: the low-resolution image itself is used as the training and testing sample to train a convolutional neural network, which then reconstructs the low-resolution image. The proposed method aims to exploit the intrinsic information of a single image: when a low-resolution image I is reconstructed by super-resolution, the image is first down-sampled to obtain a down-sampled image I↓s (s is the sampling factor), the convolutional layers of the network are used to learn the mapping relationship between I↓s and I, and finally the trained network is used for super-resolution reconstruction of the low-resolution image to obtain the reconstructed image I↑s, i.e. a high-resolution image.
The embodiment of the invention provides a self-learning image super-resolution reconstruction system based on a convolutional neural network, which comprises the following steps:
the sample acquisition module is used for acquiring a training sample of an image to be reconstructed; the training sample includes: training sample pairs with high and low resolutions;
a convolutional neural network, comprising:
a feature extraction unit comprising: laminating the layers in a roll; the input of the convolution layer is an image to be reconstructed, and the output is a characteristic diagram;
a feature enhancement unit comprising: a plurality of convolutional layers; the feature graph extracted by the input feature extraction unit of the first layer in the plurality of convolution layers, and the input of the rest layers is the output of the previous layer; the outputs of the plurality of convolutional layers are all characteristic graphs;
a residual unit comprising: a plurality of convolutional layers; the residual error unit is provided with:
the long jump connection is used for connecting the image to be reconstructed with the result obtained by the convolutional neural network;
a short jump connection for transferring the output of each convolution layer of the feature enhancement unit to each convolution layer of the residual error unit, respectively;
the training reconstruction module is used for training the convolutional neural network according to the obtained training sample to obtain a trained reconstructed convolutional neural network; and performing super-resolution reconstruction on the image to be reconstructed based on the trained reconstruction convolutional neural network.
The working principle of the method disclosed by the embodiment of the invention is as follows: based on a convolutional neural network, the resolution of a single image is improved by self-learning, i.e. the low-resolution image is used as the training and testing sample to train the convolutional neural network and thereby reconstruct the image. The invention aims to exploit the intrinsic information of a single image: when a low-resolution image I is super-resolved, the image is first down-sampled to obtain a down-sampled image I↓s (s is the sampling factor), a convolutional neural network is designed to learn the mapping relationship between I↓s and I, and the trained network is then applied to the super-resolution reconstruction of the low-resolution image to obtain the image I↑s, i.e. a high-resolution image.
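Under the assumption that the FeatureExtraction, EnhanceBlock and ResidualUnit sketches above are in scope, the following sketch shows how the units could be assembled into the full reconstruction network, with the long skip connection adding the input image to the learned details (I_output = I_input + F_final). The layer count and the final 64-to-3 reconstruction convolution are illustrative assumptions, not values fixed by the patent.

```python
import torch.nn as nn

class SelfLearningSRNet(nn.Module):
    """Feature extraction -> feature enhancement -> residual unit -> reconstruction."""
    def __init__(self, channels=64, enhance_layers=4):
        super().__init__()
        self.extract = FeatureExtraction()
        self.enhance = nn.ModuleList([EnhanceBlock(channels)
                                      for _ in range(enhance_layers)])
        self.residual = ResidualUnit(channels, num_layers=enhance_layers)
        self.reconstruct = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, x):                  # x: (N, 3, H, W) image to be reconstructed
        f = self.extract(x)                # shallow feature maps
        early = []
        for block in self.enhance:
            f = block(f)                   # linear stacking + activation
            early.append(f)                # kept for the short skip connections
        deep = self.residual(f, early)     # residual unit with short skips
        return x + self.reconstruct(deep)  # long skip: I_output = I_input + F_final
```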
Experimental comparative analysis of the embodiments of the invention: the image super-resolution reconstruction effect is measured by comparing and calculating the peak signal-to-noise ratio (PSNR).
The Mean Square Error (MSE) may reflect the difference between the reconstructed image and the original image, as shown below:
MSE = (1/(m*n)) * Σ_{i=1}^{m} Σ_{j=1}^{n} (X_{i,j} - Y_{i,j})^2
where m and n are the numbers of rows and columns of the image data, X_{i,j} is the pixel value in row i and column j of the original image, and Y_{i,j} is the pixel value in row i and column j of the reconstructed image;
the peak signal-to-noise ratio (PSNR) reflects the fidelity of the reconstructed image, and the calculation formula is as follows:
PSNR = 10 * log10(L^2 / MSE)
wherein: l represents the dynamic variation range of the image pixels.
The network built by the invention comprises an enhancement unit, a residual unit and a linear superposition unit. In order to verify the necessity and effectiveness of adding these units, the invention designs four network structures and compares them on the Urban100 data set with a sampling factor of 2×. The four network structures are: structure 1, enhancement unit + residual unit; structure 2, enhancement unit + linear superposition unit; structure 3, residual unit + linear superposition unit; structure 4, enhancement unit + residual unit + linear superposition unit. The peak signal-to-noise ratios of the four network structures are shown in Table 1 below:
table 1: comparison of peak signal-to-noise ratios for four different network architectures
[Table 1 is provided as an image in the original patent publication.]
As can be seen from table 1, the PSNR value of structure 4 is the highest, that is, when the network combines the enhancement unit, the residual unit, and the linear superposition unit, the super-resolution reconstruction effect on the image is better.
Referring to fig. 5-7, the data results are compared as shown in table 2:
table 2: comparison of peak signal-to-noise ratios of different algorithms
[Table 2 is provided as an image in the original patent publication.]
As can be seen from the results in Table 2, when images are reconstructed on the Set5, Set14 and Urban100 data sets with sampling factors of 2, 3 and 4, the method of the present invention obtains high-resolution images with a higher peak signal-to-noise ratio and better visual quality than the machine learning method A+, the external deep learning method EDSR and the deep-learning self-learning method ZSSR, and it has a wider range of application. Compared with A+, EDSR and ZSSR, the peak signal-to-noise ratio of the method of the embodiment of the invention can be improved by 0.5 to 1 dB.
In summary, the invention discloses a self-learning image super-resolution reconstruction method based on a convolutional neural network, which comprises: selecting a suitable sample enhancement mode to enhance and expand the training samples; building a convolutional neural network and reconstructing the image by learning the mapping relationship between pairs of high- and low-resolution training samples; constructing a feature extraction unit that performs shallow extraction of the image features using a convolutional layer; constructing a feature enhancement unit that performs deep extraction of the image features; constructing a residual unit, since information loss is inevitable when the image is passed between convolutional layers and the residual unit can compensate for this loss; and performing image reconstruction by inputting the test image into the trained network for super-resolution reconstruction. A single image contains abundant intrinsic information, exhibits self-similarity and has a concentrated data distribution, so no complex units need to be constructed to learn the mapping between high- and low-resolution sample pairs; the network structure is therefore relatively simple and converges quickly. The self-learning image super-resolution reconstruction method based on a convolutional neural network can effectively alleviate the shortage of training samples in self-learning algorithms, avoid network overfitting, and obtain a high-resolution image with a higher peak signal-to-noise ratio and better visual quality.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art can make modifications and equivalents to the embodiments of the present invention without departing from the spirit and scope of the present invention, which is set forth in the claims of the present application.

Claims (10)

1. A self-learning image super-resolution reconstruction method based on a convolutional neural network is characterized by comprising the following steps:
step 1, obtaining training samples of an image to be reconstructed; the training samples include high- and low-resolution training sample pairs;
step 2, constructing a convolutional neural network; the convolutional neural network includes:
a feature extraction unit comprising a convolutional layer; the input of the convolutional layer is the image to be reconstructed, and its output is a feature map;
a feature enhancement unit comprising a plurality of convolutional layers; the input of the first of these convolutional layers is the feature map extracted by the feature extraction unit, and the input of each remaining layer is the output of the previous layer; the outputs of the convolutional layers are all feature maps;
a residual unit comprising a plurality of convolutional layers; the residual unit is provided with:
a long skip connection for connecting the image to be reconstructed with the result obtained by the convolutional neural network; and
a short skip connection for transferring the output of each convolutional layer of the feature enhancement unit to the corresponding convolutional layer of the residual unit;
step 3, training the convolutional neural network constructed in the step 2 based on the training samples obtained in the step 1 to obtain a trained reconstructed convolutional neural network;
and 4, performing super-resolution reconstruction on the image to be reconstructed based on the reconstruction convolutional neural network trained in the step 3.
2. The self-learning image super-resolution reconstruction method based on the convolutional neural network as claimed in claim 1, wherein in step 1, the step of obtaining the training sample of the image to be reconstructed specifically comprises:
step 1.1, down-sampling the image I to be reconstructed at different ratios to obtain, in addition to the image itself, a number of down-sampled versions I_n, n ∈ Z+, which together form the initial samples;
step 1.2, expanding the initial samples obtained in step 1.1 to obtain the training samples of the image to be reconstructed;
wherein the expansion is performed as follows:
I_e = f(I_n, A, M);
in the formula, I_e is the expanded image sample, f is the enhancement operation performed on the sample set I_n, A denotes rotating the image by different angles, and M denotes mirror-flipping the image.
3. The self-learning image super-resolution reconstruction method based on a convolutional neural network as claimed in claim 1, wherein, in step 2, in the feature extraction unit of the constructed convolutional neural network, the convolution kernel size is 3 × 3 and a [3, 3, 3, 64] filter is selected.
4. The self-learning image super-resolution reconstruction method based on a convolutional neural network as claimed in claim 1, wherein, in step 2, the feature enhancement unit of the constructed convolutional neural network linearly stacks the extracted features, and the stacking process is expressed as:
F_{n+1} = a*F_n + (1-a)*F_{n-1}
where F_n denotes the image features extracted at the current layer, F_{n-1} denotes the output features of the previous layer, F_{n+1} denotes the input of the layer following the current layer, n denotes the index of the hidden layer, and a is a product factor determined through extensive experiments;
and an activation layer is added for enhancement after the linear stacking operation is completed, expressed as:
F_{n+1} = R(C(F_n, F_{n-1}))
where C denotes the linear stacking operation and R denotes the activation operation.
5. The self-learning image super-resolution reconstruction method based on the convolutional neural network as claimed in claim 4, wherein the value of n is 2, 3, 4, 5; and a is 0.6.
6. The self-learning image super-resolution reconstruction method based on a convolutional neural network as claimed in claim 1, wherein, in the residual unit of the convolutional neural network constructed in step 2,
the expression for the long skip connection is:
I_output = I_input + F_final
where I_input and I_output denote the input and output of the convolutional neural network, respectively, and F_final denotes what is learned by the last layer of the network during training.
7. The self-learning image super-resolution reconstruction method based on a convolutional neural network as claimed in claim 1, wherein, in the residual unit of the convolutional neural network constructed in step 2,
the expression for the short skip connection is:
F_{p+1} = F_p + F_{q-p}
where F denotes a feature extraction operation, F_{p+1} refers to the input of layer p+1, F_p and F_{q-p} are the outputs of the respective hidden layers, and p and q each denote layer indices of the network.
8. The convolutional neural network-based self-learning image super-resolution reconstruction method of claim 1, further comprising:
and 5, taking the super-resolution reconstruction image obtained in the step 4 as an image to be reconstructed, and repeating the steps 1 to 4.
9. The self-learning image super-resolution reconstruction method based on the convolutional neural network as claimed in claim 8, wherein the step 5 specifically comprises:
feeding the super-resolution reconstructed image obtained in step 4 back to the input through a back-propagation algorithm, and repeating steps 1 to 4; during the repetition, the mean square error is used as the loss function, the network parameters are adjusted according to the loss function, and the iteration is repeated to obtain the final target image.
10. A self-learning image super-resolution reconstruction system based on a convolutional neural network is characterized by comprising the following components:
a sample acquisition module for obtaining training samples of an image to be reconstructed; the training samples include high- and low-resolution training sample pairs;
a convolutional neural network, comprising:
a feature extraction unit comprising a convolutional layer; the input of the convolutional layer is the image to be reconstructed, and its output is a feature map;
a feature enhancement unit comprising a plurality of convolutional layers; the input of the first of these convolutional layers is the feature map extracted by the feature extraction unit, and the input of each remaining layer is the output of the previous layer; the outputs of the convolutional layers are all feature maps;
a residual unit comprising a plurality of convolutional layers; the residual unit is provided with:
a long skip connection for connecting the image to be reconstructed with the result obtained by the convolutional neural network; and
a short skip connection for transferring the output of each convolutional layer of the feature enhancement unit to the corresponding convolutional layer of the residual unit; and
a training and reconstruction module for training the convolutional neural network with the obtained training samples to obtain a trained reconstruction convolutional neural network, and performing super-resolution reconstruction of the image to be reconstructed based on the trained reconstruction convolutional neural network.
CN202010802461.6A 2020-08-11 2020-08-11 Self-learning image super-resolution reconstruction method and system based on convolutional neural network Active CN111932461B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010802461.6A CN111932461B (en) 2020-08-11 2020-08-11 Self-learning image super-resolution reconstruction method and system based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010802461.6A CN111932461B (en) 2020-08-11 2020-08-11 Self-learning image super-resolution reconstruction method and system based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111932461A true CN111932461A (en) 2020-11-13
CN111932461B CN111932461B (en) 2023-07-25

Family

ID=73310662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010802461.6A Active CN111932461B (en) 2020-08-11 2020-08-11 Self-learning image super-resolution reconstruction method and system based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111932461B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308781A (en) * 2020-11-23 2021-02-02 中国科学院深圳先进技术研究院 Single image three-dimensional super-resolution reconstruction method based on deep learning
CN112580381A (en) * 2020-12-23 2021-03-30 成都数之联科技有限公司 Two-dimensional code super-resolution reconstruction enhancing method and system based on deep learning
CN112617850A (en) * 2021-01-04 2021-04-09 苏州大学 Premature beat and heart beat detection method for electrocardiosignals
CN112734622A (en) * 2021-03-30 2021-04-30 深圳大学 Image steganalysis method and terminal based on Tucker decomposition
CN112907446A (en) * 2021-02-07 2021-06-04 电子科技大学 Image super-resolution reconstruction method based on packet connection network
CN113962855A (en) * 2021-06-07 2022-01-21 长春理工大学 Satellite image super-resolution reconstruction method and device of combined convolutional network
CN115880158A (en) * 2023-01-30 2023-03-31 西安邮电大学 Blind image super-resolution reconstruction method and system based on variational self-coding
CN117788293A (en) * 2024-01-26 2024-03-29 西安邮电大学 Feature aggregation image super-resolution reconstruction method and system
WO2024082796A1 (en) * 2023-06-21 2024-04-25 西北工业大学 Spectral cross-domain transfer super-resolution reconstruction method for multi-domain image
WO2024138719A1 (en) * 2022-12-30 2024-07-04 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image optimization

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130221961A1 (en) * 2012-02-27 2013-08-29 Medimagemetric LLC System, Process and Computer-Accessible Medium For Providing Quantitative Susceptibility Mapping
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130221961A1 (en) * 2012-02-27 2013-08-29 Medimagemetric LLC System, Process and Computer-Accessible Medium For Providing Quantitative Susceptibility Mapping
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN108734660A (en) * 2018-05-25 2018-11-02 上海通途半导体科技有限公司 A kind of image super-resolution rebuilding method and device based on deep learning
CN109903226A (en) * 2019-01-30 2019-06-18 天津城建大学 Image super-resolution rebuilding method based on symmetrical residual error convolutional neural networks

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308781B (en) * 2020-11-23 2024-07-30 中国科学院深圳先进技术研究院 Single image three-dimensional super-resolution reconstruction method based on deep learning
CN112308781A (en) * 2020-11-23 2021-02-02 中国科学院深圳先进技术研究院 Single image three-dimensional super-resolution reconstruction method based on deep learning
CN112580381A (en) * 2020-12-23 2021-03-30 成都数之联科技有限公司 Two-dimensional code super-resolution reconstruction enhancing method and system based on deep learning
CN112617850B (en) * 2021-01-04 2022-08-30 苏州大学 Premature beat and heart beat detection system for electrocardiosignals
CN112617850A (en) * 2021-01-04 2021-04-09 苏州大学 Premature beat and heart beat detection method for electrocardiosignals
CN112907446A (en) * 2021-02-07 2021-06-04 电子科技大学 Image super-resolution reconstruction method based on packet connection network
CN112907446B (en) * 2021-02-07 2022-06-07 电子科技大学 Image super-resolution reconstruction method based on packet connection network
CN112734622B (en) * 2021-03-30 2021-07-20 深圳大学 Image steganalysis method and terminal based on Tucker decomposition
CN112734622A (en) * 2021-03-30 2021-04-30 深圳大学 Image steganalysis method and terminal based on Tucker decomposition
CN113962855A (en) * 2021-06-07 2022-01-21 长春理工大学 Satellite image super-resolution reconstruction method and device of combined convolutional network
CN113962855B (en) * 2021-06-07 2024-09-13 长春理工大学 Satellite image super-resolution reconstruction method and device of combined convolution network
WO2024138719A1 (en) * 2022-12-30 2024-07-04 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for image optimization
CN115880158A (en) * 2023-01-30 2023-03-31 西安邮电大学 Blind image super-resolution reconstruction method and system based on variational self-coding
CN115880158B (en) * 2023-01-30 2023-10-27 西安邮电大学 Blind image super-resolution reconstruction method and system based on variation self-coding
WO2024082796A1 (en) * 2023-06-21 2024-04-25 西北工业大学 Spectral cross-domain transfer super-resolution reconstruction method for multi-domain image
CN117788293A (en) * 2024-01-26 2024-03-29 西安邮电大学 Feature aggregation image super-resolution reconstruction method and system
CN117788293B (en) * 2024-01-26 2024-09-10 西安邮电大学 Feature aggregation image super-resolution reconstruction method and system

Also Published As

Publication number Publication date
CN111932461B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN111932461B (en) Self-learning image super-resolution reconstruction method and system based on convolutional neural network
CN106991646B (en) Image super-resolution method based on dense connection network
CN111105352B (en) Super-resolution image reconstruction method, system, computer equipment and storage medium
CN110136063A (en) A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition
CN110599401A (en) Remote sensing image super-resolution reconstruction method, processing device and readable storage medium
CN111161146B (en) Coarse-to-fine single-image super-resolution reconstruction method
CN111815516B (en) Super-resolution reconstruction method for weak supervision infrared remote sensing image
CN110889895A (en) Face video super-resolution reconstruction method fusing single-frame reconstruction network
CN112862689A (en) Image super-resolution reconstruction method and system
CN111768340B (en) Super-resolution image reconstruction method and system based on dense multipath network
CN112699844A (en) Image super-resolution method based on multi-scale residual error level dense connection network
CN115564649B (en) Image super-resolution reconstruction method, device and equipment
CN113781308A (en) Image super-resolution reconstruction method and device, storage medium and electronic equipment
CN113538246A (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN116188272B (en) Two-stage depth network image super-resolution reconstruction method suitable for multiple fuzzy cores
CN112907448A (en) Method, system, equipment and storage medium for super-resolution of any-ratio image
CN115393191A (en) Method, device and equipment for reconstructing super-resolution of lightweight remote sensing image
CN111861886A (en) Image super-resolution reconstruction method based on multi-scale feedback network
CN115880158A (en) Blind image super-resolution reconstruction method and system based on variational self-coding
CN115797176A (en) Image super-resolution reconstruction method
CN115953294A (en) Single-image super-resolution reconstruction method based on shallow channel separation and aggregation
CN113421187A (en) Super-resolution reconstruction method, system, storage medium and equipment
CN115526779A (en) Infrared image super-resolution reconstruction method based on dynamic attention mechanism
Li Image super-resolution using attention based densenet with residual deconvolution
CN111414988A (en) Remote sensing image super-resolution method based on multi-scale feature self-adaptive fusion network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant