WO2021022929A1 - A single-frame image super-resolution reconstruction method - Google Patents

A single-frame image super-resolution reconstruction method

Info

Publication number
WO2021022929A1
Authority
WO
WIPO (PCT)
Prior art keywords
resolution, image, model, low, super
Application number
PCT/CN2020/098001
Other languages
English (en)
French (fr)
Inventor
赵盛荣
梁虎
董祥军
Original Assignee
Qilu University of Technology (齐鲁工业大学)
Application filed by Qilu University of Technology (齐鲁工业大学)
Priority to ZA2021/00526A priority Critical patent/ZA202100526B/en
Publication of WO2021022929A1 publication Critical patent/WO2021022929A1/zh

Classifications

    • G06T 5/00: Image enhancement or restoration
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10081: Computed X-ray tomography [CT]
    • G06T 2207/10088: Magnetic resonance imaging [MRI]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30201: Face
    • Y02T 10/40: Engine management systems

Definitions

  • The invention relates to the technical field of computer image processing, and in particular to a single-frame image super-resolution reconstruction method.
  • Current super-resolution reconstruction methods fall broadly into single-frame and multi-frame reconstruction algorithms.
  • A single-frame reconstruction algorithm reconstructs the corresponding high-resolution image from a single low-resolution image affected by degradation factors such as noise, blur and down-sampling.
  • In optimization-based methods, image reconstruction depends on a prior model (internal estimation).
  • Learning-based methods rely on a previously established data set (external model). Algorithms that rely on a prior model strengthen one particular feature while ignoring many others; they follow people's subjective will and have a strong artificial tendency. For example, the TV prior emphasizes edge preservation while neglecting the protection of texture details, so the reconstructed image is over-smoothed.
  • Learning methods rely on an external image library and suffer from low accuracy when reconstructing high-resolution images. This relates to two problems: 1. Is the image library complete? If not, some features cannot be restored. 2. The internal structural information is not well captured, and recovery relies mainly on external information.
  • The object of the present invention is to provide a single-frame image super-resolution reconstruction method to improve the accuracy of reconstructing high-resolution images.
  • To this end, the present invention provides a single-frame image super-resolution reconstruction method, the method comprising:
  • S1: Establish a consistent correspondence between low-resolution and high-resolution images; from the obtained 0th-order, 1st-order and 2nd-order gradients, build observation models for structure, edge and texture respectively, and then determine the multiple differential consistency constraint model;
  • S2: Construct a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images according to the type of image to be reconstructed;
  • S3: Build a training model based on a symmetric redundant deep neural network, input the training set into the training model for training, and obtain the mapping relationship between high-resolution and low-resolution images, the mapping relationship being the prior constraint;
  • S4: Use the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint, and solve it to obtain the high-resolution reconstructed image.
  • The S1 specifically includes:
  • S11: Establish a consistent correspondence between low-resolution and high-resolution images;
  • S12: Based on this correspondence, build the observation models for structure, edge and texture from the obtained 0th-order, 1st-order and 2nd-order gradients, with the specific formula:
    Ψ_i(X, y) = ||ψ_i(y) − ψ_i(WX)||²
  • where Ψ_i(X, y) represents the correspondence between the low-resolution image y and the high-resolution image X to be solved based on the i-th order gradient, i.e. the observation model; i takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients respectively; ψ_i(y) denotes the i-th order gradient of the low-resolution image y; ψ_i(WX) denotes the i-th order gradient of the fitted low-resolution image WX; and || || denotes the norm.
  • S13: Determine the multiple differential consistency constraint model from the observation models.
  • The multiple differential consistency constraint model is determined from the observation models, with the specific formula:
    F(X, y) = Σ_{i=0}^{2} λ_i Ψ_i(X, y) = Σ_{i=0}^{2} λ_i ||ψ_i(y) − ψ_i(WX)||²
  • where F(X, y) represents the multiple differential consistency constraint model and λ_i is the weight parameter of the i-th order gradient.
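As a concrete illustration of the constraint above, the sketch below evaluates F(X, y) in NumPy. It is a minimal sketch under assumed choices: forward differences for the 1st-order gradient, a wrap-around 4-neighbour Laplacian for the 2nd-order gradient, the squared Frobenius norm, and a toy averaging filter standing in for the unknown degradation W; none of these specifics are fixed by the text.

```python
import numpy as np

def grad(img):
    """First-order forward differences (horizontal and vertical), zero at the border."""
    gx = np.zeros_like(img); gx[:, :-1] = img[:, 1:] - img[:, :-1]
    gy = np.zeros_like(img); gy[:-1, :] = img[1:, :] - img[:-1, :]
    return gx, gy

def laplacian(img):
    """Second-order difference (4-neighbour Laplacian) via shifts; borders wrap for simplicity."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def psi(img, i):
    """i-th order gradient: 0 = the gray values themselves, 1 = first difference, 2 = Laplacian."""
    if i == 0:
        return (img,)
    if i == 1:
        return grad(img)
    return (laplacian(img),)

def consistency(X, y, degrade, lams=(1.0, 1.0, 1.0)):
    """F(X, y) = sum_i lam_i * ||psi_i(y) - psi_i(WX)||^2."""
    WX = degrade(X)
    F = 0.0
    for i, lam in enumerate(lams):
        F += lam * sum(np.sum((a - b) ** 2) for a, b in zip(psi(y, i), psi(WX, i)))
    return F

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((8, 8))
    degrade = lambda img: (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3.0  # toy blur as stand-in for W
    y = degrade(X)
    print(consistency(X, y, degrade))  # 0.0: y was produced exactly by degrading X
    print(consistency(X, y, degrade) <= consistency(rng.random((8, 8)), y, degrade))  # True
```

Because y is generated by exactly the same degradation that is then applied to X, all three gradient terms vanish and F is exactly zero for the true image, while any other candidate scores no better.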
  • The S3 specifically includes:
  • S31: Build a training model based on a symmetric redundant deep neural network;
  • S32: Input the training set into the training model for N iterations of training to obtain the mapping relationship between high-resolution and low-resolution images, where N is a positive integer greater than or equal to 1.
  • The S4 specifically includes:
  • S41: Establish the super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint;
  • S42: Solve the super-resolution reconstruction model using the half-quadratic iteration method and output the high-resolution reconstructed image.
  • The super-resolution reconstruction model established from the multiple differential consistency constraint model and the prior constraint is:
    X̂ = argmin_{X,z} F(X, y) + λΦ(z) + γ||X − z||²
  • where X̂ represents the high-resolution reconstructed image, argmin represents the minimization function, z represents the auxiliary variable, X represents the high-resolution image to be solved, y represents the low-resolution image, λ and γ represent the weight parameters in the solution process, Φ() represents the prior constraint, || || denotes the norm, and F(X, y) represents the multiple differential consistency constraint model.
  • The S42 specifically includes:
  • S421: Using the half-quadratic iteration method, solve the (X, z) problem according to the iteration formulas
    X^{k+1} = argmin_X F(X, y) + γ||X − z^k||²
    z^{k+1} = argmin_z λΦ(z) + γ||X^{k+1} − z||²
  • to obtain the solution formula, which is specifically:
    X^{k+1} = [W^T W + (∇W)^T(∇W) + (ΔW)^T(ΔW) + γI]^{-1} [W^T y + (∇W)^T ∇y + (ΔW)^T Δy + γ z^k]
  • where W represents the degradation matrix, I represents the identity matrix, X^{k+1} represents the high-resolution image after k+1 iterations, T represents the transpose symbol, Δ represents the Laplacian operator, ∇ represents the first-order gradient operator, ∇W and ΔW indicate the gradient and Laplacian operators acting on the degradation matrix, y represents the low-resolution image, ∇y and Δy indicate the gradient and Laplacian operators acting on the low-resolution image, γ represents the weight parameter in the solution process, and z^k represents the auxiliary variable after k iterations.
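The closed-form update can be checked on a small dense problem. The sketch below is an assumed 1-D instantiation: D1 and D2 are circulant first- and second-difference matrices playing the roles of ∇ and Δ, W is a toy averaging (blur) matrix, and the prior z-step is not modelled (z^k is simply warm-started with the observation).

```python
import numpy as np

def diff_matrices(n):
    """1-D circulant first-difference D1 and second-difference (Laplacian) D2."""
    I = np.eye(n)
    D1 = np.roll(I, -1, axis=1) - I                               # forward difference, wrap-around
    D2 = np.roll(I, -1, axis=1) + np.roll(I, 1, axis=1) - 2 * I   # discrete Laplacian
    return D1, D2

def x_update(W, y, z_k, gamma):
    """One half-quadratic X-step:
    X = (W^T W + (D1 W)^T (D1 W) + (D2 W)^T (D2 W) + gamma I)^{-1}
        (W^T y + (D1 W)^T D1 y + (D2 W)^T D2 y + gamma z_k)."""
    D1, D2 = diff_matrices(W.shape[0])
    A = W.T @ W + (D1 @ W).T @ (D1 @ W) + (D2 @ W).T @ (D2 @ W) + gamma * np.eye(W.shape[1])
    b = W.T @ y + (D1 @ W).T @ (D1 @ y) + (D2 @ W).T @ (D2 @ y) + gamma * z_k
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    n = 16
    I = np.eye(n)
    W = (I + np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1)) / 3.0   # toy blur as degradation matrix
    rng = np.random.default_rng(1)
    X_true = rng.random(n)
    y = W @ X_true
    X_hat = x_update(W, y, z_k=y.copy(), gamma=1e-3)   # warm-start z with the observation
    # One update already moves the estimate closer to the true signal than the blurred observation.
    print(np.linalg.norm(X_hat - X_true) < np.linalg.norm(y - X_true))  # True
```

With a small γ the data terms dominate, so a single update essentially inverts the (invertible, in this toy case) degradation; in the full method the z-step would inject the learned prior between X-updates.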
  • According to the specific embodiments provided, the present invention discloses the following technical effects:
  • The present invention discloses a single-frame image super-resolution reconstruction method.
  • The method includes: establishing a consistent correspondence between low-resolution and high-resolution images and, from the obtained 0th-order, 1st-order and 2nd-order gradients, building observation models for structure, edge and texture respectively, and then determining the multiple differential consistency constraint model; constructing a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images; inputting the training set into the training model for training to obtain the prior constraint between high-resolution and low-resolution images; and using the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraints and the prior constraint, and solving it to obtain the high-resolution reconstructed image.
  • The invention constructs a multiple differential consistency constraint model based on multiple gradients and adopts a half-quadratic iteration algorithm to effectively integrate internal and external information, thereby improving the accuracy of super-resolution image reconstruction.
  • FIG. 1 is a flowchart of a single-frame image super-resolution reconstruction method according to an embodiment of the present invention
  • Figure 2 is a structural block diagram of a training model based on a symmetric redundant deep neural network
  • Figure 3(a) is a schematic diagram of the original image
  • Figure 3(b) is a schematic diagram of the reconstruction result obtained by the interpolation algorithm
  • Figure 3(c) is a schematic diagram of the reconstruction results obtained by the SRCNN method
  • Figure 3(d) is a schematic diagram of the reconstruction result obtained by the DRNN method
  • Figure 3(e) is a schematic diagram of the reconstruction result obtained by the SISR method
  • Figure 3(f) is a schematic diagram of the reconstruction result obtained by the method of the present invention.
  • Fig. 3(g) is a schematic diagram of the difference image between the original image of Fig. 3(a) and the original image of Fig. 3(a);
  • Fig. 3(h) is a schematic diagram of the difference image between the reconstructed result image of Fig. 3(b) and the original image of Fig. 3(a);
  • Fig. 3(i) is a schematic diagram of the difference image between the reconstructed result image of Fig. 3(c) and the original image of Fig. 3(a);
  • Fig. 3(j) is a schematic diagram of the difference image between the reconstructed result image of Fig. 3(d) and the original image of Fig. 3(a);
  • Fig. 3(k) is a schematic diagram of the difference image between the reconstructed result image of Fig. 3(e) and the original image of Fig. 3(a);
  • Fig. 3(l) is a schematic diagram of the difference image between the reconstructed result image of Fig. 3(f) and the original image of Fig. 3(a).
  • the purpose of the present invention is to provide a single-frame image super-resolution reconstruction method to improve the accuracy of reconstructing high-resolution images.
  • Multiple differential consistency constraint: in the process of single-frame image reconstruction, in order to effectively restore the structure, edge and texture information of the image, the 0th-order, 1st-order and 2nd-order gradients are used to impose consistency constraints on the estimated image, so that the simulated degradation matches reality as closely as possible.
  • Symmetric redundant deep neural network: the network involved is a symmetric deep neural network.
  • The so-called symmetry means dividing the deep neural network into an encoding part and a decoding part. As shown in Figure 2, the encoding process is divided into 5 identical functional blocks, each of which includes a convolution operation, a batch normalization operation and an activation operation.
  • The symmetric decoding process is likewise divided into 5 functional blocks, each of which includes deconvolution, batch normalization and activation operations.
  • The so-called redundancy means that during neural network training, the trained information is the residual information, that is, the difference between the estimated value and the label value.
  • The so-called 0th-order gradient is the gray-scale difference itself.
  • The 1st-order gradient is determined using the first-order difference function, with the specific formula:
    ∇X(i, j) = (X(i+1, j) − X(i, j), X(i, j+1) − X(i, j))
  • ΔX is the second-order difference (the Laplacian):
    ΔX(i, j) = X(i+1, j) + X(i−1, j) + X(i, j+1) + X(i, j−1) − 4X(i, j)
  • Image structure: the compositional information of the image, consisting of the edge regions, flat regions and corner regions of the image, and describing the overall frame of the image.
  • Edge: the junction between one attribute region of the image and another; it is where the region's attribute changes abruptly, the place of greatest uncertainty in the image, and the place where image information is most concentrated.
  • The edges of an image contain a wealth of information. In this field, an edge usually refers to a region with a larger gradient value.
  • Texture: a texture feature is also a kind of global feature; it describes the surface properties of the scene corresponding to the image or image region. In this field, texture usually refers to a region with a small gradient value.
  • Fig. 1 is a flowchart of a single-frame image super-resolution reconstruction method according to an embodiment of the present invention. As shown in Fig. 1, the present invention discloses a single-frame image super-resolution reconstruction method, which includes:
  • S2: Construct a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images according to the type of image to be reconstructed;
  • S4: Use the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint, and solve it to obtain the high-resolution reconstructed image.
  • S1: Establish a consistent correspondence between low-resolution and high-resolution images; from the obtained 0th-order, 1st-order and 2nd-order gradients, build observation models for structure, edge and texture respectively, and then determine the multiple differential consistency constraint model.
  • S11: Establish a consistent correspondence between low-resolution and high-resolution images, the purpose being to make the reconstructed data as consistent as possible with the observed data.
  • The data fidelity model describes the consistency between the low-resolution image y and the simulated low-resolution image WX using the 0th-order gradient, which only reflects consistency between image point pairs and cannot reflect deeper differences in the image. Therefore, on the basis of the 0th-order degradation model, degradation relationships based on the 1st-order and 2nd-order gradients are added to describe the degradation of edges and textures. This realizes the degraded correspondence between the low-resolution image y and the high-resolution image X to be solved at different gradient orders (1st or 2nd order), and thus achieves consistency constraints at different feature levels.
  • Ψ_i(X, y) represents the correspondence between the low-resolution image y and the high-resolution image X to be solved based on the i-th order gradient, i.e. the observation model:
    Ψ_i(X, y) = ||ψ_i(y) − ψ_i(WX)||²
  • where i takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients respectively; ψ_i(y) denotes the i-th order gradient of the low-resolution image y; ψ_i(WX) denotes the i-th order gradient of the fitted low-resolution image WX; and || || denotes the norm.
  • The multiple differential consistency constraint model is then
    F(X, y) = Σ_{i=0}^{2} λ_i ||ψ_i(y) − ψ_i(WX)||²
  • where F(X, y) represents the multiple differential consistency constraint model based on the low-resolution image y and the high-resolution image X to be solved, and λ_i is the weight parameter of the i-th order gradient.
  • S2: Construct a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images according to the type of image to be reconstructed. The training set includes a high-resolution image training set and the corresponding low-resolution image training set.
  • The high-resolution and low-resolution image training sets form multiple data pairs.
  • "Type" refers to the fact that, because image content differs, the structure, edge, texture and other information differ greatly when reconstructing high-resolution images.
  • Images to be reconstructed therefore need to be classified into different types according to their content in order to obtain more informational support; the content obtained is called external information. Examples are CT and MRI images in the medical field, building images in the architectural field, and natural and facial images in the environmental field.
  • Low-resolution images include blur and noise. Because different features respond differently to degradation factors such as noise and blur, in order to ensure effective removal of noise and blur, the training set must be established in a targeted way: image pairs containing large-gradient edges and image pairs containing rich textures are deliberately selected as parts of the training set.
  • S21: According to the type of image to be reconstructed, construct a high-resolution image set {X_i}, i = 1, …, N, corresponding to the structure, edge and texture levels, and a low-resolution image set {y_i} corresponding to the high-resolution image training set, where i is the sample index, N is the total number of samples in the sample library, X_i denotes the i-th high-resolution image in the high-resolution image set, and y_i denotes the low-resolution image corresponding to X_i, i.e. the i-th image in the low-resolution image set.
  • For face images, the most important information comprises the facial features, wrinkles and facial scars within the human face, which have the most distinguishing power.
  • Some of these features rely on strong contrast between the edge of the part and its surroundings, such as eyes and eyebrows; others show relatively weak contrast, such as wrinkles and facial skin.
  • Strong-contrast features are displayed more strongly, and weak-contrast texture details are highlighted.
  • More face images are then selected as training-set samples; the advantage is that the reconstructed face looks more real and natural.
  • Faces with different skin colors, ages, genders or other characteristics are filtered and put into the training set.
  • To this end, the following technical means are adopted: 1) add different types of noise or blur of different intensities to these pictures to produce low-resolution images; 2) shuffle the order of these pictures and place them randomly and repeatedly.
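The two technical means just listed (degrading pictures with noise or blur, then shuffling with repetition) can be sketched as follows. The 3-tap blur, the uniform noise model and the pair format are assumptions for illustration; the source does not fix them.

```python
import random

def degrade(pixels, noise_level, rng):
    """Produce a 'low-resolution' sample: 3-tap average blur plus uniform noise."""
    n = len(pixels)
    blurred = [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3.0
               for i in range(n)]
    return [v + rng.uniform(-noise_level, noise_level) for v in blurred]

def build_training_set(hr_images, noise_levels, copies, seed=0):
    """Pair each high-resolution image with degraded versions, then shuffle
    with repetition (each image appears `copies` times per noise level)."""
    rng = random.Random(seed)
    pairs = [(degrade(hr, lvl, rng), hr)
             for hr in hr_images for lvl in noise_levels for _ in range(copies)]
    rng.shuffle(pairs)
    return pairs

if __name__ == "__main__":
    hr_images = [[10.0, 20.0, 30.0], [5.0, 5.0, 5.0]]
    pairs = build_training_set(hr_images, noise_levels=[0.0, 1.0], copies=2)
    print(len(pairs))             # 8: 2 images x 2 noise levels x 2 copies
    lr, hr = pairs[0]
    print(len(lr) == len(hr))     # True: each degraded input matches its label in size
```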
  • S3: Build a training model based on a symmetric redundant deep neural network, input the training set into the training model for training, and obtain the mapping relationship between high-resolution and low-resolution images.
  • The mapping relationship is the prior constraint, that is, external information.
  • The training model based on the symmetric redundant deep neural network is a symmetric model comprising an encoding part and a symmetric decoding part.
  • The encoding part includes 5 identical functional modules, each of which includes convolution (Conv), batch normalization (Bnorm) and activation (ReLU) operations; the symmetric decoding part also includes 5 functional modules, each of which includes deconvolution, batch normalization and activation operations. The channel numbers of the 5 functional modules, from left to right, are 256, 128, 64, 32 and 16 in the encoding part,
  • and 16, 32, 64, 128 and 256 in the symmetric decoding part.
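The encoder-decoder layout described above can be written down as a small configuration sketch. The channel numbers are taken from the text and the block contents (Conv/Bnorm/ReLU vs. Deconv/Bnorm/ReLU) follow the description; everything else (kernel sizes, strides) is unspecified in the source and therefore omitted.

```python
# Channel counts per functional block, as stated in the description.
ENCODER_CHANNELS = [256, 128, 64, 32, 16]
DECODER_CHANNELS = [16, 32, 64, 128, 256]

def build_spec():
    """Return the symmetric 5+5 block layout of the training model as plain data."""
    enc = [{"ops": ("Conv", "Bnorm", "ReLU"), "channels": c} for c in ENCODER_CHANNELS]
    dec = [{"ops": ("Deconv", "Bnorm", "ReLU"), "channels": c} for c in DECODER_CHANNELS]
    return enc + dec

if __name__ == "__main__":
    spec = build_spec()
    print(len(spec))                                   # 10 functional blocks in total
    print(DECODER_CHANNELS == ENCODER_CHANNELS[::-1])  # True: the decoder mirrors the encoder
```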
  • The data in the training set appear in pairs, i.e. low-resolution and high-resolution images exist in pairs, where the input is the low-resolution image and the output is the high-resolution image. Obtaining the mapping relationship between high-resolution and low-resolution images means adjusting the parameters, connection weights or structure of the training model based on the symmetric redundant deep neural network according to the existing training-set data pairs, so that input and output form a mapping and the error between the output high-resolution image and the real high-resolution image is as small as possible.
  • The overall mapping can be written as the composition f_s(f_{s−1}(···f_1(y))), where f_s represents the mapping function of each layer of the s-layer symmetric redundant deep neural network and f_1() represents the output of the first layer of the neural network.
  • N is a positive integer greater than or equal to 1; in this embodiment, N is 100.
  • Redundancy refers to the information trained during neural network training, that is, the residual information: the difference between the estimated value and the label value.
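The "redundancy" (residual training) idea can be illustrated independently of any particular network: the model is trained on the difference between the label and the estimate, and the final output adds the predicted residual back. A minimal sketch, with the network itself left out:

```python
import numpy as np

def residual_target(estimate, label):
    """Redundant (residual) training: the information the network learns is
    the residual, i.e. the difference between the estimated value and the label value."""
    return label - estimate

def reconstruct(estimate, predicted_residual):
    """At inference time the predicted residual is added back to the estimate."""
    return estimate + predicted_residual

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    label = rng.random((4, 4))        # stand-in for a high-resolution patch
    estimate = label + 0.1            # stand-in for a coarse (e.g. interpolated) estimate
    r = residual_target(estimate, label)
    print(np.allclose(reconstruct(estimate, r), label))  # True: estimate + residual recovers the label
```

Learning the (typically small, sparse) residual rather than the full image is what makes the "redundant" network easier to train.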
  • S4: Use the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint, and solve it to obtain the high-resolution reconstructed image. The model is:
    X̂ = argmin_{X,z} F(X, y) + λΦ(z) + γ||X − z||²
  • where X̂ represents the high-resolution reconstructed image, argmin represents the minimization function, z represents the auxiliary variable, X represents the high-resolution image to be solved, y represents the low-resolution image, λ and γ represent the weight parameters in the solution process, Φ() represents the prior constraint, || || denotes the norm, and F(X, y) represents the multiple differential consistency constraint model.
  • S42: Solve the super-resolution reconstruction model using the half-quadratic iteration method and output the high-resolution reconstructed image, with the solution formula:
    X^{k+1} = [W^T W + (∇W)^T(∇W) + (ΔW)^T(ΔW) + γI]^{-1} [W^T y + (∇W)^T ∇y + (ΔW)^T Δy + γ z^k]
  • where W represents the degradation matrix; I represents the identity matrix, i.e. a matrix whose diagonal elements are 1 and whose other elements are 0; X^{k+1} represents the high-resolution image after k+1 iterations; T represents the transpose symbol; Δ represents the Laplacian operator; ∇ represents the first-order gradient operator; ∇W and ΔW indicate the gradient and Laplacian operators acting on the degradation matrix; y represents the low-resolution image; ∇y and Δy indicate the gradient and Laplacian operators acting on the low-resolution image; γ represents the weight parameter in the solution process; and z^k represents the auxiliary variable after k iterations.
  • W is a degradation matrix. The degradation process it describes includes information such as down-sampling, blurring and deformation, but W itself is unknown and needs to be determined in the subsequent solution process.
  • S424: Judge whether the difference between two successive high-resolution estimates is less than a preset minimum value, i.e. ||X^{k+1} − X^k|| < ε. If it is, output the high-resolution reconstructed image; if the difference is greater than or equal to the preset minimum value, set k = k+1 and return to step S421.
  • The present invention seeks a point of balance. Starting from the image's own characteristics, it uses consistency constraints at the level of multiple features to reduce artificial preference constraints; at the same time, it uses an external image library to train feature sets and multiple gradient features to complement the features missing from the original image, improving the accuracy of reconstructing super-resolution images.
  • In the present invention, acquiring internal information (the training set) refers to the structure, gradient and detail information of the image.
  • Obtaining external information refers to using a symmetric convolutional deep network to train on an existing training set, obtaining effective support in terms of structure, gradient and details, and filling in missing information.
  • The half-quadratic iteration algorithm is used to integrate internal and external information effectively.
  • The present invention uses a multi-gradient algorithm to design a data constraint model, which constrains the estimated image in terms of image frame, edge structure features and detail features, and effectively integrates the internal information of the image.
  • The present invention establishes a symmetric redundant network.
  • The network is symmetrically divided into a 5-layer convolution part and a 5-layer deconvolution part.
  • In the convolution part, multiple gradient operators are obtained through convolution to reduce the spatial dimension; the deconvolution part uses the learning result to integrate channel information and complement spatial detail information.
  • The network and the proposed data constraint model act on the reconstruction process simultaneously to further improve the accuracy of reconstructing super-resolution images.
  • This embodiment selects the classic Lena image from the image-processing field as the test image; the training set consists of 5000 pictures from public databases such as ImageNet and BSD.
  • The objective evaluation indicators are the PSNR and SSIM values (the larger the two values, the better the effect); the subjective evaluation method is to compare the reconstruction results.
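For reference, the PSNR indicator mentioned above can be computed as follows; this assumes a peak value of 255 and the usual mean-squared-error definition (SSIM is more involved and is omitted here).

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    The larger the value, the closer the reconstruction is to the original."""
    pairs = list(zip(original, reconstructed))
    mse = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

if __name__ == "__main__":
    ref = [52, 55, 61, 66, 70, 61, 64, 73]   # flattened gray-scale pixels
    rec = [53, 55, 60, 66, 71, 60, 64, 72]   # reconstruction off by at most one gray level
    print(round(psnr(ref, rec), 2))          # 50.17
```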
  • Figure 3(a) is the original image;
  • Figure 3(b) is the reconstruction result obtained by the interpolation algorithm;
  • Figure 3(c) is the reconstruction result obtained by the SRCNN method;
  • Figure 3(d) is the reconstruction result obtained by the DRNN method;
  • Figure 3(e) is the reconstruction result obtained by the SISR method;
  • Fig. 3(f) is the reconstruction result obtained by the method of the present invention;
  • Fig. 3(g) is the difference image between the original image of Fig. 3(a) and itself;
  • Fig. 3(h) is the difference image between the reconstruction result of Fig. 3(b) and the original image of Fig. 3(a); Fig. 3(i) is the difference image between the reconstruction result of Fig. 3(c) and the original image of Fig. 3(a);
  • Fig. 3(j) is the difference image between the reconstruction result of Fig. 3(d) and the original image of Fig. 3(a).
  • Table 1 shows, for the six low-resolution images selected for this example (Cameraman, Parrots, Lena, Boat, Man, Couple), the reconstruction results obtained by the existing A+, SRCRC, SRCRNS, DRRN, VDSR, SRCNN, SISR and SRMMPM methods and by the method of the present invention; in this experiment, blur and noise are unknown.
  • Table 1 compares the numerical results of the method proposed by the present invention and the traditional algorithms when the resolution is increased by a factor of 3.
  • In each entry, the value before the "\" is the PSNR value and the value after it is the SSIM value; the larger the two values, the better the reconstruction effect. It can therefore be seen that the method of the present application achieves the best reconstruction effect compared with the existing traditional algorithms.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a single-frame image super-resolution reconstruction method, the method comprising: establishing a consistent correspondence between low-resolution and high-resolution images and, from the obtained 0th-order, 1st-order and 2nd-order gradients, building observation models for structure, edge and texture respectively, and then determining a multiple differential consistency constraint model; constructing a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images; inputting the training set into a training model for training to obtain the prior constraint between high-resolution and low-resolution images; and using the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint, and solving it to obtain the high-resolution reconstructed image. The invention constructs a multiple differential consistency constraint model based on multiple gradients and adopts a half-quadratic iteration algorithm to integrate internal and external information effectively, improving the accuracy of super-resolution image reconstruction.

Description

A single-frame image super-resolution reconstruction method

This application claims priority to Chinese patent application No. 201910728888.3, filed with the China National Intellectual Property Administration on August 8, 2019 and entitled "Single-frame image super-resolution reconstruction method based on multiple differential consistency constraints and a symmetric redundant network", the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the technical field of computer image processing, and in particular to a single-frame image super-resolution reconstruction method.

Background Art

Acquiring high-resolution images is an important foundation of computer vision and subsequent related fields. Current super-resolution reconstruction methods fall broadly into single-frame and multi-frame reconstruction algorithms. A single-frame reconstruction algorithm reconstructs the corresponding high-resolution image from a single low-resolution image affected by degradation factors such as noise, blur and down-sampling; see W. Shi, J. Caballero, F. Huszar, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, Z. Wang, Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. IEEE Conference on Computer Vision and Pattern Recognition, 1874-1883, Las Vegas, NV, United States, 2016, and Y. Zhang, Q. Fan, F. Bao, Y. Liu, C. Zhang, Single-Image Super-Resolution Based on Rational Fractal Interpolation. IEEE Transactions on Image Processing, 27(8):3782-797, 2018. A multi-frame image reconstruction algorithm reconstructs one high-resolution image from multiple low-resolution images of the same scene with relative displacements (degraded images affected by noise, blur, down-sampling and other factors); see K. Konstantoudakis, L. Vrysis, N. Tsipas, C. Dimoulas, Block unshifting high-accuracy motion estimation: A new method adapted to super-resolution enhancement. Signal Processing: Image Communication, 65:81-93, 2018, and I. Mourabit, M. Rhabi, A. Hakim, A. Laghrib, E. Moreau, A new denoising model for multi-frame super-resolution image reconstruction. Signal Processing, 132(C):51-65, 2017.

At present, there are two kinds of approaches to single-frame image super-resolution reconstruction: optimization methods and learning methods. Optimization-based methods establish an observation model and then rely on a prior model (internal estimation) for image reconstruction, whereas learning-based methods rely on a previously established data set (external model). Algorithms that rely on a prior model strengthen one particular feature while ignoring many others; they follow people's subjective will and have a strong artificial tendency. For example, the TV prior emphasizes edge preservation while neglecting the protection of texture details, which leads to over-smoothed reconstructed images. Learning methods rely on an external image library and suffer from low accuracy in reconstructing high-resolution images, which relates to two problems: 1. whether the image library is complete (if it is not, some features cannot be restored); 2. the internal structural information is not well captured, and recovery relies mainly on external information.
Summary of the Invention

Based on this, the object of the present invention is to provide a single-frame image super-resolution reconstruction method to improve the accuracy of reconstructing high-resolution images.

To achieve the above object, the present invention provides a single-frame image super-resolution reconstruction method, the method comprising:

S1: Establish a consistent correspondence between low-resolution and high-resolution images; from the obtained 0th-order, 1st-order and 2nd-order gradients, build observation models for structure, edge and texture respectively, and then determine the multiple differential consistency constraint model;

S2: According to the type of image to be reconstructed, construct a training set corresponding to the structure, edge and texture levels between high-resolution and low-resolution images;

S3: Build a training model based on a symmetric redundant deep neural network, input the training set into the training model for training, and obtain the mapping relationship between high-resolution and low-resolution images, the mapping relationship being the prior constraint;

S4: Use the half-quadratic iteration method to establish a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint, and solve it to obtain the high-resolution reconstructed image.

Optionally, S1 specifically includes:

S11: Establish a consistent correspondence between low-resolution and high-resolution images;

S12: Based on this correspondence, build the observation models for structure, edge and texture from the obtained 0th-order, 1st-order and 2nd-order gradients;

S13: Determine the multiple differential consistency constraint model from the observation models.

Optionally, the observation models for structure, edge and texture built from the obtained 0th-order, 1st-order and 2nd-order gradients have the specific formula:

Ψ_i(X, y) = ||ψ_i(y) − ψ_i(WX)||²

where Ψ_i(X, y) represents the correspondence between the low-resolution image y and the high-resolution image X to be solved based on the i-th order gradient, i.e. the observation model; i takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients respectively; ψ_i(y) denotes the i-th order gradient of the low-resolution image y; ψ_i(WX) denotes the i-th order gradient of the fitted low-resolution image WX; and || || denotes the norm.
Optionally, the multiple differential consistency constraint model determined from the observation models has the specific formula:

F(X, y) = Σ_{i=0}^{2} λ_i Ψ_i(X, y) = Σ_{i=0}^{2} λ_i ||ψ_i(y) − ψ_i(WX)||²

where F(X, y) represents the multiple differential consistency constraint model and λ_i is the weight parameter of the i-th order gradient.

Optionally, S3 specifically includes:

S31: Build a training model based on a symmetric redundant deep neural network;

S32: Input the training set into the training model for N iterations of training to obtain the mapping relationship between high-resolution and low-resolution images, where N is a positive integer greater than or equal to 1.

Optionally, S4 specifically includes:

S41: Establish the super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint;

S42: Solve the super-resolution reconstruction model using the half-quadratic iteration method and output the high-resolution reconstructed image.

Optionally, the super-resolution reconstruction model established from the multiple differential consistency constraint model and the prior constraint has the specific formula:

X̂ = argmin_{X,z} F(X, y) + λΦ(z) + γ||X − z||²

where X̂ represents the high-resolution reconstructed image, argmin represents the minimization function, z represents the auxiliary variable, X represents the high-resolution image to be solved, y represents the low-resolution image, λ and γ represent the weight parameters in the solution process, Φ() represents the prior constraint, || || denotes the norm, and F(X, y) represents the multiple differential consistency constraint model.
Optionally, S42 specifically comprises:
S421: using the half-quadratic iteration method, solving the $(X,z)$ problem according to the iteration formulas
$$X^{k+1}=\arg\min_{X}\;F(X,y)+\gamma\|X-z^{k}\|^2$$
$$z^{k+1}=\arg\min_{z}\;\gamma\|X^{k+1}-z\|^2+\lambda\Phi(z)$$
to obtain the solution formula, where $\arg\min$ is the minimization function, $X^{k+1}$ and $z^{k+1}$ denote the high-resolution image and the auxiliary variable after $k+1$ iterations respectively, $y$ denotes the low-resolution image, $\lambda$ and $\gamma$ denote weight parameters in the solution process, $\Phi(\cdot)$ denotes the prior constraint, $\|\cdot\|$ denotes the norm, $z$ denotes the auxiliary variable, and $F(X,y)$ denotes the multiple differential consistency constraint model;
S422: determining from the solution formula the high-resolution image $X^{k+1}$ after $k+1$ iterations;
S423: determining, based on the mapping between low-resolution and high-resolution images, the auxiliary variable $z^{k+1}$ of the $(k+1)$th iteration from the high-resolution image $X^{k+1}$;
S424: judging whether the difference between two successive high-resolution images is smaller than a preset minimum value; if it is, outputting the high-resolution reconstructed image; if it is greater than or equal to the preset minimum value, setting $k=k+1$ and returning to step S421.
Optionally, the solution formula is:
$$X^{k+1}=\Big(\lambda_0 W^{T}W+\lambda_1(\nabla W)^{T}(\nabla W)+\lambda_2(\Delta W)^{T}(\Delta W)+\gamma I\Big)^{-1}\Big(\lambda_0 W^{T}y+\lambda_1(\nabla W)^{T}\nabla y+\lambda_2(\Delta W)^{T}\Delta y+\gamma z^{k}\Big)$$
where $W$ denotes the degradation matrix, $I$ denotes the identity matrix, $X^{k+1}$ denotes the high-resolution image after $k+1$ iterations, $T$ denotes transposition, $\Delta$ denotes the Laplace operator, $\nabla$ denotes the first-order gradient operator, $\nabla W$ denotes the first-order gradient operator applied to the degradation matrix, $\Delta W$ denotes the Laplace operator applied to the degradation matrix, $\nabla y$ denotes the first-order gradient operator applied to the low-resolution image, $y$ denotes the low-resolution image, $\Delta y$ denotes the Laplace operator applied to the low-resolution image, $\lambda_i$ denotes the weight parameter of the $i$th-order gradient, $\gamma$ denotes a weight parameter in the solution process, and $z^{k}$ denotes the auxiliary variable after $k$ iterations.
According to the specific embodiments provided by the present invention, the invention discloses the following technical effects:
The present invention discloses a single-frame image super-resolution reconstruction method comprising: establishing a consistency correspondence between the low-resolution and high-resolution images, building observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, and thereby determining a multiple differential consistency constraint model; constructing a training set that pairs high-resolution and low-resolution images at the structure, edge and texture levels; inputting the training set into the training model for training to obtain the prior constraint between high-resolution and low-resolution images; and building a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint with the half-quadratic iteration method and solving it to obtain the high-resolution reconstructed image. The invention builds a multiple differential consistency constraint model from multiple gradients and uses the half-quadratic iteration algorithm to integrate internal and external information effectively, improving the accuracy of super-resolution image reconstruction.
BRIEF DESCRIPTION OF THE DRAWINGS
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the single-frame image super-resolution reconstruction method according to an embodiment of the present invention;
Fig. 2 is a structural block diagram of the training model based on the symmetric redundant deep neural network;
Fig. 3(a) shows the original image;
Fig. 3(b) shows the reconstruction result obtained by an interpolation algorithm;
Fig. 3(c) shows the reconstruction result obtained by the SRCNN method;
Fig. 3(d) shows the reconstruction result obtained by the DRNN method;
Fig. 3(e) shows the reconstruction result obtained by the SISR method;
Fig. 3(f) shows the reconstruction result obtained by the method of the present invention;
Fig. 3(g) shows the difference image between the original image of Fig. 3(a) and itself;
Fig. 3(h) shows the difference image between the reconstruction of Fig. 3(b) and the original image of Fig. 3(a);
Fig. 3(i) shows the difference image between the reconstruction of Fig. 3(c) and the original image of Fig. 3(a);
Fig. 3(j) shows the difference image between the reconstruction of Fig. 3(d) and the original image of Fig. 3(a);
Fig. 3(k) shows the difference image between the reconstruction of Fig. 3(e) and the original image of Fig. 3(a);
Fig. 3(l) shows the difference image between the reconstruction of Fig. 3(f) and the original image of Fig. 3(a).
DETAILED DESCRIPTION
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
The object of the present invention is to provide a single-frame image super-resolution reconstruction method that improves the accuracy of the reconstructed high-resolution image.
To make the above object, features and advantages of the present invention more comprehensible, the invention is described in further detail below with reference to the drawings and specific embodiments.
Terminology:
1. Multiple differential consistency constraint: in single-frame image reconstruction, to recover the structure, edge and texture information of the image effectively, the 0th-, 1st- and 2nd-order gradients are used to impose consistency constraints on the image to be estimated, so that the simulated degradation matches the real degradation as closely as possible.
2. Symmetric redundant deep neural network: the network involved is a symmetric deep neural network. "Symmetric" means that the deep neural network is divided into an encoding part and a decoding part, as shown in Fig. 2. The encoding process consists of 5 identical functional blocks, each comprising a convolution operation, a batch-normalization operation and an activation operation. The symmetric decoding process likewise consists of 5 functional blocks, each comprising a deconvolution, a batch-normalization and an activation operation. "Redundant" means that during network training the information being trained is the residual information, i.e. the difference between the estimate and the label.
3. 0th-, 1st- and 2nd-order gradients: let an $m\times n$ image be defined as $X(i,j)$, $i=1,\dots,n$; $j=1,\dots,m$, where $m$ and $n$ denote the rows and columns of the image and $X(i,j)$ denotes the pixel in row $i$ and column $j$.
The so-called 0th-order gradient is the grey-level difference.
The 1st-order gradient is the first-order difference determined with the first-order difference function:
$$\nabla X(i,j)=\big(X(i+1,j)-X(i,j),\;X(i,j+1)-X(i,j)\big)$$
where $\nabla X$ is the first-order difference.
The 2nd-order gradient is the second-order difference of the image in the horizontal direction (x direction) and the vertical direction (y direction), obtained with the Laplace function:
$$\frac{\partial^2 X}{\partial x^2}=X(i+1,j)-2X(i,j)+X(i-1,j)$$
$$\frac{\partial^2 X}{\partial y^2}=X(i,j+1)-2X(i,j)+X(i,j-1)$$
$$\Delta X=\frac{\partial^2 X}{\partial x^2}+\frac{\partial^2 X}{\partial y^2}$$
where $\Delta X$ is the second-order difference.
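The three gradient orders defined above can be sketched in NumPy. The difference stencils follow the formulas given here; the zero padding at the image border is an implementation assumption, since the patent does not specify boundary handling.

```python
import numpy as np

def grad0(img):
    # 0th-order "gradient": the grey levels themselves
    return np.asarray(img, dtype=float)

def grad1(img):
    # first-order forward differences X(i+1,j)-X(i,j) and X(i,j+1)-X(i,j);
    # the trailing border is zero-padded so outputs match the input shape
    img = np.asarray(img, dtype=float)
    dx = np.zeros_like(img)
    dy = np.zeros_like(img)
    dx[:-1, :] = img[1:, :] - img[:-1, :]
    dy[:, :-1] = img[:, 1:] - img[:, :-1]
    return dx, dy

def grad2(img):
    # second-order central differences; their sum is the discrete Laplacian
    img = np.asarray(img, dtype=float)
    lap = np.zeros_like(img)
    lap[1:-1, :] += img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    lap[:, 1:-1] += img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    return lap

ramp = np.arange(16, dtype=float).reshape(4, 4)  # linear ramp test image
dx, dy = grad1(ramp)
lap = grad2(ramp)
```

On a linear ramp the first-order differences are constant in the interior and the Laplacian vanishes, which is a quick sanity check of the stencils.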
4. Image structure: the compositional information of the image, consisting of its edge regions, flat regions and corner regions; it describes the overall framework of the image.
5. Edge (gradient): the junction between one attribute region of the image and another, where the region attribute changes abruptly; it is the place of greatest uncertainty in the image and where image information is most concentrated, so the edges of an image carry rich information. In this field it usually refers to regions with large 1st-order gradient values.
6. Half-quadratic iteration method: substitute a new variable for the original variable in the regularization term, then add a Lagrange-multiplier term and a quadratic penalty term.
7. Texture: a texture feature is also a global feature; it describes the surface properties of the scene corresponding to the image or image region. In this field it usually refers to regions with small 1st-order gradient values.
Fig. 1 is a flowchart of the single-frame image super-resolution reconstruction method according to an embodiment of the present invention. As shown in Fig. 1, the present invention discloses a single-frame image super-resolution reconstruction method, the method comprising:
S1: establishing a consistency correspondence between the low-resolution image and the high-resolution image, building observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, respectively, and thereby determining a multiple differential consistency constraint model;
S2: constructing, according to the type of image to be reconstructed, a training set that pairs high-resolution and low-resolution images at the structure, edge and texture levels;
S3: building a training model based on a symmetric redundant deep neural network, inputting the training set into the training model for training, and obtaining the mapping between high-resolution and low-resolution images, the mapping serving as the prior constraint;
S4: building a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint using the half-quadratic iteration method, and solving it to obtain the high-resolution reconstructed image.
The steps are discussed in detail below:
S1: establishing a consistency correspondence between the low-resolution image and the high-resolution image, building observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, and thereby determining the multiple differential consistency constraint model.
S11: establishing the consistency correspondence between the low-resolution and high-resolution images; its purpose is to keep the reconstructed data as consistent as possible with the observed data.
The usual data-fidelity model uses the 0th-order gradient to describe the consistency between the low-resolution image y and the simulated low-resolution image WX; it reflects only the consistency of image point pairs and cannot capture deeper differences between the images. Therefore, on top of the 0th-order-gradient degradation model, degradation relations based on the first- and second-order gradients are added to describe the degradation of edges and texture, establishing degradation correspondences between the low-resolution image y and the high-resolution image X to be solved at the different gradient orders (1st or 2nd), and thus consistency constraints from the viewpoint of different feature levels.
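As an illustration of the 0th-order degradation relation y = WX (plus noise), the sketch below simulates W as a blur followed by downsampling. The 3x3 box kernel, the 2x scale and the noise level are illustrative assumptions, not parameters fixed by the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(X, scale=2, noise_sigma=0.0):
    """Simulate the observation y = WX + n: a 3x3 box blur followed by
    `scale`-fold downsampling and additive Gaussian noise.
    Kernel, scale and noise level are illustrative choices."""
    H, W = X.shape
    Xp = np.pad(X, 1, mode='edge')           # edge-replicated border
    blurred = sum(Xp[i:i + H, j:j + W]       # 3x3 box blur
                  for i in range(3) for j in range(3)) / 9.0
    y = blurred[::scale, ::scale]            # keep every `scale`-th pixel
    return y + noise_sigma * rng.standard_normal(y.shape)

X = np.ones((8, 8))
y = degrade(X, scale=2, noise_sigma=0.0)
```

A constant image stays constant under the box blur, so the simulated observation of an all-ones image is again all ones at half the resolution.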
S12: based on the consistency correspondence, building the observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, respectively:
$$\Psi_i(X,y)=\left\|\psi_i(y)-\psi_i(WX)\right\|^2,\quad i=0,1,2$$
where $\Psi_i(X,y)$ denotes the correspondence, i.e. the observation model, between the low-resolution image $y$ and the high-resolution image $X$ to be solved based on the $i$th-order gradient; $i$ takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients respectively; $\psi_i(y)$ denotes the $i$th-order gradient of the low-resolution image $y$; $\psi_i(WX)$ denotes the $i$th-order gradient of the fitted low-resolution image $WX$; and $\|\cdot\|$ denotes the norm.
S13: determining the multiple differential consistency constraint model from the observation models:
$$F(X,y)=\sum_{i=0}^{2}\lambda_i\Psi_i(X,y)=\sum_{i=0}^{2}\lambda_i\left\|\psi_i(y)-\psi_i(WX)\right\|^2$$
where $F(X,y)$ denotes the multiple differential consistency constraint model between the low-resolution image $y$ and the high-resolution image $X$ to be solved; $\Psi_i(X,y)$ denotes the correspondence between $y$ and $X$ based on the $i$th-order gradient; $i$ takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients; $\lambda_i$ is the weight parameter of the $i$th-order gradient; $\psi_i(y)$ denotes the $i$th-order gradient of $y$; and $\psi_i(WX)$ denotes the $i$th-order gradient of the fitted low-resolution image $WX$.
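A minimal NumPy sketch of the constraint F(X, y): each term compares the ith-order gradient of the observed image with that of the simulated low-resolution image WX. The difference stencils and the weights lambda_i are illustrative assumptions.

```python
import numpy as np

def psi(order, img):
    # order-i gradient operator used in the consistency terms
    img = np.asarray(img, dtype=float)
    if order == 0:
        return img
    if order == 1:
        dx = np.diff(img, axis=0, append=img[-1:, :])
        dy = np.diff(img, axis=1, append=img[:, -1:])
        return np.stack([dx, dy])
    lap = np.zeros_like(img)                 # discrete Laplacian (order 2)
    lap[1:-1, 1:-1] = (img[2:, 1:-1] + img[:-2, 1:-1] +
                       img[1:-1, 2:] + img[1:-1, :-2] - 4 * img[1:-1, 1:-1])
    return lap

def multi_gradient_fidelity(WX, y, lambdas=(1.0, 0.5, 0.25)):
    # F(X, y) = sum_i lambda_i * ||psi_i(y) - psi_i(WX)||^2,
    # where WX is the simulated low-resolution image; weights are illustrative
    return sum(lam * np.sum((psi(i, y) - psi(i, WX)) ** 2)
               for i, lam in enumerate(lambdas))

y = np.ones((4, 4))
perfect = multi_gradient_fidelity(y, y)      # all three terms vanish
```

The fidelity is zero exactly when the simulated observation matches y at all three gradient orders, and strictly positive otherwise.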
S2: constructing, according to the type of image to be reconstructed, a training set that pairs high-resolution and low-resolution images at the structure, edge and texture levels. The training set comprises a high-resolution image training set and a corresponding low-resolution image training set, which together form multiple pairs of data.
"Type" refers to the fact that, when reconstructing a high-resolution image, differences in image content lead to large differences in structure, edge and texture information. To compensate for these differences as far as possible, the image to be reconstructed is assigned, according to its content, to a category of images so as to obtain more supporting information; the information thus obtained is called external information. Examples include CT and MRI images in the medical field, building images in the architectural field, and natural images and face images in the environmental field.
Compared with the high-resolution image, the low-resolution image contains blur and noise. Because different features respond differently to degradation factors such as noise and blur, to remove noise and blur effectively the training set must deliberately include some image pairs containing large-gradient edges and some containing rich texture.
S21: according to the type of image to be reconstructed, constructing at the structure, edge and texture levels a high-resolution image set $\{X_i\}_{i=1}^{N}$ and a corresponding low-resolution image set $\{y_i\}_{i=1}^{N}$, where $i$ is the sample index, $N$ is the total number of samples in the sample library, $X_i$ denotes the $i$th high-resolution image to be solved in the high-resolution image set, and $y_i$ denotes the $i$th low-resolution image corresponding to $X_i$.
S22: during training, augmenting the images from the high-resolution and low-resolution image sets by repeated sampling and by taking 1st- and 2nd-order gradients, to build the high-resolution image training set and the corresponding low-resolution image training set.
For example, when reconstructing a face image, the information of interest, besides the face contour, consists mainly of the more discriminative features inside the face: the facial features, wrinkles, scars and the like. Some of these features appear through strong contrast between their edges and the surroundings, such as eyes and eyebrows; others show weak contrast, such as wrinkles and facial skin. To recover these details better, that is, to render the strongly contrasted features more strongly and to bring out the weakly contrasted texture details, face images are preferentially chosen as training-set samples; the benefit is that the reconstructed face appears more realistic and natural. Accordingly, when building the face data set, faces of different skin colours, ages, genders or other characteristics are deliberately selected and placed in the training set. At the same time, to prevent over-fitting during training, the following measures are taken: 1) adding noise or blur of different kinds and strengths to these images to produce the low-resolution images; 2) shuffling the images and inserting them randomly and repeatedly.
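The deliberate selection of edge-rich and texture-rich pairs described above could be automated by scoring patches with their mean absolute first-order gradient; the thresholds below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def classify_patch(patch, edge_thresh=0.1, flat_thresh=1e-6):
    """Label a patch by its mean absolute first-order difference:
    'edge' for large gradients, 'texture' for small non-zero gradients,
    'flat' otherwise. Thresholds are illustrative assumptions."""
    gx = np.abs(np.diff(patch, axis=0)).mean() if patch.shape[0] > 1 else 0.0
    gy = np.abs(np.diff(patch, axis=1)).mean() if patch.shape[1] > 1 else 0.0
    g = gx + gy
    if g > edge_thresh:
        return 'edge'
    return 'texture' if g > flat_thresh else 'flat'

flat = np.ones((8, 8))                                 # constant region
edge = np.zeros((8, 8)); edge[:, 4:] = 1.0             # a step edge
texture = np.tile(np.linspace(0.0, 0.35, 8), (8, 1))   # gentle ramp
```

A builder of the training set would keep a mix of patches labelled 'edge' and 'texture' and discard most 'flat' ones, matching the selection criterion of this step.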
S3: building a training model based on the symmetric redundant deep neural network, inputting the training set into the training model for training, and obtaining the mapping between high-resolution and low-resolution images; this mapping is the prior constraint, i.e. the external information.
The training model based on the symmetric redundant deep neural network is a symmetric model comprising an encoding part and a symmetric decoding part. The encoding part comprises 5 identical functional modules, each consisting of a convolution (Conv), a batch normalization (Bnorm) and an activation (ReLU) operation; the symmetric decoding part also comprises 5 functional modules, each consisting of a deconvolution, a batch normalization and an activation operation. From left to right, the channel numbers of the 5 encoding modules are 256, 128, 64, 32 and 16, and those of the 5 decoding modules are 16, 32, 64, 128 and 256. A redundant link connects each pair of functional modules, forming the redundant network.
The data in the training set come in pairs: the low-resolution image and the high-resolution image exist as a pair; here the input is the low-resolution image and the output the high-resolution image. The mapping between high-resolution and low-resolution images means adjusting the parameters, connection weights or structure of the training model according to the existing data pairs, so that input and output form a mapping and the error between the output high-resolution image and the true high-resolution image is as small as possible.
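The "redundant" (residual) training target, i.e. the difference between the estimate and the label, can be sketched as follows. The nearest-neighbour upscaling used to form the coarse estimate is an assumption, since the patent does not fix the interpolation used before the network.

```python
import numpy as np

def upscale_nearest(y, scale=2):
    # simple nearest-neighbour upscaling used as the coarse estimate;
    # the interpolation choice is an illustrative assumption
    return np.repeat(np.repeat(y, scale, axis=0), scale, axis=1)

def residual_target(X_hr, y_lr, scale=2):
    """Residual ("redundant") training: the network learns the difference
    between the high-resolution label and the coarse upscaled estimate."""
    return X_hr - upscale_nearest(y_lr, scale)

X_hr = np.ones((4, 4))
y_lr = np.ones((2, 2))
r = residual_target(X_hr, y_lr)   # what the network would be trained to emit
```

At inference the reconstruction is the coarse estimate plus the predicted residual, so adding the residual back must recover the label exactly on the training pair.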
S31: building the training model based on the symmetric redundant deep neural network, expressed as the layer composition:
$$\hat{X}_i=f_s\big(f_{s-1}(\cdots f_1(y_i)\cdots)\big),\quad i=1,\dots,N$$
where $\{y_i\}_{i=1}^{N}$ denotes the low-resolution image training set corresponding to the high-resolution image training set $\{X_i\}_{i=1}^{N}$, $f_s$ denotes the mapping function of each layer of the $s$-layer symmetric redundant deep neural network, and $f_1(\cdot)$ denotes the output of the first layer of the network.
S32: inputting the training set into the training model for N iterations of training to obtain the mapping between high-resolution and low-resolution images, where N is a positive integer greater than or equal to 1; in this embodiment N is 100.
A training model based on the symmetric redundant deep neural network is built to learn the gradients of the image at each order and obtain the redundant information; "redundant" refers to the information trained during network training, i.e. the residual information: the difference between the estimate and the label.
S4: building a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint using the half-quadratic iteration method, and solving it to obtain the high-resolution reconstructed image.
S41: building the super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint:
$$\hat{X}=\arg\min_{X,z}\;F(X,y)+\gamma\|X-z\|^2+\lambda\Phi(z)$$
where $\hat{X}$ denotes the high-resolution reconstructed image, $\arg\min$ denotes the minimization function, $z$ denotes the auxiliary variable, $X$ denotes the high-resolution image to be solved, $y$ denotes the low-resolution image, $\lambda$ and $\gamma$ denote weight parameters in the solution process, $\Phi(\cdot)$ denotes the prior constraint, $\|\cdot\|$ denotes the norm, and $F(X,y)$ denotes the multiple differential consistency constraint model.
S42: solving the super-resolution reconstruction model by the half-quadratic iteration method and outputting the high-resolution reconstructed image, which specifically comprises:
S421: using the half-quadratic iteration method, solving the $(X,z)$ problem according to the iteration formulas
$$X^{k+1}=\arg\min_{X}\;F(X,y)+\gamma\|X-z^{k}\|^2$$
$$z^{k+1}=\arg\min_{z}\;\gamma\|X^{k+1}-z\|^2+\lambda\Phi(z)$$
to obtain the solution formula, where $\arg\min$ is the minimization function, $X^{k+1}$ and $z^{k+1}$ denote the high-resolution image and the auxiliary variable after $k+1$ iterations respectively, $y$ denotes the low-resolution image, $\lambda$ and $\gamma$ denote weight parameters in the solution process, $\Phi(\cdot)$ denotes the prior constraint, $\|\cdot\|$ denotes the norm, $z$ denotes the auxiliary variable, and $F(X,y)$ denotes the multiple differential consistency constraint model.
The solution formula is:
$$X^{k+1}=\Big(\lambda_0 W^{T}W+\lambda_1(\nabla W)^{T}(\nabla W)+\lambda_2(\Delta W)^{T}(\Delta W)+\gamma I\Big)^{-1}\Big(\lambda_0 W^{T}y+\lambda_1(\nabla W)^{T}\nabla y+\lambda_2(\Delta W)^{T}\Delta y+\gamma z^{k}\Big)$$
where $W$ denotes the degradation matrix; $I$ denotes the identity matrix, i.e. the matrix whose diagonal elements are 1 and whose other elements are 0; $X^{k+1}$ denotes the high-resolution image after $k+1$ iterations; $T$ denotes transposition; $\Delta$ denotes the Laplace operator; $\nabla$ denotes the first-order gradient operator; $\nabla W$ denotes the first-order gradient operator applied to the degradation matrix; $\Delta W$ denotes the Laplace operator applied to the degradation matrix; $\nabla y$ denotes the first-order gradient operator applied to the low-resolution image; $y$ denotes the low-resolution image; $\Delta y$ denotes the Laplace operator applied to the low-resolution image; $\lambda_i$ denotes the weight parameter of the $i$th-order gradient; $\gamma$ denotes a weight parameter in the solution process; and $z^{k}$ denotes the auxiliary variable after $k$ iterations.
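On a 1-D toy problem the closed-form X-update can be written out with dense matrices. The blur kernel, the weights and the signal sizes below are illustrative assumptions, and the gradient operators are applied on the low-resolution grid, matching the consistency terms that compare gradients of y and WX.

```python
import numpy as np

n, m = 8, 4                      # high-res length n, low-res length m
B = np.zeros((n, n))             # 1-D moving-average blur (illustrative)
for i in range(n):
    for j in (i - 1, i, i + 1):
        B[i, j % n] = 1 / 3      # circular boundary for simplicity
S = np.zeros((m, n))             # 2x downsampling operator
S[np.arange(m), 2 * np.arange(m)] = 1.0
W = S @ B                        # degradation matrix W

def diff_op(k):
    # first-order forward-difference matrix on length-k signals
    D = -np.eye(k) + np.eye(k, k=1)
    D[-1, :] = 0.0               # no wrap at the boundary
    return D

D1 = diff_op(m)
D2 = D1 @ D1                     # second difference, a 1-D Laplacian
lam = (1.0, 0.5, 0.25)           # illustrative weights lambda_0..lambda_2
gamma = 0.1
ops = (np.eye(m), D1, D2)        # psi_0, psi_1, psi_2 as matrices

y = np.linspace(0.0, 1.0, m)     # observed low-resolution signal
z = np.zeros(n)                  # auxiliary variable from the last iteration

A = sum(l * (P @ W).T @ (P @ W) for l, P in zip(lam, ops)) + gamma * np.eye(n)
b = sum(l * (P @ W).T @ (P @ y) for l, P in zip(lam, ops)) + gamma * z
x_next = np.linalg.solve(A, b)   # the X^{k+1} update
```

Because of the gamma*I term the system matrix is symmetric positive definite, so the linear solve is always well posed.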
W is the degradation matrix; the degradation process it describes includes downsampling, blur, deformation and similar effects, but W itself is unknown and must be determined in the subsequent solution process.
S422: determining from the solution formula the high-resolution image $X^{k+1}$ after $k+1$ iterations.
S423: determining, based on the mapping between low-resolution and high-resolution images, the auxiliary variable $z^{k+1}$ of the $(k+1)$th iteration from the high-resolution image $X^{k+1}$.
S424: judging whether the difference between two successive high-resolution images is smaller than a preset minimum value, i.e. $|X^{k}-X^{k+1}|<\varepsilon$; if it is, outputting the high-resolution reconstructed image $\hat{X}=X^{k+1}$; if the difference is greater than or equal to the preset minimum value, setting $k=k+1$ and returning to step S421.
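The half-quadratic iteration of steps S421 to S424 can be sketched end to end on a 1-D toy problem. A simple moving-average smoother stands in for the trained network prior (a deliberate simplification), and the data term keeps only the 0th-order consistency for brevity; both substitutions are assumptions, not the patent's configuration.

```python
import numpy as np

def box_smooth(x):
    # stand-in for the learned prior step z = denoise(X): a 3-tap moving
    # average; the patent uses the trained network here
    xp = np.pad(x, 1, mode='edge')
    return (xp[:-2] + xp[1:-1] + xp[2:]) / 3.0

def hqs_sr(y, W, gamma=0.1, eps=1e-6, max_iter=200):
    """Half-quadratic splitting: alternate the least-squares X-update with
    the prior (denoising) z-update until |X_k - X_{k+1}| < eps (step S424)."""
    n = W.shape[1]
    X = W.T @ y                          # crude initialisation
    z = X.copy()
    A = W.T @ W + gamma * np.eye(n)      # system matrix of the X-update
    for _ in range(max_iter):
        X_new = np.linalg.solve(A, W.T @ y + gamma * z)   # step S422
        z = box_smooth(X_new)                             # step S423
        if np.max(np.abs(X_new - X)) < eps:               # step S424
            return X_new
        X = X_new
    return X

W = np.kron(np.eye(4), np.full((1, 2), 0.5))  # 2x average-downsampling, 8 -> 4
y = np.ones(4)
X_hat = hqs_sr(y, W)
```

For a constant observation the fixed point of the iteration is the constant high-resolution signal, which the loop reaches in a handful of iterations.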
The beneficial effects of the present invention are:
1. The invention seeks a point of balance: starting from the image's own features, it uses consistency constraints at multiple feature levels to reduce manually biased constraints, while using an external image library to train the feature set and multiple gradient features to complete the features missing from the original image, thereby improving the accuracy of the reconstructed super-resolution image.
2. In the invention, obtaining internal information (from the training set) refers to the structure, gradient and detail information of the image. Obtaining external information refers to training the existing training set with the symmetric convolutional deep network so as to gain effective support in structure, gradient and detail and to complete the missing information. At the same time, the half-quadratic iteration algorithm integrates the internal and external information effectively.
3. The invention designs a data constraint model using the multiple-gradient algorithm; the model constrains the image to be estimated through the image framework, the edge structure features and the detail features, effectively integrating the image's internal information.
4. The invention builds a symmetric redundant network, symmetrically divided into a 5-layer convolutional part and a 5-layer deconvolutional part. The convolutional part obtains the multiple gradient operators by convolution, reducing the spatial dimension; the deconvolutional part uses the learning results to integrate channel information and complete the spatial detail information. The network and the proposed data constraint model act on the reconstruction process simultaneously, further improving the accuracy of the reconstructed super-resolution image.
To verify the effectiveness of the invention, this embodiment uses the classic Lena image from the image-processing field as the test image, and a training set of 5000 images drawn from public databases such as ImageNet and BSD. To make the experimental results convincing, several subjective and objective evaluation indices are used: the objective indices are the PSNR and SSIM values (the larger these two values, the better the result), and the subjective evaluation is a visual comparison of the reconstruction results. Fig. 3(a) shows the original image; Fig. 3(b) the reconstruction by an interpolation algorithm; Fig. 3(c) the reconstruction by the SRCNN method; Fig. 3(d) by the DRNN method; Fig. 3(e) by the SISR method; and Fig. 3(f) by the method of the present invention.
To make the reconstruction results easier to distinguish, point-to-point difference images between each reconstruction and the original were produced. Fig. 3(g) is the difference image between the original of Fig. 3(a) and itself; Fig. 3(h) is the difference image between the reconstruction of Fig. 3(b) and the original of Fig. 3(a); Fig. 3(i) corresponds in the same way to Fig. 3(c); Fig. 3(j) to Fig. 3(d); Fig. 3(k) to Fig. 3(e); and Fig. 3(l) to Fig. 3(f). In a difference image, white dots indicate error: the more white dots, the larger the difference and the worse the reconstruction. Among Figs. 3(h) to 3(l), Fig. 3(l) has the fewest white dots, almost none, showing that the reconstruction obtained by the method of the present application is the best.
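PSNR, one of the objective indices used above, can be computed as follows; SSIM is omitted here because its windowed statistics do not reduce to a faithful few-line sketch.

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better reconstruction."""
    ref = np.asarray(ref, dtype=float)
    rec = np.asarray(rec, dtype=float)
    mse = np.mean((ref - rec) ** 2)      # mean squared error
    if mse == 0:
        return float('inf')              # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((4, 4))
rec = np.full((4, 4), 25.5)              # uniform error of 25.5 grey levels
val = psnr(ref, rec)                     # 10*log10(255^2 / 25.5^2) = 20 dB
```

A uniform error of one tenth of the peak value gives exactly 20 dB, which is a convenient check of the formula.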
Table 1 shows, for the six low-resolution images selected in this embodiment (Cameraman, Parrots, Lena, Boat, Man, Couple) at a 3x resolution increase, the reconstruction results obtained by the existing A+, SRCRC, SRCRNS, DRRN, VDSR, SRCNN, SISR and SRMMPM methods and by the method of the present invention; in this experiment the blur and noise are unknown.
Table 1. Comparison of reconstruction results
[Table 1 is reproduced as an image in the original publication; for each test image and method it lists the PSNR value before and the SSIM value after the "\" separator.]
Table 1 compares the numerical results of the proposed method with the traditional algorithms at a 3x resolution increase. In Table 1, the value before the "\" is the PSNR and the value after it is the SSIM; the larger these two values, the better the reconstruction. It follows that the method of the present application achieves the best reconstruction compared with the existing traditional algorithms.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and for the same or similar parts the embodiments may be referred to one another.
Specific examples are used herein to explain the principles and implementations of the present invention; the description of the above embodiments is intended only to help understand the method of the invention and its core idea. Meanwhile, a person of ordinary skill in the art may, following the idea of the invention, make changes to the specific implementation and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (9)

  1. A single-frame image super-resolution reconstruction method, characterized in that the method comprises:
    S1: establishing a consistency correspondence between the low-resolution image and the high-resolution image, building observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, respectively, and thereby determining a multiple differential consistency constraint model;
    S2: constructing, according to the type of image to be reconstructed, a training set that pairs high-resolution and low-resolution images at the structure, edge and texture levels;
    S3: building a training model based on a symmetric redundant deep neural network, inputting the training set into the training model for training, and obtaining the mapping between high-resolution and low-resolution images, the mapping serving as the prior constraint;
    S4: building a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint using the half-quadratic iteration method, and solving it to obtain the high-resolution reconstructed image.
  2. The single-frame image super-resolution reconstruction method according to claim 1, characterized in that S1 specifically comprises:
    S11: establishing a consistency correspondence between the low-resolution image and the high-resolution image;
    S12: based on the consistency correspondence, building observation models for structure, edges and texture from the obtained 0th-, 1st- and 2nd-order gradients, respectively;
    S13: determining the multiple differential consistency constraint model from the observation models.
  3. The single-frame image super-resolution reconstruction method according to claim 1, characterized in that the observation models for structure, edges and texture, built from the obtained 0th-, 1st- and 2nd-order gradients on the basis of the consistency correspondence, are given by:
    $$\Psi_i(X,y)=\left\|\psi_i(y)-\psi_i(WX)\right\|^2,\quad i=0,1,2$$
    where $\Psi_i(X,y)$ denotes the correspondence, i.e. the observation model, between the low-resolution image $y$ and the high-resolution image $X$ to be solved based on the $i$th-order gradient; $i$ takes the values 0, 1 and 2, marking the 0th-, 1st- and 2nd-order gradients respectively; $\psi_i(y)$ denotes the $i$th-order gradient of the low-resolution image $y$; $\psi_i(WX)$ denotes the $i$th-order gradient of the fitted low-resolution image $WX$; and $\|\cdot\|$ denotes the norm.
  4. The single-frame image super-resolution reconstruction method according to claim 3, characterized in that the multiple differential consistency constraint model determined from the observation models is:
    $$F(X,y)=\sum_{i=0}^{2}\lambda_i\Psi_i(X,y)$$
    where $F(X,y)$ denotes the multiple differential consistency constraint model and $\lambda_i$ is the weight parameter of the $i$th-order gradient.
  5. The single-frame image super-resolution reconstruction method according to claim 1, characterized in that S3 specifically comprises:
    S31: building a training model based on a symmetric redundant deep neural network;
    S32: inputting the training set into the training model for N iterations of training to obtain the mapping between high-resolution and low-resolution images, where N is a positive integer greater than or equal to 1.
  6. The single-frame image super-resolution reconstruction method according to claim 1, characterized in that S4 specifically comprises:
    S41: building a super-resolution reconstruction model from the multiple differential consistency constraint model and the prior constraint;
    S42: solving the super-resolution reconstruction model by the half-quadratic iteration method and outputting the high-resolution reconstructed image.
  7. The single-frame image super-resolution reconstruction method according to claim 6, characterized in that the super-resolution reconstruction model built from the multiple differential consistency constraint model and the prior constraint is:
    $$\hat{X}=\arg\min_{X,z}\;F(X,y)+\gamma\|X-z\|^2+\lambda\Phi(z)$$
    where $\hat{X}$ denotes the high-resolution reconstructed image, $\arg\min$ denotes the minimization function, $z$ denotes the auxiliary variable, $X$ denotes the high-resolution image to be solved, $y$ denotes the low-resolution image, $\lambda$ and $\gamma$ denote weight parameters in the solution process, $\Phi(\cdot)$ denotes the prior constraint, $\|\cdot\|$ denotes the norm, and $F(X,y)$ denotes the multiple differential consistency constraint model.
  8. The single-frame image super-resolution reconstruction method according to claim 7, characterized in that S42 specifically comprises:
    S421: using the half-quadratic iteration method, solving the $(X,z)$ problem according to the iteration formulas
    $$X^{k+1}=\arg\min_{X}\;F(X,y)+\gamma\|X-z^{k}\|^2$$
    $$z^{k+1}=\arg\min_{z}\;\gamma\|X^{k+1}-z\|^2+\lambda\Phi(z)$$
    to obtain the solution formula, where $\arg\min$ is the minimization function, $X^{k+1}$ and $z^{k+1}$ denote the high-resolution image and the auxiliary variable after $k+1$ iterations respectively, $y$ denotes the low-resolution image, $\lambda$ and $\gamma$ denote weight parameters in the solution process, $\Phi(\cdot)$ denotes the prior constraint, $\|\cdot\|$ denotes the norm, $z$ denotes the auxiliary variable, and $F(X,y)$ denotes the multiple differential consistency constraint model;
    S422: determining from the solution formula the high-resolution image $X^{k+1}$ after $k+1$ iterations;
    S423: determining, based on the mapping between low-resolution and high-resolution images, the auxiliary variable $z^{k+1}$ of the $(k+1)$th iteration from the high-resolution image $X^{k+1}$;
    S424: judging whether the difference between two successive high-resolution images is smaller than a preset minimum value; if it is, outputting the high-resolution reconstructed image; if it is greater than or equal to the preset minimum value, setting $k=k+1$ and returning to step S421.
  9. The single-frame image super-resolution reconstruction method according to claim 8, characterized in that the solution formula is:
    $$X^{k+1}=\Big(\lambda_0 W^{T}W+\lambda_1(\nabla W)^{T}(\nabla W)+\lambda_2(\Delta W)^{T}(\Delta W)+\gamma I\Big)^{-1}\Big(\lambda_0 W^{T}y+\lambda_1(\nabla W)^{T}\nabla y+\lambda_2(\Delta W)^{T}\Delta y+\gamma z^{k}\Big)$$
    where $W$ denotes the degradation matrix, $I$ denotes the identity matrix, $X^{k+1}$ denotes the high-resolution image after $k+1$ iterations, $T$ denotes transposition, $\Delta$ denotes the Laplace operator, $\nabla$ denotes the first-order gradient operator, $\nabla W$ denotes the first-order gradient operator applied to the degradation matrix, $\Delta W$ denotes the Laplace operator applied to the degradation matrix, $\nabla y$ denotes the first-order gradient operator applied to the low-resolution image, $y$ denotes the low-resolution image, $\Delta y$ denotes the Laplace operator applied to the low-resolution image, $\lambda_i$ denotes the weight parameter of the $i$th-order gradient, $\gamma$ denotes a weight parameter in the solution process, and $z^{k}$ denotes the auxiliary variable after $k$ iterations.
PCT/CN2020/098001 2019-08-08 2020-06-24 Single-frame image super-resolution reconstruction method WO2021022929A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
ZA2021/00526A ZA202100526B (en) 2019-08-08 2021-01-25 Single-frame image super-resolution reconstruction method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910728888.3A CN110443768B (zh) 2019-08-08 2019-08-08 基于多重一致性约束的单帧图像超分辨率重建方法
CN201910728888.3 2019-08-08

Publications (1)

Publication Number Publication Date
WO2021022929A1 true WO2021022929A1 (zh) 2021-02-11

Family

ID=68433878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098001 WO2021022929A1 (zh) 2019-08-08 2020-06-24 一种单帧图像超分辨率重建方法

Country Status (3)

Country Link
CN (1) CN110443768B (zh)
WO (1) WO2021022929A1 (zh)
ZA (1) ZA202100526B (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112927138A (zh) * 2021-03-19 2021-06-08 重庆邮电大学 一种基于即插即用的磁共振成像超分辨重建系统及方法
CN112967185A (zh) * 2021-02-18 2021-06-15 复旦大学 基于频率域损失函数的图像超分辨率算法
CN112991174A (zh) * 2021-03-13 2021-06-18 长沙学院 一种提高单帧红外图像分辨率的方法与系统
CN112990053A (zh) * 2021-03-29 2021-06-18 腾讯科技(深圳)有限公司 图像处理方法、装置、设备及存储介质
CN115035230A (zh) * 2022-08-12 2022-09-09 阿里巴巴(中国)有限公司 视频渲染处理方法、装置、设备及存储介质
CN115063293A (zh) * 2022-05-31 2022-09-16 北京航空航天大学 采用生成对抗网络的岩石显微图像超分辨率重建方法
CN116452425A (zh) * 2023-06-08 2023-07-18 常州星宇车灯股份有限公司 图像超分辨率重建方法、设备及介质
CN117474763A (zh) * 2023-12-26 2024-01-30 青岛埃克曼科技有限公司 基于神经网络的沿海低分辨率水深数据高分辨率化方法

Families Citing this family (2)

Publication number Priority date Publication date Assignee Title
CN110443768B (zh) * 2019-08-08 2023-05-12 齐鲁工业大学 基于多重一致性约束的单帧图像超分辨率重建方法
CN113747099B (zh) * 2020-05-29 2022-12-06 华为技术有限公司 视频传输方法和设备

Citations (4)

Publication number Priority date Publication date Assignee Title
CN108550115A (zh) * 2018-04-25 2018-09-18 中国矿业大学 一种图像超分辨率重建方法
CN109214989A (zh) * 2018-09-04 2019-01-15 四川大学 基于多方向特征预测先验的单幅图像超分辨率重建方法
CN109559278A (zh) * 2018-11-28 2019-04-02 山东财经大学 基于多特征学习的超分辨图像重建方法及系统
CN110443768A (zh) * 2019-08-08 2019-11-12 齐鲁工业大学 基于多重微分一致性约束和对称冗余网络的单帧图像超分辨率重建方法

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN107292819A (zh) * 2017-05-10 2017-10-24 重庆邮电大学 一种基于边缘细节保护的红外图像超分辨率重建方法
CN107492070B (zh) * 2017-07-10 2019-12-03 华北电力大学 一种双通道卷积神经网络的单图像超分辨率计算方法
CN107784628B (zh) * 2017-10-18 2021-03-19 南京大学 一种基于重建优化和深度神经网络的超分辨率实现方法

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN108550115A (zh) * 2018-04-25 2018-09-18 中国矿业大学 一种图像超分辨率重建方法
CN109214989A (zh) * 2018-09-04 2019-01-15 四川大学 基于多方向特征预测先验的单幅图像超分辨率重建方法
CN109559278A (zh) * 2018-11-28 2019-04-02 山东财经大学 基于多特征学习的超分辨图像重建方法及系统
CN110443768A (zh) * 2019-08-08 2019-11-12 齐鲁工业大学 基于多重微分一致性约束和对称冗余网络的单帧图像超分辨率重建方法

Non-Patent Citations (4)

Title
CHAO DONG; LOY CHEN CHANGE; HE KAIMING; TANG XIAOOU: "Image Super-Resolution Using Deep Convolutional Networks", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 38, no. 2, 1 June 2015 (2015-06-01), pages 1 - 14, XP055572436, DOI: 10.1109/TPAMI.2015.2439281 *
LIANG YUDONG; WANG JINJUN; ZHOU SANPING; GONG YIHONG; ZHENG NANNING: "Incorporating image priors with deep convolutional neural networks for image super-resolution", NEUROCOMPUTING, vol. 194, 5 March 2016 (2016-03-05), pages 340 - 347, XP029523308, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2016.02.046 *
SUN XU, XIAO-GUANG LI, JIA-FENG LI, LI ZHUO: "Review on Deep Learning Based Image Super-resolution Restoration Algorithms", ACTA AUTOMATICA SINICA, vol. 43, no. 5, 15 May 2017 (2017-05-15), pages 697 - 709, XP055777613, ISSN: 0254-4156, DOI: 10.16383/j.aas.2017.c160629 *
ZHAO SHENGRONG: "Research on Variational Bayesian Image Super Resolution Algorithms Based on Adaptive Prior Models", CHINESE DOCTORAL DISSERTATIONS FULL-TEXT DATABASE, 1 May 2016 (2016-05-01), pages 1 - 167, XP055777596 *

Cited By (13)

Publication number Priority date Publication date Assignee Title
CN112967185A (zh) * 2021-02-18 2021-06-15 复旦大学 基于频率域损失函数的图像超分辨率算法
CN112991174A (zh) * 2021-03-13 2021-06-18 长沙学院 一种提高单帧红外图像分辨率的方法与系统
CN112927138A (zh) * 2021-03-19 2021-06-08 重庆邮电大学 一种基于即插即用的磁共振成像超分辨重建系统及方法
CN112927138B (zh) * 2021-03-19 2023-09-19 重庆邮电大学 一种基于即插即用的磁共振成像超分辨重建系统及方法
CN112990053A (zh) * 2021-03-29 2021-06-18 腾讯科技(深圳)有限公司 图像处理方法、装置、设备及存储介质
CN112990053B (zh) * 2021-03-29 2023-07-25 腾讯科技(深圳)有限公司 图像处理方法、装置、设备及存储介质
CN115063293B (zh) * 2022-05-31 2024-05-31 北京航空航天大学 采用生成对抗网络的岩石显微图像超分辨率重建方法
CN115063293A (zh) * 2022-05-31 2022-09-16 北京航空航天大学 采用生成对抗网络的岩石显微图像超分辨率重建方法
CN115035230A (zh) * 2022-08-12 2022-09-09 阿里巴巴(中国)有限公司 视频渲染处理方法、装置、设备及存储介质
CN116452425A (zh) * 2023-06-08 2023-07-18 常州星宇车灯股份有限公司 图像超分辨率重建方法、设备及介质
CN116452425B (zh) * 2023-06-08 2023-09-22 常州星宇车灯股份有限公司 图像超分辨率重建方法、设备及介质
CN117474763A (zh) * 2023-12-26 2024-01-30 青岛埃克曼科技有限公司 基于神经网络的沿海低分辨率水深数据高分辨率化方法
CN117474763B (zh) * 2023-12-26 2024-04-26 青岛埃克曼科技有限公司 基于神经网络的沿海低分辨率水深数据高分辨率化方法

Also Published As

Publication number Publication date
ZA202100526B (en) 2022-09-28
CN110443768B (zh) 2023-05-12
CN110443768A (zh) 2019-11-12

Similar Documents

Publication Publication Date Title
WO2021022929A1 (zh) 一种单帧图像超分辨率重建方法
Li et al. Underwater scene prior inspired deep underwater image and video enhancement
CN108734659B (zh) 一种基于多尺度标签的亚像素卷积图像超分辨率重建方法
CN111784602B (zh) 一种生成对抗网络用于图像修复的方法
CN109671023A (zh) 一种人脸图像超分辨率二次重建方法
CN103093444B (zh) 基于自相似性和结构信息约束的图像超分辨重建方法
CN102902961B (zh) 基于k近邻稀疏编码均值约束的人脸超分辨率处理方法
CN113177882B (zh) 一种基于扩散模型的单帧图像超分辨处理方法
CN109214989B (zh) 基于多方向特征预测先验的单幅图像超分辨率重建方法
CN105513033B (zh) 一种非局部联合稀疏表示的超分辨率重建方法
CN110136060B (zh) 基于浅层密集连接网络的图像超分辨率重建方法
CN105550989B (zh) 基于非局部高斯过程回归的图像超分辨方法
CN107341776A (zh) 基于稀疏编码与组合映射的单帧超分辨率重建方法
CN109523513A (zh) 基于稀疏重建彩色融合图像的立体图像质量评价方法
CN107330854B (zh) 一种基于新型模板的图像超分辨率增强方法
CN114170088A (zh) 一种基于图结构数据的关系型强化学习系统及方法
CN108492270A (zh) 一种基于模糊核估计和变分重构的超分辨率方法
CN116934592A (zh) 一种基于深度学习的图像拼接方法、系统、设备及介质
CN112785502A (zh) 一种基于纹理迁移的混合相机的光场图像超分辨率方法
CN114897694A (zh) 基于混合注意力和双层监督的图像超分辨率重建方法
CN112598604A (zh) 一种盲脸复原方法及系统
CN116523985B (zh) 一种结构和纹理特征引导的双编码器图像修复方法
CN113240581A (zh) 一种针对未知模糊核的真实世界图像超分辨率方法
CN117252782A (zh) 基于条件去噪扩散和掩膜优化的图像修复方法
Hesabi et al. Structure and texture image inpainting

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20850226

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20850226

Country of ref document: EP

Kind code of ref document: A1