CN112598575B - An image information fusion and super-resolution reconstruction method based on feature processing


Publication number: CN112598575B
Authority: CN (China)
Prior art keywords: feature, image, resolution, low, block
Legal status: Expired - Fee Related
Application number: CN202011528460.3A
Other languages: Chinese (zh)
Other versions: CN112598575A
Inventors: 傅志中, 吴宇峰, 徐进, 李晓峰
Current Assignee: University of Electronic Science and Technology of China
Original Assignee: University of Electronic Science and Technology of China
Legal events: application filed by University of Electronic Science and Technology of China; priority to CN202011528460.3A; publication of CN112598575A; application granted; publication of CN112598575B; status Expired - Fee Related.

Classifications

    • G06T3/4053 Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06T3/4046 Scaling of whole images or parts thereof using neural networks


Abstract

The invention discloses an image information fusion and super-resolution reconstruction method based on feature processing, belonging to the technical field of image processing. First, the high-resolution reference image is input into an image preprocessing module to obtain a low-resolution reference image. Second, the low-resolution input image, the high-resolution reference image, and the low-resolution reference image are input into a feature processing module, which performs feature processing on them to realize feature-information matching, transfer, and fusion between the high-resolution reference image and the low-resolution input image, yielding a fused feature image. Finally, the low-resolution input image and the fused feature image are input into a super-resolution reconstruction module to obtain the final super-resolution reconstructed image. Because the method matches and fuses feature information of the images rather than pixel information, it can fully exploit the rich detail and texture information carried by the high-resolution reference image, thereby effectively improving the super-resolution reconstruction quality of the low-resolution input image.

Description

An image information fusion and super-resolution reconstruction method based on feature processing

Technical field

The invention belongs to the technical field of image processing, and specifically relates to an image information fusion and super-resolution reconstruction method based on feature processing.

Background

Super-resolution reconstruction is a technique that obtains a high-resolution image (resolution above a specified value) by processing a low-resolution image (resolution below a specified value). Classified by the number of input images, super-resolution reconstruction algorithms can be divided into single-image-based and multi-image-based algorithms.

Single-image super-resolution reconstruction algorithms use only the information of the input low-resolution image for reconstruction. This approach is simple and easy to implement, but because the input low-resolution image itself contains limited information, the quality of the reconstructed image is also limited.

To overcome this limitation, when conditions permit, a high-resolution reference image can be introduced into the super-resolution reconstruction algorithm, and the rich texture detail it contains can be fused into the low-resolution image, thereby improving the quality of the super-resolution reconstruction. This is the basic principle of multi-image-based super-resolution reconstruction algorithms.

To obtain a satisfactory super-resolution reconstruction result, the high-resolution reference image needs to be highly similar to the low-resolution input image to be processed. At the same time, the information matching and fusion algorithm between the high-resolution reference image and the low-resolution input image has a significant impact on the final reconstruction. To improve the reconstruction quality of a multi-image-based algorithm, the information contained in the high-resolution reference image must be accurately matched with that of the low-resolution input image, and the two must be effectively fused. Only then can the high-resolution reference image fully play its guiding role in super-resolution reconstruction and thereby improve reconstruction quality.

However, existing methods all use the pixel information of the images directly for information matching and fusion. Because the high-resolution reference image and the low-resolution input image differ markedly in resolution and sharpness, and also differ in time and viewpoint, pixel-based information matching and fusion suffers from a high mismatch probability, low matching efficiency, and poor super-resolution reconstruction results. These problems restrict the performance and stability of multi-image-based super-resolution reconstruction algorithms.

Summary of the invention

In view of the deficiencies of existing multi-image-based super-resolution reconstruction methods, the purpose of the invention is to propose an image information fusion and super-resolution reconstruction method based on feature processing that effectively improves the super-resolution reconstruction quality of a low-resolution input image.

The image information fusion and super-resolution reconstruction method based on feature processing of the present invention comprises the following steps:

Perform image preprocessing on the high-resolution reference image and the low-resolution input image (the image to be reconstructed):

Downsample the high-resolution reference image to obtain a low-resolution reference image whose sharpness matches that of the low-resolution input image, and apply the same upsampling to both the low-resolution input image and the low-resolution reference image.

Perform feature extraction on the high-resolution reference image, the upsampled low-resolution reference image, and the low-resolution input image to obtain a high-resolution reference feature map, a low-resolution reference feature map, and a low-resolution input feature map, and then perform image information matching, transfer, and fusion on the obtained feature maps:

Divide the high-resolution reference feature map, the low-resolution reference feature map, and the low-resolution input feature map into blocks.

Traverse each sub-block of the low-resolution input feature map, search the low-resolution reference feature map for its first optimal matching feature sub-block, and then, using the spatial mapping between the high-resolution reference feature map and the low-resolution reference feature map, determine from the image position of the first optimal matching feature sub-block the current sub-block's second optimal matching feature sub-block in the high-resolution reference feature map.

Reorganize the feature map based on the image position of each sub-block of the low-resolution input feature map and its second optimal matching feature sub-block to obtain a reorganized feature image.

Perform image information fusion on the feature maps: fuse the reorganized feature image with the low-resolution input feature map to obtain a fused feature image.

Perform super-resolution reconstruction of the low-resolution input image based on the fused feature image to obtain the super-resolution reconstructed image.

Further, the present invention performs the super-resolution reconstruction with a super-resolution reconstruction network of encoder-decoder structure, in which the encoding part extracts features from the low-resolution input image and fuses the extracted feature image with the fused feature image by channel-wise concatenation, and the decoding part reconstructs the fused features and outputs the super-resolution reconstructed image.

Further, in the present invention, a feature extraction network based on a convolutional neural network performs feature extraction on the high-resolution reference image, the upsampled low-resolution reference image, and the low-resolution input image. The feature extraction network comprises several levels of convolution blocks connected in sequence, with pooling layers between blocks; each convolution block comprises several sequentially connected sub-layers, each consisting of a convolution layer followed by a nonlinear activation layer. A designated nonlinear activation layer in each convolution block serves as that level's feature map output, yielding multi-level high-resolution reference feature maps, low-resolution reference feature maps, and low-resolution input feature maps. Image information matching, transfer, and fusion are then performed on feature maps of the same level to obtain multi-level fused feature images.

Further, for the super-resolution reconstruction, the constructed super-resolution reconstruction network comprises several encoders and decoders. Along the direction of forward propagation, the encoders are defined as the 1st through Nth encoders and the decoders as the Nth through 1st decoders, where N equals the number of convolution-block levels in the feature extraction network. The first and second encoders are connected through a concatenation layer; from the second encoder onward, each encoder is followed in sequence by a downsampling module and a concatenation layer (in particular, the downsampling module and concatenation layer sit between the Nth encoder and the decoder path). Each concatenation layer also takes a fused feature image of the designated level as input (along the direction of forward propagation, the resolution of the fused feature images fed to successive concatenation layers decreases), and the feature image extracted by the encoder is fused with the input fused feature image by channel-wise concatenation. Adjacent decoders are connected in sequence through an upsampling module and a normalization layer, and the rth encoder is also connected to the (r-1)th decoder through a skip connection (that is, the output of the rth encoder is superimposed on the output of the normalization layer feeding the rth decoder, and the sum is input to the (r-1)th decoder), where 1 < r ≤ N. The 1st decoder is followed by a reconstruction layer that reconstructs the feature image output by the 1st decoder into a super-resolution residual image; finally, the residual image is added to the low-resolution input image to obtain the final super-resolution reconstructed image. That is, in the present invention, each encoder output is paired with a concatenation layer for concatenating the fused feature map; every encoder except the first is equipped with a downsampling layer to downsample and resize the feature map; and every decoder except the first is equipped with an upsampling layer and a normalization layer.
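The shape bookkeeping of this encoder-decoder arrangement can be illustrated with a toy numpy sketch. This is not the patented network: the convolution blocks are replaced by identity stand-ins, the additive skip connections are shown as concatenations (identity stand-ins do not align channel counts), and N = 3 levels with channel counts 4, 8, 16 for the fused feature images are assumptions made only for illustration.

```python
import numpy as np

def down(x):
    # 2x average pooling, a stand-in for the downsampling module
    h, w, c = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    # 2x nearest-neighbor repetition, a stand-in for the upsampling module
    return x.repeat(2, axis=0).repeat(2, axis=1)

H, W = 32, 32
lr_in = np.random.rand(H, W, 3)           # preprocessed low-resolution input
# hypothetical fused feature images, one per level, resolution halving per level
F1 = np.random.rand(H, W, 4)
F2 = np.random.rand(H // 2, W // 2, 8)
F3 = np.random.rand(H // 4, W // 4, 16)

# encoder path: each concatenation layer fuses the encoder output with a fused feature image
e1 = np.concatenate([lr_in, F1], axis=2)   # encoder 1 -> concat (no downsampling)
e2 = np.concatenate([down(e1), F2], axis=2)  # encoder 2: downsample, then concat
e3 = np.concatenate([down(e2), F3], axis=2)  # encoder 3 (level N)

# decoder path: upsample between adjacent decoders; skips shown as concat here
d3 = e3
d2 = np.concatenate([up(d3), e2], axis=2)  # skip connection from encoder 2
d1 = np.concatenate([up(d2), e1], axis=2)  # skip connection from encoder 1

residual = d1[..., :3]                     # reconstruction-layer stand-in: 3-channel residual
sr = lr_in + residual                      # residual added to the low-resolution input
print(sr.shape)
```

The sketch only verifies that the resolutions of the fused feature images, the skip connections, and the final residual line up; the real network replaces every stand-in with trained convolution blocks.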

Further, when downsampling the high-resolution reference image, a pyramid downsampling method can be used. The upsampling can be a simple interpolation algorithm, or an existing single-image super-resolution reconstruction method can be used to perform the upsampling together with a preliminary super-resolution reconstruction.

Further, when performing feature-based image information matching and fusion, the two feature sub-blocks to be matched are treated as vectors, i.e. the image information of each feature sub-block is vectorized, and the similarity between the two vectors is measured by a vector similarity, which can be obtained as a weighted combination of the separately standardized vector cosine distance and vector Manhattan distance. The larger the vector similarity, the more similar the two feature sub-blocks. For a given feature sub-block, the candidate feature sub-block at which the vector similarity attains its maximum is the optimal matching feature sub-block.

Further, when performing feature-based image information fusion, the low-resolution input feature map and the reorganized feature image (obtained by reorganizing the feature map according to the image position of each sub-block of the low-resolution input feature map and its second optimal matching feature sub-block) are each standardized and then linearly fused, and the result of the linear fusion is standardized again to obtain the fused feature image. That is, the fusion of image information is realized through an overall migration of the data distribution space.
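The standardize-blend-standardize step above can be sketched as follows; the blend weight `alpha`, the variance floor `1e-8`, and the toy feature-map sizes are illustrative assumptions, not values from the patent.

```python
import numpy as np

def standardize(x):
    # zero-mean, unit-variance standardization of a feature map
    return (x - x.mean()) / (x.std() + 1e-8)

def fuse_linear(lr_feat, reorg_feat, alpha=0.7):
    # standardize both maps, blend linearly, then standardize the blend again,
    # realizing the fusion as an overall migration of the data distribution
    blended = alpha * standardize(lr_feat) + (1 - alpha) * standardize(reorg_feat)
    return standardize(blended)

lr_feat = np.random.rand(16, 16, 8)      # low-resolution input feature map
reorg_feat = np.random.rand(16, 16, 8)   # reorganized feature image
fused = fuse_linear(lr_feat, reorg_feat)
print(fused.shape)
```

The final standardization guarantees that the fused feature image lands in the same zero-mean, unit-variance distribution space regardless of the blend weight.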

Further, the feature-based image information fusion can also be realized by linear guided filtering, with the low-resolution input feature map as the filter input and the reorganized feature image as the guide template; the output of the linear guided filtering is the fused feature image. The fused feature image is broadly similar to the low-resolution input feature map, but it integrates the key feature information of the guide template (i.e. the reorganized feature image).
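A minimal single-channel linear guided filter in the spirit of this variant might look as below. The patent does not specify the filter internals, so the window radius `r`, the regularizer `eps`, and the box-filter construction are assumptions; the reorganized feature channel serves as the guide `I` and the low-resolution input feature channel as the filter input `p`.

```python
import numpy as np

def box_mean(x, r):
    # mean over a (2r+1)x(2r+1) window, edges replicated, via an integral image
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge")
    ii = np.pad(xp, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = x.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / (k * k)

def guided_filter(I, p, r=2, eps=1e-4):
    # classic linear guided filter: the output is locally linear in the guide I,
    # so it stays close to the input p while inheriting the guide's structure
    mI, mp = box_mean(I, r), box_mean(p, r)
    var_I = box_mean(I * I, r) - mI * mI
    cov_Ip = box_mean(I * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

reorg = np.random.rand(32, 32)   # guide template (one reorganized feature channel)
lr = np.random.rand(32, 32)      # filter input (one low-resolution input feature channel)
fused = guided_filter(reorg, lr)
print(fused.shape)
```

In a multi-channel feature map the filter would be applied per channel; the local linear model is what lets the output remain similar to the input while absorbing the guide's key structure.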

In summary, owing to the above technical solutions, the beneficial effects of the present invention are:

Compared with existing image fusion and super-resolution methods, the present invention uses feature information extracted by a convolutional neural network instead of pixel information for image block matching, realizing information matching, transfer, and fusion between images of different resolutions in the feature domain. This allows the information of the high-resolution reference image to be transferred and fused completely and accurately onto the low-resolution input image, significantly improving its super-resolution reconstruction.

The invention avoids the drawbacks of traditional pixel-based image information matching and fusion and of multi-image-based image super-resolution methods, solves the problem of cross-resolution image matching, and is not limited by the size or number of images.

Brief description of the drawings

Fig. 1 is a schematic diagram of the processing flow of the reconstruction process of the present invention in a specific embodiment;

Fig. 2 is a schematic diagram of the preprocessing process in a specific embodiment;

Fig. 3 is a schematic diagram of the feature extraction network structure in a specific embodiment;

Fig. 4 is a flowchart of feature matching and transfer in a specific embodiment;

Fig. 5 is a flowchart of feature fusion in a specific embodiment;

Fig. 6 is a schematic diagram of the optimized best-match feature search method in a specific embodiment.

Fig. 7 is a schematic diagram of the super-resolution reconstruction process in a specific embodiment, where Fig. 7(a) shows the super-resolution reconstruction network structure and Fig. 7(b) shows the encoder/decoder structure.

Detailed description of the embodiments

To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the embodiments and accompanying drawings.

The image information fusion and super-resolution reconstruction method based on feature processing of the present invention uses the feature information of the images rather than pixel information for information matching and fusion, and can therefore fully exploit the rich detail and texture information carried by the high-resolution reference image, effectively improving the super-resolution reconstruction quality of the low-resolution input image. Referring to Fig. 1, the reconstruction method requires one low-resolution input image and at least one high-resolution reference image as input. First, image preprocessing is performed to obtain a low-resolution reference image and a preprocessed low-resolution input image. Second, feature extraction is performed on the low-resolution input image, the high-resolution reference image, and the low-resolution reference image to obtain their corresponding feature images. Next, feature information matching, transfer, and fusion are performed between the high-resolution reference image and the low-resolution input image to obtain a fused feature image. Finally, super-resolution reconstruction is performed on the low-resolution input image and the fused feature image to obtain the final super-resolution reconstructed image.

Referring to Fig. 2, in this embodiment the image preprocessing based on the high-resolution reference image is as follows: Gaussian pyramid downsampling is applied to the high-resolution reference image to obtain the low-resolution reference image. The low-resolution reference image and the low-resolution input image are then each upsampled by bicubic spline interpolation to enlarge their physical size. The downsampling factor and upsampling factor applied to the high-resolution reference image are reciprocals, ensuring that after upsampling the low-resolution reference image matches the high-resolution reference image in physical size. At the same time, the low-resolution reference image and the low-resolution input image share the same upsampling factor, so that the two are similar in pixel-level sharpness.
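The scale bookkeeping of this preprocessing step can be sketched as follows. For brevity the sketch substitutes average pooling for Gaussian pyramid downsampling and nearest-neighbor repetition for bicubic spline interpolation; the 4x factor and the toy image sizes are assumptions for illustration only.

```python
import numpy as np

def downsample(img, factor):
    # stand-in for Gaussian-pyramid downsampling: average pooling by `factor`
    h, w = img.shape[:2]
    h2, w2 = h // factor * factor, w // factor * factor
    img = img[:h2, :w2]
    return img.reshape(h2 // factor, factor, w2 // factor, factor, -1).mean(axis=(1, 3))

def upsample(img, factor):
    # stand-in for bicubic spline upsampling: nearest-neighbor pixel repetition
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

HR_ref = np.random.rand(64, 64, 3)   # high-resolution reference image
LR_in = np.random.rand(16, 16, 3)    # low-resolution input image
s = 4                                # down/up factors are reciprocal

LR_ref = downsample(HR_ref, s)       # low-resolution reference, 16x16
LR_ref_up = upsample(LR_ref, s)      # back to the physical size of HR_ref
LR_in_up = upsample(LR_in, s)        # same upsampling factor as LR_ref
print(LR_ref.shape, LR_ref_up.shape, LR_in_up.shape)
```

Because the two factors are reciprocal, the upsampled low-resolution reference regains the physical size of the high-resolution reference, and because both low-resolution images share the same upsampling factor, their sharpness degrades in the same way.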

Optionally, during image preprocessing, an existing, mature single-image super-resolution method can be used for the upsampling operation, so that a preliminary single-image super-resolution reconstruction is achieved at the same time. The upsampling method is user-definable. Introducing an existing single-image super-resolution network guarantees that, in the worst case, the final reconstruction is no worse than that of the single-image method. From an optimization perspective, this step also helps the super-resolution reconstruction network converge faster during training.

In this embodiment, feature extraction is realized by a neural network. The constructed feature extraction network is essentially a convolutional neural network comprising several levels of convolution blocks connected through pooling layers; each convolution block comprises a certain number of convolution and nonlinear activation operations, as shown in Fig. 3, where conv denotes a convolution layer, relu a ReLU activation layer, and pooling a pooling layer. The pooling factor of each pooling layer is 2, and convi_j and relui_j distinguish the different convolution and ReLU layers, where i denotes the level and j the layer index within the level. During feature extraction, to keep the input and output image sizes of each layer consistent, all convolution layers use zero padding at the edges.

In this embodiment, the outputs of the three layers relu1_1, relu2_1, and relu3_1 are taken as the feature images extracted by the feature extraction network, named the first-level feature image relu1_1, the second-level feature image relu2_1, and the third-level feature image relu3_1. These represent three different levels of image features. Assuming the input image has size H*W*3 (an RGB three-channel image), the three output feature images have physical sizes H*W*C1, (H/2)*(W/2)*C2, and (H/4)*(W/4)*C3, where C1, C2, and C3 denote the channel counts of the feature images at the respective levels.
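The three-level output sizes can be checked with a toy stand-in for the network of Fig. 3, using one zero-padded 3x3 convolution plus ReLU per level and 2x max pooling between levels. The channel counts C1=4, C2=8, C3=16 and the random, untrained weights are assumptions; the real network has more layers per block and trained parameters, but the spatial-size arithmetic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(x, out_c):
    # one zero-padded 3x3 convolution (random weights) followed by ReLU
    h, w, cin = x.shape
    k = rng.standard_normal((3, 3, cin, out_c)) * 0.1
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w, out_c))
    for dy in range(3):
        for dx in range(3):
            out += xp[dy:dy + h, dx:dx + w] @ k[dy, dx]
    return np.maximum(out, 0.0)

def pool2(x):
    # 2x max pooling between convolution blocks
    h, w, c = x.shape
    return x[: h // 2 * 2, : w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

img = rng.random((32, 32, 3))        # H*W*3 input
f1 = conv_relu(img, 4)               # relu1_1: H * W * C1
f2 = conv_relu(pool2(f1), 8)         # relu2_1: (H/2) * (W/2) * C2
f3 = conv_relu(pool2(f2), 16)        # relu3_1: (H/4) * (W/4) * C3
print(f1.shape, f2.shape, f3.shape)
```

Zero padding keeps each convolution size-preserving, so only the pooling layers halve the spatial resolution, which is exactly what produces the H, H/2, H/4 progression.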

Optionally, when implementing the feature extraction network shown in Fig. 3, the present invention supports importing the parameters of an external image recognition convolutional neural network as the parameters of the feature extraction network. Existing image recognition networks (for example, the VGG network) generally contain image feature extraction components whose parameters, after long use, are stable and mature and can be used directly without concern about unstable feature extraction. Compared with defining and training a custom feature extraction network from scratch, importing the parameters of a mature network effectively reduces the workload without noticeably harming the final super-resolution reconstruction results.

Referring to Fig. 4, in the feature matching and transfer process of this embodiment, the feature map R0_F of the high-resolution reference image, the feature map R1_F of the low-resolution reference image, and the feature map LR_F of the low-resolution input image are first each split into several feature sub-blocks. Each feature sub-block is treated as a vector; all sub-blocks have the same physical size and may spatially overlap one another. Given the i-th feature sub-block LRi of the feature image LR_F, for each feature sub-block of the feature image R1_F the present invention defines a feature matching algorithm that measures the similarity between the two sub-blocks by computing a vector similarity VS, whose formula is:

VS(i, j) = w · ‖IP(LRi, R1j / ‖R1j‖)‖norm - (1 - w) · ‖MD(LRi, R1j)‖norm

Here LRi denotes the i-th feature sub-block of the feature image LR_F, R1j the j-th feature sub-block of the feature image R1_F, and w a weight between 0 and 1. IP denotes the vector inner product and MD the vector Manhattan distance (also called city-block distance). ‖*‖ denotes the vector norm and ‖*‖norm the standardization operation. When computing IP, R1j is first scaled to the unit vector R1j / ‖R1j‖; this removes the influence of the magnitude of R1j, so the inner product IP becomes mathematically equivalent to the cosine distance between the vectors. To overcome the inconsistency between the metrics of IP and MD, their values are each standardized after computation. Specifically, for a given LRi, the IP and MD values of all candidate feature sub-blocks R1j in the feature image R1_F are computed; standardizing the IP values and the MD values of all candidates separately removes the difference in metric scale. The weighted combination is then taken to obtain the final VS value.

As the VS formula shows, a larger VS value indicates a higher similarity between the two feature sub-blocks. Let j* denote the index of the feature sub-block of R1_F that is most similar to the given i-th feature sub-block LR_i of LR_F; R1_{j*} then denotes the best matching feature sub-block in R1_F. j* is the index at which VS(i,j) attains its maximum:

j* = argmax_j VS(i,j)
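The matching rule above can be sketched in NumPy (a hypothetical rendering, not the patented implementation: the z-score normalization and the minus sign on the Manhattan term are assumptions consistent with "larger VS means more similar"):

```python
import numpy as np

def best_match(lr_block, candidates, w=0.5):
    """Return the index j* of the candidate sub-block most similar to lr_block.

    lr_block:   1-D vector (a flattened feature sub-block LR_i)
    candidates: 2-D array, one flattened candidate sub-block R1_j per row
    w:          weight in [0, 1] balancing the cosine term against the
                Manhattan-distance term
    """
    # Inner product against unit-length candidates: equivalent to a
    # cosine-style similarity, as described in the text.
    units = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    ip = units @ lr_block
    # Manhattan (city-block) distance to every candidate.
    md = np.abs(candidates - lr_block).sum(axis=1)

    # Standardize both scores over all candidates so their scales agree.
    z = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    # Larger IP means more similar; larger MD means less similar,
    # hence the subtraction in the weighted combination.
    vs = w * z(ip) - (1 - w) * z(md)
    return int(np.argmax(vs)), vs
```

A sub-block identical to LR_i then maximizes IP and minimizes MD, so it wins under any w in (0, 1).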

Once j* has been determined, the j*-th feature sub-block R1_{j*} of R1_F is fixed; R1_{j*} is called the first optimal matching feature sub-block. Using the spatial position mapping between R0_F and R1_F, the j*-th feature sub-block R0_{j*} of R0_F is then obtained. R0_{j*} is the best matching feature sub-block for the i-th feature sub-block LR_i of LR_F and is called the second optimal matching feature sub-block. From the standpoint of information matching, the information contained in the second optimal matching feature sub-block R0_{j*} is exactly what the i-th feature sub-block LR_i of LR_F needs.

The final step is feature transfer: the second optimal matching feature sub-block R0_{j*} is placed at a specific position in the reorganized feature image RC_F. The reorganized feature image has the same size as LR_F, and the position at which R0_{j*} is placed in RC_F is the relative position that the i-th feature sub-block LR_i occupies in LR_F.

For every feature sub-block of LR_F, the matching procedure above always yields a second optimal matching feature sub-block in R0_F, which is then placed at the corresponding position of the reorganized feature image according to the relative position of the given feature sub-block. Because different feature sub-blocks may overlap one another during splitting, the second optimal matching feature sub-blocks may also overlap when they are placed into the reorganized feature image. When multiple feature sub-blocks overlap spatially, the conflict is resolved by averaging the overlapping elements at each spatial position.
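A minimal sketch of this transfer-with-averaging step (an assumed NumPy form; the function name `assemble_rc_f` and the single-channel 2-D layout are illustrative simplifications, the actual feature maps are multi-channel):

```python
import numpy as np

def assemble_rc_f(shape, placements):
    """Fill the reorganized feature image RC_F from matched sub-blocks.

    shape:      (H, W) of RC_F (same as LR_F)
    placements: list of (top, left, block), where block is the 2-D second
                optimal matching sub-block placed at LR_i's relative position
    Overlapping elements are resolved by averaging, as described above.
    """
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for top, left, block in placements:
        h, w = block.shape
        acc[top:top + h, left:left + w] += block
        cnt[top:top + h, left:left + w] += 1
    cnt[cnt == 0] = 1  # avoid division by zero at uncovered positions
    return acc / cnt
```

For two constant 2x2 blocks that overlap in one column, the overlapped column becomes the average of the two block values.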

Referring to FIG. 5, the feature fusion process first computes four quantities: the mean and variance of the reorganized feature image RC_F, and the mean and variance of the low-resolution input feature map LR_F. With these four quantities, the fused feature image M_F is computed by the feature fusion formula:

M_F = ||μ*||LR_F||_norm + (1-μ)*||RC_F||_norm||_norm

Here, ||*||_norm is a normalization operation that requires computing the mean and variance of its input, and μ is a weighting coefficient between 0 and 1.
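Reading ||*||_norm as zero-mean, unit-variance standardization (an assumption consistent with "the mean and variance of the input data are needed"), the fusion formula can be sketched as:

```python
import numpy as np

def norm(x):
    # ||*||_norm: subtract the mean and divide by the standard deviation
    return (x - x.mean()) / (x.std() + 1e-8)

def fuse(lr_f, rc_f, mu=0.5):
    # M_F = || mu*||LR_F||_norm + (1-mu)*||RC_F||_norm ||_norm
    return norm(mu * norm(lr_f) + (1 - mu) * norm(rc_f))
```

The outer normalization guarantees that M_F itself has zero mean and unit variance regardless of μ.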

Optionally, the feature fusion can also be implemented with a guided-filtering-based fusion scheme, whose specific procedure is as follows:

Let the low-resolution input feature map be p and the reorganized feature map be I, with a given window radius r and regularization parameter ε. The window radius r means that means are computed over an r×r region.

Compute the mean image mean_p = f_mean(p) of the feature map p and the mean image mean_I = f_mean(I) of the reorganized feature map I;

Square the value of every pixel of the reorganized feature map I to obtain the image I², and compute its mean image corr_I = f_mean(I.*I); multiply the reorganized feature map I and the feature map p pixel-wise to obtain the image I.*p and compute its mean image corr_Ip = f_mean(I.*p), where A.*B denotes element-wise multiplication of images A and B at the same pixel positions;

Compute the variance image var_I = corr_I - mean_I.*mean_I of the reorganized feature map I, and the covariance image cov_Ip = corr_Ip - mean_I.*mean_p of the reorganized feature map I and the feature map p;

Compute the parameter a according to a = cov_Ip/(var_I + ε) and the parameter b according to b = mean_p - a.*mean_I, then take their respective mean images mean_a and mean_b; finally, the fused feature map is obtained as q = mean_a.*I + mean_b.
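The steps above can be sketched directly in NumPy (an illustrative transcription; the box mean below uses a (2r+1)×(2r+1) window with edge clipping as one plausible reading of f_mean, which the source describes only as an r×r region):

```python
import numpy as np

def f_mean(img, r):
    """Box mean over a (2r+1) x (2r+1) window, clipped at the image edges."""
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - r), min(H, i + r + 1)
            j0, j1 = max(0, j - r), min(W, j + r + 1)
            out[i, j] = img[i0:i1, j0:j1].mean()
    return out

def guided_fuse(p, I, r=1, eps=1e-4):
    """Guided-filtering fusion: input feature map p, guide (reorganized) map I."""
    mean_p = f_mean(p, r)
    mean_I = f_mean(I, r)
    corr_I = f_mean(I * I, r)          # mean of I.*I
    corr_Ip = f_mean(I * p, r)         # mean of I.*p
    var_I = corr_I - mean_I * mean_I   # variance image of I
    cov_Ip = corr_Ip - mean_I * mean_p # covariance image of I and p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # q = mean_a .* I + mean_b
    return f_mean(a, r) * I + f_mean(b, r)
```

A quick sanity check: for a constant input map p, cov_Ip vanishes, so a ≈ 0, b ≈ p, and the output q reduces to that constant.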

Optionally, to speed up matching, the feature matching procedure of the present invention matches only the third-level feature image relu31. The first-level and second-level feature images are matched according to the matching result of the third-level feature image and the spatial correspondence between levels: when two feature sub-blocks of the third-level feature image are successfully matched, the matching result is mapped to the first-level and second-level feature images.

Optionally, when computing the best matching feature sub-block, provided the similarity between the low-resolution input image and the high-resolution reference image is sufficiently high (greater than or equal to a specified similarity threshold), the matching position of a neighboring feature sub-block can be used to constrain the search range, which greatly reduces the computation in the matching process and speeds it up. Referring to FIG. 6, the optimized best-match search of the present invention proceeds as follows: for a feature sub-block X of LR_F, let its first best matching feature sub-block on R1_F be X0. For a feature sub-block Y adjacent to X, the center position of the previous first best match X0 is shifted by a relative offset (the specific offset is set according to the processing requirements) to obtain a search anchor. A relatively small region determined by the position of this anchor is then searched for the first best matching sub-block Y0 of Y. The search region is usually a rectangle, whose extent is determined by the configured distances from the anchor to the sides of the rectangle. Experiments show that in most cases the best matches found by this fast search coincide with those found by the exhaustive full-image search described with reference to FIG. 4, and from the standpoint of the final reconstruction there is no significant difference between the result images produced by the two matching schemes.
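The anchored fast search can be sketched as follows (hypothetical NumPy code; the cosine score, the fixed block size, and the margin handling are illustrative assumptions, not the patent's exact procedure):

```python
import numpy as np

def anchored_search(lr_block, r1_f, anchor, margins, block=3):
    """Search a small rectangle around `anchor` (row, col) for the best match.

    margins: (top, bottom, left, right) distances from the anchor to the
             sides of the rectangular search region.
    Returns the (row, col) of the best-matching sub-block inside the window.
    """
    H, W = r1_f.shape
    t, b, l, r = margins
    r0, r1 = max(0, anchor[0] - t), min(H - block, anchor[0] + b)
    c0, c1 = max(0, anchor[1] - l), min(W - block, anchor[1] + r)
    best, best_pos = -np.inf, (r0, c0)
    for i in range(r0, r1 + 1):
        for j in range(c0, c1 + 1):
            cand = r1_f[i:i + block, j:j + block].ravel()
            # simple cosine-style similarity as the matching score here
            score = lr_block @ cand / (np.linalg.norm(cand) + 1e-8)
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos
```

Only (t+b+1)*(l+r+1) candidates are scored instead of every position in R1_F, which is the source of the speedup.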

In this embodiment, the super-resolution reconstruction is implemented by a convolutional network. Referring to FIG. 7(a), the super-resolution reconstruction convolutional neural network has an overall encoder-decoder structure. The first half of the network contains three encoders; the second-level and third-level encoders are each followed by a downsampling module and a Batch Normalization layer. The downsampling module is a convolutional layer with stride 2, which halves the height and width of the feature image output by the encoder so that it matches the physical size of the externally supplied fused feature image. In the concatenation layer, the network's own feature image and the fused feature image are concatenated along the last dimension. For example, if the network feature image has size B*H*W*C1 and the fused feature image has size B*H*W*C2, the concatenated feature image has size B*H*W*(C1+C2), where B, H, and W denote the batch size, image height, and image width, respectively, and C1 and C2 denote the numbers of feature channels.

The second half of the network has three corresponding decoders; the second-level and third-level decoders are each followed by an upsampling module and a Batch Normalization layer. The upsampling module is a deconvolution (transposed convolution) layer with stride 2, which doubles the height and width of the decoder's output feature image. The network also contains three long-distance skip connections. The last layer of the network is a reconstruction layer, which reconstructs the input feature image into a super-resolution residual image; it can be set as a convolutional layer with stride 1 whose number of output channels is chosen according to the situation, e.g. 3 for RGB images and 1 for a single Y-channel image (the luminance image). Finally, the super-resolution residual image is added to the original input image to obtain the final super-resolution reconstruction result. Throughout the reconstruction process, to keep the input and output image sizes of each layer consistent, all convolutional layers except the downsampling modules use zero padding at the edges.
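The shape bookkeeping of the concatenation layer and the stride-2 downsampling can be verified with a short NumPy sketch (the kernel size 3 and padding 1 in the size formula are illustrative choices, not specified by the source):

```python
import numpy as np

# Concatenation layer: the network's own feature image (B*H*W*C1) and the
# externally supplied fused feature image (B*H*W*C2) are concatenated
# along the last dimension, giving B*H*W*(C1+C2).
B, H, W, C1, C2 = 2, 8, 8, 16, 16
own = np.zeros((B, H, W, C1))
fused = np.zeros((B, H, W, C2))
stacked = np.concatenate([own, fused], axis=-1)
assert stacked.shape == (B, H, W, C1 + C2)

# The stride-2 downsampling conv halves H and W; assuming kernel 3 and
# zero padding 1, the standard output-size formula gives exactly H/2:
k, pad, stride = 3, 1, 2
h_out = (H + 2 * pad - k) // stride + 1
assert h_out == H // 2
```

The same formula run in reverse explains why the stride-2 transposed convolution in the decoders doubles the spatial size.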

Referring to FIG. 7(b), the internal structure of the encoders and decoders is as follows: each encoder and decoder consists of five convolutional layers (conv) and one Batch Normalization layer. The activation function of every convolutional layer is the pReLU function. The convolutional layers are densely connected to one another, and a skip connection adds the output of the first convolutional layer to the output of the last convolutional layer to produce the final output. The dense connections allow image features to be reused, improving network performance, while the skip connection lets the network learn the residual between the input and the expected output rather than the expected output itself, which effectively reduces the training burden and likewise has a positive effect on performance.

In this embodiment, the training and application (i.e., super-resolution reconstruction) of the super-resolution reconstruction network are as follows:

During training, the training data is first prepared and divided into a training set and a test set. The training set is used to train and update the parameters of the super-resolution reconstruction network, and the test set is used to evaluate network performance. In this embodiment, the feature extraction network imports some parameters from an external network and is therefore not updated. To speed up training, the feature processing is first performed offline and the fused feature image data is saved as intermediate data. This data is then loaded when training the super-resolution reconstruction network, used to update the network parameters, and the network model is saved. After training, the saved parameters of the super-resolution reconstruction network can be exported.

In the application stage, for a given high-resolution reference image and low-resolution input image, image preprocessing and feature processing are first performed to obtain the fused feature image data, which is then fed into the super-resolution reconstruction network for super-resolution reconstruction to obtain the final result.

The above are only specific embodiments of the present invention. Unless otherwise stated, any feature disclosed in this specification may be replaced by an equivalent or alternative feature serving a similar purpose, and all disclosed features, or all steps of any method or process, may be combined in any way except for mutually exclusive features and/or steps.

Claims (10)

1. An image information fusion and super-resolution reconstruction method based on feature processing, characterized by comprising the following steps: performing image preprocessing on a high-resolution reference image and a low-resolution input image: downsampling the high-resolution reference image to obtain a low-resolution reference image whose sharpness matches that of the low-resolution input image, and applying the same upsampling to the low-resolution input image and the low-resolution reference image; performing feature extraction on the high-resolution reference image, the upsampled low-resolution reference image and the low-resolution input image to obtain a high-resolution reference feature map, a low-resolution reference feature map and a low-resolution input feature map, and performing image information matching, transfer and fusion on the obtained feature maps: splitting the high-resolution reference feature map, the low-resolution reference feature map and the low-resolution input feature map into blocks; traversing each feature sub-block of the low-resolution input feature map, searching the low-resolution reference feature map for a first optimal matching feature sub-block, and, based on the spatial mapping between the high-resolution reference feature map and the low-resolution reference feature map, determining from the image position of the first optimal matching feature sub-block the second optimal matching feature sub-block of the current sub-block in the high-resolution reference feature map; reorganizing the feature maps based on the image position of each sub-block of the low-resolution input feature map and its second optimal matching feature sub-block to obtain a reorganized feature image; performing image information fusion based on the reorganized feature map to obtain a fused feature image; and performing super-resolution reconstruction of the low-resolution input image based on the fused feature image to obtain a super-resolution reconstructed image.

2. The method of claim 1, characterized in that the super-resolution reconstruction is performed by a super-resolution reconstruction network with an encoder-decoder structure, wherein the encoding part of the network extracts features from the low-resolution input image and fuses the extracted feature image with the fused feature image by dimension concatenation, and the decoding part reconstructs the fused feature image and outputs the super-resolution reconstructed image.

3. The method of claim 1, characterized in that a feature extraction network based on a convolutional neural network is used to extract feature information from the high-resolution reference image, the upsampled low-resolution reference image and the low-resolution input image; the convolutional neural network comprises several levels of convolution blocks connected in sequence, with pooling layers between the blocks; each convolution block comprises several sequentially connected sub-layers, each sub-layer consisting of a convolutional layer followed by a nonlinear activation layer; one nonlinear activation layer of each convolution block is designated as the feature map output of that level, yielding multi-level high-resolution reference feature maps, low-resolution reference feature maps and low-resolution input feature maps; and image information matching, transfer and fusion are performed on feature maps of the same level to obtain multi-level fused feature images.

4. The method of claim 2, characterized in that the super-resolution reconstruction network comprises several encoders and decoders; in the direction of forward propagation, the encoders are defined in sequence as the 1st-level to N-th-level encoders, and the decoders as the N-th-level to 1st-level decoders, where N equals the number of convolution-block levels of the convolutional neural network used for feature extraction; the first-level encoder is connected to the second-level encoder through a concatenation layer, and from the second-level encoder onward each encoder is followed in sequence by a downsampling module and a concatenation layer; the input of each concatenation layer also includes the fused feature image of the specified level, and the feature image extracted by the encoder is fused with the input fused feature image by dimension concatenation; adjacent decoders are connected in sequence through an upsampling module and a normalization layer, and the r-th-level encoder is also connected to the (r-1)-th-level decoder by a skip connection, where 1 < r ≤ N; the first-level decoder is connected to a reconstruction layer, which reconstructs the feature image output by the first-level decoder into a super-resolution residual image; finally, the super-resolution residual image is superimposed with the low-resolution input image to obtain the final super-resolution reconstructed image.

5. The method of claim 3, characterized in that, for the multi-level high-resolution reference feature maps, low-resolution reference feature maps and low-resolution input feature maps, when matching feature sub-blocks between feature maps of the same level, the feature-block matching for the first and second optimal matching feature sub-blocks is performed only on the last-level feature maps of the convolutional neural network used for feature extraction; for the feature maps of the remaining levels, the corresponding matching results are obtained directly from the spatial mapping of the matching results between the last-level feature maps.

6. The method of claim 1, characterized in that, when performing image information fusion processing based on the feature maps, the two feature sub-blocks to be matched are treated as vectors, the vector similarity is used as the matching degree of the two feature sub-blocks, and the first optimal matching feature sub-block of each feature sub-block of the low-resolution input feature map is searched based on the matching degree between feature sub-blocks.

7. The method of claim 6, characterized in that the vector similarity is obtained by normalizing the vector cosine distance and the vector Manhattan distance of the two vectors separately and then forming their weighted sum.

8. The method of claim 1, characterized in that, when performing image information fusion processing based on the feature maps, the feature maps are reorganized based on the image position of each sub-block of the low-resolution input feature map and its second optimal matching feature sub-block to obtain a reorganized feature image; the separately normalized low-resolution input feature map and reorganized feature image are then linearly fused, and the linear fusion result is normalized to obtain the fused feature image.

9. The method of claim 1, characterized in that the image information fusion processing based on the feature maps is implemented by means of linear guided filtering, with the low-resolution input feature map as the input and the reorganized feature image as the guide template; the output of the linear guided filtering is the fused feature image.

10. The method of claim 1, characterized in that, when searching for the first optimal matching feature sub-block of a feature sub-block of the low-resolution input feature map, if the similarity between the low-resolution input image and the high-resolution reference image is greater than or equal to a specified similarity threshold, the search range of the first optimal matching feature sub-block is constrained based on the matching position information of an adjacent feature sub-block: a certain feature sub-block of the low-resolution input feature map is defined as feature sub-block X, and its first optimal matching feature sub-block in the low-resolution reference feature map is X0; for a feature sub-block Y adjacent to feature sub-block X, the center position of X0 is shifted by a relative offset to obtain a search anchor; the search range of the first optimal matching feature sub-block of feature sub-block Y is determined based on the configured distances from the search anchor to the boundaries of the search range.
CN202011528460.3A 2020-12-22 2020-12-22 An image information fusion and super-resolution reconstruction method based on feature processing Expired - Fee Related CN112598575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011528460.3A CN112598575B (en) 2020-12-22 2020-12-22 An image information fusion and super-resolution reconstruction method based on feature processing


Publications (2)

Publication Number Publication Date
CN112598575A CN112598575A (en) 2021-04-02
CN112598575B true CN112598575B (en) 2022-05-03

Family

ID=75200199

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011528460.3A Expired - Fee Related CN112598575B (en) 2020-12-22 2020-12-22 An image information fusion and super-resolution reconstruction method based on feature processing

Country Status (1)

Country Link
CN (1) CN112598575B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418853B (en) * 2022-01-21 2022-09-20 杭州碧游信息技术有限公司 Image super-resolution optimization method, medium and equipment based on similar image retrieval
CN114723986B (en) * 2022-03-16 2024-10-29 平安科技(深圳)有限公司 Text image matching method, device, equipment and storage medium
CN114723685A (en) * 2022-03-22 2022-07-08 中国民用航空飞行学院 An Intelligent Fusion System of Image Information Based on Aerial Data
CN114760435A (en) * 2022-06-13 2022-07-15 深圳达慧信息技术有限公司 Conference relaying method, device, equipment and storage medium based on image processing

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2016019484A1 (en) * 2014-08-08 2016-02-11 Xiaoou Tang An apparatus and a method for providing super-resolution of a low-resolution image
CN106228512A (en) * 2016-07-19 2016-12-14 北京工业大学 Based on learning rate adaptive convolutional neural networks image super-resolution rebuilding method
CN106952228A (en) * 2017-03-10 2017-07-14 北京工业大学 Single image super-resolution reconstruction method based on non-local self-similarity of images
CN111667412A (en) * 2020-06-16 2020-09-15 中国矿业大学 Method and device for reconstructing image super-resolution based on cross learning network
CN111915484A (en) * 2020-07-06 2020-11-10 天津大学 Reference image guiding super-resolution method based on dense matching and self-adaptive fusion

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
WO2017075768A1 (en) * 2015-11-04 2017-05-11 北京大学深圳研究生院 Super-resolution image reconstruction method and device based on dictionary matching


Non-Patent Citations (6)

Title
Learning a Deep Convolutional Network for Image Super-Resolution; Chao Dong et al.; European Conference on Computer Vision; 20141231; pp. 184-199 *
Super-Resolution Through Neighbor Embedding; Hong Chang et al.; Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; 20041231; pp. 1-8 *
Acceleration algorithm for image super-resolution convolutional neural networks; Liu Chao et al.; Journal of National University of Defense Technology; 20190430; Vol. 41, No. 2; pp. 91-97 *
Research on high-resolution remote sensing image retrieval based on CNN transfer feature fusion and pooling; Ge Yun; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; 20190415; p. C028-6 *
Research on video super-resolution technology based on convolutional neural networks; Jing Linping; China Master's Theses Full-text Database, Information Science and Technology; 20191215; p. I138-607 *
Research on image super-resolution reconstruction methods based on sparse representation; Li Wei; China Master's Theses Full-text Database, Information Science and Technology; 20181015; p. I138-614 *

Also Published As

Publication number Publication date
CN112598575A (en) 2021-04-02

Similar Documents

Publication Publication Date Title
CN112598575B (en) An image information fusion and super-resolution reconstruction method based on feature processing
CN110084863B (en) Multi-domain image conversion method and system based on generation countermeasure network
CN110033410B (en) Image reconstruction model training method, image super-resolution reconstruction method and device
CN107610194B (en) Magnetic resonance image super-resolution reconstruction method based on multi-scale fusion CNN
CN109727195B (en) Image super-resolution reconstruction method
CN103295197B (en) Image super-resolution reconstruction method based on dictionary learning and bilateral regularization
CN101719266B (en) Affine transformation-based frontal face image super-resolution reconstruction method
CN113538246B (en) Remote sensing image super-resolution reconstruction method based on unsupervised multi-stage fusion network
CN113744136B (en) Image super-resolution reconstruction method and system based on channel-constrained multi-feature fusion
Shi et al. Structure-aware deep networks and pixel-level generative adversarial training for single image super-resolution
CN110288529B (en) A Single Image Super-Resolution Reconstruction Method Based on Recurrent Local Synthesis Network
CN112785502B (en) Light-field image super-resolution method for hybrid cameras based on texture transfer
Hui et al. Two-stage convolutional network for image super-resolution
CN115205527A (en) A bidirectional semantic segmentation method of remote sensing images based on domain adaptation and super-resolution
CN116703725A (en) Super-resolution method for real-world text images using a dual-branch multi-feature-aware network
CN107424119B (en) A single-image super-resolution method
CN118230131A (en) Image recognition and target detection method
CN116523985A (en) Structure and texture feature guided double-encoder image restoration method
CN116258632A (en) A text-assisted super-resolution reconstruction method for text images
CN109741258B (en) Reconstruction-based image super-resolution methods
CN116049469A (en) Multi-match search and super-resolution reconstruction method based on a reference image
CN118195897A (en) A digital core image super-resolution reconstruction method based on dual-dimensional attention
CN114708353B (en) Image reconstruction method, device, electronic device and storage medium
CN116468812A (en) Image compressed sensing reconstruction method and system based on multiple branches and multiple scales
CN110322548A (en) Three-dimensional mesh model generation method based on geometry image parameterization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220503