WO2017075768A1 - Image super-resolution reconstruction method and apparatus based on dictionary matching - Google Patents

Image super-resolution reconstruction method and apparatus based on dictionary matching Download PDF

Info

Publication number
WO2017075768A1
WO2017075768A1 · PCT/CN2015/093765
Authority
WO
WIPO (PCT)
Prior art keywords
filter
layer
image block
image
resolution image
Prior art date
Application number
PCT/CN2015/093765
Other languages
English (en)
French (fr)
Inventor
赵洋
王荣刚
高文
王振宇
王文敏
Original Assignee
北京大学深圳研究生院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大学深圳研究生院 filed Critical 北京大学深圳研究生院
Priority to US15/749,554 priority Critical patent/US10339633B2/en
Priority to PCT/CN2015/093765 priority patent/WO2017075768A1/zh
Publication of WO2017075768A1 publication Critical patent/WO2017075768A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/772Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present application relates to a method and apparatus for image super-resolution reconstruction based on dictionary matching.
  • Super-resolution, also known as upsampling or image magnification, refers to the recovery of a sharp high-resolution image from a low-resolution image.
  • Super-resolution is one of the fundamental problems in the field of image and video processing. It has very broad application prospects in medical image processing, image recognition, digital photo processing, high-definition television, and other fields.
  • One of the most commonly used super-resolution algorithms is kernel-based interpolation, such as bilinear interpolation or spline interpolation.
  • However, interpolation generates continuous data from known discrete data, so blurring, aliasing, and similar artifacts occur, and the restored image quality is poor.
  • the present application provides a method and apparatus for image super-resolution reconstruction based on dictionary matching, which can improve the quality of reconstructed high-resolution images.
  • The present application provides a method for image super-resolution reconstruction based on dictionary matching, comprising: establishing a matching dictionary library; inputting an image block to be reconstructed into a multi-layer line filter network and extracting the local features of the image block to be reconstructed; searching the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image block to be reconstructed; finding, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong; and interpolating and magnifying the local features of the most-similar low-resolution image block, then adding the residual value, to obtain the reconstructed high-resolution image block.
  • The present application further provides an image super-resolution reconstruction apparatus based on dictionary matching, comprising: an establishing unit for establishing a matching dictionary library; an extracting unit for inputting the image block to be reconstructed into the multi-layer line filter network and extracting its local features; a matching unit configured to search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image block to be reconstructed; and a finding unit for finding, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong.
  • The apparatus further comprises an interpolation-magnification unit configured to interpolate and magnify the local features of the most-similar low-resolution image block, and a reconstruction unit configured to add the residual value found by the finding unit to the local features magnified by the interpolation-magnification unit, obtaining the reconstructed high-resolution image block.
  • The method and apparatus for image super-resolution reconstruction based on dictionary matching provided by the present application establish a matching dictionary library, input the image to be reconstructed into a multi-layer line filter network, extract the local features of the image to be reconstructed, search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to those of the image to be reconstructed, and find, in the matching dictionary library, the residual value of the joint sample to which those most-similar local features belong.
  • The local features of the most-similar low-resolution image block are interpolated and magnified, and the residual value is added to obtain the reconstructed high-resolution image block.
  • The local features of the image to be reconstructed extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate, and the quality of the reconstructed image is correspondingly better. Embodiments of the present application can therefore greatly improve the quality of reconstructed high-resolution images.
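The match-fetch-magnify-add loop summarized above can be sketched as follows. This is an illustrative reading, not the patent's exact implementation: raw pixels stand in for the line-filter features, nearest-neighbour upscaling stands in for the unspecified interpolation method, and the 2x magnification factor is an assumption.

```python
import numpy as np

def reconstruct_block(lr_block, dictionary, lr_len):
    """Sketch of the reconstruction steps: match the low-resolution block
    against the dictionary atoms, take the residual stored in the
    best-matching joint sample, interpolate the block up by 2x, and add
    the residual."""
    feat = lr_block.ravel()
    atoms_lr = dictionary[:, :lr_len]            # feature part of each joint sample
    atoms_res = dictionary[:, lr_len:]           # residual part of each joint sample
    best = ((atoms_lr - feat) ** 2).sum(1).argmin()   # highest-similarity atom
    up = np.kron(lr_block, np.ones((2, 2)))      # interpolation magnification (assumed 2x NN)
    residual = atoms_res[best].reshape(up.shape)
    return up + residual
```

With a toy dictionary of two joint atoms (a 2x2 feature part plus a 4x4 residual part flattened together), an all-ones block matches the all-ones atom and its residual is added on top of the upscaled block.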
  • FIG. 1 is a flowchart of a method for image super-resolution reconstruction based on dictionary matching according to the present application
  • FIG. 2 is a schematic flow chart of step 101 in FIG. 1;
  • FIG. 3 is a schematic diagram of a local feature extraction process of an image of a multi-layer line filter network in an embodiment of the present application
  • FIG. 4 is a schematic diagram of a filtering process of the present application.
  • FIG. 5 is a schematic structural diagram of an image super-resolution reconstruction apparatus based on dictionary matching according to the present application.
  • Figure 6 is a schematic structural view of the establishing unit in Figure 5;
  • Figure 7 is a schematic view showing the structure of the extraction unit in Figure 5 .
  • an image super-resolution reconstruction method and apparatus based on dictionary matching is provided, which can improve the quality of the reconstructed high-resolution image.
  • Embodiment 1:
  • FIG. 1 is a flowchart of a method according to Embodiment 1 of the present application.
  • a method for image super-resolution reconstruction based on dictionary matching may include the following steps:
  • Specifically, step 102 may include the following steps. Step 1:
  • the multi-layer line filter network includes a filtering layer; the first-stage filter of the filtering layer filters the input image block to be reconstructed with N line filtering windows of different sizes, obtaining N corresponding filtered images that are output to the next-stage filter. Each filtered image contains the line features of the image, and N is an integer greater than 1.
  • Step 2: the second-stage filter of the filtering layer filters the N filtered images output by the first-stage filter with M line filtering windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1.
  • Step 3: repeatedly output all filtered images obtained by each filter stage to the next-stage filter, which filters each of them with multiple line filtering windows, until the last-stage filter has finished; all resulting filtered images are output to the mapping layer of the multi-layer line filter network.
  • Step 4: the mapping layer binarizes all filtered images of the filtering layer and outputs them to the output layer of the multi-layer line filter network.
  • Step 5: the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, which is the local feature vector of the input image block to be reconstructed.
  • The local features of the image block to be reconstructed extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is better.
  • The image feature extraction by the multi-layer line filter network proposed in the embodiments of the present application is equally applicable to reconstruction methods based on manifold learning or sparse representation.
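Steps 1 through 5 above can be sketched as a two-stage cascade of line-filter windows, a binarising mapping layer, and a concatenating output layer. The window sizes, the three slopes per window, and the mean-threshold binarisation are illustrative assumptions; the patent leaves N, M, the direction set, and the exact binarisation formula open.

```python
import numpy as np

def line_filter_image(img, win, slopes=(-1.0, 0.0, 1.0)):
    """Filter an image by sliding a win x win line-filter window:
    at each pixel, take the minimum over candidate slopes of the sum
    of pixel values along the line through the window centre.
    Edge padding at the border is an assumption."""
    r = win // 2
    padded = np.pad(img, r, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            sums = []
            for s in slopes:
                tot = 0.0
                for i in range(win):
                    j = int(round(s * (i - r) + r))
                    if 0 <= j < win:
                        tot += patch[i, j]
                sums.append(tot)
            out[y, x] = min(sums)
    return out

def extract_local_feature(block, sizes1=(3, 5), sizes2=(3, 5)):
    """Two-stage filtering layer -> mapping layer -> output layer."""
    stage1 = [line_filter_image(block, w) for w in sizes1]              # N filtered images
    stage2 = [line_filter_image(p, w) for p in stage1 for w in sizes2]  # M*N filtered images
    binarised = [(p >= p.mean()).astype(np.uint8) for p in stage2]      # mapping layer (assumed threshold)
    return np.concatenate([b.ravel() for b in binarised])               # output layer: concatenation
```

On an 8x8 block with two window sizes per stage, the output is the concatenation of 2x2 = 4 binarised 8x8 maps, i.e. a 256-dimensional binary feature.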
  • Step 101 of this embodiment specifically includes the following steps:
  • 500,000 joint samples can be randomly selected from a known training image library; the K-means clustering algorithm clusters these 500,000 samples into 1024 joint samples, and these 1024 joint samples serve as the atoms of the matching dictionary.
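The dictionary-building pipeline (down-sample each high-resolution training block, compute the interpolation residual, join feature and residual into one vector, cluster with K-means) can be sketched as below. The 500,000/1024 figures shrink to toy sizes; averaging stands in for the down-sampling filter, nearest-neighbour upscaling for the interpolation, and raw low-resolution pixels for the line-filter features. All of these substitutions are assumptions for illustration.

```python
import numpy as np

def build_matching_dictionary(hi_blocks, n_atoms=16, n_iter=20, seed=0):
    """Build joint samples (LR feature + residual) from HR training
    blocks and cluster them with plain k-means; the cluster centres
    are the dictionary atoms."""
    joint = []
    for hr in hi_blocks:
        lr = hr.reshape(hr.shape[0] // 2, 2, hr.shape[1] // 2, 2).mean(axis=(1, 3))
        up = np.kron(lr, np.ones((2, 2)))       # interpolate LR back to HR size
        residual = hr - up                      # residual value of the training pair
        joint.append(np.concatenate([lr.ravel(), residual.ravel()]))
    X = np.stack(joint)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=min(n_atoms, len(X)), replace=False)]
    for _ in range(n_iter):                     # Lloyd iterations
        d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centres)):
            if (labels == k).any():
                centres[k] = X[labels == k].mean(0)
    return centres                              # the dictionary atoms
```

For constant 4x4 training blocks each joint sample is a 4-value LR part plus a 16-value residual part (all zeros, since interpolation reproduces a constant block exactly).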
  • The local feature extraction process performed on an image by the multi-layer line filter network includes:
  • S1: the multi-layer line filter network includes a filtering layer; the first-stage filter of the filtering layer filters the image input into the network with N line filtering windows of different sizes, obtaining N corresponding filtered images that are output to the next-stage filter.
  • Each filtered image contains the line features of the image, and N is an integer greater than 1.
  • S2: the second-stage filter of the filtering layer filters the N filtered images output by the first-stage filter with M line filtering windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1.
  • Following the filtering procedure of S2, all filtered images obtained by each filter stage are repeatedly output to the next-stage filter, which filters each of them with multiple line filtering windows, until the last-stage filter has finished; all resulting filtered images are output to the mapping layer of the multi-layer line filter network.
  • The number of filtering repetitions can be set in advance, according to actual needs, by choosing the number of filter stages.
  • S3: the mapping layer binarizes all filtered images of the filtering layer and outputs them to the output layer of the multi-layer line filter network.
  • S4: if the input image of the network is a local image block, the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, obtaining the local features of the image;
  • if the input image of the filter network is a whole image, the output layer first computes a block histogram for each binarized filtered image output by the mapping layer, then concatenates the histograms and outputs the result, obtaining the features of the image.
  • The first step is the construction of the multi-layer line filter network.
  • A bank of line filters of bandwidth one in different directions is used to extract the line features in an image; the line filter response is computed as follows:
  • LF(PF) = min_k Σ(i,j)∈Lk PF(i,j)     (1)
  • where PF is a local image block of the filter-window size and Lk (k = 1, 2, …, N) is the local line in the k-th direction within the filter window. The coordinates of the pixel points on line Lk are defined as follows:
  • Lk = {(i,j) : j = Sk(i - i0) + j0, i ∈ PF}     (2)
  • where (i0, j0) are the coordinates of the central pixel of the local block and Sk is the slope of line Lk.
  • That is, the filter bank computes the sum of pixel values along the line in each direction within the filter window and selects the smallest such sum as the filter response.
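The response just described can be transcribed directly: sum the pixels along the line of each candidate slope through the window centre and keep the smallest sum. The slope set below is an assumption; the text only requires several different directions of bandwidth one.

```python
import numpy as np

def line_filter_response(patch, slopes=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """Minimum, over candidate slopes S_k, of the sum of pixel values
    along the line j = S_k * (i - i0) + j0 through the window centre
    (i0, j0)."""
    h, w = patch.shape
    i0, j0 = h // 2, w // 2
    sums = []
    for s in slopes:
        total = 0.0
        for i in range(h):
            j = int(round(s * (i - i0) + j0))
            if 0 <= j < w:                    # keep only in-window line pixels
                total += patch[i, j]
        sums.append(total)
    return min(sums)
```

A dark line through an otherwise bright window gives a near-zero response, which is the point of taking the minimum: the filter fires on the darkest line direction.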
  • FIG. 4 depicts the network structure of the multi-layer line filter network.
  • The input P of the network may be the whole image, in which case the multi-layer line filter network extracts global statistical features; P may also be a local image block, for extracting local features.
  • The network structure mainly comprises a filtering layer, a mapping layer, and an output layer.
  • The filtering layer comprises multiple stages of line filtering operations. This embodiment is described using a two-stage line filtering operation as an example.
  • The first filtering stage filters the input P using a total of N1 line filters LF of different sizes as defined above:
  • pi^1 = LFi(P), i = 1, 2, …, N1     (3)
  • The results of filtering with all windows of the first-stage filter are output to the next-stage filter, i.e., the second-stage filter. Here pi^1 denotes the output of the first filtering stage.
  • The second filtering stage takes each output of the first stage as input and filters it with a total of N2 line filtering windows of different sizes:
  • pij^2 = LFj(pi^1), j = 1, 2, …, N2     (4)
  • Here pij^2 denotes the output of the second filtering stage. As FIG. 4 shows, repeating the multi-stage filtering operation yields more filtered images, and the filter network can be extended to higher layers.
  • After the filtering layer comes the mapping layer. The filtering layer outputs the multiple image features obtained by the final filtering to the mapping layer; the mapping layer binarizes each output of the filtering layer, as shown in formula (5), and then combines the multiple binarized outputs into a single map:
  • where LB is a local binarization operation, defined as follows:
  • x is the pixel at the centre of the current filtering window,
  • and xp is a neighbouring pixel of that pixel.
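The exact LB comparison survives only as an image reference in the source, so the sketch below uses one plausible reading: mark a pixel 1 when the centre value x is at least the mean of its 3x3 neighbourhood of pixels xp. Treat the thresholding rule as an assumption, not the patent's formula (5).

```python
import numpy as np

def local_binarize(img):
    """Assumed LB: compare the centre pixel x of each 3x3 window with
    the mean of that window and emit a binary map."""
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            neigh = padded[y:y + 3, x:x + 3]
            out[y, x] = 1 if img[y, x] >= neigh.mean() else 0
    return out
```

On a constant image every pixel equals its neighbourhood mean, so the map is all ones; on a left-to-right ramp the left edge falls below its neighbourhood mean and the right edge above it.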
  • When the input is a whole image, the filtering layer divides it into several local image blocks and obtains the filtered image of each block separately; accordingly, in the output layer, a block histogram is first computed for each output of the mapping layer, and the histograms are then concatenated as the global statistical feature of the image P.
  • When the input is a local image block, the mapping-layer outputs are concatenated as the local feature of the image.
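For the whole-image case just described, the output layer can be sketched as: split each binarised map into blocks, histogram each block, and concatenate everything into one descriptor. Block size and bin count are illustrative assumptions.

```python
import numpy as np

def global_feature(binary_maps, block=4, bins=2):
    """Block-histogram output layer: per binarised map, per block,
    a small histogram; all histograms concatenated."""
    feats = []
    for m in binary_maps:
        h, w = m.shape
        for y in range(0, h - block + 1, block):
            for x in range(0, w - block + 1, block):
                hist, _ = np.histogram(m[y:y + block, x:x + block],
                                       bins=bins, range=(0, bins))
                feats.append(hist)
    return np.concatenate(feats)
```

Two 8x8 binary maps with 4x4 blocks and 2 bins give 2 maps x 4 blocks x 2 bins = 16 feature values, and every block contributes its 16 pixels to exactly one bin.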
  • The present application thus provides an image super-resolution reconstruction method based on dictionary matching: establish a matching dictionary library, input the image to be reconstructed into a multi-layer line filter network, extract the local features of the image to be reconstructed, search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to those of the image to be reconstructed, and find, in the matching dictionary library, the residual value of the joint sample to which those most-similar local features belong.
  • The local features of the most-similar low-resolution image block are interpolated and magnified, and the residual value is added to obtain the reconstructed high-resolution image block.
  • The local features of the image to be reconstructed extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is correspondingly better. Embodiments of the present application can therefore greatly improve the quality of reconstructed high-resolution images.
  • Embodiment 2:
  • An embodiment of the present application provides an image super-resolution reconstruction apparatus based on dictionary matching, comprising an establishing unit 30 configured to establish a matching dictionary library, and further comprising:
  • an extracting unit 31 configured to input the image block to be reconstructed into the multi-layer line filter network and extract the local features of the image block to be reconstructed;
  • a matching unit 32 configured to search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image block to be reconstructed;
  • a finding unit 33 configured to find, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong;
  • an interpolation-magnification unit 34 configured to interpolate and magnify the local features of the most-similar low-resolution image block;
  • a reconstruction unit 35 configured to add the residual value found by the finding unit to the local features of the low-resolution image block magnified by the interpolation-magnification unit 34, obtaining the reconstructed high-resolution image block.
  • the establishing unit 30 specifically includes:
  • an acquiring module 30A configured to collect multiple high-resolution image blocks and down-sample each of them, obtaining the low-resolution image block corresponding to each high-resolution image block;
  • a high-resolution image block together with its corresponding low-resolution image block forms a pair of training samples.
  • a subtraction module 30B configured to subtract the interpolated-and-magnified low-resolution image block from the high-resolution image block of each training pair, obtaining the residual value of the training sample.
  • an extraction module 30C configured to input the low-resolution image block of each training pair into the multi-layer line filter network and extract its local features.
  • a concatenation module 30D configured to concatenate the local features of the low-resolution image block of each training pair with the residual value of the training sample, forming the joint sample of the training sample.
  • a training module 30E configured to train multiple joint samples with K-means clustering, obtaining the matching dictionary library.
  • the extracting unit 31 specifically includes:
  • a first-stage filter 31A configured to filter the input image block to be reconstructed with N line filtering windows of different sizes, obtaining N corresponding filtered images that are output to the next-stage filter, the filtered images containing the line features of the image, where N is an integer greater than 1;
  • a second-stage filter 31B configured to filter the N filtered images output by the first-stage filter with M line filtering windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1;
  • a filtering module 31C configured to repeatedly output all filtered images obtained by each filter stage to the next-stage filter, which filters each of them with multiple line filtering windows, until the last-stage filter has finished, all resulting filtered images being output to the mapping layer of the multi-layer line filter network;
  • a mapping layer 31D configured to binarize all filtered images of the filtering layer and output them to the output layer of the multi-layer line filter network;
  • an output layer 31E configured to concatenate the binarized filtered images output by the mapping layer and output the result, which is the local feature vector of the input image block to be reconstructed.
  • Each filtering window of preset size linearly filters the pixel (i0, j0) located at the centre of the window in the image, using multiple line filters of different directions within its window region; the response is LF(PF) = min_k Σ(i,j)∈Lk PF(i,j), i.e., the minimum over directions of the sums of pixel values along the lines Lk,
  • where the coordinates of the pixels on line Lk are given by Lk = {(i,j) : j = Sk(i - i0) + j0, i ∈ PF}, with Sk the slope of line Lk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present application provides a method and apparatus for image super-resolution reconstruction based on dictionary matching: establish a matching dictionary library; input the image to be reconstructed into a multi-layer line filter network and extract its local features; search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to those of the image to be reconstructed; find, in the matching dictionary library, the residual value of the joint sample to which those most-similar local features belong; and interpolate and magnify the local features of the most-similar low-resolution image block, then add the residual value, obtaining the reconstructed high-resolution image block. The local features of the image to be reconstructed extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is correspondingly better. The present application can therefore greatly improve the quality of reconstructed high-resolution images.

Description

Image super-resolution reconstruction method and apparatus based on dictionary matching

Technical Field

The present application relates to a method and apparatus for image super-resolution reconstruction based on dictionary matching.
Background Art

Super-resolution, also known as upsampling or image magnification, refers to recovering a sharp high-resolution image from a low-resolution image. Super-resolution is one of the fundamental problems in image and video processing and has very broad application prospects in medical image processing, image recognition, digital photo processing, high-definition television, and other fields.

One of the most commonly used super-resolution algorithms is kernel-based interpolation, for example bilinear interpolation or spline interpolation. However, interpolation generates continuous data from known discrete data, so blurring, aliasing, and similar artifacts appear, and the restored image quality is poor.

In recent years, many edge-based super-resolution algorithms have been proposed. They reduce the unnatural artifacts that traditional interpolation produces in the reconstructed image and improve the visual quality of reconstructed edges. However, this class of edge-focused algorithms cannot recover high-frequency texture detail. To address high-frequency detail reconstruction, dictionary-learning methods have also been proposed: existing high-resolution image blocks are used to train a high-resolution dictionary corresponding to the low-resolution blocks, and the high-resolution dictionary is then used to recover the detail information lost in the low-resolution image. In traditional dictionary-based recovery, however, the dictionary must be matched against the low-resolution image, and the matching precision affects the quality and effect of the reconstruction. How to improve matching precision, and thereby the reconstruction quality of low-resolution images, is therefore an important research direction in the field of image processing.
Summary of the Invention

The present application provides a method and apparatus for image super-resolution reconstruction based on dictionary matching that can improve the quality of reconstructed high-resolution images.

According to a first aspect, the present application provides a method for image super-resolution reconstruction based on dictionary matching, comprising: establishing a matching dictionary library; inputting an image block to be reconstructed into a multi-layer line filter network and extracting the local features of the image block to be reconstructed; searching the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image block to be reconstructed; finding, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong; and interpolating and magnifying the local features of the most-similar low-resolution image block, then adding the residual value, obtaining the reconstructed high-resolution image block.

According to a second aspect, the present application provides an image super-resolution reconstruction apparatus based on dictionary matching, comprising: an establishing unit for establishing a matching dictionary library; an extracting unit for inputting the image block to be reconstructed into the multi-layer line filter network and extracting the local features of the image block to be reconstructed; a matching unit for searching the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image block to be reconstructed; a finding unit for finding, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong; an interpolation-magnification unit for interpolating and magnifying the local features of the most-similar low-resolution image block; and a reconstruction unit for adding the residual value found by the finding unit to the local features of the low-resolution image block magnified by the interpolation-magnification unit, obtaining the reconstructed high-resolution image block.

In the method and apparatus for image super-resolution reconstruction based on dictionary matching provided by the present application, a matching dictionary library is established, the image to be reconstructed is input into a multi-layer line filter network, its local features are extracted, the matching dictionary library is searched for the local features of the low-resolution image block with the highest similarity to those of the image to be reconstructed, the residual value of the joint sample to which those most-similar local features belong is found in the matching dictionary library, and the local features of the most-similar low-resolution image block are interpolated and magnified, the residual value being added to obtain the reconstructed high-resolution image block. The local features extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is correspondingly better. The embodiments of the present application can therefore greatly improve the quality of reconstructed high-resolution images.
Brief Description of the Drawings

FIG. 1 is a flowchart of a dictionary-matching-based image super-resolution reconstruction method of the present application;

FIG. 2 is a schematic flowchart of step 101 in FIG. 1;

FIG. 3 is a schematic diagram of the local feature extraction process performed on an image by the multi-layer line filter network in an embodiment of the present application;

FIG. 4 is a schematic diagram of the filtering process of the present application;

FIG. 5 is a schematic structural diagram of a dictionary-matching-based image super-resolution reconstruction apparatus of the present application;

FIG. 6 is a schematic structural diagram of the establishing unit in FIG. 5;

FIG. 7 is a schematic structural diagram of the extraction unit in FIG. 5.
Detailed Description

The embodiments of the present application provide a method and apparatus for image super-resolution reconstruction based on dictionary matching that can improve the quality of the reconstructed high-resolution image.

The present invention is described in further detail below through specific embodiments with reference to the accompanying drawings.
Embodiment 1:

Please refer to FIG. 1, which is a flowchart of the method of Embodiment 1 of the present application. As shown in FIG. 1, a dictionary-matching-based image super-resolution reconstruction method may include the following steps:

101. Establish a matching dictionary library.

102. Input the image block to be reconstructed into the multi-layer line filter network and extract the local features of the image block to be reconstructed.

Specifically, step 102 may include the following steps. Step 1: the multi-layer line filter network includes a filtering layer; the first-stage filter of the filtering layer filters the input image block to be reconstructed with N line filtering windows of different sizes, obtaining N corresponding filtered images that are output to the next-stage filter; each filtered image contains the line features of the image, and N is an integer greater than 1.

Step 2: the second-stage filter of the filtering layer filters the N filtered images output by the first-stage filter with M line filtering windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1.

Step 3: repeatedly output all filtered images obtained by each filter stage to the next-stage filter, which filters each of them with multiple line filtering windows, until the last-stage filter has finished; all resulting filtered images are output to the mapping layer of the multi-layer line filter network.

Step 4: the mapping layer binarizes all filtered images of the filtering layer and outputs them to the output layer of the multi-layer line filter network.

Step 5: the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, which is the local feature vector of the input image block to be reconstructed.

The local features of the image block to be reconstructed extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is better. The image feature extraction by the multi-layer line filter network proposed in the embodiments of the present application is equally applicable to reconstruction methods based on manifold learning or sparse representation.

103. Search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to the local features of the image to be reconstructed.

104. Find, in the matching dictionary library, the residual value of the joint sample to which the local features of the most-similar low-resolution image block belong.

105. Interpolate and magnify the local features of the most-similar low-resolution image block, then add the residual value, obtaining the reconstructed high-resolution image block.

It will be understood that to super-resolve an entire image with the method of this embodiment, the whole image is simply divided into multiple low-resolution image blocks to be reconstructed; each is reconstructed into a high-resolution block following the steps above, and the reconstructed high-resolution blocks are then stitched together to obtain the reconstructed high-resolution image.
When establishing the matching dictionary library, the present method likewise uses the multi-layer line filter network to extract the local features of low-resolution image blocks from known samples, for later matching during reconstruction. It will be understood that the known samples are the multiple high-resolution image blocks collected in advance when building the library, together with the corresponding multiple low-resolution blocks obtained by down-sampling them. Preferably, as shown in FIG. 2, step 101 of this embodiment specifically includes the following steps:

101A. Collect multiple high-resolution image blocks and down-sample each of them, obtaining the low-resolution image block corresponding to each high-resolution image block; a high-resolution image block together with its corresponding low-resolution image block forms a pair of training samples.

101B. Subtract the interpolated-and-magnified low-resolution image block from the high-resolution image block of each training pair, obtaining the residual value of the training sample.

101C. Input the low-resolution image block of each training pair into the multi-layer line filter network and extract the local features of the low-resolution image block of each training pair.

101D. Concatenate the local features of the low-resolution image block of each training pair with the residual value of the training sample, forming the joint sample of the training sample.

101E. Train the multiple joint samples with K-means clustering, obtaining the matching dictionary library.

In one embodiment, 500,000 joint samples may be randomly selected from a known training image library; the K-means clustering algorithm clusters these 500,000 samples into 1024 joint samples, and these 1024 joint samples serve as the atoms of the matching dictionary.
As shown in FIG. 3, the local feature extraction process performed on an image by the multi-layer line filter network includes:

S1: the multi-layer line filter network includes a filtering layer; the first-stage filter of the filtering layer filters the image input into the network with N line filtering windows of different sizes, obtaining N corresponding filtered images that are output to the next-stage filter. Each filtered image contains the line features of the image, and N is an integer greater than 1.

S2: the second-stage filter of the filtering layer filters the N filtered images output by the first-stage filter with M line filtering windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1.

Following the filtering procedure of step S2, all filtered images obtained by each filter stage are repeatedly output to the next-stage filter, which filters each of them with multiple line filtering windows, until the last-stage filter has finished; all resulting filtered images are output to the mapping layer of the multi-layer line filter network. The number of filtering repetitions can be set in advance, according to actual needs, by choosing the number of filter stages.

S3: the mapping layer binarizes all filtered images of the filtering layer and outputs them to the output layer of the multi-layer line filter network.

S4: if the input image of the multi-layer line filter network is a local image block, the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, obtaining the local features of the image; if the input image of the multi-layer line filter network is a whole image, the output layer first computes a block histogram for each binarized filtered image output by the mapping layer, then concatenates them and outputs the result, obtaining the local features of the image.
A practical example is given below to further explain the implementation of the method of the present application.

First step: construction of the multi-layer line filter network.

In the multi-layer line filter network proposed by the present invention, a bank of line filters of bandwidth one in different directions is used to extract the line features in an image. The line filter response is computed as follows:

LF(PF) = min_k Σ(i,j)∈Lk PF(i,j)     (1)

where PF is a local image block of the filter-window size, Lk is the local line in the k-th (k = 1, 2, …, N) direction within the filter window, and (i, j) are the coordinates of the pixel points on line Lk, the line Lk being defined as:

Lk = {(i,j) : j = Sk(i - i0) + j0, i ∈ PF}     (2)

where (i0, j0) are the coordinates of the central pixel of the local block and Sk is the slope of line Lk. That is, the filter bank computes the sum of pixel values along the line in each direction within the filter window and selects the smallest such sum as the filter response.
FIG. 4 depicts the network structure of the multi-layer line filter network. As shown in FIG. 4, the input P of the network may be the whole image, in which case the multi-layer line filter network extracts global statistical features; P may also be a local image block, for extracting local features. The network structure mainly comprises a filtering layer, a mapping layer, and an output layer.

The filtering layer comprises multiple stages of line filtering operations. This embodiment is described using a two-stage line filtering operation as an example. The first filtering stage filters the input P using a total of N1 line filters LF of different sizes as defined above:

pi^1 = LFi(P), i = 1, 2, …, N1     (3)

The results of filtering with all windows of the first-stage filter are output to the next-stage filter, i.e., the second-stage filter. Here pi^1 denotes the output of the first filtering stage. The second filtering stage takes each output of the first stage as input and filters it with a total of N2 line filtering windows of different sizes:

pij^2 = LFj(pi^1), j = 1, 2, …, N2     (4)

Here pij^2 denotes the output of the second filtering stage. As FIG. 4 shows, repeating the multi-stage filtering operation yields more filtered images, and the filter network can be extended to higher layers.
After the filtering layer comes the mapping layer. The filtering layer outputs the multiple image features obtained by the final filtering to the mapping layer; the mapping layer binarizes each output of the filtering layer, as shown in formula (5), and then combines the multiple binarized outputs into a single map:

where LB is a local binarization operation, defined as follows:

Figure PCTCN2015093765-appb-000004

where x is the pixel at the centre of the current filtering window and xp is a neighbouring pixel of that pixel.

Finally comes the output layer. When the input P is the whole image, the filtering layer divides the whole image into several local image blocks and obtains the filtered image of each local block separately; accordingly, in the output layer, a block histogram is first computed for each output of the mapping layer, and the histograms are then concatenated as the global statistical feature of the image P.

If the input P is a local image block, the mapping-layer outputs are concatenated as the local feature of the image.
The present application thus provides an image super-resolution reconstruction method based on dictionary matching: establish a matching dictionary library, input the image to be reconstructed into a multi-layer line filter network, extract its local features, search the matching dictionary library for the local features of the low-resolution image block with the highest similarity to those of the image to be reconstructed, find, in the matching dictionary library, the residual value of the joint sample to which those most-similar local features belong, interpolate and magnify the local features of the most-similar low-resolution image block, and add the residual value, obtaining the reconstructed high-resolution image block. The local features extracted by the multi-layer line filter network are more precise, so the subsequent match against the matching dictionary library is more accurate and the reconstructed image quality is correspondingly better. The embodiments of the present application can therefore greatly improve the quality of reconstructed high-resolution images.
实施例二:
如图5所示,本申请实施例提供一种基于字典匹配的图像超分辨率重建装置,包括:建立单元30,用于建立匹配字典库,还包括:
提取单元31,用于将待重建图像块输入多层线滤波器网络,提取所述待重建图像块的局部特征。
匹配单元32,用于从所述匹配字典库中寻找与所述待重建图像块的局部特征相似度最高的低分辨率图像块的局部特征。
寻找单元33,寻找在所述匹配字典库中,所述相似度最高的低分辨率图像块的局部特征所在联合样本的残差值。
差值放大单元34,用于对所述相似度最高的低分辨率图像块的局部特征进行插值放大。
重建单元35,用于将差值放大单元34进行放大后的低分辨率图像块的局部特征加上所述寻找单元寻找到的残差值,获得重建后的高分辨率图像块。
如图6所示,一个实施例中,建立单元30具体包括:
采集模块30A,用于采集多个高分辨率图像块,分别对所述多个高分辨率图像块进行降采样,得到与每个所述高分辨率图像块对应的低分辨率图像块,一个高分辨率图像块以及与所述高分辨率图像块对应的低分辨率图像块组成一对训练样本。
减法模块30B,用于将每对训练样本中的高分辨率图像块与所述低分辨率图像块进行插值放大后的图像相减,得到所述训练样本的残差值。
提取模块30C,用于将每对训练样本的低分辨率图像块输入多层线滤波器网络,提取每对训练样本的低分辨率图像块的局部特征。
拼接模块30D,用于将所述每对训练样本的低分辨率图像块的局部特征以及所述训练样本的残差值拼接起来作为所述训练样本的联合样本。
训练模块30E,用于使用K均值聚类对多个联合样本进行训练,得到匹配字典库。
如图7所示,一个实施例中,提取单元31具体包括:
第一级滤波器31A,用于利用N个不同大小的线滤波窗口对输入的待重建图像块进行滤波,得到对应的N个滤波图像,并输出到下一级滤波器,所述滤波图像包括:所述图像的线特征,其中N为大于1的整数;
第二级滤波器31B,用于利用M个不同大小的线滤波窗口分别对第一级滤波器输出的所述N个滤波图像进行滤波,得到对应的M×N个滤波图像,其中M为大于1的整数;
滤波模块31C,用于重复将每级滤波器得到的所有滤波图像输出至下一级滤波器,下一级滤波器利用多个线滤波窗口对上级滤波器输出的所有滤波图像分别进行滤波处理,直至最后一级滤波器滤波完毕,得到的所有滤波图像输出至多层线滤波器网络的映射层;
映射层31D,用于对所述滤波层的所有滤波图像进行二值化处理,并输出至多层线滤波器网络的输出层;
输出层31E,用于对所述映射层输出的二值化处理后的滤波图像进 行衔接并输出,即得到所述输入的待重建图像块的局部特征。
其中,每个所述滤波窗口的滤波过程为:
每个预设大小的滤波窗口利用其窗口区域内的多个不同方向的线型滤波器对所述图像中位于所述滤波窗口中间的像素点(i0,j0)进行线性滤波,响应公式如下:
Figure PCTCN2015093765-appb-000005
其中,PF为滤波器窗口大小的图像块,Lk是滤波器窗口中第k(k=1,2,…N)个方向上的局部线,(i,j)是线Lk上的像素点的坐标,线Lk定义如下:
Lk={(i,j):j=Sk(i-i0)+j0,i∈PF}
其中,Sk是线Lk的斜率。
The foregoing is a further detailed description of the present invention in connection with specific embodiments, and the specific implementation of the present invention shall not be deemed limited to these descriptions. A person of ordinary skill in the technical field to which the present invention belongs may make several simple deductions or substitutions without departing from the concept of the present invention.

Claims (10)

  1. A dictionary-matching-based image super-resolution reconstruction method, characterized by comprising:
    building a matching dictionary;
    feeding an image block to be reconstructed into a multi-layer line-filter network and extracting the local features of the image block to be reconstructed;
    searching the matching dictionary for the local feature of the low-resolution image block most similar to the local feature of the image block to be reconstructed;
    looking up, in the matching dictionary, the residual of the joint sample containing the most similar local feature of the low-resolution image block;
    enlarging, by interpolation, the low-resolution image block with the most similar local feature and adding the residual, obtaining the reconstructed high-resolution image block.
  2. The dictionary-matching-based image super-resolution reconstruction method according to claim 1, characterized in that building the matching dictionary comprises:
    collecting multiple high-resolution image blocks and downsampling each of the multiple high-resolution image blocks to obtain a low-resolution image block corresponding to each high-resolution image block, a high-resolution image block together with its corresponding low-resolution image block forming a pair of training samples;
    subtracting, for each training-sample pair, the interpolation-enlarged low-resolution image block from the high-resolution image block to obtain the residual of the training sample;
    feeding the low-resolution image block of each training-sample pair into the multi-layer line-filter network and extracting the local features of the low-resolution image block of each training-sample pair;
    concatenating the local features of the low-resolution image block of each training-sample pair with the residual of the training sample to form the joint sample of the training sample;
    training the multiple joint samples with K-means clustering to obtain the matching dictionary.
  3. The dictionary-matching-based image super-resolution reconstruction method according to claim 1 or 2, characterized in that the process by which the multi-layer line-filter network extracts the local features of an image comprises:
    step 1: the multi-layer line-filter network comprises a filtering layer; the first-level filter of the filtering layer filters the image input into the network with N line-filter windows of different sizes, obtaining N corresponding filtered images, which contain the line features of the image, and outputs them to the next-level filter, where N is an integer greater than 1;
    step 2: the second-level filter of the filtering layer filters each of the N filtered images output by the first-level filter with M line-filter windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1;
    step 3: all filtered images obtained at each filter level are repeatedly output to the next-level filter, which filters each of the filtered images output by the preceding level with multiple line-filter windows, until the last filter level completes; all resulting filtered images are output to the mapping layer of the multi-layer line-filter network;
    step 4: the mapping layer binarizes all filtered images from the filtering layer and outputs them to the output layer of the multi-layer line-filter network;
    step 5: if the input image of the multi-layer line-filter network is a local image block, the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, which is the local feature of the image; if the input image of the multi-layer line-filter network is a whole image, the output layer first computes a block-wise histogram for each binarized filtered image output by the mapping layer, then concatenates the histograms and outputs the result, which is the local feature of the image.
  4. The dictionary-matching-based image super-resolution reconstruction method according to claim 3, characterized in that the filtering process of each filter window is:
    each filter window of preset size applies multiple differently oriented line filters within its window region to linearly filter the pixel (i0, j0) at the centre of the filter window in the input image block, with the response formula as follows:
    [Response formula (1) — equation image not recoverable from the source.]
    where P_F is the image block of the filter-window size, L_k is the local line in the k-th (k = 1, 2, …, N) direction within the filter window, and (i, j) are the coordinates of the pixels on line L_k, with line L_k defined as follows:
    L_k = {(i, j) : j = S_k(i − i0) + j0, i ∈ P_F}        (2)
    where S_k is the slope of line L_k.
  5. The dictionary-matching-based image super-resolution reconstruction method according to claim 4, characterized in that the mapping layer binarizing all filtered images of the filtering layer comprises:
    binarizing the filtered images with formula (3) as follows:
    [Formula (3): the local binarization operation — equation image not recoverable from the source.]
    where x is the pixel at the centre of the current filter window and x_p are the neighbouring pixels of that pixel.
  6. The dictionary-matching-based image super-resolution reconstruction method according to claim 3, characterized in that feeding the image block to be reconstructed into the multi-layer line-filter network and extracting the local features of the image block to be reconstructed comprises:
    step 1: the multi-layer line-filter network comprises a filtering layer; the first-level filter of the filtering layer filters the input image block to be reconstructed with N line-filter windows of different sizes, obtaining N corresponding filtered images, which contain the line features of the image, and outputs them to the next-level filter, where N is an integer greater than 1;
    step 2: the second-level filter of the filtering layer filters each of the N filtered images output by the first-level filter with M line-filter windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1;
    step 3: all filtered images obtained at each filter level are repeatedly output to the next-level filter, which filters each of the filtered images output by the preceding level with multiple line-filter windows, until the last filter level completes; all resulting filtered images are output to the mapping layer of the multi-layer line-filter network;
    step 4: the mapping layer binarizes all filtered images from the filtering layer and outputs them to the output layer of the multi-layer line-filter network;
    step 5: the output layer concatenates the binarized filtered images output by the mapping layer and outputs the result, which is the local feature of the input image block to be reconstructed.
  7. A dictionary-matching-based image super-resolution reconstruction device, characterized by comprising:
    a building unit, configured to build a matching dictionary;
    an extraction unit, configured to feed an image block to be reconstructed into a multi-layer line-filter network and extract the local features of the image block to be reconstructed;
    a matching unit, configured to search the matching dictionary for the local feature of the low-resolution image block most similar to the local feature of the image block to be reconstructed;
    a lookup unit, configured to look up, in the matching dictionary, the residual of the joint sample containing the most similar local feature of the low-resolution image block;
    an interpolation unit, configured to enlarge, by interpolation, the low-resolution image block with the most similar local feature;
    a reconstruction unit, configured to add the residual found by the lookup unit to the low-resolution image block enlarged by the interpolation unit, obtaining the reconstructed high-resolution image block.
  8. The image super-resolution reconstruction device according to claim 7, characterized in that the building unit specifically comprises:
    a collection module, configured to collect multiple high-resolution image blocks and downsample each of the multiple high-resolution image blocks to obtain a low-resolution image block corresponding to each high-resolution image block, a high-resolution image block together with its corresponding low-resolution image block forming a pair of training samples;
    a subtraction module, configured to subtract, for each training-sample pair, the interpolation-enlarged low-resolution image block from the high-resolution image block to obtain the residual of the training sample;
    an extraction module, configured to feed the low-resolution image block of each training-sample pair into the multi-layer line-filter network and extract the local features of the low-resolution image block of each training-sample pair;
    a concatenation module, configured to concatenate the local features of the low-resolution image block of each training-sample pair with the residual of the training sample to form the joint sample of the training sample;
    a training module, configured to train the multiple joint samples with K-means clustering to obtain the matching dictionary.
  9. The image super-resolution reconstruction device according to claim 7 or 8, characterized in that the extraction unit specifically comprises:
    a first-level filter, configured to filter the input image block to be reconstructed with N line-filter windows of different sizes, obtaining N corresponding filtered images, which contain the line features of the image, and output them to the next-level filter, where N is an integer greater than 1;
    a second-level filter, configured to filter each of the N filtered images output by the first-level filter with M line-filter windows of different sizes, obtaining M×N corresponding filtered images, where M is an integer greater than 1;
    a filtering module, configured to repeatedly output all filtered images obtained at each filter level to the next-level filter, which filters each of the filtered images output by the preceding level with multiple line-filter windows, until the last filter level completes, and to output all resulting filtered images to the mapping layer of the multi-layer line-filter network;
    a mapping layer, configured to binarize all filtered images from the filtering layer and output them to the output layer of the multi-layer line-filter network;
    an output layer, configured to concatenate the binarized filtered images output by the mapping layer and output the result, which is the local feature of the input image block to be reconstructed.
  10. The image super-resolution reconstruction device according to claim 8, characterized in that the filtering process of each filter window is:
    each filter window of preset size applies multiple differently oriented line filters within its window region to linearly filter the pixel (i0, j0) at the centre of the filter window in the input image block, with the response formula as follows:
    [Response formula (1) — equation image not recoverable from the source.]
    where P_F is the image block of the filter-window size, L_k is the local line in the k-th (k = 1, 2, …, N) direction within the filter window, and (i, j) are the coordinates of the pixels on line L_k, with line L_k defined as follows:
    L_k = {(i, j) : j = S_k(i − i0) + j0, i ∈ P_F}
    where S_k is the slope of line L_k.