CN104021523B - A method of image super-resolution enlargement based on edge classification - Google Patents

A method of image super-resolution enlargement based on edge classification

Info

Publication number
CN104021523B
CN104021523B CN201410193840.4A CN201410193840A
Authority
CN
China
Prior art keywords
image
edge
point
interpolation
resolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410193840.4A
Other languages
Chinese (zh)
Other versions
CN104021523A (en)
Inventor
端木春江
王泽思
李林伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Normal University CJNU
Original Assignee
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Normal University CJNU filed Critical Zhejiang Normal University CJNU
Priority to CN201410193840.4A priority Critical patent/CN104021523B/en
Publication of CN104021523A publication Critical patent/CN104021523A/en
Application granted granted Critical
Publication of CN104021523B publication Critical patent/CN104021523B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image enlargement method based on edge classification. The method first performs edge detection on a low-resolution image to obtain a binary map indicating the edges of the low-resolution image. An initial interpolation is then applied to the low-resolution image to obtain an initial high-resolution image. Next, 3×3 blocks are extracted from the high-resolution image; each block is classified according to the direction of the edges in the corresponding region of the binarized edge map, and certain pixels of the block are re-interpolated according to its class. The method takes the direction of the edges into account during interpolation, so that the details of the enlarged image, especially the edges and the regions near them, remain sharp. Experiments show that the reconstructed image is very close to the original high-resolution image.

Description

A Method of Image Super-resolution Enlargement Based on Edge Classification

Technical Field

The invention relates to a new method of super-resolution image enlargement in image processing. In particular, it uses the edge information of a low-resolution image to classify a small block of the initially enlarged high-resolution image according to the edge points of the corresponding region of the low-resolution image, and then re-interpolates certain pixels of the block according to that class.

Background Art

With the rapid development of digital cameras and Internet technology, the demand for high-definition digital images grows by the day. However, limited by network bandwidth and storage space, storing and transmitting high-definition images is costly and consumes a great deal of system resources, such as very large storage space or very large bandwidth. For this reason, digital image compression techniques were proposed, leading to international image compression standards such as JPEG and JPEG2000. Current lossless image compression methods generally achieve compression ratios below 4, while lossy compression methods introduce distortion such as block artifacts and ringing artifacts. Image super-resolution enlargement has therefore attracted wide attention and has become a hot research topic in image processing in recent years.

The goal of image super-resolution enlargement is to obtain a high-resolution image from a low-resolution one. In this way, only the low-resolution image needs to be transmitted or stored, and the receiving or displaying side can reconstruct the high-resolution image by super-resolution enlargement. Super-resolution enlargement can also be combined with existing image compression techniques to further reduce the system resources consumed in storing and transmitting high-resolution images. Likewise, the storage and transmission of high-definition video can exploit image super-resolution to reduce resource consumption.

Current image super-resolution methods fall into interpolation-based methods and example-based methods. Example-based methods must first build a training database of low-resolution image blocks and their corresponding high-resolution blocks, and their computational cost is very high, which makes real-time application difficult. Interpolation-based methods have low computational complexity, but they tend to blur the image and leave its edges unclear. Since the human eye is particularly sensitive to edges, even a small distortion there greatly degrades the visual quality of the image.

For this reason, the present invention proposes a method that, after the initial enlargement, modifies the values of the edge pixels and nearby pixels in the enlarged image according to the edges of the image, so that the interpolation direction is consistent with the edge direction and an enlarged image with sharp edges is obtained.

Summary of the Invention

In view of the above defects of the prior art, the present invention proposes a method that classifies the edge configurations in an image and then re-interpolates each class accordingly. The edge information of the low-resolution image is used as heuristic information, and interpolation is carried out along the direction of the edges, so that the edges of the enlarged image are sharp, overcoming the edge blur of existing super-resolution interpolation methods.

To achieve the above object, the present invention provides a method that, after edge detection, classifies the edges into 11 different classes and then re-interpolates accordingly. The description assumes a magnification factor of 2×2, i.e. 2× horizontally and 2× vertically, although the invention is not limited to 2×2 magnification. The invention comprises:

Step 1: perform an initial interpolation-based super-resolution enlargement to obtain an initial high-resolution image;

Step 2: perform edge detection and extraction on the low-resolution image;

Step 3: binarize the edge-extracted image so that non-edge points take the value 0 and edge points take the value 1; that is, in the binarized image, pixels on image edges have value 1 and all other pixels have value 0;

Step 4: set x = 0, y = 0;

Step 5: taking (x, y) as the upper-left corner of the high-resolution block, extract a 3×3 block from the high-resolution image; taking (x/2, y/2) as the upper-left corner, extract a 2×2 block from the low-resolution binarized image; according to this 2×2 binarized edge block, classify the extracted high-resolution block into one of the following 11 classes: upper edge, lower edge, left edge, right edge, lower-left diagonal edge, lower-right diagonal edge, upper-left corner, lower-left corner, upper-right corner, lower-right corner, and other; then, according to the edge class, re-interpolate the edge pixels and nearby pixels of the 3×3 block extracted from the initially enlarged image;

Step 6: set x = x + 2; if x ≤ 2W − 5 (W is the width of the low-resolution image), go back to Step 5 and process the next block;

Step 7: set y = y + 2; if y ≤ 2H − 5 (H is the height of the low-resolution image), go back to Step 5 and process the next block;

Step 8: the super-resolution enlargement of the current low-resolution image is finished, yielding a high-resolution image of size (2W) × (2H).
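As a rough illustration of Steps 1-8, the following Python sketch outlines the block-scanning loop. The helper functions (`bilinear_upscale_2x`, `canny_edges`, `classify_block`, `reinterpolate_block`) are placeholders standing for the operations described elsewhere in this document, and the resetting of x to 0 at the start of each row is an assumption, since the steps above do not state it explicitly.

```python
def super_resolve_2x(lr, bilinear_upscale_2x, canny_edges, classify_block, reinterpolate_block):
    """Sketch of Steps 1-8: enlarge an (H, W) low-resolution numpy array to (2H, 2W)."""
    H, W = lr.shape
    hr = bilinear_upscale_2x(lr)            # Step 1: initial interpolation
    edges = canny_edges(lr)                 # Steps 2-3: binary edge map (0/1) of the LR image
    y = 0
    while y <= 2 * H - 5:                   # Step 7 bound
        x = 0                               # assumed reset of x for every new row of blocks
        while x <= 2 * W - 5:               # Step 6 bound
            block = hr[y:y + 3, x:x + 3]                          # 3x3 block of the initial HR image
            patch = edges[y // 2:y // 2 + 2, x // 2:x // 2 + 2]   # matching 2x2 edge patch
            cls = classify_block(patch)                           # one of the 11 classes (Step 5)
            hr[y:y + 3, x:x + 3] = reinterpolate_block(block, cls)
            x += 2
        y += 2
    return hr                               # Step 8: (2H, 2W) result
```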

Further, Step 1 uses the bilinear interpolation method. That is, for the following image block:

where A, B, C, and D are the pixel values of points in the low-resolution image, and a, b, c, d, and e are the pixels to be interpolated in the high-resolution image. In the bilinear interpolation method, their values are obtained by:

where the clip(x) function limits the value of x to the valid range of a pixel value, i.e. clip(x) = max(I_min, min(x, I_max)). Here, I_min and I_max are respectively the minimum and maximum values a pixel may take.
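A minimal Python sketch of the 2× bilinear upscaling with the clip() operation is given below. The exact layout of a-e and the interpolation formula appear as figures in the original, so the averaging pattern used here (midpoints between neighbouring low-resolution pixels and the block centre) is an assumption consistent with standard bilinear interpolation.

```python
import numpy as np

def clip(x, i_min=0.0, i_max=255.0):
    """clip(x) = max(I_min, min(x, I_max)); an 8-bit range is assumed by default."""
    return np.maximum(i_min, np.minimum(x, i_max))

def bilinear_upscale_2x(lr):
    """Assumed standard 2x bilinear interpolation: low-resolution pixels keep their values
    on the even grid, and the new pixels are averages of their nearest LR neighbours."""
    lr = lr.astype(np.float64)
    H, W = lr.shape
    hr = np.zeros((2 * H, 2 * W))
    hr[0::2, 0::2] = lr                                       # A, B, C, D keep their values
    hr[0::2, 1:-1:2] = clip((lr[:, :-1] + lr[:, 1:]) / 2)     # e.g. a = clip((A + B) / 2)
    hr[1:-1:2, 0::2] = clip((lr[:-1, :] + lr[1:, :]) / 2)     # e.g. b = clip((A + C) / 2)
    hr[1:-1:2, 1:-1:2] = clip((lr[:-1, :-1] + lr[:-1, 1:] +
                               lr[1:, :-1] + lr[1:, 1:]) / 4) # e.g. e = clip((A+B+C+D) / 4)
    hr[:, -1] = hr[:, -2]                                     # replicate the last column
    hr[-1, :] = hr[-2, :]                                     # replicate the last row
    return hr
```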

Further, in Step 2, the Canny operator is used for edge extraction. The Canny operator is computed as follows:

First, the image is smoothed with a Gaussian filter; that is, a Gaussian smoothing function is chosen whose impulse response in the frequency domain is:

where D(u, v) is the distance from the point (u, v) to the centre of the frequency plane, σ is the standard deviation (spread) of the Gaussian, and H(u, v) describes the extent of the Gaussian curve. To keep the edges from becoming too blurred, the invention uses a convolution template with a small support. Specifically, the following 5×5 template is used to convolve the low-resolution image:

Next, the magnitude and direction of the gradient are computed. First, a first-order difference convolution template is chosen:

Then the gradients Ψ₁(m, n) and Ψ₂(m, n) of the low-resolution image f(m, n) (m is the vertical coordinate, n the horizontal coordinate) in the two orthogonal directions H₁ and H₂ are defined as:

Ψ₁(m, n) = f(m, n) * H₁(m, n)

Ψ₂(m, n) = f(m, n) * H₂(m, n)

After further computation, the strength and direction of the edge are obtained as follows:

In the original Canny operator, non-maximum suppression (NMS) is performed using the computed gradient magnitudes; the present invention improves this step. First, the edge angle θ_Ψ is quantized to θ₁ ∈ {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}. Then the current point is judged to be an initial edge point only if: for θ₁ = 0°, Ψ(m, n) > Ψ(m, n+1); for θ₁ = 45°, Ψ(m, n) > Ψ(m−1, n+1); for θ₁ = 90°, Ψ(m, n) > Ψ(m−1, n); for θ₁ = 135°, Ψ(m, n) > Ψ(m−1, n−1); for θ₁ = 180°, Ψ(m, n) > Ψ(m, n−1); for θ₁ = 225°, Ψ(m, n) > Ψ(m+1, n−1); for θ₁ = 270°, Ψ(m, n) > Ψ(m+1, n); and for θ₁ = 315°, Ψ(m, n) > Ψ(m+1, n+1). This reduces the amount of computation and gives good results.

Finally, a double-threshold algorithm is used to link the detected image edges. That is, when the gradient magnitude of an initial edge point detected in the previous step satisfies Ψ(m, n) > T_h, the point is confirmed as an image edge point. Then, using these edge points as seeds, their neighbouring points are scanned, and a neighbour is added to the set of edge points when its gradient magnitude satisfies Ψ(m, n) > T_l. In the present invention, based on extensive experiments, the two thresholds are set to T_h = 200 and T_l = 100.
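The sketch below illustrates this edge-detection step: Gaussian smoothing, gradient computation, the direction-quantized non-maximum test, and double-threshold linking with T_h = 200 and T_l = 100. The 5×5 Gaussian template and the first-order difference templates H₁, H₂ are given as figures in the original, so the generic kernels used here are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def canny_edges(lr, T_h=200.0, T_l=100.0, sigma=1.0):
    """Sketch of the modified Canny step; the actual smoothing and difference templates
    of the patent are figures and are replaced here by assumed generic kernels."""
    ax = np.arange(-2, 3)
    g1 = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g = np.outer(g1, g1)
    g /= g.sum()                                    # assumed 5x5 Gaussian template
    f = convolve(lr.astype(np.float64), g)

    H1 = np.array([[-1.0, 1.0]])                    # assumed horizontal difference template
    H2 = np.array([[-1.0], [1.0]])                  # assumed vertical difference template
    psi1 = convolve(f, H1)
    psi2 = convolve(f, H2)
    mag = np.hypot(psi1, psi2)                      # edge strength Psi(m, n)
    ang = (np.degrees(np.arctan2(psi2, psi1)) + 360.0) % 360.0
    theta = (np.round(ang / 45.0) % 8) * 45.0       # quantized angle in {0, 45, ..., 315}

    # single neighbour (dm, dn) compared against for each quantized angle
    nbr = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1),
           180: (0, -1), 225: (1, -1), 270: (1, 0), 315: (1, 1)}
    H, W = lr.shape
    strong = np.zeros((H, W), dtype=bool)
    weak = np.zeros((H, W), dtype=bool)
    for m in range(1, H - 1):
        for n in range(1, W - 1):
            dm, dn = nbr[int(theta[m, n])]
            if mag[m, n] > mag[m + dm, n + dn]:     # initial edge point test
                strong[m, n] = mag[m, n] > T_h
                weak[m, n] = mag[m, n] > T_l

    # double-threshold linking: grow strong edges into neighbouring weak points
    edges = strong.copy()
    changed = True
    while changed:
        grown = np.zeros_like(edges)
        grown[1:-1, 1:-1] = (edges[:-2, :-2] | edges[:-2, 1:-1] | edges[:-2, 2:] |
                             edges[1:-1, :-2] | edges[1:-1, 2:] |
                             edges[2:, :-2] | edges[2:, 1:-1] | edges[2:, 2:])
        new_edges = edges | (weak & grown)
        changed = bool(np.any(new_edges != edges))
        edges = new_edges
    return edges.astype(np.uint8)                   # binary edge map used in Step 3
```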

Further, in Step 5, the process of classifying an image block according to the edges in the 3×3 block can be described as follows. For a block extracted from the high-resolution image

in this block, the pixels a11, a13, a31, and a33 belong to the low-resolution image, and the remaining pixel values are obtained by interpolation. According to the corresponding low-resolution binarized edge map, the block can be classified by the positions at which edge points appear in it. The classes are as follows (a classification sketch is given after the list):

(a) Upper-edge class. Here a11 and a13 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23. The interpolation formula is

where clip(x) is defined as above, and α and β are two weights satisfying α + β = 1.

(b) Lower-edge class. Here a31 and a33 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23. The interpolation formula is

(c) Left-edge class. Here a11 and a31 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32. The interpolation formula is

(d) Right-edge class. Here a13 and a33 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32. The interpolation formula is

(e) Lower-left diagonal edge class. Here a11 and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32. The interpolation formula is

(f) Lower-right diagonal edge class. Here a13 and a31 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32. The interpolation formula is

(g) Upper-left corner edge class. Here a11, a13, and a31 are image edge points, and the pixels to be re-interpolated are a22, a23, and a32. The interpolation formula is

(h) Lower-left corner edge class. Here a11, a31, and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, and a23. The interpolation formula is

(i) Upper-right corner edge class. Here a11, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a32. The interpolation formula is

(j) Lower-right corner edge class. Here a31, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a12. The interpolation formula is

(k) Other class. All other cases belong to this class. For blocks in this class, the extracted high-resolution block is not re-interpolated.
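A small Python sketch of the classification referred to above follows. It reads the 2×2 binarized edge patch whose entries correspond to the low-resolution pixels a11, a13, a31, a33 and maps it to one of the 11 classes; the reading that the class is determined by exactly which of the four corner pixels are edge points (any other pattern falling into the "other" class) is an interpretation of the text. The class-specific re-interpolation formulas themselves are given as figures in the original and are therefore not reproduced here.

```python
def classify_block(edge_patch):
    """Map a 2x2 binary edge patch (rows/columns ordered as a11, a13 / a31, a33)
    to one of the 11 classes described in Step 5."""
    e11, e13 = bool(edge_patch[0][0]), bool(edge_patch[0][1])   # a11, a13
    e31, e33 = bool(edge_patch[1][0]), bool(edge_patch[1][1])   # a31, a33
    table = {
        (True,  True,  False, False): "upper_edge",           # (a)
        (False, False, True,  True):  "lower_edge",           # (b)
        (True,  False, True,  False): "left_edge",            # (c)
        (False, True,  False, True):  "right_edge",           # (d)
        (True,  False, False, True):  "lower_left_diagonal",  # (e)
        (False, True,  True,  False): "lower_right_diagonal", # (f)
        (True,  True,  True,  False): "upper_left_corner",    # (g)
        (True,  False, True,  True):  "lower_left_corner",    # (h)
        (True,  True,  False, True):  "upper_right_corner",   # (i)
        (False, True,  True,  True):  "lower_right_corner",   # (j)
    }
    return table.get((e11, e13, e31, e33), "other")            # (k) everything else
```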

In summary, the present invention first performs an initial interpolation-based enlargement; any existing interpolation method may be chosen here, and the present method then improves its result. Since bilinear interpolation has low computational complexity and good performance, the invention adopts it for the initial enlargement. The Canny operator is then used to extract the edges of the low-resolution image. Next, 3×3 blocks are extracted from the high-resolution image and classified according to the position and direction of the edges in the corresponding region of the low-resolution image: if a block belongs to one of the first 10 classes, it is re-interpolated according to its edge class; otherwise the original interpolation is kept. The next block is then extracted and processed in the same way, until all blocks of the high-resolution image have been handled, finally yielding a high-resolution image with sharp edges. The innovation lies in extracting 3×3 blocks of the high-resolution image, classifying them with the edge information of the low-resolution image, and re-interpolating them in a targeted way according to their class (or keeping the original values), so that the edges of the image become more prominent and sharper, improving the quality of the enlargement.

The idea, specific structure, and technical effects of the present invention are further described below in conjunction with the accompanying drawings, so that the purpose, features, and effects of the invention can be fully understood.

Brief Description of the Drawings

Figure 1 is a flowchart of the edge-classification-based super-resolution image reconstruction algorithm of the present invention;

Figure 2 shows experimental results of the edge-classification-based super-resolution image reconstruction method of the present invention.

Detailed Description

The embodiments of the present invention are described in detail below with reference to the accompanying drawings. The embodiments are implemented on the premise of the technical solution of the present invention, and detailed implementations and specific operating procedures are given, but the protection scope of the present invention is not limited to the following embodiments.

As shown in Figure 1, the new edge-classification-based image enlargement method of the present invention proceeds as follows:

Step 1: perform an initial bilinear-interpolation-based super-resolution enlargement to obtain an initial high-resolution image. That is, for the following image block:

where A, B, C, and D are the pixel values of points in the low-resolution image, and a, b, c, d, and e are the pixels to be interpolated in the high-resolution image. In the bilinear interpolation method, their values are obtained by:

where the clip(x) function limits the value of x to the valid range of a pixel value, i.e. clip(x) = max(I_min, min(x, I_max)). For an 8-bit luminance image, I_min = 0 and I_max = 255.

Here, I_min and I_max are respectively the minimum and maximum values a pixel may take.

Step 2: perform edge detection and extraction on the low-resolution image. Here, the Canny operator is used for edge extraction and is computed as follows:

First, the image is smoothed with a Gaussian filter; that is, a Gaussian smoothing function is chosen whose impulse response in the frequency domain is:

where D(u, v) is the distance from the point (u, v) to the centre of the frequency plane, σ is the standard deviation (spread) of the Gaussian, and H(u, v) describes the extent of the Gaussian curve. To keep the edges from becoming too blurred, the invention uses a convolution template with a small support.

Specifically, the invention uses the following 5×5 template to convolve the low-resolution image:

Next, the magnitude and direction of the gradient are computed. First, a first-order difference convolution template is chosen:

Then the gradients Ψ₁(m, n) and Ψ₂(m, n) of the low-resolution image f(m, n) (m is the vertical coordinate, n the horizontal coordinate) in the two orthogonal directions H₁ and H₂ are defined as:

Ψ₁(m, n) = f(m, n) * H₁(m, n)

Ψ₂(m, n) = f(m, n) * H₂(m, n)

After further computation, the strength and direction of the edge are obtained as follows:

In the original Canny operator, non-maximum suppression (NMS) is performed using the computed gradient magnitudes; the present invention improves this step. First, the edge angle θ_Ψ is quantized to θ₁ ∈ {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}. Then the current point is judged to be an initial edge point only if: for θ₁ = 0°, Ψ(m, n) > Ψ(m, n+1); for θ₁ = 45°, Ψ(m, n) > Ψ(m−1, n+1); for θ₁ = 90°, Ψ(m, n) > Ψ(m−1, n); for θ₁ = 135°, Ψ(m, n) > Ψ(m−1, n−1); for θ₁ = 180°, Ψ(m, n) > Ψ(m, n−1); for θ₁ = 225°, Ψ(m, n) > Ψ(m+1, n−1); for θ₁ = 270°, Ψ(m, n) > Ψ(m+1, n); and for θ₁ = 315°, Ψ(m, n) > Ψ(m+1, n+1). This reduces the amount of computation and gives good results.

Finally, a double-threshold algorithm is used to link the detected image edges. That is, when the gradient magnitude of an initial edge point detected in the previous step satisfies Ψ(m, n) > T_h, the point is confirmed as an image edge point. Then, using these edge points as seeds, their neighbouring points are scanned, and a neighbour is added to the set of edge points when its gradient magnitude satisfies Ψ(m, n) > T_l. In the present invention, based on extensive experiments, the two thresholds are set to T_h = 200 and T_l = 100.

Step 3: binarize the edge-extracted image so that non-edge points take the value 0 and edge points take the value 1; that is, in the binarized image, pixels on image edges have value 1 and all other pixels have value 0.

Step 4: set x = 0, y = 0.

Step 5: taking (x, y) as the upper-left corner of the high-resolution block, extract a 3×3 block from the high-resolution image; taking (x/2, y/2) as the upper-left corner, extract a 2×2 block from the low-resolution binarized image; according to this 2×2 binarized edge block, classify the extracted high-resolution block into one of the following 11 classes: upper edge, lower edge, left edge, right edge, lower-left diagonal edge, lower-right diagonal edge, upper-left corner, lower-left corner, upper-right corner, lower-right corner, and other; then, according to the edge class, re-interpolate the edge pixels and nearby pixels of the 3×3 block extracted from the initially enlarged image. The process of classifying a block according to the edges in the 3×3 block is described as follows. For a block extracted from the high-resolution image

in this block, the pixels a11, a13, a31, and a33 belong to the low-resolution image, and the remaining pixel values are obtained by interpolation. According to the corresponding low-resolution binarized edge map, the block can be classified by the positions at which edge points appear in it. The classes are as follows:

(a) Upper-edge class. Here a11 and a13 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23. The interpolation formula is

where the clip(x) function limits the value of x to the valid range of a pixel value, i.e. clip(x) = max(I_min, min(x, I_max)); I_min and I_max are respectively the minimum and maximum values a pixel may take, and α and β are two weights satisfying α + β = 1.

(b) Lower-edge class. Here a31 and a33 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23. The interpolation formula is

(c) Left-edge class. Here a11 and a31 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32. The interpolation formula is

(d) Right-edge class. Here a13 and a33 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32. The interpolation formula is

(e) Lower-left diagonal edge class. Here a11 and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32. The interpolation formula is

(f) Lower-right diagonal edge class. Here a13 and a31 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32. The interpolation formula is

(g) Upper-left corner edge class. Here a11, a13, and a31 are image edge points, and the pixels to be re-interpolated are a22, a23, and a32. The interpolation formula is

(h) Lower-left corner edge class. Here a11, a31, and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, and a23. The interpolation formula is

(i) Upper-right corner edge class. Here a11, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a32. The interpolation formula is

(j) Lower-right corner edge class. Here a31, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a12. The interpolation formula is

(k) Other class. All other cases belong to this class. For blocks in this class, the extracted high-resolution block is not re-interpolated.

Here, based on extensive experiments, the invention sets α = 0.7 and β = 0.3.

Step 6: set x = x + 2; if x ≤ 2W − 5 (W is the width of the low-resolution image), go back to Step 5 and process the next block.

Step 7: set y = y + 2; if y ≤ 2H − 5 (H is the height of the low-resolution image), go back to Step 5 and process the next block.

Step 8: the super-resolution enlargement of the current low-resolution image is finished, yielding a high-resolution image of size (2W) × (2H).

The experiments of the present invention mainly use face images from the FERET database; specifically, four images with different skin tones are selected for reconstruction. The original high-resolution images are 120×120. Keeping one pixel out of every two, the downsampled degraded images become 60×60, i.e. 1/4 of the original size, so the interpolation magnification factor is set to 2 in each of the width and height directions.

The present invention takes the face image database as the experimental object and then evaluates the experimental results both subjectively and objectively. Subjective evaluation is based on the human visual impression of the overall image and of its details, while objective evaluation uses different formulas and algorithms to quantify image quality with data.

Figure 2 shows the results of the proposed method. Column (a) contains the low-resolution images obtained by downsampling the actual high-resolution images; column (b) shows the result of bilinear interpolation of the low-resolution images in column (a); column (c) shows the edge maps extracted from the images in column (a) by the method of the invention; column (d) shows the high-resolution images obtained from the images in column (a) by the method of the invention; and column (e) shows the actual high-resolution images.

It can be seen from the figure that the proposed method reconstructs the edges and the regions near them very well. Zooming in, the images reconstructed by the algorithm of the invention are very smooth overall, their edges are sharper, and the points near the edges are closer to the real image. Since different skin tones lead to different numbers of detected edges, the reconstructed images also differ slightly.

Table 1. PSNR of the output images of the bilinear interpolation method and of the method of the present invention on actual FERET face database images

Table 2. SSIM of the output images of the bilinear interpolation method and of the method of the present invention on actual FERET face database images

The present invention mainly uses two objective evaluation measures, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), to evaluate the performance of the proposed method. The formula for PSNR is:

In the above formula, n is the number of bits used for the image luminance values; for example, n = 8 for 8-bit luminance. MSE is the mean squared error, defined as:

where the image size is (2W) × (2H), f′(i, j) denotes the pixel values of the high-resolution image reconstructed by the method of the present invention, and f(i, j) denotes the pixel values of the original high-resolution image.
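The PSNR and MSE expressions themselves appear as figures in the original; the sketch below assumes the standard definitions, PSNR = 10·log10((2ⁿ − 1)² / MSE) with the MSE averaged over the (2W)×(2H) image, which is consistent with the surrounding text.

```python
import numpy as np

def psnr(f, f_rec, n_bits=8):
    """Peak signal-to-noise ratio between the original high-resolution image f and the
    reconstructed image f_rec, using the standard formula (assumed; the patent's own
    expression is given as a figure)."""
    f = f.astype(np.float64)
    f_rec = f_rec.astype(np.float64)
    mse = np.mean((f - f_rec) ** 2)          # mean squared error over the whole image
    if mse == 0:
        return float("inf")                  # identical images
    peak = (2 ** n_bits - 1) ** 2            # e.g. 255^2 for 8-bit luminance
    return 10.0 * np.log10(peak / mse)
```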

The specific formula of the structural similarity (SSIM) evaluation method is as follows:

where μx and μy denote the means of the original and reconstructed images, σx and σy denote their variances, and C1 and C2 are stabilizing constants for the luminance and contrast terms; the remaining factors of the formula (whose symbols appear in the original expression) are the contrast component of the original and reconstructed images and the structural similarity component of the two images before and after reconstruction.
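The SSIM expression is also given as a figure in the original; the sketch below uses the widely used single-window form SSIM = ((2·μx·μy + C1)(2·σxy + C2)) / ((μx² + μy² + C1)(σx² + σy² + C2)), which should be read as an assumption about the intended formula rather than the patent's exact expression.

```python
import numpy as np

def ssim_global(x, y, n_bits=8, k1=0.01, k2=0.03):
    """Single-window (global) SSIM between the original image x and the reconstructed
    image y; the constants C1 = (k1*L)^2 and C2 = (k2*L)^2 with L = 2^n_bits - 1 are the
    commonly used choices and are assumed here."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    L = 2 ** n_bits - 1
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```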

For the four low-resolution images in Figure 2(a), the PSNR and SSIM between the super-resolution images reconstructed by the traditional bilinear interpolation method and the actual high-resolution images, and between the super-resolution images reconstructed by the proposed method and the actual high-resolution images, were computed, giving Tables 1 and 2. The tables show that the proposed method achieves higher PSNR and SSIM than the traditional bilinear interpolation method; its reconstruction quality and performance are superior.

From the data in Tables 1 and 2 and from Figure 2, it can also be seen that the more edges are detected, the more the edge-classification-based super-resolution reconstruction method of the present invention benefits. This also shows that edge pixels have an important influence on interpolation-based enlargement, and the edge-classification-based reconstruction algorithm proposed by the invention reconstructs edge points and their neighbouring pixels more accurately, which makes it well suited to practical applications.

The preferred embodiments of the present invention have been described in detail above. It should be understood that those skilled in the art can make many modifications and changes according to the concept of the invention without creative effort. Therefore, all technical solutions that can be obtained by those skilled in the art on the basis of the prior art through logical analysis, reasoning, or limited experimentation in accordance with the concept of the present invention shall fall within the scope of protection defined by the claims.

Claims (4)

1. An image super-resolution reconstruction method incorporating edge detection, wherein, after edge detection, the edges are classified into 11 different classes and then re-interpolated, the magnification factor being 2×2, i.e. 2× horizontal magnification and 2× vertical magnification, the method comprising:
Step 1: an initial interpolation-based super-resolution enlargement, to obtain an initial high-resolution image;
Step 2: performing edge detection and extraction on the low-resolution image;
Step 3: binarizing the edge-extracted image so that the binary value at non-edge points becomes 0 and the binary value at edge points becomes 1, i.e., in the binarized image, pixels on image edges have the value 1 and all other pixels have the value 0;
Step 4: setting x = 0, y = 0;
Step 5: taking (x, y) as the upper-left corner of a high-resolution image block, extracting a 3×3 block from the high-resolution image; taking (x/2, y/2) as the upper-left corner, extracting a 2×2 block from the low-resolution binarized image; classifying the extracted high-resolution block, according to this 2×2 binarized edge block, into one of the following 11 classes: upper edge, lower edge, left edge, right edge, lower-left diagonal edge, lower-right diagonal edge, upper-left corner, lower-left corner, upper-right corner, lower-right corner, and other; and then, according to the edge class, re-interpolating the edge pixels and nearby pixels of the 3×3 block extracted from the high-resolution image;
Step 6: setting x = x + 2; if x ≤ 2W − 5, W being the width of the low-resolution image, returning to Step 5 to process the next image block;
Step 7: setting y = y + 2; if y ≤ 2H − 5, H being the height of the low-resolution image, returning to Step 5 to process the next image block;
Step 8: ending the super-resolution enlargement of the current low-resolution image and obtaining a high-resolution image of size (2W) × (2H).
2. The image super-resolution reconstruction method incorporating edge detection according to claim 1, wherein, in Step 2, the Canny operator is computed as follows:
First, the image is smoothed with a Gaussian filter; that is, a Gaussian smoothing function is chosen whose impulse response in the frequency domain is:
where D(u, v) is the distance from the point (u, v) to the centre of the frequency plane, σ is the standard deviation (spread) of the Gaussian, and H(u, v) describes the extent of the Gaussian curve; to keep the edges from becoming too blurred, a convolution template with a small support is used;
Specifically, the following 5×5 template is used to convolve the low-resolution image:
Next, the magnitude and direction of the gradient are computed; first, a first-order difference convolution template is chosen:
Then the gradients Ψ₁(m, n) and Ψ₂(m, n) of the low-resolution image f(m, n), where m is the vertical coordinate and n the horizontal coordinate, in the two orthogonal directions H₁ and H₂ are defined as:
Ψ₁(m, n) = f(m, n) * H₁(m, n)
Ψ₂(m, n) = f(m, n) * H₂(m, n)
After further computation, the strength and direction of the edge are obtained as follows:
In the original Canny operator, non-maximum suppression is performed using the computed gradient magnitudes; the present invention improves this step by first quantizing the edge angle θ_Ψ to θ₁ ∈ {0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°}; then the current point is judged to be an initial edge point only if: for θ₁ = 0°, Ψ(m, n) > Ψ(m, n+1); for θ₁ = 45°, Ψ(m, n) > Ψ(m−1, n+1); for θ₁ = 90°, Ψ(m, n) > Ψ(m−1, n); for θ₁ = 135°, Ψ(m, n) > Ψ(m−1, n−1); for θ₁ = 180°, Ψ(m, n) > Ψ(m, n−1); for θ₁ = 225°, Ψ(m, n) > Ψ(m+1, n−1); for θ₁ = 270°, Ψ(m, n) > Ψ(m+1, n); and for θ₁ = 315°, Ψ(m, n) > Ψ(m+1, n+1); this reduces the amount of computation and gives good results;
Finally, a double-threshold algorithm is used to link the detected image edges: when the gradient magnitude of an initial edge point detected in the previous step satisfies Ψ(m, n) > T_h, the point is confirmed as an image edge point; then, using these edge points as seeds, their neighbouring points are scanned, and a neighbour is added to the set of edge points when its gradient magnitude satisfies Ψ(m, n) > T_l; in the present invention, based on extensive experiments, the two thresholds are set to T_h = 200 and T_l = 100.
3. The image super-resolution reconstruction method incorporating edge detection according to claim 1, wherein,
according to the edges in the 3×3 block, the process of classifying the image block is described as follows: for a block extracted from the high-resolution image,
in this block, the pixels a11, a13, a31, and a33 belong to the low-resolution image, the values of the remaining pixels are obtained by interpolation, and, according to the corresponding low-resolution binarized edge map, the block is classified by the positions at which edge points appear in it, the classes being as follows:
(a) upper-edge class: a11 and a13 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23, with the interpolation formula
where the clip(x) function limits the value of x to the valid range of a pixel value, i.e. clip(x) = max(I_min, min(x, I_max)); I_min and I_max are respectively the minimum and maximum values a pixel may take, and α and β are two weights satisfying α + β = 1;
(b) lower-edge class: a31 and a33 are image edge points, and the pixels to be re-interpolated are a21, a22, and a23, with the interpolation formula
(c) left-edge class: a11 and a31 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32, with the interpolation formula
(d) right-edge class: a13 and a33 are image edge points, and the pixels to be re-interpolated are a12, a22, and a32, with the interpolation formula
(e) lower-left diagonal edge class: a11 and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32, with the interpolation formula
(f) lower-right diagonal edge class: a13 and a31 are image edge points, and the pixels to be re-interpolated are a22, a12, a21, a23, and a32, with the interpolation formula
(g) upper-left corner edge class: a11, a13, and a31 are image edge points, and the pixels to be re-interpolated are a22, a23, and a32, with the interpolation formula
(h) lower-left corner edge class: a11, a31, and a33 are image edge points, and the pixels to be re-interpolated are a22, a12, and a23, with the interpolation formula
(i) upper-right corner edge class: a11, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a32, with the interpolation formula
(j) lower-right corner edge class: a31, a13, and a33 are image edge points, and the pixels to be re-interpolated are a22, a21, and a12, with the interpolation formula
(k) other class: all other cases belong to this class, and for blocks belonging to this class the extracted high-resolution block is not re-interpolated.
4. The image super-resolution reconstruction method incorporating edge detection according to claim 3, wherein the values of α and β are α = 0.7 and β = 0.3.
CN201410193840.4A 2014-04-30 2014-04-30 A method of image super-resolution enlargement based on edge classification Active CN104021523B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410193840.4A CN104021523B (en) A method of image super-resolution enlargement based on edge classification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410193840.4A CN104021523B (en) A method of image super-resolution enlargement based on edge classification

Publications (2)

Publication Number Publication Date
CN104021523A CN104021523A (en) 2014-09-03
CN104021523B true CN104021523B (en) 2017-10-10

Family

ID=51438262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410193840.4A Active CN104021523B (en) A method of image super-resolution enlargement based on edge classification

Country Status (1)

Country Link
CN (1) CN104021523B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787912B (en) * 2014-12-18 2021-07-30 南京大目信息科技有限公司 Classification-based step type edge sub-pixel positioning method
CN104881842B (en) * 2015-05-18 2019-03-01 浙江师范大学 A kind of image super-resolution method based on picture breakdown
CN106169173B (en) * 2016-06-30 2019-12-31 北京大学 A Method of Image Interpolation
CN109345465B (en) * 2018-08-08 2023-04-07 西安电子科技大学 GPU-based high-resolution image real-time enhancement method
CN109410177B (en) * 2018-09-28 2022-04-01 深圳大学 Image quality analysis method and system for super-resolution image
CN109557101B (en) * 2018-12-29 2023-11-17 桂林电子科技大学 Defect detection device and method for non-elevation reflective curved surface workpiece
CN112348103B (en) * 2020-11-16 2022-11-11 南开大学 Image block classification method and device and super-resolution reconstruction method and device
CN114863057A (en) * 2022-03-25 2022-08-05 东南大学成贤学院 Method for three-dimensional reproduction of point cloud reconstructed CT (computed tomography) image
CN117830104A (en) * 2023-12-29 2024-04-05 摩尔线程智能科技(成都)有限责任公司 Image super-resolution processing method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499164A (en) * 2009-02-27 2009-08-05 西安交通大学 Image interpolation reconstruction method based on single low-resolution image
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379625B2 (en) * 2003-05-30 2008-05-27 Samsung Electronics Co., Ltd. Edge direction based image interpolation method
CN103380615B (en) * 2011-02-21 2015-09-09 三菱电机株式会社 Image amplifying device and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499164A (en) * 2009-02-27 2009-08-05 西安交通大学 Image interpolation reconstruction method based on single low-resolution image
CN101957309A (en) * 2010-08-17 2011-01-26 招商局重庆交通科研设计院有限公司 All-weather video measurement method for visibility
CN102360498A (en) * 2011-10-27 2012-02-22 江苏省邮电规划设计院有限责任公司 Reconstruction method for image super-resolution

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Edge-Directed Single-Image Super-Resolution Via Adaptive Gradient Magnitude Self-Interpolation;Lingfeng et al.;《IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY》;20130831;第23卷(第8期);第1289-1299页 *
一种基于边缘预测的图像实时放大技术;黄彪 等;《红外与激光工程》;20130630;第42卷(第S1期);第268-273页 *
一种基于细化边缘的图像放大方法;王东鹤;《微电子学与计算机》;20100228;第27卷(第2期);第122-125段 *

Also Published As

Publication number Publication date
CN104021523A (en) 2014-09-03

Similar Documents

Publication Publication Date Title
CN104021523B (en) A method of image super-resolution enlargement based on edge classification
Gu et al. Multiscale natural scene statistical analysis for no-reference quality evaluation of DIBR-synthesized views
Zhang et al. Single-image super-resolution based on rational fractal interpolation
Liu et al. Rank-one prior: Real-time scene recovery
CN103020897B (en) Device, system and method for super-resolution reconstruction of single-frame image based on multiple blocks
US8731337B2 (en) Denoising and artifact removal in image upscaling
WO2021052261A1 (en) Image super-resolution reconstruction method and apparatus for sharpening of label data
CN105488758B (en) A content-aware image scaling method
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN102243711B (en) A Method of Image Super-resolution Reconstruction Based on Neighborhood Nesting
CN107240066A (en) Image super-resolution rebuilding algorithm based on shallow-layer and deep layer convolutional neural networks
CN105931201A (en) Image subjective visual effect enhancing method based on wavelet transformation
CN102800094A (en) Fast color image segmentation method
CN103455991A (en) Multi-focus image fusion method
CN105513033B (en) A super-resolution reconstruction method based on non-local joint sparse representation
CN106934806A (en) A no-reference defocus-blur region segmentation method based on text structure
CN111489333B (en) No-reference night natural image quality evaluation method
CN110418139B (en) A video super-resolution restoration method, apparatus, device and storage medium
CN109255752A (en) Image adaptive compression method, device, terminal and storage medium
CN103236047B (en) A PAN and multispectral image fusion method based on replacement component matching
CN103020905A (en) Sparse-constraint-adaptive NLM (non-local mean) super-resolution reconstruction method aiming at character image
CN104809735B (en) The system and method for image haze evaluation is realized based on Fourier transformation
CN116630198A (en) A multi-scale fusion underwater image enhancement method combined with adaptive gamma correction
CN104951800A (en) Resource exploitation-type area-oriented remote sensing image fusion method
CN103903239B (en) A video super-resolution reconstruction method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant