CN104700361A - Image interpolation method and system based on edge detection - Google Patents

Image interpolation method and system based on edge detection

Info

Publication number
CN104700361A
Authority
CN
China
Prior art keywords
pixel
interpolated
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510152962.3A
Other languages
Chinese (zh)
Other versions
CN104700361B (en)
Inventor
韩睿
汤仁君
郭若杉
罗杨
颜奉丽
汤晓莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jilang Semiconductor Technology Co Ltd
Original Assignee
Institute of Automation of Chinese Academy of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Automation, Chinese Academy of Sciences
Priority to CN201510152962.3A priority Critical patent/CN104700361B/en
Publication of CN104700361A publication Critical patent/CN104700361A/en
Application granted granted Critical
Publication of CN104700361B publication Critical patent/CN104700361B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides an image interpolation method based on edge detection. The method comprises the following steps: the position of a pixel to be interpolated in an original image is determined according to the size of the original image and the size of the interpolated image; the edge direction of the pixel to be interpolated in the original image is determined; if the absolute value of the slope of the edge direction is not smaller than a first threshold, interpolation is performed according to a row intersection method and/or a column intersection method. With the image interpolation method based on edge detection, the edges of the interpolated image are clear and free of jagged artifacts.

Description

Image interpolation method and system based on edge detection
Technical Field
The invention belongs to the field of image processing, and particularly relates to an image interpolation method and system based on edge detection.
Background
Image interpolation can be used for adjusting the resolution of the image, such as enlarging a high-definition image (1920 x 1080) into an ultra-high-definition image (3840 x 2160).
Conventional image interpolation methods, such as bilinear interpolation, bicubic interpolation and polyphase interpolation, essentially interpolate with a low-pass filter; while this yields a smooth interpolated image, high-frequency information is lost and the image edges become blurred and jagged. A more advanced approach currently in use is image interpolation based on edge detection: the edge direction of the pixel to be interpolated is computed through edge detection, and the pixel is interpolated along that edge direction, yielding smooth image edges without the jagging phenomenon. However, existing image interpolation methods based on edge detection suffer from at least one of the following disadvantages: only integer magnification factors are supported; the edge direction used for interpolation is restricted to a few fixed directions rather than being arbitrary; or the number of original pixels used for interpolation is small, so the edges of the resulting image are not sufficiently sharp.
Therefore, an image interpolation method capable of solving the above-described problems is required.
Disclosure of Invention
The invention provides an image interpolation method and system based on edge detection, which aim to produce an interpolated image whose edges are clear and free of jagged artifacts.
The invention provides an image interpolation method based on edge detection, which comprises the following steps:
determining the position of a pixel to be interpolated in the original image according to the sizes of the original image and the interpolated image;
determining the edge direction of a pixel to be interpolated in an original image;
if the absolute value of the slope of the edge direction is not less than a first threshold, performing interpolation according to a row intersection method and/or a column intersection method, wherein the row intersection method and/or the column intersection method include:
calculating the positions of a plurality of row intersections and/or a plurality of column intersections at which the straight line determined by the pixel to be interpolated and the edge direction crosses a plurality of rows and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image;
determining the pixel values of the row intersections and/or the column intersections from the values of the pixels in the neighborhood of the row intersections and/or the column intersections in the original image by one-dimensional interpolation;
and performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolating the original image accordingly.
A second aspect of the present invention provides an image interpolation system based on edge detection, including:
the coordinate calculation unit is used for determining the position of the pixel to be interpolated in the original image according to the size of the original image and the size of the image after interpolation;
the direction calculation unit is used for determining the edge direction of the pixel to be interpolated in the original image;
the intersection calculation unit is used for calculating, when the absolute value of the slope of the edge direction is not less than a first threshold, the positions of a plurality of row intersections and/or a plurality of column intersections at which the straight line determined by the pixel to be interpolated and the edge direction crosses a plurality of rows and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image;
the edge interpolation filtering unit is used for performing interpolation according to the row intersection method and/or the column intersection method; specifically, it determines the pixel values of the row intersections and/or the column intersections from the values of the pixels in their neighborhoods in the original image by one-dimensional interpolation, performs one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolates the original image accordingly.
The invention has the beneficial effects that:
The image interpolation method based on edge detection can perform interpolation at any integer or non-integer scaling ratio and along any edge direction using a large number of original pixels, so that the edges of the interpolated image are clear and the jagging phenomenon is avoided.
Drawings
FIG. 1 is a flowchart of a first embodiment of an image interpolation method based on edge detection according to the present invention;
FIG. 2 is a schematic diagram of a Sobel gradient method in a first embodiment of the image interpolation method based on edge detection according to the present invention;
FIG. 3 is a schematic diagram of a gradient covariance matrix method according to a first embodiment of the edge detection-based image interpolation method of the present invention;
FIG. 4 is a schematic diagram of a line intersection method in a first embodiment of an image interpolation method based on edge detection according to the present invention;
FIG. 5 is a schematic diagram of a column intersection method in a first embodiment of an image interpolation method based on edge detection according to the present invention;
FIG. 6 is a weighting function when a row intersection method and a column intersection method are applied in combination in a first embodiment of the image interpolation method based on edge detection according to the present invention;
fig. 7 is a block diagram of a first embodiment of an image interpolation system based on edge detection according to the present invention.
Detailed Description
Fig. 1 is a flowchart of a first embodiment of an image interpolation method based on edge detection, as shown in fig. 1, the image interpolation method based on edge detection of the present invention includes:
s11, determining the position of the pixel to be interpolated in the original image according to the size, namely the resolution, of the original image and the interpolated image; preferably, the determining the position of the pixel to be interpolated in the original image according to the sizes of the original image and the interpolated image includes calculating the position of the pixel to be interpolated in the original image according to the formula (1):
$$ i_L = i_H \cdot \frac{H_L}{H_H}, \qquad j_L = j_H \cdot \frac{W_L}{W_H} \qquad (1) $$
where $i_L$ and $j_L$ are the row and column coordinates of the position of the pixel to be interpolated in the original image, i.e. the low-resolution image; $i_H$ and $j_H$ are the row and column coordinates of the position of the pixel to be interpolated in the interpolated image, i.e. the high-resolution image; $H_L$ and $W_L$ are the height and width of the original image; and $H_H$ and $W_H$ are the height and width of the interpolated image;
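As a minimal illustration of formula (1), the following Python sketch (NumPy assumed; function and variable names are illustrative and not taken from the patent) maps a high-resolution pixel position back to fractional coordinates in the original image:

```python
import numpy as np

def map_to_source(i_h, j_h, src_h, src_w, dst_h, dst_w):
    # Formula (1): i_L = i_H * H_L / H_H,  j_L = j_H * W_L / W_H
    i_l = i_h * src_h / dst_h
    j_l = j_h * src_w / dst_w
    return i_l, j_l

# Example: enlarging a 1080x1920 image to 2160x3840 (non-integer ratios also work)
i_l, j_l = map_to_source(1001, 2003, 1080, 1920, 2160, 3840)
i, j = int(np.floor(i_l)), int(np.floor(j_l))   # integer neighborhood position
dy, dx = i_l - i, j_l - j                        # fractional offsets used in later formulas
```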
s12, determining the edge direction of the pixel to be interpolated in the original image; preferably, the first method for determining the edge direction of the pixel to be interpolated in the original image may include:
Fig. 2 is a schematic diagram of the Sobel gradient method in the first embodiment of the image interpolation method based on edge detection according to the present invention. As shown in fig. 2, the horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image are calculated with the Sobel gradient operator according to formulas (2) and (3):
$$ g_H(i,j) = I_L(i-1,j+1) + 2\,I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2\,I_L(i,j-1) - I_L(i+1,j-1) \qquad (2) $$
$$ g_V(i,j) = I_L(i-1,j-1) + 2\,I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2\,I_L(i+1,j) - I_L(i+1,j+1) \qquad (3) $$
The horizontal gradient $g_H(i_L,j_L)$ and the vertical gradient $g_V(i_L,j_L)$ of the pixel to be interpolated are then determined by bilinear interpolation, i.e. according to formulas (4) and (5), from the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood; the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely $(g_V(i_L,j_L),\,-g_H(i_L,j_L))$:
$$ g_H(i_L,j_L) = (1-dx)(1-dy)\,g_H(i,j) + dx\,(1-dy)\,g_H(i,j+1) + (1-dx)\,dy\,g_H(i+1,j) + dx\,dy\,g_H(i+1,j+1) \qquad (4) $$
$$ g_V(i_L,j_L) = (1-dx)(1-dy)\,g_V(i,j) + dx\,(1-dy)\,g_V(i,j+1) + (1-dx)\,dy\,g_V(i+1,j) + dx\,dy\,g_V(i+1,j+1) \qquad (5) $$
where $I_L(i-1,j+1)$, $I_L(i,j+1)$, $I_L(i+1,j+1)$, $I_L(i-1,j-1)$, $I_L(i,j-1)$, $I_L(i+1,j-1)$, $I_L(i-1,j)$ and $I_L(i+1,j)$ denote the pixel values of the eight pixels in the neighborhood of the pixel to be interpolated in the original image; here $(i,j)$ is the integer pixel position with $i = [i_L]$ and $j = [j_L]$, and $dy = i_L - i$ and $dx = j_L - j$ are the fractional offsets of the pixel to be interpolated, as implied by the bilinear weights of formulas (4) and (5);
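A sketch of this Sobel-plus-bilinear edge-direction estimate, written under the assumption of a single-channel float image and a pixel away from the borders (names are illustrative):

```python
import numpy as np

def sobel_gradients(img):
    """Horizontal and vertical Sobel gradients per formulas (2) and (3); borders are left at zero."""
    g_h = np.zeros_like(img, dtype=float)
    g_v = np.zeros_like(img, dtype=float)
    g_h[1:-1, 1:-1] = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
                       - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
    g_v[1:-1, 1:-1] = (img[:-2, :-2] + 2 * img[:-2, 1:-1] + img[:-2, 2:]
                       - img[2:, :-2] - 2 * img[2:, 1:-1] - img[2:, 2:])
    return g_h, g_v

def edge_direction_at(g_h, g_v, i_l, j_l):
    """Bilinearly interpolate the gradients at (i_l, j_l) per formulas (4)-(5) and
    return the edge direction (g_V, -g_H), i.e. perpendicular to the gradient."""
    i, j = int(np.floor(i_l)), int(np.floor(j_l))
    dy, dx = i_l - i, j_l - j
    gh = ((1 - dx) * (1 - dy) * g_h[i, j] + dx * (1 - dy) * g_h[i, j + 1]
          + (1 - dx) * dy * g_h[i + 1, j] + dx * dy * g_h[i + 1, j + 1])
    gv = ((1 - dx) * (1 - dy) * g_v[i, j] + dx * (1 - dy) * g_v[i, j + 1]
          + (1 - dx) * dy * g_v[i + 1, j] + dx * dy * g_v[i + 1, j + 1])
    return gv, -gh  # interpreted here as (v_x, v_y)
```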
next, fig. 3 is a schematic diagram of a gradient covariance matrix method in an image interpolation method based on edge detection according to a first embodiment of the present invention, and as shown in fig. 3, the second method for determining an edge direction of a pixel to be interpolated in an original image may include:
A window $\Omega$ of arbitrary size $H \times W$ is selected in the neighborhood of the pixel to be interpolated; in this example $H = 4$ and $W = 6$. The horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of all pixels within the window are determined, so as to determine the covariance matrix $M$ of all the pixels in the window in the neighborhood of the pixel to be interpolated:
$$ M = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} \bigl(g_H(i,j)\bigr)^2 & \sum_{(i,j)\in\Omega} g_H(i,j)\,g_V(i,j) \\ \sum_{(i,j)\in\Omega} g_H(i,j)\,g_V(i,j) & \sum_{(i,j)\in\Omega} \bigl(g_V(i,j)\bigr)^2 \end{bmatrix} \qquad (6) $$
and calculating the eigenvalue and the eigenvector of the covariance matrix, and then determining the eigenvector v corresponding to the smaller eigenvalue as the edge direction, namely:
$$ v = \begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} 2B \\ C - A - \sqrt{(C-A)^2 + 4B^2} \end{bmatrix} \qquad (7) $$
where $v = [v_x\;\; v_y]^T$ denotes the eigenvector corresponding to the smaller eigenvalue of the covariance matrix; $v_x$ denotes the horizontal component of the edge direction and $v_y$ denotes the vertical component of the edge direction.
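The covariance-matrix variant can be sketched as follows; the closed form of formula (7) is used directly rather than a general eigen-decomposition (names are illustrative):

```python
import numpy as np

def edge_direction_from_window(g_h_win, g_v_win):
    """Edge direction from the gradient covariance matrix of a window, formulas (6)-(7).
    g_h_win and g_v_win hold the horizontal/vertical gradients of all pixels in the window."""
    a = float(np.sum(g_h_win ** 2))          # A
    b = float(np.sum(g_h_win * g_v_win))     # B
    c = float(np.sum(g_v_win ** 2))          # C
    # Eigenvector of [[A, B], [B, C]] belonging to the smaller eigenvalue, formula (7)
    v_x = 2.0 * b
    v_y = (c - a) - np.sqrt((c - a) ** 2 + 4.0 * b ** 2)
    return v_x, v_y
```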
In addition, the third method for determining the edge direction of the pixel to be interpolated in the original image may include the steps described in the second method and further include:
refining the covariance matrix according to equation (8) to obtain a refined covariance matrix M':
$$ M' = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} w(i,j)\bigl(g_H(i,j)\bigr)^2 & \sum_{(i,j)\in\Omega} w(i,j)\,g_H(i,j)\,g_V(i,j) \\ \sum_{(i,j)\in\Omega} w(i,j)\,g_H(i,j)\,g_V(i,j) & \sum_{(i,j)\in\Omega} w(i,j)\bigl(g_V(i,j)\bigr)^2 \end{bmatrix} \qquad (8) $$
where the weights $w(i,j)$ follow a bilinear interpolation scheme, namely:
w(i-1,j-2) = (1-dx)*(1-dy), w(i-1,j-1) = (1-dy), w(i-1,j) = (1-dy), w(i-1,j+1) = (1-dy), w(i-1,j+2) = (1-dy), w(i-1,j+3) = dx*(1-dy);
w(i,j-2) = (1-dx), w(i,j-1) = 1, w(i,j) = 1, w(i,j+1) = 1, w(i,j+2) = 1, w(i,j+3) = dx;
w(i+1,j-2) = (1-dx), w(i+1,j-1) = 1, w(i+1,j) = 1, w(i+1,j+1) = 1, w(i+1,j+2) = 1, w(i+1,j+3) = dx;
w(i+2,j-2) = (1-dx)*dy, w(i+2,j-1) = dy, w(i+2,j) = dy, w(i+2,j+1) = dy, w(i+2,j+2) = dy, w(i+2,j+3) = dx*dy.
The values of w(i,j) can also be represented as in Table 1:
(1-dx)*(1-dy)   (1-dy)   (1-dy)   (1-dy)   (1-dy)   dx*(1-dy)
(1-dx)          1        1        1        1        dx
(1-dx)          1        1        1        1        dx
(1-dx)*dy       dy       dy       dy       dy       dx*dy
TABLE 1
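The weight table above can be generated from the fractional offsets dx and dy as the outer product of a 4-entry vertical profile and a 6-entry horizontal profile; a small sketch (illustrative, matching Table 1):

```python
import numpy as np

def refinement_weights(dx, dy):
    """4x6 weight table w(i, j) of Table 1, used to refine the covariance matrix in formula (8)."""
    col = np.array([1 - dx, 1.0, 1.0, 1.0, 1.0, dx])  # horizontal profile over columns j-2 .. j+3
    row = np.array([1 - dy, 1.0, 1.0, dy])            # vertical profile over rows i-1 .. i+2
    return np.outer(row, col)                          # w[r, c] = row[r] * col[c]
```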
S13, if the absolute value of the slope of the edge direction is not less than the first threshold, interpolation is performed according to the row intersection method and/or the column intersection method, i.e. the edge interpolation method, where the row intersection method and/or the column intersection method preferably include:
S131, judging the absolute value of the slope of the edge direction, and if it is smaller than a second threshold T1, performing interpolation according to the row intersection method;
S132, if it is not less than a third threshold T2, performing interpolation according to the column intersection method;
S133, if it is not less than the second threshold T1 but less than the third threshold T2, performing interpolation according to the row intersection method and the column intersection method simultaneously, which includes the following steps:
s1331, calculating positions of a plurality of line intersections and/or a plurality of column intersections in the neighborhood of the pixel to be interpolated in the original image, where the line intersections and/or the column intersections are intercepted by the pixel to be interpolated and the straight line determined by the edge direction, includes:
calculating the positions of a plurality of line intersections and a plurality of column intersections of a plurality of lines and a plurality of columns in the neighborhood of the pixel to be interpolated in the original image, wherein the line intersections and the column intersections are intercepted by the pixel to be interpolated and the straight line determined by the edge direction;
Preferably, fig. 4 is a schematic diagram of the row intersection method in the first embodiment of the image interpolation method based on edge detection; as shown in fig. 4, the row intersection method uses the intersections with the four rows above and below the pixel to be interpolated;
correspondingly, the calculating of the positions of the intersection points of the lines intercepted by the straight line determined by the pixel to be interpolated and the edge direction in the neighborhood of the pixel to be interpolated in the original image comprises calculating the positions of four line intersection points according to the formulas (9), (10), (11) and (12):
$$ P_0: \Bigl(i-1,\; j_L + (1+dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (9) $$
$$ P_1: \Bigl(i,\; j_L + dy\,\tfrac{v_x}{v_y}\Bigr) \qquad (10) $$
$$ P_2: \Bigl(i+1,\; j_L - (1-dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (11) $$
$$ P_3: \Bigl(i+2,\; j_L - (2-dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (12) $$
Similarly, fig. 5 is a schematic diagram of the column intersection method in the first embodiment of the image interpolation method based on edge detection; as shown in fig. 5, the column intersection method uses the intersections with the four columns to the left and right of the pixel to be interpolated. Correspondingly, calculating the positions of the column intersections at which the straight line determined by the pixel to be interpolated and the edge direction crosses the columns in the neighborhood of the pixel to be interpolated in the original image includes calculating the positions of four column intersections according to formulas (13), (14), (15) and (16), respectively:
$$ P_0: \Bigl(i_L + (1+dx)\,\tfrac{v_y}{v_x},\; j-1\Bigr) \qquad (13) $$
$$ P_1: \Bigl(i_L + dx\,\tfrac{v_y}{v_x},\; j\Bigr) \qquad (14) $$
$$ P_2: \Bigl(i_L - (1-dx)\,\tfrac{v_y}{v_x},\; j+1\Bigr) \qquad (15) $$
$$ P_3: \Bigl(i_L - (2-dx)\,\tfrac{v_y}{v_x},\; j+2\Bigr) \qquad (16) $$
s1332, determining a pixel value of the row intersection and/or the column intersection according to a value of a pixel in a neighborhood of the row intersection and/or the column intersection in the original image by using a one-dimensional interpolation method, including:
determining pixel values of said row and column intersections from values of pixels in the neighborhood of said row and column intersections in the original image using one-dimensional interpolation, preferably including determining pixel values of four row intersections according to equations (17), (18), (19), and (20):
$$ I_{P_0} = \Bigl(1 - \bigl(j_L + (1+dy)\tfrac{v_x}{v_y} - \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i-1,\ \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L + (1+dy)\tfrac{v_x}{v_y} - \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i-1,\ \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (17) $$
$$ I_{P_1} = \Bigl(1 - \bigl(j_L + dy\,\tfrac{v_x}{v_y} - \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i,\ \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L + dy\,\tfrac{v_x}{v_y} - \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i,\ \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (18) $$
$$ I_{P_2} = \Bigl(1 - \bigl(j_L - (1-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i+1,\ \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L - (1-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i+1,\ \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (19) $$
$$ I_{P_3} = \Bigl(1 - \bigl(j_L - (2-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i+2,\ \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L - (2-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i+2,\ \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (20) $$
similarly, the column intersection method is implemented by using intersections of four left and right columns of pixels to be interpolated, and correspondingly, taking the calculation of the pixel value of the first column intersection as an example, the determining, by using the one-dimensional interpolation method, the pixel value of the column intersection according to the value of the pixel in the neighborhood of the column intersection in the original image includes determining the pixel value of the column intersection according to equation (21):
$$ I_{P_0} = \Bigl(1 - \bigl(i_L + (1+dx)\tfrac{v_y}{v_x} - \bigl[i_L + (1+dx)\tfrac{v_y}{v_x}\bigr]\bigr)\Bigr)\, I_L\bigl(\bigl[i_L + (1+dx)\tfrac{v_y}{v_x}\bigr],\ j-1\bigr) + \bigl(i_L + (1+dx)\tfrac{v_y}{v_x} - \bigl[i_L + (1+dx)\tfrac{v_y}{v_x}\bigr]\bigr)\, I_L\bigl(\bigl[i_L + (1+dx)\tfrac{v_y}{v_x}\bigr]+1,\ j-1\bigr) \qquad (21) $$
the pixel values of the other three column intersections are calculated analogously and are not repeated here;
performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated comprises filtering according to formula (22):
$$ I_H(i_H,j_H) = f_0\,I_{P_0} + f_1\,I_{P_1} + f_2\,I_{P_2} + f_3\,I_{P_3} \qquad (22) $$
where $[\,\cdot\,]$ denotes rounding down; $(i_L, j_L)$ are the coordinates of the position of the pixel to be interpolated in the original image; $i$ and $j$ denote the row and column indices, respectively; $(v_x, v_y)$ denotes the edge direction; $P_0$, $P_1$, $P_2$, $P_3$ denote the four row intersections; $I_{P_0}$, $I_{P_1}$, $I_{P_2}$, $I_{P_3}$ denote the pixel values of the four row intersections; and $[f_0, f_1, f_2, f_3]$ are the coefficients of the one-dimensional filter, e.g. $[1, 3, 3, 1]$;
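Putting formulas (9)-(12), (17)-(20) and (22) together, a minimal sketch of the row intersection method follows. It assumes the pixel lies far enough from the image borders, and the filter coefficients are normalized here so the output stays in the original value range, which the text above leaves implicit:

```python
import numpy as np

def row_intersection_interpolate(img, i_l, j_l, v_x, v_y, f=(1.0, 3.0, 3.0, 1.0)):
    """Interpolate one pixel with the row intersection method."""
    i = int(np.floor(i_l))
    dy = i_l - i
    # Column coordinates of the four row intersections P0..P3, formulas (9)-(12)
    cols = [j_l + (1 + dy) * v_x / v_y,
            j_l + dy * v_x / v_y,
            j_l - (1 - dy) * v_x / v_y,
            j_l - (2 - dy) * v_x / v_y]
    rows = [i - 1, i, i + 1, i + 2]
    # Pixel value of each intersection by 1-D linear interpolation within its row, formulas (17)-(20)
    vals = []
    for r, c in zip(rows, cols):
        c0 = int(np.floor(c))
        frac = c - c0
        vals.append((1.0 - frac) * img[r, c0] + frac * img[r, c0 + 1])
    # 1-D filtering of the four intersection values, formula (22)
    f = np.asarray(f, dtype=float)
    return float(np.dot(f / f.sum(), vals))
```

The column intersection method is obtained by exchanging the roles of rows and columns, i.e. by using formulas (13)-(16) and (21) instead.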
It should be noted that, when calculating the intersections of the edge direction of the pixel to be interpolated with the rows in its neighborhood, if the edge direction is horizontal there are no intersections with the rows in the neighborhood of the pixel to be interpolated; and when the absolute value of the slope of the edge direction is smaller than the set threshold $k_{T1}$, the intersections with the rows in the neighborhood are far from the pixel to be interpolated and their correlation with it is relatively small. In both of these cases a non-edge image interpolation method is therefore used: the two-dimensional interpolation is decomposed into one-dimensional interpolation in the horizontal and vertical directions performed in sequence, and the order of the horizontal and vertical interpolation can be exchanged. Similarly, when calculating the intersections of the edge direction of the pixel to be interpolated with the columns in its neighborhood, if the edge direction is vertical there are no intersections with the columns in the neighborhood of the pixel to be interpolated; and when the absolute value of the slope of the edge direction is larger than the set threshold $k_{T2}$, the intersections with the columns in the neighborhood are far from the pixel to be interpolated and their correlation with it is relatively small. In both of these cases a non-edge image interpolation method is likewise used, decomposing the two-dimensional interpolation into sequential horizontal and vertical one-dimensional interpolation, where the order of the horizontal and vertical interpolation can be exchanged.
S1333, performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolating the original image, including:
performing one-dimensional filtering separately on the determined pixel values of the row intersections and of the column intersections in the neighborhood of the pixel to be interpolated to obtain the interpolation result $I_{HR}(i_H,j_H)$ of row-intersection filtering and the interpolation result $I_{HC}(i_H,j_H)$ of column-intersection filtering; fig. 6 shows the weighting function used when the row intersection method and the column intersection method are applied in combination in the first embodiment, i.e. the weights for combining the two methods are generated by the curve shown in fig. 6, and the value $I_H(i_H,j_H)$ of the pixel to be interpolated is determined by the weighted combination of formula (23):
$$ I_H(i_H,j_H) = w \cdot I_{HR}(i_H,j_H) + (1-w) \cdot I_{HC}(i_H,j_H) \qquad (23) $$
Then, interpolating the original image according to the value of the pixel to be interpolated;
where $(i_H, j_H)$ denotes the coordinates of the position of the pixel to be interpolated, and $w$ denotes the weight of the row-intersection filtering result;
It should be noted that at low edge angles interpolation is performed with the column intersection method, while in other directions interpolation is performed with the row intersection method; T1 and T2 are preset thresholds, and the way in which the row intersection method and the column intersection method are combined is not limited to the form described above;
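A sketch of the weighted combination of formula (23); the actual weight curve of FIG. 6 is not reproduced here, so a linear ramp between the thresholds T1 and T2 is assumed purely for illustration:

```python
import numpy as np

def fuse_row_column(i_hr, i_hc, slope_abs, t1, t2):
    """Blend the row-intersection and column-intersection results per formula (23).
    The weight w is an assumed stand-in for the curve of FIG. 6: w = 1 favours the
    row-intersection result, w = 0 favours the column-intersection result."""
    w = float(np.clip((t2 - slope_abs) / (t2 - t1), 0.0, 1.0))
    return w * i_hr + (1.0 - w) * i_hc
```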
preferably, the image interpolation method based on edge detection further includes:
s14, if the absolute value of the slope of the edge direction is smaller than a set threshold, carrying out interpolation according to a non-edge interpolation method; that is, the horizontal component v of the edge direction of the pixel to be interpolatedxAnd a vertical component vyIf the pixel to be interpolated has no direction, interpolating by adopting a non-edge image interpolation method, decomposing the two-dimensional image interpolation into a horizontal one-dimensional direction and a vertical one-dimensional direction, and sequentially interpolating in the horizontal direction and the vertical direction, wherein the sequence of the interpolation in the horizontal direction and the interpolation in the vertical direction can be exchanged.
S15, fusing the interpolation result obtained by the row intersection method and/or the column intersection method with the interpolation result obtained by the non-edge interpolation method to obtain the interpolated image.
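The non-edge fallback referred to in step S14 decomposes the 2-D interpolation into two 1-D passes; a minimal sketch using plain linear kernels (other 1-D kernels may be substituted, and the pass order may be exchanged; names are illustrative):

```python
import numpy as np

def separable_resize(img, dst_h, dst_w):
    """Non-edge interpolation: horizontal 1-D pass followed by a vertical 1-D pass."""
    src_h, src_w = img.shape
    # Horizontal pass: interpolate every row at the target column positions (formula (1))
    j_l = np.arange(dst_w) * src_w / dst_w
    j0 = np.clip(np.floor(j_l).astype(int), 0, src_w - 2)
    dx = j_l - j0
    tmp = (1.0 - dx) * img[:, j0] + dx * img[:, j0 + 1]
    # Vertical pass: interpolate every column of the intermediate image at the target rows
    i_l = np.arange(dst_h) * src_h / dst_h
    i0 = np.clip(np.floor(i_l).astype(int), 0, src_h - 2)
    dy = (i_l - i0)[:, None]
    return (1.0 - dy) * tmp[i0, :] + dy * tmp[i0 + 1, :]
```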
The image interpolation method based on edge detection can perform interpolation at any integer or non-integer scaling ratio and along any edge direction using a large number of original pixels, so that the edges of the interpolated image are clear and the jagging phenomenon is avoided.
Fig. 7 is a block diagram of a first embodiment of an image interpolation system based on edge detection according to the present invention, and as shown in fig. 7, the image interpolation system based on edge detection according to the present invention includes:
the coordinate calculation unit is used for determining the position of the pixel to be interpolated in the original image according to the size of the original image and the size of the image after interpolation;
the direction calculation unit is used for determining the edge direction of the pixel to be interpolated in the original image;
the intersection calculation unit is used for calculating, when the absolute value of the slope of the edge direction is not less than a first threshold, the positions of a plurality of row intersections and/or a plurality of column intersections at which the straight line determined by the pixel to be interpolated and the edge direction crosses a plurality of rows and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image;
the edge interpolation filtering unit is used for performing interpolation according to the row intersection method and/or the column intersection method; specifically, it determines the pixel values of the row intersections and/or the column intersections from the values of the pixels in their neighborhoods in the original image by one-dimensional interpolation, performs one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolates the original image accordingly.
Preferably, the image interpolation system based on edge detection further includes:
a non-edge interpolation unit for performing interpolation according to a non-edge interpolation method when an absolute value of a slope in an edge direction is smaller than a set threshold;
and the fusion unit is used for fusing the result obtained by the row intersection method and/or the column intersection method with the result obtained by the non-edge interpolation method so as to obtain the interpolated image.
Preferably, the direction calculation unit is specifically configured to: calculate the horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image according to formulas (2) and (3), respectively:
$$ g_H(i,j) = I_L(i-1,j+1) + 2\,I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2\,I_L(i,j-1) - I_L(i+1,j-1) \qquad (2) $$
$$ g_V(i,j) = I_L(i-1,j-1) + 2\,I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2\,I_L(i+1,j) - I_L(i+1,j+1) \qquad (3) $$
and determine the horizontal gradient $g_H(i_L,j_L)$ and the vertical gradient $g_V(i_L,j_L)$ of the pixel to be interpolated by bilinear interpolation, i.e. according to formulas (4) and (5), from the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood; the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely $(g_V(i_L,j_L),\,-g_H(i_L,j_L))$:
$$ g_H(i_L,j_L) = (1-dx)(1-dy)\,g_H(i,j) + dx\,(1-dy)\,g_H(i,j+1) + (1-dx)\,dy\,g_H(i+1,j) + dx\,dy\,g_H(i+1,j+1) \qquad (4) $$
$$ g_V(i_L,j_L) = (1-dx)(1-dy)\,g_V(i,j) + dx\,(1-dy)\,g_V(i,j+1) + (1-dx)\,dy\,g_V(i+1,j) + dx\,dy\,g_V(i+1,j+1) \qquad (5) $$
where $I_L(i-1,j+1)$, $I_L(i,j+1)$, $I_L(i+1,j+1)$, $I_L(i-1,j-1)$, $I_L(i,j-1)$, $I_L(i+1,j-1)$, $I_L(i-1,j)$ and $I_L(i+1,j)$ denote the pixel values of the eight pixels in the neighborhood of the pixel to be interpolated in the original image.
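For orientation, the units of FIG. 7 can be read as the following structural sketch. It assumes the helper functions from the earlier sketches (map_to_source, edge_direction_at, row_intersection_interpolate, separable_resize) are in scope, folds the intersection calculation into the edge interpolation filtering step, and is not a complete implementation:

```python
class EdgeDetectionInterpolationSystem:
    """Structural sketch of the system of FIG. 7; one method per unit."""

    def __init__(self, t1, t2, filt=(1.0, 3.0, 3.0, 1.0)):
        self.t1, self.t2, self.filt = t1, t2, filt

    def coordinate_calculation(self, i_h, j_h, src_shape, dst_shape):
        return map_to_source(i_h, j_h, src_shape[0], src_shape[1],
                             dst_shape[0], dst_shape[1])            # formula (1)

    def direction_calculation(self, g_h, g_v, i_l, j_l):
        return edge_direction_at(g_h, g_v, i_l, j_l)                # formulas (2)-(5)

    def edge_interpolation_filtering(self, img, i_l, j_l, v_x, v_y):
        return row_intersection_interpolate(img, i_l, j_l, v_x, v_y, self.filt)

    def non_edge_interpolation(self, img, dst_h, dst_w):
        return separable_resize(img, dst_h, dst_w)

    def fusion(self, edge_result, non_edge_result, w):
        return w * edge_result + (1.0 - w) * non_edge_result        # blend of the two results
```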
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An image interpolation method based on edge detection is characterized by comprising the following steps:
determining the position of a pixel to be interpolated in the original image according to the sizes of the original image and the interpolated image;
determining the edge direction of a pixel to be interpolated in an original image;
if the absolute value of the slope of the edge direction is not less than a first threshold, performing interpolation according to a row intersection method and/or a column intersection method, wherein the row intersection method and/or the column intersection method include:
calculating the positions of a plurality of row intersections and/or a plurality of column intersections at which the straight line determined by the pixel to be interpolated and the edge direction crosses a plurality of rows and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image;
determining the pixel values of the row intersections and/or the column intersections from the values of the pixels in the neighborhood of the row intersections and/or the column intersections in the original image by one-dimensional interpolation;
and performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolating the original image accordingly.
2. The image interpolation method based on edge detection according to claim 1, wherein the row intersection method and/or the column intersection method comprises:
judging the absolute value of the slope of the edge direction, and if the absolute value is smaller than a second threshold value T1, performing interpolation according to a line intersection method;
if the current value is not less than the third threshold value T2, performing interpolation according to a column intersection method;
if the value is not less than the second threshold value T1 and less than the third threshold value T2, then the interpolation is performed according to the row intersection method and the column intersection method, which includes:
calculating the positions of a plurality of line intersections and a plurality of column intersections of a plurality of lines and a plurality of columns in the neighborhood of the pixel to be interpolated in the original image, wherein the line intersections and the column intersections are intercepted by the pixel to be interpolated and the straight line determined by the edge direction;
determining the pixel values of the row intersection points and the column intersection points according to the values of the pixels in the neighborhood of the row intersection points and the column intersection points in the original image by using a one-dimensional interpolation method;
respectively performing one-dimensional filtering on the determined pixel values of the row intersections and of the column intersections in the neighborhood of the pixel to be interpolated to obtain the interpolation result $I_{HR}(i_H,j_H)$ of row-intersection filtering and the interpolation result $I_{HC}(i_H,j_H)$ of column-intersection filtering, and determining the value $I_H(i_H,j_H)$ of the pixel to be interpolated by weighting according to formula (23):
$$ I_H(i_H,j_H) = w \cdot I_{HR}(i_H,j_H) + (1-w) \cdot I_{HC}(i_H,j_H) \qquad (23) $$
Then, interpolating the original image according to the value of the pixel to be interpolated;
where $(i_H, j_H)$ denotes the coordinates of the position of the pixel to be interpolated, and $w$ denotes the weight of the row-intersection filtering result.
3. The method of image interpolation based on edge detection according to claim 1, further comprising:
if the absolute value of the slope in the edge direction is smaller than a set threshold, carrying out interpolation according to a non-edge interpolation method;
correspondingly, after the one-dimensional filtering of the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated and the interpolation of the original image, and after the interpolation according to the non-edge interpolation method, the method further includes:
and fusing the interpolation result obtained by the line intersection method and/or the column intersection method with the interpolation result obtained by the non-edge interpolation method to obtain an interpolated image.
4. The image interpolation method based on edge detection according to claim 1, wherein the row intersection method uses the intersections with the four rows above and below the pixel to be interpolated;
correspondingly, the calculating of the positions of the intersection points of the lines intercepted by the straight line determined by the pixel to be interpolated and the edge direction in the neighborhood of the pixel to be interpolated in the original image comprises calculating the positions of four line intersection points according to the formulas (9), (10), (11) and (12):
$$ P_0: \Bigl(i-1,\; j_L + (1+dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (9) $$
$$ P_1: \Bigl(i,\; j_L + dy\,\tfrac{v_x}{v_y}\Bigr) \qquad (10) $$
$$ P_2: \Bigl(i+1,\; j_L - (1-dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (11) $$
$$ P_3: \Bigl(i+2,\; j_L - (2-dy)\,\tfrac{v_x}{v_y}\Bigr) \qquad (12) $$
determining pixel values for the line intersections from values of pixels in the neighborhood of the line intersections in the original image using one-dimensional interpolation includes determining pixel values for four line intersections according to equations (17), (18), (19), and (20):
$$ I_{P_0} = \Bigl(1 - \bigl(j_L + (1+dy)\tfrac{v_x}{v_y} - \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i-1,\ \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L + (1+dy)\tfrac{v_x}{v_y} - \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i-1,\ \bigl[j_L + (1+dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (17) $$
$$ I_{P_1} = \Bigl(1 - \bigl(j_L + dy\,\tfrac{v_x}{v_y} - \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i,\ \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L + dy\,\tfrac{v_x}{v_y} - \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i,\ \bigl[j_L + dy\,\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (18) $$
$$ I_{P_2} = \Bigl(1 - \bigl(j_L - (1-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i+1,\ \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L - (1-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i+1,\ \bigl[j_L - (1-dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (19) $$
$$ I_{P_3} = \Bigl(1 - \bigl(j_L - (2-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\Bigr)\, I_L\bigl(i+2,\ \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr) + \bigl(j_L - (2-dy)\tfrac{v_x}{v_y} - \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]\bigr)\, I_L\bigl(i+2,\ \bigl[j_L - (2-dy)\tfrac{v_x}{v_y}\bigr]+1\bigr) \qquad (20) $$
performing one-dimensional filtering on the determined pixel values of the row intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated comprises filtering according to formula (22):
$$ I_H(i_H,j_H) = f_0\,I_{P_0} + f_1\,I_{P_1} + f_2\,I_{P_2} + f_3\,I_{P_3} \qquad (22) $$
where $[\,\cdot\,]$ denotes rounding down; $(i_L, j_L)$ are the coordinates of the position of the pixel to be interpolated in the original image; $i$ and $j$ denote the row and column indices, respectively; $(v_x, v_y)$ denotes the edge direction, with $v_x$ the horizontal component and $v_y$ the vertical component of the edge direction; $P_0$, $P_1$, $P_2$, $P_3$ denote the four row intersections; $I_{P_0}$, $I_{P_1}$, $I_{P_2}$, $I_{P_3}$ denote the pixel values of the four row intersections; and $[f_0, f_1, f_2, f_3]$ are the coefficients of a one-dimensional filter.
5. The method of claim 1, wherein the determining the edge direction of the pixel to be interpolated in the original image comprises:
calculating the horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image according to formulas (2) and (3), respectively:
$$ g_H(i,j) = I_L(i-1,j+1) + 2\,I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2\,I_L(i,j-1) - I_L(i+1,j-1) \qquad (2) $$
$$ g_V(i,j) = I_L(i-1,j-1) + 2\,I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2\,I_L(i+1,j) - I_L(i+1,j+1) \qquad (3) $$
determining the horizontal gradient $g_H(i_L,j_L)$ and the vertical gradient $g_V(i_L,j_L)$ of the pixel to be interpolated by bilinear interpolation, i.e. according to formulas (4) and (5), from the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood; the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely $(g_V(i_L,j_L),\,-g_H(i_L,j_L))$:
$$ g_H(i_L,j_L) = (1-dx)(1-dy)\,g_H(i,j) + dx\,(1-dy)\,g_H(i,j+1) + (1-dx)\,dy\,g_H(i+1,j) + dx\,dy\,g_H(i+1,j+1) \qquad (4) $$
$$ g_V(i_L,j_L) = (1-dx)(1-dy)\,g_V(i,j) + dx\,(1-dy)\,g_V(i,j+1) + (1-dx)\,dy\,g_V(i+1,j) + dx\,dy\,g_V(i+1,j+1) \qquad (5) $$
where $I_L(i-1,j+1)$, $I_L(i,j+1)$, $I_L(i+1,j+1)$, $I_L(i-1,j-1)$, $I_L(i,j-1)$, $I_L(i+1,j-1)$, $I_L(i-1,j)$ and $I_L(i+1,j)$ denote the pixel values of the eight pixels in the neighborhood of the pixel to be interpolated in the original image.
6. The method of claim 1, wherein the determining the edge direction of the pixel to be interpolated in the original image comprises:
selecting a window of arbitrary size in the neighborhood of the pixel to be interpolated, and determining the horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of all pixels within the window, so as to determine the covariance matrix $M$ of all the pixels in the window in the neighborhood of the pixel to be interpolated:
$$ M = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} \bigl(g_H(i,j)\bigr)^2 & \sum_{(i,j)\in\Omega} g_H(i,j)\,g_V(i,j) \\ \sum_{(i,j)\in\Omega} g_H(i,j)\,g_V(i,j) & \sum_{(i,j)\in\Omega} \bigl(g_V(i,j)\bigr)^2 \end{bmatrix} \qquad (6) $$
and calculating the eigenvalue and the eigenvector of the covariance matrix, and then determining the eigenvector v corresponding to the smaller eigenvalue as the edge direction, namely:
$$ v = \begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} 2B \\ C - A - \sqrt{(C-A)^2 + 4B^2} \end{bmatrix} \qquad (7) $$
where $v = [v_x\;\; v_y]^T$ denotes the eigenvector corresponding to the smaller eigenvalue of the covariance matrix; $v_x$ denotes the horizontal component of the edge direction and $v_y$ denotes the vertical component of the edge direction.
7. The method of claim 6, wherein the determining the edge direction of the pixel to be interpolated in the original image further comprises:
refining the covariance matrix according to formula (8) to obtain a refined covariance matrix M':
$$ M' = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} w(i,j)\bigl(g_H(i,j)\bigr)^2 & \sum_{(i,j)\in\Omega} w(i,j)\,g_H(i,j)\,g_V(i,j) \\ \sum_{(i,j)\in\Omega} w(i,j)\,g_H(i,j)\,g_V(i,j) & \sum_{(i,j)\in\Omega} w(i,j)\bigl(g_V(i,j)\bigr)^2 \end{bmatrix} \qquad (8) $$
where the weights $w(i,j)$ follow a bilinear interpolation scheme, namely:
w(i-1,j-2) = (1-dx)*(1-dy), w(i-1,j-1) = (1-dy), w(i-1,j) = (1-dy), w(i-1,j+1) = (1-dy), w(i-1,j+2) = (1-dy), w(i-1,j+3) = dx*(1-dy);
w(i,j-2) = (1-dx), w(i,j-1) = 1, w(i,j) = 1, w(i,j+1) = 1, w(i,j+2) = 1, w(i,j+3) = dx;
w(i+1,j-2) = (1-dx), w(i+1,j-1) = 1, w(i+1,j) = 1, w(i+1,j+1) = 1, w(i+1,j+2) = 1, w(i+1,j+3) = dx;
w(i+2,j-2) = (1-dx)*dy, w(i+2,j-1) = dy, w(i+2,j) = dy, w(i+2,j+1) = dy, w(i+2,j+2) = dy, w(i+2,j+3) = dx*dy.
8. an image interpolation system based on edge detection, comprising:
the coordinate calculation unit is used for determining the position of the pixel to be interpolated in the original image according to the size of the original image and the size of the image after interpolation;
the direction calculation unit is used for determining the edge direction of the pixel to be interpolated in the original image;
the intersection point calculation unit is used for calculating the positions of a plurality of line intersection points and/or a plurality of column intersection points intercepted by straight lines determined by the pixel to be interpolated and the edge direction in a plurality of lines and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image when the slope of the edge direction is not smaller than a first threshold value;
the edge interpolation filtering unit is used for performing interpolation according to the row intersection method and/or the column intersection method; specifically, it determines the pixel values of the row intersections and/or the column intersections from the values of the pixels in their neighborhoods in the original image by one-dimensional interpolation, performs one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolates the original image accordingly.
9. The edge detection-based image interpolation system of claim 8, further comprising:
a non-edge interpolation unit for performing interpolation according to a non-edge interpolation method when an absolute value of a slope in an edge direction is smaller than a set threshold;
and the fusion unit is used for fusing the result obtained by the row intersection method and/or the column intersection method with the result obtained by the non-edge interpolation method so as to obtain the interpolated image.
10. The edge detection-based image interpolation system of claim 8, wherein the direction calculation unit is specifically configured to: calculate the horizontal gradient $g_H(i,j)$ and the vertical gradient $g_V(i,j)$ of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image according to formulas (2) and (3), respectively:
$$ g_H(i,j) = I_L(i-1,j+1) + 2\,I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2\,I_L(i,j-1) - I_L(i+1,j-1) \qquad (2) $$
$$ g_V(i,j) = I_L(i-1,j-1) + 2\,I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2\,I_L(i+1,j) - I_L(i+1,j+1) \qquad (3) $$
and determine the horizontal gradient $g_H(i_L,j_L)$ and the vertical gradient $g_V(i_L,j_L)$ of the pixel to be interpolated by bilinear interpolation, i.e. according to formulas (4) and (5), from the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood; the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely $(g_V(i_L,j_L),\,-g_H(i_L,j_L))$:
$$ g_H(i_L,j_L) = (1-dx)(1-dy)\,g_H(i,j) + dx\,(1-dy)\,g_H(i,j+1) + (1-dx)\,dy\,g_H(i+1,j) + dx\,dy\,g_H(i+1,j+1) \qquad (4) $$
$$ g_V(i_L,j_L) = (1-dx)(1-dy)\,g_V(i,j) + dx\,(1-dy)\,g_V(i,j+1) + (1-dx)\,dy\,g_V(i+1,j) + dx\,dy\,g_V(i+1,j+1) \qquad (5) $$
where $I_L(i-1,j+1)$, $I_L(i,j+1)$, $I_L(i+1,j+1)$, $I_L(i-1,j-1)$, $I_L(i,j-1)$, $I_L(i+1,j-1)$, $I_L(i-1,j)$ and $I_L(i+1,j)$ denote the pixel values of the eight pixels in the neighborhood of the pixel to be interpolated in the original image.
CN201510152962.3A 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection Active CN104700361B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510152962.3A CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510152962.3A CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on edge detection

Publications (2)

Publication Number Publication Date
CN104700361A true CN104700361A (en) 2015-06-10
CN104700361B CN104700361B (en) 2017-12-05

Family

ID=53347450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510152962.3A Active CN104700361B (en) 2015-04-01 2015-04-01 Image interpolation method and system based on rim detection

Country Status (1)

Country Link
CN (1) CN104700361B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678700A (en) * 2016-01-11 2016-06-15 苏州大学 Image interpolation method and system based on prediction gradient
CN106886981A (en) * 2016-12-30 2017-06-23 中国科学院自动化研究所 Image edge enhancement method and system based on rim detection
CN108062821A (en) * 2017-12-12 2018-05-22 深圳怡化电脑股份有限公司 Edge detection method and money-checking equipment
CN108495118A (en) * 2018-02-27 2018-09-04 吉林省行氏动漫科技有限公司 A kind of 3 D displaying method and system of Glassless
CN109993693A (en) * 2017-12-29 2019-07-09 澜至电子科技(成都)有限公司 Method and apparatus for carrying out interpolation to image
CN112862680A (en) * 2021-01-29 2021-05-28 百度时代网络技术(北京)有限公司 Image interpolation method, apparatus, device and medium thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050058371A1 (en) * 2003-09-11 2005-03-17 Chin-Hui Huang Fast edge-oriented image interpolation algorithm
CN1667650A (en) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 Image zooming method based on edge detection
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050058371A1 (en) * 2003-09-11 2005-03-17 Chin-Hui Huang Fast edge-oriented image interpolation algorithm
CN1667650A (en) * 2005-04-08 2005-09-14 杭州国芯科技有限公司 Image zooming method based on edge detection
CN101197995A (en) * 2006-12-07 2008-06-11 深圳艾科创新微电子有限公司 Edge self-adapting de-interlacing interpolation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
吴炜森 (Wu Weisen): "Research on Image Interpolation Algorithms for Mobile Phones", China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105678700A (en) * 2016-01-11 2016-06-15 苏州大学 Image interpolation method and system based on prediction gradient
CN105678700B (en) * 2016-01-11 2018-10-09 苏州大学 Image interpolation method and system based on prediction gradient
CN106886981A (en) * 2016-12-30 2017-06-23 中国科学院自动化研究所 Image edge enhancement method and system based on rim detection
CN106886981B (en) * 2016-12-30 2020-02-14 中国科学院自动化研究所 Image edge enhancement method and system based on edge detection
CN108062821A (en) * 2017-12-12 2018-05-22 深圳怡化电脑股份有限公司 Edge detection method and money-checking equipment
CN108062821B (en) * 2017-12-12 2020-04-28 深圳怡化电脑股份有限公司 Edge detection method and currency detection equipment
CN109993693A (en) * 2017-12-29 2019-07-09 澜至电子科技(成都)有限公司 Method and apparatus for carrying out interpolation to image
CN109993693B (en) * 2017-12-29 2023-04-25 澜至电子科技(成都)有限公司 Method and apparatus for interpolating an image
CN108495118A (en) * 2018-02-27 2018-09-04 吉林省行氏动漫科技有限公司 A kind of 3 D displaying method and system of Glassless
CN112862680A (en) * 2021-01-29 2021-05-28 百度时代网络技术(北京)有限公司 Image interpolation method, apparatus, device and medium thereof
CN112862680B (en) * 2021-01-29 2024-07-16 百度时代网络技术(北京)有限公司 Image interpolation method, device, equipment and medium thereof

Also Published As

Publication number Publication date
CN104700361B (en) 2017-12-05

Similar Documents

Publication Publication Date Title
CN104700361B (en) Image interpolation method and system based on edge detection
CN103049914B (en) High-resolution depth graph based on border generates method and system
US8335394B2 (en) Image processing method for boundary resolution enhancement
US8175417B2 (en) Apparatus, method, and computer-readable recording medium for pixel interpolation
EP2209087B1 (en) Apparatus and method of obtaining high-resolution image
CN104700360B (en) Image-scaling method and system based on edge self-adaption
US8494308B2 (en) Image upscaling based upon directional interpolation
CN106204441B (en) Image local amplification method and device
US9105106B2 (en) Two-dimensional super resolution scaling
US20090226097A1 (en) Image processing apparatus
CN106169174A (en) A kind of image magnification method
CN102682424B (en) Image amplification processing method based on edge direction difference
US8830395B2 (en) Systems and methods for adaptive scaling of digital images
JP2002525723A (en) Method and apparatus for zooming digital images
US10410326B2 (en) Image anti-aliasing system
WO2015198368A1 (en) Image processing device and image processing method
CN101917624B (en) Method for reconstructing high resolution video image
JP4868249B2 (en) Video signal processing device
Ousguine et al. A new image interpolation using gradient-orientation and cubic spline interpolation
CN104700357A (en) Chinese character image zooming method based on bilinear operator
WO2016154970A1 (en) Method and system for image interpolation based on edge detection
JP3200351B2 (en) Image processing apparatus and method
CN101459811B (en) Video picture format conversion method and corresponding device
CN101763626A (en) Image zooming method
Chang et al. Edge directional interpolation for image upscaling with temporarily interpolated pixels

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171211

Address after: 102412 Beijing City, Fangshan District Yan Village Yan Fu Road No. 1 No. 11 building 4 layer 402

Patentee after: Beijing Si Lang science and Technology Co.,Ltd.

Address before: 100080 Zhongguancun East Road, Beijing, No. 95, No.

Patentee before: Institute of Automation, Chinese Academy of Sciences

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220119

Address after: 519031 room 532, building 18, No. 1889, Huandao East Road, Hengqin District, Zhuhai City, Guangdong Province

Patentee after: Zhuhai Jilang Semiconductor Technology Co.,Ltd.

Address before: 102412 room 402, 4th floor, building 11, No. 1, Yanfu Road, Yancun Town, Fangshan District, Beijing

Patentee before: Beijing Si Lang science and Technology Co.,Ltd.

TR01 Transfer of patent right
CP03 Change of name, title or address

Address after: Room 701, 7th Floor, Building 56, No. 2, Jingyuan North Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing 100176 (Beijing Pilot Free Trade Zone High-end Industry Zone Yizhuang Group)

Patentee after: Beijing Jilang Semiconductor Technology Co., Ltd.

Address before: 519031 room 532, building 18, No. 1889, Huandao East Road, Hengqin District, Zhuhai City, Guangdong Province

Patentee before: Zhuhai Jilang Semiconductor Technology Co.,Ltd.

CP03 Change of name, title or address