Detailed Description
Fig. 1 is a flowchart of a first embodiment of an image interpolation method based on edge detection; as shown in fig. 1, the image interpolation method based on edge detection of the present invention includes:
S11, determining the position of the pixel to be interpolated in the original image according to the sizes, namely the resolutions, of the original image and the interpolated image; preferably, determining the position of the pixel to be interpolated in the original image according to the sizes of the original image and the interpolated image includes calculating the position of the pixel to be interpolated in the original image according to formula (1):
wherein i_L and j_L respectively represent the row coordinate and the column coordinate of the position of the pixel to be interpolated in the original image, i.e. the low-resolution image; i_H and j_H respectively represent the row coordinate and the column coordinate of the position of the pixel to be interpolated in the interpolated image, i.e. the high-resolution image; H_L and W_L respectively represent the height and the width of the original image; and H_H and W_H respectively represent the height and the width of the interpolated image;
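Formula (1) itself is not reproduced above. A minimal sketch of the coordinate mapping it describes, assuming the usual proportional mapping i_L = i_H*H_L/H_H and j_L = j_H*W_L/W_H; the function name and the fractional offsets dx, dy (used later in formulas (4) and (5)) are illustrative, not taken verbatim from the original:

```python
def map_to_source(i_h, j_h, h_l, w_l, h_h, w_h):
    """Map a pixel position in the interpolated (high-resolution) image to the
    original (low-resolution) image, assuming simple proportional scaling."""
    i_l = i_h * h_l / h_h          # row coordinate in the original image
    j_l = j_h * w_l / w_h          # column coordinate in the original image
    i = int(i_l)                   # integer part: top-left source row
    j = int(j_l)                   # integer part: top-left source column
    dy = i_l - i                   # vertical fractional offset
    dx = j_l - j                   # horizontal fractional offset
    return i_l, j_l, i, j, dx, dy

# usage: pixel (100, 200) of a 1080x1920 interpolated image inside a 540x960 original
print(map_to_source(100, 200, 540, 960, 1080, 1920))
```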
S12, determining the edge direction of the pixel to be interpolated in the original image; preferably, a first method for determining the edge direction of the pixel to be interpolated in the original image may include:
Fig. 2 is a schematic diagram of a Sobel gradient method in the image interpolation method based on edge detection according to the first embodiment of the present invention; as shown in fig. 2, the horizontal gradient g_H(i,j) and the vertical gradient g_V(i,j) of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image are calculated with the Sobel gradient operator according to formulas (2) and (3):
g_H(i,j) = I_L(i-1,j+1) + 2*I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2*I_L(i,j-1) - I_L(i+1,j-1)    (2)
g_V(i,j) = I_L(i-1,j-1) + 2*I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2*I_L(i+1,j) - I_L(i+1,j+1)    (3)
Determining the horizontal gradient g_H(i_L,j_L) and the vertical gradient g_V(i_L,j_L) of the pixel to be interpolated by bilinear interpolation according to the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood, namely according to formulas (4) and (5); the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely (g_V(i_L,j_L), -g_H(i_L,j_L)):
g_H(i_L,j_L) = (1-dx)*(1-dy)*g_H(i,j) + dx*(1-dy)*g_H(i,j+1) + (1-dx)*dy*g_H(i+1,j) + dx*dy*g_H(i+1,j+1)    (4)
g_V(i_L,j_L) = (1-dx)*(1-dy)*g_V(i,j) + dx*(1-dy)*g_V(i,j+1) + (1-dx)*dy*g_V(i+1,j) + dx*dy*g_V(i+1,j+1)    (5)
wherein I_L(i-1,j+1), I_L(i,j+1), I_L(i+1,j+1), I_L(i-1,j-1), I_L(i,j-1), I_L(i+1,j-1), I_L(i-1,j) and I_L(i+1,j) respectively represent the pixel values of eight pixels in the neighborhood of the pixel to be interpolated in the original image;
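A minimal sketch of the gradient step described by formulas (2)-(5): Sobel gradients at the four neighboring integer positions, then bilinear interpolation to the sub-pixel position. The function names and the assumption that the original image is a 2D numpy-style array are mine, not from the original:

```python
def sobel_gradients(img_l, i, j):
    """Horizontal and vertical Sobel gradients of the original image img_l
    (a 2D array) at integer position (i, j), following formulas (2) and (3)."""
    g_h = (img_l[i-1, j+1] + 2*img_l[i, j+1] + img_l[i+1, j+1]
           - img_l[i-1, j-1] - 2*img_l[i, j-1] - img_l[i+1, j-1])
    g_v = (img_l[i-1, j-1] + 2*img_l[i-1, j] + img_l[i-1, j+1]
           - img_l[i+1, j-1] - 2*img_l[i+1, j] - img_l[i+1, j+1])
    return g_h, g_v

def gradient_at_subpixel(img_l, i, j, dx, dy):
    """Bilinearly interpolate the gradients to (i + dy, j + dx), formulas (4) and (5).
    The edge direction is then (g_v, -g_h), perpendicular to the gradient."""
    g = {(di, dj): sobel_gradients(img_l, i + di, j + dj)
         for di in (0, 1) for dj in (0, 1)}
    w = {(0, 0): (1 - dx) * (1 - dy), (0, 1): dx * (1 - dy),
         (1, 0): (1 - dx) * dy,       (1, 1): dx * dy}
    g_h = sum(w[k] * g[k][0] for k in w)
    g_v = sum(w[k] * g[k][1] for k in w)
    return g_h, g_v
```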
next, fig. 3 is a schematic diagram of a gradient covariance matrix method in an image interpolation method based on edge detection according to a first embodiment of the present invention, and as shown in fig. 3, the second method for determining an edge direction of a pixel to be interpolated in an original image may include:
selecting a window with an arbitrary size of a window omega of H x W in a pixel neighborhood to be interpolated, for example, H is 4, and W is 6 in the example; determining the horizontal gradient g of all pixels within a windowH(i, j) and vertical gradient gV(i, j) to determine the covariance matrix M of all the pixels in the window in the neighborhood of the pixel to be interpolated:
M = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} \left(g_H(i,j)\right)^2 & \sum_{(i,j)\in\Omega} g_H(i,j)\, g_V(i,j) \\ \sum_{(i,j)\in\Omega} g_H(i,j)\, g_V(i,j) & \sum_{(i,j)\in\Omega} \left(g_V(i,j)\right)^2 \end{bmatrix}    (6)
and calculating the eigenvalues and eigenvectors of the covariance matrix, and then determining the eigenvector v corresponding to the smaller eigenvalue as the edge direction, namely according to formula (7):
wherein v = (v_x, v_y) represents the eigenvector corresponding to the smaller eigenvalue of the covariance matrix; v_x represents the horizontal component of the edge direction, and v_y represents the vertical component of the edge direction.
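A minimal sketch of the covariance-matrix direction estimate of formulas (6) and (7), assuming the window gradients are already available as 4 x 6 arrays; the numpy-based eigendecomposition and the names are assumptions, not from the original:

```python
import numpy as np

def edge_direction_from_window(g_h_win, g_v_win):
    """Edge direction from the gradient covariance matrix of a window, per (6)-(7).
    g_h_win, g_v_win: arrays of horizontal / vertical gradients over the H x W window."""
    a = np.sum(g_h_win ** 2)               # A = sum of gH^2 over the window
    b = np.sum(g_h_win * g_v_win)          # B = sum of gH*gV over the window
    c = np.sum(g_v_win ** 2)               # C = sum of gV^2 over the window
    m = np.array([[a, b], [b, c]])
    eigvals, eigvecs = np.linalg.eigh(m)   # eigh returns eigenvalues in ascending order
    v = eigvecs[:, 0]                      # eigenvector of the smaller eigenvalue
    v_x, v_y = v[0], v[1]                  # horizontal and vertical edge components
    return v_x, v_y
```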
In addition, the third method for determining the edge direction of the pixel to be interpolated in the original image may include the steps described in the second method and further include:
refining the covariance matrix according to equation (8) to obtain a refined covariance matrix M':
M' = \begin{bmatrix} A & B \\ B & C \end{bmatrix} = \begin{bmatrix} \sum_{(i,j)\in\Omega} w(i,j)\left(g_H(i,j)\right)^2 & \sum_{(i,j)\in\Omega} w(i,j)\, g_H(i,j)\, g_V(i,j) \\ \sum_{(i,j)\in\Omega} w(i,j)\, g_H(i,j)\, g_V(i,j) & \sum_{(i,j)\in\Omega} w(i,j)\left(g_V(i,j)\right)^2 \end{bmatrix}    (8)
wherein the values of w(i,j) are obtained by bilinear interpolation, namely:
w(i-1,j-2)=(1-dx)*(1-dy), w(i-1,j-1)=(1-dy), w(i-1,j)=(1-dy), w(i-1,j+1)=(1-dy), w(i-1,j+2)=(1-dy), w(i-1,j+3)=dx*(1-dy);
w(i,j-2)=(1-dx), w(i,j-1)=1, w(i,j)=1, w(i,j+1)=1, w(i,j+2)=1, w(i,j+3)=dx;
w(i+1,j-2)=(1-dx), w(i+1,j-1)=1, w(i+1,j)=1, w(i+1,j+1)=1, w(i+1,j+2)=1, w(i+1,j+3)=dx;
w(i+2,j-2)=(1-dx)*dy, w(i+2,j-1)=dy, w(i+2,j)=dy, w(i+2,j+1)=dy, w(i+2,j+2)=dy, w(i+2,j+3)=dx*dy;
the w(i,j) values can also be represented as in Table 1:
| (1-dx)*(1-dy) | (1-dy) | (1-dy) | (1-dy) | (1-dy) | dx*(1-dy) |
| (1-dx)        | 1      | 1      | 1      | 1      | dx        |
| (1-dx)        | 1      | 1      | 1      | 1      | dx        |
| (1-dx)*dy     | dy     | dy     | dy     | dy     | dx*dy     |
TABLE 1
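A minimal sketch of the weighted covariance matrix of formula (8) over a 4 x 6 window, using the Table 1 weights; the array layout (rows i-1..i+2, columns j-2..j+3) and the helper name are assumptions, not from the original:

```python
import numpy as np

def weighted_covariance(g_h_win, g_v_win, dx, dy):
    """Refined covariance matrix M' per formula (8) for a 4 x 6 window
    whose rows correspond to i-1..i+2 and columns to j-2..j+3 (Table 1 layout)."""
    col = np.array([1 - dx, 1, 1, 1, 1, dx])   # horizontal bilinear weights
    row = np.array([1 - dy, 1, 1, dy])         # vertical bilinear weights
    w = np.outer(row, col)                     # 4 x 6 weight table, reproduces Table 1
    a = np.sum(w * g_h_win ** 2)
    b = np.sum(w * g_h_win * g_v_win)
    c = np.sum(w * g_v_win ** 2)
    return np.array([[a, b], [b, c]])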
S13, if the absolute value of the slope of the edge direction is not less than the first threshold, performing interpolation according to a line intersection method and/or a column intersection method, that is, an edge interpolation method, where the line intersection method and/or the column intersection method preferably include:
s131, judging the absolute value of the slope of the edge direction, and if the absolute value is smaller than a second threshold value T1, performing interpolation according to a line intersection method;
s132, if the value is not less than the third threshold value T2, performing interpolation according to a column intersection method;
s133, if the value is not less than the second threshold value T1 but less than the third threshold value T2, performing interpolation according to the row intersection method and the column intersection method at the same time, wherein the interpolation comprises the following steps:
s1331, calculating positions of a plurality of line intersections and/or a plurality of column intersections in the neighborhood of the pixel to be interpolated in the original image, where the line intersections and/or the column intersections are intercepted by the pixel to be interpolated and the straight line determined by the edge direction, includes:
calculating the positions of a plurality of line intersections and a plurality of column intersections of a plurality of lines and a plurality of columns in the neighborhood of the pixel to be interpolated in the original image, wherein the line intersections and the column intersections are intercepted by the pixel to be interpolated and the straight line determined by the edge direction;
preferably, fig. 4 is a schematic diagram of the line intersection method in the first embodiment of the image interpolation method based on edge detection; as shown in fig. 4, the line intersection method uses the intersections with the four lines above and below the pixel to be interpolated;
correspondingly, calculating the positions of the line intersections intercepted on the lines in the neighborhood of the pixel to be interpolated in the original image by the straight line determined by the pixel to be interpolated and the edge direction comprises calculating the positions of the four line intersections according to formulas (9), (10), (11) and (12), respectively:
similarly, fig. 5 is a schematic diagram of the column intersection method in the first embodiment of the image interpolation method based on edge detection; as shown in fig. 5, the column intersection method uses the intersections with the four columns to the left and right of the pixel to be interpolated; correspondingly, calculating the positions of the column intersections intercepted on the columns in the neighborhood of the pixel to be interpolated in the original image by the straight line determined by the pixel to be interpolated and the edge direction includes calculating the positions of the four column intersections according to formulas (13), (14), (15) and (16), respectively (a sketch of this intersection geometry is given below):
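Formulas (9)-(16) are not reproduced above. A minimal sketch of the intersection geometry they describe, assuming the line through the sub-pixel position (i_L, j_L) with edge direction (v_x, v_y) is intersected with the two integer rows above and two below (line intersections) or the two integer columns to the left and two to the right (column intersections); names and the exact choice of rows/columns are illustrative:

```python
import math

def row_intersections(i_l, j_l, v_x, v_y):
    """Column coordinates where the edge line through (i_l, j_l) with direction
    (v_x, v_y) crosses rows floor(i_l)-1 .. floor(i_l)+2; assumes v_y != 0."""
    i0 = math.floor(i_l)
    rows = [i0 - 1, i0, i0 + 1, i0 + 2]
    return [(r, j_l + (r - i_l) * v_x / v_y) for r in rows]

def col_intersections(i_l, j_l, v_x, v_y):
    """Row coordinates where the edge line crosses columns floor(j_l)-1 .. floor(j_l)+2;
    assumes v_x != 0."""
    j0 = math.floor(j_l)
    cols = [j0 - 1, j0, j0 + 1, j0 + 2]
    return [(i_l + (c - j_l) * v_y / v_x, c) for c in cols]
```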
s1332, determining a pixel value of the row intersection and/or the column intersection according to a value of a pixel in a neighborhood of the row intersection and/or the column intersection in the original image by using a one-dimensional interpolation method, including:
determining the pixel values of said row and column intersections from the values of pixels in the neighborhoods of said row and column intersections in the original image using one-dimensional interpolation, preferably including determining the pixel values of the four row intersections according to formulas (17), (18), (19) and (20), respectively:
similarly, the column intersection method uses the intersections with the four columns to the left and right of the pixel to be interpolated; correspondingly, taking the pixel value of the first column intersection as an example, determining the pixel value of the column intersection from the values of the pixels in the neighborhood of the column intersection in the original image by one-dimensional interpolation includes determining it according to formula (21):
the calculation of the pixel values of the other three column intersections is analogous and is not repeated;
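Formulas (17)-(21) are not reproduced above. A minimal sketch of the one-dimensional interpolation of an intersection pixel value along its row; the linear kernel and the function name are assumptions, since the original does not state which one-dimensional kernel is used:

```python
def intersection_pixel_value(img_l, r, x):
    """Pixel value at a row intersection (integer row r, fractional column x),
    obtained by one-dimensional linear interpolation along that row of the
    original image img_l (a 2D array); a column intersection is handled analogously."""
    c = int(x)
    t = x - c
    return (1 - t) * img_l[r, c] + t * img_l[r, c + 1]
```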
performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated comprises one-dimensional filtering according to formula (22):
I_H(i_H,j_H) = f_0*I_P0 + f_1*I_P1 + f_2*I_P2 + f_3*I_P3    (22)
wherein ⌊·⌋ represents rounding down; (i_L, j_L) represents the coordinates of the position of the pixel to be interpolated in the original image; i and j represent the row index and the column index, respectively; (v_x, v_y) represents the edge direction; P_0, P_1, P_2, P_3 respectively represent the four line intersections; I_P0, I_P1, I_P2, I_P3 represent the pixel values of the four line intersections; and [f_0, f_1, f_2, f_3] are the coefficients of the one-dimensional filter, e.g. [1, 3, 3, 1];
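A minimal sketch of the one-dimensional filtering step of formula (22): the pixel values of the four intersections are combined with a short filter kernel. The normalization of the [1, 3, 3, 1] kernel is an assumption, as the original only gives the coefficients as an example:

```python
def filter_intersections(intersection_values, coeffs=(1.0, 3.0, 3.0, 1.0)):
    """One-dimensional filtering of the four intersection pixel values, formula (22).
    The kernel is normalized here so the output stays in the input intensity range
    (an assumption; the original only lists [1, 3, 3, 1] as an example kernel)."""
    s = sum(coeffs)
    return sum(f * v for f, v in zip(coeffs, intersection_values)) / s

# usage: pixel values at the four row intersections P0..P3
print(filter_intersections([120.0, 128.0, 131.0, 140.0]))
```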
It should be noted that, when calculating the intersections of the edge direction of the pixel to be interpolated with the lines in its neighborhood, if the edge direction is horizontal, the direction has no intersection with the lines in the neighborhood of the pixel to be interpolated; and when the absolute value of the slope of the edge direction is smaller than the set threshold k_T1, the intersections with the lines in the neighborhood of the pixel to be interpolated are far away and have relatively little correlation with the pixel to be interpolated. Therefore, in both situations interpolation is performed by a non-edge image interpolation method, in which the two-dimensional image interpolation is decomposed into one-dimensional interpolation in the horizontal direction and in the vertical direction performed in sequence, and the order of the horizontal interpolation and the vertical interpolation can be exchanged. Similarly, when calculating the intersections of the edge direction of the pixel to be interpolated with the columns in its neighborhood, if the edge direction is vertical, there is no intersection with the columns in the neighborhood of the pixel to be interpolated; and when the absolute value of the slope of the edge direction is larger than the set threshold k_T2, the intersections with the columns in the neighborhood of the pixel to be interpolated are far away and have relatively little correlation with the pixel to be interpolated. Therefore, in both situations interpolation is likewise performed by the non-edge image interpolation method, in which the two-dimensional image interpolation is decomposed into one-dimensional interpolation in the horizontal direction and in the vertical direction performed in sequence, and the order of the horizontal interpolation and the vertical interpolation can be exchanged.
S1333, performing one-dimensional filtering on the determined pixel values of the line intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolating the original image, which includes:
respectively performing one-dimensional filtering on the determined pixel values of the row intersections and the column intersections in the neighborhood of the pixel to be interpolated, so as to obtain the interpolation result I_HR(i_H,j_H) of the row intersection filtering and the interpolation result I_HC(i_H,j_H) of the column intersection filtering; fig. 6 shows the weight function used when the row intersection method and the column intersection method are combined in the first embodiment of the image interpolation method based on edge detection, that is, the weights for combining the row intersection method and the column intersection method are generated by the curve shown in fig. 6, and the value I_H(i_H,j_H) of the pixel to be interpolated is determined by the weighting of formula (23):
I_H(i_H,j_H) = w*I_HR(i_H,j_H) + (1-w)*I_HC(i_H,j_H)    (23)
Then, interpolating the original image according to the value of the pixel to be interpolated;
wherein (i_H, j_H) represents the coordinates of the position of the pixel to be interpolated, and w represents the weight of the interpolation result of the row intersection filtering;
it should be noted that, at low angles, interpolation is performed by the column intersection method, and in other directions interpolation is performed by the line intersection method; T1 and T2 are preset thresholds, and the way of combining the line intersection method and the column intersection method is not limited to the form described above (a sketch of one such combination is given below);
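A minimal sketch of the threshold decision of steps S131-S133 and the weighted combination of formula (23); the linear weight ramp between T1 and T2 stands in for the curve of fig. 6, which is not reproduced in the text, so it is only an assumption:

```python
def edge_interpolate(slope_abs, i_hr, i_hc, t1, t2):
    """Combine the line (row) intersection and column intersection results
    following steps S131-S133 and formula (23).
    slope_abs: |slope| of the edge direction; i_hr / i_hc: row / column filtering results."""
    if slope_abs < t1:            # S131: below T1, line intersection method only
        return i_hr
    if slope_abs >= t2:           # S132: at or above T2, column intersection method only
        return i_hc
    # S133: between T1 and T2, weighted combination per formula (23);
    # a linear ramp stands in here for the weight curve of fig. 6 (assumption)
    w = (t2 - slope_abs) / (t2 - t1)
    return w * i_hr + (1 - w) * i_hc
```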
preferably, the image interpolation method based on edge detection further includes:
s14, if the absolute value of the slope of the edge direction is smaller than a set threshold, carrying out interpolation according to a non-edge interpolation method; that is, the horizontal component v of the edge direction of the pixel to be interpolatedxAnd a vertical component vyIf the pixel to be interpolated has no direction, interpolating by adopting a non-edge image interpolation method, decomposing the two-dimensional image interpolation into a horizontal one-dimensional direction and a vertical one-dimensional direction, and sequentially interpolating in the horizontal direction and the vertical direction, wherein the sequence of the interpolation in the horizontal direction and the interpolation in the vertical direction can be exchanged.
S15, fusing the interpolation result obtained by the line intersection method and/or the column intersection method with the interpolation result obtained by the non-edge interpolation method to obtain the interpolated image.
The image interpolation method based on edge detection of the present invention can use a large number of original pixels to perform interpolation at any integer or non-integer scaling ratio and along any edge direction, so that the edges of the interpolated image are sharp and the sawtooth (jagging) phenomenon is avoided.
Fig. 7 is a block diagram of a first embodiment of an image interpolation system based on edge detection according to the present invention, and as shown in fig. 7, the image interpolation system based on edge detection according to the present invention includes:
the coordinate calculation unit is used for determining the position of the pixel to be interpolated in the original image according to the size of the original image and the size of the image after interpolation;
the direction calculation unit is used for determining the edge direction of the pixel to be interpolated in the original image;
the intersection point calculation unit is used for calculating, when the absolute value of the slope of the edge direction is not less than a first threshold, the positions of a plurality of row intersections and/or a plurality of column intersections intercepted on a plurality of rows and/or a plurality of columns in the neighborhood of the pixel to be interpolated in the original image by the straight line determined by the pixel to be interpolated and the edge direction;
the edge interpolation filtering unit is used for performing interpolation according to the row intersection method and/or the column intersection method; specifically, it is used for determining the pixel values of the row intersections and/or the column intersections from the values of pixels in the neighborhoods of the row intersections and/or the column intersections in the original image by one-dimensional interpolation, performing one-dimensional filtering on the determined pixel values of the row intersections and/or the column intersections in the neighborhood of the pixel to be interpolated to obtain the value of the pixel to be interpolated, and interpolating the original image.
Preferably, the image interpolation system based on edge detection further includes:
a non-edge interpolation unit for performing interpolation according to a non-edge interpolation method when an absolute value of a slope in an edge direction is smaller than a set threshold;
and the fusion unit is used for fusing the result obtained by the line intersection method and/or the column intersection method interpolation and the result obtained by the non-edge interpolation method so as to obtain the interpolated image.
Preferably, the direction calculating unit is specifically configured to: respectively calculating the horizontal gradients g of a plurality of pixels in the neighborhood of the pixel to be interpolated in the original image according to the formulas (2) and (3)H(i, j) and vertical gradient gV(i,j):
g_H(i,j) = I_L(i-1,j+1) + 2*I_L(i,j+1) + I_L(i+1,j+1) - I_L(i-1,j-1) - 2*I_L(i,j-1) - I_L(i+1,j-1)    (2)
g_V(i,j) = I_L(i-1,j-1) + 2*I_L(i-1,j) + I_L(i-1,j+1) - I_L(i+1,j-1) - 2*I_L(i+1,j) - I_L(i+1,j+1)    (3)
and to determine the horizontal gradient g_H(i_L,j_L) and the vertical gradient g_V(i_L,j_L) of the pixel to be interpolated by bilinear interpolation according to the position of the pixel to be interpolated in the original image and the horizontal and vertical gradients of the pixels in its neighborhood, namely according to formulas (4) and (5); the edge direction of the pixel to be interpolated is then the direction perpendicular to its gradient direction, namely (g_V(i_L,j_L), -g_H(i_L,j_L)):
g_H(i_L,j_L) = (1-dx)*(1-dy)*g_H(i,j) + dx*(1-dy)*g_H(i,j+1) + (1-dx)*dy*g_H(i+1,j) + dx*dy*g_H(i+1,j+1)    (4)
g_V(i_L,j_L) = (1-dx)*(1-dy)*g_V(i,j) + dx*(1-dy)*g_V(i,j+1) + (1-dx)*dy*g_V(i+1,j) + dx*dy*g_V(i+1,j+1)    (5)
wherein I_L(i-1,j+1), I_L(i,j+1), I_L(i+1,j+1), I_L(i-1,j-1), I_L(i,j-1), I_L(i+1,j-1), I_L(i-1,j) and I_L(i+1,j) respectively represent the pixel values of eight pixels in the neighborhood of the pixel to be interpolated in the original image.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.