EP1794715A2 - Image interpolation - Google Patents

Image interpolation

Info

Publication number
EP1794715A2
Authority
EP
European Patent Office
Prior art keywords
pixels
image
computing
sets
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05784295A
Other languages
German (de)
French (fr)
Inventor
Henricus Wilhelm Peter Van Der Heijden
Erwin Ben Beller
Robert Schutten
Haiyan He
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips Electronics NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics NV filed Critical Koninklijke Philips Electronics NV
Priority to EP05784295A priority Critical patent/EP1794715A2/en
Publication of EP1794715A2 publication Critical patent/EP1794715A2/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/403 - Edge-driven scaling; Edge-based scaling
    • G06T 3/4007 - Scaling of whole images or parts thereof, e.g. expanding or contracting, based on interpolation, e.g. bilinear interpolation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/14 - Picture signal circuitry for video frequency region
    • H04N 5/20 - Circuitry for controlling amplitude response
    • H04N 5/205 - Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
    • H04N 5/208 - Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic for compensating for attenuation of high frequency components, e.g. crispening, aperture distortion correction


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Systems (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention relates to a device, method and image processing apparatus for creating an output image on basis of an input image by means of interpolation of pixel values of the input image. The invention proposes an edge-dependent interpolation scheme in which match errors for sets of pixels in different neighbourhoods, in respective orientations relative to a particular pixel to be interpolated (10), are first computed. Comparisons of subsets (400, 410, 420) of pixel values are used in a function for the computation of match errors. A match error and the corresponding orientation are selected and the pixels in the selected orientation are used in the interpolation of the particular pixel.

Description

Image interpolation
The invention relates to an image interpolation unit for creating an output image on basis of an input image by means of interpolation of pixel values of the input image, the image interpolation unit comprising: selecting means for selecting sets of evaluation pixels for respective orientations (50, 60, 70) in the input image, related to a particular pixel (10) of the output image to be interpolated; computing means for computing match errors for the respective orientations by comparing values of subsets (400, 410, 420) of the sets of evaluation pixels selected; selecting means for selecting a particular orientation based on the match errors computed; creating means for creating a set of interpolation pixels on basis of the particular orientation; and computing means for computing a pixel value of the particular pixel on basis of the set of interpolation pixels created. The invention further relates to an image processing apparatus comprising such an image interpolation unit.
The invention further relates to a method of creating an output image on basis of an input image by means of interpolation of pixel values of the input image, the method comprising: selecting sets of evaluation pixels for respective orientations in the input image, related to a particular pixel of the output image to be interpolated; computing match errors for the respective orientations by comparing values of subsets of the sets of evaluation pixels selected; selecting a particular orientation based on the match errors computed; creating a set of interpolation pixels on basis of the particular orientation; and computing a pixel value of the particular pixel on basis of the set of interpolation pixels being created.
A particular interpolated pixel value is obtained from a group of pixel values in the neighbourhood of the particular pixel. For obtaining a relatively better quality of interpolated images, a group of pixels belonging to a particular neighbourhood from a set of neighbourhoods of various orientations is selected. The selection is made on the basis of the presence of an edge structure within the particular neighbourhood.
An implementation of such an edge-dependent interpolation scheme is known from US patent US6133957. This patent describes a method of generating a plurality of additional pixels wherein, for a respective additional pixel generated at a location within the image, the method consists of the steps of computing a plurality of measurement signals representing respective orientations and respective measures of variance between pairs of sets of pixels in the respective directions relative to the location, evaluating and selecting from the measurement signals to identify the single best-choice direction, and generating the additional pixel by interpolating pixels belonging to the best-choice direction in the image. Each measurement signal conveys a measure of variance and an indication of the direction along which the variance was measured. In one embodiment, the measure of variance consists of the sum of absolute differences between respective pixels in the two sets of a respective pair of sets.
In this prior art method, where a single difference between sets of evaluation pixels along an expected direction is taken, there is no guarantee that a minimum absolute difference is due to the presence of an edge along the expected direction. A near-zero difference of pixels may often be coincidental. By applying the method suggested by the prior art, the quality of the interpolated output image is found to be poor in portions of images containing thin lines, lines with gradually varying intensity values and irregular textures.
It is an object of the invention to provide an image interpolation unit, which renders output images with a relatively higher quality.
This object of the invention is achieved in that the computing means of the image interpolation unit as mentioned in the opening paragraphs for computing the match errors are arranged to compute a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and for computing a further difference between the first difference and the second difference. The unit according to the invention thus computes the difference of differences of pixel values. A set containing a minimum number of three pixels is considered for evaluation. Constant values of adjacent pixels and constant gradients are duly considered to avoid false indications of orientation. In addition, edges of single-pixel width, when compared along the orientation of the edge structure, will give minimum variance even when there is a constant gradient in the pixel values. Further, the novel measure prescribed by the invention advantageously gives maximum variance in the direction perpendicular to the direction of the edge in the case of thin edges of one or two pixels width. Thus, the true orientation of the edge structure can be unambiguously identified by the invention. Since there are more than two subsets of pixels in the evaluation of orientations, the proposed method is more robust in the presence of noise.
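The advantage of the difference-of-differences measure over a single difference can be seen on a constant intensity gradient. The following is a minimal numeric sketch with made-up pixel values sampled along a candidate orientation; it is only an illustration of the argument above, not material from the patent:

```python
# Hypothetical samples taken along one candidate orientation: the intensity
# rises by a constant step of 10, i.e. a constant gradient rather than an edge.
samples = [100, 110, 120, 130, 140]

# A single difference (as in the prior-art measure) is 10 everywhere: small,
# but never zero, so an accidental near-zero difference elsewhere could still
# be mistaken for the edge orientation.
first_diffs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
print(first_diffs)    # [10, 10, 10, 10]

# The difference of differences is exactly zero on a constant gradient, so a
# gradient region cannot produce a false indication of orientation.
second_diffs = [first_diffs[i + 1] - first_diffs[i] for i in range(len(first_diffs) - 1)]
print(second_diffs)   # [0, 0, 0]
```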
An embodiment of the image interpolation unit according to the invention is characterised in that the sets of evaluation pixels comprise a predetermined number of pixels of the input image in a spatial neighbourhood of the particular pixel of the image to be interpolated.
A group of pixels belonging to a spatial neighbourhood form a set of evaluation pixels for detecting the presence of edge structure. The spatial neighbourhood may be fixed in a number of orientations and for each orientation of the neighbourhood, a group of pixels form the candidates for evaluation. Match error for each orientation is calculated for each one of the sets of evaluation pixels. The size of the neighbourhood may vary depending upon the size of edges expected in the image.
A further embodiment of the image interpolation unit according to the invention is characterised in that a first subset of a first one of the sets of evaluation pixels comprises pixels from a first pair of rows, a second subset of the first one of the sets of evaluation pixels comprises pixels from a second pair of rows, the first and the second pairs of rows having a common row, adjacent to the particular pixel of the image to be interpolated.
This embodiment is advantageous in that the comparison of pixel values is made in the immediate neighbourhood of the particular pixel to be interpolated to ascertain the orientation of the edge structure. Comparison is carried out between respective pixels belonging to a pair of rows in the neighbourhood of the particular pixel to be interpolated. Comparing subsets of rows that have an overlapping row ensures the continuity of the edge structure. In this embodiment, two rows of pixels above the pixel to be interpolated and one row of pixels below the pixel to be interpolated are considered for evaluation.
A further embodiment of the image interpolation unit according to the invention is characterised in that a first one of the match errors is computed on basis of the second subset of the first one of the sets of evaluation pixels and a third subset of the first one of the sets of evaluation pixels that comprises pixels from a third pair of rows, the second and third pairs of rows having a further common row, adjacent to the particular pixel of the image to be interpolated.
For computing the difference of pixels within a subset, each subset may contain a pair of rows. For the difference-of-differences measure, two such subsets may be required. Each match error may be the difference-of-differences measure obtained from three rows of pixels that have a common row adjacent to the particular pixel of the image to be interpolated. In this embodiment, two rows of pixels below the pixel to be interpolated and one row of pixels above the pixel to be interpolated are considered for evaluation.
A further embodiment of the image interpolation unit according to the invention is characterised in that a first one of the match errors is a sum of a first intermediate match error computed between the first and second subsets and a second intermediate match error computed between the second and the third subsets.
By adding two of the three-point match errors, a final match error is obtained in which edge structures of four pixels in length may be distinguished.
A further embodiment of the image interpolation unit according to the invention is characterised in that the first one of the match errors is a difference of a first intermediate match error computed between the first and second subsets and a second intermediate match error computed between the second and third subsets.
A match error may be a further difference of differences when a comparison over four pixels is made.
A further embodiment of the image interpolation unit according to the invention is characterised in that the selecting means for selecting a particular orientation are arranged to select the particular orientation based on the minimum of the match errors being computed for respective orientations. In order to ascertain the orientation of edge structure, it is advantageous to select the minimum match error that corresponds to the orientation.
A further embodiment of the image interpolation unit according to the invention is characterised in that the computing means for computing the pixel value of the particular pixel are arranged to compute the pixel value of the particular pixel by averaging the pixel values of a set of interpolation pixels.
Once a set of pixels that is a good candidate for the interpolation is established, interpolation is carried out from the set. The value of the particular pixel is computed by well-known methods from the set of interpolation pixels.
It is a further object of the invention to provide an image processing apparatus of the kind described in the opening paragraphs, which provides image output of relatively high quality.
This object of the invention is achieved in that the computing means for computing the match errors are arranged to compute a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and for computing a further difference between the first difference and the second difference.
The image processing apparatus might support one or more of the following types of image processing:
Video compression, i.e. encoding, decoding and transcoding; resolution conversion and format conversion; interlaced-scan to progressive-scan conversion; image zoom in/out. The image processing apparatus may comprise additional units, e.g. a receiving unit, a processing unit and a display unit. The image processing apparatus might, for example, be a television, PC, set-top box, VCR/VCP (video cassette recorder/player), satellite tuner, or DVD (Digital Versatile Disk) player or recorder.
It is a further object of the invention to propose a method defined in the opening paragraphs which provides image interpolation of relatively improved quality.
This object of the invention is achieved in that computing a first one of the match errors comprises computing a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and computing a further difference between the first difference and the second difference.
These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter. In the drawings:
Fig. 1 is a functional block diagram illustrating components of one embodiment of an image interpolation device according to the present invention;
Fig. 2 shows a flow chart illustrating steps of the method according to the present invention;
Fig. 3A shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a first set of pixels in a neighbourhood of first orientation;
Fig. 3B shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a second set of pixels in a neighbourhood of second orientation;
Fig. 3C shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and a third set of pixels in a neighbourhood of third orientation; Fig. 4 shows a schematic representation of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and subsets of pixels used in the evaluation of respective match errors;
Fig. 5 shows a representation of an embodiment of an image processing apparatus according to the invention.
Fig. 1 is a functional block diagram illustrating components of one embodiment of an image interpolation device according to the present invention. Selector 100 receives the image 1 from an external source. The image may also be internally generated, for example in the case of video cassette players or DVD players. Selector 100 selects sets of evaluation pixels for respective orientations in the input image relative to a particular pixel to be interpolated. In order to obtain an edge-dependent interpolation, sets of evaluation pixels are selected in different neighbourhoods in respective orientations with reference to the particular pixel to be interpolated. The selected sets 101 are used by computer 110 to compute match errors for the respective orientations by comparing pixel values of subsets of the sets of evaluation pixels. A function of pixel values is used for the calculation of the match error. A plurality of such match errors is computed with respect to groups of pixels belonging to different neighbourhoods corresponding to different orientations relative to the particular pixel to be interpolated. The computed match errors 111 are sent to selector 120. The function for the match error could be chosen such that the minimum match error corresponds to the orientation of the edge structure within the chosen neighbourhood. Selector 120 selects a particular orientation based on the match errors. Creator 130 creates a set of interpolation pixels 131 using the selected orientation 121. The best output image quality may be obtained by adapting the interpolation, selecting a group of pixels in the neighbourhood along a predicted orientation of an edge structure. Interpolator 140 uses the set of interpolation pixels 131 to compute the particular pixel 141 to be interpolated. One of several standard techniques may be used for interpolation.
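The five blocks of Fig. 1 map naturally onto a small software pipeline. The sketch below is only an illustration of that structure; the function names, the orientation range, the single-pair evaluation sets and the sum-of-absolute-differences match error are simplifying assumptions, not details taken from the patent (the patent's difference-of-differences measure is given as equations (1) and (2) further below):

```python
from typing import Dict, List, Tuple

Field = List[List[float]]   # pixel values F[y][x]; border handling is omitted in this sketch

def select_evaluation_sets(F: Field, x: int, y: int,
                           orientations=(-2, -1, 0, 1, 2), M: int = 1):
    """Selector 100: for every candidate orientation L, gather pairs of pixels
    from the rows directly above (y-1) and below (y+1) the pixel at (x, y)."""
    return {L: [(F[y - 1][x + L + j], F[y + 1][x - L + j])
                for j in range(-M, M + 1)]
            for L in orientations}

def compute_match_errors(sets) -> Dict[int, float]:
    """Computer 110: one match error per orientation; here a plain sum of
    absolute differences stands in for the patent's measures."""
    return {L: sum(abs(a - b) for a, b in pairs) for L, pairs in sets.items()}

def select_orientation(errors: Dict[int, float]) -> int:
    """Selector 120: the orientation with the smallest match error."""
    return min(errors, key=errors.get)

def create_interpolation_set(F: Field, x: int, y: int, L: int) -> Tuple[float, float]:
    """Creator 130: the two original pixels straddling (x, y) along orientation L."""
    return F[y - 1][x + L], F[y + 1][x - L]

def interpolate(pixels: Tuple[float, float]) -> float:
    """Interpolator 140: average of the created set of interpolation pixels."""
    return sum(pixels) / len(pixels)
```

Chaining these five calls for one missing pixel reproduces the data flow 1 → 101 → 111 → 121 → 131 → 141 of Fig. 1.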
Fig. 2 depicts the steps involved in the method of image interpolation according to the invention. An input image 1 is received in a first step 200 and reassigned to a grid with blank spaces where the pixels have to be interpolated. A particular pixel to be interpolated is selected in step 210, a neighbourhood is defined and sets of evaluation pixels are defined. A set of match errors corresponding to each set of evaluation pixels is computed in step 220. From the set of match errors, a particular match error and the particular orientation corresponding to that match error are selected in step 230. A set of interpolation pixels of the particular orientation is chosen in step 240 and the new pixel is calculated based on the selected set of interpolation pixels in step 250. The process is repeated from the first step 200 for each pixel to be interpolated. When there is no pixel left to be interpolated in step 260, the process terminates in step 270.
In Figs. 3A, 3B and 3C, schematic representations of a portion of an image comprising pixels arranged in rows, a particular pixel to be interpolated and sets of pixels in three different neighbourhoods in three different orientations are depicted.
Fig. 3A shows a representation of a particular pixel 10 to be interpolated in a first neighbourhood 300 in a first orientation 50. The pixels belonging to the first neighbourhood may be known as a first set of evaluation pixels.
Fig. 3B shows a representation of a particular pixel 10 to be interpolated in a second neighbourhood 310 in a second orientation 60. The pixels belonging to the second neighbourhood 310 may be known as a second set of evaluation pixels.
Fig. 3C shows a representation of a particular pixel 10 to be interpolated in a third neighbourhood 320 in a third orientation 70. The pixels belonging to the third neighbourhood 320 may be known as a third set of evaluation pixels.
Figs. 3A, 3B and 3C show a portion of an example image wherein the coordinates (x,y) of a Cartesian coordinate system may be assumed to coincide with the pixel 10 to be interpolated. Squares with solid lines denote the original pixels and squares with dotted lines denote the pixels that are to be interpolated. The pixels belonging to a first pair of adjacent rows 350 and 360 are denoted by ordinates (y-3) and (y-1) respectively. Accordingly, the pixels belonging to a second pair of rows 370 and 380 are denoted by ordinates (y+1) and (y+3) respectively. The pixels to the right of (x,y) have increasing abscissa values and pixels to the left of (x,y) have decreasing abscissa values. The value of a single pixel at position (x,y) is represented by F(x,y). F(x,y) may be a single luminance value in the case of a monochrome image or any one of the various representations of a colour image, for example RGB (red, green, blue) or HSI (hue, saturation, intensity). The orientation of pixels is represented by a variable L that may have one of the values ..., -2, -1, 0, +1, +2, ... The orientations corresponding to L = 0, +1, -1 are illustrated as 50, 60 and 70 in Figs. 3A, 3B and 3C respectively. The neighbourhood may comprise a strip of pixels at abscissae (x+j) in the x direction, where j may be an integer value within the range of -M to +M, thereby covering a strip of (2M+1) pixels. Such an arrangement ensures that the strip of the neighbourhood is symmetrically disposed with respect to the axis of its orientation.
Fig. 4 shows a schematic representation of a portion of an image in which a particular pixel to be interpolated, subsets of pixels and the arrangement of rows of pixels used for the evaluation of the respective match errors are shown.
Fig. 4 shows additional details of Fig. 3A. The set of evaluation pixels falling within the neighbourhood 300 is further divided into a first subset 400, a second subset 410 and a third subset 420. The subsets comprise a first pair of adjacent rows 350 and 360, a second pair of adjacent rows 360 and 370, and a third pair of adjacent rows 370 and 380 respectively. Respective pairs of pixel values within the subsets are compared for computing match errors. Comparison values within subsets 400, 410 and 420 may be used in various ways for the computation of match errors. Match errors may be computed in a number of ways by comparing respective pairs of pixels of the first pair of adjacent rows 350, 360, the second pair of adjacent rows 360, 370, and the third pair of adjacent rows 370, 380.
Four of the many possible ways of computing match errors are explained in the following paragraphs. A first way of computing a match error is by computing a first difference between respective pixel values of pairs of pixels within the first subset 400, computing a second difference between respective pixel values of pairs of pixels within the second subset 410 and computing a further difference between the first and the second differences. In order to map each difference value to a positive number, an absolute value of the difference |·| is used in a preferred embodiment, but it could also be a square function (·)^2 or any other function with similar characteristics. The final difference values thus obtained may be summed up to obtain a single value for ease of comparison.
The equation for computing the match error ME(L), computing the first difference of the values of pixels of the first subset 400, the second difference of the values of pixels of the second subset 410 and a further difference of the first and the second differences, may be given by
ME(L) = Σ_{j=-M}^{+M} |(F(x+3L+j, y-3) - F(x+L+j, y-1)) - (F(x+L+j, y-1) - F(x-L+j, y+1))|    (1)

which reduces to

ME(L) = Σ_{j=-M}^{+M} |F(x+3L+j, y-3) - 2F(x+L+j, y-1) + F(x-L+j, y+1)|    (2)
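Equation (2) translates directly into code. The sketch below is a minimal transcription under the assumption of an interlaced field stored as a 2-D array F[y][x] with original rows at y-3, y-1, y+1 and y+3 and no border clipping; the function name and default strip width are illustrative:

```python
def match_error_eq2(F, x, y, L, M=1):
    """ME(L) per equation (2): sum, over the strip j = -M..+M, of the absolute
    second difference taken along orientation L through rows y-3, y-1 and y+1."""
    total = 0.0
    for j in range(-M, M + 1):
        second_diff = (F[y - 3][x + 3 * L + j]
                       - 2.0 * F[y - 1][x + L + j]
                       + F[y + 1][x - L + j])
        total += abs(second_diff)   # |.| as in the preferred embodiment; (.)**2 would also work
    return total
```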
A second way of computing a match error is by computing a first difference between respective pixel values of pairs of pixels within the second subset 410, computing a second difference between respective pixel values of pairs of pixels within the third subset 420 and computing a further difference between the first and the second differences. By the same construction as equation (2), but now using rows (y-1), (y+1) and (y+3), the match error obtained by such a computation is given by

ME(L) = Σ_{j=-M}^{+M} |F(x+L+j, y-1) - 2F(x-L+j, y+1) + F(x-3L+j, y+3)|    (3)
A third way of computing a match error is by summing the first match error obtained from the first and second subsets 400, 410 and the second match error obtained from the second and third subsets 410, 420. The final match error is thus the sum of two intermediate match errors, each obtained by considering differences over three pixels at a time. The final match error in such computations is given by
ME(L) = Σ_{j=-M}^{+M} ( |F(x+3L+j, y-3) - 2F(x+L+j, y-1) + F(x-L+j, y+1)| + |F(x+L+j, y-1) - 2F(x-L+j, y+1) + F(x-3L+j, y+3)| )    (4)

ME(L) = Σ_{j=-M}^{+M} ( |F(x+2L+j, y-3) - 2F(x+j, y-1) + F(x-2L+j, y+1)| + |F(x+2L+j, y-1) - 2F(x+j, y+1) + F(x-2L+j, y+3)| )    (5)
Both equations (4) and (5) represent the summation of two intermediate match errors obtained from the first and second subsets (400, 410) and the second and third subsets (410, 420). In equation (4), the line of orientation, which is the axis of the chosen neighbourhood, coincides with the line of orientation along which the evaluation is carried out and passes through the particular pixel to be interpolated. In equation (5), the axis of the chosen neighbourhood does not coincide with, but is parallel to, the line of orientation along which the evaluation is carried out. The line of orientation is offset by a width of one pixel with respect to the axis of the chosen neighbourhood in equation (5). Any number of such variations may be used.
A fourth way of computing a match error is by computing a further difference of the match errors obtained from the first and second subsets (400, 410) and the second and third subsets (410, 420). Sets of four pixels along an orientation are thus taken into account in the computation. In essence, the measure for a neighbourhood four pixels in length gives rise to a further difference of the differences of differences. The match error ME(L) in such a case is given by
ME(L) = Σ_{j=-M}^{+M} |F(x+3L+j, y-3) - 3F(x+L+j, y-1) + 3F(x-L+j, y+1) - F(x-3L+j, y+3)|    (6)
In this case, the axis of the chosen neighbourhood coincides with the line of orientation in which the evaluation is carried out. It is possible to have the variations mentioned in equations (4) and (5) in the four pixels case as well.
Having computed the match errors for the respective orientations, a comparison is made among the match errors to select the local minimum. A preferred method for finding the minimum works in the following manner. Consider a set of match errors for a set of L values. First the vertical direction (L = 0) is compared with the nearest orientations (L = ±1). If ME(0) is smaller than both ME(+1) and ME(-1), then the vertical direction is the local minimum and hence the optimal direction for interpolation is D = 0. If ME(0) is not the minimum, then ME(+1) and ME(-1) are compared and the comparison is continued in the orientation with the smaller error until a minimum is found and the orientation is ascertained.
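The search described above could be sketched as follows. The match errors are assumed to be available in a dictionary keyed by L; the outward walk and the orientation range ±L_max are illustrative assumptions:

```python
def find_local_minimum(ME, L_max=2):
    """Start at the vertical direction L = 0, then walk outwards on the more
    promising side until the match error stops decreasing."""
    if ME[0] <= ME[1] and ME[0] <= ME[-1]:
        return 0                              # vertical direction is the local minimum, D = 0
    step = 1 if ME[1] < ME[-1] else -1        # continue on the side with the smaller error
    D = step
    while abs(D + step) <= L_max and ME[D + step] < ME[D]:
        D += step
    return D

# Example with hypothetical match errors keyed by orientation L:
errors = {-2: 9.0, -1: 6.0, 0: 4.0, 1: 2.0, 2: 3.0}
print(find_local_minimum(errors))             # -> 1
```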
Interpolation of the pixel value may be calculated exclusively from the set of pixels in the identified orientation or it may be calculated from a mix of pixels from the identified orientation and pixels from the vertical direction. A typical equation of interpolation in such a case is given by
F_new = α·(F(x+D, y-1) + F(x-D, y+1))/2 + (1-α)·(F(x, y-1) + F(x, y+1))/2    (7)

where F(x+D, ·) refers to the pixel values in orientation D and F(x, ·) refers to pixel values in the vertical direction.
A direction confidence α, as shown in equation (7), may be used for the ratio of the mix. The direction confidence α may be determined from a measure-of-confidence function with variables D, ME(D) and ME(0), where D is the identified orientation and ME(D) and ME(0) are the match errors in the identified and vertical directions respectively.
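Equation (7) amounts to blending the average taken along the identified orientation with the plain vertical average. A minimal sketch, in which the confidence alpha is passed in as a number in [0, 1] because the patent leaves its exact derivation from D, ME(D) and ME(0) open:

```python
def interpolate_pixel(F, x, y, D, alpha):
    """F_new per equation (7): alpha weights the average taken along the
    identified orientation D against the average taken in the vertical direction."""
    along_edge = (F[y - 1][x + D] + F[y + 1][x - D]) / 2.0
    vertical = (F[y - 1][x] + F[y + 1][x]) / 2.0
    return alpha * along_edge + (1.0 - alpha) * vertical
```

With alpha = 1 the pixel is interpolated purely along the detected edge; with alpha = 0 the result falls back to plain vertical averaging.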
The proposed method can, unlike the prior-art method, detect and correctly interpolate along a constant gradient. The interpolation unit is capable of handling both constant pixel values and constant pixel-value gradients satisfactorily. The averaged mean square error (MSE) values were found to be superior to those of prior-art methods.
Fig. 5 schematically shows an image processing apparatus 500 according to the invention comprising: receiving means 510 for receiving a signal 501 representing input images; the image interpolation unit 520 described in connection with Fig. 1; and optional display means 530 for displaying the output images.
The signal 501 may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (video cassette recorder) or digital versatile disk (DVD). The signal is provided at the input connector of 510. The image processing apparatus 500 might be e.g. a set-top box, satellite tuner, VCR player, or DVD player or recorder. Optionally, the image processing apparatus comprises storage means, like hard-disk means or means for storage on removable media, e.g. optical disks. The output image may be displayed or transmitted to another apparatus, for example a cable, wireless or internet broadcast system. The image processing apparatus 500 might also be a system applied by a film studio or broadcaster.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word comprising does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The usage of the words first, second, third etc. does not indicate any ordering; the words are to be interpreted as names.

Claims

CLAIMS:
1. An image interpolation unit for creating an output image on basis of an input image by means of interpolation of pixel values of the input image, the image interpolation unit comprising: selecting means for selecting sets of evaluation pixels for respective orientations (50, 60, 70) in the input image, related to a particular pixel (10) of the output image to be interpolated; computing means for computing match errors for the respective orientations by comparing values of subsets (400, 410, 420) of the sets of evaluation pixels selected; selecting means for selecting a particular orientation based on the match errors computed; creating means for creating a set of interpolation pixels on basis of the particular orientation; and computing means for computing a pixel value of the particular pixel on basis of the set of interpolation pixels created, characterized in that, the computing means for computing the match errors are arranged to compute a first difference between first values of a first subset (400) of a first one of the sets of evaluation pixels and a second difference between second values of a second subset(410) of the first one of the sets (300) of evaluation pixels and for computing a further difference between the first difference and the second difference.
2. An image interpolation unit as claimed in Claim 1 wherein the sets of evaluation pixels comprise a predetermined number of pixels of the input image in a spatial neighbourhood of the particular pixel (10) of the image to be interpolated.
3. An image interpolation unit as claimed in Claims 1 or 2 wherein a first subset
(400) of a first one of the sets (300) of evaluation pixels comprises pixels from a first pair of rows (350, 360), a second subset (410) of the first one of the sets (300) of evaluation pixels comprises pixels from a second pair of rows (360, 370), the first and the second pairs of rows having a common row(360), adjacent to the particular pixel of the image to be interpolated.
4. An image interpolation unit as claimed in Claim 3 wherein a first one of match errors is computed on basis of the second subset(410)of the first one of the sets(410) of evaluation pixels and a third subset (420) of the first one of the sets(300) of evaluation pixels that comprises pixels from a third pair of rows(470, 480), the second and third pairs of rows having a further common row(470), adjacent to the particular pixel of the image to be interpolated.
5. An image interpolation unit as claimed in Claim 4 wherein the first one of the match error is a sum of a first intermediate match error computed between the first (400) and second subsets(410) and a second intermediate match error computed between the second(410) and the third (420) subsets.
6. An image interpolation unit as claimed in Claim 4 wherein the first one of the match error is a difference of a first intermediate match error computed between the first
(400) and second subsets(410) and a second intermediate match error computed between the second(410) and the third (420) subsets.
7. An image interpolation unit as claimed in Claim 1 wherein the selecting means for selecting the particular orientation are arranged to select the particular orientation based on the minimum of the match errors being computed for respective orientations.
8. An image interpolation unit as claimed in Claim 1 wherein the computing means for computing the pixel value of the particular pixel are arranged to compute the pixel value of the particular pixel by averaging the pixel values of the set of interpolation pixels.
9. An image processing apparatus (500) comprising: receiving means(510)for receiving a signal corresponding to a sequence of input images; and - image interpolation unit(520) for creating an output image on basis of one of the input images as claimed in Claim 1.
10. A method of creating an output image on basis of an input image by means of interpolation of pixel values of the input image, the method comprising: selecting sets of evaluation pixels for respective orientations in the input image, related to a particular pixel of the output image to be interpolated; computing match errors for the respective orientations by comparing values of subsets of the sets of evaluation pixels selected; selecting a particular orientation based on the match errors computed; creating a set of interpolation pixels on basis of the particular orientation; and computing a pixel value of the particular pixel on basis of the set of interpolation pixels being created, characterized in that computing a first one of the match errors comprises computing a first difference between first values of a first subset of a first one of the sets of evaluation pixels and a second difference between second values of a second subset of the first one of the sets of evaluation pixels and computing a further difference between the first difference and the second difference.
EP05784295A 2004-09-24 2005-09-23 Image interpolation Withdrawn EP1794715A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP05784295A EP1794715A2 (en) 2004-09-24 2005-09-23 Image interpolation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP04104667 2004-09-24
PCT/IB2005/053150 WO2006033084A2 (en) 2004-09-24 2005-09-23 Image interpolation.
EP05784295A EP1794715A2 (en) 2004-09-24 2005-09-23 Image interpolation

Publications (1)

Publication Number Publication Date
EP1794715A2 true EP1794715A2 (en) 2007-06-13

Family

ID=35985873

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05784295A Withdrawn EP1794715A2 (en) 2004-09-24 2005-09-23 Image interpolation

Country Status (5)

Country Link
EP (1) EP1794715A2 (en)
JP (1) JP2008515266A (en)
KR (1) KR20070068409A (en)
CN (1) CN101027691A (en)
WO (1) WO2006033084A2 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101473656B (en) * 2006-06-29 2011-09-14 汤姆森许可贸易公司 Adaptive filtering based on pixel
CN100551073C (en) * 2006-12-05 2009-10-14 华为技术有限公司 Decoding method and device, image element interpolation processing method and device
KR101229376B1 (en) * 2012-04-25 2013-02-05 대한민국 Method and apparatus for identifying source camera by detecting interpolaton pattern used in lens distortion correction
AU2013263760A1 (en) * 2013-11-28 2015-06-11 Canon Kabushiki Kaisha Method, system and apparatus for determining a depth value of a pixel

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5273040A (en) * 1991-11-14 1993-12-28 Picker International, Inc. Measurement of vetricle volumes with cardiac MRI
JP3438032B2 (en) * 1994-03-15 2003-08-18 松下電器産業株式会社 Spatial frequency adaptive interpolation method and spatial frequency adaptive interpolation device
FI97590C (en) * 1994-12-15 1997-01-10 Nokia Technology Gmbh A method and arrangement for highlighting edges in a video image
US5852470A (en) * 1995-05-31 1998-12-22 Sony Corporation Signal converting apparatus and signal converting method
US20040120605A1 (en) * 2002-12-24 2004-06-24 Wen-Kuo Lin Edge-oriented interpolation method for deinterlacing with sub-pixel accuracy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2006033084A2 *

Also Published As

Publication number Publication date
KR20070068409A (en) 2007-06-29
JP2008515266A (en) 2008-05-08
WO2006033084A2 (en) 2006-03-30
WO2006033084A3 (en) 2006-06-01
CN101027691A (en) 2007-08-29

Similar Documents

Publication Publication Date Title
US7259794B2 (en) De-interlacing device and method therefor
KR101135454B1 (en) Temporal interpolation of a pixel on basis of occlusion detection
US7414671B1 (en) Systems and methods for display object edge detection and pixel data interpolation in video processing systems
US8331689B2 (en) Detecting a border region in an image
KR101725167B1 (en) Methods and systems for image registration
US6295083B1 (en) High precision image alignment detection
US8325196B2 (en) Up-scaling
WO2006033084A2 (en) Image interpolation.
US9147257B2 (en) Consecutive thin edge detection system and method for enhancing a color filter array image
JP2007501561A (en) Block artifact detection
US7233363B2 (en) De-interlacing method, apparatus, video decoder and reproducing apparatus thereof
US20070036466A1 (en) Estimating an edge orientation
KR20130001626A (en) De-interlacing apparatus using weight based on wiener filter and method of using the same
KR20060029283A (en) Motion-compensated image signal interpolation
US20060257029A1 (en) Estimating an edge orientation
JP4650684B2 (en) Image processing apparatus and method, program, and recording medium
US20050036072A1 (en) Method and device for determining the spacing between a first and a second signal sequence
WO2005091625A1 (en) De-interlacing
JP4597282B2 (en) Image information conversion apparatus, conversion method, and display apparatus
WO2006082542A1 (en) Clipping

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

17P Request for examination filed

Effective date: 20070424

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

18W Application withdrawn

Effective date: 20070521