GB2488591A - Image Interpolation Including Enhancing Artefacts - Google Patents

Image Interpolation Including Enhancing Artefacts

Info

Publication number
GB2488591A
Authority
GB
United Kingdom
Prior art keywords
output
pixels
filter
image
interpolation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1103691.0A
Other versions
GB201103691D0 (en)
Inventor
Karl James Sharman
Manish Devshi Pindoria
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Priority to GB1103691.0A
Publication of GB201103691D0
Priority to PCT/GB2012/050431
Publication of GB2488591A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/403: Edge-driven scaling; Edge-based scaling
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007: Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/001
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0117: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/01: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/0142: Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes the interpolation being edge adaptive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Image Processing (AREA)

Abstract

Image processing apparatus in which output pixels of an output image are generated by interpolation with respect to an input image comprises an interpolation filter for generating each output pixel from a corresponding group of input pixels derived from the input image. A filter processor modifies the interpolation filter output or the input pixels so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image. These artefacts, such as ringing, provide a sharper output image. The interpolation filter may comprise a bicubic filter. Test pixels may be used to control the modification performed by the interpolation filter.

Description

IMAGE PROCESSING

This invention relates to image processing.
Digital interpolation filters are often used to generate output filter values from input filter values. The input and output filter values may be, for example, pixel values in an image being processed by an image or video processing apparatus.
For example, so-called bicubic filters may be used as an interpolation filter. These filters are relatively immune to artefacts generally considered undesirable, such as so-called ringing artefacts.
This invention provides image processing apparatus in which output pixels of an output image are generated by interpolation with respect to an input image, the apparatus comprising: an interpolation filter for generating each output pixel from a corresponding group of input pixels derived from the input image; and a filter processor for modifying the interpolation filter output so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
The invention makes the counter-intuitive step of providing a filter processor which actually positively enhances ringing in the output image, so as to provide a sharper output image.
Further respective aspects and features of the invention are defined in the appended claims.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:

Figures 1 and 2 schematically illustrate an image scaling process;
Figure 3 is a schematic block diagram of an image scaling apparatus;
Figures 4a and 4b schematically illustrate a variance minimisation process;
Figure 5 schematically illustrates a direction calculating template;
Figures 6a to 6h schematically illustrate templates for angle detection;
Figures 7 and 8 schematically illustrate a Sinc filter response;
Figures 9 and 10 schematically illustrate a rotated Sinc filter response;
Figures 11 and 12 schematically illustrate a sheared Sinc filter response;
Figure 13 schematically illustrates a data shearing process using a Bicubic filter;
Figures 14 to 19 schematically illustrate examples of a data shearing process;
Figure 20 schematically illustrates a television display apparatus;
Figure 21 schematically illustrates a video camera;
Figure 22 schematically illustrates Bicubic interpolation;
Figure 23 schematically illustrates Bicubic spline interpolation;
Figure 24 schematically illustrates pixels surrounding an abrupt image transition;
Figure 25 schematically illustrates changes relating to symmetrical gain;
Figure 26 schematically illustrates changes relating to non-linear gain; and
Figure 27 schematically illustrates changes relating to asymmetrical gain.
Referring now to the drawings, Figures 1 and 2 schematically illustrate an image scaling process, as applied to a source image 10. In each case the source image (a two-dimensional array of input pixels) 10 is processed by an image scaling apparatus 20, to generate a larger output image 30 (Figure 1) or a smaller output image 40 (Figure 2) formed of output pixels.
Here, the terms "larger" and "smaller" refer to the number of pixels in the respective output images, rather than to a format in which the images are actually displayed. So, for example, if the input image has 1920 (horizontal) x 1080 (vertical) pixels, the larger output image 30 might have 2880 x 1620 pixels, and the smaller output image 40 might have 960 x 540 pixels.
Clearly, these pixel numbers are just examples, and in general the image scaler 20 can operate to scale the image across a range of scale factors which may include scale factors greater than one (for image enlargement) and/or scale factors less than one (for image size reduction). In the present description, for the purposes of the present examples the scale factors will be expressed as linear scale factors and will be considered to be the same in both the horizontal and the vertical axes (horizontal and vertical being orthogonal directions and relating to a horizontal and a vertical image direction respectively). However, in other embodiments the scale factors could be different for the two axes.
The image scaling arrangements described here use two-dimensional digital interpolation filtering techniques to generate pixels of the scaled output image, so that each output pixel is based on a mathematical combination of a group of two or more input pixels.
Such filtering techniques are appropriate because (a) when the input image is to be enlarged, new pixel values need to be generated at pixel positions which did not exist in the input image; (b) the same can be true if the image is to be reduced in size, because depending on the scale factor the output pixel positions may not align exactly with input pixel positions; and (c) even if the output pixel positions in a scale-reduced output image do align with input pixel positions, a more aesthetically pleasing result can be obtained by using a filtering process rather than simply discarding unwanted input pixels.
Video scaling is a significant application of image scaling techniques. At a simple level, each image of the succession of images forming a video signal can be scaled (larger or smaller) by an image scaling apparatus 20, so as to form a scaled video signal. Video scaling is often used for conversion between different video standards (such as between so-called standard definition (SD) video and so-called high definition (HD) video), for video special effects, for forensic analysis of video such as in the analysis of surveillance video signals or video signals relating to astronomy observations, or for other purposes. The apparatus of Figures 1 and 2 may be considered as video scaling apparatus when arranged to operate on one or more of the successive images of a video signal. The skilled person will appreciate that video scaling apparatus, method, software, computer program product and the like are all considered to be embodiments of the present invention and to be included within the scope of the present description. In particular, when a technique is described here as relating to the processing of an image, it will be appreciated that the technique may be applied to the scaling of each successive image of a video signal.
In the present description, many of the techniques are applicable to either image scaling or video scaling. Where a technique is described that is applicable to only one of these technical fields, or is more suitable for use in one of the technical fields, that restriction or area of particular usefulness will be indicated.
Figure 3 is a schematic block diagram of an image scaling apparatus such as the apparatus 20 of Figures 1 and 2.
A source image (that is, an image to be scaled) is supplied as an input to the apparatus, as shown at the left hand side of the drawing. The apparatus generates a scaled output image, as indicated at the right hand side of the drawing.
The source image is processed by an optional pre-interpolation filter 100. The pre-interpolation filter performs bandwidth adjustments that may be required, such as softening or sharpening of an image. The filter 100 has separate parameters for the horizontal and vertical directions. The filter is a separable generic 11-tap horizontal and vertical symmetrical filter, where the coefficients can be completely user-defined. The output of the filter is passed to an interpolator 130.
Traditional methods of image interpolation typically use two one-dimensional filters to scale an image separately in the horizontal and vertical directions. An example of this is a poly-phase Sinc filter. This approach performs very well for horizontal and vertical lines, but diagonal lines are resolved less accurately and the picture quality can suffer. The present embodiments relate to an algorithm that uses a two-dimensional approach to "shear" the data; in other words, applying a shear transformation to the two-dimensional digital filter and/or to the array of input pixels so as to map (or align) the operation of the filter to the detected image feature direction in at least one of the two axes, such as by shaping an edge feature to align either with the horizontal or vertical axis but (in embodiments of the invention) leaving the operation of the filter unchanged (at least in terms of its direction of operation) in respect of the other of the two axes. Once the data (or the interpolation filter) has been sheared by the required amount, the data can be interpolated by applying the digital interpolation filter to obtain the output pixels.
When the shearing result cannot be used, the required interpolated value is produced using a standard Bicubic interpolation method. Embodiments of this combination algorithm have been shown in empirical tests to significantly increase picture quality.
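As an illustrative sketch of the standard Bicubic fallback, the following fragment performs separable cubic interpolation over a 4x4 neighbourhood: cubic in the horizontal direction on four rows, then cubic vertically on the four results. The Catmull-Rom kernel and the function names here are assumptions for illustration only; the description does not specify the exact Bicubic coefficients used.

```python
def cubic_interp(p0, p1, p2, p3, t):
    # Catmull-Rom cubic: interpolates between p1 and p2 for t in [0, 1].
    # The kernel choice is an illustrative assumption, not taken from the source.
    return 0.5 * (
        2.0 * p1
        + (-p0 + p2) * t
        + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
        + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t
    )

def bicubic_sample(img, x, y):
    # Separable 2D interpolation: cubic in x over four rows, then cubic in y.
    # img is a 2D list of pixel values; x, y are fractional coordinates whose
    # integer parts must leave a one-pixel border inside the image.
    ix, iy = int(x), int(y)
    tx, ty = x - ix, y - iy
    cols = [
        cubic_interp(img[iy + r][ix - 1], img[iy + r][ix],
                     img[iy + r][ix + 1], img[iy + r][ix + 2], tx)
        for r in (-1, 0, 1, 2)
    ]
    return cubic_interp(cols[0], cols[1], cols[2], cols[3], ty)
```

A flat region interpolates to its own value, and the kernel passes exactly through the two central samples at t = 0 and t = 1.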
Therefore, a feature of the interpolation filtering process is that interpolation effectively takes place along the direction of image features (such as lines or edges). In order to do this, the angles of features present in the image are found by an interpolation angle determination unit which detects the direction of an image feature at a pixel position to be interpolated. For an image feature, the angle and a measure of certainty for the angle, or in other words a degree of confidence in the detected image feature direction, are calculated using a statistically based method and other checks, the details of which are provided below.
Once the angle has been determined, information defining the angle is passed to a shearing interpolator 140, which also receives the filtered source image data from the filter 100 and generates output pixels based on interpolation along the direction of the image features detected by the interpolation angle determination unit 110. The filtered source image data is also passed to a non-shearing interpolator 150 which generates output pixels by a horizontal / vertical interpolation approach (that is, without reference to the image feature direction).
Information defining the measure of certainty in the currently detected image feature angle is passed to a controller 160. The controller 160 controls the operation of a mixer 170 which mixes (or combines) as a weighted combination, or (as extreme examples of mixing) selects between, the outputs of the shearing interpolator 140 and the non-shearing interpolator 150 to provide pixels of the output image. In effect, the output pixels are therefore selectively generated by applying and/or not applying the shear transformation, in dependence on the properties of the input image, such as a detected degree of confidence in the detected image feature direction. More specifically, a first version of an output pixel is generated by applying the shear transformation; a second version of that output pixel is generated without applying the shear transformation; and the first and second versions are mixed according to a ratio dependent upon the detected degree of confidence, to generate the output pixel. In embodiments of the invention the mixing operation is such that a higher degree of confidence in the detected image feature direction results in a higher proportion of the output pixel being derived from the first version.
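The per-pixel weighted combination can be sketched as follows, assuming (as described later) a certainty value in the range 0 to 255. The function and argument names are illustrative, not taken from the source.

```python
def mix_pixel(sheared, non_sheared, certainty_factor):
    # Weighted combination of the two interpolator outputs, controlled by
    # certaintyFactor (0 = no confidence in the detected feature direction,
    # 255 = full confidence). Extreme values amount to a selection.
    w = certainty_factor / 255.0
    return w * sheared + (1.0 - w) * non_sheared
```

With full confidence the output is the sheared-interpolator result; with zero confidence it is the non-sheared result; intermediate values blend the two.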
The arrangement of Figure 3 can be implemented in hardware, programmable or custom hardware such as one or more application specific integrated circuits or one or more field programmable gate arrays, as a general purpose computer operating under the control of suitable software, firmware or both, or as a combination of these. In instances where software or firmware is involved in the implementation of embodiments of the invention, it will be appreciated that such software or firmware, and a providing medium such as a storage medium (for example an optical disk) or a network connection (for example an internet connection) by which such software or firmware is provided, are considered as embodiments of the invention.
It will be understood that where specific functionality is described below, a unit to perform such functionality may be provided as the corresponding unit of Figure 3.
Interpolation Angle Determination

The operation of the interpolation angle determination unit 110 will now be described.
The unit 110 detects an interpolation angle associated with each pixel of the source image and also carries out some quality checks to detect whether the detected angle is a reliable one for use in interpolation.
In particular, the unit 110 generates the angle of interpolation (θ) and the measure of certainty (certaintyFactor). This variable certaintyFactor, which provides a level of confidence as to how well the feature matches the angle (θ) found, is used later in the process to calculate a mixing value for the mixer 170. The interpolation angle determination is conducted before any filtering is conducted using the detected angle.
The angle determination module operates according to a technique based upon finding the best direction for interpolation by minimisation of variance.
The angle determination process can be conducted (for example) on a greyscale image or separately on each GBR (green, blue, red) component. If the latter is used, then the GBR results can be combined (for example according to the standard ratios of G, B and R used in the generation of a luminance (Y) value) so as to generate a single output result. In one embodiment, separate GBR results are combined so as to give preference to the G result, then preference to the R result, then the B result.
In order to avoid finding incorrect angles in areas of low picture detail, the certainty level of an angle is set to indicate zero certainty if the variance (or standard deviation) of the pixels in a 3x3 neighbourhood around a current pixel under test is below a programmable threshold. The variance within this 3x3 neighbourhood is also used as an image activity measure in a GBR to greyscale conversion process.
An alternative image activity measure, which can save hardware or processing requirements, is to use a sum of absolute differences calculated with respect to a mean pixel value over the 3 x 3 block, the detected image activity representing differences in pixel value between pixels in a group, so that greater pixel differences indicate a greater image activity.
In either case, the image activity measure is derived separately for each of the three colour components G, B and R, in respect of a group of input pixels around a pixel position to be interpolated.
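The cheaper sum-of-absolute-differences activity measure described above can be sketched directly. The function name is illustrative; the 3x3 neighbourhood and the differences-from-the-mean formulation follow the description.

```python
def image_activity_sad(block3x3):
    # Image activity as the sum of absolute differences from the mean pixel
    # value over a 3x3 neighbourhood: greater pixel differences indicate
    # greater image activity. This is the hardware-saving alternative to
    # computing the full variance.
    pixels = [v for row in block3x3 for v in row]
    mean = sum(pixels) / len(pixels)
    return sum(abs(v - mean) for v in pixels)
```

A flat block gives zero activity; any detail in the block gives a positive value, which can then be compared against the programmable threshold.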
A benefit of converting the image to greyscale for the purposes of the unit 110, so that the angle finding process needs be conducted on only one channel rather than on three separate channels, is a potential reduction in hardware requirements. The GBR data could be converted to greyscale using the following formula:

    Y = gCoeff*G + bCoeff*B + rCoeff*R

where gCoeff, bCoeff and rCoeff are the standard colour conversion coefficients (that is, the coefficients used to generate a standard Y value from G, B and R values, one example being Y = 0.2126 R + 0.7152 G + 0.0722 B). However, using fixed coefficients in this way, two very different colours in the GBR colour space could produce identical greyscale colours, and therefore a particular image feature might not be detected in the resulting greyscale image.
Hence, a method to adaptively change the colour coefficients is used in embodiments of the invention to overcome this problem.
In the adaptive technique, in order to capture the detail in the image, the weighting from a particular colour component is increased when the detail in the image from that component is significantly greater than the detail in the other colour components. Therefore monochromatic test pixels are derived by combining the colour components of corresponding input pixels in relative proportions dependent on the detected image activities. The detail amount is measured using the image activity, as described above. The default colour conversion coefficients are adjusted using the image activities, as follows:

    gCoeff' = gCoeff + α × ImageActivity(GREEN)
    bCoeff' = bCoeff + β × ImageActivity(BLUE)
    rCoeff' = rCoeff + γ × ImageActivity(RED)

    Y = (gCoeff'*G + bCoeff'*B + rCoeff'*R) / (gCoeff' + bCoeff' + rCoeff')

where α, β, γ are programmable parameters, and Y is the greyscale pixel as an output of this part of the process and which is used by the unit 110 for angle determination, that is, for the detection of the direction of an image feature in a group of the monochromatic test pixels around a pixel position to be interpolated. The direction detection may comprise detecting a direction, within the group of monochromatic test pixels, for which pixels along that direction exhibit the lowest variation (such as standard deviation, variance or sum of absolute differences from a mean) in pixel value. This process aims to boost detail levels which would be hidden during standard conversion methods. More generally, the arrangement aims to vary the relative proportions of the colour components in a monochromatic test pixel so that colour components with higher image activity contribute a generally higher proportion of the monochromatic test pixel, for example by modifying a set of coefficients defining the contribution of each colour component to a monochromatic pixel in dependence upon the detected image activity relating to that colour component.
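The adjusted-coefficient formula above can be sketched as follows. The standard Rec. 709 style coefficients are used as the defaults, and the values of alpha, beta and gamma shown are placeholders: the patent only states that they are programmable parameters.

```python
def adaptive_greyscale(g, b, r, act_g, act_b, act_r,
                       alpha=0.01, beta=0.01, gamma=0.01):
    # Boost each channel's default coefficient by that channel's image
    # activity, then renormalise by the sum of the adjusted coefficients.
    # alpha/beta/gamma values are illustrative assumptions.
    g_coeff = 0.7152 + alpha * act_g
    b_coeff = 0.0722 + beta * act_b
    r_coeff = 0.2126 + gamma * act_r
    total = g_coeff + b_coeff + r_coeff
    return (g_coeff * g + b_coeff * b + r_coeff * r) / total
```

With zero activity in every channel this reduces to the standard Y conversion, while activity in one channel pulls the greyscale value towards that channel, so detail in that channel is less likely to be lost.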
The next stage of the process aims to find the best angle for interpolation, by determining which direction has the lowest variation. Here, the terms "lowest" and (to be used elsewhere) "minimise" are used merely to indicate that the lowest variation is detected amongst those variation values tested.
Figure 4a schematically illustrates a region of 5 x 5 pixels of the source image, with a current pixel position under test 200 being at the centre of the 5 x 5 region. The aim is to detect an image feature direction relevant to the pixel 200. A black line (as an example of an image feature whose angle should be detected) extends from the top left of the region to the bottom right of the region. Using the notation of Figure 4a, the angle of this image feature is considered to be 45°, where 0° is considered to be the horizontal, and angles are considered to increase in a clockwise direction from the horizontal.
Figure 4a is a simplified example in which dashed lines 210, 220, 230, 240 indicate examples of possible interpolation angle or search directions. The 5 pixels that lie in each particular possible interpolation direction are shown in Figure 4b, where each row of Figure 4b represents the five pixels, passing through the pixel position 200, along that possible direction.
By considering the variance of the pixels for each search direction, it is clear visually that the 45° possible interpolation direction (along the line 210) contains the pixels with the lowest variance. This simple example has shown that the direction of a particular feature may be defined by the direction which has the lowest pixel variation.
In general, the variation of the pixel values along a line relating to a search or test direction can be measured using the variance (or standard deviation) of points in the search direction. An exhaustive search for every angle (within the resolution limits of the system) may be used in order to obtain the best angle. However, this process would require a large amount of hardware or other processing resources. Instead, therefore, in embodiments of the invention a set of mathematical expressions for the variance can be calculated and minimised to find the required direction, again within the resolution limits of the system.
In embodiments of the invention, the search for the minimum variance is calculated over a 5x5 pixel neighbourhood, with the pixel for which the angle is currently being calculated centred in the middle. The arrangement used means that it is not possible to search every direction using just one equation. Instead, respective sub-sets of all the directions are examined and the minimum variance is then found from among those results. Each sub-set of directions is referred to below as a "template". In total, eight templates are required in order to test every possible direction. One particular template is schematically illustrated in Figure 5. Figure 5 shows the 5 x 5 array of source (input image, or filtered input image) pixels from which an angle is determined, with the current pixel under test 200 (labelled in Figure 5 as a pixel e) in the centre of the array. Each pixel is indicated by a square box. Pixels a to i represent the source pixels that are required for this template. This particular template is used to find the best angle between 0° and 26°; another seven templates search between other sets of angles, as shown in Figures 6a to 6h. The specific example of Figure 5 corresponds to the template shown in Figure 6h. In each template shown in Figures 6a to 6h, the range of angles tested by that template is illustrated as those angles between (and including) the acute intersection of the two lines labelled as φ=0 and φ=1.
For each template, the variable φ is allowed to vary from 0 to 1. The value of φ therefore maps to a measure of the angle. That is to say, for the template shown in Figure 5, and using the angular notation introduced in Figure 4a, when φ=0 the corresponding angle is 0°, and when φ=1 the angle is approximately 26.57°; more precisely:

    angle = tan⁻¹(φ/2)

The value of φ therefore indicates a particular search (or possible interpolation) direction. An example search direction is shown in Figure 5 by a dashed search line 250, where regularly spaced points along the search line are indicated by the symbol "x" and are labelled j, k, l, m and n (note that because the search line passes through the centre of the pixel e, the "x" is shown overlying the letter "e" indicating that pixel). For most search directions, the points j, k, m and n do not lie exactly on a source pixel, so their values are determined using neighbouring pixel values. In particular, the values of these points are found by linear interpolation of the pixels above and below them (or to the left and right for other templates).
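The φ-to-angle mapping and the linear interpolation of the points j to n for the Figure 5 template can be sketched as below. The line equations are a reconstruction consistent with the differentials quoted in the accurate-search derivation later in this description (dj/dφ = b − a, dk/dφ = (d − c)/2, dm/dφ = (f − g)/2, dn/dφ = h − i), so they should be read as an assumption rather than a verbatim transcription.

```python
import math

def phi_to_angle_deg(phi):
    # For the Figure 5 template: phi = 0 maps to 0 degrees, phi = 1 to
    # approximately 26.57 degrees, via angle = arctan(phi / 2).
    return math.degrees(math.atan(phi / 2.0))

def sample_search_line(a, b, c, d, e, f, g, h, i, phi):
    # Points j..n on the search line, linearly interpolated from the
    # source pixels above and below them.
    j = (1 - phi) * a + phi * b
    k = (1 - phi / 2.0) * c + (phi / 2.0) * d
    l = e
    m = (1 - phi / 2.0) * g + (phi / 2.0) * f
    n = (1 - phi) * i + phi * h
    return j, k, l, m, n
```

At φ = 0 the sampled points are simply the central column of pixels (a, c, e, g, i), and the angle is 0°, as the description states.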
The angle that yields the lowest variation in j, k, l, m and n is considered the best angle for interpolation, amongst angles in the range which is under test using that template. Then the respective best angles obtained for each of the templates are compared, and amongst the set of best angles, the one which has the lowest variance is considered to be the overall best angle, that is, the angle for use as the output of the interpolation angle determination unit.
Although every template could be searched exhaustively at the full available accuracy, as an alternative (in order to save processing resources) an approximate search can first be conducted to indicate which of the eight templates contains the best angle. The approximate search could involve operating at a lower calculation accuracy and/or a more coarse angular resolution than those applicable to a more thorough method. Once the template containing the best angle has been identified, the more thorough method can then be applied to that template, to find a more accurate angle. This two-stage approach involves detecting a direction selected from a first set of directions, for which pixels along that direction exhibit the lowest variation in pixel value; and detecting the image feature direction by detecting a direction selected from a second set of directions around the detected first direction, the second set having a higher resolution than the first set of directions, for which pixels along that direction exhibit the lowest variation in pixel value.
A technique for such an approximate search will now be described.
In one example, as an approximate search in order to find the best template, a test angle at a value of φ mid-way through each template (φ = 1/2) is used to find the variance of the pixels on the test line. This test angle provides an approximation of the variance values that this template could produce. To ease hardware requirements, only the central three pixels (k, l and m, where l = e anyway) may be used for this variance calculation. Using φ = 1/2, the pixel values k, l and m are found as:

    k = (3c + d)/4
    l = e
    m = (3g + f)/4

Then the variance of these three pixels can be calculated. To ease hardware requirements further, an approximation to the standard deviation may be calculated as a sum of absolute differences (SAD) value rather than the full variance expression:

    mean(φ) = (k + l + m)/3
    sad(φ) = |k − mean(φ)| + |l − mean(φ)| + |m − mean(φ)|

Note that for a true standard deviation, the values shown above should be divided by √3. However, as the test is for which template provides a minimum value of the expression, a constant factor of √3 may be ignored.
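The per-template approximate score can be sketched as follows, using the φ = 1/2 sample values for k, l and m given above. The function name is illustrative.

```python
def approx_template_sad(c, d, e, f, g):
    # Approximate search: evaluate the template at phi = 1/2 using only the
    # central three points k, l, m (with l = e). At phi = 1/2 the sampled
    # values reduce to k = (3c + d)/4 and m = (3g + f)/4.
    k = (3.0 * c + d) / 4.0
    l = e
    m = (3.0 * g + f) / 4.0
    mean = (k + l + m) / 3.0
    return abs(k - mean) + abs(l - mean) + abs(m - mean)
```

Computing this score once per template gives eight cheap values; the template with the minimum score is then searched accurately.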
The minimum standard deviation or SAD value from the eight approximate values derived from the eight templates indicates the template that contains the best direction. A more accurate calculation can now be conducted on that single template. The accurate search process will now be described.
In order to find the best angle for the single template identified for further examination, the angle corresponding to the minimum variance should be found. This can be derived by minimising an expression for the variance.
Pixel values for the five pixels j to n on a test line 250, as a function of φ, are given by the following equations:

    j = (1 − φ)a + φb
    k = (1 − φ/2)c + (φ/2)d
    l = e
    m = (1 − φ/2)g + (φ/2)f
    n = (1 − φ)i + φh

The differentials of these expressions are:

    dj/dφ = b − a
    dk/dφ = (d − c)/2
    dl/dφ = 0
    dm/dφ = (f − g)/2
    dn/dφ = h − i

By finding the value of φ which corresponds to the minimum variance of j, k, l, m and n, the best direction for this template may be obtained. The variance of the search line is given by:

    var(φ) = (j² + k² + l² + m² + n²)/5 − (j + k + l + m + n)²/25

Differentiating with respect to φ:

    25 var′(φ) = 10j(dj/dφ) + 10k(dk/dφ) + 10l(dl/dφ) + 10m(dm/dφ) + 10n(dn/dφ)
                 − 2(dj/dφ + dk/dφ + dl/dφ + dm/dφ + dn/dφ)(j + k + l + m + n)

Substituting the equations given above for the pixel values j to n as a function of φ, and the expressions derived above for the differentials of those values with respect to φ, and then equating to 0, the value of φ, φmin, that corresponds to the minimum variance can be found:

    φmin = ( [b−a]×[8a−2c−2e−2g−2i] + [d−c]×[4c−a−e−g−i]
             + [f−g]×[4g−a−c−e−i] + [h−i]×[8i−2a−2c−2e−2g] )
           ÷ ( −[b−a]×[8(b−a)−2(d−c)−2(f−g)−4(h−i)] + [d−c]×[2(d−c)−(f−g)−2(h−i)]
             + [f−g]×[2(f−g)−2(h−i)] + [h−i]×[8(h−i)] )

This value of φ is then clipped to the range of 0 to 1, so that only valid values for the template are formed.
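A brute-force numerical minimisation of var(φ) provides a simple stand-in for (and sanity check on) a closed-form φmin expression. The line equations below are the same reconstruction used elsewhere in this description and should be treated as an assumption; the scan over [0, 1] also makes the clipping of φ to valid values implicit.

```python
def variance_along_line(a, b, c, d, e, f, g, h, i, phi):
    # Sample the five points j..n on the search line, then compute their
    # variance as sum(x^2)/5 - mean^2.
    j = (1 - phi) * a + phi * b
    k = (1 - phi / 2.0) * c + (phi / 2.0) * d
    l = e
    m = (1 - phi / 2.0) * g + (phi / 2.0) * f
    n = (1 - phi) * i + phi * h
    vals = (j, k, l, m, n)
    mean = sum(vals) / 5.0
    return sum(v * v for v in vals) / 5.0 - mean * mean

def phi_min_numeric(pixels, steps=1000):
    # Scan phi over [0, 1] and keep the value giving the lowest variance.
    # pixels is the 9-tuple (a, b, c, d, e, f, g, h, i) for one template.
    return min((s / steps for s in range(steps + 1)),
               key=lambda p: variance_along_line(*pixels, p))
```

For a neighbourhood where the φ = 1 line passes through uniform pixel values, the scan correctly lands on φ = 1 (zero variance along that line).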
The φmin value calculated represents the angle that will be used for interpolation. At this stage the certaintyFactor of this angle is given as 100% confident, but this value may be subsequently modified by various checks to be described below.
In order to detect aliased lines or incorrect detection of an angle, a number of checks are processed at this stage using the 8 template SAD values derived as part of the approximate search described above.
Check 1: The presence of multiple minima is detected. If any of the templates, other than the two templates directly neighbouring the minimum SAD value, has a SAD that is equal to the minimum SAD value, the data is considered to be aliased and therefore this angle cannot be used. The certaintyFactor is set to 0 (completely uncertain) for this pixel.
Check 2: Angles should not be detected in areas of low picture detail. If the image activity (as derived above) for this pixel is below a programmable threshold, the angle is rejected by setting the certaintyFactor to 0.

Check 3: For a well defined line, the SAD perpendicular (perpSAD) to the best direction should be significantly smaller than the minimum SAD (minSAD). As the perpendicular SAD approaches the minimum SAD value, the confidence value, certaintyFactor, should be increasingly penalised if a smooth fading arrangement is used, or the angle should be rejected entirely (by setting certaintyFactor to 0) if the smooth fading arrangement is not used. The perpendicular template is defined as the template that is 4 templates away from the minimum template, where each mapping is shown in the table below.
Minimum Template    Perpendicular Template
Fig. 6a             Fig. 6e
Fig. 6b             Fig. 6f
Fig. 6c             Fig. 6g
Fig. 6d             Fig. 6h
Fig. 6e             Fig. 6a
Fig. 6f             Fig. 6b
Fig. 6g             Fig. 6c
Fig. 6h             Fig. 6d

The certaintyFactor is then set as in the following pseudocode, which encompasses all three of the above checks:

bFadingTriggered = false;
if (bMultipleMinimaDetected)
    certaintyFactor = 0;                          // check 1
else if (imageActivity < imageActivityThreshold)
    certaintyFactor = 0;                          // check 2
else if (bSmoothFadingRequired)
{
    diff = perpSAD - minSAD;
    perpFadeAmount = 1 << perpFade;
    if (diff < templateDiffThresh)
        fade_cert = 255;                          // this angle is no good
    else if (diff > templateDiffThresh + perpFadeAmount)
        fade_cert = 0;                            // this angle is good
    else if ((perpFade - 8) > 0)
        fade_cert = 256 - ((diff - templateDiffThresh) >> (perpFade - 8));
    else
        fade_cert = 256 - ((diff - templateDiffThresh) << (8 - perpFade));
    certaintyFactor = 255 - fade_cert;            // invert all the bits
    bFadingTriggered = (certaintyFactor != 255);  // i.e. fade_cert != 0
}
else if (((perpTemplateTolerance * perpSAD) >> 5) < minSAD)
{
    certaintyFactor = 0;                          // check 3, no fading
    bFadingTriggered = true;
}
else
    certaintyFactor = 255;

where perpFade, imageActivityThreshold, templateDiffThresh, perpTemplateTolerance and bSmoothFadingRequired are programmable constants, and certaintyFactor varies between 0 (no certainty) and 255 (full certainty).
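For reference, the decision logic of the three checks can be sketched behaviourally in Python; the threshold values shown are illustrative defaults, not values taken from the patent:

```python
def certainty_factor(perp_sad, min_sad, multiple_minima, image_activity,
                     image_activity_threshold=16, template_diff_thresh=4,
                     perp_fade=8, smooth_fading=True,
                     perp_template_tolerance=40):
    """Sketch of the three angle-rejection checks; returns 0..255."""
    if multiple_minima:                               # check 1: aliased data
        return 0
    if image_activity < image_activity_threshold:     # check 2: flat area
        return 0
    if smooth_fading:                                 # check 3 with fading
        diff = perp_sad - min_sad
        perp_fade_amount = 1 << perp_fade
        if diff < template_diff_thresh:
            fade_cert = 255                           # angle is no good
        elif diff > template_diff_thresh + perp_fade_amount:
            fade_cert = 0                             # angle is good
        elif perp_fade - 8 > 0:
            fade_cert = 256 - ((diff - template_diff_thresh) >> (perp_fade - 8))
        else:
            fade_cert = 256 - ((diff - template_diff_thresh) << (8 - perp_fade))
        return 255 - fade_cert
    if (perp_template_tolerance * perp_sad) >> 5 < min_sad:
        return 0                                      # check 3, no fading
    return 255
```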
Interpolation

Using the angles θ and the certaintyFactor calculated by the unit 110, the interpolator produces an output pixel. The certaintyFactor associated with each source pixel indicates to the interpolator whether the result of the shearing interpolator 140 can be used for the output.
If the sheared value cannot be used, the interpolator must use the pixel produced using the non shearing interpolator 150. A mixer control value is generated by the controller 160 in respect of each output pixel, indicating to the mixer 170 whether to output the shear interpolated pixel, the non shearing interpolated pixel or a weighted combination of the two.
The shearing interpolator and the non-shearing interpolator will now be described.
Shearing Interpolation

The present arrangement uses a shearing filter to carry out the interpolation. In order to explain the operation of the shearing filter, previously proposed approaches will first be discussed in brief.
Filtering using a previously proposed two-dimensional (2D) Sinc filter involves convolving together a one-dimensional (1D) filter to scale horizontally with a corresponding filter to scale vertically, to produce a filter that is able to filter and scale an image in both directions with a single 2D filter matrix. The two dimensional filter resulting from the convolution of the two orthogonal filters is (at this stage) aligned to the horizontal and vertical axes.
Figures 7 and 8 provide schematic illustrations of such a 2D Sinc filter. Figure 7 is a 3D projection representing the filter coefficients, where the x axis (horizontal on the page) and the y axis (oblique on the page) represent spatial coordinates in the plane of the image with the origin (0,0) in the centre of the drawing, and the z axis (vertical on the page) represents the amplitude of the filter coefficient at that spatial position. The filter coefficients have a peak amplitude at the origin. Figure 7 is shown with different degrees of shading representing bands of different filter coefficient amplitudes. This shading representation is carried over into Figure 8, which provides a view along the z axis onto the x-y plane, and will be referred to as an overhead view of the filter coefficients. Peaks (high points in the z direction) in the coefficients shown in Figure 7 correspond to darker regions in the overhead view of Figure 8.
One technique of edge directed Sinc filtering is to rotate such a two dimensional filter matrix, so that the principal axis of the filter is aligned to the detected edge or image feature.
The theory behind this principle is to align most of the energy of the filter along the length of the image feature. An additional stage is to increase the bandwidth of the filter in the direction perpendicular to the orientation of the edge in order to preserve the sharpness of the edge. The aim of the bandwidth increase is again to further increase the energy of the filter along the image feature line. Although this 2D edge directed Sinc filter method can produce some good results, there is a fundamental difficulty with the algorithm. Consider interpolating a pixel at offset zero in both the vertical and horizontal direction (fully aligned with a source pixel) that lies on a feature that has an orientation of 45°. The Sinc filter rotation method described above would yield a filter that is aligned to the feature in the image.
An example of a Sinc filter rotated by 45° is shown schematically in Figures 9 and 10, which follow a similar shading notation to Figures 7 and 8 respectively. Figure 9 is a 3D representation of the rotated filter, and Figure 10 is an overhead view of the filter.
A filter produced from a non-rotated Sinc algorithm would have produced a two dimensional impulse response. Therefore, the interpolator would have returned the source pixel.
But as the filter has now been rotated, the filter zero points do not align to the source data, and hence the interpolator does not return the source pixel. The effect of this is that the bandwidth has been changed in both directions unintentionally and, as a result, the output pixel is not as expected. The problems for this method increase further for non zero offset values (output pixel positions not aligned with source pixels); the edge directed rotation filter can produce significant striping artefacts when used for images with angled features.
Therefore, rather than use rotation, the present embodiments provide another method to shape the filter: by shearing the filter.
An example of a Sinc filter sheared by 45° is shown schematically in Figures 11 and 12, which again follow a similar shading notation to Figures 7 and 8 respectively. Figure 11 is a 3D representation of the sheared filter, and Figure 12 is an overhead view of the filter. Here the shear transformation has been applied to the filter coefficients of the two-dimensional digital filter so as to map the coefficients to different respective positions in the two-dimensional domain, in accordance with the detected image feature direction.
In general, the shear can be considered as a particular type of linear mapping. Its effect leaves fixed all points on one axis and other points are shifted parallel to the axis by a distance proportional to their perpendicular distance from the axis.
A general view of a shearing operation is that coordinates (x, y) in the unsheared data are mapped to coordinates (x′, y′) in the sheared data, where (x′, y′) = (x, y + mx). In the example shown schematically in Figures 11 and 12, representing a 45° shear, m = 1 so that (x, y) maps to (x, x + y).
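As a minimal illustration of this mapping (purely a sketch; the patent applies it to filter coefficient positions rather than arbitrary points):

```python
def shear(point, m):
    """Map (x, y) to (x, y + m*x): a shear that leaves the y axis fixed
    and shifts other points parallel to it, proportionally to x."""
    x, y = point
    return (x, y + m * x)
```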
Shearing of the filter will align the energy of the filter along the edge; this allows for better bandwidth control for the filters when aligning to an edge and will not lead to as many artefacts as those introduced with the rotation method.
An alternative method (to shearing the filter) is to shear the data instead. The two methods should produce similar results but the latter method also allows non filter based techniques, such as Bicubic interpolation, to be used on the sheared data. In principle, a shear operation could be applied to both the data and the filter, to give the required amount of shear as a combined effect.
Shearing the data, as described below, can involve filtering groups of input pixels in respect of the first axis, the groups being spaced along the detected image feature direction, so as to generate respective intermediate pixel values; and filtering the intermediate pixel values in respect of the other axis, so as to generate an output pixel.
Bicubic Shearing

In the present embodiment, a bicubic interpolator could be used to shear the data by controlling the phase. For example, Figure 13 shows a one pixel wide 45° line (formed of the black pixels 710). The oval pixel 720 on the diagram indicates the pixel to be interpolated.
The data can be sheared by filtering the data at various pixel offset positions. A vertical bicubic filter (shown schematically as a curve 760) is centred on different points (with sub-pixel accuracy) depending on where that particular point lies in relation to the line. The curves 760 shown in the diagram schematically indicate the points at which the filter will be placed. By offsetting the bicubic filter at different positions, the data can essentially be sheared so that a respective sheared source pixel 750 is generated at the horizontal position of each corresponding source pixel on the line. These intermediate sheared pixels 750 can then be interpolated by a horizontal bicubic filter to form the required pixel 720, as shown at the bottom of the diagram in Figure 13.
Angle of shearing

In order to shear the data, the angle of the feature needs to be identified. This is found using the angle determination techniques described above.
Figure 14 schematically illustrates a 26° line in a region of an image.
A pixel 800 is the centre source pixel and a circle 810 represents a pixel to be interpolated. In general, the shading applied to the square representations of source pixels represents the pixels' colour or luminance, so that the image feature shown in Figure 14 is a dark line from upper left to lower right.
In one example, the data is first sheared in a horizontal direction, so that the line is aligned to the vertical axis. The horizontal shearing process is shown in Figure 15.
In Figure 15, the data has been sheared horizontally, so that the pixels in the lines below and above that correspond to the feature are now aligned vertically together. Boxes 820 in the diagram indicate the pixels that would be required to perform the horizontal shearing using Bicubic interpolation, so as to generate respective vertically spaced but horizontally aligned sheared pixels 830. The result of the shearing is shown at the bottom of Figure 15. The shearing has followed the line so as to find four sheared pixels 830 which can then be used to vertically interpolate the new pixel 810 at the required position.
Shearing in the horizontal direction for shallow horizontally aligned lines produces very good results for continuous long lines; however, for smaller features, shearing in this direction is potentially hazardous and can produce some artefacts. As the angle becomes shallower, more pixels are required to perform the shearing. A range of up to 18 source pixels could be required for lines of approximately 5°. This is demonstrated in the shearing shown in Figure 16, which follows the same notation as Figure 15 but represents a line feature at approximately 5° to the horizontal. The horizontal range of pixels required to carry out the horizontal shearing is shown schematically as a range 840. In theory an infinite number of pixels are required as the angle of the line feature approaches the horizontal. This could potentially produce an incorrect result if the image feature was only small in length. A safer approach would therefore be to shear vertically for shallow horizontal lines, as shown schematically in Figure 17.
Again, Figure 17 follows a similar notation to Figure 15, except that boxes 850 represent the range of pixels needed to carry out the vertical shearing process. The sheared pixels 830' are then interpolated horizontally to generate the required output pixel 810.
Accordingly, the axis used as the one of the axes for which the shear operation is carried out is selected in dependence on the detected image feature direction. In particular, the interpolator 130 is operable to detect which of the two axes is angularly closest to the detected image feature direction, and to select the other of the two axes for the mapping of the filter operation by the shear technique.
Although the sheared pixels are very close to the ideal case of horizontal shearing, shearing in the non-ideal vertical direction can produce some oscillations in the output of the sheared pixels. These oscillations can be selectively filtered away using a smoothing filter, the details of which will be explained below.
Accordingly, for features with angles (from the horizontal) θ found to be between 0 and ±45°, the data is sheared vertically and the angle of shearing is set to the angle determined by the angle determination unit. If the angle is found to be between ±45° and ±90°, the data is sheared in a horizontal direction and the angle of shearing is then set to 90°−θ; for example, if the angle indicated by the angle determination stage was 85°, the data would be sheared by 5° in the horizontal direction. The angle required for shearing will be referred to as θs. When θ is equal to 45°, an arbitrary choice is made so that the data is sheared vertically with θs equal to 45°.
Shearing calculations

Once θs and the shear direction (horizontal or vertical) have been determined, the data is sheared and then interpolated to find the required output pixel for the required offset. The shearing process is conducted using a Bicubic shearing method.
Consider the example of a 45° line 890, as shown in Figure 18, which is scaled by a factor of 1:2 horizontally and vertically, so that for every source pixel there are four output pixels. Source pixels are shown as squares 900 (black or white depending on the pixel colour), interpolated pixels are shown as white circles 910 in general, but the particular interpolated pixel under consideration is shown as a black circle 920.
This scaling factor means that there are pixels in the output that are aligned to the source data and have a 0 horizontal and 0 vertical sub pixel offset, and there are also other output pixels that correspond to sub pixel offsets of 0.5 horizontal and 0 vertical, 0 horizontal and 0.5 vertical, and 0.5 horizontal and 0.5 vertical. In this example θs is set to 45° and the shear direction in this case can be either horizontal or vertical, although the arbitrary choice implemented here is to use the vertical direction for this case. The pixel 920 to be interpolated is at offset 0.5 horizontally (δx) and offset 0.5 vertically (δy). However, it is clear from the diagram that the interpolated pixel lies exactly on the line 890. This leads to the conclusion that in order to interpolate the pixel in question, the vertical offset will also need to be sheared for vertical shearing. The same conclusion can be reached for the horizontal offset for horizontal shearing. In other words, in order to interpolate the pixel in question, the data that is required for filtering is selected by following the image feature direction and finding the vertical intersection points that are aligned to source pixel positions horizontally. These vertical filtering positions are found by applying a vertical shearing to the positions starting from the horizontal axis. A corresponding principle applies for horizontal shearing.
During the shearing process, a total of nine sheared pixels 930 are generated. The amount of shearing is determined from θs and how far the pixel is from the source centre pixel, tap, so that in Figure 18 a total of nine columns are sheared, producing a set of sheared pixels that are processed by the second stage of the filtering process. Each column is sheared by a different amount, with the respective offset dependent on the shearing angle θs. In other embodiments, nominally five points are sheared, a smoothing filter is applied and the values are interpolated to produce the output pixel. Technically only four pixels are required for the shearing process, but five may be used in embodiments of the invention to make the subsequent smoothing filter operation symmetrical. For ease of explanation, however, a total of nine pixels are sheared in the following examples.
In order to shear the data, a source pixel offset position, Px and Py, and a sub pixel offset position δ′x and δ′y need to be calculated. These parameters determine the centre of the filter required to perform the shearing at various points along the sheared line, and the sub pixel offset required for the interpolation. The values are calculated using the following equations:

shearedPosition = δy + (tap − δx) × tan(θs)
Py = round(shearedPosition)
Px = tap
δ′y = shearedPosition − Py
δ′x = δx

For the example above, Py and δ′y for the positions for δy = 0.5 and δx = 0.5 become:

tap   Py   δ′y
−4    −4   0.0
−3    −3   0.0
−2    −2   0.0
−1    −1   0.0
 0     0   0.0
 1     1   0.0
 2     2   0.0
 3     3   0.0
 4     4   0.0

The "tap" position is the position of the filter aligned to horizontal source pixels.
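The Py and δ′y values above can be reproduced with a short Python sketch of these equations (function and variable names are illustrative):

```python
import math

def vertical_shear_positions(theta_s_deg, dx, dy, taps=range(-4, 5)):
    """For each filter tap, compute the centre pixel Py and sub-pixel
    offset d'y used to place the vertical bicubic filter."""
    t = math.tan(math.radians(theta_s_deg))
    out = []
    for tap in taps:
        sheared = dy + (tap - dx) * t
        py = round(sheared)
        out.append((tap, py, sheared - py))
    return out
```

For the 45° example with δx = δy = 0.5, every column shears onto a source pixel exactly, so Py equals the tap position and δ′y is zero throughout.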
This procedure can be followed for any θs and any offset positions to provide new offsets to use during the interpolation. Using the modified offset value and new centre pixel position, the data can be filtered to calculate the required sheared pixel. Sinc interpolation could be used to calculate the required pixel; however, the preferred method is to use a Bicubic polynomial.
The vertical boxes 940 in Figure 18 show the pixels that will be used for the Bicubic shearing. The central pixel used for the Bicubic interpolation is given by Py, and the sub pixel offset position δ′y is indicated by dots. The Py value calculated above indicates that the pixels used for the polynomial should be offset by 1 pixel. The dots on the enclosure indicate the sub-pixel offset of the required pixel; for the example shown this value is 0. The resulting sheared pixels are shown at the bottom of this figure. Here, note that the interpolation process in use requires four pixels, pm1, p0, p1 and p2. The interpolation process will interpolate a new pixel between p0 and p1 given a sub-pixel offset value. The calculations described above describe the process in which the position of each of the bicubic polynomials is calculated. The Py value described above is the position of each p0 pixel with respect to the origin, and the calculated δ′y is the sub pixel offset used in the polynomial fitting. (Px, Py) = (tap, f(tap, θs)).
Once the nine sheared pixels 930 are calculated, the second stage is to interpolate the required pixel from this set of sheared pixels. Again, Sinc or Bicubic interpolation could be used.
In the case of the latter, only the central four pixels will be needed; however, all the pixels are still required for the other checks. The pixels required for the second interpolation are again enclosed by a box 950, with the dots indicating the sub pixel offset position (δ′x). This is the horizontal sub-pixel offset position; for this example it is 0.5.
The output from the second stage interpolation process is clipped so that it does not go beyond the maximum or minimum of the pixels that are used for the interpolation.
The same procedure can be followed for a 26° line 890′, as shown in Figure 19, which follows the same notation as Figure 18.
Here, the required interpolated pixel is at sub-pixel offset δx = 0.5 and δy = 0.5. Using the equation for shearedPosition set out above, the sheared pixel and sub-pixel position Py and δ′y can be calculated. This is shown in the following table.

tap   Py   δ′y
−4    −2   0.25
−3    −2   0.75
−2    −1   0.25
−1    −1   0.75
 0     0   0.25
 1     0   0.75
 2     1   0.25
 3     1   0.75
 4     2   0.25

It should be noted that for the Bicubic interpolation, the sub-pixel offset (δ′y) should ideally be positive. This is to ensure that the polynomial is evenly balanced. If the sub-pixel offset does go negative, the sub-pixel offset is adjusted so that it is positive and the pixel offset is adjusted accordingly, as follows:

if (δ′y < 0): δ′y = δ′y + 1, Py = Py − 1

The procedure described is for applying a vertical shear. The same method can be used for a horizontal shear direction; however, the equations are adjusted as follows:

shearedPosition = δx + (tap − δy) × tan(θs)
Px = round(shearedPosition)
Py = tap
δ′x = shearedPosition − Px
δ′y = δy

And the sub-pixel offset adjustment as:

if (δ′x < 0): δ′x = δ′x + 1, Px = Px − 1

Non Shearing Interpolation Method

When the sheared interpolated pixel cannot be used reliably due to a low certaintyFactor value, the output interpolated value is replaced or mixed with a pixel created using a non edge-directed, non sheared interpolation method, referred to as the default interpolation method.
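The vertical-shear positions with the negative-offset adjustment applied can be sketched in Python as follows. Note that the tabulated 26° example corresponds to tan(θs) = 0.5, i.e. approximately 26.57°; the function and variable names here are illustrative:

```python
import math

def shear_positions(theta_s_deg, dx, dy, taps=range(-4, 5)):
    """Vertical-shear (tap, Py, d'y) triples with the negative-offset
    adjustment applied, so that d'y always lies in [0, 1)."""
    t = math.tan(math.radians(theta_s_deg))
    rows = []
    for tap in taps:
        sheared = dy + (tap - dx) * t
        py = round(sheared)
        dpy = sheared - py
        if dpy < 0:           # keep the polynomial evenly balanced
            dpy += 1
            py -= 1
        rows.append((tap, py, dpy))
    return rows
```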
Sinc interpolation is a candidate for the default interpolation method when such a case arises. However, due to the large luminance differences observed empirically between Bicubic interpolation and Sinc interpolation, matching of the two interpolation methods to provide smooth transitions between sheared and non sheared areas can be difficult. Considering this, standard Bicubic interpolation is instead used as the non shearing algorithm in embodiments of the present invention. The Bicubic interpolation is significantly smaller, in terms of hardware or other processing requirements.
As the Bicubic Spline implementation is very similar to the standard Bicubic interpolation method, a choice of interpolators can be provided if desired. The Spline implementation has the added benefit of being more continuous across pixel boundaries.
Mixer Operation

Once the sheared and non-sheared interpolated results have been created, a method of mixing the two results is required. The nearest neighbour (the nearest source pixel to that output pixel position) certaintyFactor could simply be used to provide a mixing value; however this is not desirable for large zooming factors.
For example, at a scaling ratio of ×4, a single source pixel will provide an interpolation angle and certaintyFactor for 16 output pixels, arranged in a 4x4 block. If there are neighbouring pixels that have chosen a slightly different interpolation angle, or have a different certainty, the output may look blocky. In order to avoid this, the certaintyFactor and angle values are used to interpolate a mixer control value for each output pixel, based on its sub-pixel position and neighbouring certaintyFactors.
The output is formed by the mixer 170 from a blend of the pixel derived from the non-shearing interpolation with that from the shearing interpolation, depending on the value of the mixer control value generated by the controller 160 on a per pixel basis. This is summarised by the following equation:

output = mixerControlValue × shearResult + (1 − mixerControlValue) × nonShearResult

The resulting value is the interpolated output.
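The blend is a direct linear mix, sketched here in Python (names are illustrative):

```python
def mix(shear_result, non_shear_result, mixer_control_value):
    """Per-pixel blend of the shearing and non-shearing interpolator
    outputs; mixer_control_value is 0..1 (1 = fully trust the sheared pixel)."""
    return (mixer_control_value * shear_result
            + (1 - mixer_control_value) * non_shear_result)
```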
Example Applications

Two example applications of the image scaling techniques will now be described.
Figure 20 schematically illustrates a television apparatus 1500 comprising a flat panel display screen 1510 and a video scaler 1520 which comprises apparatus as shown in Figure 3, acting on a succession of video images. The television apparatus can display video signals of different resolutions, either by routing them directly to the display (in the case that the video signal has a native resolution of the display 1510) or by routing them via the video scaler 1520 (in other cases).
Figure 21 schematically illustrates a video camera 1550 comprising a lens arrangement 1560, an image capture device 1570 which generates video signals at a particular image resolution, and a video scaler 1580 arranged to scale images output by the image capture device to another resolution. This arrangement allows the video camera to output video images at its native resolution on an output 1590 and/or video images at a different resolution on an output 1600.
It will be appreciated that any of the techniques described here may be implemented by hardware, software or a combination of them.
It will be understood that the present techniques may be applied, where appropriate, to a Sinc filter, a Bicubic filter, a Bicubic Spline filter, a Gaussian filter, a Bilinear filter, a Biquadratic filter, a Biquintic filter, a generic polynomial-based filter, or FIR filters and splines.
The Filters More information on the interpolation filters will now be provided.
Bicubic Interpolation

Bicubic interpolation involves fitting a third order polynomial to a set of data, to interpolate the required pixel. The four closest pixels to the interpolated pixel are used to form the polynomial.
The general form of the curve is given by:

y = Ax³ + Bx² + Cx + D

The conditions for the polynomial are that each of the four source pixels must intersect the curve. A schematic example of such a curve is shown in Figure 22.
Therefore, evaluating the equation at x = −1, 0, 1 and 2 yields the following matrix equation:

[P−1]   [−1  1 −1  1] [A]
[P0 ] = [ 0  0  0  1] [B]
[P1 ]   [ 1  1  1  1] [C]
[P2 ]   [ 8  4  2  1] [D]

Solving the equation above yields the values of A, B, C and D:

[A]         [−1  3 −3  1] [P−1]
[B] = 1/6 × [ 3 −6  3  0] [P0 ]
[C]         [−2 −3  6 −1] [P1 ]
[D]         [ 0  6  0  0] [P2 ]

Using the values of A, B, C and D, the general equation can be evaluated to find the value of the interpolated pixel for any given value of x in the range of 0 to 1. It is imperative that the value of x does not exceed the range of 0 to 1; this is to ensure that the curve is balanced either side by the same number of pixels.
Bicubic interpolation can be used to scale an image horizontally, vertically or both. In the case of the latter, sixteen of the closest pixels are used to interpolate the output pixel.
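A Python sketch of this interpolation, using the solved coefficient matrix above (illustrative only, not part of the patent text):

```python
def bicubic(pm1, p0, p1, p2, x):
    """Evaluate the interpolating cubic through four equally spaced
    pixels at positions -1, 0, 1, 2, for 0 <= x <= 1."""
    a = (-pm1 + 3*p0 - 3*p1 + p2) / 6.0
    b = (3*pm1 - 6*p0 + 3*p1) / 6.0
    c = (-2*pm1 - 3*p0 + 6*p1 - p2) / 6.0
    d = p0
    return ((a*x + b)*x + c)*x + d   # Horner evaluation
```

Because the curve is forced through all four source pixels, x = 0 and x = 1 return P0 and P1 exactly, and four samples of any cubic are reproduced exactly in between.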
Bicubic Spline Interpolation

Bicubic Spline interpolation also involves fitting a third order polynomial to the data points. However, rather than forcing the bicubic equation to intersect all the source pixels, other restrictions are imposed.
The first restriction is that the curve must intersect pixels P0 and P1. The second is that the gradient at pixels P0 and P1 must be continuous, so they must match the gradient formed from the pixels either side of them. A schematic example of a curve of this nature is illustrated in Figure 23.
The general form of the curve is given by:

y(x) = Ax³ + Bx² + Cx + D

Using the first condition, that the curve must pass through pixels P0 and P1, yields the following equations:

y(0) = P0 = D
y(1) = P1 = A + B + C + D

The second condition states that the gradients at pixels P0 and P1 must be continuous. The gradient at pixel P0 must be the same as the gradient formed between pixels P−1 and P1; also, the gradient at pixel P1 must be the same as the gradient formed between pixels P2 and P0. These conditions yield the following equations:

y′(x) = 3Ax² + 2Bx + C
y′(0) = C = (P1 − P−1)/2
y′(1) = 3A + 2B + C = (P2 − P0)/2

Solving the equations above, the values of A, B, C and D are calculated as follows:

[A]         [−1  3 −3  1] [P−1]
[B] = 1/2 × [ 2 −5  4 −1] [P0 ]
[C]         [−1  0  1  0] [P1 ]
[D]         [ 0  2  0  0] [P2 ]

The values of A, B, C and D can now be substituted back to find the interpolated value y for any given value of x in the range of 0 to 1.
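A corresponding Python sketch of the spline evaluation, using the 1/2-scaled coefficient matrix above (illustrative only). This is the Catmull-Rom form: the curve passes through P0 and P1 with end gradients matched to the neighbouring pixels:

```python
def bicubic_spline(pm1, p0, p1, p2, x):
    """Evaluate the spline through p0 and p1 with end gradients
    (p1 - pm1)/2 and (p2 - p0)/2, for 0 <= x <= 1."""
    a = (-pm1 + 3*p0 - 3*p1 + p2) / 2.0
    b = (2*pm1 - 5*p0 + 4*p1 - p2) / 2.0
    c = (p1 - pm1) / 2.0
    d = p0
    return ((a*x + b)*x + c)*x + d   # Horner evaluation
```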
The Sharpening Operation

It can be appropriate to take measures to "sharpen" the pictures generated by the interpolation algorithms described above. So, in an image processing apparatus in which output pixels of an output image are generated by interpolation with respect to an input image using an interpolation filter for generating each output pixel from a corresponding group of pixels derived from the input image, sharpening a picture can mean enhancing edges or abrupt transitions in the image, which generally implies that the high frequency content of the image data is increased. This can be achieved (as described below) by a filter processing which modifies the interpolation filter output so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
Sharpening can be achieved in various ways. Each of these may be applied to the filter arrangements described above, either individually or in various combinations. In embodiments of the invention, the enhancements are applied to the non-sheared (orthogonal) filters provided in the (non-edge-directed) non-shearing interpolator 150, but not in the (edge-directed) shearing interpolator 140. However, in other embodiments, corresponding techniques could be applied to the shearing interpolator and to the non-shearing interpolator.
Here, "smaller oscillatory transitions" mean features of the input image which: (a) are oscillatory but of a smaller amplitude and/or extent than those oscillatory artefacts generated at corresponding image positions in the filter output by virtue of the effect of the filter processor, or (b) are non-oscillatory transitions (such as steps, ramps or other transitions which are generally locally monotonic over at least a range of (say) a certain number of pixels such as five or ten pixels in the relevant direction), which of course can be considered to have a smaller (i.e. zero) local oscillatory content than the artefacts generated in the output image by virtue of the effect of the filter processor. Note that "non-oscillatory" does not imply that the entire image is formed of a single monotonic transition, but rather that the transition is generally (or completely) monotonic over a local range as mentioned above.
Note also that image data in the present context is considered as a two dimensional array of data values. So, at a particular point in an image, it may be considered that there is a certain type of transition in one direction and maybe a different type of transition in another direction (where the directions need not necessarily be considered as the horizontal and vertical directions). The enhanced generation of oscillatory artefacts to be described here generally refers to artefacts generated in a particular image direction in response to a transition present in that direction.
The techniques to be described, which fall within this general definition, are: sharpening the filter response and sharpening by adjusting the source pixels.
The "enhancement" of the generation of oscillatory artefacts should be taken to imply that either more artefacts, or artefacts of a greater oscillatory amplitude and/or extent, or both, are generated by virtue of the effect of the filter processor, than would be generated if the filter processor were not in operation.
Sharpening the filter response

These techniques provide a sharpened output from the filter by encouraging or enhancing ringing in the filters.
Filter ringing is an artefact which occurs when a non-oscillating input to a filter yields an oscillating output, or more formally, when an input signal which is monotonic on an interval has output response which is not monotonic. It can occur, for example, when the impulse response or step response of a filter has oscillations.
In most practical filters, output oscillations caused by a step change to the filter input will decay with increasing separation from the step change. In the example of image filtering, ringing artefacts are generally seen as ripples adjacent to an abrupt transition in the image, where the ripples decay and disappear with increasing distance from the transition.
Often, ringing artefacts in image filters are considered to be a problem to be solved. In the present embodiments, however, a controlled enhancement of the amount of ringing (which is generally quite low with bicubic filters) is used as a technique to sharpen the resulting images.
Referring now to Figure 24, four source pixels are illustrated: pel_m1, pel_0, pel_1 and pel_2. These are spaced along the horizontal axis to illustrate their relative positions along the interpolation direction, which would generally be a horizontal or vertical pixel direction (depending on the filter direction under consideration). The vertical axis of Figure 24 represents "pixel value", which is the quantity being filtered, for example pixel luminance or the value of a particular colour component of that pixel such as the green component.
A variable x is defined in Figure 24. The variable x varies linearly from 0 (at the position along the horizontal axis of a current source pixel) to 1 (at a next source pixel). It is used when referring to the position, along the horizontal axis of Figure 24, of an output pixel to be interpolated. So, if an output pixel coincides in horizontal axis position with a source pixel, then x is 0 or 1. If an output pixel position lies between two source pixels along the horizontal axis of Figure 24, then 0 < x < 1.
The pixels in Figure 24 could be taken to represent an abrupt or step change such as that illustrated by a line 2000. A bicubic interpolation amongst the four illustrated pixel values produces a curve 2010. A linear interpolation between pel_0 and pel_1 produces a straight line 2020.
In general terms, the sharpness enhancement to be described now will tend to increase the difference between the curve 2010 and the line 2020.
To sharpen the response around the transition, first a bicubic or bicubic spline equation (y_cubic = ax³ + bx² + cx + d), as a first or 'main' filtering algorithm, is fitted to the four pixels shown in Figure 24. Define y_linear as the linearly interpolated value (on the line 2020) at the output pixel position, where y_linear = ex + f, and the linear interpolation is considered as a second filtering algorithm being used to generate test pixels derived from the input data so as to have corresponding output pixel positions. The process involves comparing the test pixels with the output pixels derived by the first filtering algorithm, and potentially modifying the output pixel values as a result of the comparison. As a particular example, the output of the first interpolation filter may be modified in dependence on the difference between those pixels and the corresponding test pixels generated by the second filtering algorithm.
A variable sharpnessfactor is a user-defined parameter representing the amount of sharpening to apply.
The amount of sharpening is modulated (varied with respect to the spatial position, in the output image, of the current output pixel under consideration relative to input image pixel positions) so that none is applied when the interpolation position (output pixel position) is the same as a source pixel position; the amount of sharpening tends to decrease on approaching a source pixel position and therefore to increase with increasing separation from a nearest input pixel position:

k = sharpnessfactor * x * (1 - x)

The resulting pixel value sharpen_result is defined as:

sharpen_result = k * (y_cubic - y_linear) + y_cubic

In situations where sharpening is allowed for both the shearing and the non-shearing filters, the sharpness factor can be controlled separately for pixels processed by the shearing interpolator and pixels processed by the non-shearing interpolator. In each case, in a 6 bit representation, a maximum value of 10 is suggested for the respective sharpness factor, with a preferred value of 5.
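As an illustrative sketch (the exact bicubic fit is not specified here; a Catmull-Rom fit is assumed as one common choice), the modulated sharpening can be expressed as:

```python
def catmull_rom(pel_m1, pel_0, pel_1, pel_2, x):
    # One common bicubic (Catmull-Rom) fit through four source pixels,
    # evaluated at fractional position x between pel_0 and pel_1.
    return pel_0 + 0.5 * x * (
        pel_1 - pel_m1
        + x * (2 * pel_m1 - 5 * pel_0 + 4 * pel_1 - pel_2
               + x * (3 * (pel_0 - pel_1) + pel_2 - pel_m1)))

def sharpen(pel_m1, pel_0, pel_1, pel_2, x, sharpnessfactor):
    y_cubic = catmull_rom(pel_m1, pel_0, pel_1, pel_2, x)
    y_linear = (1 - x) * pel_0 + x * pel_1       # line 2020 equivalent
    k = sharpnessfactor * x * (1 - x)            # zero at source pixel positions
    return k * (y_cubic - y_linear) + y_cubic
```

Note how the modulation term k vanishes at x = 0 and x = 1, so source pixel positions pass through unchanged, while any ringing excursion of the cubic away from the linear interpolant is amplified mid-interval.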
The sharpened pixel value is then clipped to between 0 and the maximum allowable value (such as 4095 in a 12 bit representation). However, as an option, the ringing result can be blended (subjected to a weighted sum) with a clipped result generated using pel_0 and pel_1. Whether blending takes place can be controlled by a user parameter. It is suggested that blending should be turned on when the sharpness factor is non-zero.
Sharpening by Adjusting the Source Pixels

In this technique, the source pixels are adjusted prior to interpolation. In this arrangement, a filter processor is operable to modify the input pixels to the interpolation filter so as to enhance the generation of oscillatory artefacts in response to smaller oscillatory transitions in the input image. Here, the modification implies that pixels are modified at a stage before they are supplied to the interpolator. This could mean that they are modified as a last stage before being supplied to the interpolator, or the modification could come at another stage in the process. In either instance, the effect of the modification is such that the generation of oscillatory artefacts is enhanced by virtue of the operation of the filter processor, compared to the situation if the filter processor were not in use.
There are various features of this technique.
If the technique applies an increase in the amplitude of oscillatory data, this results in symmetrical gain to the pixel values. Biasing the amplitude increase so that small oscillations are boosted more than large oscillations results in non-linear gain; this can emphasise detail but tends not to emphasise noise in the small oscillations, and also tends not to introduce clipping in the large oscillations. Biasing the amplitude increase so that oscillations near zero level do not grow below zero, and similarly oscillations near the maximum level do not grow beyond the maximum level, results in asymmetrical gain. This last feature again reduces the probability of undesirable harsh clipping having to be applied.
The pixels used in bi-cubic shearing and the bi-cubic non-shearing interpolator can be adjusted using this method. However, in embodiments of the invention the bi-cubic pixels for the shearing interpolator are not modified.
Let pel_0 be the source pixel when an output pixel is being interpolated between pel_0 and pel_1. The pixel data is in a 12-bit range (0-4095).
In order to adjust pel_0, referring to Figure 24, the energy at this position first needs to be calculated:

energy_0 = pel_0 - (pel_m1 + pel_1) / 2

If energy_0 > 0, then pel_0 is adjusted (to generate a replacement value pel_0_overshoot) using:

pel_0_overshoot = pel_0 + growthFactorSymm x energy_0 + growthFactorNonLinear x (2048 - |energy_0 - 2048|) + growthFactorAsymm x energy_0 x (4095 - pel_0) / 4096

If energy_0 < 0, then pel_0 is adjusted using:

pel_0_overshoot = pel_0 + growthFactorSymm x energy_0 - growthFactorNonLinear x (2048 - |energy_0 + 2048|) + growthFactorAsymm x energy_0 x (pel_0) / 4096

Here, growthFactorSymm is a variable to define how much symmetrical gain is applied (see above), growthFactorNonLinear is a variable to define how much non-linear gain is applied (see above), and growthFactorAsymm is a variable to define how much asymmetrical gain is applied (again, see above).
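A sketch of this adjustment, under one stated assumption: the growth factors are treated here as direct floating-point multipliers, whereas the 0 to 63 integer controls described below would presumably be scaled down before use.

```python
def adjust_pel_0(pel_m1, pel_0, pel_1,
                 growth_symm, growth_nonlinear, growth_asymm):
    # Pixel data in a 12-bit range, 0..4095. The growth factors are
    # illustrative multipliers, not the patent's 6-bit control values.
    energy_0 = pel_0 - (pel_m1 + pel_1) / 2
    if energy_0 > 0:
        return (pel_0
                + growth_symm * energy_0
                + growth_nonlinear * (2048 - abs(energy_0 - 2048))
                + growth_asymm * energy_0 * (4095 - pel_0) / 4096)
    if energy_0 < 0:
        return (pel_0
                + growth_symm * energy_0
                - growth_nonlinear * (2048 - abs(energy_0 + 2048))
                + growth_asymm * energy_0 * pel_0 / 4096)
    return pel_0   # flat neighbourhood: no adjustment
```

Note the asymmetrical term: its (4095 - pel_0)/4096 weighting shrinks to zero as pel_0 approaches the maximum level, so upward growth is suppressed near the top of the range, mirroring the behaviour described above.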
The process is applied to all pixels prior to interpolation in a particular direction.
Filter Processor

In either case (adjusting the input pixels or post-processing the filter output), the filter processor can be provided as part of the functionality of the non-shearing interpolator 150. In the case that similar processing is applied to the shearing filter process, a filter processor can be provided as part of the functionality of the shearing interpolator 140.
As an alternative, the filter processor in the case of the adjustment of input data could be provided as part of the pre-interpolation filter 100.
As another alternative, a post-processing filter processor could be provided as part of the functionality of the mixer 170.
Example Results

Figure 25 schematically illustrates the effect of the variable growthFactorSymm and the corresponding changes relating to symmetrical gain. Figure 26 schematically illustrates the effect of the variable growthFactorNonLinear and the corresponding changes relating to non-linear gain. Figure 27 schematically illustrates the effect of the variable growthFactorAsymm and the corresponding changes relating to asymmetrical gain.
A common notation between Figures 25, 26 and 27 is as follows.
A source signal 2100 is the same in each of the six charts: Figures 25(a) and (b), Figures 26(a) and (b), and Figures 27(a) and (b). In some of the charts, the source signal 2100 is close to (and therefore hard to distinguish visually from) the respective output signal; in such cases, it is noted that the output signal has the higher amplitude and the greater excursions than the source signal. The source signal is of course sampled to form pixel values, with the sampling points being shown as vertical bars 2110 in Figure 25(a) but not being shown in the remaining five charts, simply for clarity of the diagrams. The vertical axis in each case represents pixel value, normalised so that the minimum allowable pixel value is 0 and the maximum allowable pixel value is 1 (though excursions outside of this range may be allowable as inputs to the interpolation filtering operation). In each case, the respective variable under test has an allowable range of 0 to 63.
Figures 25(a) and (b) schematically illustrate the effect of different values of the variable growthFactorSymm and the corresponding changes relating to symmetrical gain. In Figure 25(a), the value of growthFactorSymm is 63, generating an output signal 2120. In Figure 25(b) the value of growthFactorSymm is 6, generating a slightly enhanced output signal 2130.
Figures 26(a) and (b) schematically illustrate the effect of different values of the variable growthFactorNonLinear and the corresponding changes relating to non-linear gain. In Figure 26(a), the value of growthFactorNonLinear is 63, generating an output signal 2140. In Figure 26(b) the value of growthFactorNonLinear is 13, generating a slightly enhanced output signal 2150.
Figures 27(a) and (b) schematically illustrate the effect of different values of the variable growthFactorAsymm and the corresponding changes relating to asymmetrical gain. In Figure 27(a), the value of growthFactorAsymm is 63, generating an output signal 2160. In Figure 27(b) the value of growthFactorAsymm is 13, generating a slightly enhanced output signal 2170.
Clipping Processing

During the filtering process, the result may be allowed to overshoot or undershoot. The amount of this overshoot or undershoot is determined by examining the filters used during the filtering process. The output value is limited to be no bigger than a multiple of the difference between the maximum (max) and minimum (min) pixels used during the filtering:

shootAmount = (maxPel - minPel) * factor
if (output > max + shootAmount) output = max + shootAmount
if (output < min - shootAmount) output = min - shootAmount
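A sketch of this clipping step, assuming that max and min in the comparisons refer to the same maxPel and minPel values used to compute shootAmount:

```python
def clip_shoot(output, max_pel, min_pel, factor):
    # Limit over/undershoot to a multiple of the local pixel range
    # spanned by the filter taps.
    shoot_amount = (max_pel - min_pel) * factor
    if output > max_pel + shoot_amount:
        output = max_pel + shoot_amount
    if output < min_pel - shoot_amount:
        output = min_pel - shoot_amount
    return output
```

With factor = 0, this reduces to a hard clip to the local [min, max] range; larger factors permit proportionally larger ringing excursions to survive.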

Claims (17)

  1. Image processing apparatus in which output pixels of an output image are generated by interpolation with respect to an input image, the apparatus comprising: an interpolation filter for generating each output pixel from a corresponding group of input pixels derived from the input image; and a filter processor for modifying the interpolation filter output so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
  2. Apparatus according to claim 1, in which: the interpolation filter operates according to a first filtering algorithm; and the filter processor is operable to modify pixels output by the interpolation filter in dependence on a comparison of those pixels with test pixels derived at corresponding output pixel positions but using a second filtering algorithm different to the first filtering algorithm.
  3. Apparatus according to claim 2, in which the second filtering algorithm is a linear interpolation algorithm.
  4. Apparatus according to claim 2 or claim 3, in which the filter processor is operable to modify pixels output by the first interpolation filter in dependence on the difference between those pixels and the test pixels.
  5. Apparatus according to any one of the preceding claims, in which the filter processor is operable to vary the degree of modification of an output pixel in response to the spatial position of an output pixel with respect to input pixel positions.
  6. Apparatus according to claim 5, in which the variation applied by the filter processor is such that the degree of modification increases with increasing separation of the output pixel position from a nearest input pixel position.
  7. Apparatus according to any one of the preceding claims, in which the interpolation filter comprises: an edge-directed interpolator for interpolating along detected image feature directions in the input image; a non-edge-directed interpolator for interpolating with respect to predetermined image axes; and a combiner for generating a weighted combination of the output of the edge-directed interpolator and the non-edge-directed interpolator according to a weighting dependent upon properties of the input image.
  8. Apparatus according to claim 7, in which the filter processor is operable with respect to the operation of the non-edge-directed interpolator but not with respect to the operation of the edge-directed interpolator.
  9. Image processing apparatus in which output pixels of an output image are generated by interpolation with respect to an input image, the apparatus comprising: an interpolation filter for generating each output pixel from a corresponding group of input pixels derived from the input image; and a filter processor for modifying the input pixels so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
  10. Apparatus according to any one of the preceding claims, in which the interpolation filter is a polynomial filter.
  11. Apparatus according to claim 10, in which the interpolation filter is a bicubic filter.
  12. Image processing apparatus substantially as hereinbefore described with reference to the accompanying drawings.
  13. A method of image processing in which output pixels of an output image are generated by interpolation with respect to an input image, the method comprising: interpolation filtering each output pixel from a corresponding group of input pixels derived from the input image; and modifying the interpolation filtering output so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
  14. A method of image processing in which output pixels of an output image are generated by interpolation with respect to an input image, the method comprising: interpolation filtering each output pixel from a corresponding group of input pixels derived from the input image; and modifying the input pixels so as to enhance the generation of oscillatory artefacts in the output image in response to smaller oscillatory transitions in the input image.
  15. A method of generating output filter values, the method being substantially as hereinbefore described with reference to the accompanying drawings.
  16. Computer software for implementing a method according to any one of claims 13 to 15.
  17. A storage medium carrying software according to claim 16.
GB1103691.0A 2011-03-04 2011-03-04 Image Interpolation Including Enhancing Artefacts Withdrawn GB2488591A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1103691.0A GB2488591A (en) 2011-03-04 2011-03-04 Image Interpolation Including Enhancing Artefacts
PCT/GB2012/050431 WO2012120275A1 (en) 2011-03-04 2012-02-24 Image processing


Publications (2)

Publication Number Publication Date
GB201103691D0 GB201103691D0 (en) 2011-04-20
GB2488591A true GB2488591A (en) 2012-09-05



Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781375B (en) * 2021-09-10 2023-12-08 厦门大学 Vehicle-mounted vision enhancement method based on multi-exposure fusion

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006211402A (en) * 2005-01-28 2006-08-10 Casio Comput Co Ltd Camera apparatus and image processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG162641A1 (en) * 2008-12-31 2010-07-29 St Microelectronics Asia System and process for image rescaling using adaptive interpolation kernel with sharpness and overshoot control

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006211402A (en) * 2005-01-28 2006-08-10 Casio Comput Co Ltd Camera apparatus and image processing method

Also Published As

Publication number Publication date
WO2012120275A1 (en) 2012-09-13
GB201103691D0 (en) 2011-04-20


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)