AU745562B2 - A method of kernel selection for image interpolation - Google Patents

A method of kernel selection for image interpolation

Info

Publication number
AU745562B2
Authority
AU
Australia
Prior art keywords
values
mapping
kernel
sample values
discrete sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU65270/99A
Other versions
AU6527099A (en)
Inventor
Andrew Peter Bradley
Kai Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU65270/99A priority Critical patent/AU745562B2/en
Publication of AU6527099A publication Critical patent/AU6527099A/en
Application granted granted Critical
Publication of AU745562B2 publication Critical patent/AU745562B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/403Edge-driven scaling; Edge-based scaling

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Description

S&F Ref: 483325
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Canon Kabushiki Kaisha, 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, Japan
Actual Inventor(s): Andrew Peter Bradley and Kai Huang
Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000
Invention Title: A Method of Kernel Selection for Image Interpolation
ASSOCIATED PROVISIONAL APPLICATION DETAILS: [33] Country: AU; [31] Applic. No(s): PP7798; [32] Application Date: 18 December 1998
The following statement is a full description of this invention, including the best method of performing it known to me/us:

A METHOD OF KERNEL SELECTION FOR IMAGE INTERPOLATION

Field of Invention
The present invention relates to the field of resolution conversion of multidimensional digital data, for example digital image data.
Background Art
There are several methods available for digital data resolution conversion.
Popular methods are transform domain methods such as the fractional Fourier transform (fractional FFT or Chirp-Z transform), the discrete cosine transform (DCT), and the discrete wavelet transform (DWT). In addition, there are a number of spatial domain methods such as re-sampling and digital filtering with finite-impulse response (FIR) and infinite-impulse response (IIR) filters, and interpolation with continuous, usually cubic, splines.
When a continuous kernel produces data that passes through original data points, it is often called an interpolating kernel. When the interpolated data produced is not constrained to pass through the original data points, it is often called an approximating kernel. There are a number of constraints that must be met in the design of these continuous kernels.
Commonly used continuous kernels for interpolation are the nearest neighbour (NN), linear, quadratic, and cubic kernels. The NN kernel is the simplest method of interpolation, which interpolates the image with the pixel value that is spatially nearest to the required one. This method works quite well when the scaling ratio is an integer multiple of the original data as it introduces no new values (ie., no new colours) and preserves sharp edges. However, at other ratios the NN kernel has the disadvantage of shifting edge locations, which often produces visible distortions in the output image, especially in images containing text or fine line details. Linear interpolation on the other hand allows for the introduction of new grey levels (or colours) that are effectively used to position edges at sub-pixel locations. This has the advantage of reducing the effect of shifted edge locations; however, sharp edges can appear to be blurred. Quadratic and cubic interpolation provide steeper step responses and therefore less edge blurring; however, the steeper response results in an overshoot on either side of the edge. These overshoots can make the edges in natural images appear sharper, but on text, fine lines, or on other computer generated graphics these overshoots are clearly visible and detract from the perceived image quality and text legibility.
From the above, it can be concluded that each kernel has its own strengths and weaknesses. Further, there are certain image areas which are best interpolated using kernels of different shapes. Simply applying a single continuous convolution kernel at every image pixel will not satisfy all of the requirements for a general-purpose resolution conversion application.
One known method of generating a kernel with both a steep step response and no overshoot is to adjust the parameters of the cubic kernel according to image information so as to remove the overshoot in the step response. The two-parameter Catmull-Rom cubic has a kernel of the form:

h(s) =
  (2 − 3b/2 − c)|s|^3 + (−3 + 2b + c)|s|^2 + (1 − b/3),                 |s| < 1
  (−b/6 − c)|s|^3 + (b + 5c)|s|^2 + (−2b − 8c)|s| + (4b/3 + 4c),        1 ≤ |s| < 2
  0,                                                                    otherwise        (1)

Popular choices for the parameters b and c are (b = 0, c = 0.5), which is the interpolating cubic that agrees with the first three terms of the Taylor series expansion of the original image, and (b = 1, c = 0), which is the approximating cubic B-spline. One known method fixes the parameter b at b = 0, whilst c is varied between 0, 0.5, and 1 dependent upon the edge strength measured using a Laplacian of Gaussian (LOG) edge detector. At a sharp edge (c = 0) the resulting cubic is:

h(s) =
  2|s|^3 − 3|s|^2 + 1,   |s| ≤ 1
  0,                     otherwise        (2)

There is, however, a problem with using this kernel to interpolate image data when the re-sampled pixel displacement is not significantly different from the original pixel displacement, say a re-sampling ratio of 10/11 or 11/10. In this instance pixels at the edges of text and other fine lines take on a grey value rather than the original black or white values. This again results in the blurring of sharp edges and a reduction in the observed image quality.
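To make the behaviour of equation (1) concrete, the following Python sketch evaluates the two-parameter cubic directly. It is an illustration only; the function name and array handling are not taken from the specification, but the coefficients follow equation (1), so b = 0, c = 0.5 reproduces the Catmull-Rom cubic and b = 0, c = 0 reproduces equation (2).

```python
import numpy as np

def cubic_kernel(s, b=0.0, c=0.5):
    """Two-parameter cubic of equation (1); s is the normalised sample distance."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s < 1.0
    far = (s >= 1.0) & (s < 2.0)
    out[near] = ((2 - 1.5 * b - c) * s[near] ** 3
                 + (-3 + 2 * b + c) * s[near] ** 2
                 + (1 - b / 3.0))
    out[far] = ((-b / 6.0 - c) * s[far] ** 3
                + (b + 5 * c) * s[far] ** 2
                + (-2 * b - 8 * c) * s[far]
                + (4 * b / 3.0 + 4 * c))
    return out

# Example: weights for the four samples around an output pixel 0.3 pixels from a sample
print(cubic_kernel(np.array([-1.7, -0.7, 0.3, 1.3]), b=0.0, c=0.5))
```

Summing the printed weights gives approximately 1, as expected of an interpolating kernel.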
A further problem with the conventional continuous convolution kernel is associated with its application to edges at oblique orientations in the image plane. The conventional kernels can be either applied in separable fashion, ie., first to the rows of the image and then to the columns, or applied in a 2-dimensional form where they are directly convolved with the 2-dimensional image data. However, their orientations in these implementations are limited to either: horizontal, vertical, or symmetrical. Upon encountering an oblique edge, the pixels on either side of the edge are primarily used in the interpolation rather than pixels along the edge. This results in an interpolated edge that no longer appears to be clean and smooth, but appears to be jagged, or blurred, or both. A solution to the above problem is known whereby interpolation across edges is prevented by using extrapolated estimates of pixel values for pixels on the other side of the edge in the bilinear interpolation. However, this method requires highly accurate subpixel edge estimation at the output resolution and iterative post-processing using a successive approximation procedure. Both of the above-described methods place high demands on memory and processing resources. Another approach to the problem is to utilise a set of 2-dimensional "steerable" kernels that can be oriented along the line of an edge during image interpolation. In this way the method smooths along the edge line (to reduce edge jaggedness), but not across the edge (so preserving edge sharpness).
A method of selecting interpolation kernels based on edge strength, or user input is known. However, there are some defects that prevent this method from working optimally. Firstly, the use of edge strength alone as the basis for kernel selection does not provide sufficient information for reliable kernel selection (especially at oblique edges).
Secondly, kernel selection based solely upon user input is impractical and does not specify the kernel selection in enough detail, eg., for the example in the sub-image shown in Figure there is not one single kernel that is ideal for the whole sub-image. In general, different kernels are required at a resolution that is impractical to be specified by a user.
It is an object of the present invention to ameliorate one or more disadvantages of the prior art.
Summary of the Invention
According to one aspect of the present invention there is provided a method of interpolating a first set of discrete sample values to generate a second set of discrete sample values using one of a plurality of interpolation kernels, characterised in that said interpolation kernel is selected depending on an edge strength indicator, an edge direction indicator and an edge context indicator for each discrete sample value of said first set.
According to another aspect of the present invention there is provided a method of interpolating a first set of discrete sample values to generate a second set of discrete sample values using an interpolation kernel, characterised in that said interpolation kernel is selected depending on an edge strength indicator, an edge direction indicator and an edge context indicator for each discrete sample value of said first set.
According to still another aspect of the present invention there is provided a method of interpolating image data, said method comprising the steps of: accessing a first set of discrete sample values of said image data; calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an edge strength indicator, and an edge context indicator for each of said discrete sample values; and convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
According to still another aspect of the present invention there is provided an apparatus for interpolating image data, said apparatus comprising: means for accessing a first set of discrete sample values of said image data; calculator means for calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an edge strength indicator, and an edge context indicator for each of said discrete sample values; and convolution means for convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
According to still another aspect of the present invention there is provided a computer readable medium for storing a program for an apparatus which processes data, said processing comprising a method of interpolating image data, said program comprising: code for accessing a first set of discrete sample values of said image data; code for calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an edge strength indicator, and an edge context indicator for each of said discrete sample values; and code for convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
According to still another aspect of the present invention there is provided a method of interpolating image data comprising a first mapping of discrete sample values, said method comprising the steps of: (i) identifying text regions within said first mapping and labelling each discrete sample value within each text region; (ii) calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; (iii) combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; (iv) manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values; (v) manipulating said image data of said third mapping to form a fourth mapping of said image data; and
According to still another aspect of the present invention there is provided an apparatus for interpolating image data comprising a first mapping of discrete sample values, said apparatus comprising: means for identifying text regions within said first mapping and labelling each discrete sample value within each text region; first calculating means for calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; combining means for combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; manipulating means for manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values, and manipulating said image data of said third mapping to form a fourth mapping of said image data; and interpolation means for interpolating each sample value of said fourth mapping with a first one of a plurality of kernels depending on said labels and said angle of orientation of each of said sample values of said fourth mapping to form a fifth mapping of said image data.
According to still another aspect of the present invention there is provided a computer readable medium for storing a program for an apparatus which processes data, said processing comprising a method of interpolating image data comprising a first mapping of discrete sample values, said program comprising: code for identifying text regions within said first mapping and labelling each discrete sample value within each text region; code for calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; code for combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; code for manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values; code for manipulating said image data of said third mapping to form a fourth mapping of said image data; and code for interpolating each sample value of said fourth mapping with a first one of a plurality of kernels depending on said labels and said angle of orientation of each of said sample values of said fourth mapping to form a fifth mapping of said image data.
Brief Description of Drawings
Fig. 1 is a flowchart showing a method of interpolation in accordance with a first embodiment of the present invention;
Fig. 2 is a flowchart showing a method of text detection in accordance with the method of interpolation of Fig. 1;
Fig. 3 is a flowchart showing a method of edge strength and orientation detection in accordance with the method of interpolation of Fig. 1;
Fig. 4 is a flowchart showing a method of combining the text map and the edge map in accordance with the method of interpolation of Fig. 1;
Fig. 5 is a flowchart showing a method of cleaning the kernel selection map in accordance with the method of interpolation of Fig. 1;
Fig. 6 is a flowchart showing a method of interpolating an output image in accordance with the method of interpolation of Fig. 1;
Fig. 7(a) shows an original image at a certain resolution;
Fig. 7(b) shows the image of Fig. 7(a) at a higher resolution after being interpolated using the conventional cubic kernel;
Fig. 7(c) shows the image of Fig. 7(a) at a higher resolution after being interpolated in accordance with the first embodiment of the present invention;
Fig. 8(a) shows an original text image;
Fig. 8(b) shows the text image of Fig. 8(a) after being interpolated in accordance with the embodiment of the present invention;
Fig. 9(a) shows a graphic image that has been interpolated using the conventional NN kernel;
Fig. 9(b) shows the graphic image of Fig. 9(a) after being interpolated in accordance with the first embodiment of the present invention; and
Fig. 10 is a block diagram of a general purpose computer with which the embodiments can be implemented.
Detailed Description
When re-sampling a digital image, smooth regions and edge regions need to be re-sampled differently. A long symmetrical kernel, such as the conventional Catmull-Rom cubic with parameter c = 0.5, is ideal for smooth image regions. A short kernel, such as the Catmull-Rom cubic with c = 0, is generally good for edge, corner, or highly textured regions. However, in order to reduce the jaggy effect on oblique edges, edge direction also needs to be taken into account in the interpolation process. Edge direction is important so that the interpolation can smooth along an edge, but not across an edge. In this way, the edge contour is kept smooth, whilst the edge transition is kept sharp.
The first embodiment of the present invention discloses a method of image interpolation that automatically selects the appropriate interpolation kernel for each image region. This selection is based not only on edge strength, but also edge direction and local edge context information. In addition, high contrast text regions are also detected and interpolated so as to preserve the shape and contrast of the text.
The second embodiment of the present invention adjusts the parameters of a single interpolation kernel so as to reshape the kernel to the appropriate shape for each region in the image.
The proposed resolution conversion method first identifies high contrast text regions and then measures both edge strength and edge orientation of the image data. In the first embodiment, the text and edge information is then used to select the appropriate interpolation kernel to use. In the second embodiment, the edge strength and edge orientation data is used to adjust the parameters of an interpolation kernel. Context information, from the text and edge maps, is then used to remove unnecessary kernel changes and prevent selection of an inappropriate kernel. This post-processing on the raw edge information is required to reduce and remove any interpolation artefacts.
The disclosed interpolation method according to the preferred embodiment will be briefly explained with reference to Fig. 1. The method comprises a series of steps which will be explained in more detail later in this document. The process begins at step 100 where image data is accessed. The process continues at step 105, where high contrast text regions are detected. At step 110, both edge strength and edge orientation of the image data are measured. The detected text regions contain cases of isolated pixels, or pixel groups, which are labelled as text. To reduce the chances of unnecessary interpolation kernel switching, these cases need to be removed. This is done, in the preferred embodiment, at the next step 115 using a morphological opening operation, which is known in the image processing prior art, on the binary text map. The process continues at the next step 120, where the detected text regions and edge regions are combined into a kernel, or kernel-parameter, selection map for each input pixel. At the next step 125, the kernel, or kernel-parameter, selection map is cleaned. This involves reorientating edge regions to an underlying uniformly directed edge region or smooth background to produce a cleaned kernel selection map at step 130. The cleaned kernel selection map is at the input pixel resolution. The process continues at the next step 135, where the cleaned kernel selection map is interpolated using the NN interpolation kernel.
The result of the NN interpolation is to produce the kernel, or kernel-parameter, selection map for each output pixel, at step 140. At the next step 145, the appropriate interpolation kernel, based on the output-resolution kernel selection map, is applied to the image data.
The interpolation kernel applied to the image data, at step 145, can be a Universal Cubic Kernel 150 in accordance with the preferred embodiment, which will be disclosed later in this document. The process concludes at step 155, where the resolution converted output image is preferably displayed or further processed by a graphics processor or the like.
The above steps do not have to operate on the complete image at any one time.
However, for simplicity the preferred embodiment will be described in terms of processing the complete image of input pixels. It is desirable, although not mandatory, that the algorithm according to the preferred embodiment operate in a memory-localised manner so that only a limited number of lines (preferably 5 lines in the raster scan direction) of the input image are required at any one time. Alternatively, the algorithm can be applied to arbitrary image sub-regions, or blocks.
The following description of the preferred embodiment is given in terms of colour images represented in a Red, Green, Blue (RGB) colour space. With suitable modifications the technique can easily be applied to grey level images (only one colour plane) or any arbitrary colour space representation (such as YUV, or YCbCr).
Alternatively, if images are presented in another colour space they can first be transformed to an RGB colour space before being processed.
The above steps of the present invention will now be explained in more detail with reference to Figures 1 to 6.
Text detection and text map cleaning step:
The text detection and text map cleaning is explained with reference to Fig. 2.
The local contrast between neighbouring pixels is used as the basis of text region detection. Text regions are usually regions where the local contrast is high, the number of colours is limited, and the texture is simple. These criteria allow the detection of multilingual text information rendered in a high contrast fashion.
The process begins at step 200, where the text map is initialised to smooth. At the next step 205, the following steps 215 to 230 are carried out for each colour plane and for each pixel in the image. Each pixel and colour plane of the input image is scanned with a 3 by 3 neighbourhood operator. At step 215, for each centre pixel, P0, the value C is compared with a threshold, Ttxt, where C is given as follows:

C = max_i | P0 − P_i |        (3)

and where i is the index of the 8 nearest neighbour pixels of the centre pixel P0. In the preferred embodiment a value of Ttxt = 220 is used. At the next step 220, the value of C is compared with the threshold, Ttxt. If the value of C is over the threshold, Ttxt, the pixel P0 is labelled as a text region at step 225.
The detected text regions contain cases of isolated pixels, or pixel groups, which are labelled as text. To reduce the chances of unnecessary interpolation kernel switching, these cases need to be removed. This is done, in the preferred embodiment, at the next step 230 using a morphological opening operation, which is known in the image processing prior art, on the binary text map. A structuring element defined by the matrix S, a 2 by 2 block of ones, is used to clean the text detection map:

S = | 1 1 |
    | 1 1 |

The morphological opening operation, which is defined as an erosion followed by a dilation with S, removes isolated text pixels and thin patterns (including their rotated versions) that cannot contain the structuring element. The text detection and text map cleaning process concludes after steps 215 to 230 have been carried out for each pixel in the image.
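A minimal sketch of the text detection and map cleaning described above, assuming a single 8-bit colour plane held in a NumPy array; the function names, and the reading of the structuring element as a 2 by 2 block of ones, are illustrative assumptions rather than details taken verbatim from the specification.

```python
import numpy as np

def detect_text(plane, t_txt=220):
    """Label a pixel as text when the maximum absolute difference to any of its
    8 neighbours exceeds t_txt (equation (3))."""
    p = np.pad(plane.astype(int), 1, mode='edge')
    h, w = plane.shape
    c = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            diff = np.abs(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - plane)
            c = np.maximum(c, diff)
    return c > t_txt

def open_binary(mask):
    """Morphological opening (erosion then dilation) with a 2-by-2 block of ones,
    removing isolated text pixels and thin runs that cannot contain the block."""
    def erode(m):
        return m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]
    def dilate(e, shape):
        out = np.zeros(shape, dtype=bool)
        out[:-1, :-1] |= e
        out[:-1, 1:] |= e
        out[1:, :-1] |= e
        out[1:, 1:] |= e
        return out
    return dilate(erode(mask), mask.shape)
```

For a colour image, detect_text would be applied to each plane and the results combined, as described above.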
Edge strength and direction detection step:
The edge strength and direction detection step will be explained with reference to Figure 3. The process begins at step 300, where the edge map is initialised to smooth. At the next step 305, the following steps 315 to 335 are carried out for each colour plane and for each pixel in the image. At step 315, horizontal and vertical edge responses, which we shall refer to as Gh and Gv respectively, are calculated for each input image pixel. In the preferred embodiment this is done utilising a 5-tap optimal edge detector which is known in the art. The coefficients used for forming Gh and Gv are shown in Table 1.
Low-pass:   0.036420   0.248972   0.429217   0.248972   0.036420
High-pass: -0.108415  -0.280353   0          0.280353   0.108415
At the next step 320, the gradient magnitude, Gm, is obtained from the strengths of these two components using: G, G The maximum gradient value in the R, G, and B colour components is used to determine the overall edge gradient strength. The process continues at the next step 320 10 where the gradient magnitude, Gm, is compared with a threshold Gth. If it is less than the threshold, the pixel is classified as a non-edge pixel. Otherwise the pixel is classified to be an edge pixel and the edge direction G6 recorded in the EdgeMap, at step 330.
Therefore, the colour plane with the maximum gradient strength is used to estimate the edge direction for each input pixel. The edge gradient direction, G6, is estimated using: 15 G tan' G (4)
G
SThe process continues at step 335, where the edge pixel direction is quantised into one of the four cases: horizontal vertical diagonal and anti-diagonal (371/4).
The edge strength and direction detection process concludes after steps 315 to 335 have been carried out for each pixel in the image.
It is noted that by increasing the number of quantisation bins and interpolating with the correspondingly oriented steerable kernels, better interpolation output can be produced. However, this also increases the complexity of the implementation and so quantisation into 4 directions is used in the preferred embodiment.
CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -13- Combination of the text and edge information into the kernel selection map: The combination of the text and edge information into the kernel selection map will now be explained with reference to Figure 4. It is assumed that the input image is smooth with the exception of the pixels where edge or text regions have been detected.
The kernel selection map is therefore initialised, at step 400, to select a generic smoothing kernel. In the preferred embodiment a cubic interpolation kernel with parameter c 0.5 is used for interpolating pixels in these smooth regions. The process continues at step 405, where steps 410 to 425 are carried out for each pixel in the input image where the text region and edge region (edge direction) information is then superimposed onto the kernel selection map. Where both text region and edge region are present, the text region information takes precedence. The precedence of the text region over edge region is an important one since there are lots of edge activities in text regions and counting them as directional edges can cause excessive kernel switching and therefore visual artefacts. The process continues at step 410 where a check is carried out to see if the current pixel in the 15 EdgeMap is classified as smooth. If the current pixel in the EdgeMap is not smooth, the EdgeMap information is recorded in the KemelMap, at step 415. At the next step 420, a S.i :check is carried out to see if the current pixel in the TextMap is classified as smooth. If •the current pixel in the TextMap is not smooth, the TextMap information is recorded in the KernelMap, at step 425. The combination of the text and edge information into the S 20 kernel selection map process concludes after steps 410 to 425 have been carried out for each pixel in the input image.
Cleaning of the kernel selection map: The cleaning of the kernel selection map process is explained with reference to Figure 5. There are cases of isolated edge directions occurring in an otherwise uniformly directed local region. These sparsely distributed edge regions are best re-oriented to the underlying uniformly directed edge region or smooth background. This again is to avoid excessive kernel switching which may result in visual artefacts in the interpolated image.
The process begins at step 500 where the accumulators are initialised. At the next step 505, the steps 510 to 540 are carried out for each 5 by 5 block in the KernelMap.
CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -14- At step 510, in a 5x5 neighbourhood, the number of edge pixels of each edge orientation (including text and background) are accumulated. The process continues at step 515 where the major edge orientations are identified. At the next step 520, the minor edge orientations are identified. The major and minor edge orientations are identified and minor edge pixels are reassigned to the major orientation in the following steps, with the exception of identified text region pixels. At the next step 525, the calculated major edge orientation is compared with Tmajor and the calculated minor edge orientation is compared with Tminor. At step 530, an orientation is set to be the major orientation if there are more than Tmajor pixels in the 5 by 5 region and an orientation is set to be a minor orientation if there are less than Tmrninor pixels in the 5 by 5 region. In the preferred 0* embodiment the major threshold, Tmajor, is 15 and the minor threshold, Tminor, is *.The total number of pixels in the 5 by 5 region is 25. If the accumulated edge pixels are above the major threshold, a major edge direction is found. Otherwise, the pixel region remains unchanged. The major or minor orientations can also be background or text 0 V. 15 regions. At the next step 535, the 5 by 5 pixel block shifts one column along the image
I
0. buffer and the process repeats for the next adjacent 5 by 5 pixel block. These blocks do .i :not overlap each other in any single pass of the image. The process iterates a fixed "i number of times, and a check is carried out at step 540 to see if 5 iterations have been completed at which point the process will conclude. In the preferred embodiment, 20 iterations are sufficient to clean the map to a reasonable degree, however 5 iterations are not mandatory.
Apply interpolation to cleaned kernel selection map: The step of applying interpolation to the cleaned kernel selection map will now be explained with reference to Figure 6. The process begins at step 600, where the kernel selection map (KernelMap) is interpolated to be the same size as the output image. The nearest neighbour (NN) interpolation method is used to obtain the kernel selection map at the required output resolution. NN interpolation is used to reduce the computation cost associated with more complex interpolation methods (such as linear and cubic) without significantly reducing the output image quality. Alternatively, the NN interpolation is not CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF actually carried out, but the kernel is selected in the O/P image dependent on the nearest pixel in the KernelMap.
After the kernel selection map (KernelMap) has been interpolated to be the same size as the output image, each entry is read so that the correct interpolation kernel can be used. At the next step 610, a check is carried out to see if the KernelMap information is classified as text. If KernelMap Text then the text pixel is interpolated with a modified cubic kernel, at step 615. The modified cubic kernel of the preferred embodiment, is given by: 1, sid h(s) s V6662 s-d s-d 2 2 3 +1, 1-2d 1-2d 10 where s= t/At is a normalised coordinate that has integer values at the sample points.
In the preferred embodiment parameter, d, is set to 0.2. Note this kernel has a reduced size of 2-by-2.
The process continues at step 620, if the KernelMap is not classified as text, where a check is carried out to see if the KernelMap information is classified as an edge.
S 15 If KernelMap Edge, then the edge pixels are interpolated with one of the four steerable S. .o cubic kernels at step 625. The steerable kernels of the preferred embodiment, where h(s x Sy), are given by: h(sx,s) h(sx)c=5 h(sy)c= 2 (6) h(sSy) 2 (7) 1 h e h(sx,sy)=3x/4 x S= h 0.5 (9 CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -16where sx= x/Ax and Sy=y/Ay are re-sampling distances in the horizontal and vertical directions, respectively, and indicates matrix multiplication. The quantised edge orientations define the required orientation of the steerable cubic kernel, ie., either 0, 7t/4, t/2, or 3rt/4.
The process continues at step 630 where, if the KernelMap is not classified as an edge, a check is carried out to see if the KemelMap information is smooth. If KernelMap Smooth, then the pixels are interpolated with the conventional Catmull-Rom cubic kernel (b 0, c 1" Figs. 7 to 9 illustrate the interpolation efficacy of the present invention on a number of typical images. Fig. 7(a) shows an original image at a certain resolution before interpolation has taken place. Fig. 7(b) shows the image of Fig. 7(a) at a higher resolution after being interpolated using the conventional cubic kernel. In contrast, Fig. 7(c) shows the image of Fig. 7(a) at a higher resolution after being interpolated in accordance with the preferred embodiment of the present invention. It can be seen that Fig. 7(c) shows an 15 image with sharper edges and less blurring than Fig. 7(b).
Fig. 8(b) shows the text image of Fig. 8(a) that has been enlarged to 2.7 times in both directions using the interpolation method in accordance with the preferred embodiment of the present invention.
Fig. 9(a) shows a graphic image that has been interpolated using a conventional NN kernel. Fig. 9(b) shows the same graphic image that has been interpolated using the interpolation method in accordance with the preferred embodiment of the present invention. It can be seen from the image in Fig. 9(b) that the oblique lines are smoothed along their directions compared to the image of Fig. 9(a).
In the second embodiment of the present invention the KernelMap does not contain an encoding for the different kernels that can be used in the interpolation, but rather parameter values to be applied to one "universal" kernel. The definition of the universal cubic interpolation kernel, being a combination of the conventional kernel of equation the modified kernel of equation and fully steerable cubic kernels of equations and is as follows: CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -17s) 0 0 2 h -20 r)s (20 ir)s, h(((20 n)s (20 ir h(s, s) {h((20 -l)s (20 -2)s h(((20 2)s, +(1-20 ir)sy)w(O))= (11) Where the across edge weighting factor, is a smooth function constrained to pass through 1 when 0 0, r/2, and n and through 42 when 0 7/4 and 37/4. The function used in the preferred embodiment is as follows: J -1 J-\ 10 cos(40) (12) 2 2 And the universal interpolation kernel, is given by: 1, 0 sI<d 1, 0<s d 3 2 3 s-d s-d 1 d<sjl1-d 2 1-2d 1-2d 3 0, 1-d<s 1<+d s-3d s-3d 2 s-3d 4 b-c) s 2 s 3 b+4c), l+d<s <2-d 6 12d 1-2d '1-2d 3 0, Otherwise (13) where d is the parameter controlling the width of a "dead zone" of the cubic interpolation kernel.
Based on the kernel-parameter map, the input image regions are interpolated with the universal cubic kernel with parameters set as follows: Text regions are interpolated with the edge orientation angle parameter, 0 0, the "dead zone" parameter, d 0.2, and cubic parameters b 0 and c 0. This gives a reduced kernel size of 2-by-2, ie., the remaining kernel coefficients are all zero.
CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -18- Edge regions are interpolated with 0 set to either 0, rt/4, n7/2, or 37t/4 (dependent on the edge angle, GO), d 0, b 0, and c 0.5. This is a 4 by 4 kernel, though either 6 or 8 of these coefficients will be zero, depending on the orientation angle Smooth regions are interpolated with 0 0, d 0, b 0, and c 0.5. This is a 4 by 4 non-zero kernel.
Preferred Embodiment of Apparatus(s) The preferred method is preferably practiced using a conventional general- *purpose computer system, such as the system 1000 shown in Fig. 10, wherein the process of Figs. 1 to 6 can be implemented as software executing on the computer. In particular, 10 the steps of the method are effected by instructions in the software that are carried out by the computer. The software can be divided into two separate parts; one part for carrying out the method of the preferred embodiment; and another part to manage the user interface between the latter and the user. The software can be stored in a computer readable medium, including the storage devices described below, for example. The S. 15 software is loaded into the computer from the computer readable medium, and then executed by the computer. A computer readable medium having such software or computer program recorded on it is a computer program product. The use of the computer program product in the computer preferably effects an advantageous apparatus for orientating a character stroke or n-dimensional finite space curves in accordance with the embodiments of the invention.
The computer system 1000 has a computer module 1002, a video display 1016, and input devices 1018, 1020. In addition, the computer system 1000 can have any of a number of other output devices including line printers, laser printers, plotters, and other reproduction devices connected to the computer module 1002. The computer system 1000 can be connected to one or more other computers via a communication interface 1008c using an appropriate communication channel 1030 such as a modem communications path, a computer network, or the like. The computer network can include a local area network (LAN), a wide area network (WAN), an Intranet, and/or the Internet.
CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -19- The computer module 1002 has a central processing unit(s) (simply referred to as a processor hereinafter) 1004, a memory 1006 which can include random access memory (RAM) and read-only memory (ROM), input/output (IO) interfaces 1008, a video interface 1010, and one or more storage devices generally represented by a block 1012 in Fig. 10. The storage device(s) 1012 can include of one or more of the following: a floppy disc, a hard disc drive, a magneto-optical disc drive, CD-ROM, magnetic tape or any other of a number of non-volatile storage devices well known to those skilled in the art.
Each of the components 1004 to 1012 is typically connected to one or more of the other devices via a bus 1014 that in turn has data, address, and control buses.
.o 10 The video interface 1010 is connected to the video display 1016 and provides video signals from the computer 1002 for display on the video display 1016. User input to operate the computer 1002 can be provided by one or more input devices 1008. For example, an operator can use the keyboard 1018 and/or a pointing device such as the mouse 1020 to provide input to the computer 1002.
15 The computer system 1000 is simply provided for illustrative purposes and other configurations can be employed without departing from the scope and spirit of the invention. Exemplary computers on which the embodiment can be practiced include the IBM-PC/ATs or compatibles, one of the Macintosh (TM) family of PCs, Sun Sparcstation arrangements evolved therefrom. The foregoing are merely exemplary of the types of computers with which the embodiments of the invention can be practiced. Typically, the processes of the embodiments, described hereinafter, are resident as software or a program recorded on a hard disk drive (generally depicted as block 1012 in Fig. 10) as the computer readable medium, and read and controlled using the processor 1004.
Intermediate storage of the program and pixel data and any data fetched from the network can be accomplished using the semiconductor memory 1006, possibly in concert with the hard disk drive 1012.
In some instances, the program can be supplied to the user encoded on a CD- ROM or a floppy disk (both generally depicted by block 1012), or alternatively could be read by the user from the network via a modem device connected to the computer, for CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF example. Still further, the software can also be loaded into the computer system 1000 from other computer readable medium including magnetic tape, a ROM or integrated circuit, a magneto-optical disk, a radio or infra-red transmission channel between the computer and another device, a computer readable card such as a PCMCIA card, and the Internet and Intranets including email transmissions and information recorded on websites and the like. The foregoing are merely exemplary of relevant computer readable mediums. Other computer readable mediums can be practiced without departing from the scope and spirit of the invention.
1 The preferred method of reconstructing a continuous signal can alternatively be implemented in dedicated hardware such as one or more integrated circuits performing the functions or sub functions of the steps of the method. Such dedicated hardware can include graphic processors, digital signal processors, or one or more microprocessors and associated memories.
The foregoing only describes two embodiments of the present invention, 15 however, modifications and/or changes can be made thereto by a person skilled in the art without departing from the scope and spirit of the invention.
In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including" and not "consisting only of'. Variations of the word comprising, such as "comprise" and "comprises" have corresponding meanings.
CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF

Claims (88)

1. A method of interpolating a first set of discrete sample values to generate a second set of discrete sample values using one of a plurality of interpolation kernels, characterised in that said interpolation kernel is selected depending on an edge strength indicator, an edge direction indicator and an edge context indicator for each discrete sample value of said first set.
2. The method according to claim 1, wherein said interpolation kernel is a universal interpolation kernel, h(s).
3. The method according to claim 2, wherein said universal interpolation kernel, h(s), is of the form:

h(s) =
  1,                                                                                     0 ≤ |s| ≤ d
  (2 − 3b/2 − c)((|s| − d)/(1 − 2d))^3 + (−3 + 2b + c)((|s| − d)/(1 − 2d))^2 + (1 − b/3), d < |s| ≤ 1 − d
  0,                                                                                     1 − d < |s| ≤ 1 + d
  (−b/6 − c)((|s| − 3d)/(1 − 2d))^3 + (b + 5c)((|s| − 3d)/(1 − 2d))^2 + (−2b − 8c)((|s| − 3d)/(1 − 2d)) + (4b/3 + 4c),   1 + d < |s| ≤ 2 − d
  0,                                                                                     otherwise

and wherein s = t/Δt and 0 ≤ d < 0.5.
4. The method according to claim 1, wherein said plurality of kernels are given by:

h(sx, sy)θ=0    = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2  = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4  = h((sx + sy)/√2)c=0.5 · h((sx − sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx − sy)/√2)c=0.5

and wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.
5. The method according to claim 1, wherein said first set of discrete sample values are at a different resolution to said second set of discrete sample values.
6. A method of interpolating a first set of discrete sample values to generate a second set of discrete sample values using an interpolation kernel, characterised in that said interpolation kernel is selected depending on an edge strength indicator, an edge direction indicator and an edge context indicator for each discrete sample value of said first set.
7. The method according to claim 6, wherein said interpolation kernel is a universal interpolation kernel, h(s).
8. The method according to claim 7, wherein said universal interpolation kernel, h(s), is of the form:

h(s) =
  1,                                                                                     0 ≤ |s| ≤ d
  (2 − 3b/2 − c)((|s| − d)/(1 − 2d))^3 + (−3 + 2b + c)((|s| − d)/(1 − 2d))^2 + (1 − b/3), d < |s| ≤ 1 − d
  0,                                                                                     1 − d < |s| ≤ 1 + d
  (−b/6 − c)((|s| − 3d)/(1 − 2d))^3 + (b + 5c)((|s| − 3d)/(1 − 2d))^2 + (−2b − 8c)((|s| − 3d)/(1 − 2d)) + (4b/3 + 4c),   1 + d < |s| ≤ 2 − d
  0,                                                                                     otherwise

and wherein s = t/Δt and 0 ≤ d < 0.5.
9. The method according to claim 6, wherein said first set of discrete sample values are at a different resolution to said second set of discrete sample values.
10. A method of interpolating image data, said method comprising the steps of: accessing a first set of discrete sample values of said image data; calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an edge strength indicator, and an edge context indicator for each of said discrete sample values; and convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
11. The method according to claim 10, wherein said kernel is a universal interpolation kernel, h(s).
12. The method according to claim 11, wherein said universal interpolation kernel, h(s), is of the form:

h(s) =
  1,                                                                                     0 ≤ |s| ≤ d
  (2 − 3b/2 − c)((|s| − d)/(1 − 2d))^3 + (−3 + 2b + c)((|s| − d)/(1 − 2d))^2 + (1 − b/3), d < |s| ≤ 1 − d
  0,                                                                                     1 − d < |s| ≤ 1 + d
  (−b/6 − c)((|s| − 3d)/(1 − 2d))^3 + (b + 5c)((|s| − 3d)/(1 − 2d))^2 + (−2b − 8c)((|s| − 3d)/(1 − 2d)) + (4b/3 + 4c),   1 + d < |s| ≤ 2 − d
  0,                                                                                     otherwise

and wherein s = t/Δt and 0 ≤ d < 0.5.
13. The method according to claim 11, wherein said plurality of kernels are given by:

h(sx, sy)θ=0    = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2  = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4  = h((sx + sy)/√2)c=0.5 · h((sx − sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx − sy)/√2)c=0.5

and wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.
14. The method according to claim 10, wherein said first set of discrete sample values are at a different resolution to said second set of discrete sample values.
15. An apparatus for interpolating image data, said apparatus comprising: means for accessing a first set of discrete sample values of said image data; calculator means for calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an CFP1111AU RC07 483325 [O:\CISRA\RC\RCO7]AUspeci:SaF edge strength indicator, and an edge context indicator for each of said discrete sample values; and convolution means for convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
16. The apparatus according to claim 15, wherein said kernel is a universal interpolation kernel, h(s).
17. The apparatus according to claim 16, wherein said universal interpolation kernel, 10 is of the form: 1, 0< s <d 3 s 2 *3 s-d s-d 2 d<s l-d 2 1-2d 1-2d 3 0, l-d<s l+d S s-3d 3 s-3d2-3d l b+4c), l+d<s <2-d 6 1-2d 1-2d 1-2d 3 0, Otherwise and wherein s t /At and 0 d
18. The apparatus according to claim 15, wherein said plurality of kernels are given by: 1 h(sx,sy)= 2 h(sx)c=05 h(sy)c=0 1 {h(sx)c= 0 -h(sy)cO. -s h(sx,sy)O=,42 h h +Sy CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -26- .1 sx+s (sx- -s^ h(sx,sy)O= 3 4 7 h-h[} and wherein sx= x/Ax and sy=y/Ay are re-sampling distances in the horizontal and vertical directions, respectively, and. indicates matrix multiplication.
19. The apparatus according to claim 15, wherein said first set of discrete sample values are at a different resolution to said second set of discrete sample values. A computer readable medium for storing a program for an apparatus which processes data, said processing comprising a method of interpolating image data, said program comprising: code for accessing a first set of discrete sample values of said image data; code for calculating kernel values for each of said discrete sample values using one of a plurality of kernels depending upon an edge orientation indicator, an edge strength indicator, and an edge context indicator for each of said discrete sample values; 15 and code for convolving said kernel values with said discrete sample values to provide a second set of discrete sample values.
21. The computer readable medium according to claim 20, wherein said kernel is a 20 universal interpolation kernel, h(s). .o
22. The computer readable medium according to claim 21, wherein said universal interpolation kernel, is of the form: CFP1523AU RC07 483325 [0:\CISRA\RC\RC07]483325AU.doc:eaa 27 1, O I sI::d (2--3b sd3+ +2b d<lsI 1-d 2 1-2d 1-2d 3 0, 1 d jsj:I+ d s d3+ (b +5c) s d2+ (-2b -8c) s 1+d<IsI 2-d 6 1-2d 1-2d 1-2d 3 0, Otherwise :and whereins t/ At andO0
23. The computer readable medium according to claim 21, wherein said plurality of kernels are given by: h(sx~sy)e 2 jh(sx)co.s h(sy)c=Oj :h(sx,sy)O=7/2 'r2 {h(sx)c=O h(sxsy)o=3/4 ={hX J T2Sc= 7 j} h~s~s~oh(/ and wherein sx= x/Ax and syy/Ay are re-sampling distances in the horizontal and vertical directions, respectively, and. indicates matrix multiplication.
24. The computer readable medium according to claim 20, wherein said first set of discrete sample values are at a different resolution to said second set of discrete sample values. A method of interpolating image data comprising a first mapping of discrete sample values, said method comprising the steps of: CFP1111Au RC07 483325 [O:\CISRA\RC\RCO7]AUspeci:SaF Z 77-~ 28 identifying text regions within said first mapping and labelling each discrete sample value within each text region; (ii) calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; (iii) combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; (iv) manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values; manipulating said image data of said third mapping to form a fourth mapping of said image data; and (vi) interpolating each sample value of said fourth mapping with a first one a plurality of kernels depending on said labels and said angle of orientation of each of said sample values of said fourth mapping to form a fifth mapping of said image data.
26. The method according to claim 25, wherein step comprises the following sub- steps: associating each discrete sample value of said fourth mapping with a S: corresponding discrete sample value of said third mapping; 20 scaling said third mapping of said image data based on said association.
27. The method according to claim 25, wherein step (vi) comprises the step of interpolating said image data of said third mapping using a second kernel.
28. The method according to claim 27, wherein said second kernel is an NN interpolation kernel. CFP1523AU RC07 483325 [O:\CISRA\RC\RC07]483325AU.doc:eaa $NT CCFPI152 RCO7 483325 [O:\ClSRA\Rc\RC071483325AU doc:eaa :2 29
29. The method according to any one of claim 25, wherein said labels and said angle of orientation of each of said sample values of said fourth mapping are used to select kernel parameter values of said first kernel.
30. The method according to claim 25, wherein said first kernel is a universal interpolation kernel, h(s).
31. The method according to claim 30, wherein said universal interpolation kernel, is of the form: 1, 0 sj! d (2--3b sd3+ sd2+ d<jsj: l-d 2 l1-2d l-2d 3 0, 1-d<IsI 1 +d I b s +5c) s d2+ (-2b -8c) 1+d<jsI !2-d *6 1-2d l-2d l-2d 3 C0, Otherwise whereins t/At and 0
32. The method according to claim 25, wherein said plurality of kernels are given by: h(sx,sy) 0 o0 I jh(sx)czsj.5hsy=1 h(sx,sy)0=n/2 j h(sx)co0 h(sx 'y)O=7t/4 =72={hX c=O. 5 h 2XfY c=0} h(sx,sy)o3704 7= {h 2XY 2hX CFP1111AU RC07 483325 CFP111AU CO7 43325[CISRA\RC\RCO7]AUspeci:SaF wherein sx= x/Ax and sy=y/Ay are re-sampling distances in the horizontal and vertical directions, respectively, and indicates matrix multiplication.
33. The method according to claim 25, wherein said steps to (vi) are carried out on one of a plurality of portions of said first mapping of discrete sample values of said image data.
34. The method according to claim 25, wherein the labels take precedence over said angle of orientation when forming said second mapping of said discrete sample values
35. The method according to claim 25, wherein said fourth mapping is at a different resolution to said first mapping.
36. The method according to claim 25, wherein said image data is colour image data.
37. The method according to claim 36, wherein steps and (ii) are carried out for each colour plane of said colour image data.
38. The method according to claim 36, wherein steps and (ii) are carried out for a luminance component of said colour image data.
39. The method according to claim 25, wherein step includes the following sub- steps: calculating a text indicator value, C; and comparing said text indicator value with a threshold value, wherein said labelling of each discrete sample value within each text region is based on said comparison. The method according to claim 39, wherein said text indicator, C, is of the form: C maxPo-P\ i CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF -31- and wherein i is the index of the 8 nearest neighbour discrete sample values to a centre discrete sample value, PO.
41. The method according to claim 39, wherein step includes the following sub- step: performing a cleaning operation on said text labels.
42. The method according to claim 41, wherein said cleaning operation is a morphological opening operation. s.. .4 .0 S So
43. steps: values; compari The method according to claim 25, wherein step (ii) includes the following sub- calculating edge response values for each of said discrete sample values; calculating a gradient magnitude value based on said edge response md comparing said gradient magnitude value with a threshold value; classifying a current pixel on the basis of said comparison; calculating said angle of orientation for a current pixel based on said son; and storing said angle of orientation.
44. The method according to claim 43, wherein said gradient magnitude value, Gm is of the form: G G and wherein Gv and Gh are the vertical and horizontal edge responses, respectively. The method according to claim 43, wherein said edge gradient value, G6 is of the form: CFP1111AU RC07 483325 [O:\CISRA\RC\RC07]AUspeci:SaF I- r I r: I -32- G, tan-' G, G, wherein Gv and Gh are the vertical and horizontal edge responses, respectively.
46. The method according to claim 25, wherein step (iv) includes the following sub- steps: accumulating a number of discrete data values of each angle of orientation for one of a plurality of portions of said discrete data values; *0O@ calculating a highest value and lowest value of discrete sample values for each angle of orientation; 10 comparing said highest and lowest values with highest and lowest threshold values, respectively; S* reassigning an angle of orientation of said discrete data values of said portion on the basis of said comparison; and repeating steps to for each of said portions of said discrete data values. S..
47. The method according to claim 46, wherein said plurality of portions of said discrete data values is five portions.
48. The method according to claim 25, wherein a modified cubic interpolation kernel is applied to a discrete data value which is labelled as text.
49. The method according to claim 48, wherein said modified cubic interpolation kernel, h(s), is of the form:

h(s) = 1, for |s| ≤ d
     = 2((|s| - d)/(1 - 2d))³ - 3((|s| - d)/(1 - 2d))² + 1, for d < |s| < 1 - d
     = 0, for |s| ≥ 1 - d

wherein s = t/Δt is a normalised coordinate that has integer values at a sample point and 0 ≤ d ≤ 0.5.
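A sketch of this modified kernel; d is left as a parameter (the default of 0.2 is arbitrary), and the function name is an assumption of the sketch.

```python
def modified_cubic_kernel(s: float, d: float = 0.2) -> float:
    """Modified cubic kernel: flat (value 1) within d of the sample point,
    zero beyond 1 - d, with a smooth cubic step in between."""
    a = abs(s)
    if a <= d:
        return 1.0
    if a >= 1.0 - d:
        return 0.0
    u = (a - d) / (1.0 - 2.0 * d)  # maps (d, 1 - d) onto (0, 1)
    return 2.0 * u ** 3 - 3.0 * u ** 2 + 1.0
```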
50. The method according to claim 25, wherein a steerable cubic interpolation kernel is applied to a discrete data value which is classified as an edge.
51. The method according to claim 50, wherein said steerable cubic interpolation kernels, h(sx, sy), are of the form:

h(sx, sy)θ=0 = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2 = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4 = h((sx + sy)/√2)c=0.5 · h((sx - sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx - sy)/√2)c=0.5

wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.
52. The method according to claim 50, wherein a conventional cubic interpolation kernel is applied to a discrete data value which is classified as smooth.
53. The method according to claim 52, wherein said conventional cubic interpolation kernel is of the form:

h(s) = (2 - (3/2)b - c)|s|³ + (-3 + 2b + c)|s|² + (1 - b/3), for |s| < 1
     = (-(1/6)b - c)|s|³ + (b + 5c)|s|² + (-2b - 8c)|s| + ((4/3)b + 4c), for 1 ≤ |s| < 2
     = 0, otherwise

and wherein b = 0 and c = 0.5.
54. An apparatus for interpolating image data comprising a first mapping of discrete sample values, said apparatus comprising: means for identifying text regions within said first mapping and labelling each discrete sample value within each text region; first calculating means for calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; combining means for combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; manipulating means for manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values, and manipulating said image data of said third mapping to form a fourth mapping of said image data; and interpolation means for interpolating each sample value of said fourth mapping with a first one of a plurality of kernels depending on said labels and said angle of orientation of each of said sample values of said fourth mapping to form a fifth mapping of said image data.

55. The apparatus according to claim 54, wherein said apparatus further comprises: associating means for associating each discrete sample value of said fourth mapping with a corresponding discrete sample value of said third mapping; scaling means for scaling said third mapping of said image data based on said association.
56. The apparatus according to claim 54, said apparatus further comprising interpolation means for interpolating said image data of said third mapping using a second kernel.
57. The apparatus according to claim 56, wherein said second kernel is an NN interpolation kernel.
58. The apparatus according to any one of claims 54 to 57, wherein said labels and said angle of orientation of each of said sample values of said fourth mapping are used to select kernel parameter values of said first kernel.
59. The apparatus according to claim 54, wherein said first kernel is a universal interpolation kernel, h(s).

60. The apparatus according to claim 59, wherein said universal interpolation kernel, h(s), is of the form:

h(s) = 1, for 0 ≤ |s| ≤ d
     = (2 - (3/2)b - c)((|s| - d)/(1 - 2d))³ + (-3 + 2b + c)((|s| - d)/(1 - 2d))² + (1 - b/3), for d < |s| ≤ 1 - d
     = 0, for 1 - d < |s| ≤ 1 + d
     = (-(1/6)b - c)((|s| - 3d)/(1 - 2d))³ + (b + 5c)((|s| - 3d)/(1 - 2d))² + (-2b - 8c)((|s| - 3d)/(1 - 2d)) + ((4/3)b + 4c), for 1 + d < |s| ≤ 2 - d
     = 0, otherwise

wherein s = t/Δt and 0 ≤ d ≤ 0.5.
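A sketch of this universal kernel as reconstructed above; b, c and d are left as parameters, and the default values shown are placeholders rather than values prescribed by the claim.

```python
def universal_kernel(s: float, d: float = 0.1, b: float = 0.0, c: float = 0.5) -> float:
    """Universal interpolation kernel: a two-parameter cubic (b, c) with a flat
    plateau of half-width d around |s| = 0 and a zero band around |s| = 1, so
    that d -> 0 recovers the plain cubic and larger d moves the response
    towards nearest-neighbour behaviour."""
    a = abs(s)
    if a <= d:
        return 1.0
    if a <= 1.0 - d:
        u = (a - d) / (1.0 - 2.0 * d)
        return (2 - 1.5 * b - c) * u ** 3 + (-3 + 2 * b + c) * u ** 2 + (1 - b / 3.0)
    if a <= 1.0 + d:
        return 0.0
    if a <= 2.0 - d:
        u = (a - 3.0 * d) / (1.0 - 2.0 * d)
        return ((-b / 6.0 - c) * u ** 3 + (b + 5 * c) * u ** 2
                + (-2 * b - 8 * c) * u + (4 * b / 3.0 + 4 * c))
    return 0.0
```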
61. The apparatus according to claim 54, wherein said plurality of kernels are given by:

h(sx, sy)θ=0 = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2 = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4 = h((sx + sy)/√2)c=0.5 · h((sx - sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx - sy)/√2)c=0.5

wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.

62. The apparatus according to claim 54, wherein the labels take precedence over said angle of orientation when forming said second mapping of said discrete sample values.
63. The apparatus according to claim 54, wherein said fourth mapping is at a different resolution to said first mapping.
64. The apparatus according to claim 54, wherein said image data is colour image data.
65. The apparatus according to claim 54, said apparatus further comprising: second calculating means for calculating a text indicator value, C; and comparing means for comparing said text indicator value with a threshold value, wherein said labelling of each discrete sample value within each text region is based on said comparison.
66. The apparatus according to claim 65, wherein said text indicator, C, is of the form:

C = max(|P0 - Pi|), i ∈ {1, ..., 8}

and wherein i is the index of the 8 nearest neighbour discrete sample values to a centre discrete sample value, P0.
67. The apparatus according to claim 65, wherein said means for identifying performs a cleaning operation on said text labels.
68. The apparatus according to claim 67, wherein said cleaning operation is a morphological opening operation.
69. The apparatus according to claim 54, wherein said first calculating means is configured to carry out the following functions: calculating edge response values for each of said discrete sample values; calculating a gradient magnitude value based on said edge response values; comparing said gradient magnitude value with a threshold value; classifying a current pixel on the basis of said comparison; calculating said angle of orientation for a current pixel based on said comparison; and storing said angle of orientation.

70. The apparatus according to claim 69, wherein said gradient magnitude value, Gm, is of the form:

Gm = √(Gv² + Gh²)

and wherein Gv and Gh are the vertical and horizontal edge responses, respectively.
71. The apparatus according to claim 69, wherein said edge gradient value, Gθ, is of the form:

Gθ = tan⁻¹(Gv/Gh)

wherein Gv and Gh are the vertical and horizontal edge responses, respectively.
72. The apparatus according to claim 54, wherein said manipulating means is configured to carry out the following functions: (i) accumulating a number of discrete data values of each angle of orientation for one of a plurality of portions of said discrete data values; (ii) calculating a highest value and lowest value of discrete sample values for each angle of orientation; (iii) comparing said highest and lowest values with highest and lowest threshold values, respectively; (iv) reassigning an angle of orientation of said discrete data values of said portion on the basis of said comparison; and (v) repeating the above functions (i) to (iv) for each of said portions of said discrete data values.
73. The apparatus according to claim 72, wherein said plurality of portions of said discrete data values is five portions.
74. The apparatus according to claim 54, wherein a modified cubic interpolation kernel is applied to a discrete data value which is labelled as text.

75. The apparatus according to claim 74, wherein said modified cubic interpolation kernel, h(s), is of the form:

h(s) = 1, for |s| ≤ d
     = 2((|s| - d)/(1 - 2d))³ - 3((|s| - d)/(1 - 2d))² + 1, for d < |s| < 1 - d
     = 0, for |s| ≥ 1 - d

wherein s = t/Δt is a normalised coordinate that has integer values at a sample point and 0 ≤ d ≤ 0.5.
76. The apparatus according to claim 74, wherein a steerable cubic interpolation kernel is applied to a discrete data value which is classified as an edge.
77. The apparatus according to claim 76, wherein said steerable cubic interpolation kernels, h(sx, sy), are of the form:

h(sx, sy)θ=0 = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2 = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4 = h((sx + sy)/√2)c=0.5 · h((sx - sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx - sy)/√2)c=0.5

wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.
78. The apparatus according to claim 76, wherein a conventional cubic interpolation kernel is applied to a discrete data value which is classified as smooth.
79. The apparatus according to claim 78, wherein said conventional cubic interpolation kernel is of the form:

h(s) = (2 - (3/2)b - c)|s|³ + (-3 + 2b + c)|s|² + (1 - b/3), for |s| < 1
     = (-(1/6)b - c)|s|³ + (b + 5c)|s|² + (-2b - 8c)|s| + ((4/3)b + 4c), for 1 ≤ |s| < 2
     = 0, otherwise

and wherein b = 0 and c = 0.5.

80. A computer readable medium for storing a program for an apparatus which processes data, said processing comprising a method of interpolating image data comprising a first mapping of discrete sample values, said program comprising: code for identifying text regions within said first mapping and labelling each discrete sample value within each text region; code for calculating edge information for each of said discrete sample values of said image data to identify edge sample values and storing an angle of orientation for each of said edge sample values; code for combining said labels and said angle of orientation for each of said discrete sample values to form a second mapping of said discrete sample values; code for manipulating said angle of orientation for each edge sample value within said second mapping to form a third mapping of said discrete sample values; code for manipulating said image data of said third mapping to form a fourth mapping of said image data; and code for interpolating each sample value of said fourth mapping with a first one of a plurality of kernels depending on said labels and said angle of orientation of each of said sample values of said fourth mapping to form a fifth mapping of said image data.

81. The computer readable medium according to claim 80, said program further comprising: code for associating each discrete sample value of said fourth mapping with a corresponding discrete sample value of said third mapping; code for scaling said third mapping of said image data based on said association.
82. The computer readable medium according to claim 80, wherein said program further comprises code for interpolating said image data of said third mapping using a second kernel. CFP1111AU RC07 483325 [0A\CISRA\RC\RC07jAUspedi:SaF -41
83. The computer readable medium according to claim 82, wherein said second kernel is an NN interpolation kernel.
84. The computer readable medium according to any one of claims 80 to 83, wherein said labels and said angle of orientation of each of said sample values of said fourth mapping are used to select kernel parameter values of said first kernel.

85. The computer readable medium according to claim 80, wherein said first kernel is a universal interpolation kernel, h(s).
86. The computer readable medium according to claim 85, wherein said universal interpolation kernel, h(s), is of the form:

h(s) = 1, for 0 ≤ |s| ≤ d
     = (2 - (3/2)b - c)((|s| - d)/(1 - 2d))³ + (-3 + 2b + c)((|s| - d)/(1 - 2d))² + (1 - b/3), for d < |s| ≤ 1 - d
     = 0, for 1 - d < |s| ≤ 1 + d
     = (-(1/6)b - c)((|s| - 3d)/(1 - 2d))³ + (b + 5c)((|s| - 3d)/(1 - 2d))² + (-2b - 8c)((|s| - 3d)/(1 - 2d)) + ((4/3)b + 4c), for 1 + d < |s| ≤ 2 - d
     = 0, otherwise

wherein s = t/Δt and 0 ≤ d ≤ 0.5.
87. The computer readable medium according to claim 80, wherein said plurality of kernels are given by:

h(sx, sy)θ=0 = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2 = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4 = h((sx + sy)/√2)c=0.5 · h((sx - sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx - sy)/√2)c=0.5

wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.

88. The computer readable medium according to claim 80, wherein the labels take precedence over said angle of orientation when forming said second mapping of said discrete sample values.

89. The computer readable medium according to claim 80, wherein said fourth mapping is at a different resolution to said first mapping.

90. The computer readable medium according to claim 80, wherein said image data is colour image data.
91. The computer readable medium according to claim 80, said program further comprising: code for calculating a text indicator value, C; and code for comparing said text indicator value with a threshold value, wherein said labelling of each discrete sample value within each text region is based on said comparison.
92. The computer readable medium according to claim 91, wherein said text indicator, C, is of the form:

C = max(|P0 - Pi|), i ∈ {1, ..., 8}

and wherein i is the index of the 8 nearest neighbour discrete sample values to a centre discrete sample value, P0.
93. The computer readable medium according to claim 91, said program further comprising: code for performing a cleaning operation on said text labels.
94. The computer readable medium according to claim 93, wherein said cleaning operation is a morphological opening operation.

95. The computer readable medium according to claim 80, said program further comprising: code for calculating edge response values for each of said discrete sample values; code for calculating a gradient magnitude value based on said edge response values; code for comparing said gradient magnitude value with a threshold value; code for classifying a current pixel on the basis of said comparison; code for calculating said angle of orientation for a current pixel based on said comparison; and code for storing said angle of orientation.
96. The computer readable medium according to claim 95, wherein said gradient magnitude value, Gm, is of the form:

Gm = √(Gv² + Gh²)

and wherein Gv and Gh are the vertical and horizontal edge responses, respectively.
97. The computer readable medium according to claim 95, wherein said edge gradient value, Gθ, is of the form:

Gθ = tan⁻¹(Gv/Gh)

wherein Gv and Gh are the vertical and horizontal edge responses, respectively.
98. The computer readable medium according to claim 80, said program further comprising: code for accumulating a number of discrete data values of each angle of orientation for one of a plurality of portions of said discrete data values; code for calculating a highest value and lowest value of discrete sample values for each angle of orientation; code for comparing said highest and lowest values with highest and lowest threshold values, respectively; and code for reassigning an angle of orientation of said discrete data values of said portion on the basis of said comparison.
99. The computer readable medium according to claim 80, wherein a modified cubic interpolation kernel is applied to a discrete data value which is labelled as text.

100. The computer readable medium according to claim 99, wherein said modified cubic interpolation kernel, h(s), is of the form:

h(s) = 1, for |s| ≤ d
     = 2((|s| - d)/(1 - 2d))³ - 3((|s| - d)/(1 - 2d))² + 1, for d < |s| < 1 - d
     = 0, for |s| ≥ 1 - d

wherein s = t/Δt is a normalised coordinate that has integer values at a sample point and 0 ≤ d ≤ 0.5.
101. The computer readable medium according to claim 80, wherein a steerable cubic interpolation kernel is applied to a discrete data value which is classified as an edge. CFP1523AU RC07 483325 SRA\RC\RC07]483325AU .doc:eaa i :;i
102. The computer readable medium according to claim 101, wherein said steerable cubic interpolation kernels, h(sx, sy), are of the form:

h(sx, sy)θ=0 = h(sx)c=0.5 · h(sy)c=0
h(sx, sy)θ=π/2 = h(sx)c=0 · h(sy)c=0.5
h(sx, sy)θ=π/4 = h((sx + sy)/√2)c=0.5 · h((sx - sy)/√2)c=0
h(sx, sy)θ=3π/4 = h((sx + sy)/√2)c=0 · h((sx - sy)/√2)c=0.5

wherein sx = x/Δx and sy = y/Δy are re-sampling distances in the horizontal and vertical directions, respectively, and · indicates matrix multiplication.

103. The computer readable medium according to claim 101, wherein a conventional cubic interpolation kernel is applied to a discrete data value which is classified as smooth.

104. The computer readable medium according to claim 103, wherein said conventional cubic interpolation kernel is of the form:

h(s) = (2 - (3/2)b - c)|s|³ + (-3 + 2b + c)|s|² + (1 - b/3), for |s| < 1
     = (-(1/6)b - c)|s|³ + (b + 5c)|s|² + (-2b - 8c)|s| + ((4/3)b + 4c), for 1 ≤ |s| < 2
     = 0, otherwise

and wherein b = 0 and c = 0.5.
105. A method of interpolating a first set of discrete sample values to generate a second set of discrete sample values using one of a plurality of interpolation kernels, substantially as hereinbefore described with reference to any one of the embodiments as illustrated in the accompanying drawings. CFP1523AU RC07 483325 [O:\CISRA\RC\RC07]483325AU.doc:eaa -46-
106. A method of interpolating image data, substantially as hereinbefore described with reference to any one of the embodiments as illustrated in the accompanying drawings.
107. An apparatus for interpolating image data, substantially as hereinbefore described with reference to any one of the embodiments as illustrated in the accompanying drawings.
108. A computer readable medium for storing a program for an apparatus which processes data, said processing comprising a method of interpolating image data, said program being substantially as hereinbefore described with reference to any one of the embodiments as illustrated in the accompanying drawings.

Dated 13 December, 1999
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant/Nominated Person
SPRUSON FERGUSON
AU65270/99A 1998-12-18 1999-12-16 A method of kernel selection for image interpolation Ceased AU745562B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU65270/99A AU745562B2 (en) 1998-12-18 1999-12-16 A method of kernel selection for image interpolation

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPP7798 1998-12-18
AUPP007798 1998-12-18
AU65270/99A AU745562B2 (en) 1998-12-18 1999-12-16 A method of kernel selection for image interpolation

Publications (2)

Publication Number Publication Date
AU6527099A AU6527099A (en) 2000-06-22
AU745562B2 true AU745562B2 (en) 2002-03-21

Family

ID=25634648

Family Applications (1)

Application Number Title Priority Date Filing Date
AU65270/99A Ceased AU745562B2 (en) 1998-12-18 1999-12-16 A method of kernel selection for image interpolation

Country Status (1)

Country Link
AU (1) AU745562B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003102903A2 (en) * 2002-06-03 2003-12-11 Koninklijke Philips Electronics N.V. Adaptive scaling of video signals
EP1833018A2 (en) * 2006-03-06 2007-09-12 Sony Corporation Image processing apparatus, image processing method, recording medium, and program

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU759341B2 (en) * 1999-10-29 2003-04-10 Canon Kabushiki Kaisha Method for kernel selection for image interpolation
AUPQ377899A0 (en) 1999-10-29 1999-11-25 Canon Kabushiki Kaisha Phase three kernel selection

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990016035A2 (en) * 1989-06-16 1990-12-27 Eastman Kodak Company Digital image interpolator
WO1996016380A1 (en) * 1994-11-23 1996-05-30 Minnesota Mining And Manufacturing Company System and method for adaptive interpolation of image data
EP0908845A1 (en) * 1997-10-09 1999-04-14 Agfa-Gevaert N.V. Image sharpening and re-sampling method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990016035A2 (en) * 1989-06-16 1990-12-27 Eastman Kodak Company Digital image interpolator
WO1996016380A1 (en) * 1994-11-23 1996-05-30 Minnesota Mining And Manufacturing Company System and method for adaptive interpolation of image data
EP0908845A1 (en) * 1997-10-09 1999-04-14 Agfa-Gevaert N.V. Image sharpening and re-sampling method

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003102903A2 (en) * 2002-06-03 2003-12-11 Koninklijke Philips Electronics N.V. Adaptive scaling of video signals
WO2003102903A3 (en) * 2002-06-03 2004-02-26 Koninkl Philips Electronics Nv Adaptive scaling of video signals
CN1324526C (en) * 2002-06-03 2007-07-04 皇家飞利浦电子股份有限公司 Adaptive scaling of video signals
EP1833018A2 (en) * 2006-03-06 2007-09-12 Sony Corporation Image processing apparatus, image processing method, recording medium, and program
EP1833018A3 (en) * 2006-03-06 2007-09-19 Sony Corporation Image processing apparatus, image processing method, recording medium, and program
US7936942B2 (en) 2006-03-06 2011-05-03 Sony Corporation Image processing apparatus, image processing method, recording medium, and program

Also Published As

Publication number Publication date
AU6527099A (en) 2000-06-22

Similar Documents

Publication Publication Date Title
US7054507B1 (en) Method of kernel selection for image interpolation
US6928196B1 (en) Method for kernel selection for image interpolation
EP1347410B1 (en) Edge-based enlargement and interpolation of images
JP3887245B2 (en) Gradation descreening using sigma filters
US6816166B2 (en) Image conversion method, image processing apparatus, and image display apparatus
US20030189579A1 (en) Adaptive enlarging and/or sharpening of a digital image
Morse et al. Image magnification using level-set reconstruction
AU727503B2 (en) Image filtering method and apparatus
Su et al. Neighborhood issue in single-frame image super-resolution
US7061492B2 (en) Text improvement
US7333674B2 (en) Suppression of ringing artifacts during image resizing
EP0874330A2 (en) Area based interpolation for image enhancement
JPH06245113A (en) Equipment for improving picture still more by removing noise and other artifact
JP2003018398A (en) Method for generating a super-resolution image from pixel image
WO2009154596A1 (en) Method and system for efficient video processing
JP2005122361A (en) Image processor, its processing method, computer program, and recording medium
JP2007527567A (en) Image sharpening with region edge sharpness correction
JP2000182039A (en) Image processing method, image processor and computer readable medium
AU745562B2 (en) A method of kernel selection for image interpolation
JP3026706B2 (en) Image processing device
US6687417B1 (en) Modified kernel for image interpolation
US8358867B1 (en) Painterly filtering
AU748831B2 (en) A modified kernel for image interpolation
AU745082B2 (en) A steerable kernel for image interpolation
AU759341B2 (en) Method for kernel selection for image interpolation

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)