GB2466375A - Improved method for interpolation of tristimulus values on pixels of an image sensor.


Info

Publication number
GB2466375A
Authority
GB
United Kingdom
Prior art keywords
pixels
values
tristimulus values
tristimulus
interpolated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0921952A
Other versions
GB2466375B (en)
GB0921952D0 (en)
Inventor
Alfred Nischwitz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LFK Lenkflugkoerpersysteme GmbH
Original Assignee
LFK Lenkflugkoerpersysteme GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LFK Lenkflugkoerpersysteme GmbH filed Critical LFK Lenkflugkoerpersysteme GmbH
Publication of GB0921952D0
Publication of GB2466375A
Application granted
Publication of GB2466375B
Expired - Fee Related
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/12Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H04N9/045
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2209/00Details of colour television systems
    • H04N2209/04Picture signal generators
    • H04N2209/041Picture signal generators using solid-state devices
    • H04N2209/042Picture signal generators using solid-state devices having a single pick-up sensor
    • H04N2209/045Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
    • H04N2209/046Colour interpolation to calculate the missing colour values

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Color Television Image Signal Generators (AREA)
  • Color Image Communication Systems (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The invention improves a known adaptive homogeneity-directed (AHD) method for interpolating missing tristimulus values on pixels of an image sensor with a colour filter in one of three colours, for example RGB, in front of each pixel (such as the Bayer filter of Figure 1). The improvements comprise different variants of individual method steps, provided individually or in combination. The method employs the horizontal (vertical) colour differences for calculating the horizontal (vertical) red and blue values. According to the invention, first the missing blue values are calculated at the red pixels and the missing red values are calculated at the blue pixels (step 2a). Only then are the missing blue and red values at the green pixels calculated, as indicated in step 2b. The method further comprises the calculation of homogeneity values (also in the CIELAB space) in order to select the horizontal or vertical tristimulus values.

Description

Method for Interpolating Tristimulus Values on Pixels of an Image Sensor
TECHNICAL FIELD
The present invention relates to a method for interpolating tristimulus or colour values on pixels of an image sensor.
PRIOR ART
Digital colour cameras usually have a single two-dimensional sensor (CCD or CMOS chip), in front of which a so-called "Bayer" filter is arranged (Bayer colour-filter array, abbreviated to Bayer-CFA). Fig. 1 shows such a Bayer colour-filter array. A colour filter of one colour is positioned in front of each pixel, so that only one of the three tristimulus values R (for "red"), G (for "green"), B (for "blue") is measured per pixel. The colour information of the other two tristimulus values is therefore missing for this pixel; these values are determined by the so-called demosaicing method. Instead of an RGB colour filter, it is also possible to provide a CMY colour filter in the colours cyan (C), magenta (M), yellow (Y). The purpose of a demosaicing method is therefore to reconstruct the two respectively missing tristimulus values from the surrounding measurement values of a pixel. A complete image is obtained as a result, in which all three tristimulus values R, G and B or C, M and Y are available on each pixel.
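As a small illustration (my own, not part of the patent text; the indexing convention is assumed from the pixel labels R23, B32, G22 and G33 used in Fig. 1), the Bayer mosaic can be modelled as a function that reports which of the three tristimulus values is measured at a given pixel:

def bayer_channel(row: int, col: int) -> str:
    # Assumed layout of Fig. 1: green on (even, even) and (odd, odd),
    # red on (even, odd), blue on (odd, even).
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

assert bayer_channel(2, 3) == "R"   # pixel R23
assert bayer_channel(3, 2) == "B"   # pixel B32
assert bayer_channel(3, 3) == "G"   # pixel G33

Only the value returned here is measured at each pixel; the two missing tristimulus values must be reconstructed by the demosaicing method.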
Very many demosaicing methods have already been described to date, for example in US 4,774,565, in US 5,629,734 or in US 7,053,908. However, the image quality thereby achieved is not yet very satisfactory. The quality of the demosaicing method in question is typically measured as a mean-square error from the original image in the RGB colour space, in the HSI colour space (H = hue, S = saturation, I = intensity), or in the YUV colour space (Y = intensity adapted to human perception, U = weighted colour difference B-Y, V = weighted colour difference R-Y). The human observer must furthermore have a good visual impression of the result (Faille, F.: "Comparison of demosaicing methods for color information extraction", Proceedings of the Computer Vision and Graphics International Conference, ICCVG 2004, p. 820-825, Warsaw, Poland, September 2004, ISBN: 978-1-4020-4178-5). Very good results, particularly for images having pronounced edges and lines, that is to say with high spatial frequencies, can be achieved with the "adaptive homogeneity-directed demosaicing algorithm" (abbreviated to AHD) proposed by Hirakawa & Parks in 2005 (Hirakawa, K., Parks, T. W., "Adaptive homogeneity-directed demosaicing algorithm", IEEE Transactions on Image Processing, Vol. 14, No. 3, March 2005). For very noisy images, a relatively simple method, for example the "high-quality linear interpolation for demosaicing" algorithm of Malvar, He & Cutler (Malvar, H. S.; He, L.-W. & Cutler, R., "High-quality linear interpolation for demosaicing of Bayer-patterned color images", IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004 (ICASSP '04), p. 485-488), performs better than the elaborate "adaptive homogeneity-directed demosaicing algorithm" of Hirakawa and Parks.
Description of the "adaptive homogeneity-directed
demosaicing algorithm" (AHD) The description will be given with the aid of an RGB colour space by way of example.
1st step: First interpolate the missing green values on the red/blue pixels (in Fig. 1 for example R23 or B32), and specifically one each for the horizontal (G_h) and vertical (G_v) privileged directions with the following filter kernel: [-1/4, +1/2, +1/2, +1/2, -1/4] (in the vertical direction, the filter kernel needs to be transposed), for example:
G23_h = (-R21 + 2*G22 + 2*R23 + 2*G24 - R25)/4;
G23_v = (-R03 + 2*G13 + 2*R23 + 2*G33 - R43)/4;
Finally a median is formed in the privileged direction:
G23_h = Median(G22, G23_h, G24);
G23_v = Median(G13, G23_v, G33);
If the algorithm is to be implemented on hardware with stringent memory restrictions (FPGAs, ASICs), the first step should be carried out only in a 7x7-pixel central region within an 11x11-pixel catchment area around the current pixel.
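A minimal sketch of this first step (my own, not taken from the patent; it assumes the raw mosaic is held in a NumPy array cfa indexed as in Fig. 1, with enough border pixels available):

import numpy as np

def green_h_v(cfa: np.ndarray, r: int, c: int):
    # Horizontal and vertical green estimates at a non-green pixel (r, c)
    # using the [-1/4, 1/2, 1/2, 1/2, -1/4] kernel of the 1st step.
    g_h = (-cfa[r, c-2] + 2*cfa[r, c-1] + 2*cfa[r, c] + 2*cfa[r, c+1] - cfa[r, c+2]) / 4.0
    g_v = (-cfa[r-2, c] + 2*cfa[r-1, c] + 2*cfa[r, c] + 2*cfa[r+1, c] - cfa[r+2, c]) / 4.0
    # Median in the respective privileged direction limits overshoots at edges.
    g_h = np.median([cfa[r, c-1], g_h, cfa[r, c+1]])
    g_v = np.median([cfa[r-1, c], g_v, cfa[r+1, c]])
    return g_h, g_v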
2nd step:
a) Interpolate the missing B values on the R pixels (in Fig. 1 for example R43) and the missing R values on the B pixels (in Fig. 1 for example B34), and specifically one each for the horizontal (R_h, B_h) and vertical (R_v, B_v) privileged directions, for example:
R34_h = G34_h + (R23 - G23_h + R25 - G25_h + R43 - G43_h + R45 - G45_h)/4;
R34_v = G34_v + (R23 - G23_v + R25 - G25_v + R43 - G43_v + R45 - G45_v)/4;
B43_h = G43_h + (B32 - G32_h + B34 - G34_h + B52 - G52_h + B54 - G54_h)/4;
B43_v = G43_v + (B32 - G32_v + B34 - G34_v + B52 - G52_v + B54 - G54_v)/4;
b) Interpolate the missing R/B values on the G pixels (in Fig. 1 for example G33), and specifically one each for the horizontal (R_h, B_h) and vertical (R_v, B_v) privileged directions, for example:
R33_h = G33 + (R23 - G23_h + R43 - G43_h)/2;
R33_v = G33 + (R23 - G23_v + R43 - G43_v)/2;
B33_h = G33 + (B32 - G32_h + B34 - G34_h)/2;
B33_v = G33 + (B32 - G32_v + B34 - G34_v)/2;
If the algorithm is to be implemented on hardware with stringent memory restrictions (FPGAs, ASICs), the second step should be carried out only in a 5x5-pixel central region within an 11x11-pixel catchment area around the current pixel.
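A sketch of step 2a at a red pixel (my own, hypothetical array names: G_h is the horizontal green plane from the 1st step with the measured green hot pixels copied in; the vertical case uses G_v analogously):

def blue_at_red_h(cfa, G_h, r, c):
    # Horizontal blue estimate at a red pixel (r, c): the colour differences
    # (B - G_h) of the four diagonal blue neighbours are averaged and added
    # to the horizontal green value at the red pixel itself.
    diagonals = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    diff = sum(cfa[r+dr, c+dc] - G_h[r+dr, c+dc] for dr, dc in diagonals) / 4.0
    return G_h[r, c] + diff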
3rd step: Conversion of RGB tristimulus values via the CIE-XYZ colour space into the CIE-Lab colour space:
a) Transformation from RGB to CIE-XYZ with the following matrix operation:
| X |   | 0.412453  0.357580  0.180423 |   | R |
| Y | = | 0.212671  0.715160  0.072169 | * | G |
| Z |   | 0.019334  0.119193  0.950227 |   | B |
b) Transformation from CIE-XYZ to CIE-Lab:
L = 116 * f(Y/Yn) - 16;
a = 500 * (f(X/Xn) - f(Y/Yn));
b = 200 * (f(Y/Yn) - f(Z/Zn));
where the white point for a Planckian radiator at 6500 K is: {Xn, Yn, Zn} = {0.950456, 1, 1.088754}; and
f(x) = x^(1/3) if x > 0.008856, = 7.787*x + 16/116.0 otherwise.
4th step: Compilation of homogeneity maps for the horizontal (homo_h) and vertical (homo_v) privileged directions:
a) Calculation of the absolute differences of the L values from the four respective neighbouring positions (right, left, up, down), for example:
horizontal privileged direction:
L44_h_r = |L45_h - L44_h|; (right)
L44_h_l = |L43_h - L44_h|; (left)
L44_h_o = |L34_h - L44_h|; (up)
L44_h_u = |L54_h - L44_h|; (down)
vertical privileged direction:
L44_v_r = |L45_v - L44_v|; (right)
L44_v_l = |L43_v - L44_v|; (left)
L44_v_o = |L34_v - L44_v|; (up)
L44_v_u = |L54_v - L44_v|; (down)
b) Calculation of the absolute differences of the ab values from the four respective neighbouring positions (right, left, up, down), for example:
horizontal privileged direction:
ab44_h_r = (a45_h - a44_h)^2 + (b45_h - b44_h)^2; (right)
ab44_h_l = (a43_h - a44_h)^2 + (b43_h - b44_h)^2; (left)
ab44_h_o = (a34_h - a44_h)^2 + (b34_h - b44_h)^2; (up)
ab44_h_u = (a54_h - a44_h)^2 + (b54_h - b44_h)^2; (down)
vertical privileged direction:
ab44_v_r = (a45_v - a44_v)^2 + (b45_v - b44_v)^2; (right)
ab44_v_l = (a43_v - a44_v)^2 + (b43_v - b44_v)^2; (left)
ab44_v_o = (a34_v - a44_v)^2 + (b34_v - b44_v)^2; (up)
ab44_v_u = (a54_v - a44_v)^2 + (b54_v - b44_v)^2; (down)
c) Calculation of the limit values (for example L44_eps, ab44_eps) as a comparative measure of the homogeneity, for example:
L44_h_max = max(L44_h_r, L44_h_l); L44_v_max = max(L44_v_o, L44_v_u); L44_eps = min(L44_h_max, L44_v_max);
ab44_h_max = max(ab44_h_r, ab44_h_l); ab44_v_max = max(ab44_v_o, ab44_v_u); ab44_eps = min(ab44_h_max, ab44_v_max);
d) Calculation of the homogeneity values (for example homo44_h, homo44_v), for example:
homo44_h = homo44_v = 0; /* initialisation */
if (L44_h_r <= L44_eps && ab44_h_r <= ab44_eps) homo44_h += 1;
if (L44_h_l <= L44_eps && ab44_h_l <= ab44_eps) homo44_h += 1;
if (L44_h_o <= L44_eps && ab44_h_o <= ab44_eps) homo44_h += 1;
if (L44_h_u <= L44_eps && ab44_h_u <= ab44_eps) homo44_h += 1;
if (L44_v_r <= L44_eps && ab44_v_r <= ab44_eps) homo44_v += 1;
if (L44_v_l <= L44_eps && ab44_v_l <= ab44_eps) homo44_v += 1;
if (L44_v_o <= L44_eps && ab44_v_o <= ab44_eps) homo44_v += 1;
if (L44_v_u <= L44_eps && ab44_v_u <= ab44_eps) homo44_v += 1;
5th step: Decision as to whether the horizontal or vertical tristimulus value is selected.
a) Low-pass filtering of the two homogeneity maps with a 3x3 moving average:
ho44_h = homo33_h + homo34_h + homo35_h + homo43_h + homo44_h + homo45_h + homo53_h + homo54_h + homo55_h;
ho44_v = homo33_v + homo34_v + homo35_v + homo43_v + homo44_v + homo45_v + homo53_v + homo54_v + homo55_v;
b) Selection of the vertical or horizontal tristimulus value, or averaging, for example:
if (ho44_h == ho44_v) { R44 = (R44_h + R44_v)/2; G44 = (G44_h + G44_v)/2; B44 = (B44_h + B44_v)/2; }
else if (ho44_h > ho44_v) { R44 = R44_h; G44 = G44_h; B44 = B44_h; }
else { R44 = R44_v; G44 = G44_v; B44 = B44_v; }
6th step: Median filtering of the colour differences (R-G) and (B-G), various filter kernels being possible, for example the 3x3 filter kernel of the nearest neighbours:
repeat m times
R = Median(R - G) + G;
B = Median(B - G) + G;
G = 1/2 * (Median(G - R) + Median(G - B) + R + B);
end_repeat.
NB: the originally measured tristimulus values (so-called hot pixels) are also changed in the 6th step.
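A compact sketch of the colour-space conversion of the 3rd step (my own; it assumes RGB values scaled to [0, 1] and uses the matrix and white point quoted above):

import numpy as np

M_RGB_TO_XYZ = np.array([[0.412453, 0.357580, 0.180423],
                         [0.212671, 0.715160, 0.072169],
                         [0.019334, 0.119193, 0.950227]])
WHITE_D65 = np.array([0.950456, 1.0, 1.088754])   # {Xn, Yn, Zn}

def f(x):
    x = np.asarray(x, dtype=float)
    return np.where(x > 0.008856, np.cbrt(x), 7.787 * x + 16.0 / 116.0)

def rgb_to_lab(rgb):
    # rgb: array of shape (..., 3); returns CIE-Lab values of the same shape.
    xyz = (rgb @ M_RGB_TO_XYZ.T) / WHITE_D65      # X/Xn, Y/Yn, Z/Zn
    L = 116.0 * f(xyz[..., 1]) - 16.0
    a = 500.0 * (f(xyz[..., 0]) - f(xyz[..., 1]))
    b = 200.0 * (f(xyz[..., 1]) - f(xyz[..., 2]))
    return np.stack([L, a, b], axis=-1)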
The basic concept of the "adaptive homogeneity-directed demosaicing algorithm" of Hirakawa and Parks consists in the strictly separated calculation of tristimulus values for the horizontal and vertical privileged directions. This basic concept, however, is disobeyed in step 2b) because in green-blue rows on the green hot pixels (Fig. 1 for example G33) the calculation of the red value in the horizontal direction (R33_h) is carried out with the aid of the colour differences (R23 - G23_h + R43 - G43_h) in the vertical direction (Fig. 2a), and the calculation of the blue value in the vertical direction (B33_v) is carried out with the aid of the colour differences (B32 - G32_v + B34 - G34_v) in the horizontal direction.
The basic concept is likewise departed from in step 2b) because in green-red rows on the green hot pixels (Fig. 1 for example G44) the calculation of the red value in the vertical direction (R44_v) is carried out with the aid of the colour differences (R43 - G43_v + R45 - G45_v) in the horizontal direction, and the calculation of the blue value in the horizontal direction (B44_h) is carried out with the aid of the colour differences (B34 - G34_h + B54 - G54_h) in the vertical direction (Fig. 2b).
Disadvantages of AHD are: the algorithm is very noise-sensitive, i.e. strong colour artefacts occur with correlated, uncoloured noise (the same random value is added to all three tristimulus values of a pixel); strong colour artefacts likewise occur when there are jumps in the chrominance value, as in the region of the maximum spatial frequency (pixels alternately white and black); and in relatively homogeneous image regions with little noise, AHD generates readily visible substructures.
SUMMARY OF THE INVENTION
It is an object of the invention, i.e. the technical problem on which it is based, to refine the known demosaicing method so that the image quality is improved even further and the noise is reduced even more.
The invention consists in a method for interpolating pixels of an image sensor having the features of Claim 1.
The method according to the invention has the following steps: 1) interpolating the F2 tristimulus values on the F1 pixels and on the F3 pixels, in a first direction and in a second direction respectively over the pixel in question; then 2a1) interpolating the F3 tristimulus values on the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values on the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; and then 2b1) interpolating the F3 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels respectively neighbouring the F2 pixel; 2b2) interpolating the F1 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels respectively neighbouring the F2 pixel.
Using the example F1 = red, F2 = green and F3 = blue, the method thus also employs the horizontal colour differences for respectively calculating the horizontal red and blue values, and the vertical colour differences for respectively calculating the vertical red and blue values.
According to the invention, first the missing blue values at the red pixels are calculated (step 2a1) and the missing red values are calculated at the blue pixels (step 2a2). Only then are the missing blue and red values calculated at the green pixels, as indicated in steps 2b1 and 2b2. This sequence of steps 2b after steps 2a does not play a part in the Hirakawa and Parks algorithm known from the prior art.
ADVANTAGES
The proposed technical solution measurably and visibly improves the demosaicing result. The deviations of the F1 and F3 tristimulus values (red and blue values in the RGB colour space) from the original values are significantly reduced (about 10-15% lower mean-square errors), while the F2 tristimulus values (green values in the RGB colour space) scarcely change perceptibly (about ±2% changes in the mean-square errors). This also applies for very noisy images.
In terms of visual impression, the colour deviations are visibly reduced significantly by the proposed technical solution.
An advantageous refinement of the method comprises the following additional steps: 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the horizontal tristimulus value or the vertical tristimulus value is selected.
Alternatively, the method may be refined by the following steps: 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the horizontal tristimulus value or the vertical tristimulus value is selected.
An alternative, second method of the invention has the following steps: 1) interpolating the F2 tristimulus values on the F1 pixels and on the F3 pixels, in a first direction and in a second direction respectively over the pixel in question; 2a1) interpolating the F3 tristimulus values on the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values on the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values for a component X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction and the component 100-X complementary thereto of the tristimulus value interpolated in the second direction.
In a variant, this second solution may have the following steps: 1) interpolating the F2 tristimulus values on the F1 pixels and on the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; 2a1) interpolating the F3 tristimulus values on the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values on the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values for a component X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction and the component 100-X complementary thereto of the tristimulus value interpolated in the second direction.
This second version of the method consists in making better use of the information contained in the homogeneity maps.
For small differences in the homogeneity values, a hard yes/no decision is not made; instead, the definitive tristimulus value is made up of a component (for example the percentage) "X", determined by the homogeneity difference, of the (for example horizontal) tristimulus value interpolated in the first direction and the complementary component (for example the percentage) "100-X" of the (for example vertical) tristimulus value interpolated in the second direction. The greater the differences in the homogeneity values are, the more strongly the tristimulus values of the more homogeneous privileged direction will be weighted. This measurably and visibly improves the demosaicing result significantly.
In the aforementioned methods, the following may additionally be provided as a further step: 6) carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
A third aspect of the invention relates to a method having the following steps: 1) interpolating the F2 tristimulus values on the F1 pixels and on the F3 pixels, in a first direction and in a second direction respectively over the pixel in question; 2a1) interpolating the F3 tristimulus values on the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values on the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values; and with the further step: carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
Alternatively, the third solution may have the following steps: 1) interpolating the F2 tristimulus values on the F1 pixels and on the F3 pixels, in a first direction and in a second direction respectively over the pixel in question; 2a1) interpolating the F3 tristimulus values on the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values on the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values on the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values; and with the further step: carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
The improvement over the prior art in the third class of method consists in those tristimulus values which are hot pixels, i.e. have been measured directly by the sensor, remaining unchanged. This measurably and visibly improves the demosaicing result significantly. The deviations of all three tristimulus values F1, F2, F3 (for example red, green, blue) from the original values are significantly reduced relative to the prior art (about 40-45% lower mean-square errors), while the deviations tend to be reduced even more strongly with increasing noise than without noise. In terms of visual impression, the colour deviations are visibly reduced significantly by the proposed technical solution.
All the aforementioned alternative solutions may also be combined with one another. For instance, in the second and third solutions, the interpolation of the F3 tristimulus values (for example the blue values) in step 2b1 may be carried out with the aid of the (for example red) F1 pixels respectively neighbouring the (for example green) F2 pixel and the interpolation of the F1 tristimulus (red) values in step 2b2 may be carried out with the aid of the (blue) F3 pixels respectively neighbouring the (green) F2 pixel.
Likewise, the first and third solutions may also comprise the feature that the decision in step 5, as to whether the tristimulus value interpolated in the first (horizontal) direction or the tristimulus value interpolated in the second (vertical) direction is selected, may be carried out by forming a definitive tristimulus value on the basis of the homogeneity values for a component (for example a percentage) X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction, and the component (for example a percentage) 100-X complementary thereto of the tristimulus value interpolated in the second direction.
The step of median filtering the colour differences (F1-F2) and (F3-F2), while the tristimulus values that relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged, is preferably carried out as step 6) after the step of deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected.
It is also advantageous for the step of median filtering the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged, to be carried out before the step of deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected. In this case, it is particularly advantageous for the step of median filtering the colour differences (F1-F2) and (F3-F2), while the tristimulus values that relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged, to be carried out as step 2c) immediately after steps 2a1) to 2b2). This reduces the deviations of all three tristimulus values F1, F2, F3 (for example red, green, blue) from the original values, so that the horizontal/vertical decision, i.e. the decision in respect of selecting the tristimulus values interpolated in the first or second direction, is positively affected.
Lastly, in all variants of the present invention it is advantageous for a median of the two F2 tristimulus values interpolated in step 1 to be formed in the first direction and in the second direction.
The three colours F1, F2, F3 are preferably red, green and blue, or alternatively yellow, magenta and cyan.
Preferably, the first direction and the second direction are mutually perpendicular.
The invention furthermore relates to a computer program having program code means for carrying out all the method steps according to any of the method claims when the program is run on a computer.
By means of this computer program, after readout of the tristimulus values on pixels of an image sensor, the method according to the invention for interpolating tristimulus values is therefore carried out and the interpolated tristimulus values are stored together with the measured, non-interpolated tristimulus values as an image file in a memory device.
The invention also relates to a computer program product having program code means, which are stored on a computer-readable data medium, for carrying out all the method steps according to at least one of Claims 1 to 15 when the program product is run on a computer. This computer program product also enables the inventive interpolation of tristimulus values on the basis of tristimulus values measured on pixels of an image sensor, and subsequent storage of the measured and interpolated tristimulus values as an image file in a memory device.
Lastly, the invention also provides a computer-readable data carrier, on which a computer program according to Claim 16 or a computer program product according to Claim 17 is stored.
Preferred exemplary embodiments of the invention with additional configuration details and further advantages will be described and explained in more detail below with reference to the appended drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 shows a two-dimensional sensor, in front of which a so-called "Bayer" colour filter is arranged;
Fig. 2a shows an example of the calculation of the horizontal red value with vertical colour differences according to the prior art;
Fig. 2b shows an example of the calculation of the horizontal blue value with vertical colour differences according to the prior art;
Fig. 3a shows the mixing of the horizontal and vertical tristimulus values as a function of the homogeneity difference for linear colour mixing;
Fig. 3b shows the mixing of the horizontal and vertical tristimulus values as a function of the homogeneity difference for colour mixing according to complementary sigmoid functions;
Fig. 4 shows an image detail of the "USAF" reference image as a comparison of a method in accordance with the invention with methods from the prior art;
Fig. 5 shows an image detail of the "lighthouse" reference image as a comparison of a method in accordance with the invention with methods from the prior art; and
Fig. 6 shows an image detail of the noisy "USAF" reference image as a comparison of a method in accordance with the invention with methods from the prior art.
DESCRIPTION OF EMBODIMENTS
First Embodiment
A first exemplary technical solution to the problem consists in also employing the horizontal colour differences for respectively calculating the horizontal red and blue values, and also the vertical colour differences for respectively calculating the vertical red and blue values, for example:
R33_h = G33 + (R32_h - G32_h + R34_h - G34_h)/2;
B33_v = G33 + (B23_v - G23_v + B43_v - G43_v)/2;
R44_v = G44 + (R34_v - G34_v + R54_v - G54_v)/2;
B44_h = G44 + (B43_h - G43_h + B45_h - G45_h)/2;
A prerequisite for this, however, is that the missing B values are first calculated on the R pixels (in Fig. 1 for example R43) and the missing R values on the B pixels (in Fig. 1 for example B34) (steps 2a1 and 2a2), and only then the missing R/B values on the green hot pixels (in Fig. 1 for example G33, G44; steps 2b1 and 2b2). This sequence does not play a part in the Hirakawa and Parks algorithm known from the prior art.
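A sketch of this inventive variant of step 2b at a green hot pixel in a green-blue row (my own; the array names are hypothetical: G_h, G_v are the green planes from step 1, and R_h, R_v, B_h, B_v already hold the red/blue values interpolated at the blue/red pixels in steps 2a1 and 2a2, with the measured hot-pixel values copied in):

def red_blue_at_green_gb_row(cfa, G_h, G_v, R_h, R_v, B_h, B_v, r, c):
    g = cfa[r, c]                      # measured green (hot pixel)
    # Horizontal values now use only horizontal colour differences ...
    R_h_val = g + (R_h[r, c-1] - G_h[r, c-1] + R_h[r, c+1] - G_h[r, c+1]) / 2.0
    B_h_val = g + (cfa[r, c-1] - G_h[r, c-1] + cfa[r, c+1] - G_h[r, c+1]) / 2.0
    # ... and vertical values only vertical colour differences.
    R_v_val = g + (cfa[r-1, c] - G_v[r-1, c] + cfa[r+1, c] - G_v[r+1, c]) / 2.0
    B_v_val = g + (B_v[r-1, c] - G_v[r-1, c] + B_v[r+1, c] - G_v[r+1, c]) / 2.0
    return R_h_val, R_v_val, B_h_val, B_v_val

Here the horizontal neighbours (r, c-1) and (r, c+1) are blue hot pixels, so their measured blue values come from cfa while their red values come from step 2a2; the vertical neighbours are red hot pixels and are treated the other way round.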
If the algorithm is to be implemented on hardware with stringent memory restrictions (FPGAs, ASICs), the originally 11x11-pixel catchment area must be increased to 13x13 pixels around the current pixel to be calculated. This is because, in order to calculate the vertical blue value B33_v at the green hot pixel G33, the vertical blue value B23_v is needed, which in turn requires the vertical green values G12_v, G14_v, G32_v, G34_v. The two vertical green values G12_v, G14_v are critical here, since the blue hot pixels B12 and B14 are needed in order to calculate them according to the first step, so that the catchment area is indeed increased to 13x13 pixels. Assistance can be provided here by using a simplified algorithm for calculating the vertical blue value B23_v, for example the "high-quality linear interpolation for demosaicing" algorithm of Malvar, He & Cutler, which has already been mentioned in the introduction:
B23_v = ((B12 + B14 + B32 + B34)*2 + R23*6 - (R03 + R43 + R21 + R25)*1.5)/8;
and likewise for the horizontal red values, for example R32_h:
R32_h = ((R21 + R41 + R23 + R43)*2 + B32*6 - (B12 + B52 + B30 + B34)*1.5)/8;
The required catchment area therefore remains restricted to 11x11 pixels, and the quality of the result scarcely changes relative to the AHD method of Hirakawa & Parks.
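A sketch of this simplified fallback (my own; the same kernel serves for blue at a red pixel and red at a blue pixel):

def rb_at_opposite_rb(cfa, r, c):
    # Malvar-He-Cutler estimate of the "other" colour at a red or blue pixel:
    # 2x the four diagonal neighbours of the other colour, plus 6x the centre
    # pixel, minus 1.5x the four same-colour pixels at distance 2, divided by 8.
    diag = cfa[r-1, c-1] + cfa[r-1, c+1] + cfa[r+1, c-1] + cfa[r+1, c+1]
    same = cfa[r-2, c] + cfa[r+2, c] + cfa[r, c-2] + cfa[r, c+2]
    return (2.0 * diag + 6.0 * cfa[r, c] - 1.5 * same) / 8.0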
The proposed technical solution measurably and visibly improves the demosaicing result significantly. The deviations of the red and blue values from the original values are significantly reduced (about 10-15% lower mean-square errors), while the green values scarcely change perceptibly (about ±2% changes in the mean-square errors). This also applies for very noisy images. In terms of visual impression, the colour deviations are visibly reduced significantly by the proposed technical solution.
Second Embodiment A second solution to the problem on which the invention is based is to use the homogeneity information for a softer decision between the horizontal and vertical channels.
In step 5b) of the AHD method of Hirakawa and Parks, averaging of the horizontal and vertical tristimulus values is only carried out in the case of exactly equal horizontal and vertical homogeneity values (ho44_h == ho44_v), for example:
R44 = (R44_h + R44_v)/2; G44 = (G44_h + G44_v)/2; B44 = (B44_h + B44_v)/2;
Otherwise, 100% of either the horizontal or the vertical tristimulus value is selected. In relatively homogeneous image regions (regions of low spatial frequencies), this leads to highly visible block-like artefacts since, owing to the low-pass filtering of the homogeneity values (step 5a of the AHD method), the algorithm has a certain persistence in respect of the privileged direction. This has a positive effect for high spatial frequencies (edges, lines), since minor perturbations are smoothed out by this persistence.
For low spatial frequencies (homogeneous image regions), however, it has a negative effect since the same privileged direction is often kept over a plurality of pixels, before the direction is changed. In the homogeneous image regions, this creates block-like artefacts with a side length of 4-8 pixels (see Fig. 6d) which become visible owing to the homogeneous environment, despite rather small colour differences. Another problem of the relatively hard horizontal/vertical decision in the prior art is the high sensitivity to noise.
Methods corresponding to the second embodiment of the invention resolve the problem by making better use of the information contained in the homogeneity maps. For small differences in the homogeneity values, a yes/no decision is not made (as in the AHD method), but instead the definitive tristimulus value is made up of a percentage "X", determined by the homogeneity difference, of the horizontal tristimulus value and the complementary percentage "100-X" of the vertical tristimulus value.
The greater the differences in the homogeneity values are, the more strongly the tristimulus values of the more homogeneous privileged direction will be weighted. One possible algorithm for weighting the privileged directions is a linear term. Let "S" be a numerical value which specifies the width of the transition region for the soft decision; then, for example, the following applies for the colour mixing with the linear approach (see Fig. 3a):
if (ho44_h > ho44_v + S) { R44 = R44_h; G44 = G44_h; B44 = B44_h; }
else if (ho44_h < ho44_v - S) { R44 = R44_v; G44 = G44_v; B44 = B44_v; }
else {
R44 = (R44_h + R44_v)/2 + (R44_h - R44_v)*(ho44_h - ho44_v)/(2*S);
G44 = (G44_h + G44_v)/2 + (G44_h - G44_v)*(ho44_h - ho44_v)/(2*S);
B44 = (B44_h + B44_v)/2 + (B44_h - B44_v)*(ho44_h - ho44_v)/(2*S); }
Instead of linear weighting of the two privileged directions by means of a percentage (percentage X of the first direction and complementary percentage 100-X of the second direction), any other desired functions may also be used, for example the sigmoid functions represented in Fig. 3b: 1/(1+exp(-a*X)) and the complement to 1 thereof: 1/(1+exp(a*X)), in order to determine the definitive tristimulus value. These two functions also add up to 1 for each value X. This proposed technical solution measurably and visibly improves the demosaicing result significantly. For a mixture of 50% non-noisy (see Fig. 4 and Fig. 5) and 50% noisy images (see Fig. 6), an optimum is obtained with a transition region width of S = 11. With this value, the deviations of all three tristimulus values (red, green, blue) from the original values are reduced significantly, namely to about 15 to 25% lower mean-square errors, while the deviations tend to be reduced even more strongly with increasing noise than without noise.
The optimal transition region width of S = 11 is independent of whether the homogeneity maps are compiled in the CIE-Lab colour space or in the RGB colour space (third step of the AHD method of Hirakawa & Parks). For the purely visual impression with non-noisy images (Fig. 4 and Fig. 5), the optimum for the transition region is S = 3, while for noisy images (Fig. 6) the optimum is S = 13. Via the parameter S, which describes the width of the transition region, adaptation to the characteristics of the image material to be processed can therefore be carried out.
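A sketch of this soft decision (my own; val_h/val_v are a tristimulus value interpolated in the horizontal/vertical direction, ho_h/ho_v the low-pass filtered homogeneity values, and the sigmoid slope a is a free parameter that the text does not fix):

import numpy as np

def soft_decision_linear(val_h, val_v, ho_h, ho_v, S=11.0):
    d = ho_h - ho_v
    if d > S:
        return val_h                   # clearly more homogeneous horizontally
    if d < -S:
        return val_v                   # clearly more homogeneous vertically
    # Inside the transition region the weight of the horizontal value grows
    # linearly with the homogeneity difference (percentage X vs. 100-X).
    return (val_h + val_v) / 2.0 + (val_h - val_v) * d / (2.0 * S)

def soft_decision_sigmoid(val_h, val_v, ho_h, ho_v, a=0.5):
    w_h = 1.0 / (1.0 + np.exp(-a * (ho_h - ho_v)))   # weight of the horizontal value
    return w_h * val_h + (1.0 - w_h) * val_v         # complementary weight 1 - w_h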
Third Embodiment
In the third embodiment of the invention, median filtering of the colour differences (R - G) and (B - G) is carried out in a 3x3 environment of the pixel and the tristimulus values are corrected after the horizontal/vertical decision, and only at pixels which are not hot pixels, "hot pixels" being intended to mean those pixels on which the tristimulus value is measured directly, i.e. not interpolated. This median filtering is carried out according to the formula:
repeat m times:
R' = G + Median(R - G);
G' = (R + B - Median(R - G) - Median(B - G))/2;
B' = G + Median(B - G);
end_repeat.
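A sketch of this hot-pixel-preserving correction (my own; it assumes NumPy/SciPy and boolean masks hot_r, hot_g, hot_b marking the pixels whose value was measured directly; whether the G update uses the old or the already corrected R and B planes is not spelled out in the text, so the old values are used here):

import numpy as np
from scipy.ndimage import median_filter

def median_correct(R, G, B, hot_r, hot_g, hot_b, m=3, size=3):
    R, G, B = R.copy(), G.copy(), B.copy()
    for _ in range(m):
        med_rg = median_filter(R - G, size=size)   # Median(R - G) in a 3x3 window
        med_bg = median_filter(B - G, size=size)   # Median(B - G) in a 3x3 window
        R_new = G + med_rg
        B_new = G + med_bg
        G_new = (R + B - med_rg - med_bg) / 2.0
        # Only interpolated (non-hot) values are overwritten.
        R = np.where(hot_r, R, R_new)
        G = np.where(hot_g, G, G_new)
        B = np.where(hot_b, B, B_new)
    return R, G, B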
In the sixth step of the AHD method of Hirakawa and Parks known from the prior art, the median filtering of the colour differences (R -G) and (B -G) is carried out on all pixels.
In the third embodiment of the invention, conversely, the tristimulus values which are hot pixels (i.e. have been measured directly by the sensor, for example G22, G33, R23, B32) remain unchanged. The median filter may of course also be extended to filter kernels other than the aforementioned 3x3 filter kernel of the nearest-neighbour pixels.
The proposed technical solution measurably and visibly improves the demosaicing result significantly. The deviations of all three tristimulus values (red, green, blue) from the original values are significantly reduced relative to the median filter known from the AHD method by threefold repetition (about 40-45% lower mean-square errors), while the deviations tend to be reduced even more strongly with increasing noise than without noise. In terms of visual impression, the colour deviations are visibly reduced significantly by the proposed technical solution.
Fourth Embodiment
In the fourth embodiment of the invention, the median filtering of the colour differences (R - G) and (B - G) is carried out in a 3x3 environment of the pixel and the tristimulus values are corrected before the horizontal/vertical decision, and only at pixels which are not hot pixels, according to the formula:
repeat m times:
R' = G + Median(R - G);
G' = (R + B - Median(R - G) - Median(B - G))/2;
B' = G + Median(B - G);
end_repeat.
In the method of the fourth embodiment, the median filtering according to the third embodiment is already applied before the horizontal/vertical decision, i.e. directly after the second step of the AHD algorithm. This reduces the deviations of all three tristimulus values F1, F2, F3 (for example red, green, blue) from the original values so that the horizontal/vertical decision, i.e. the fifth step of the AHD algorithm, is also positively affected.
A prerequisite for median filtering before the horizontal/ vertical decision is that all three tristimulus values F1, F2, F3 (for example red, green, blue) for the pixels to be calculated are already available up to a distance (according to the Manhattan norm) of three pixels. If the solution to the object according to the first exemplary embodiment is applied in a technical implementation of the proposed demosaicing method, then the necessary tristimulus values are already available. Otherwise, the required tristimulus values would need to be calculated beforehand according to the method of the first exemplary embodiment.
The proposed technical solution according to the fourth embodiment measurably and visibly improves the demosaicing result significantly. The deviations of all three tristimulus values F1, F2, F3 (for example red, green, blue) from the original values are already reduced considerably by single application (about 24 to 42% lower mean-square errors), and very significantly by threefold application (about 45 to 60% lower mean-square errors), while the deviations tend to be reduced even more strongly with increasing noise than without noise. In terms of visual impression, the colour deviations are visibly reduced significantly by the proposed technical solution.
Variant without Conversion into the CIE-Lab Colour Space Here the conversion from the RGB colour space into the CIE-Lab colour space, as provided in the third step of the AHD algorithm described in the introduction to the description, is omitted, and the following method steps are carried out instead of the fourth step of the AHD algorithm described in
the introduction to the description:
Compilation of the homogeneity maps for the horizontal (homo_h) and vertical (homo_v) privileged directions:
a) calculation of the absolute differences of the R values from the four respective neighbouring positions (right, left, up, down), for example:
horizontal privileged direction:
R44_h_r = |R45_h - R44_h|; (right)
R44_h_l = |R43_h - R44_h|; (left)
R44_h_o = |R34_h - R44_h|; (up)
R44_h_u = |R54_h - R44_h|; (down)
vertical privileged direction:
R44_v_r = |R45_v - R44_v|; (right)
R44_v_l = |R43_v - R44_v|; (left)
R44_v_o = |R34_v - R44_v|; (up)
R44_v_u = |R54_v - R44_v|; (down)
b) calculation of the absolute differences of the G values from the four respective neighbouring positions (right, left, up, down), for example:
horizontal privileged direction:
G44_h_r = |G45_h - G44_h|; (right)
G44_h_l = |G43_h - G44_h|; (left)
G44_h_o = |G34_h - G44_h|; (up)
G44_h_u = |G54_h - G44_h|; (down)
vertical privileged direction:
G44_v_r = |G45_v - G44_v|; (right)
G44_v_l = |G43_v - G44_v|; (left)
G44_v_o = |G34_v - G44_v|; (up)
G44_v_u = |G54_v - G44_v|; (down)
c) calculation of the absolute differences of the B values from the four respective neighbouring positions (right, left, up, down), for example:
horizontal privileged direction:
B44_h_r = |B45_h - B44_h|; (right)
B44_h_l = |B43_h - B44_h|; (left)
B44_h_o = |B34_h - B44_h|; (up)
B44_h_u = |B54_h - B44_h|; (down)
vertical privileged direction:
B44_v_r = |B45_v - B44_v|; (right)
B44_v_l = |B43_v - B44_v|; (left)
B44_v_o = |B34_v - B44_v|; (up)
B44_v_u = |B54_v - B44_v|; (down)
Combination of the Embodiments
In general, all methods of the four embodiments may also be applied individually, and in each case make a contribution to improving the image quality. If some or all features of the embodiments are combined with one another, then their contributions to improving the image quality are approximately added together.
The deviations of all three tristimulus values F1, F2, F3 (for example red, green, blue) are significantly reduced by combining the features of all four embodiments, relative to the deviation values which can be achieved with the AHD method of Hirakawa & Parks as known from the prior art.
Combining the features of all four exemplary embodiments of the method according to the invention achieves about 58 to 78% lower mean-square errors, while the deviations tend to be reduced even more strongly with increasing noise than without noise. The improvements for the visual impression are represented in Fig. 4, Fig. 5 and Fig. 6, which will be explained below.
Fig. 4 shows an image detail of the "USAF" reference image.
The numerical values after the brackets (b) to (d) represent the sum of the mean-square errors of the three tristimulus values R, G, B from the original tristimulus values (image a). The lower the deviations are, the better the quality is.
(a) denotes the original image, (b) denotes processing with a bilinear filter, (c) denotes "high quality linear interpolation" according to Malvar, He & Cutler, (d) denotes processing with the AHD method of Hirakawa & Parks, (e) denotes processing with the demosaicing method according to the invention.
Fig. 5 shows an image detail of the "lighthouse" reference image. The numerical values after the brackets (b) to (d) represent the sum of the mean-square errors of the three tristimulus values R, G, B from the original tristimulus values (image a). The lower the deviations are, the better the quality is.
(a) denotes the original image, (b) denotes processing with a bilinear filter, (c) denotes "high quality linear interpolation" according to Malvar, He & Cutler, (d) denotes processing with the AHD method of Hirakawa & Parks, (e) denotes processing with the demosaicing method according to the invention.
Fig. 6 shows an image detail of the noisy "USAF" reference image. The numerical values after the brackets (b) to (d) represent the sum of the mean-square errors of the three tristimulus values R, G, B from the original tristimulus values (image a). The lower the deviations are, the better the quality is.
(a) denotes the original image, (b) denotes processing with a bilinear filter, (c) denotes "high quality linear interpolation" according to Malvar, He & Cutler, (d) denotes processing with the AHD method of Hirakawa & Parks, (e) denotes processing with the demosaicing method according to the invention.
The technical solution according to the third and fourth embodiments (median filtering of the colour differences) may moreover be applied to other demosaicing methods as well.
All features of the technical solutions according to the invention may also be used within the framework of real-time applications or hardware solutions (for example in ASICs (= application-specific integrated circuits), in FPGAs (= field-programmable gate arrays), or in GPUs (= graphics processing units, i.e. in programmable graphics cards)), since the respective technical solutions can be represented as a nonlinear filter kernel.
By means of the invention, the known AHD method for interpolating tristimulus values on pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours, for example red (R), green (G), blue (B) or cyan (C), magenta (M), yellow (Y), being provided in front of each pixel, is therefore improved by different variants of individual method steps, which may be provided individually or in combination.

Claims (19)

  1. A method for interpolating colour or tristimulus values at pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours (F1, F2, F3) being provided in front of each pixel, having the following steps: 1) interpolating the F2 tristimulus values at the F1 pixels and the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; then first 2a1) interpolating the F3 tristimulus values at the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values at the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; and then 2b1) interpolating the F3 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels respectively neighbouring the F2 pixel; and 2b2) interpolating the F1 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels respectively neighbouring the F2 pixel.
  2. A method according to Claim 1, including the further steps: 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the horizontal tristimulus value or the vertical tristimulus value is selected.
  3. A method according to Claim 1, including the further steps: 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the horizontal tristimulus value or the vertical tristimulus value is selected.
  4. A method for interpolating tristimulus values at pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours (F1, F2, F3) being provided in front of each pixel, having the following steps: 1) interpolating the F2 tristimulus values at the F1 pixels and at the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; 2a1) interpolating the F3 tristimulus values at the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values at the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values for a component X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction and the component 100-X complementary thereto of the tristimulus value interpolated in the second direction.
  5. A method for interpolating tristimulus values at pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours (F1, F2, F3) being provided in front of each pixel, having the following steps: 1) interpolating the F2 tristimulus values at the F1 pixels and at the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; 2a1) interpolating the F3 tristimulus values at the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values at the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is to be selected, by forming a definitive tristimulus value on the basis of the homogeneity values for a component X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction and the component 100-X complementary thereto of the tristimulus value interpolated in the second direction.
  6. A method according to one of Claims 2 to 5, including the further step: 6) carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
  7. A method for interpolating tristimulus values at pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours (F1, F2, F3) being provided in front of each pixel, having the following steps: 1) interpolating the F2 tristimulus values at the F1 pixels and at the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; 2a1) interpolating the F3 tristimulus values at the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values at the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3a) calculating the absolute differences of the F1 tristimulus values from the four respective neighbouring positions; 3b) calculating the absolute differences of the F2 tristimulus values from the four respective neighbouring positions; 3c) calculating the absolute differences of the F3 tristimulus values from the four respective neighbouring positions; 3d) calculating the limit values as a comparative measure of the homogeneity; 4) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values; and with the further step: carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
  8. A method for interpolating tristimulus values at pixels of an image sensor, the image sensor comprising a multiplicity of pixels and a colour filter in one of three colours (F1, F2, F3) being provided in front of each pixel, having the following steps: 1) interpolating the F2 tristimulus values at the F1 pixels and at the F3 pixels, in a first direction and in a second direction respectively across the pixel in question; 2a1) interpolating the F3 tristimulus values at the F1 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F3 pixels surrounding the F1 pixel; and 2a2) interpolating the F1 tristimulus values at the F3 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1, with the aid of the F1 pixels surrounding the F3 pixel; 2b1) interpolating the F3 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 2b2) interpolating the F1 tristimulus values at the F2 pixels in the first direction by using the F2 tristimulus values interpolated in the first direction in step 1, and in the second direction by using the F2 tristimulus values interpolated in the second direction in step 1; 3) converting the tristimulus values interpolated in steps 1) to 2b2) via the CIE-XYZ colour space into the CIE-Lab colour space; 4a) calculating the absolute differences of the L values of the Lab colour space from the four respective neighbouring positions; 4b) calculating the absolute differences of the ab values of the Lab colour space from the four respective neighbouring positions; 4c) calculating the limit values as a comparative measure of the homogeneity; 4d) calculating the homogeneity values; 5) deciding whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, by forming a definitive tristimulus value on the basis of the homogeneity values; and with the further step: carrying out median filtering of the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged.
  9. A method according to Claim 4, 5, 7 or 8, in which: the interpolation of the F3 tristimulus values in step 2b1 is carried out with the aid of the F1 pixels respectively neighbouring the F2 pixel, and the interpolation of the F1 tristimulus values in step 2b2 is carried out with the aid of the F3 pixels neighbouring the F2 pixel.
  10. A method according to Claim 2, 3, 7 or 8, in which: the decision in step 5, as to whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, is carried out by forming a definitive tristimulus value on the basis of the homogeneity values for a component X, determined by a homogeneity difference, of the tristimulus value interpolated in the first direction and the component 100-X complementary thereto of the tristimulus value interpolated in the second direction.
  11. A method according to Claim 6, 7 or 8, in which the step of median filtering the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged, is carried out after step 5, i.e. after the decision as to whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected.
  12. A method according to Claim 6, 7 or 8, in which the step of median filtering the colour differences (F1-F2) and (F3-F2), while the tristimulus values which relate to hot pixels, i.e. have been measured directly by the sensor, remain unchanged, is carried out before step 5, i.e. before the decision as to whether the tristimulus value interpolated in the first direction or the tristimulus value interpolated in the second direction is selected, preferably immediately after steps 2a1) to 2b2).
  13. A method according to any preceding claim, in which a median of the two F2 tristimulus values interpolated in step 1 is formed in the first direction and in the second direction after step 1.
  14. A method according to any preceding claim, in which the three colours (F1, F2, F3) are red (R), green (G) and blue (B) or yellow (Y), magenta (M) and cyan (C).
  15. A method according to any preceding claim, in which the first direction and the second direction are mutually perpendicular.
  16. A computer program having program code means for carrying out all the method steps according to any preceding claim when the program is run on a computer.
  17. A computer program product having program code means, which are stored on a computer-readable data carrier, for carrying out all the method steps according to any of Claims 1 to 15 when the program product is run on a computer.
  18. A computer-readable data carrier, on which a computer program according to Claim 16 or a computer program product according to Claim 17 is stored.
  19. A method substantially as described with reference to Figures 4(e), 5(e), 6(e).
GB0921952.8A 2008-12-19 2009-12-16 Method for interpolating tristmulus values on pixels of an image sensor Expired - Fee Related GB2466375B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE102008063970A DE102008063970B4 (en) 2008-12-19 2008-12-19 Method for interpolating color values at picture elements of an image sensor

Publications (3)

Publication Number Publication Date
GB0921952D0 GB0921952D0 (en) 2010-02-03
GB2466375A true GB2466375A (en) 2010-06-23
GB2466375B GB2466375B (en) 2014-07-16

Family

ID=41717021

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0921952.8A Expired - Fee Related GB2466375B (en) 2008-12-19 2009-12-16 Method for interpolating tristmulus values on pixels of an image sensor

Country Status (3)

Country Link
DE (1) DE102008063970B4 (en)
FR (1) FR2940574B1 (en)
GB (1) GB2466375B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016112968B4 (en) * 2016-07-14 2018-06-14 Basler Ag Determination of color values for pixels at intermediate positions

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006084266A1 (en) * 2005-02-04 2006-08-10 Qualcomm Incorporated Adaptive color interpolation

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4774565A (en) 1987-08-24 1988-09-27 Polaroid Corporation Method and apparatus for reconstructing missing color samples
US5629734A (en) 1995-03-17 1997-05-13 Eastman Kodak Company Adaptive color plan interpolation in single sensor color electronic camera
JP4292625B2 (en) * 1998-06-01 2009-07-08 株式会社ニコン Interpolation processing apparatus and recording medium recording interpolation processing program
US7053908B2 (en) 2001-04-12 2006-05-30 Polaroid Corporation Method and apparatus for sensing and interpolating color image data
US7456881B2 (en) * 2006-01-12 2008-11-25 Aptina Imaging Corporation Method and apparatus for producing Bayer color mosaic interpolation for imagers

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006084266A1 (en) * 2005-02-04 2006-08-10 Qualcomm Incorporated Adaptive color interpolation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IEEE TRANSACTIONS ON IMAGE PROCESSING, published 1 December 2008, Vol 17, No 12, pages 2356-2367 (XP011248779); *
IEEE TRANSACTIONS ON IMAGE PROCESSING, published March 2005, Vol 14, No 3, pages 360-369; *

Also Published As

Publication number Publication date
FR2940574A1 (en) 2010-06-25
DE102008063970B4 (en) 2012-07-12
GB2466375B (en) 2014-07-16
GB0921952D0 (en) 2010-02-03
FR2940574B1 (en) 2015-12-25
DE102008063970A1 (en) 2010-06-24

Similar Documents

Publication Publication Date Title
US9013611B1 (en) Method and device for generating a digital image based upon a selected set of chrominance groups
US9736437B2 (en) Device for acquiring bimodal images
US7450166B2 (en) Digital signal processing apparatus in image sensor
US7577315B2 (en) Method and apparatus for processing image data of a color filter array
JP4517493B2 (en) Solid-state imaging device and signal processing method thereof
US7400332B2 (en) Hexagonal color pixel structure with white pixels
US7082218B2 (en) Color correction of images
US7072509B2 (en) Electronic image color plane reconstruction
WO2008001629A1 (en) Image processing device, image processing program, and image processing method
JP2004153823A (en) Image processing system using local linear regression
US11546562B2 (en) Efficient and flexible color processor
KR100398564B1 (en) Solid-state color imager
EP2152010B1 (en) Luminance signal generation apparatus, luminance signal generation method, and image capturing apparatus
US7609300B2 (en) Method and system of eliminating color noises caused by an interpolation
JPH10313464A (en) Color filter array and its hue interpolating device
KR100421348B1 (en) Color image pickup apparatus
US6842191B1 (en) Color image restoration with anti-alias
EP0970588A1 (en) Color signal interpolation
US8363135B2 (en) Method and device for reconstructing a color image
GB2466375A (en) Improved method for interpolation of tristimulus values on pixels of an image sensor.
KR20150123723A (en) Image processing apparatus, imaging apparatus, image processing method and storage medium
CN110324541B (en) Filtering joint denoising interpolation method and device
US8174592B2 (en) Color interpolation device and color interpolation method
TWI528812B (en) System and method of reducing noise
US9088740B2 (en) System and method of reducing noise

Legal Events

Date Code Title Description
746 Register noted 'licences of right' (sect. 46/1977)

Effective date: 20190909

PCNP Patent ceased through non-payment of renewal fee

Effective date: 20211216