WO1991005441A1 - Segmentation mechanism for producing interpolated color values for a sampled color image signal - Google Patents

Segmentation mechanism for producing interpolated color values for a sampled color image signal Download PDF

Info

Publication number
WO1991005441A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
signal values
segmentation
sampled
accordance
Prior art date
Application number
PCT/US1990/005363
Other languages
French (fr)
Inventor
David Roy Cok
Original Assignee
Eastman Kodak Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eastman Kodak Company filed Critical Eastman Kodak Company
Priority to DE69015153T priority Critical patent/DE69015153T2/en
Priority to EP90914590A priority patent/EP0446330B1/en
Publication of WO1991005441A1 publication Critical patent/WO1991005441A1/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values

Definitions

  • the present invention relates in general to color imagery signal processing and is particularly directed to a segmentation mechanism for interpolating color band signal values so as to substantially reduce color fringing in the neighborhood of a color edge.
  • Multi-channel imagery sensing devices, such as a digital RGB video camera, typically produce output signals whose sampling frequencies differ, with one channel (usually one of the green or the luminance channel) being fully sampled, while the other channels (the red and blue channels) carry lower resolution data.
  • the amount of green information may be two or three times that of the red or blue channels.
  • the interpolated value is a color difference signal, such as an I, Q, R-G, or B-G signal.
  • the above-described color fringing of edges between regions of different color in images that are processed by conventional interpolation mechanisms is effectively avoided by means of a new and improved signal processing technique that subdivides the fully sampled channel into a segmentation diagram comprised of a plurality of contiguous image regions of respectively different image characteristics.
  • the segmentation mechanism is such that, within a respective region, the fully sampled signal values are associated with a common image characteristic.
  • a boundary between adjacent regions occurs where the segmentation mechanism has inferred the presence of an edge between sub-sampled locations and has assigned signal values for successive sampling locations of the fully sampled channel on opposite sides of the edge in accordance with different image characteristics.
  • signal values for non-sampled locations of a sub-sampled channel are interpolated in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at a sampling location in that region whereat each of fully sampled and sub-sampled signal values have been produced.
  • the interpolation process according to the present invention may be carried out using more than one segmentation mechanism.
  • the image processing performance of each segmentation scheme is weighted.
  • Interpolation values based upon each segmentation scheme are then combined as a weighted average.
  • Figure 1 diagrammatically shows a color image processing system employing a dedicated color sample interpolation processor;
  • Figure 2 diagrammatically illustrates a portion of a two-dimensional pixel array;
  • Figure 3 diagrammatically illustrates a portion of one row of the pixel array of Figure 2 and the differential sampling of the pixel locations within the row to produce sampled signal values Li and Ci;
  • Figure 4 diagrammatically illustrates a series of image representative signals having an edge between respective constant levels for successively offset edge-representative locations among fully sampled locations 0 and n;
  • Figure 5 diagrammatically illustrates an image representative signal having an edge between respective uniformly (linearly) varying regions containing fully sampled locations 0 and n; and
  • Figure 6 diagrammatically illustrates a portion of a two-dimensional pixel array and the two-dimensional segmenting of the pixel locations within the array such that each region contains a plurality of pixel locations whereat fully sampled signal values are provided and a pixel location whereat a sub-sampled signal value is provided.
  • the invention resides primarily in a novel color imagery signal interpolation mechanism, rather than a particular implementation of that mechanism.
  • the interpolation mechanism may be carried out by means of a variety of structural combinations of conventional signal processing circuits and components, such as custom-configured integrated circuit hardware. Accordingly, the structure, control and arrangement of the signal processing hardware of an embodiment of the invention have been illustrated in the drawings by a readily understandable block diagram, which shows only those specific details that are pertinent to the present invention, so as not to obscure the disclosure with structural details which will be readily apparent to those skilled in the art having the benefit of the present description.
  • a color image processing system is diagrammatically shown as comprising a color image transducer device, such as a digital RGB video camera 11, that supplies over its output link 13 image characteristic-representative digital signals, such as respective red, green and blue channel signal values representative of the red, green and blue color band components of a color image 21, light from which is incident upon an opto-electronic conversion matrix within the camera.
  • the sampling frequency is not the same for each of the respective image information bands.
  • green is fully sampled
  • red and blue are sampled at a lower frequency.
  • a microprocessor programmed to execute the interpolation procedure set forth in detail below, which derives values for those portions (pixel locations) of an image that lie between fully sampled pixels.
  • those pixels that are fully sampled will be denoted by the reference character L, while the subsampled components will be denoted by the reference character C.
  • Figure 2 diagrammatically illustrates a portion of a two-dimensional array of image pixels Pi,j, for which image characteristic representative signals, associated with respective image attributes, such as a specified chrominance or chrominance difference value, a luminance value, etc. are provided by camera 11.
  • the problem of color fringing of edges between portions of an image having differing color characteristics is effectively avoided by first segmenting the image into a plurality of contiguous image regions, between which edges are inferred and then interpolating sub-sampled signal values within each region in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at sampling locations whereat fully sampled and sub-sampled signal values have been supplied by camera 11.
  • the mechanism chosen to segment the image may be any of a number of conventional imagery signal processing techniques, including both one and two-dimensional segmentation schemes, and the invention is not limited to any specific method. Consequently, the segmentation examples given below are merely for purposes of illustration; other procedures described in the literature are equally applicable to the invention.
  • One example of a technique for segmenting the image on a one-dimensional basis involves the analysis of successive linear groups or lines of pixels, such as linear portion 25 of the array of Figure 2, using sub-intervals delineated by those pixel locations at which signal values for a sub-sampled channel have been produced by camera 11. Within each sub-interval a color edge is presumed and the analysis proceeds to determine the most likely position of the edge based upon an iterative processing of the fully sampled channel signals for all of the pixels within the sub-interval.
  • FIG. 3 which diagrammatically illustrates a portion 25 of one row of the pixel array of Figure 2
  • differential (frequency) sampling of the pixel locations Pi within the row produces fully sampled signal values Li for each pixel Pi and sub-sampled signal values C0, Cn, C2n, ... for every nth pixel location, where the sub-sample factor n typically has a value of 2, 4 or 8. It is necessary, therefore, to interpolate C values for pixel locations C1, C2, C3, ..., Cn-1, Cn+1, etc.
  • a color edge is inferred, and the characteristics of the image on opposite sides of the edge are presumed to conform to respectively different color characteristics, such as respectively different constant color levels or respective uniform variations in color, so that between the edge and the end points of the subinterval, there are two color regions that are contiguous with one another at the edge.
  • the segmentation process is carried out on an iterative basis, producing a series of measurements of the 'fit' of each segmented sub-interval iteration.
  • That segmented sub-interval iteration which best fits or matches the L values of the fully sampled channel is selected.
  • the best fit may be determined using conventional error measurement techniques, such as a least squares fit, a maximum likelihood or Bayesian statistical procedure, or a threshold crossing of the L values between the regions.
  • Figure 4 diagrammatically illustrates an iterative segmentation process applied to a sub-interval of fully sampled L signal values, with an edge E being inferred between respective high and low constant levels LKH and LKL, for successively offset edge-representative locations among fully sampled locations 0 and n.
  • an edge E is presumed between pixel locations 0 and n; on each side of the edge the value of L is a constant (either LKH or LKL).
  • the values of the constants may be chosen by setting a given threshold based upon a priori knowledge of the image statistics and averaging the L values within the interval. As with the segmentation process itself, the particular mechanism through which the constants are defined is not critical to the invention.
  • the sequence of L values L0, L1, L2, ..., Ln is compared with the model M and a measure of the fit of the model to the actual image-representative signals (for example, through a least squared error fit or choosing the center zero-crossing of L - (L0+Ln)/2) is derived.
  • That segmentation model which produces the 'best fit' thereby 'tailors' the optimal location of the edge E and the mechanism through which the sub-sampled C values are interpolated within the sub-interval or region between pixel locations 0 and n.
  • the particular mechanism for interpolating an unknown value of C will depend on the characteristics of the bands of the L and C signals. Examples of the mechanism through which unknown values of C are defined include making each value of C within a region (on one side of the edge E) constant, making the difference C-L constant within the region, or making the ratio C/L constant within the region. For bands which are essentially decorrelated with one another, such as the in-phase and luminance channels in a YIQ system, the value of C is fixed within the region. For a red and green intensity imaging device, the ratio of red to green (green is fully sampled) is made constant within the region. For a log exposure imaging device, the difference between the red and green values is made constant within the region.
  • the model M of the image characteristic has the form of a linear equation, represented in Figure 5 by respective lines L1 and L2.
  • an edge E is presumed between pixel locations 0 and n, and on each side of the edge the value of L conforms with a respective relationship (here a linear variation rather than a constant, as in the previous example).
  • the values of the coefficients a and b may be selected in accordance with image characteristics, including expected noise, and the magnitudes of each of C0, L0 and Cn, Ln at pixel locations 0 and n, respectively.
  • the sequence of L values L0, L1, L2, ..., Ln is compared with the model M and a measure of the fit of the model to the actual image-representative signals is derived. That model which produces the best fit then establishes the location of the edge E and the mechanism through which the sub-sampled C values are interpolated within the sub-interval or region between pixel locations 0 and n.
  • the mechanism for interpolating an unknown value of C will depend on the characteristics of the bands of the L and C signals and may include the relationships set forth above.
  • Figures 6(a) - 6(b) show portion 27 of the pixel array of Figure 2, which has been segmented into respective segmentation diagrams by a two-dimensional segmenting process, such that each region of each diagram contains a plurality of pixel locations whereat fully sampled signal values are provided and one pixel location whereat a sub-sampled signal value is provided. More particularly, each of Figures 6(a) - 6(b) illustrates a two-dimensional 'neighborhood' segmenting or subdividing of a two-dimensional array of pixels into four regions 31-34, each of which contains a pixel location P at which both L and C values are supplied by camera 11 and some number of additional pixel locations Po at which only L values are provided and for which C values must be interpolated.
  • FIG. 6 An example of a 'neighborhood' segmenting process for the sampling geometry illustrated in Figure 6 may be any of the types described in the published literature (e.g. "Handbook of Pattern Recognition and Image Processing", eds. T.Y. Young & K.S. Fu, Academic Press, 1986).
  • groups of interconnected edge lines E between adjacent regions are produced, as shown in Figures 6(a) - (c).
  • the geometry is subdivided by an edge line E12 between regions 31 and 32, edge line E13 between regions 31 and 33, edge line E14 between regions 31 and 34 and edge line E34 between regions 33 and 34.
  • the L values within each region are compared with a (two-dimensional) model for the array (e.g. a constant level or 'planar' model) and a measure of the fit of the model to the L values for each pixel location within a region is derived.
  • a two-dimensional model which produces the best fit establishes the paths of the edge lines E and the mechanism through which the sub-sampled C values are interpolated within the region.
  • the interpolation mechanism first segments image data pursuant to some prescribed one or two-dimensional segmentation procedure, and then, after performing a best fit tailoring the boundaries of the segmented regions, interpolates the sub-sampled C values at pixel locations for which C data was not originally supplied.
  • the mechanism of the present invention may be carried out by applying a plurality of segmentation schemes to the data and assigning a figure of merit to each segmentation process. A typical measure of the figure of merit may again be based upon the mean square deviation of the actual L data from the chosen segmentation model.
  • Another measure of the figure of merit that may be used is the likelihood that a segmentation could produce the given observed L data, based upon a statistical model of the image.
  • for a plurality of segmentation schemes S1, S2, S3, ..., Sk, there is a corresponding plurality of performance or weighting coefficients W1, W2, W3, ..., Wk, each of which is associated with the figure of merit and a priori characteristics of the image.
  • the weighting function W may be established in accordance with the relationship:
  • W = exp(-(mean square deviation) / (flat field variance)), wherein the variance value provides a measure of the noise within a generally uniform region of the image.
  • the problem of color fringing of edges between image regions of different color characteristics that are processed by conventional interpolation mechanisms is effectively obviated in accordance with the present invention by basing interpolation of sub-sampled image signals on an initial segmentation diagram, boundaries between which have been tailored to optimally subdivide the image data into a plurality of contiguous image regions of respectively different image characteristics. Because a boundary between adjacent regions occurs where the segmentation mechanism has tailored the inference of an edge, interpolated sub-sampled values in the neighborhood of that edge produce a reconstructed image that enjoys significantly improved quality over images produced by conventional linear interpolation.

Abstract

Blurring along edges between regions of different color characteristics in interpolated color images is avoided by a signal processing technique that segments the fully sampled channel into a plurality of contiguous image regions of respectively different image characteristics. The segmentation mechanism is such that, within a respective region, the fully sampled signal values are associated with a common image characteristic. A boundary between adjacent regions occurs where the segmentation mechanism has inferred the presence of an edge between sub-sampled locations and has assigned signal values for successive sampling locations of the fully sampled channel on opposite sides of the edge in accordance with different image characteristics. After the image has been segmented, within each region, signal values for non-sampled locations of a sub-sampled channel are interpolated in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at a sampling location in that region whereat each of fully sampled and sub-sampled signal values have been produced. In addition, more than one segmentation mechanism may be employed. In this case, the image processing performance of each segmentation scheme is weighted. Interpolation values based upon each segmentation scheme are then combined as a weighted average.

Description

SEGMENTATION MECHANISM FOR PRODUCING INTERPOLATED COLOR VALUES FOR A SAMPLED COLOR IMAGE SIGNAL
FIELD OF THE INVENTION The present invention relates in general to color imagery signal processing and is particularly directed to a segmentation mechanism for interpolating color band signal values so as to substantially reduce color fringing in the neighborhood of a color edge. BACKGROUND OF THE INVENTION Multi-channel (e.g. three color band, luminance band) imagery sensing devices, such as a digital RGB video camera, typically produce output signals whose sampling frequencies differ, with one channel (usually one of the green or the luminance channel) being fully sampled, while the other channels (the red and blue channels) carry lower resolution data. For example, in a miniaturized charge-coupled device camera, the amount of green information may be two or three times that of the red or blue channels. Moreover, in color image compression telecommunication systems, it is common practice to subsample the chrominance channels prior to applying the data to a compression mechanism, for the purpose of further reducing the amount of data to be transmitted. Because of this reduction in the amount of information through which the original image has been defined, upon reconstruction, it is necessary to fill in or interpolate values for non-sampled image locations of the lower resolution channel. A widespread technique for carrying out the interpolation process is to conduct a one or two-dimensional linear interpolation for computing values of image locations, where there is no chrominance information, from adjacent pixels where chrominance values are available. Typically, the interpolated value is a color difference signal, such as an I, Q, R-G, or B-G signal. Unfortunately, such a linear interpolation process yields color artifacts (a blurring or smearing of the image) at edges between regions of different color, so that the colored component of the edge is not as sharp as the luminance component, thereby reducing the quality of both photographic images and video images of real scenes. SUMMARY OF THE INVENTION
In accordance with the present invention, the above-described color fringing of edges between regions of different color in images that are processed by conventional interpolation mechanisms is effectively avoided by means of a new and improved signal processing technique that subdivides the fully sampled channel into a segmentation diagram comprised of a plurality of contiguous image regions of respectively different image characteristics. The segmentation mechanism is such that, within a respective region, the fully sampled signal values are associated with a common image characteristic. A boundary between adjacent regions occurs where the segmentation mechanism has inferred the presence of an edge between sub-sampled locations and has assigned signal values for successive sampling locations of the fully sampled channel on opposite sides of the edge in accordance with different image characteristics. After the image has been segmented, within each region, signal values for non-sampled locations of a sub-sampled channel are interpolated in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at a sampling location in that region whereat each of fully sampled and sub-sampled signal values have been produced.
The interpolation process according to the present invention may be carried out using more than one segmentation mechanism. In this instance, the image processing performance of each segmentation scheme is weighted. Interpolation values based upon each segmentation scheme are then combined as a weighted average.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 diagrammatically shows a color image processing system employing a dedicated color sample interpolation processor; Figure 2 diagrammatically illustrates a portion of a two-dimensional pixel array;
Figure 3 diagrammatically illustrates a portion of one row of the pixel array of Figure 2 and the differential sampling of the pixel locations within the row to produce sampled signal values Li and Ci; Figure 4 diagrammatically illustrates a series of image representative signals having an edge between respective constant levels for successively offset edge-representative locations among fully sampled locations 0 and n;
Figure 5 diagrammatically illustrates an image representative signal having an edge between respective uniformly (linearly) varying regions containing fully sampled locations 0 and n; and Figure 6 diagrammatically illustrates a portion of a two-dimensional pixel array and the two-dimensional segmenting of the pixel locations within the array such that each region contains a plurality of pixel locations whereat fully sampled signal values are provided and a pixel location whereat a sub-sampled signal value is provided.
DETAILED DESCRIPTION Before describing the details of an embodiment of the present invention, it should be observed that the invention resides primarily in a novel color imagery signal interpolation mechanism, rather than a particular implementation of that mechanism. Moreover, although, in its preferred embodiment, the invention is implemented using a programmed digital computer, the interpolation mechanism may be carried out by means of a variety of structural combinations of conventional signal processing circuits and components, such as custom-configured integrated circuit hardware. Accordingly, the structure, control and arrangement of the signal processing hardware of an embodiment of the invention have been illustrated in the drawings by a readily understandable block diagram, which shows only those specific details that are pertinent to the present invention, so as not to obscure the disclosure with structural details which will be readily apparent to those skilled in the art having the benefit of the present description.
Referring now to Figure 1, a color image processing system is diagrammatically shown as comprising a color image transducer device, such as a digital RGB video camera 11, that supplies over its output link 13 image characteristic-representative digital signals, such as respective red, green and blue channel signal values representative of the red, green and blue color band components of a color image 21, light from which is incident upon an opto-electronic conversion matrix within the camera. As noted previously, when image 21 is scanned by camera 11, the sampling frequency is not the same for each of the respective image information bands. Typically, green is fully sampled, while red and blue are sampled at a lower frequency. As a consequence, the sampled values on link 13 are coupled to an interpolation processor 15 (e.g. a microprocessor programmed to execute the interpolation procedure set forth in detail below), which derives values for those portions (pixel locations) of an image that lie between fully sampled pixels. In the present description, those pixels that are fully sampled will be denoted by the reference character L, while the subsampled components will be denoted by the reference character C.
Figure 2 diagrammatically illustrates a portion of a two-dimensional array of image pixels Pi,j, for which image characteristic representative signals, associated with respective image attributes, such as a specified chrominance or chrominance difference value, a luminance value, etc. are provided by camera 11. In addition, within the array a one- dimensional or linear pixel region 25 and a two- dimensional region 27 have been demarcated by broken lines and will be referenced below in the description of examples for segmenting the image into contiguous regions of differing color characteristics.
As pointed out above, in accordance with the present invention, the problem of color fringing of edges between portions of an image having differing color characteristics is effectively avoided by first segmenting the image into a plurality of contiguous image regions, between which edges are inferred, and then interpolating sub-sampled signal values within each region in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at sampling locations whereat fully sampled and sub-sampled signal values have been supplied by camera 11. It should be observed that the mechanism chosen to segment the image may be any of a number of conventional imagery signal processing techniques, including both one and two-dimensional segmentation schemes, and the invention is not limited to any specific method. Consequently, the segmentation examples given below are merely for purposes of illustration; other procedures described in the literature are equally applicable to the invention.
One example of a technique for segmenting the image on a one-dimensional basis involves the analysis of successive linear groups or lines of pixels, such as linear portion 25 of the array of Figure 2, using sub- intervals delineated by those pixel locations at which signal values for a sub-sampled channel have been produced by camera 11. Within each sub-interval a color edge is presumed and the analysis proceeds to determine the most likely position of the edge based upon an iterative processing of the fully sampled channel signals for all of the pixels within the sub- interval.
More particularly, referring to Figure 3, which diagrammatically illustrates a portion 25 of one row of the pixel array of Figure 2, differential (frequency) sampling of the pixel locations Pi within the row produces fully sampled signal values Li for each pixel Pi and sub-sampled signal values C0, Cn, C2n, ... for every nth pixel location, where the sub-sample factor n typically has a value of 2, 4 or 8. It is necessary, therefore, to interpolate C values for pixel locations C1, C2, C3, ..., Cn-1, Cn+1, etc. In accordance with the present invention, prior to interpolating C values for these locations, within each subinterval, such as the sub-interval between pixel locations P0 and Pn, a color edge is inferred, and the characteristics of the image on opposite sides of the edge are presumed to conform to respectively different color characteristics, such as respectively different constant color levels or respective uniform variations in color, so that between the edge and the end points of the subinterval, there are two color regions that are contiguous with one another at the edge. In order to determine between which two pixels the edge should be located, the segmentation process is carried out on an iterative basis, producing a series of measurements of the 'fit' of each segmented sub-interval iteration. That segmented sub-interval iteration which best fits or matches the L values of the fully sampled channel is selected. The best fit may be determined using conventional error measurement techniques, such as a least squares fit, a maximum likelihood or Bayesian statistical procedure, or a threshold crossing of the L values between the regions.
Figure 4 diagrammatically illustrates an iterative segmentation process applied to a sub-interval of fully sampled L signal values, with an edge E being inferred between respective high and low constant levels LKH and LKL, for successively offset edge-representative locations among fully sampled locations 0 and n. In accordance with this example of a one-dimensional segmentation process, it is presumed that the characteristics of the image conform to a prescribed model or template M, represented in the example illustrated in Figure 4 by a step-wise variation in the L parameter between a relatively high, constant value LKH and a relatively low constant value LKL. In other words, within the sub-interval, an edge E is presumed between pixel locations 0 and n; on each side of the edge the value of L is a constant (either LKH or LKL). The values of the constants may be chosen by setting a given threshold based upon a priori knowledge of the image statistics and averaging the L values within the interval. As with the segmentation process itself, the particular mechanism through which the constants are defined is not critical to the invention.
For each iteration of the location of edge E, the sequence of L values L0, L1, L2, ..., Ln is compared with the model M and a measure of the fit of the model to the actual image-representative signals (for example, through a least squared error fit or choosing the center zero-crossing of L - (L0+Ln)/2) is derived. That segmentation model which produces the 'best fit' thereby 'tailors' the optimal location of the edge E and the mechanism through which the sub-sampled C values are interpolated within the sub-interval or region between pixel locations 0 and n.
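By way of illustration only, the following sketch (not part of the patent disclosure; Python with NumPy is assumed, and the function name is hypothetical) carries out the step-model search just described: every candidate edge position within one sub-interval is tried, a constant level is fitted to the L values on each side, and the candidate yielding the smallest total squared error is retained as the inferred edge.

    import numpy as np

    def segment_step_model(L):
        # L is the sequence L0..Ln of fully sampled values within one
        # sub-interval; the edge is placed between L[e-1] and L[e].
        L = np.asarray(L, dtype=float)
        best_edge, best_levels, best_err = None, None, np.inf
        for e in range(1, len(L)):                       # candidate edge positions
            left, right = L[:e], L[e:]
            k_left, k_right = left.mean(), right.mean()  # constant level on each side
            err = ((left - k_left) ** 2).sum() + ((right - k_right) ** 2).sum()
            if err < best_err:
                best_edge, best_levels, best_err = e, (k_left, k_right), err
        return best_edge, best_levels

    # Example: a noisy edge between levels near 10 and 40 inside one sub-interval.
    edge, (k_left, k_right) = segment_step_model([10, 11, 9, 10, 41, 39, 40, 40, 41])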
The particular mechanism for interpolating an unknown value of C will depend on the characteristics of the bands of the L and C signals. Examples of the mechanism through which unknown values of C are defined include making each value of C within a region (on one side of the edge E) constant, making the difference C-L constant within the region, or making the ratio C/L constant within the region. For bands which are essentially decorrelated with one another, such as the in-phase and luminance channels in a YIQ system, the value of C is fixed within the region. For a red and green intensity imaging device, the ratio of red to green (green is fully sampled) is made constant within the region. For a log exposure imaging device, the difference between the red and green values is made constant within the region.
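A minimal sketch of these three interpolation relationships follows (illustrative only; the helper and its arguments are not taken from the patent). Within one segmented region, the unknown C values are derived from the single anchor location at which both L and C were actually sampled, by holding C, C - L, or C / L constant across the region.

    def interpolate_region(L_values, anchor_index, C_anchor, mode="ratio"):
        # L_values: fully sampled L values of one region (one side of edge E)
        # anchor_index: position within the region where C_anchor was sampled
        L_a = L_values[anchor_index]
        if mode == "constant":      # decorrelated bands (e.g. I and Y): C held fixed
            return [C_anchor for _ in L_values]
        if mode == "difference":    # log-exposure data: C - L held constant
            d = C_anchor - L_a
            return [l + d for l in L_values]
        if mode == "ratio":         # intensity data: C / L held constant
            r = C_anchor / L_a if L_a != 0 else 0.0
            return [l * r for l in L_values]
        raise ValueError("unknown mode")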
Figure 5 diagrammatically illustrates a second example of an iterative segmentation process that may be applied to a sub-interval of fully sampled L signal values, with an edge E being inferred between respective uniformly varying values of L (of the form L = a.x + b), as represented by the line L1 = a1.x + b1 that extends between pixel location P0 and edge E, and the line L2 = a2.x + b2 that extends between edge E and pixel location Pn. In accordance with this example, the model M of the image characteristic has the form of a linear equation, represented in Figure 5 by respective lines L1 and L2. As in the previous example, within the sub-interval, an edge E is presumed between pixel locations 0 and n, and on each side of the edge the value of L conforms with a respective relationship (here a linear variation rather than a constant, as in the previous example). The values of the coefficients a and b may be selected in accordance with image characteristics, including expected noise, and the magnitudes of each of C0, L0 and Cn, Ln at pixel locations 0 and n, respectively.
Again, as in the previous example, for each iteration of the location of edge E, the sequence of L values L0, L1, L2, ..., Ln is compared with the model M and a measure of the fit of the model to the actual image-representative signals is derived. That model which produces the best fit then establishes the location of the edge E and the mechanism through which the sub-sampled C values are interpolated within the sub-interval or region between pixel locations 0 and n. As noted previously, the mechanism for interpolating an unknown value of C will depend on the characteristics of the bands of the L and C signals and may include the relationships set forth above.
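The ramp-model variant can be sketched in the same way (again an illustration under the assumption of a least-squares criterion, not the patented implementation): a straight line L = a.x + b is fitted independently on each side of every candidate edge, and the split with the smallest total squared residual locates edge E.

    import numpy as np

    def segment_linear_model(L):
        L = np.asarray(L, dtype=float)
        x = np.arange(len(L), dtype=float)
        best_edge, best_err = None, np.inf
        for e in range(2, len(L) - 1):       # at least two points on each side
            err = 0.0
            for xs, ys in ((x[:e], L[:e]), (x[e:], L[e:])):
                a, b = np.polyfit(xs, ys, 1)             # least-squares line fit
                err += ((ys - (a * xs + b)) ** 2).sum()
            if err < best_err:
                best_edge, best_err = e, err
        return best_edge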
Figures 6(a) - 6(b) show portion 27 of the pixel array of Figure 2, which has been segmented into respective segmentation diagrams by a two-dimensional segmenting process, such that each region of each diagram contains a plurality of pixel locations whereat fully sampled signal values are provided and one pixel location whereat a sub-sampled signal value is provided. More particularly, each of Figures 6(a) - 6(b) illustrates a two-dimensional 'neighborhood' segmenting or subdividing of a two-dimensional array of pixels into four regions 31-34, each of which contains a pixel location P at which both L and C values are supplied by camera 11 and some number of additional pixel locations Po at which only L values are provided and for which C values must be interpolated. An example of a 'neighborhood' segmenting process for the sampling geometry illustrated in Figure 6 may be any of the types described in the published literature (e.g. "Handbook of Pattern Recognition and Image Processing", eds. T.Y. Young & K.S. Fu, Academic Press, 1986). Through iterative application of the segmenting process, groups of interconnected edge lines E between adjacent regions are produced, as shown in Figures 6(a) - (c). In the segmentation diagram shown in Figure 6(a), for example, the geometry is subdivided by an edge line E12 between regions 31 and 32, edge line E13 between regions 31 and 33, edge line E14 between regions 31 and 34 and edge line E34 between regions 33 and 34. For each iteration of the placement of edge lines E, the L values within each region are compared with a (two-dimensional) model for the array (e.g. a constant level or 'planar' model) and a measure of the fit of the model to the L values for each pixel location within a region is derived. As in the case of a one-dimensional segmentation process, that two-dimensional model which produces the best fit establishes the paths of the edge lines E and the mechanism through which the sub-sampled C values are interpolated within the region.
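For the two-dimensional case, a candidate segmentation diagram can be scored in the same spirit. The sketch below (my illustration, not the patent's; a 'planar' model L = a.x + b.y + c and a per-pixel region label map are assumed) fits a plane to the L values of each region and sums the squared residuals, so that competing placements of the edge lines E can be compared.

    import numpy as np

    def planar_fit_error(L_image, labels):
        # labels: integer region label (e.g. 31..34) for every pixel of L_image
        total = 0.0
        ys, xs = np.indices(L_image.shape)
        for r in np.unique(labels):
            m = labels == r
            A = np.column_stack([xs[m], ys[m], np.ones(m.sum())])
            coef, *_ = np.linalg.lstsq(A, L_image[m], rcond=None)  # plane a*x + b*y + c
            total += ((L_image[m] - A @ coef) ** 2).sum()
        return total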
As described above, the interpolation mechanism according to the present invention first segments image data pursuant to some prescribed one or two-dimensional segmentation procedure, and then, after performing a best fit tailoring the boundaries of the segmented regions, interpolates the sub-sampled C values at pixel locations for which C data was not originally supplied. As an enhancement of this process, rather than initially decide upon a particular segmentation scheme and then apply the L data values to the chosen segmentation mechanism, the mechanism of the present invention may be carried out by applying a plurality of segmentation schemes to the data and assigning a figure of merit to each segmentation process. A typical measure of the figure of merit may again be based upon the mean square deviation of the actual L data from the chosen segmentation model. Another measure of the figure of merit that may be used is the likelihood that a segmentation could produce the given observed L data, based upon a statistical model of the image. Thus, for a plurality of segmentation schemes S1, S2, S3, ..., Sk, there is a corresponding plurality of performance or weighting coefficients W1, W2, W3, ..., Wk, each of which is associated with the figure of merit and a priori characteristics of the image. For example, the weighting function W may be established in accordance with the relationship:
W = exp(-(mean square deviation) / (flat field variance)), wherein the variance value provides a measure of the noise within a generally uniform region of the image.
For each segmentation diagram, all of the unknown C values are interpolated and then scaled in accordance with the weighting coefficient associated with the particular segmentation process employed. Finally, for each interpolated pixel location Pi, all of the values Ci for that location are summed over the total of the weighting coefficients to obtain a weighted average of the value of Ci.
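A compact sketch of this weighted combination is given below (illustrative; the names are hypothetical). Each segmentation scheme contributes a complete set of interpolated C values together with its mean square deviation; the weights follow the relationship W = exp(-(mean square deviation)/(flat field variance)) stated above, and the normalized weighted sum yields the weighted average of Ci at every interpolated pixel location.

    import numpy as np

    def combine_interpolations(C_estimates, mse_per_scheme, flat_field_variance):
        # C_estimates: list of arrays, one interpolated C image per segmentation scheme
        W = np.exp(-np.asarray(mse_per_scheme, dtype=float) / flat_field_variance)
        W = W / W.sum()          # divide by the total of the weighting coefficients
        return sum(w * C for w, C in zip(W, C_estimates))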
As will be appreciated from the foregoing description, the problem of color fringing of edges between image regions of different color characteristics that are processed by conventional interpolation mechanisms is effectively obviated in accordance with the present invention by basing interpolation of sub-sampled image signals on an initial segmentation diagram, boundaries between which have been tailored to optimally subdivide the image data into a plurality of contiguous image regions of respectively different image characteristics. Because a boundary between adjacent regions occurs where the segmentation mechanism has tailored the inference of an edge, interpolated sub-sampled values in the neighborhood of that edge produce a reconstructed image that enjoys significantly improved quality over images produced by conventional linear interpolation. While I have shown and described an embodiment in accordance with the present invention, it is to be understood that the same is not limited thereto but is susceptible to numerous changes and modifications as known to a person skilled in the art, and I therefore do not wish to be limited to the details shown and described herein but intend to cover all such changes and modifications as are obvious to one of ordinary skill in the art.

Claims

WHAT IS CLAIMED
1. A method of processing sampled signal values, produced by a color imaging device, representative of respectively different information contents of an input image, a first of said sampled signal values, associated with a first band containing information of first characteristics of said input image, being produced at a first spatial frequency, and a second of said sampled signal values, associated with a second band containing information of second characteristics of input image, being produced at a second spatial frequency, lower than said first spatial frequency, said method producing interpolated second signal values, for locations of said image between sampling locations at which second signal values are not produced, comprising the steps of:
(a) segmenting said image into an image segmentation diagram formed of a plurality of regions of first signal values such that the first signal values in a respective region are associated with a common image characteristic for that region, while the image characteristics of adjacent regions differ from one another such that there is an image boundary between adjacent regions;
(b) for each of said plurality of regions, interpolating second signal values, for those locations within that region at which second signal sample values have not been produced, in accordance with a predetermined relationship between first and second signal values at a location in that region whereat each of first and second signal sample values have been produced.
2. A method according to claim 1, wherein step (a) comprises segmenting said image into a plurality of segmentation diagrams in accordance with a prescribed image segmentation mechanism, each segmentation diagram being formed of a plurality of regions of first signal values such that the first signal values in a respective region are associated with a common image characteristic for that region, while the image characteristics of adjacent regions differ from one another, such that there is an image boundary between adjacent regions, and step (b) comprises, for each of the plurality of regions within a selected one of said plurality of segmentation diagrams, interpolating second signal values, for those locations within that region at which second signal sample values have not been produced, in accordance with a predetermined relationship between first and second signal values at a location in that region whereat each of first and second signal sample values have been produced.
3. A method according to claim 1, wherein step (a) comprises segmenting said image into a plurality of segmentation diagrams in accordance with a plurality of respectively different image segmentation mechanisms, each segmentation diagram being formed of a plurality of regions of first signal values such that the first signal values in a respective region are associated with a common image characteristic for that region, while the image characteristics of adjacent regions differ from one another such that there is an image boundary between adjacent regions, and weighting each of said plurality of segmentation diagrams in accordance with a prescribed error characteristic between each segmented diagram and said input image, and step (b) comprises interpolating second signal values in accordance with the weighting of said plurality of segmentation diagrams.
4. A method according to claim 3, wherein step (b) comprises interpolating second signal values for each of said plurality of segmentation diagrams and obtaining a weighted average of respective interpolated second signal values produced from each of said segmentation diagrams.
5. A method according to claim 1, wherein step (a) comprises segmenting said image into an image segmentation diagram formed of a plurality of regions of first signal values such that, within a respective region, each first signal value is the same, while the values of first signals for adjacent regions differ from one another, so that an image boundary is defined between adjacent regions.
6. A method according to claim 5, wherein step (a) comprises segmenting said image into a plurality of segmentation diagrams in accordance with a plurality of respectively different image segmentation mechanisms, and weighting each of said plurality of segmentation diagrams in accordance with a prescribed error characteristic between each segmented diagram and said input image, and step (b) comprises interpolating second signal values in accordance with the weighting of said plurality of segmentation diagrams.
7. A method according to claim 6, wherein step (b) comprises interpolating second signal values for each of said plurality of segmentation diagrams and obtaining a weighted average of respective interpolated second signal values produced from each of said segmentation diagrams.
8. A method according to claim 1, wherein step (a) comprises the steps of:
(a1) inferring an image characteristic edge between successive sampling locations at which second signal values are produced by said imaging device,
(a2) processing first signal values produced at sampling locations that lie between said image characteristic edge and one of said successive sampling locations in accordance with a first image characteristic-defining relationship, and first signal values produced at sampling locations that lie between said image characteristic edge and another of said successive sampling locations, in accordance with a second image characteristic-defining relationship, so as to derive a measure of a prospective segmentation boundary between said successive sampling locations,
(a3) repeating steps (a1) and (a2) for plural inferred image characteristic edges between said successive sampling locations, so as to obtain plural measures of prospective segmentation boundaries, and
(a4) selecting, as an image boundary between adjacent regions of said segmentation diagram, that image characteristic edge which most closely fits a variation in said first signal values in accordance with said measures of prospective segmentation boundaries.
9. A method according to claim 8, wherein steps (a1) - (a4) are carried out with respect to a one-dimensional arrangement of sampling locations.
10. A method according to claim 8, wherein steps (a1) - (a4) are carried out with respect to a two-dimensional arrangement of sampling locations.
11. A method of processing sampled signal values, produced by a color imaging device, representative of respectively different information contents of an input image, a fully sampled channel of said signal values being associated with a first band containing information of first characteristics of said input image, and a sub-sampled channel of said sampled signal values being associated with a second band containing information of second characteristics of input image, said method producing interpolated second signal values, for locations of said image between sampling locations at which second signal values are not produced, comprising the steps of:
(a) segmenting the fully sampled channel into a segmentation diagram containing a plurality of contiguous image regions of respectively different image characteristics, such that, within a respective region, the fully sampled signal values are associated with a common image characteristic and, on either side of a boundary between adjacent regions, signal values for successive sampling locations of the fully sampled channel are assigned in accordance with different image characteristics; and
(b) interpolating, within each region, signal values for non-sampled locations of said sub- sampled channel in accordance with a predetermined relationship between fully sampled and sub-sampled signal values at a sampling location in that region whereat each of fully sampled and sub-sampled signal values have been produced.
12. A method according to claim 11, wherein step (a) includes the step of segmenting the fully sampled channel into a plurality of different segmentation diagrams, the signal processing performance of each of which is defined by an associated weighting coefficient, and step (b) comprises interpolating signal values for non-sampled locations of said sub-sampled channel for each segmentation diagram and combining the interpolated signal values in accordance with said weighting coefficients.
PCT/US1990/005363 1989-10-02 1990-09-24 Segmentation mechanism for producing interpolated color values for a sampled color image signal WO1991005441A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
DE69015153T DE69015153T2 (en) 1989-10-02 1990-09-24 SEGMENTATION DEVICE FOR GENERATING COLOR INTERPOLATION VALUES FOR A KEYED COLOR IMAGE SIGNAL.
EP90914590A EP0446330B1 (en) 1989-10-02 1990-09-24 Segmentation mechanism for producing interpolated color values for a sampled color image signal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US415,942 1989-10-02
US07/415,942 US5040064A (en) 1989-10-02 1989-10-02 Method of processing sampled signal valves produced by a color imaging device

Publications (1)

Publication Number Publication Date
WO1991005441A1 true WO1991005441A1 (en) 1991-04-18

Family

ID=23647867

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1990/005363 WO1991005441A1 (en) 1989-10-02 1990-09-24 Segmentation mechanism for producing interpolated color values for a sampled color image signal

Country Status (5)

Country Link
US (1) US5040064A (en)
EP (1) EP0446330B1 (en)
JP (1) JPH04502097A (en)
DE (1) DE69015153T2 (en)
WO (1) WO1991005441A1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2948240B2 (en) * 1989-10-09 1999-09-13 株式会社日立製作所 Signal interpolator
US5475769A (en) * 1992-07-13 1995-12-12 Polaroid Corporation Method and apparatus for recovering image data through the use of a color test pattern
US5382976A (en) * 1993-06-30 1995-01-17 Eastman Kodak Company Apparatus and method for adaptively interpolating a full color image utilizing luminance gradients
US5555324A (en) * 1994-11-01 1996-09-10 Massachusetts Institute Of Technology Method and apparatus for generating a synthetic image by the fusion of signals representative of different views of the same scene
US5778106A (en) * 1996-03-14 1998-07-07 Polaroid Corporation Electronic camera with reduced color artifacts
JPH11122626A (en) 1997-10-17 1999-04-30 Nikon Corp Image processing method, system and record medium recording image processing program
US6650795B1 (en) 1999-08-10 2003-11-18 Hewlett-Packard Development Company, L.P. Color image capturing system with antialiazing
US6580837B1 (en) * 1999-12-07 2003-06-17 Intel Corporation Up-sampling decimated color plane data
US6903782B2 (en) * 2001-03-28 2005-06-07 Koninklijke Philips Electronics N.V. System and method for performing segmentation-based enhancements of a video image
US7221793B2 (en) * 2003-05-27 2007-05-22 Hewlett-Packard Development Company, L.P. Systems and methods for providing spatially-varied demosaicing
GB0316994D0 (en) * 2003-07-21 2003-08-27 E2V Tech Uk Ltd Smear reduction in CCD images
US20070198725A1 (en) * 2004-10-06 2007-08-23 Morris Robert P System and method for utilizing contact information, presence information and device activity
US7568628B2 (en) 2005-03-11 2009-08-04 Hand Held Products, Inc. Bar code reading device with global electronic shutter control
US7770799B2 (en) 2005-06-03 2010-08-10 Hand Held Products, Inc. Optical reader having reduced specular reflection read failures
TWI268709B (en) * 2005-08-26 2006-12-11 Realtek Semiconductor Corp Digital filtering device and related method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1986001966A1 (en) * 1984-09-10 1986-03-27 Eastman Kodak Company Signal processing method and apparatus for sampled image signals
WO1986001965A2 (en) * 1984-09-10 1986-03-27 Eastman Kodak Company Signal rpocessing method and apparatus for sampled color image signal

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4642678A (en) * 1984-09-10 1987-02-10 Eastman Kodak Company Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal
US4764805A (en) * 1987-06-02 1988-08-16 Eastman Kodak Company Image transmission system with line averaging preview mode using two-pass block-edge interpolation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1986001966A1 (en) * 1984-09-10 1986-03-27 Eastman Kodak Company Signal processing method and apparatus for sampled image signals
WO1986001965A2 (en) * 1984-09-10 1986-03-27 Eastman Kodak Company Signal rpocessing method and apparatus for sampled color image signal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IEEE Transactions on Consumer Electronics, Volume 35, No. 3, August 1989, IEEE, (New York, US), K.A. PARULSKI et al.: "A Digital Color CCD Imaging System using Custom VLSI Circuits", pages 382-388; see page 386, left-hand column, line 11 - right-hand column, line 7 *
PATENT ABSTRACTS OF JAPAN, Volume 13, No. 128 (E-735), 29 March 1989 & JP, A, 63296577 (Matsushita) 2 December 1988 see Abstract *

Also Published As

Publication number Publication date
US5040064A (en) 1991-08-13
DE69015153T2 (en) 1995-06-22
EP0446330A1 (en) 1991-09-18
EP0446330B1 (en) 1994-12-14
DE69015153D1 (en) 1995-01-26
JPH04502097A (en) 1992-04-09

Similar Documents

Publication Publication Date Title
EP0446330B1 (en) Segmentation mechanism for producing interpolated color values for a sampled color image signal
Lukac et al. Demosaicked image postprocessing using local color ratios
US6934056B2 (en) Noise cleaning and interpolating sparsely populated color digital image using a variable noise cleaning kernel
EP0460193B1 (en) A method to extend the linear range of images captured on film
US7486844B2 (en) Color interpolation apparatus and color interpolation method utilizing edge indicators adjusted by stochastic adjustment factors to reconstruct missing colors for image pixels
US7515747B2 (en) Method for creating high resolution color image, system for creating high resolution color image and program creating high resolution color image
JP3293920B2 (en) Image processing apparatus and method
EP1552474B1 (en) A method for interpolation and sharpening of images
KR20010020797A (en) Image demosaicing method utilizing directional smoothing
US20060119896A1 (en) Image processing apparatus, image processing program, electronic camera, and image processing method for smoothing image of mixedly arranged color components
KR100332268B1 (en) How to make hierarchical estimates of movement in television signals
EP1337115B1 (en) Image processor and colorimetric system converting method
JPH1155695A (en) Method and device for measuring registration error and color aberration
US7386167B2 (en) Segmentation technique of an image
US4907096A (en) Image processing
EP0448691B1 (en) Line scanning apparatus using staggered linear segments with adjoining overlap regions
US7289665B2 (en) Image processing device and image processing program
US7068396B1 (en) Method and apparatus for performing tone scale modifications on a sparsely sampled extended dynamic range digital image
EP0446329B1 (en) Mechanism for producing interpolated color values for a sampled color image signal
JP3059205B2 (en) Image processing device
US7133569B1 (en) Aliasing artifact attenuation system
US6795586B1 (en) Noise cleaning and interpolating sparsely populated color digital image
JPH0722343B2 (en) Image data compression device
US7023576B1 (en) Method and an apparatus for elimination of color Moiré
US6961480B2 (en) Off-grid interpolation in image processing

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB IT LU NL SE

WWE Wipo information: entry into national phase

Ref document number: 1990914590

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1990914590

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1990914590

Country of ref document: EP