US9451222B2 - Color processing of digital images - Google Patents

Color processing of digital images

Info

Publication number
US9451222B2
US9451222B2
Authority
US
United States
Prior art keywords
pixel
input image
edginess
low pass
color
Prior art date
Legal status
Active
Application number
US14/066,395
Other versions
US20140118585A1 (en)
Inventor
Filippo Naccari
Arcangelo Ranieri Bruna
Simone BIANCO
Raimondo Schettini
Current Assignee
STMicroelectronics SRL
Original Assignee
STMicroelectronics SRL
Priority date
Filing date
Publication date
Application filed by STMicroelectronics SRL
Publication of US20140118585A1
Assigned to STMICROELECTRONICS S.R.L. (assignment of assignors' interest). Assignors: BIANCO, SIMONE; SCHETTINI, RAIMONDO; BRUNA, ARCANGELO RANIERI; NACCARI, FILLIPO
Application granted
Publication of US9451222B2
Legal status: Active
Anticipated expiration

Classifications

    • H (Electricity) > H04 (Electric communication technique) > H04N (Pictorial communication, e.g. television)
    • H04N 9/07
    • H04N 23/12: Cameras or camera modules comprising electronic image sensors; control thereof for generating image signals from different wavelengths with one sensor only
    • H04N 1/6072: Colour correction or control adapting to different types of images, e.g. characters, graphs, black and white image portions
    • H04N 23/85: Camera processing pipelines; components thereof for processing colour signals for matrixing
    • H04N 5/142: Picture signal circuitry for video frequency region; edging, contouring
    • H04N 9/67: Circuits for processing colour signals for matrixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An embodiment relates to a method for color processing of an input image, the method including the steps of low-pass filtering of the input image to obtain a low-pass component, high-pass filtering of the input image to obtain a high-pass component, processing the input image for edge detection to obtain edginess parameters, and performing a color-space transformation of the input image based on the low-pass component, the high-pass component, and the edginess parameters.

Description

PRIORITY CLAIM
The instant application claims priority to Italian Patent Application No. VI2012A000291, filed Oct. 29, 2012, which application is incorporated herein by reference in its entirety.
TECHNICAL FIELD
An embodiment relates to the field of image processing, in particular, to color processing of digital images and, particularly, to a color-processing system employing color-space matrix transforms.
SUMMARY
Digital imaging devices for capturing images, for example, mobile and digital still cameras or video cameras or LCD imaging signal processors, operate by means of image sensors. The spectral sensitivities of the sensor color channels usually do not match those of a desired output color space. Therefore, some color processing (matrixing) is typically needed to achieve colors of the desired color space.
A typical image-reconstruction (color-correction) pipeline implemented in an imaging device, for example, a digital camera, reads (for a gamma correction of 1):
\[ \begin{bmatrix} R_o \\ G_o \\ B_o \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \begin{bmatrix} r_{gw} & 0 & 0 \\ 0 & g_{gw} & 0 \\ 0 & 0 & b_{gw} \end{bmatrix} \begin{bmatrix} R_i \\ G_i \\ B_i \end{bmatrix}, \]
where R_i, G_i, B_i are the device raw RGB values and R_o, G_o, B_o are the desired output RGB values. The RGB space is used for exemplary purposes. The diagonal matrix diag(r_gw, g_gw, b_gw) performs white balancing, i.e., it is provided for luminance compensation. The 3×3 matrix with elements a_11, . . . , a_33 is a color matrix for the color-space transform (matrixing) from the device RGB space to a standard color space (e.g., sRGB, AdobeRGB, etc.).
However, the color-space transform can produce artifacts resulting in a degradation of the image quality in the desired, e.g., sRGB space. In particular, in the case of small-gamut sensors, the noise in the output color space typically exceeds the original noise in the device color space, i.e., for example, for the red channel, under the assumption that the noise in the device color space is the same for all RGB channels:
\[ \frac{\sigma_{R_o}}{\sigma_{R_i}} = \sqrt{a_{11}^2 + a_{12}^2 + a_{13}^2} > 1, \]
where σ_{R_o} and σ_{R_i} represent the noise in the red channel after and before the transformation, respectively.
Thus, typically, color processing leads to noise amplification.
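For instance, taking the first row (1.9, -0.6, -0.3) of the example color matrix given further below, the red-channel noise gain would be sqrt(1.9^2 + 0.6^2 + 0.3^2) = sqrt(4.06) ≈ 2.0, i.e., the noise standard deviation in the red channel roughly doubles after matrixing.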
Therefore, an embodiment is a method for color processing that alleviates noise amplification significantly as compared to conventional schemes.
An embodiment addresses the above-mentioned problem and provides a method for color processing of a (digital) input image, the method including the steps of:
low-pass filtering of the (e.g., color channels of the) input image to obtain a low-pass component (e.g., for each color channel);
high-pass filtering of the (e.g., color channels of the) input image to obtain a high-pass component (e.g., for each color channel);
processing the input image for edge detection to obtain edginess parameters (e.g., one edginess parameter for each pixel of the input image); and
performing a color-space transformation of the input image based on the low-pass component, the high-pass component, and the edginess parameters. The input image is, for example, an RGB image.
By low-pass filtering, slowly varying background patterns that can be interpreted as wave patterns with long wavelengths (low frequencies) are stressed, whereas by high-pass filtering, highly varying details with short wavelengths (high frequencies) are stressed. Both low- and high-pass filtering can, in principle, be performed in the spatial or frequency domain in a conventional manner.
The low-pass filtering can, for example, be performed by means of a low-pass filter with a bandwidth selected based on the resolution (number of pixels) of the input image.
If m and n are respectively the number of rows and columns of the input image, then the size of the low-pass filter convolution kernel can be calculated, for example, in the following way: LPk=(m*n)/S, where S is a suitable scaling factor.
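As an illustration only (the box-filter kernel shape, the default value of S, and the function names below are assumptions rather than details taken from the patent), the low-pass/high-pass decomposition could be sketched in Python as follows, with the high-pass component obtained as the difference between the input image and the low-pass component, as also described further below:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def decompose(image, S=65536):
        """Split an input image into low-pass and high-pass components.

        image : float array of shape (rows, cols, channels)
        S     : scaling factor in LPk = (m*n)/S; the patent only calls it
                "a suitable scaling factor", so the default here is an assumption
        """
        m, n = image.shape[:2]
        lp_k = max(3, int((m * n) / S))   # kernel size from LPk = (m*n)/S
        lp_k += (lp_k + 1) % 2            # force an odd kernel size
        # A box (mean) filter serves as a simple low-pass filter; the patent
        # does not mandate a particular kernel shape.
        low = uniform_filter(image, size=(lp_k, lp_k, 1))
        high = image - low                # high-pass = input - low-pass
        return low, high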
Whereas gamut mapping based on color matrix operations of conventional digital cameras leads to a set of correction coefficients that causes significant reduction of the signal-to-noise ratio (SNR), according to an embodiment little or no significant noise is produced. This is achieved by adapting matrixing (color matrix application) based on the total information given by the low- and high-pass components and the edginess information. The edge detection is performed in order to discriminate between pixels belonging to flat or edge regions of the input image. In particular, the edge detection may include corner detection.
The edginess parameter represents the edginess of a pixel. For each pixel of the input image, a respective edginess parameter can be determined. For example, a normalized edginess parameter of the interval [0, 1], wherein 0 and 1 represent a likelihood of 0% and 100% probability, respectively, that a pixel belongs to an edge of the input image (or a corner of the input image), can be determined (see also detailed description below). The edginess parameters can, particularly, be determined based on a set of mask patterns used to identify vertical, horizontal and diagonal edges and corners. Moreover, the edginess parameters can be determined in the spatial RGB domain and based on the entire information of one or more RGB color channels.
According to an example where the color space has three dimensions or channels (e.g., RGB), the color-space transformation of the input image is performed for each pixel of the input image according to:
\[ C_i' = (C_1^{LP} + \alpha C_1^{HP})\,m_{i0} + (C_2^{LP} + \alpha C_2^{HP})\,m_{i1} + (C_3^{LP} + \alpha C_3^{HP})\,m_{i2} + (1 - \alpha)\,C_i^{HP}, \qquad i = 1, 2, 3, \]
where C_i^{LP} denotes the low-pass component of the pixel for the i-th channel (i-th channel of the low-pass component), C_i^{HP} denotes the high-pass component of the pixel for the i-th channel (i-th channel of the high-pass component), α denotes the edginess parameter of the pixel, and m_ij denotes the i,j coefficient of a color matrix.
The color-matrix coefficients depend mainly on the response of the camera color filters and on the target standard color space. They could also be calculated dynamically based on the white-balance estimation results. An example reads
\[ M = \begin{bmatrix} 1.9 & -0.6 & -0.3 \\ -0.3 & 1.6 & -0.3 \\ -0.5 & -0.7 & 2.2 \end{bmatrix}. \]
The above-described examples of an embodiment can also be implemented in the context of image processing for tone mapping (dynamic range compression). In general, an application of digital gain greater than 1.0 produces a signal degradation in terms of SNR.
Both the application of digital gains to input RGBi signals for white-balancing compensation and the gamma encoding applied to linear output RGBo signals can benefit from an embodiment.
Another embodiment is a computer-program product including one or more computer-readable media having computer-executable instructions for performing the steps of a method according to an embodiment such as according to one of the above-described examples.
The above-mentioned problem can also be solved by an image-processing device, including a processor (for example, an imaging signal processor or an LCD imaging signal processor) configured for:
low-pass filtering of the input image to obtain a low-pass component;
high-pass filtering of the input image to obtain a high-pass component;
processing the input image for edge detection to obtain edginess parameters; and
performing a color-space transformation of the input image based on the low-pass component, the high-pass component, and the edginess parameters.
In particular, the processor may be configured to perform the color-space transformation of the input image for each pixel of the input image according to:
\[ C_i' = (C_1^{LP} + \alpha C_1^{HP})\,m_{i0} + (C_2^{LP} + \alpha C_2^{HP})\,m_{i1} + (C_3^{LP} + \alpha C_3^{HP})\,m_{i2} + (1 - \alpha)\,C_i^{HP}, \qquad i = 1, 2, 3, \]
where C_i^{LP} denotes the low-pass component of the pixel for the i-th channel (i-th channel of the low-pass component), C_i^{HP} denotes the high-pass component of the pixel for the i-th channel (i-th channel of the high-pass component), α denotes the edginess parameter of the pixel, and m_ij denotes the i,j coefficient of a color matrix.
The image-processing device can, for example, include or consist of a digital (still) camera. It can also be some LCD display device including a display-driver circuit wherein an embodiment of the above-described method is implemented.
Another embodiment is an image-processing device including a low-pass filter for low-pass filtering of the input image to obtain a low-pass component, a high-pass filter for high-pass filtering of the input image to obtain a high-pass component, an edge-detection means for processing the input image for edge detection to obtain edginess parameters, and a color-matrix-application means for performing a color-space transformation of the input image based on the low-pass component, the high-pass component, and the edginess parameters.
According to a particular example, the high-pass component is obtained based on the difference of the input image and the low-pass component. For example, the pixel values of the low-pass component are subtracted from the pixel values of the input image.
BRIEF DESCRIPTION OF THE DRAWINGS
One or more features and advantages will be described with reference to the drawings. In the description, reference is made to the accompanying figures, which are meant to illustrate one or more embodiments, although it is understood that such embodiments may not represent the full scope of the teachings of the disclosure.
FIG. 1 illustrates an example of color processing based on color matrixing, according to an embodiment.
FIG. 2 shows exemplary mask patterns that can be used for edge or corner detection, according to an embodiment.
DETAILED DESCRIPTION
An example of adaptive color-matrix application for the color-space transform of an image is illustrated in FIG. 1, according to an embodiment. An RGB input image is processed by a low-pass filter 1 to obtain a low-pass component A_LP, and by a high-pass filter 2 to obtain a high-pass component A_HP. According to an example, the high-pass component is obtained by determining the difference of the pixel values of the input image and the pixel values of the low-pass component. In addition, the digital input image is processed for edge detection by an edge-detection means 3, which provides an edginess parameter α for each pixel of the input image. The edge-detection means 3 estimates the likelihood, on the interval [0, 1], that a pixel is part of an edge or a corner. The low-pass component A_LP, the high-pass component A_HP, and the edginess parameters α are input to a color-matrix application means 4, which performs matrixing of the input image based on the input information provided by the low-pass filter 1, the high-pass filter 2, and the edge-detection means 3.
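For orientation, a minimal sketch of how the blocks of FIG. 1 could be composed (the function signature and the use of callables to stand in for blocks 1-4 are illustrative assumptions; the individual blocks are sketched after the corresponding formulas below):

    def color_process(image, lowpass, highpass, edginess, matrixing):
        """Compose the processing blocks of FIG. 1 (illustrative sketch).

        image     : device-RGB input image, e.g., a float array (rows, cols, 3)
        lowpass   : stand-in for low-pass filter 1, returns A_LP
        highpass  : stand-in for high-pass filter 2, returns A_HP (e.g., image - A_LP)
        edginess  : stand-in for edge-detection means 3, returns per-pixel alpha in [0, 1]
        matrixing : stand-in for color-matrix application means 4
        """
        a_lp = lowpass(image)                 # block 1
        a_hp = highpass(image)                # block 2
        alpha = edginess(image)               # block 3
        return matrixing(a_lp, a_hp, alpha)   # block 4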
In detail, according to an example, the edge-detection means 3 adopts six mask patterns as illustrated in FIG. 2 in order to detect both edges and corners. The edge-detection means 3 may operate in the spatial RGB domain. It may make use of information of all channels. Moreover, across different patterns, different functions to accumulate the edginess parameters, and different weighting masks to obtain a better edge-detection precision, can be used.
The number of masks used for edge detection can be extended, making it possible to detect edges of any orientation with greater accuracy. The edge detection disclosed herein was initially based on the following paper: Yeong-Hwa Kim and Jaeheon Lee, "Image Feature and Noise Detection Based on Statistical Hypothesis Tests and Their Applications in Noise Reduction", IEEE Transactions on Consumer Electronics, vol. 51, no. 4, November 2005, which is incorporated by reference. However, in an embodiment, the accumulation function disclosed in that paper has been modified, and pattern masks for corner detection, as well as the possibility of using weighting masks, have been added.
Consider n color channels and m pattern masks of size s each. An edginess level of a pixel of the input image for the i-th channel related to the k-th edge pattern can be determined by:
\[ E_{ki} = \sum_{j=1}^{s-1} \left| C_{i,(j+1)}^{k} - C_{i,j}^{k} \right| w_{j}^{k}; \qquad i = 1, 2, \ldots, n; \quad k = 1, 2, \ldots, m, \]
where C_{i,j}^{k} denotes the pixel value of the i-th channel at the j-th position of the k-th pattern and w_{j}^{k} denotes the weight, selected from the interval [0, 1], at the j-th position of the k-th pattern. For the i-th color channel one obtains:
\[ E_i = \sum_{k=1}^{m} E_{ki}; \qquad i = 1, 2, \ldots, n. \]
Normalization to the interval [0, 1] is obtained by:
\[ \bar{E}_i = \frac{E_i}{(s-1)(2^b - 1)}, \]
where b is the bit depth of the i-th color channel, i.e., the number of bits used to represent each channel value, so that channel values lie in the interval [0, 2^b - 1]. Consequently, the overall normalized edginess parameter for the pixel is given by:
\[ \alpha = \max_i \left\{ \bar{E}_i \right\}. \]
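A minimal per-pixel sketch of this edginess computation, assuming the caller has already sampled the pixel's neighbourhood along the m patterns of FIG. 2 (the patterns and the weighting masks are shown only in the figure, so their shapes and values here are assumptions):

    import numpy as np

    def pixel_edginess(channel_patches, weight_masks, bit_depth=8):
        """Normalized edginess parameter alpha of a single pixel.

        channel_patches : array (n, m, s) with the pixel values C_{i,j}^k of the
                          n channels read along the s positions of each of the
                          m edge/corner patterns of FIG. 2
        weight_masks    : array (m, s-1) with the weights w_j^k in [0, 1]
        bit_depth       : b, the bit depth of the color channels
        """
        n, m, s = channel_patches.shape
        # E_ki = sum_j |C_{i,(j+1)}^k - C_{i,j}^k| * w_j^k
        diffs = np.abs(np.diff(channel_patches, axis=2))        # (n, m, s-1)
        e_ki = (diffs * weight_masks[None, :, :]).sum(axis=2)   # (n, m)
        # E_i = sum_k E_ki, normalized to [0, 1] by (s-1)(2^b - 1)
        e_bar = e_ki.sum(axis=1) / ((s - 1) * (2 ** bit_depth - 1))
        # alpha = max_i E_bar_i
        return float(e_bar.max())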
As already mentioned, the low-pass and high-pass components, as well as the edginess parameters α, for each pixel are input into the color-matrix application means 4. For a 3-channel color image, for example, an RGB image, the color-matrix application means 4 calculates, for each pixel, the i-th output channel of the matrix-transformed input image (i.e., of the desired color-transformed output image) as follows:
\[ C_i' = (C_1^{LP} + \alpha C_1^{HP})\,m_{i0} + (C_2^{LP} + \alpha C_2^{HP})\,m_{i1} + (C_3^{LP} + \alpha C_3^{HP})\,m_{i2} + (1 - \alpha)\,C_i^{HP}, \qquad i = 1, 2, 3, \]
where C_i^{LP} denotes the low-pass component of the pixel for the i-th channel (i-th channel of the low-pass component), C_i^{HP} denotes the high-pass component of the pixel for the i-th channel (i-th channel of the high-pass component), and m_ij denotes the i,j coefficient of the color matrix applied by the color-matrix application means 4. The coefficients m_ij can be the coefficients of a conventional color matrix.
An example color matrix is:
\[ M = \begin{bmatrix} 1.9 & -0.6 & -0.3 \\ -0.3 & 1.6 & -0.3 \\ -0.5 & -0.7 & 2.2 \end{bmatrix}. \]
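As a sketch of how the formula behaves (the function and variable names below are illustrative assumptions, not the patent's implementation): for a flat pixel (α = 0) the color matrix is applied only to the low-pass component, and the high-pass detail, which carries the high-frequency noise, is added back without amplification; for an edge pixel (α = 1) the formula reduces to conventional matrixing of the full signal.

    import numpy as np

    M = np.array([[ 1.9, -0.6, -0.3],
                  [-0.3,  1.6, -0.3],
                  [-0.5, -0.7,  2.2]])

    def adaptive_matrixing(c_lp, c_hp, alpha, m=M):
        """Per-pixel adaptive color matrixing.

        c_lp, c_hp : length-3 arrays with the low-pass and high-pass RGB values
                     of the pixel; alpha is its edginess parameter in [0, 1]
        """
        # C_i' = sum_j (C_j^LP + alpha * C_j^HP) * m_ij + (1 - alpha) * C_i^HP
        return m @ (c_lp + alpha * c_hp) + (1.0 - alpha) * c_hp

    lp = np.array([0.40, 0.35, 0.30])    # hypothetical low-pass RGB values
    hp = np.array([0.02, -0.01, 0.01])   # hypothetical high-pass (detail/noise) values
    print(adaptive_matrixing(lp, hp, alpha=0.0))  # flat region: M applied to LP only
    print(adaptive_matrixing(lp, hp, alpha=1.0))  # edge: conventional M @ (LP + HP)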
All previously described embodiments are not intended as limitations, but serve as examples illustrating features and advantages of the disclosed concepts. It is also understood that some or all of the above-described features can be combined in ways different from the ways described.
For example, an apparatus that performs the above-described calculations may be a computing machine such as a microprocessor, microcontroller, or non-instruction-executing circuit. And such a computing machine may be on a same die as, or on a different die than, the image-capture device that captures an image and that generates one or more color components for each pixel of the image.
From the foregoing it will be appreciated that, although specific embodiments have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the disclosure. Furthermore, where an alternative is disclosed for a particular embodiment, this alternative may also apply to other embodiments even if not specifically stated.

Claims (23)

The invention claimed is:
1. An apparatus, comprising:
a low pass filter configured to generate a low pass value in response to a first color component of a pixel, the first color component of the pixel being in one of a plurality of channels in a first color space;
a high pass filter configured to generate a high pass value in response to the first color component of the pixel;
an edge detection circuit configured to generate an edginess parameter for the pixel indicating the likelihood the pixel is part of an edge or corner in a digital image containing the pixel; and
a modifier circuit configured to transform the pixel from the first color space to a second color space including a plurality of channels, the modifier circuit configured to transform the pixel using a color transformation matrix including a plurality of coefficients, each coefficient that is utilized in generating a transformed value for the pixel in the second color space being adjusted using the low pass value of the pixel and the low pass values of corresponding pixels in the other channels and also being adjusted using the high pass value of the pixel and the high pass values of the corresponding pixels in the other channels, and wherein the edginess parameter of the pixel is utilized to adjust the high pass value of the pixel and the high pass values of the corresponding pixels but the edginess parameter is not utilized to adjust the low pass value of the pixel and the low pass values of the corresponding pixels in the other channels.
2. The apparatus of claim 1, wherein the high pass filter generates the high pass component based on the difference of the input image and the low pass component.
3. The apparatus of claim 1, wherein the low pass filter has a bandwidth based on a resolution of the input image.
4. The apparatus of claim 1, wherein the edge detection circuit is configured to generate the edginess parameter based on a set of mask patterns utilized to identify vertical, horizontal and diagonal edges in the digital image.
5. A system, comprising:
an image capture device configured to capture an image and to generate a color component of a pixel of the image in a device color space, the color component being in one of a plurality of channels in the device color space; and
an apparatus coupled to the image capture device and including,
a low pass filter configured to generate a low pass value in response to the color component of the pixel;
a high pass filter configured to generate a high pass value in response to the color component of the pixel;
an edge detection circuit configured to generate an edginess parameter for the pixel indicating the likelihood the pixel is part of an edge or corner in a digital image containing the pixel; and
a modifier circuit configured to transform the pixel from the device color space to an output color space including a plurality of channels, the modifier circuit configured to transform the pixel based on a color transformation matrix including a plurality of coefficients, each coefficient that is utilized in generating a transformed value for the pixel in the output color space being varied using the low pass value of the pixel and the low pass values of corresponding pixels in the other channels and also being varied using the high pass value of the pixel and the high pass values of the corresponding pixels in the other channels, and wherein the edginess parameter of the pixel is utilized to vary the high pass value of the pixel and the high pass values of the corresponding pixels but the edginess parameter is not utilized to vary the low pass value of the pixel and the low pass values of the corresponding pixels in the other channels.
6. The system of claim 5 wherein the image capture device and the apparatus are either disposed on respective dies or are disposed on a same die.
7. The system of claim 5 wherein the device color space is device RGB space and wherein the output color space comprises one of the sRGB and Adobe RGB standard color spaces.
8. The system of claim 5 wherein the apparatus includes a computing circuit.
9. The system of claim 5 wherein the image capture device includes a pixel array.
10. A method for color processing of an input image, comprising:
low pass filtering of the input image to obtain a low pass component;
high pass filtering of the input image to obtain a high pass component;
processing the input image for edge detection to obtain edginess parameters; and
performing a color space transformation of the input image from a first color space to a second color space, each color space including a plurality of channels and the color space transformation being performed using a transformation matrix having a plurality of coefficients, the color space transformation including,
generating a transformed value for the pixel in the second color space utilizing selected coefficients in the transformation matrix, the selected coefficients having values that are modified,
by the low pass value of the pixel and the low pass values of corresponding pixels in the other channels,
by the high pass value of the pixel and the high pass values of the corresponding pixels in the other channels as modified by the edginess parameter, and
wherein the edginess parameter of the pixel is not utilized to modify the low pass value of the pixel and the low pass values of the corresponding pixels in the other channels.
11. The method of claim 10, wherein the input image is a red-green-blue (RGB) color space image.
12. The method of claim 11, wherein high pass filtering comprises high pass filtering based on the difference of the input image and the low pass component to obtain the high pass component.
13. The method of claim 12, wherein low pass filtering of the input image to obtain a low pass component comprises low pass filtering using a bandwidth selected based on the resolution of the input image.
14. The method of claim 13, wherein processing the input image for edge detection to obtain edginess parameters comprises processing each pixel of the input image to determine a respective edginess parameter for the pixel.
15. The method of claim 14, wherein processing the input image for edge detection to obtain edginess parameters comprises determining the edginess parameters based on a set of mask patterns used to identify vertical, horizontal and diagonal edges and corners in the input image.
16. The method of claim 15, wherein processing the input image for edge detection to obtain edginess parameters comprises processing the pixels of the input image in the RGB color space image in a spatial RGB domain based on information of one or more RGB color channels.
17. A method for color processing of an input image, comprising:
low pass filtering of the input image to obtain a low pass component;
high pass filtering of the input image to obtain a high pass component;
processing the input image for edge detection to obtain edginess parameters;
performing a color space transformation of the input image based on the low pass component, the high pass component, and the edginess parameters; and
wherein performing a color space transformation of the input image based on the low pass component, the high pass component, and the edginess parameters comprises performing the color space transformation for each pixel of the input image according to the following equation:

\[ C_i' = (C_0^{LP} + \alpha C_0^{HP})\,m_{i0} + (C_1^{LP} + \alpha C_1^{HP})\,m_{i1} + (C_2^{LP} + \alpha C_2^{HP})\,m_{i2} + (1 - \alpha)\,C_i^{HP}, \qquad i = 1, 2, 3, \]
where C_i^{LP} denotes the low pass component of the i-th channel of the low pass component, C_i^{HP} denotes the high pass component of the i-th channel of the high pass component, α denotes the edginess parameter of the pixel, m_ij denotes the i,j coefficient of a color matrix, i and j are indices for components of the color matrix, and C_i' is the new color value for the pixel being processed.
18. The method of claim 17, wherein the input image is an RGB image.
19. The method of claim 17, wherein the high pass filtering comprises generating the high pass component based on the difference of the input image and the low pass component.
20. The method of claim 17, wherein the low pass filtering has a bandwidth selected based on a resolution of the input image.
21. The method of claim 17, wherein processing the input image for edge detection to obtain edginess parameters comprises determining for each pixel of the input image a respective edginess parameter.
22. The method of claim 21, wherein processing the input image for edge detection to obtain edginess parameters further comprises determining the edginess parameters based on a set of mask patterns used to identify vertical, horizontal and diagonal edges and corners in the input image.
23. The method of claim 22, wherein processing the input image for edge detection to obtain edginess parameters further comprises determining the edginess parameters in the spatial RGB domain based on information of one or more RGB color channels.
US14/066,395 2012-10-29 2013-10-29 Color processing of digital images Active US9451222B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
IT000291A ITVI20120291A1 (en) 2012-10-29 2012-10-29 COLOR PROCESSING OF DIGITAL IMAGES
ITVI2012A0291 2012-10-29
ITVI2012A000291 2012-10-29

Publications (2)

Publication Number Publication Date
US20140118585A1 (en) 2014-05-01
US9451222B2 (en) 2016-09-20

Family

ID=47388645

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/066,395 Active US9451222B2 (en) 2012-10-29 2013-10-29 Color processing of digital images

Country Status (2)

Country Link
US (1) US9451222B2 (en)
IT (1) ITVI20120291A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104967761B (en) * 2015-06-26 2018-03-20 深圳市华星光电技术有限公司 Color gamut matching method
CN111226256A (en) * 2017-11-09 2020-06-02 深圳市大疆创新科技有限公司 System and method for image dynamic range adjustment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5231677A (en) * 1984-12-28 1993-07-27 Canon Kabushiki Kaisha Image processing method and apparatus
US6317157B1 (en) * 1997-11-11 2001-11-13 Fujitsu Limited Image conversion apparatus and image conversion method
US6768514B1 (en) * 1998-11-18 2004-07-27 Sony Corporation Image processing apparatus and image processing method
US20020149685A1 (en) * 2001-03-23 2002-10-17 Nec Viewtechnology, Ltd. Method of and apparatus for improving picture quality
US20030086606A1 (en) * 2001-07-18 2003-05-08 Hewlett-Packard Company Electronic image colour plane reconstruction
US20030169941A1 (en) * 2002-03-11 2003-09-11 Sunplus Technology Co., Ltd. Edge enhancement method and apparatus in digital image scalar-up circuit
US20040057630A1 (en) * 2002-09-24 2004-03-25 Thomas Schuhrke Image processing method for automated contrast modification of digital image data
US20040252316A1 (en) * 2003-01-21 2004-12-16 Noriko Miyagi Image processing apparatus and method, and computer program product
US20070292041A1 (en) 2006-06-16 2007-12-20 Kabushiki Kaisha Toshiba Image processing apparatus, image forming apparatus, and image processing method
US20080101716A1 (en) * 2006-10-27 2008-05-01 Quanta Computer Inc. Image sharpening apparatus and method thereof
EP1936952A2 (en) 2006-12-08 2008-06-25 Samsung Electronics Co., Ltd. Image forming apparatus and image quality improving method thereof
US20080143844A1 (en) * 2006-12-15 2008-06-19 Cypress Semiconductor Corporation White balance correction using illuminant estimation
US20140355904A1 (en) * 2012-02-21 2014-12-04 Flir Systems Ab Image processing method for detail enhancement and noise reduction

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Ford et al., "Colour Space Conversions," Aug. 11, 1998, 31 pages.
Igor Kharitonenko, Sue Twelves, and Chaminda Weerasinghe, "Suppression of Noise Amplification During Colour Correction", IEEE, vol. 48, No. 2, May 2002, pp. 229-233.
Search Report for Italian patent application No. VI20120291; Munich, Germany, Jun. 26, 2013; 2 pages.
SukHwan Lim, and Amnon Silverstein, "Spatially Varying Color Correction (SVCC) Matrices for Reduced Noise", HP Technical Reports, HP Laboratories Palo Alto, CA; HPL-2004-99, Jun. 2, 2004; 6 pages.
Y.H. Kim, and J. Lee, "Image Feature and Noise Detection Based on Statistical Hypothesis Tests and Their Applications in Noise Reduction", IEEE Transactions on Consumer Electronics, vol. 51, No. 4, Nov. 2005, pp. 1367-1378.

Also Published As

Publication number Publication date
US20140118585A1 (en) 2014-05-01
ITVI20120291A1 (en) 2014-04-30

Similar Documents

Publication Publication Date Title
US9179113B2 (en) Image processing device, and image processing method, and program
US8254718B2 (en) Multi-channel edge-aware chrominance noise reduction
US8229212B2 (en) Interpolation system and method
US7082218B2 (en) Color correction of images
US8427559B2 (en) Image data processing method by reducing image noise, and camera integrating means for implementing said method
US7486844B2 (en) Color interpolation apparatus and color interpolation method utilizing edge indicators adjusted by stochastic adjustment factors to reconstruct missing colors for image pixels
US8115825B2 (en) Electronic device with two image sensors
US8363123B2 (en) Image pickup apparatus, color noise reduction method, and color noise reduction program
US8804012B2 (en) Image processing apparatus, image processing method, and program for executing sensitivity difference correction processing
US9111365B2 (en) Edge-adaptive interpolation and noise filtering method, computer-readable recording medium, and portable terminal
US20080240602A1 (en) Edge mapping incorporating panchromatic pixels
EP2523160A1 (en) Image processing device, image processing method, and program
US20170163951A1 (en) Imaging apparatus and image processing method of thereof
US8189940B2 (en) Image processing apparatus, imaging apparatus, and image processing method
US8538189B2 (en) Image noise filter and method
US7512264B2 (en) Image processing
US9451222B2 (en) Color processing of digital images
EP1947609A1 (en) Method to map the differences between two images
US11915392B2 (en) Image enhancement method and apparatus
US20130016905A1 (en) Method and apparatus for correcting color distortion
US10863148B2 (en) Tile-selection based deep demosaicing acceleration
JP5494249B2 (en) Image processing apparatus, imaging apparatus, and image processing program
JP6426909B2 (en) Color information complementing device and its program
CN116391202B (en) Image noise reduction method, device and chip
Gheorghe et al. A self-profiling image noise and edge enhancement filter

Legal Events

Date Code Title Description
AS Assignment

Owner name: STMICROELECTRONICS S.R.L., ITALY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NACCARI, FILLIPO;BRUNA, ARCANGELO RANIERI;BIANCO, SIMONE;AND OTHERS;SIGNING DATES FROM 20150305 TO 20150306;REEL/FRAME:035119/0661

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8