WO2008065575A1 - Device and method for processing color image data - Google Patents

Device and method for processing color image data

Info

Publication number
WO2008065575A1
Authority
WO
WIPO (PCT)
Prior art keywords
saturation
color gamut
hue
luminance
white
Prior art date
Application number
PCT/IB2007/054707
Other languages
French (fr)
Inventor
Matheus J. G. Lammers
Petrus M. De Greef
Original Assignee
Nxp B.V.
Priority date
Filing date
Publication date
Family has litigation
First worldwide family litigation filed
Application filed by Nxp B.V. filed Critical Nxp B.V.
Priority to US12/516,785 priority Critical patent/US8441498B2/en
Priority to EP07849188A priority patent/EP2123056A1/en
Publication of WO2008065575A1 publication Critical patent/WO2008065575A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/64 Circuits for processing colour signals
    • H04N 9/67 Circuits for processing colour signals for matrixing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/02 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2320/00 Control of display operating conditions
    • G09G 2320/06 Adjustment of display parameters
    • G09G 2320/0606 Manual adjustment
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 Aspects of the architecture of display systems
    • G09G 2360/16 Calculation or use of calculated indices related to luminance levels in display data

Definitions

  • the present invention relates to a video processing device and a method for processing color image data.
  • More and more mobile electronic devices are designed with a color display device for displaying color images. These color images can for example be generated by a camera or a video processor.
  • the supplied color image data may have to undergo an image processing to improve the appearance of the image on the display device.
  • multi-media data have to be displayed in a mobile phone, in a multi-media portable player, etc.
  • the display technology used in these mobile devices has some limitations, in particular with respect to the picture quality and the color reproduction.
  • the colors of the supplied image data are typically encoded according to already existing standards. These standards have been selected to facilitate a design of displays based on the available materials. Accordingly, as long as a camera or a display match the chosen standard, a reasonable reproduction of the colors or image data may be expected.
  • LCD displays in particular for mobile applications may not be able to meet the requirements of these standards.
  • One of these standards is the sRGB standard, which defines the x-y coordinates of the red, green and blue light source in connection with a reference white point.
  • the primary RGB coordinates of the sRGB standard define a triangle drawn in the Y-xy diagram.
  • the Y component represents the luminance of a pixel (perpendicular to the x-y axis) while the x-y coordinate relates to a unique color with respect to saturation and hue.
  • the color coordinates of each encoded pixel lie inside or on the border of this triangle.
  • their display triangle is smaller than the sRGB reference resulting in several artifacts.
  • artifacts may include a lack of saturation due to a smaller gamut of the display of the mobile device.
  • a further artifact may relate to hue errors as the color display primaries do not match the sRGB standard primary values.
  • the reference white point may be shifted such that black and white parts in a scene may be represented in a color.
  • color gamut mapping is used to process input pixels such that the colors on a display with a smaller gamut are reproduced to match a reference display.
  • WO 2005/109854 discloses a method for processing color image data to be displayed on a target device.
  • Input pixel data within a source color gamut is received.
  • the input pixel data in said source color gamut is mapped to a target color gamut which can be associated to a target device.
  • the mapping is controlled according to a color saturation value of the input pixel data.
  • Fig. 1 shows a block diagram of a basic color gamut mapping.
  • the input image data IN are processed by gamma function 10.
  • the output of the gamma function 10 undergoes a color gamut mapping function 20 by processing the input pixel data with a static matrix (3x3).
  • the output of the color gamut mapping function 20 is processed by a hard clipper function 30.
  • negative values of R, G, B are clipped to zero and, for 36-bit RGB pixel data, RGB values larger than 2096 are set to 2096.
  • the output of the hard clipper function 30 is processed by the de-gamma function 40 and the output OUT of the de-gamma function 40 is supplied to a display in a target device.
  • the gamma function 10 is required as the input image data IN or pixels relate to the video domain.
  • the values of the RGB signal are now proportional to the luminance of the three primary light sources.
  • Gamut mapping is preferably performed in the light domain.
  • the gamma transformation performed in the gamma function 10 corresponds to a non- linear operation and may increase the resolution of the RGB signal in the digital domain.
  • the coefficients of the gamut mapping matrix in the gamut mapping function 20 are chosen such that an input RGB luminance value can be directly mapped into a new RGB luminance value for the display of the mobile application. In other words, the matrix is designed to adapt the ratio between the RGB subpixels.
  • the coefficients of the gamut mapping matrix can be calculated:
  • MTX_CGM = (MTX_DISP)^(-1) · MTX_sRGB
  • the MTX_sRGB and the MTX_DISP matrices are used to translate an RGB value to the XYZ domain. These matrices are determined based on the primary colors and a reference white point.
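  • As an illustration of this calculation, a minimal sketch assuming numpy is given below; the sRGB matrix uses the standard sRGB/D65 values, while the display matrix values are hypothetical measurement results and not taken from the patent:

```python
import numpy as np

# RGB-to-XYZ matrix for the sRGB input standard (standard values derived
# from the sRGB primaries and a D65 white point).
MTX_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                     [0.2126, 0.7152, 0.0722],
                     [0.0193, 0.1192, 0.9505]])

# RGB-to-XYZ matrix of the target display (hypothetical measured values:
# a smaller gamut and a slightly shifted white point).
MTX_DISP = np.array([[0.39, 0.36, 0.19],
                     [0.21, 0.69, 0.10],
                     [0.02, 0.14, 0.90]])

# MTX_CGM = (MTX_DISP)^-1 * MTX_SRGB maps linear-light sRGB values to
# linear-light display drive values.
MTX_CGM = np.linalg.inv(MTX_DISP) @ MTX_SRGB
print(MTX_CGM)
```
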
  • the above described color gamut mapping may lead to subpixels which are out of range (negative or positive out of range).
  • to avoid such values, a clipping operation (hard, soft or smart) is performed.
  • the inverse gamma function is used to transform the pixels back to the video domain as a display typically cannot handle the luminance values directly, e.g. due to standard interface display driver hardware.
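  • A rough sketch of this light-domain pipeline of Fig. 1 (gamma, static 3x3 matrix, hard clipping, de-gamma) is given below; the 8-bit range, the gamma value of 2.2 and the normalized clipping limits are assumptions for illustration and do not reproduce the bit widths of an actual implementation:

```python
import numpy as np

GAMMA = 2.2

def basic_gamut_mapping(rgb_in, mtx_cgm):
    """Map one 8-bit video pixel through the light-domain pipeline of Fig. 1."""
    # Gamma function: video domain -> linear light domain.
    light = (np.asarray(rgb_in, dtype=float) / 255.0) ** GAMMA
    # Static 3x3 color gamut mapping matrix.
    mapped = mtx_cgm @ light
    # Hard clipper: negative or out-of-range light cannot be displayed.
    clipped = np.clip(mapped, 0.0, 1.0)
    # De-gamma function: back to the video domain for the display driver.
    return np.round(255.0 * clipped ** (1.0 / GAMMA)).astype(int)

# Example: the identity matrix leaves the pixel unchanged.
print(basic_gamut_mapping([200, 40, 180], np.eye(3)))
```
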
  • Fig. 2 shows two graphs for illustrating problems arising from a color gamut mapping.
  • a color triangle of the input image data and of the output image data are displayed.
  • color triangle watched from the side is depicted.
  • the input stimuli Is represent scanned lines, wherein for each scanned line the saturation is increased in steps of 10%.
  • the twelve lines depicted in the lower diagram comprise a constant hue.
  • the result of color gamut mapping OM is depicted in the same diagram.
  • the x-y diagram is taken at a constant perceptive luminance level of 30%, and output values above or below this luminance are color coded (red/blue) until a threshold of 5% is exceeded. Those parts of the lines where the threshold is exceeded are labelled UT. Accordingly, the color gamut mapping works well for pixels that coincide with both triangles.
  • the luminance on the vertical axis is shown while the blue-yellow color space surface is viewed from the side.
  • the flat lines represent the input stimuli Is.
  • the bent lines show the luminance of the display if no gamut mapping is performed at all.
  • the lines MO show the result after color gamut mapping. Accordingly, the gamut mapping works well in the luminance direction for pixels that coincide with both triangles.
  • the following gamut mapping problems may arise. Those pixels which fall outside the display triangle may generate negative values after mapping, i.e. negative light is required to reproduce this particular color on the display. As this is physically not possible, these negative values must be clipped off to a value the display can represent.
  • the upper diagram shows the problem area at the corner of the display triangle. Moreover, pixels with high amplitude may lie outside the display range and therefore have to be limited to a value which can be physically represented.
  • the lower Y-x-y diagram depicts a problem area at the right top where the lines UMO clip against the ceiling. In general, any sudden discontinuity in the first derivative of the luminance will lead to visible artifacts in the resulting image. This artifact can be seen at the right side of the Y-x-y diagram where the luminance suddenly bends upwards. Therefore, a trade-off is required between the color gamut mapping and the (soft) clipping.
  • a video processing device which comprises a luminance and saturation detector for detecting the luminance values and the saturation values of pixels of an input video signal and a white-point, saturation and hue modulator for transforming luminance and saturation properties of the pixels of the input video signal into white-point, saturation and hue correction factors.
  • the video device also comprises a color gamut matrix generating unit for generating a color gamut matrix in the perception domain based on the white-point, saturation and hue correction factors of the white-point, saturation and hue modulator, a color gamut mapping unit for multiplying the pixels of the input video signal with a color gamut matrix generated by the color gamut matrix generating unit, and a clipping unit for clipping the results of the color gamut mapping unit which are out of a predefined range.
  • the luminance and saturation detector comprises a RGB squaring unit for squaring amplitudes of sub-pixels of the input video signal, a RGB shuffler unit for ranking the squared sub-pixels based on their amplitude value; a luminance and saturation calculation unit for calculating a value of the luminance, an internal saturation and a saturation correction factor; and a saturation correction unit for outputting a corrected saturation value.
  • the white-point, saturation and hue modulator comprises a white point modulator for determining a white-point correction factor based on the luminance value from the luminance and saturation calculation unit, a saturation modulator for determining a saturation correction factor based on the saturation value from the luminance and saturation calculation unit, and a hue modulator for determining a hue correction factor based on the saturation value from the luminance and saturation calculation unit.
  • the color gamut matrix generating unit is adapted to generate the color gamut matrix based upon measured characteristics of a display module such that the color gamut matrix can be adapted to the actual display module.
  • the invention also relates to a method of processing color image data.
  • the luminance values and the saturation values of pixels of an input video signal are detected by a luminance and saturation detector.
  • the luminance and saturation properties of the pixels of the input video signal are transformed into white-point, saturation and hue correction factors by a white-point, saturation and hue modulator.
  • a color gamut matrix is generated in the perception domain based on the white-point, saturation and hue correction factors of the white-point, saturation and hue modulator by a color gamut matrix generating unit.
  • the pixels of the input video signal are multiplied with a color gamut matrix generated by the color gamut matrix generating unit by a color gamut mapping unit.
  • the results of the color gamut mapping unit which are out of a predefined range are clipped by a clipping unit.
  • the present invention relates to the realization that the color gamut mapping algorithms take measures to avoid clipping and preserve contrast at several points, wherein a duplication of functionality might be expected.
  • a saturation dependent attenuation is present in front of the video matrix while the matrix itself may also be able to perform a saturation dependent attenuation.
  • the soft clipper adjusts values below and above the operating range of the display. However, if it is possible to detect these situations, the matrix coefficients can be modified to avoid severe negative and positive values and the soft clipper can be replaced by a much cheaper hard clipper (in terms of hardware resources).
  • the color gamut mapping is executed in the linear light domain. This is however disadvantageous as a matched gamma and de-gamma functional block is required.
  • the previously required gamma and de-gamma function can be omitted, i.e. the color gamut mapping is performed directly in the video domain, i.e. the video perceptive domain.
  • the required 8 bit video values can be used to calculate the color gamut mapped output pixels directly.
  • the coefficients of the color gamut matrix (calculated according to the EBU standard) as well as the display data in the linear light domain are corrected to handle the missing gamma block and pixel colors around the white point are mapped as if the gamma function is present.
  • the soft clipper used according to the prior art can be replaced by a hard clipper as the coefficients of the color gamut matrix are modified to avoid severe clipping.
  • this modification is achieved by the adaptive path in the video processing device according to the invention.
  • the properties to reduce a white-point, a hue and a saturation correction are determined based on a simple luminance and saturation value and the optimal coefficients are derived for each pixel.
  • the adaptive branch is merely used to prevent a loss of detail (e.g. by clipping) and to preserve the contrast of an image. A loss of detail may occur for artificial images as the mapping algorithm may not be aware of spatial and temporal relations between pixels.
  • the reduction of parameters for the white-point, saturation and hue may be determined directly, as a direct relation is present between the color gamut matrix coefficients and clipping artifacts.
  • the number of bits in a ROM for storing static mapping data is reduced. E.g. for a 24-bit pixel application, a display can be characterized by 9 × 5 bits + 3 × 4 bits (57 bits) of ROM storage capacity.
  • Fig. 1 shows a block diagram of a basic color gamut mapping
  • Fig. 2 shows two graphs for illustrating problems arising from a color gamut mapping
  • Fig. 3 shows a block diagram of a gamut mapping system in the light domain according to the prior art
  • Fig. 4 shows an illustration of a reduction of the hue correction
  • Fig. 5 shows a block diagram of a color gamut mapping system in the perceptive domain according to a first embodiment
  • Fig. 6 shows an illustration of a video matrix relationship
  • Fig. 7 shows an illustration of models for the luminance, the saturation and the hue
  • Fig. 8 shows an illustration of models of a hue corrected saturation
  • Fig. 9 shows a block diagram of a LSHD detector according to a second embodiment
  • Fig. 10 shows an illustration of a transfer curve of a modulator
  • Fig. 11 shows a block diagram of a WSH modulator according to a third embodiment
  • Fig. 12 shows a block diagram of a matrix generator according to a fourth embodiment
  • Fig. 13 shows a block diagram of a color gamut mapping unit according to a fifth embodiment
  • Fig. 14 shows a representation of the calculation of XYZ matrices
  • Fig. 15 shows a representation of the calculation of static matrix coefficients
  • Fig. 16 shows an illustration of a trade-off between color gamut mapping and soft clipping
  • Fig. 17 shows a representation of an example of an adaptive matrix coefficient
  • Fig. 3 shows a block diagram of a gamut mapping system in the light domain.
  • the input video signal IN is processed by the video path, i.e. by the gamma function 10, by the adaptive color gamut mapping function 20, by a soft clipper function 30 and by the de-gamma function 40 (as described according to Fig. 1).
  • the adaptive processing for the adaptive color gamut function is performed by a luminance unit LU, a saturation unit SU, a RGB unit RGB, a white point unit WPU, a triangle unit TU and by a color matrix unit CMU.
  • the luminance unit LU measures the luminance of the input signal IN and the saturation unit SU measures the saturation of the input signal IN.
  • Optimal coefficients are determined based on the measured luminance lum and saturation sat.
  • the mapping coefficients for the color gamut mapping are calculated from predefined coefficients, i.e. parameters inserted at the color gamut matrix generator by means of the shift and rotate values for every pixel.
  • the video matrix coefficients are modified under circumstances where clipping has to be avoided at the cost of less color correction. For those pixels, it is more important to preserve the details and contrast in the resulting image. A trade-off between color mapping and clipping prevention has to be made carefully since no spatial or temporal pixel information is available to the color gamut mapping algorithm.
  • the decision whether a pixel needs to be color mapped or requires a set of modified coefficients for the video matrix may be based on the case when a pixel has a high luminance (i.e. the white point correction is skipped) or on the case when a pixel has a high saturation (i.e. the hue is not corrected).
  • the white point correction is skipped if a pixel has a high luminance. Accordingly, the diagonal coefficients s1 to s3 in the video matrix 20 are modified such that the sum of the rows is equal to one.
  • a white point correction implies that at least one of the sub-pixels has a gain above one and that clipping will occur if the sub-pixel's amplitude is also high. Any unsaturated pixels with high amplitude will therefore get the color of the backlight as the white point is not corrected anymore. If the white point of the backlight is chosen such that its color is shifted towards blue, the perceived pixel luminance will appear to be brighter, which can be advantageous. Therefore, pixels with low luminance amplitude will have the correct white point according to the sRGB standard while pixels with a high luminance will get the bluish white point of the backlight and thereby appear to be brighter.
  • a saturation dependent attenuation is required before entering the video matrix to reduce the amplitude of highly saturated colors. Now, these colors will produce less negative values after color gamut mapping and thereby are easier to handle in the soft clipper.
  • a soft clipper is still required to make sure that the values presented at the de-gamma block are in range. Negative values are removed by adding white to all three sub-pixels. This will reduce the saturation without disturbing the pixel's hue. Next, sub-pixel amplitudes above one are detected and, in that case, the amplitudes are reduced such that they are just in range.
  • the front of screen performance of the display is increased such that perceived colors are matched more closely to the input standard.
  • measures must be taken to avoid artefacts such as loss of detail and contrast.
  • Fig. 5 shows a block diagram of a color gamut mapping system in the perceptive domain according to a first embodiment.
  • the color gamut mapping system comprises a luminance and saturation detector LSHD, a white point, saturation and hue modulator WSH, a color gamut matrix generator CGMG, an adaptive color gamut mapping unit 20 and a hard clipper unit 31.
  • the luminance and saturation detector LSHD receives the input signal IN and derives the luminance lum and the saturation sat of a pixel by means of a simplified model.
  • the hue of the pixel is also used in order to compensate the measured saturation for a case that the respective color vector points to a secondary color.
  • the white point, saturation and hue modulator WSH implements three functions transforming the luminance lum and saturation sat of a pixel to those values which are required for white point W, saturation S and hue H correction.
  • the output of the modulator may constitute normalized values indicating to what extent the originally measured data may become part of the coefficients of the video matrix.
  • the modulator WSH also receives three input values, namely WLm, SSm, HSm. These three values indicate characteristics of the three transfer functions.
  • the color gamut matrix generator CGMG receives the three output signals W, S and H and generates the optimal video matrix, i.e. the matrix coefficients s1, s2, s3, r1, r2, b1, b2, g1, g2, for a pixel which is to be mapped.
  • Static coefficients of the video matrix are encoded in predefined white-point W_RGB, saturation S_RGB and hue H_RGB parameters. These parameters determine the color gamut mapping of pixels from the input IN to the output OUT. If the coefficients of the static matrix are modified in order to reduce the required amount of white point, saturation and hue correction, the quality of the color gamut mapping will deteriorate. However, some details and contrast will be preserved as a clipping is prevented.
  • the parameters W_RGB, S_RGB and H_RGB as well as WLm, SSm and HSm are used to characterize a display which is to be driven by the video processing system.
  • the color mapping system according to Fig. 5 is able to operate directly in the video (perceptive) domain.
  • the color gamut mapping unit 20 and the hard clipper unit 31 constitute the video processing path while the luminance and saturation detector LSHD, the white point, saturation and hue modulator WSH and the color gamut matrix generator CGMG relate to the adaptive processing path, which serves to prevent a loss of detail, e.g. by clipping, and to preserve the contrast of the image.
  • the input pixels from an input reference system sRGB are mapped to a corresponding display reference system. This is performed by multiplying the input pixel vector with a three-by-three matrix in the color gamut mapping unit 20. If the result of this processing is out of range, these values can be hard-clipped by the hard clipper unit 31.
  • the units for the adaptive processing are used to ensure that the pixels will be mapped without a loss of detail or contrast.
  • the units of the adaptive processing may also be used to implement a soft clipping function.
  • the color gamut mapping is performed by the units for the video processing while the units for the adaptive processing modify the coefficients of the video matrix in order to perform a respective mapping and in order to avoid a severe clipping.
  • Fig. 6 shows an illustration of a video matrix relationship. In particular, the relationships between the correction parameters and the matrix coefficients are depicted.
  • the matrix coefficients as modified constitute a virtual triangle in the Y-x-y domain.
  • the arrows in the Fig. 6 depict the effect of the transformation on the virtual triangle V if the matrix coefficients are changed by the corresponding correction.
  • the effects of the color gamut matrix generator on the white point parameters are depicted.
  • the white point reference is shifted and the perceived overall hue is changed.
  • the effects of the color gamut matrix generator on the saturation parameters are depicted.
  • the virtual triangle V is decreased, the perceived saturation is increased and an independent saturation control for the RGB colors is achieved.
  • the effect of the color gamut matrix generator on the hue parameters is depicted.
  • the virtual triangle is rotated and the perceived hue has changed.
  • an independent hue control for RGB can be achieved.
  • a relation between the white point, the saturation and the hue correction and the coefficients of the video matrix originates from the fact that the mapping from an input signal in RGB to an output signal in RGB is performed by means of matrix coefficients. However, if a different kind of mapping is performed, the relationship will also be different.
  • the relationship between the white point correction and the matrix coefficients are as follows:
  • the RGB ratio will be determined by the sum of the rows.
  • the subpixels in the input vector V_in will have the same amplitude V.
  • the particular amount of correction which is required if the white point of a display does not correspond to the input standard white point will correspond to the sum of rows.
  • the white point correction can be avoided.
  • the coefficients S1, S2 and S3 can be varied. However, modifying the other coefficients is not possible as these coefficients relate to saturation and hue correction.
  • the required amount of saturation mapping for the particular display will correspond to an average of the red, green and blue coefficients: (r1+r2)/2, (g1+g2)/2, (b1+b2)/2.
  • the primary display color coordinate can be found on a line between the reference white point and the primary color according to the input standard.
  • a virtual triangle can be obtained by using the averages of these coefficients in the Y-x-y domain. With the virtual triangle, negative averages will result in a smaller gamut triangle size while positive averages will lead to a wider gamut triangle size. If the averages are zero, the triangle size will correspond to the input standard triangle size.
  • the red, green and blue averages will have to be modified to equal zero.
  • the average values can be reduced to zero in order to reduce the amount of saturation correction.
  • the hue correction corresponds to differences between the coefficients of a color: (r1-r2), (g1-g2), (b1-b2).
  • the hue correction can be avoided by averaging these coefficients.
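  • The following minimal sketch illustrates these relationships for a hypothetical video matrix, under the assumption that the two off-diagonal entries of each row are the color coefficients (r1, r2), (g1, g2), (b1, b2); the numeric values are made up for illustration:

```python
# Hypothetical video matrix: diagonal coefficients s1, s2, s3 and
# off-diagonal color coefficients (r1, r2), (g1, g2), (b1, b2).
s1, r1, r2 = 1.10, -0.06, -0.02   # red row
g1, s2, g2 = -0.04, 1.08, -0.01   # green row
b1, b2, s3 = 0.01, -0.05, 1.02    # blue row

rows = {"red": (s1, r1, r2), "green": (g1, s2, g2), "blue": (b1, b2, s3)}
color_pairs = {"red": (r1, r2), "green": (g1, g2), "blue": (b1, b2)}

for name in ("red", "green", "blue"):
    # White point correction: the sum of a row (1.0 means no correction,
    # because a grey input vector then keeps its RGB ratio).
    white = sum(rows[name])
    c1, c2 = color_pairs[name]
    # Saturation correction: the average of the two color coefficients
    # (0.0 means the virtual triangle matches the input standard triangle).
    sat = (c1 + c2) / 2
    # Hue correction: the difference between the two color coefficients
    # (0.0 means the virtual triangle is not rotated).
    hue = c1 - c2
    print(f"{name:5s}  white={white:+.3f}  sat={sat:+.3f}  hue={hue:+.3f}")
```
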
  • Fig. 7 shows an illustration of models for the luminance, the saturation and the hue. These models are used by the luminance and saturation detector LSHD to determine the luminance and saturation properties of a pixel.
  • the amplitudes of sub-pixels are squared such that these values correspond to the values in the linear light domain.
  • the square is an approximation of a gamma factor of 2.2 if the adaptive branch is used.
  • the sub-pixels are shuffled.
  • the sub-pixels are ranked in order of their magnitude.
  • the value with the highest magnitude is referred to as "max", the next value corresponds to "med" (medium) and the last value corresponds to "min". Based on these values, the properties of the luminance, the saturation and the hue are calculated or derived.
  • the value of the saturation corresponds to the modulation depth between sub- pixels.
  • the value of the hue relates to the direction of the vector with respect to a primary or secondary color PC, SC.
  • the bottom part of Fig. 7 depicts those values of the hue if the primary PC and secondary colors SC are also taken into account.
  • When determining the hue HU, normalized values are produced. Accordingly, the values for max, med and min with respect to the luminance and the saturation will stay within a range of 0 to 1 while the values of the hue will be within a range of -1 to +1.
  • the saturation and hue values are independent of the average luminance amplitude of a pixel. This is of significance as independent luminance and saturation values are required to control the color gamut mapping process.
  • Fig. 8 shows an illustration of models of a hue-corrected saturation.
  • the saturation Si and the hue are calculated as shown in Fig. 7, based on a saturation model SM1 with no hue correction and a hue model HM.
  • the measured saturation Si of those colors which have a vector pointing into the direction of secondary colors is reduced by "m", which corresponds to the so-called modulation index.
  • the modulation index is selected such that the detected saturation Sd is approximately proportional to the distance between the pixel color coordinates and the white point.
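  • A sketch of this luminance/saturation/hue detection is given below. The squaring, the max/med/min ranking, the value ranges and the reduction of the saturation towards secondary colors follow the description above; the concrete formulas for lum, sat and hue and the modulation index m = 0.3 are assumptions for illustration only:

```python
def lsh_detect(r, g, b, m=0.3):
    """Sketch of luminance/saturation/hue detection on one 8-bit pixel.

    The exact lum/sat/hue formulas and the modulation index m are assumed;
    only the overall structure follows the description.
    """
    # Square the sub-pixel amplitudes as an approximation of gamma 2.2,
    # so the values roughly correspond to the linear light domain.
    squared = sorted(((v / 255.0) ** 2 for v in (r, g, b)), reverse=True)
    mx, med, mn = squared

    lum = mx                                   # luminance measure (0..1)
    sat = 0.0 if mx == 0 else (mx - mn) / mx   # modulation depth (0..1)
    # Hue measure: -1 when the vector points to a primary color
    # (med == min), +1 when it points to a secondary color (med == max).
    hue = 0.0 if mx == mn else 2.0 * (med - mn) / (mx - mn) - 1.0

    # Hue-corrected saturation: reduce the measured saturation for colors
    # pointing towards secondary colors by the modulation index m.
    towards_secondary = max(hue, 0.0)
    sat_corrected = sat * (1.0 - m * towards_secondary)
    return lum, sat_corrected, hue

print(lsh_detect(255, 64, 32))    # reddish (primary-like) pixel
print(lsh_detect(255, 240, 40))   # yellowish (secondary-like) pixel
```
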
  • Fig. 9 shows a block diagram of a LSHD detector according to a second embodiment.
  • This LSHD detector according to the second embodiment may be used in the color gamut mapping system according to Fig. 5.
  • the LSHD detector comprises a RGB square unit RGB SQ, a RGB shuffler unit RGB SH, a LSH unit and a saturation correction unit SC.
  • the RGB square unit RGB SQ squares the amplitudes of the sub-pixels Ri, Bi, Gi.
  • the square is calculated as an eight-bit number multiplied with a six-bit number. The result thereof is depicted as a seven-bit value.
  • the RGB shuffler unit RGB SH is coupled at its input to the output Rs, Gs, Bs of the RGB square unit.
  • the RGB shuffler unit RGB SH serves to rank the sub-pixels in the order of the value of their amplitude.
  • the RGB shuffler unit outputs a maximum MAX, a medium MED and a minimum value MIN. Based on these values, the luminance L_D, the internal saturation Si and the correction factor for the saturation Cs are calculated by the LSH unit LSH and outputted.
  • the saturation correction unit SC receives the internal saturation Si and the correction factor Cs and outputs the corrected saturation Sd.
  • the output values of the LSH unit are related to the maximum value in order to optimize the calculations in the luminance-saturation domain.
  • Fig. 10 shows an illustration of the transfer curve of a modulator.
  • the transfer curve is modified by the slope.
  • the input parameters and the output correction factors relate to normalized values.
  • the depicted horizontal line HL represents the limitation of the correction values to 1. If the input values are larger, the slope transfer will determine the output.
  • a correction value of 1 will indicate that the corresponding property is completely part of the resulting video matrix.
  • a value of zero will indicate that the corresponding correction property is skipped in the video matrix.
  • Fig. 11 shows a block diagram of a WSH modulator according to a third embodiment.
  • the WSH modulator according to the third embodiment can be used in the color gamut mapping system according to Fig. 5.
  • the modulator according to Fig. 11 comprises a white point modulator WPM, a saturation modulator SM and a hue modulator HM.
  • the luminance and saturation properties Ld, Sd from the luminance and saturation detector LSHD are transformed into correction factors Hc, Wc, Sc for determining the white point, the saturation and the hue correction.
  • the ideal curves for these modulators correspond to a linear function as depicted in Fig. 11.
  • the parameters WLm, SSm and HSm correspond to these linear functions and are inputted to the WSH modulator.
  • the operation of the white point modulator WPM can be described by the following equation:
  • the operation of the hue modulator HM can be described as follows:
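  • Since the modulator equations themselves are not reproduced in this text, the sketch below only illustrates the general behaviour described above: a clamped linear transfer whose output is limited to 1, controlled by the slope parameters WLm, SSm and HSm. Both the transfer form and the example slope values are assumptions:

```python
def modulator(value, slope):
    """Clamped linear transfer used as a sketch of the WSH modulators.

    Returns a correction factor between 0 and 1: 1 means the correction is
    fully kept in the video matrix, 0 means it is skipped.  The exact curve
    of Fig. 10/11 is not reproduced; a clamped ramp is assumed.
    """
    return max(0.0, min(1.0, slope * (1.0 - value)))

def wsh_modulate(lum, sat, wl_m=6.8, ss_m=2.7, hs_m=1.95):
    # White point correction is reduced for pixels with high luminance.
    wc = modulator(lum, wl_m)
    # Saturation correction is reduced for pixels with high saturation.
    sc = modulator(sat, ss_m)
    # Hue correction is reduced first (smaller slope) for saturated pixels.
    hc = modulator(sat, hs_m)
    return wc, sc, hc

print(wsh_modulate(lum=0.2, sat=0.1))   # low lum/sat: full correction
print(wsh_modulate(lum=0.9, sat=0.7))   # bright or saturated: corrections reduced
```

  The example slope values were chosen only so that the corrections start to roll off near the illustrative thresholds of Fig. 17; an actual display characterization would supply its own WLm, SSm and HSm.
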
  • Fig. 12 shows a block diagram of a matrix generator according to a fourth embodiment.
  • the matrix generator CGMG according to the fourth embodiment can be used in the mapping system according to Fig. 5.
  • the color gamut matrix generator CGMG is used to generate nine coefficients which are used for the color gamut mapping on the basis of the allowed amount of white point, saturation and hue correction Wc, Sc, Hc from a previous block.
  • the nine parameters are calculated based on the measured data of a display module.
  • the nine parameters are defined to make a calculation of the coefficients easy when the required correction is taken into account.
  • the relation between the coefficients and the nine parameters is as follows:
  • the video matrix MTX_CGM is calculated in the linear light domain from the measurement data. Thereafter, these coefficients are compensated as no gamma and de-gamma operation is performed. Accordingly, based on these values, the nine parameters are calculated as follows:
  • K10 = Sr · Sc + Hr · Hc
  • K21 = Sg · Sc + Hg · Hc
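  • As only two of the nine equations are legible in this text, the sketch below shows one possible reading of the matrix generation: the off-diagonal coefficients of each row are built as S·Sc ± H·Hc and the diagonal coefficient keeps the row sum at 1 + W·Wc. The nine display parameters used here are hypothetical values, and the combination rules are an assumed reading of Figs. 6 and 12, not the literal patent equations:

```python
def generate_matrix(wc, sc, hc,
                    w=(0.05, -0.02, 0.10),    # white point parameters W_r, W_g, W_b
                    s=(-0.06, -0.05, -0.04),  # saturation parameters  S_r, S_g, S_b
                    h=(0.02, -0.01, 0.01)):   # hue parameters         H_r, H_g, H_b
    """Sketch of a color gamut matrix generator (assumed combination rules)."""
    matrix = []
    for i, (wr, sr, hr) in enumerate(zip(w, s, h)):
        c1 = sr * sc + hr * hc               # first color coefficient of the row
        c2 = sr * sc - hr * hc               # second color coefficient of the row
        diag = 1.0 + wr * wc - (c1 + c2)     # keep the row sum at 1 + W*Wc
        row = [c1, c2]
        row.insert(i, diag)                  # place the diagonal coefficient
        matrix.append(row)
    return matrix

# Full correction (Wc = Sc = Hc = 1) versus all corrections skipped.
for row in generate_matrix(1.0, 1.0, 1.0):
    print(["%+.3f" % v for v in row])
for row in generate_matrix(0.0, 0.0, 0.0):
    print(["%+.3f" % v for v in row])
```

  With Wc = Sc = Hc = 0 this generator degenerates to the unity matrix, which is consistent with the observation that the matrix coefficients stay close to a unity matrix.
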
  • Fig. 13 shows a block diagram of a video matrix unit according to a fifth embodiment.
  • the color gamut mapping unit 20 i.e. a video matrix unit VM according to the fifth embodiment can be used in the mapping system according to Fig. 5.
  • the video matrix unit VM 20 serves to perform the color gamut mapping for the video processing.
  • the input pixel vector (Ri, Gi, Bi) is multiplied by the matrix coefficients K00 ... K22 in order to obtain a color gamut mapped output pixel vector (Ro, Go, Bo).
  • the amplitudes of these coefficients are close to a unity matrix, i.e. the diagonal coefficients are around 1 and the other coefficients are around 0. Therefore, the output pixel vector can be described as follows:
  • the color gamut mapping can be performed as follows:
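  • In the absence of the original equation, a minimal sketch of this per-pixel operation is given below, assuming 8-bit sub-pixels and an illustrative coefficient matrix close to unity:

```python
def apply_video_matrix(pixel, k):
    """Sketch of the color gamut mapping unit (video matrix) with hard clipping.

    `pixel` is an 8-bit (R, G, B) tuple and `k` a 3x3 coefficient matrix whose
    values are close to the unity matrix, so the mapping can be applied
    directly in the video (perceptive) domain.
    """
    out = []
    for row in k:
        value = sum(c * p for c, p in zip(row, pixel))
        # Hard clipper: limit the result to the valid 8-bit range.
        out.append(min(255, max(0, round(value))))
    return tuple(out)

k_cgm = [[1.05, -0.03, -0.02],    # hypothetical near-unity coefficients
         [-0.02, 1.04, -0.02],
         [0.00, -0.04, 1.04]]
print(apply_video_matrix((200, 120, 40), k_cgm))
```
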
  • Fig. 14 shows a representation of a calculation of XYZ matrices.
  • the OTP parameters are calculated based on the input standard and the measured data of the display.
  • the parameters W_RGB, S_RGB and H_RGB define the color gamut mapping while the parameters WLm, SSm and HSm define a trade-off between the color mapping and soft clipping.
  • the values of k1 and k2 are used to determine the RGB ratio for the white point:
  • the W_RGB, S_RGB and H_RGB parameters are calculated. This is performed by calculating the MTX_sRGB and MTX_DISP matrices based on the desired color standard for input images and the measurements of the display primaries and the display white point.
  • the XYZ matrix is calculated as follows:
  • the color gamut matrix MTX_CGM in the linear light domain can be calculated as follows:
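  • The concrete matrices are not reproduced here; the sketch below shows how such RGB-to-XYZ matrices and the resulting MTX_CGM may be constructed from chromaticity coordinates. The per-primary scaling step plays the role of the k values that set the RGB ratio for the white point; the sRGB primaries are standard values, the display primaries are hypothetical measurements:

```python
import numpy as np

def rgb_to_xyz_matrix(primaries, white):
    """Build an RGB-to-XYZ matrix from chromaticity coordinates.

    `primaries` is ((xr, yr), (xg, yg), (xb, yb)) and `white` is (xw, yw).
    This is a standard colorimetric construction, used here only as a sketch
    of how MTX_SRGB and MTX_DISP may be obtained from measurement data.
    """
    # Unscaled XYZ columns of the three primaries (Y normalised to 1).
    cols = [[x / y, 1.0, (1.0 - x - y) / y] for x, y in primaries]
    m = np.array(cols).T
    # White point XYZ for Y = 1.
    xw, yw = white
    w = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    # Per-primary scale factors so that R = G = B = 1 maps to the white point.
    k = np.linalg.solve(m, w)
    return m * k

srgb = rgb_to_xyz_matrix(((0.640, 0.330), (0.300, 0.600), (0.150, 0.060)),
                         (0.3127, 0.3290))            # sRGB primaries, D65
disp = rgb_to_xyz_matrix(((0.600, 0.340), (0.320, 0.560), (0.160, 0.090)),
                         (0.300, 0.320))              # hypothetical display
mtx_cgm_linear = np.linalg.inv(disp) @ srgb           # light-domain mapping matrix
print(mtx_cgm_linear)
```
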
  • Fig. 15 shows a representation of the calculation of static matrix coefficients.
  • the transformation of the coefficients of the MTX_CGM matrix into the perceptive domain is shown.
  • Nine coefficients from the linear light domain are inputted and converted to nine coefficients in the video domain.
  • the linear coefficients are indicated as K^L_row,column and the perceptive coefficients are indicated as K^P_row,column.
  • a gamma value of approximately 2.2 is taken into account.
  • the equations in the perceptive domain are chosen such that the correction behaviour is preserved.
  • the white point correction will be the same for pixel vectors being processed by both matrices. Furthermore, the first derivative of the saturation correction around the white point will also be the same for both matrices. Accordingly, the proportional increase of saturation from grey up to 30 to 40% is a good approximation of the correct values. Moreover, the first derivative of the hue correction around the white point is the same for both of the matrices. Accordingly, the angle of a pixel vector starting from the white point constitutes a good approximation of the correct angles.
  • the perceptive coefficients are calculated as follows:
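  • The closed-form conversion equations are not reproduced in this text. As a hedged illustration of the underlying criterion only (match the light-domain pipeline for pixel colors around the white point), the sketch below fits a perceptive-domain matrix numerically; this is not the patent's formula, merely a way to visualise what the conversion has to achieve:

```python
import numpy as np

GAMMA = 2.2

def perceptive_coefficients(k_linear, n_samples=500, spread=0.15, seed=0):
    """Numerically approximate perceptive-domain matrix coefficients.

    Fits a matrix K_P such that K_P @ v matches the gamma / K_L / de-gamma
    pipeline for video vectors v close to the grey axis, so that the white
    point correction and the behaviour around the white point are preserved.
    """
    rng = np.random.default_rng(seed)
    grey = rng.uniform(0.2, 0.9, size=(n_samples, 1))
    v = np.clip(grey + rng.uniform(-spread, spread, size=(n_samples, 3)), 0.01, 1.0)
    # Reference output of the light-domain pipeline (clipping ignored here).
    target = (v ** GAMMA) @ k_linear.T
    target = np.clip(target, 1e-6, None) ** (1.0 / GAMMA)
    # Least-squares fit of a 3x3 matrix acting directly in the video domain.
    k_p, *_ = np.linalg.lstsq(v, target, rcond=None)
    return k_p.T

k_l = np.array([[1.08, -0.05, -0.03],     # hypothetical light-domain coefficients
                [-0.03, 1.06, -0.03],
                [0.01, -0.06, 1.05]])
print(perceptive_coefficients(k_l))
```
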
  • Fig. 16 shows an illustration of a trade-off between color gamut mapping and soft clipping.
  • the luminance and saturation properties of each input pixel are determined. These properties correspond to a two-dimensional vector pointing to an area in Fig. 16. The correct action to be taken will depend on the area which is addressed by the pixel properties. In Fig. 16, four areas are defined. If the luminance and saturation values of a pixel are below certain thresholds, the color gamut mapping is performed based on the originally determined matrix coefficients. These thresholds correspond to the values of the parameters WLm and HSm specifying the slopes of the correction transformation. However, if the saturation is high (and hence the luminance low), the matrix coefficients are adjusted, firstly by removing the hue correction, followed by a removal of the saturation correction.
  • any gradual decrease of the coefficients related to the saturation correction is compensated by increasing the diagonal coefficients by the same amount. Although this is a rather rough first order approximation, it is sufficient for the present algorithm. Pixels having a high saturation and a low luminance may result in negative color gamut mapping values. The coefficients which are responsible for this are modified towards zero and the saturation loss is compensated by adding the same amount to the diagonal values. A pixel vector with a high saturation value is used as a mask for the matrix such that a column is selected where the diagonal coefficients are used as a gain factor for this color. If a pixel comprises a high luminance, the white point correction can be removed to avoid clipping against the ceiling. It should be noted that a pixel with a high luminance cannot have a high saturation at the same time.
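  • A sketch of this adaptive modification is given below. The thresholds are the illustrative values from Fig. 17, and the exact amounts added back to the diagonal are only a rough first order approximation as described above; the code is an assumed reading of the procedure, not the literal implementation:

```python
def adapt_matrix(k, lum, sat, hue_th=0.486, sat_th=0.624, lum_th=0.853):
    """Per-pixel adaptation of the video matrix coefficients (sketch).

    `k` is a 3x3 list of rows with the diagonal coefficient on the diagonal
    and the two color coefficients off the diagonal.  Thresholds are the
    illustrative values of Fig. 17.
    """
    k = [list(row) for row in k]
    for i, row in enumerate(k):
        off = [j for j in range(3) if j != i]
        if sat > hue_th:
            # Remove the hue correction: average the two color coefficients.
            avg = (row[off[0]] + row[off[1]]) / 2.0
            row[off[0]] = row[off[1]] = avg
        if sat > sat_th:
            # Remove the saturation correction: move the color coefficients to
            # zero and add their average to the diagonal (rough compensation
            # to approximately maintain the luminance).
            avg = (row[off[0]] + row[off[1]]) / 2.0
            row[off[0]] = row[off[1]] = 0.0
            row[i] += avg
        if lum > lum_th:
            # Remove the white point correction: make the row sum equal to one
            # by adjusting the diagonal coefficient.
            row[i] += 1.0 - sum(row)
    return k

k_cgm = [[1.12, -0.05, -0.01],    # hypothetical static coefficients
         [-0.03, 1.09, -0.02],
         [0.02, -0.06, 1.05]]
print(adapt_matrix(k_cgm, lum=0.3, sat=0.7))   # saturated pixel: hue + sat removed
print(adapt_matrix(k_cgm, lum=0.9, sat=0.1))   # bright pixel: white point removed
```
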
  • the calculation of the parameters WLm, SSm and HSm is based on those pixel vectors which generate values below 0 or above 1.
  • Negative clipping values in the red/cyan direction can be determined from the matrix coefficients; the color directions green/magenta and blue/yellow relate to the same calculations.
  • for each color direction the clipping value is calculated first, then the value closest to zero is taken and run through the saturation model, and finally the slope is calculated.
  • the SSm calculation is performed in the same way and only differs in the matrix coefficients which are used for calculating the slope.
  • the hue component is already removed from the matrix coefficients before the saturation correction is skipped.
  • the red, green and blue coefficients (r1, r2, g1, g2, b1, b2) which are responsible for the hue correction are averaged and the value closest to zero is used to calculate the slope SSm by means of the following equation:
  • the calculation of the WLm parameter is performed by determining positive clipping values in the red/cyan direction:
  • a corresponding value Vp is calculated for blue/yellow.
  • the Vp value closest to zero is taken and run through the luminance model. Accordingly, the Vp value which is closest to zero determines where the white point correction is reduced to zero, because the white point correction has to be removed from pixels with a high luminance.
  • the value of Vp may be used to calculate the WLm slope:
  • Fig. 17 shows a representation of an example of an adaptive matrix coefficient. If the luminance values are below 0.853 and the saturation values are below 0.486 (please note that these values are merely used for illustrative purposes and should not be considered as restrictive), the original color gamut matrix coefficients are used. If the saturation values are between 0.486 and 0.624, the red, green and blue coefficients are averaged, reducing the hue correction. If the saturation values are above 0.624, the red, green and blue coefficients are decreased to zero, reducing the saturation correction. At the same time, the average values are added to the diagonal values to maintain luminance. As an illustrative example, the value 1.288 in the right bottom matrix of Fig. 17 includes the white point and saturation correction for the red component to a first order approximation.
  • if the luminance value is above 0.853, the sums of the rows are made equal to one. This is preferably performed by adjusting the diagonal coefficients. Then the diagonal coefficients of the left two matrices are compared: the sum of the red row in the bottom matrix corresponds to 1.171 while the sum of the same row in the top matrix is equal to one.
  • the color gamut mapping in the linear domain as shown in the left triangle is compared to the color gamut mapping in the perceptive domain as depicted in the right triangle, without any adaptation of the matrix coefficients.
  • the pixel colors around the white point are close to the desired colors.
  • the saturation values as plotted in increasing steps of 10% up to 30% to 40% correspond to a good approximation. If high saturation values are present, clipping occurs for both triangles due to the absence of the adaptive processing according to the first embodiment.
  • the adaptation of the matrix coefficients according to the first embodiment has been employed.
  • the left triangle shows a clipping against the boundaries of the triangle, while the right triangle shows a much smoother trajectory from grey to blue.
  • although this difference appears to be rather small, the impact on the front-of-screen performance with natural images is rather big. This is in particular true if a spatial relation between the pixels is present.
  • the above described color mapping can be used to compress a gamut for mobile displays. Furthermore, the gamut can be extended, for example for wide gamut LCD TVs. Moreover, the color space can be modified for multi-primary displays. User settings like saturation and color temperature can be executed. A picture enhancement by means of a blue stretch or green stretch can be performed.
  • the adaptive color gamut mapping may be used to compress the gamut for mobile displays.
  • the gamut may be expanded for wide gamut LCD television systems.
  • the color space for multi-primary displays can be modified.
  • user settings like saturation, color temperature or the like can be executed and a picture enhancement, e.g. a blue stretch, green stretch can be achieved.
  • the above described adaptive color gamut mapping can be used to enhance color settings for display with limited color capabilities by performing a gamut compression.
  • the colors are limited by the color spectrum of the backlight and by the limited thickness and transmission of the color filters.
  • mobile LCD displays only enable a poor performance with respect to contrast and bit depth, such that color gamut mapping is performed without using an adaptive branch, which reduces the required hardware but introduces artefacts.
  • PLED/OLED displays have limited colors because of the degradation of materials with optimal primary colors. Therefore, a preferred spectrum of primaries will be exchanged for a maximum lifetime.
  • LCD front projection displays also may have limited colors due to the color spectrum of the light source as well as due to limited thickness and transmission of the color filters.
  • the white point of bright displays may correspond to the maximum output of the RGB but may not correspond to the D65 white point.
  • although the white point should be corrected, the correction may result in a reduced luminance.
  • the white point can be corrected while still maintaining a correct luminance.
  • a switch between predefined white point settings, e.g. cool (8500 K), normal (D65) and warm (5000 K), can be supported.
  • the adaptive color gamut mapping can improve the color settings of such devices.
  • LCD-TV displays may have a wide color gamut because of the color spectrum of the backlight combined with their color filters.
  • the displays also may comprise a white sub-pixel to enhance the brightness and efficiency.
  • the adaptive color gamut mapping may be used to reshape the color space accordingly supporting a bright gamut color.
  • the adaptive color gamut mapping can be used to prevent clipping artefacts during a rendering process.
  • the adaptive color gamut mapping may be used to enhance the color settings of displays with wide gamut color capabilities as such displays may comprise an extra primary color to enhance the gamut and support an efficient use of the backlight spectrum.
  • Adaptive color gamut mapping may be used to reshape the color space supporting bright gamut colors.
  • the adaptive color gamut mapping may be used to prevent clipping artefacts during the rendering process.
  • user settings may be implemented by the adaptive color gamut mapping such that the personal preferences of a user can be used for the setting of the display. This may be performed by modifying the input coefficients of the sRGB input gamut to user specific coefficients which may describe a modified gamut before it is used in the mapping process.
  • the mapping process may be used to map the 3D input gamut to a 3D output gamut wherein an input image appears with the user-preferred saturation, brightness, hue and white-point settings.
  • an image enhancement can be performed by the adaptive color gamut mapping.
  • additional adaptive processes analyze properties of the input pixels and, depending on their location in the 3D color space with respect to hue, saturation and brightness, map these pixels to a different location with a different hue, saturation and brightness. Accordingly, the input image may appear differently on a display as, e.g., some green colors get more saturation, some skin-tone colors get a more ideal hue, bright white colors get a white point shifted towards blue and dark colors get less luminance and more saturation.
  • the above described color gamut mapping can be used in any display devices to enhance the front-of-screen performance.
  • the adaptive color gamut mapping may also be embodied in display drivers or display modules.
  • the above described adaptive color gamut mapping algorithm may be implemented in a companion chip. This is advantageous as only a limited amount of additional hardware is required to perform the respective algorithm.
  • the adaptive color gamut mapping algorithm may also be executed in software on a programmable processing device. Such a software algorithm may run on an ARM platform or on a media processor like Trimedia.

Abstract

The present invention relates to a video processing device comprising a luminance and saturation detector (LSHD) for detecting the luminance values (lum) and the saturation values (sat) of pixels of an input video signal (IN); and a white-point, saturation and hue modulator (WSH) for transforming luminance and saturation properties (lum, sat) of the pixels of the input video signal (IN) into white-point, saturation and hue correction factors (W, Wc; S, Sc; H, Hc). The video device also comprises a color gamut matrix generating unit (CGMG) for generating a color gamut matrix in the perception domain based on the white-point, saturation and hue correction factors (Wc, Sc, Hc) of the white-point, saturation and hue modulator (WSH); a color gamut mapping unit (20) for multiplying the pixels of the input video signal (IN) with a color gamut matrix generated by the color gamut matrix generating unit (CGMG); and a clipping unit (31) for clipping the results of the color gamut mapping unit (20) which are out of a predefined range.

Description

DEVICE AND METHOD FOR PROCESSING COLOR IMAGE DATA
FIELD OF THE INVENTION
The present invention relates to a video processing device and a method for processing color image data.
BACKGROUND OF THE INVENTION
More and more mobile electronic devices are designed with a color display device for displaying color images. These color images can for example be generated by a camera or a video processor. The supplied color image data may have to undergo image processing to improve the appearance of the image on the display device. Nowadays, multi-media data have to be displayed in a mobile phone, in a multi-media portable player, etc. Typically, the display technology used in these mobile devices has some limitations, in particular with respect to the picture quality and the color reproduction. The colors of the supplied image data are typically encoded according to already existing standards. These standards have been selected to facilitate a design of displays based on the available materials. Accordingly, as long as a camera or a display match the chosen standard, a reasonable reproduction of the colors or image data may be expected.
On the other hand, LCD displays in particular for mobile applications may not be able to meet the requirements of these standards. One of these standards is the sRGB standard, which defines the x-y coordinates of the red, green and blue light source in connection with a reference white point. The primary RGB coordinates of the sRGB standard define a triangle drawn in the Y-xy diagram. The Y component represents the luminance of a pixel (perpendicular to the x-y axis) while the x-y coordinate relates to a unique color with respect to saturation and hue. Preferably, the color coordinates of each encoded pixel lie inside or on the border of this triangle. For many mobile LCD displays, their display triangle is smaller than the sRGB reference resulting in several artifacts. These artifacts may include a lack of saturation due to a smaller gamut of the display of the mobile device. A further artifact may relate to hue errors as the color display primaries do not match the sRGB standard primary values. Furthermore, the reference white point may be shifted such that black and white parts in a scene may be represented in a color. To cope with these problems, color gamut mapping is used to process input pixels such that the colors on a display with a smaller gamut are reproduced to match a reference display.
WO 2005/109854 discloses a method for processing color image data to be displayed on a target device. Input pixel data within a source color gamut is received. The input pixel data in said source color gamut is mapped to a target color gamut which can be associated to a target device. The mapping is controlled according to a color saturation value of the input pixel data.
Fig. 1 shows a block diagram of a basic color gamut mapping. The input image data IN are processed by gamma function 10. The output of the gamma function 10 undergoes a color gamut mapping function 20 by processing the input pixel data with a static matrix (3x3). The output of the color gamut mapping function 20 is processed by a hard clipper function 30. Here, negative values of R, G, B are clipped to zero and, for 36-bit RGB pixel data, RGB values larger than 2096 are set to 2096. The output of the hard clipper function 30 is processed by the de-gamma function 40 and the output OUT of the de-gamma function 40 is supplied to a display in a target device. The gamma function 10 is required as the input image data IN or pixels relate to the video domain. The values of the RGB signal are now proportional to the luminance of the three primary light sources. By performing a gamut mapping, light sources are linearly mixed to achieve a desired color. Gamut mapping is preferably performed in the light domain. The gamma transformation performed in the gamma function 10 corresponds to a non-linear operation and may increase the resolution of the RGB signal in the digital domain. The coefficients of the gamut mapping matrix in the gamut mapping function 20 are chosen such that an input RGB luminance value can be directly mapped into a new RGB luminance value for the display of the mobile application. In other words, the matrix is designed to adapt the ratio between the RGB subpixels. The coefficients of the gamut mapping matrix can be calculated as:
MTX_CGM = (MTX_DISP)^(-1) · MTX_sRGB, wherein the MTX_sRGB and the MTX_DISP matrices are used to translate an RGB value to the XYZ domain. These matrices are determined based on the primary colors and a reference white point.
However, the above described color gamut mapping may lead to subpixels which are out of range (negative or positive out of range). To avoid such values, a clipping operation (hard, soft or smart) is performed. The inverse gamma function is used to transform the pixels back to the video domain as a display typically cannot handle the luminance values directly, e.g. due to standard interface display driver hardware.
Fig. 2 shows two graphs for illustrating problems arising from a color gamut mapping. In the upper graph, a color triangle of the input image data and of the output image data are displayed. In the lower graph, the color triangle watched from the side is depicted. In the upper diagram, the input stimuli Is represent scanned lines, wherein for each scanned line the saturation is increased in steps of 10%. The twelve lines depicted in the lower diagram comprise a constant hue. The result of color gamut mapping OM is depicted in the same diagram. The x-y diagram is taken at a constant perceptive luminance level of 30%, and output values above or below this luminance are color coded (red/blue) until a threshold of 5% is exceeded. Those parts of the lines where the threshold is exceeded are labelled UT. Accordingly, the color gamut mapping works well for pixels that coincide with both triangles.
In the lower diagram, the luminance on the vertical axis is shown while the blue-yellow color space surface is viewed from the side. The flat lines represent the input stimuli Is. The bent lines show the luminance of the display if no gamut mapping is performed at all. The lines MO show the result after color gamut mapping. Accordingly, the gamut mapping works well in the luminance direction for pixels that coincide with both triangles.
However, the following gamut mapping problems may arise. Those pixels which fall outside the display triangle may generate negative values after mapping, i.e. negative light is required to reproduce this particular color on the display. As this is physically not possible, these negative values must be clipped off to a value the display can represent. The upper diagram shows the problem area at the corner of the display triangle. Moreover, pixels with high amplitude may lie outside the display range and therefore have to be limited to a value which can be physically represented. The lower Y-x-y diagram depicts a problem area at the right top where the lines UMO clip against the ceiling. In general, any sudden discontinuity in the first derivative of the luminance will lead to visible artifacts in the resulting image. This artifact can be seen at the right side of the Y-x-y diagram where the luminance suddenly bends upwards. Therefore, a trade-off is required between the color gamut mapping and the (soft) clipping.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a video processing device and a method for performing a more efficient color gamut mapping with reduced hardware resources.
This object is solved by a video processing device according to claim 1 and by a method according to claim 6.
Therefore, a video processing device is provided which comprises a luminance and saturation detector for detecting the luminance values and the saturation values of pixels of an input video signal and a white-point, saturation and hue modulator for transforming luminance and saturation properties of the pixels of the input video signal into white-point, saturation and hue correction factors. The video device also comprises a color gamut matrix generating unit for generating a color gamut matrix in the perception domain based on the white-point, saturation and hue correction factors of the white-point, saturation and hue modulator, a color gamut mapping unit for multiplying the pixels of the input video signal with a color gamut matrix generated by the color gamut matrix generating unit, and a clipping unit for clipping the results of the color gamut mapping unit which are out of a predefined range.
According to an aspect of the invention the luminance and saturation detector comprises a RGB squaring unit for squaring amplitudes of sub-pixels of the input video signal, a RGB shuffler unit for ranking the squared sub-pixels based on their amplitude value; a luminance and saturation calculation unit for calculating a value of the luminance, an internal saturation and a saturation correction factor; and a saturation correction unit for outputting a corrected saturation value.
According to a further aspect of the invention the white-point, saturation and hue modulator comprises a white point modulator for determining a white-point correction factor based on the luminance value from the luminance and saturation calculation unit, a saturation modulator for determining a saturation correction factor based on the saturation value from the luminance and saturation calculation unit, and a hue modulator for determining a hue correction factor based on the saturation value from the luminance and saturation calculation unit.
According to still a further aspect of the invention the color gamut matrix generating unit is adapted to generate the color gamut matrix based upon measured characteristics of a display module such that the color gamut matrix can be adapted to the actual display module.
The invention also relates to a method of processing color image data. The luminance values and the saturation values of pixels of an input video signal are detected by a luminance and saturation detector. The luminance and saturation properties of the pixels of the input video signal are transformed into white-point, saturation and hue correction factors by a white-point, saturation and hue modulator. A color gamut matrix is generated in the perception domain based on the white-point, saturation and hue correction factors of the white-point, saturation and hue modulator by a color gamut matrix generating unit. The pixels of the input video signal are multiplied with a color gamut matrix generated by the color gamut matrix generating unit by a color gamut mapping unit. The results of the color gamut mapping unit which are out of a predefined range are clipped by a clipping unit.
The present invention is based on the realization that known color gamut mapping algorithms take measures to avoid clipping and preserve contrast at several points, so that a duplication of functionality might be expected. A saturation dependent attenuation is present in front of the video matrix, while the matrix itself may also be able to perform a saturation dependent attenuation. The soft clipper adjusts values below and above the operating range of the display. However, if it is possible to detect these situations, the matrix coefficients can be modified to avoid severe negative and positive values and the soft clipper can be replaced by a much cheaper hard clipper (in terms of hardware resources). Furthermore, the color gamut mapping is conventionally executed in the linear light domain. This is disadvantageous as a matched gamma and de-gamma functional block is required. These operations are non-linear, so that they require a lot of calculations in hardware or relatively large lookup tables. Moreover, values resulting from the gamma operation must be represented in a higher resolution (e.g. an increase of a linear 8 bit input to a 12 bit output) or in a non-linear representation (exponent-mantissa) to avoid quantization noise. In addition, the video matrix operation must be executed in the numerical representation chosen to represent the luminance values of the gamma operation. Since the video matrix requires nine multiplications and six additions, the amount of consumed hardware resources and power dissipation can be a disadvantage.
With the video processing device according to the invention the previously required gamma and de-gamma functions can be omitted, i.e. the color gamut mapping is performed directly in the video domain, i.e. the video perceptive domain. Moreover, the 8 bit video values can be used to calculate the color gamut mapped output pixels directly. The coefficients of the color gamut matrix (calculated according to the EBU standard) as well as the display data in the linear light domain are corrected to handle the missing gamma block, and pixel colors around the white point are mapped as if the gamma function were present. The soft clipper used according to the prior art can be replaced by a hard clipper as the coefficients of the color gamut matrix are modified to avoid severe clipping. This modification is achieved by the adaptive path in the video processing device according to the invention. In the adaptive branch of the video processing device the amount by which the white-point, hue and saturation corrections are to be reduced is determined based on a simple luminance and saturation value, and the optimal coefficients are derived for each pixel. While the color gamut mapping is performed completely in the video branch of the video processing device, the adaptive branch is merely used to prevent a loss of detail (e.g. by clipping) and to preserve the contrast of an image. A loss of detail may occur for artificial images as the mapping algorithm may not be aware of spatial and temporal relations between pixels. The reduction parameters for the white-point, saturation and hue may be determined directly, as a direct relation exists between the color gamut matrix coefficients and clipping artifacts. Furthermore, the number of bits in a ROM for storing static mapping data is reduced. E.g. if a 24 bit pixel application is taken into account, a display can be characterized by 9*5 bits + 3*4 bits of ROM storage capacity.
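As an illustration of the storage figure quoted above, the following sketch shows one possible (hypothetical) fixed-point encoding of the nine display characterization parameters as 5-bit codes and the three slope parameters as 4-bit codes; the scaling, parameter values and helper names are assumptions made for illustration only, not the patent's actual encoding.

```python
# Rough sketch (not the patent's actual encoding): nine white-point/saturation/hue
# parameters stored as 5-bit signed fixed-point codes and three slope parameters
# as 4-bit codes, i.e. 9*5 + 3*4 = 57 bits in total.

def quantize(value, bits, frac_bits):
    # Convert a small real value to a signed fixed-point code of the given width.
    code = int(round(value * (1 << frac_bits)))
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, code))

def dequantize(code, frac_bits):
    return code / (1 << frac_bits)

# Hypothetical characterization values (WRGB, SRGB, HRGB): small corrections
# around zero, as expected for a near-identity gamut matrix.
wsh_params = [0.12, -0.05, 0.08, -0.20, 0.15, -0.10, 0.03, -0.07, 0.02]
slope_params = [0.50, 0.75, 0.25]          # WLm, SSm, HSm (illustrative values)

param_codes = [quantize(p, bits=5, frac_bits=3) for p in wsh_params]
slope_codes = [quantize(s, bits=4, frac_bits=2) for s in slope_params]
rom_bits = len(param_codes) * 5 + len(slope_codes) * 4   # = 57 bits

print(param_codes, slope_codes, rom_bits)
print([dequantize(c, 3) for c in param_codes])
```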
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.
Fig. 1 shows a block diagram of a basic color gamut mapping;
Fig. 2 shows two graphs for illustrating problems arising from a color gamut mapping;
Fig. 3 shows a block diagram of a gamut mapping system in the light domain according to the prior art;
Fig. 4 shows an illustration of a reduction of the hue correction;
Fig. 5 shows a block diagram of a color gamut mapping system in the perceptive domain according to a first embodiment;
Fig. 6 shows an illustration of a video matrix relationship;
Fig. 7 shows an illustration of models for the luminance, the saturation and the hue;
Fig. 8 shows an illustration of models of a hue corrected saturation;
Fig. 9 shows a block diagram of a LSHD detector according to a second embodiment;
Fig. 10 shows an illustration of a transfer curve of a modulator;
Fig. 11 shows a block diagram of a WSH modulator according to a third embodiment;
Fig. 12 shows a block diagram of a matrix generator according to a fourth embodiment;
Fig. 13 shows a block diagram of a color gamut mapping unit according to a fifth embodiment;
Fig. 14 shows a representation of the calculation of XYZ matrices;
Fig. 15 shows a representation of the calculation of static matrix coefficients;
Fig. 16 shows an illustration of a trade-off between color gamut mapping and soft clipping; and
Fig. 17 shows a representation of an example of an adaptive matrix coefficient.
DETAILED DESCRIPTION OF EMBODIMENTS
Fig. 3 shows a block diagram of a gamut mapping system in the light domain. An input video signal IN is processed by the video path, i.e. by the gamma function 10, the adaptive color gamut mapping function 20, a soft clipper function 30 and the de-gamma function 40 (as described with reference to Fig. 1). The adaptive processing for the adaptive color gamut function is performed by a luminance unit LU, a saturation unit SU, a RGB unit RGB, a white point unit WPU, a triangle unit TU and a color matrix unit CMU. The luminance unit LU measures the luminance of the input signal IN and the saturation unit SU measures the saturation of the input signal IN. Optimal coefficients are determined based on the measured luminance lum and saturation sat. The mapping coefficients for the color gamut mapping are calculated from predefined coefficients, i.e. parameters inserted at the color gamut matrix generator by means of the shift and rotate values for every pixel. The video matrix coefficients are modified under circumstances where clipping has to be avoided at the cost of less color correction. For those pixels, it is more important to preserve the details and contrast in the resulting image. A trade-off between color mapping and clipping prevention has to be made carefully since no spatial or temporal pixel information is available to the color gamut mapping algorithm.
The decision whether a pixel needs to be color mapped or requires a set of modified coefficients for the video matrix may be based on the case when a pixel has a high luminance (i.e. the white point correction is skipped) or on the case when a pixel has a high saturation (i.e. the hue is not corrected).
Hence, the white point correction is skipped if a pixel has a high luminance. Accordingly, the diagonal coefficients s1 to s3 in the video matrix 20 are modified such that the sum of each row is equal to one. A white point correction implies that at least one of the sub-pixels has a gain above one and that clipping will occur if the sub-pixel's amplitude is also high. Any unsaturated pixel with high amplitude will therefore get the color of the backlight as the white point is not corrected anymore. If the white point of the backlight is chosen such that its color is shifted towards blue, the perceived pixel luminance will appear to be brighter, which can be advantageous. Therefore, pixels with low luminance amplitude will have the correct white point according to the sRGB standard, while pixels with a high luminance will get the bluish white point of the backlight and thereby appear to be brighter.
On the other hand, if a pixel has a high saturation, the hue is not corrected anymore, i.e. the red, green and blue coefficients r1, r2, g1, g2, b1 and b2 are averaged (r1 = r2 = average(r1, r2) and so on). The vertices of the resulting "virtual" display triangle VDT lie on the lines that can be drawn between the primary colors and the white point of the input standard according to Fig. 4. The lack of saturation is still corrected by the video matrix. Hence, if e.g. red needs more gain (saturation), the red amplitude is increased to produce more red light in the primary color of the display. So if the color of red is wrong (hue), it will not be corrected anymore. The basic idea behind this way of clipping is that for high saturations, the display will still be fully driven and maximum contrast is achieved.
A saturation dependent attenuation is required before entering the video matrix to reduce the amplitude of highly saturated colors. These colors will then produce less negative values after color gamut mapping and thereby are easier to handle in the soft clipper. A soft clipper is still required to make sure that the values presented to the de-gamma block are in range. Negative values are removed by adding white to all three sub-pixels. This reduces the saturation without disturbing the pixel's hue. Next, sub-pixel amplitudes above one are detected and, in that case, the amplitudes are reduced such that they are just in range. With the above described adaptive color gamut mapping the front-of-screen performance of the display is increased such that perceived colors are matched more closely to the input standard. However, in such a color gamut mapping process measures must be taken to avoid artefacts such as loss of detail and contrast.
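The soft clipping behaviour described above can be sketched as follows; this is one possible reading of the description (in particular, a simple scaling is used here to bring over-range amplitudes back into range), not the exact prior-art implementation.

```python
def soft_clip(r, g, b):
    """Illustrative soft clipper: negative sub-pixel values are removed by adding
    the same amount of white to all three sub-pixels (reducing saturation but
    preserving hue), then amplitudes above one are brought back into range."""
    # Step 1: add white until no sub-pixel is negative.
    lift = max(0.0, -min(r, g, b))
    r, g, b = r + lift, g + lift, b + lift
    # Step 2: if any sub-pixel exceeds one, scale the pixel down so it is just in range.
    peak = max(r, g, b)
    if peak > 1.0:
        r, g, b = r / peak, g / peak, b / peak
    return r, g, b

print(soft_clip(1.2, -0.1, 0.4))   # lifted by 0.1, then scaled by 1/1.3
```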
Fig. 5 shows a block diagram of a color gamut mapping system in the perceptive domain according to a first embodiment. The color gamut mapping system comprises a luminance and saturation detector LSHD, a white point, saturation and hue modulator WSH, a color gamut matrix generator CGMG, an adaptive color gamut mapping unit 20 and a hard clipper unit 31.
The luminance and saturation detector LSHD receives the input signal IN and derives the luminance lum and the saturation sat of a pixel by means of a simplified model. The hue of the pixel is also used in order to compensate the measured saturation for the case that the respective color vector points to a secondary color.
The white point, saturation and hue modulator WSH implements three functions transforming the luminance lum and saturation sat of a pixel into those values which are required for the white point W, saturation S and hue H correction. The outputs of the modulator may constitute normalized values indicating to what extent the originally measured correction becomes part of the coefficients of the video matrix. The modulator WSH also receives three input values, namely WLm, SSm, HSm. These three values indicate the characteristics of the three transfer functions.
The color gamut matrix generator CGMG receives the three output signals W, S, H from the white point, saturation and hue modulator WSH and outputs the optimal video matrix (i.e. the matrix coefficients s1, s2, s3, r1, r2, b1, b2, g1, g2) for the pixel which is to be mapped. Static coefficients of the video matrix are encoded in predefined white-point WRGB, saturation SRGB and hue HRGB parameters. These parameters determine the color gamut mapping of pixels from the input IN to the output OUT. If the coefficients of the static matrix are modified in order to reduce the required amount of white point, saturation and hue correction, the quality of the color gamut mapping will deteriorate. However, some details and contrast will be preserved as clipping is prevented. The parameters WRGB, SRGB, HRGB as well as WLm, SSm, HSm are used to characterize the display which is to be driven by the video processing system.
Accordingly, the color mapping system according to Fig. 5 is able to operate directly in the video (perceptive) domain. The color gamut mapping unit 20 and the hard clipper unit 31 constitute the video processing path, while the luminance and saturation detector LSHD, the white point, saturation and hue modulator WSH and the color gamut matrix generator CGMG form the adaptive processing path, which serves to prevent a loss of detail, e.g. by clipping, and to preserve the contrast of the image. The input pixels from an input reference system sRGB are mapped to a corresponding display reference system. This is performed by multiplying the input pixel vector with a three-by-three matrix in the color gamut mapping unit 20. If the result of this processing is out of range, these values can be hard-clipped by the hard clipper unit 31. In particular, values greater than 1 are clipped to 1 and values below zero are clipped to 0. The units for the adaptive processing (LSHD, WSH, CGMG) are used to ensure that the pixels will be mapped without a loss of detail or contrast. The units of the adaptive processing may also be used to implement a soft clipping function. Hence, the color gamut mapping is performed by the units for the video processing, while the units for the adaptive processing modify the coefficients of the video matrix in order to perform a respective mapping and to avoid severe clipping.
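A minimal sketch of the video processing path of this embodiment: each input pixel vector is multiplied by the 3-by-3 color gamut matrix delivered by the adaptive path, and the result is hard-clipped to the range [0, 1]. The matrix values used here are placeholders; in the device the coefficients are recomputed per pixel by the color gamut matrix generator.

```python
import numpy as np

def map_pixel(rgb_in, cgm):
    """Multiply an input pixel by the 3x3 color gamut matrix and hard-clip
    the result to the displayable range [0, 1]."""
    out = cgm @ np.asarray(rgb_in, dtype=float)
    return np.clip(out, 0.0, 1.0)   # values > 1 become 1, values < 0 become 0

# Placeholder near-identity matrix: diagonal coefficients close to 1,
# off-diagonal coefficients close to 0, as stated for the video matrix unit.
cgm_example = np.array([[ 1.05, -0.03, -0.02],
                        [-0.04,  1.08, -0.04],
                        [ 0.01, -0.06,  1.05]])

print(map_pixel((0.9, 0.2, 0.7), cgm_example))
```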
Fig. 6 shows an illustration of a video matrix relationship. In particular, the relationships between the correction parameters and the matrix coefficients are depicted. The modified matrix coefficients constitute a virtual triangle in the Y-x-y domain. The arrows in Fig. 6 depict the effect of the transformation on the virtual triangle V if the matrix coefficients are changed by the corresponding correction.
In the uppermost diagram, the effects of the color gamut matrix generator on the white point parameters are depicted. The white point reference is shifted and the perceived overall hue is changed. In the middle diagram, the effects of the color gamut matrix generator on the saturation parameters are depicted. Here, the virtual triangle V is decreased, the perceived saturation is increased and an independent saturation control for the RGB colors is achieved. In the lower diagram, the effect of the color gamut matrix generator on the hue parameters is depicted. The virtual triangle is rotated and the perceived hue is changed. Moreover, an independent hue control for RGB can be achieved.
The relation between the white point, saturation and hue corrections and the coefficients of the video matrix originates from the fact that the mapping from an input signal in RGB to an output signal in RGB is performed by means of matrix coefficients. However, if a different kind of mapping is performed, the relationship will also be different. The relationship between the white point correction and the matrix coefficients is as follows:
Figure imgf000012_0001
If an unsaturated pixel vector Vin is multiplied with the video matrix, the RGB ratio will be determined by the sums of the rows. Here, the sub-pixels in the input vector Vin have the same amplitude V. The particular amount of correction which is required if the white point of a display does not correspond to the input standard white point corresponds to the sums of the rows. By modifying the sum of each row to 1, the white point correction can be avoided. In the above equations, the coefficients s1, s2 and s3 can be varied. However, modifying the other coefficients is not possible as these coefficients relate to the saturation and hue correction.
With respect to the saturation correction, the required amount of saturation mapping for the particular display corresponds to the averages of the red, green and blue coefficients: (r1+r2)/2, (g1+g2)/2, (b1+b2)/2. For the case that the coefficients belonging to a color are equal to the average of this color, the primary display color coordinate can be found on a line between the reference white point and the primary color according to the input standard. A virtual triangle can be obtained by using the averages of these coefficients in the Y-x-y domain. For the virtual triangle, negative averages will result in a smaller gamut triangle size, while positive averages will lead to a wider gamut triangle size. If the averages are zero, the triangle size will correspond to the input standard triangle size. If a saturation correction is to be avoided, the red, green and blue averages have to be modified to equal zero. As the relationship between the virtual triangle size and the average values of the coefficients is non-linear, the average values can be reduced to zero in order to reduce the amount of saturation correction.
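The relations described above, and the hue relation via coefficient differences discussed next, can be illustrated with the following sketch; the assumed coefficient layout (s1, s2, s3 on the diagonal and r1, r2 / g1, g2 / b1, b2 as the off-diagonal entries of the red, green and blue columns) and the matrix values are assumptions made only for illustration.

```python
import numpy as np

# Assumed coefficient layout for the 3x3 video matrix (columns = R, G, B input):
#   [[s1, g1, b1],
#    [r1, s2, b2],
#    [r2, g2, s3]]
M = np.array([[ 1.15, -0.08, -0.07],
              [-0.20,  1.25, -0.05],
              [ 0.02, -0.12,  1.10]])

row_sums = M.sum(axis=1)                 # white-point correction: sum of each row
r1, r2 = M[1, 0], M[2, 0]                # red off-diagonal coefficients
g1, g2 = M[0, 1], M[2, 1]                # green off-diagonal coefficients
b1, b2 = M[0, 2], M[1, 2]                # blue off-diagonal coefficients

sat_avgs = [(r1 + r2) / 2, (g1 + g2) / 2, (b1 + b2) / 2]   # saturation correction
hue_diffs = [r1 - r2, g1 - g2, b1 - b2]                    # hue correction

print(row_sums, sat_avgs, hue_diffs)
```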
With respect to the hue correction, the hue correction corresponds to the differences between the coefficients of a color: (r1−r2), (g1−g2), (b1−b2). The hue correction can be avoided by averaging these coefficients.
Fig. 7 shows an illustration of the models for the luminance, the saturation and the hue. These models are used by the luminance and saturation detector LSHD to determine the luminance and saturation properties of a pixel. In a first step, the amplitudes of the sub-pixels are squared such that these values correspond to the values in the linear light domain. The square is an approximation of a gamma factor of 2.2 if the adaptive branch is used. Thereafter, the sub-pixels are shuffled, i.e. ranked in order of their magnitude. The value with the highest magnitude is referred to as "max", the next value corresponds to "med" (medium) and the last value corresponds to "min". Based on these values, the properties of the luminance, the saturation and the hue are calculated or derived.
Figure imgf000013_0001
with Min = L·(1−S), Med = L·(1−S·H), and Max = L. The value of the saturation corresponds to the modulation depth between the sub-pixels. The value of the hue relates to the direction of the vector with respect to a primary or secondary color PC, SC. The bottom part of Fig. 7 depicts the values of the hue if the primary colors PC and the secondary colors SC are also taken into account. When determining the hue HU, normalized values are produced. Accordingly, the values for max, med and min with respect to the luminance and the saturation stay within a range of 0 to 1, while the values of the hue lie within a range of −1 to +1. The saturation and hue values are independent of the average luminance amplitude of a pixel. This is of significance as independent luminance and saturation values are required to control the color gamut mapping process.
Fig. 8 shows an illustration of the models of a hue corrected saturation. According to Fig. 8, firstly the saturation and the hue are calculated as shown in Fig. 7, based on a saturation model SM1 with no hue correction and a hue model HM. Thereafter, the measured saturation Si of those colors which have a vector pointing in the direction of a secondary color is reduced by "m", which corresponds to the so-called modulation index. In the top right corner of Fig. 8, the modulation index is selected such that the detected saturation Sd is approximately proportional to the distance between the pixel color coordinates and the white point.
Figure imgf000014_0001
Now, the units of the color gamut mapping system are described in more detail. Fig. 9 shows a block diagram of a LSHD detector according to a second embodiment. This LSHD detector according to the second embodiment may be used in the color gamut mapping system according to Fig. 5. The LSHD detector comprises a RGB square unit RGB SQ, a RGB shuffler unit RGB SH, a LSH unit and a saturation correction unit SC. The RGB square unit RGB SQ squares the amplitudes of the sub-pixels Ri, Gi, Bi. Preferably, the square is calculated as an eight bit number multiplied with a six bit number. The result thereof is represented as a seven bit value. The RGB shuffler unit RGB SH is coupled at its input to the outputs Rs, Gs, Bs of the RGB square unit. The RGB shuffler unit RGB SH serves to rank the sub-pixels in the order of the value of their amplitude. The RGB shuffler unit outputs a maximum value MAX, a medium value MED and a minimum value MIN. Based on these values, the luminance Ld, the internal saturation Si and the correction factor for the saturation Cs are calculated by the LSH unit LSH and output. The saturation correction unit SC receives the internal saturation Si and the correction factor Cs and outputs the corrected saturation Sd. The luminance output of the LSH unit corresponds to the maximum value, which simplifies the calculations in the luminance-saturation domain.
Accordingly, in the RGB square unit RGB SQ, the square values are calculated as follows:
Rs = (Ri)^2, Gs = (Gi)^2, Bs = (Bi)^2, with Ri, Gi, Bi = 8,-8,U and Rs, Gs, Bs = 7,-7,U
The operation of the RGB shuffling unit can be described as follows: Max = MAX(Rs, Gs, Bs), Med = MED(Rs, Gs, Bs), Min = MIN(Rs, Gs, Bs), with Rs, Gs, Bs = 7,-7,U and Max, Med, Min = 7,-7,U
The operation of the LSH unit is described as follows:
Figure imgf000015_0002
with Max, Med, Min = 7,-7,U and Ld, Si, Cs = 6,-6,U. The saturation correction is calculated as follows:
with Si, Sc = 6,-6,U, and Sd = 6,-6,U
Figure imgf000015_0001
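A simplified sketch of the detector's data flow, assuming the model relations Max = L, Med = L·(1−S·H) and Min = L·(1−S) discussed with Fig. 7, and an assumed form of the hue-dependent saturation correction with modulation index m (Fig. 8). The hue value returned here lies in [0, 1] (1 near a primary, 0 near a secondary color) rather than the signed [−1, +1] convention of the embodiment, and the fixed-point formats are omitted.

```python
def lshd_detect(r_in, g_in, b_in, m=0.5):
    """Illustrative luminance/saturation/hue detection. Inputs are normalized
    to [0, 1]; m is the modulation index used to reduce the measured saturation
    of colors whose vector points towards a secondary color."""
    # Squaring approximates the display gamma of roughly 2.2 (linear light).
    rs, gs, bs = r_in ** 2, g_in ** 2, b_in ** 2
    mx, md, mn = sorted((rs, gs, bs), reverse=True)     # shuffle/rank by amplitude
    ld = mx                                             # luminance ~ largest sub-pixel
    si = 0.0 if mx == 0 else (mx - mn) / mx             # internal saturation
    hu = 0.0 if mx == mn else (mx - md) / (mx - mn)     # 1 near a primary, 0 near a secondary
    # Assumed hue-corrected saturation: reduce the saturation of vectors pointing
    # towards a secondary color so that Sd is roughly proportional to the
    # distance from the white point.
    sd = si * (1.0 - m * (1.0 - hu))
    return ld, sd, hu

print(lshd_detect(0.8, 0.3, 0.2))
```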
Fig. 10 shows an illustration of the transfer curve of a modulator. Here, the transfer curve is modified by the slope. The input parameters and the output correction factors relate to normalized values. The depicted horizontal line HL represents the limitation of the correction values to 1. For larger input values, the sloped part of the transfer curve determines the output. A correction value of 1 indicates that the corresponding property is completely part of the resulting video matrix. A value of zero indicates that the corresponding correction property is skipped in the video matrix.
Fig. 11 shows a block diagram of a WSH modulator according to a third embodiment. The WSH modulator according to the third embodiment can be used in the color gamut mapping system according to Fig. 5. The modulator according to Fig. 11 comprises a white point modulator WPM, a saturation modulator SM and a hue modulator HM. By means of these three modulators, the luminance and saturation properties Ld, Sd from the LSHD detector are transformed into the correction factors Wc, Sc, Hc for determining the white point, saturation and hue correction. In order to facilitate the processing of these functions and to save hardware resources, the ideal curves for these modulators correspond to a linear function as depicted in Fig. 11. The parameters WLm, SSm and HSm characterize the linear functions and are input to the WSH modulator. The operation of the white point modulator WPM can be described by the following equation:
Wc = (1 − Ld) + (WLm − WLm·Ld), with Ld = 6,-6,U, WLm = 4,-3,U, and Wc = 6,-6,U. The operation of the saturation modulator can be described as follows:
Sc = (1 − Sd) + (SSm − SSm·Sd), with Sd = 6,-6,U, SSm = 4,-3,U, and Sc = 6,-6,U
The operation of the hue modulator HM can be described as follows:
Hc = (1 − Sd) + (HSm − HSm·Sd), with Sd = 6,-6,U, HSm = 4,-3,U, and Hc = 6,-6,U
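The three modulator transfer functions share the same linear form; a minimal sketch follows, with the output limited to 1 as indicated by the horizontal line HL in Fig. 10 (the explicit clamping is an assumption drawn from that description, and all values here are illustrative).

```python
def modulator(x, slope):
    """Linear modulator transfer: correction = (1 - x) + slope * (1 - x),
    limited to 1. An output of 1 keeps the corresponding correction fully in
    the video matrix; an output of 0 skips it."""
    return min(1.0, (1.0 - x) + slope * (1.0 - x))

def wsh_modulate(ld, sd, wlm, ssm, hsm):
    wc = modulator(ld, wlm)   # white-point correction driven by the luminance Ld
    sc = modulator(sd, ssm)   # saturation correction driven by the saturation Sd
    hc = modulator(sd, hsm)   # hue correction also driven by the saturation Sd
    return wc, sc, hc

print(wsh_modulate(ld=0.9, sd=0.2, wlm=0.5, ssm=0.5, hsm=0.25))
```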
Fig. 12 shows a block diagram of a matrix generator according to a fourth embodiment. The matrix generator CGMG according to the fourth embodiment can be used in the mapping system according to Fig. 5. The color gamut matrix generator CGMG is used to generate the nine coefficients which are used for the color gamut mapping on the basis of the allowed amounts of white point, saturation and hue correction Wc, Sc, Hc from the previous block. The nine parameters are calculated based on the measured data of a display module. Preferably, the nine parameters are defined such that the calculation of the coefficients is easy when the required correction is taken into account. The relation between the coefficients and the nine parameters is as follows:
Figure imgf000016_0001
The video matrix MTXCGM is calculated in the linear light domain from the measurement data. Thereafter, these coefficients are compensated for the fact that no gamma and de-gamma operation is performed. Accordingly, based on these values, the nine parameters are calculated as follows:
K10 = Sr·Sc + Hr·Hc
K21 = Sg·Sc + Hg·Hc
K01 = Sg·Sc − Hg·Hc
Figure imgf000017_0001
with Wc, Sc, Hc = 6,-6,U; Wr, Wg, Wb = 5,-5,S; Sr, Sg, Sb = 5,-5,S; and Hr, Hg, Hb = 5,-5,S
Fig. 13 shows a block diagram of a video matrix unit according to a fifth embodiment. The color gamut mapping unit 20, i.e. a video matrix unit VM according to the fifth embodiment, can be used in the mapping system according to Fig. 5. The video matrix unit VM 20 serves to perform the color gamut mapping of the video processing. The input pixel vector (Ri, Gi, Bi) is multiplied by the matrix coefficients K00 ... K22 in order to obtain a color gamut mapped output pixel vector (Ro, Go, Bo). Preferably, the amplitudes of these coefficients are close to a unity matrix, i.e. the diagonal coefficients are around 1 and the other coefficients are around 0. Therefore, the output pixel vector can be described as follows:
Figure imgf000017_0002
The color gamut mapping can be performed as follows:
Figure imgf000017_0003
with Ri, Gi, Bi = 8,-8,U, K00...K22 = 6,-6,S, and Ro, Go, Bo = 8,-8,U.
Fig. 14 shows a representation of the calculation of the XYZ matrices. In Fig. 14, the calculation of the OTP parameters based on the input standard and the measured data of the display is shown. Here, the parameters WRGB, SRGB and HRGB define the color gamut mapping, while the parameters WLM, SSM and HSM define the trade-off between color mapping and soft clipping.
Figure imgf000018_0001
The values of k1 and k2 are used to determine the RGB ratio for the white point:
Figure imgf000018_0002
Accordingly, firstly the WRGB, SRGB and HRGB parameters are calculated. This is performed by calculating the MTXSRGB and MTXDISP matrices based on the desired color standard for input images and the measurements of the display primaries and the display white point. The XYZ matrix is calculated as follows:
Figure imgf000018_0003
Based on this matrix, the color gamut matrix MTXCGM in the linear light domain can be calculated as follows:
MTXCGM = (MTXDISP)^(-1) · MTXEBU
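A sketch of how such a color gamut matrix can be derived from measured display data using the standard construction of an RGB-to-XYZ matrix from primary and white-point chromaticities; the display chromaticities below are placeholders, and the exact MTXEBU and MTXDISP values of the embodiment are not reproduced.

```python
import numpy as np

def rgb_to_xyz_matrix(primaries_xy, white_xy):
    """Standard construction of an RGB->XYZ matrix from the chromaticities of
    the three primaries and the white point: scale the primary columns so that
    RGB = (1, 1, 1) reproduces the white point at Y = 1."""
    cols = []
    for x, y in primaries_xy:
        cols.append([x / y, 1.0, (1.0 - x - y) / y])
    P = np.array(cols).T
    xw, yw = white_xy
    white_xyz = np.array([xw / yw, 1.0, (1.0 - xw - yw) / yw])
    scale = np.linalg.solve(P, white_xyz)
    return P * scale

# EBU-like input standard and a placeholder (narrower) display characterization.
mtx_ebu  = rgb_to_xyz_matrix([(0.64, 0.33), (0.29, 0.60), (0.15, 0.06)], (0.3127, 0.3290))
mtx_disp = rgb_to_xyz_matrix([(0.60, 0.35), (0.32, 0.55), (0.16, 0.10)], (0.30, 0.32))

# Color gamut matrix in the linear light domain, as in the formula above.
mtx_cgm = np.linalg.inv(mtx_disp) @ mtx_ebu
print(np.round(mtx_cgm, 3))
```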
Fig. 15 shows a representation of the calculation of the static matrix coefficients. Here, the transformation of the coefficients of the MTXCGM matrix into the perceptive domain is shown. Nine coefficients from the linear light domain are input and converted into nine coefficients in the video domain. The linear coefficients are indicated as KL(row, column) and the perceptive coefficients are indicated as KP(row, column). During the conversion of the coefficients from the perceptive to the linear domain a gamma value of approximately 2.2 is taken into account. As the linear domain is the mathematically correct domain, the equations in the perceptive domain are constructed such that they are also (approximately) correct. Therefore, the white point correction will be the same for pixel vectors processed by both matrices. Furthermore, the first derivative of the saturation correction around the white point will also be the same for both matrices. Accordingly, the proportional increase of saturation from grey up to 30 to 40% is a good approximation of the correct values. Moreover, the first derivative of the hue correction around the white point is the same for both matrices. Accordingly, the angle of a pixel vector starting from the white point constitutes a good approximation of the correct angle.
The perceptive coefficients are calculated as follows:
Figure imgf000019_0001
Thereafter, the parameters WRGB, SRGB and HRGB can be calculated as follows:
Figure imgf000019_0002
Fig. 16 shows an illustration of the trade-off between color gamut mapping and soft clipping. The luminance and saturation properties of each input pixel are determined. These properties correspond to a two-dimensional vector pointing to an area in Fig. 16. The correct action to be taken depends on the area which is addressed by the pixel properties. In Fig. 16, four areas are defined. If the luminance and saturation values of a pixel are below certain thresholds, the color gamut mapping is performed based on the originally determined matrix coefficients. These thresholds correspond to the values of the parameters WLM and HSM specifying the slopes of the correction transformation. However, if the luminance is low, the matrix coefficients are adjusted, firstly by removing the hue correction and subsequently by removing the saturation correction. As the saturation correction is important for the display, any gradual decrease of the coefficients related to the saturation correction is compensated by increasing the diagonal coefficients by the same amount. Although this is a rather rough first order approximation, it is sufficient for the present algorithm. Pixels having a high saturation and a low luminance may result in negative color gamut mapping values. The coefficients which are responsible for this are modified towards zero and the saturation loss is compensated by adding the same amount to the diagonal values. A pixel vector with a high saturation value is used as a mask for the matrix such that a column is selected where the diagonal coefficient is used as a gain factor for this color. If a pixel has a high luminance, the white point correction can be removed to avoid clipping against the ceiling. It should be noted that a pixel with a high luminance cannot have a high saturation at the same time.
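A sketch of the per-pixel trade-off described above, using the same assumed coefficient layout as before. The thresholds are taken from the illustrative values of Fig. 17, the matrix entries are placeholders, and the compensation rule ("add the removed average to the diagonal") follows the first-order approximation described in the text rather than the exact implementation.

```python
import numpy as np

def adapt_matrix(M, ld, sd, hue_thr, sat_thr, lum_thr):
    """Modify the static color gamut matrix per pixel to avoid clipping:
    - moderate saturation: average r1/r2, g1/g2, b1/b2 (drop the hue correction);
    - high saturation: move the off-diagonal color coefficients towards zero and
      add the removed average to the diagonal (drop the saturation correction);
    - high luminance: make each row sum to one via the diagonal (drop the
      white-point correction)."""
    M = M.copy()
    off = [((1, 0), (2, 0)), ((0, 1), (2, 1)), ((0, 2), (1, 2))]  # off-diagonal pairs per color column
    if sd > hue_thr:
        for (i1, j1), (i2, j2) in off:
            avg = (M[i1, j1] + M[i2, j2]) / 2
            M[i1, j1] = M[i2, j2] = avg                 # remove hue correction
    if sd > sat_thr:
        for k, ((i1, j1), (i2, j2)) in enumerate(off):
            avg = (M[i1, j1] + M[i2, j2]) / 2
            M[i1, j1] = M[i2, j2] = 0.0                 # remove saturation correction
            M[k, k] += avg                              # first-order diagonal compensation
    if ld > lum_thr:
        for k in range(3):
            M[k, k] += 1.0 - M[k].sum()                 # force the row sum to one
    return M

M0 = np.array([[ 1.15, -0.08, -0.07],
               [-0.20,  1.25, -0.05],
               [ 0.02, -0.12,  1.10]])
print(adapt_matrix(M0, ld=0.9, sd=0.3, hue_thr=0.486, sat_thr=0.624, lum_thr=0.853))
```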
To determine the hue and saturation slopes, the calculation of the parameters WLM, SSM and HSM is based on the pixel vectors generating values below 0 or above 1.
Negative clipping in the red/cyan direction can be determined as follows:
Figure imgf000020_0001
Ro = Go = Bo = 0:
Figure imgf000020_0002
The value of α can be determined for red/cyan by
Figure imgf000020_0003
Figure imgf000021_0002
The value closest to zero is taken.
The value of α can be determined for green/magenta by:
Figure imgf000021_0003
The value closest to zero is taken and run through the saturation model. The color directions green/magenta and blue/yellow relate to the same calculations. As for red/cyan, the value of α closest to zero is taken; first α is calculated, next the values are run through the saturation model, and finally the slope is calculated. The value of α can be determined for blue/yellow by:
Figure imgf000021_0001
The value closest to zero is taken and run through the saturation model.
The above-described formulas are used to determine the HSM parameter controlling the amount of hue correction in the video matrix. Starting from a grey value, the saturation is increased until negative values are obtained. The α value closest to zero is taken to calculate the slope HSM based on the following equation:
Figure imgf000021_0004
Thereafter, the SSM calculation is performed, which differs from the slope calculation above only in the matrix coefficients used. The hue component is already removed from the matrix coefficients before the saturation correction is skipped. Accordingly, the red, green and blue coefficients (r1, r2, g1, g2, b1, b2) which are responsible for the hue correction are averaged, and the α value closest to zero is used to calculate the slope SSM by means of the following equation:
Figure imgf000022_0001
The calculation of the WLM parameter is performed by determining positive clipping values in the red/cyan direction:
Figure imgf000022_0003
and Vp is calculated for blue/yellow.
Figure imgf000022_0002
Figure imgf000023_0002
The value closest to zero is taken and run through the luminance model. Accordingly, the Vp value which is closest to zero is used to reduce the value of α to zero. This is because the white point correction has to be removed from pixels with a high luminance. The value of Vp may then be used to calculate the WLM slope:
Figure imgf000023_0001
Fig. 17 shows a representation of an example of an adaptive matrix coefficient. If the luminance values are below 0.853 and the saturation values are below 0.486 (please note that these values are merely used for illustrative purposes and should not be considered as restrictive), the original color gamut matrix coefficients are used. If the saturation values are between 0.486 and 0.624, the red, green and blue coefficients are averaged, reducing the hue correction. If the saturation values are above 0.624, the red, green and blue coefficients are decreased to zero, reducing the saturation correction. At the same time, the average values are added to the diagonal values to maintain the luminance. As an illustrative example, the value 1.288 in the right bottom matrix of Fig. 17 is calculated from the sum of the original row in the left bottom matrix (1.201 − 0.231 + 0.201 = 1.171). The average value of the red column is subtracted from that (1.171 − (−0.116) = 1.288). Accordingly, the white point and saturation correction for the red component correspond approximately to a first order approximation. If the luminance value is above 0.853, the sums of the rows are made equal to one. This is preferably performed by adjusting the diagonal coefficients. Then the diagonal coefficients of the left two matrices are compared. The sum of the red row in the bottom matrix corresponds to 1.171 while the sum of the same row in the top matrix is equal to one. Then the red diagonal coefficient is calculated as 1 − (−0.231) − 0.201 = 1.030. Here, the color gamut mapping in the linear domain as shown in the left triangle is compared to the color gamut mapping in the perceptive domain as depicted in the right triangle without any adaptation of the matrix coefficients. As shown in the right triangle, the pixel colors around the white point are close to the desired colors. The saturation values as plotted in increasing steps of 10% up to 30 to 40% correspond to a good approximation. If high saturation values are present, clipping occurs for both triangles due to the absence of the adaptive processing according to the first embodiment. Here, the adaptation of the matrix coefficients according to the first embodiment has been employed. If the mapping at the blue vertex of the smaller display triangle is compared, the left triangle shows clipping against the boundaries of the triangle, while the right triangle shows a much smoother trajectory from grey to blue. Although this difference appears to be rather small, the impact on the front-of-screen performance with natural images is rather big. This is in particular true if a spatial relation between pixels is present.
The above-described principles of the invention (i.e. the adaptive color gamut mapping) may be used to compress the gamut for mobile displays. On the other hand, the gamut may be expanded for wide gamut LCD television systems. The color space for multi-primary displays can be modified. Moreover, user settings like saturation, color temperature or the like can be executed, and a picture enhancement, e.g. a blue stretch or green stretch, can be achieved.
The above described adaptive color gamut mapping can be used to enhance the color settings of displays with limited color capabilities by performing a gamut compression. Mobile LCD displays which require a minimum power dissipation provide limited colors because of the color spectrum of the backlight and the limited thickness and transmission of the color filters. Moreover, mobile LCD displays only enable a poor performance with respect to contrast and bit depth, so that the color gamut mapping may be performed without using an adaptive branch, which reduces the required hardware while introducing artefacts. PLED/OLED displays have limited colors because of the degradation of materials with optimal primary colors. Therefore, a preferred spectrum of primaries will be exchanged for a maximum lifetime. LCD front projection displays may also have limited colors due to the color spectrum of the light source as well as due to the limited thickness and transmission of the color filters. The white point of bright displays may correspond to the maximum output of the RGB sub-pixels but may not correspond to the D65 white point. In the case of very bright pixels, the white point should not be corrected, as the correction would result in a reduced luminance. However, for less bright pixels, the white point can be corrected while still maintaining a correct luminance. According to an embodiment, a switch between predefined white point settings (e.g. cool (8500K), normal (D65) and warm (5000K)) can be performed. For a gamut expansion, i.e. a wide and bright gamut, the adaptive color gamut mapping can improve the color settings of such devices. LCD-TV displays may have a wide color gamut because of the color spectrum of the backlight combined with their color filters. These displays may also comprise a white sub-pixel to enhance the brightness and efficiency. The adaptive color gamut mapping may be used to reshape the color space accordingly, supporting a wide and bright color gamut. The adaptive color gamut mapping can also be used to prevent clipping artefacts during the rendering process.
In the case of multi-primary displays, the adaptive color gamut mapping may be used to enhance the color settings of displays with wide gamut color capabilities, as such displays may comprise an extra primary color to enhance the gamut and support an efficient use of the backlight spectrum. The adaptive color gamut mapping may be used to reshape the color space supporting bright gamut colors, and to prevent clipping artefacts during the rendering process. On the other hand, user settings may be implemented by the adaptive color gamut mapping such that the personal preferences of a user can be used for the setting of the display. This may be performed by modifying the input coefficients of the sRGB input gamut to user specific coefficients which describe a modified gamut before it is used in the mapping process. The mapping process may then map the 3D input gamut to a 3D output gamut wherein the input image appears with the user preferred saturation, brightness, hue and white-point settings.
Furthermore, an image enhancement can be performed by the adaptive color gamut mapping. By means of additional adaptive processes analyzing the properties of the input pixels, and depending on their location in the 3D color space with respect to hue, saturation and brightness, these pixels are mapped to a different location with a different hue, saturation and brightness. Accordingly, the input image may appear differently on a display as, e.g., some green colors get more saturation, some skin-tone colors get a more ideal hue, bright white colors get a white point shifted towards blue and dark colors get less luminance and more saturation. The above described color gamut mapping can be used in any display device to enhance the front-of-screen performance. The adaptive color gamut mapping may also be embodied in display drivers or display modules. Alternatively or in addition, the above described adaptive color gamut mapping algorithm may be implemented in a companion chip. This is advantageous as only a limited amount of additional hardware is required to perform the respective algorithm. In addition or alternatively, the adaptive color gamut mapping algorithm may also be executed in software on a programmable processing device. Such a software algorithm may run on an ARM platform or on a media processor like Trimedia. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. In the device claim enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Furthermore, any reference signs in the claims shall not be construed as limiting the scope of the claims.


CLAIMS:
1. Video processing device comprising:
a luminance and saturation detector (LSHD) for detecting the luminance values (lum) and the saturation values (sat) of pixels of an input video signal (IN);
a white-point, saturation and hue modulator (WSH) for transforming luminance and saturation properties (lum, sat) of the pixels of the input video signal (IN) into white-point, saturation and hue correction factors (W, Wc; S, Sc; H, Hc);
a color gamut matrix generating unit (CGMG) for generating a color gamut matrix in the perception domain based on the white-point, saturation and hue correction factors (Wc, Sc, Hc) of the white-point, saturation and hue modulator (WSH);
a color gamut mapping unit (20) for multiplying the pixels of the input video signal (IN) with a color gamut matrix generated by the color gamut matrix generating unit (CGMG); and
a clipping unit (31) for clipping the results of the color gamut mapping unit (20) which are out of a predefined range.
2. Video processing device according to claim 1, wherein the luminance and saturation detector (LSHD) comprises:
a RGB squaring unit (RGB SQ) for squaring amplitudes of sub-pixels of the input video signal;
a RGB shuffler unit (RGB SH) for ranking the squared sub-pixels based on their amplitude value;
a luminance and saturation calculation unit (LSH) for calculating a value of the luminance (LD), an internal saturation (Si) and a saturation correction factor (Cs); and
a saturation correction unit (Sc) for outputting a corrected saturation value (Sd).
3. Video processing device according to claim 2, wherein the white-point, saturation and hue modulator (WSH) comprises:
a white point modulator (WPM) for determining a white-point correction factor (Wc) based on the luminance value (Ld) from the luminance and saturation calculation unit (LSH);
a saturation modulator (SM) for determining a saturation correction factor (Sc) based on the saturation value (Sd) from the luminance and saturation calculation unit (LSH); and
a hue modulator (HM) for determining a hue correction factor (Hc) based on the saturation value (Sd) from the luminance and saturation calculation unit (LSH).
4. Video processing device according to claim 1, 2 or 3, wherein the color gamut matrix generating unit (CGMG) is adapted to generate the color gamut matrix based upon measured characteristics (WRGB , SRGB , HRGB) of a display module.
5. Video processing device according to any one of the claims 1 to 4, further comprising a memory for storing display module specific parameters (WRGB , SRGB , HRGB).
6. Method of processing color image data, comprising the steps of:
detecting the luminance values (lum) and the saturation values (sat) of pixels of an input video signal (IN) by a luminance and saturation detector (LSHD);
transforming luminance and saturation properties (lum, sat) of the pixels of the input video signal (IN) into white-point, saturation and hue correction factors (W, Wc; S, Sc; H, Hc) by a white-point, saturation and hue modulator (WSH);
generating a color gamut matrix in the perception domain based on the white-point, saturation and hue correction factors (Wc, Sc, Hc) of the white-point, saturation and hue modulator (WSH) by a color gamut matrix generating unit (CGMG);
multiplying the pixels of the input video signal (IN) with a color gamut matrix generated by the color gamut matrix generating unit (CGMG) by a color gamut mapping unit (20); and
clipping the results of the color gamut mapping unit (20) which are out of a predefined range by a clipping unit (31).
7. Video display system comprising:
a video processing device according to one of the claims 1 to 5.
PCT/IB2007/054707 2006-11-30 2007-11-20 Device and method for processing color image data WO2008065575A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/516,785 US8441498B2 (en) 2006-11-30 2007-11-20 Device and method for processing color image data
EP07849188A EP2123056A1 (en) 2006-11-30 2007-11-20 Device and method for processing color image data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP06125110 2006-11-30
EP06125110.4 2006-11-30

Publications (1)

Publication Number Publication Date
WO2008065575A1 true WO2008065575A1 (en) 2008-06-05

Family

ID=39187223

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2007/054707 WO2008065575A1 (en) 2006-11-30 2007-11-20 Device and method for processing color image data

Country Status (4)

Country Link
US (1) US8441498B2 (en)
EP (1) EP2123056A1 (en)
CN (1) CN101543084A (en)
WO (1) WO2008065575A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110187735A1 (en) * 2008-08-29 2011-08-04 Sharp Kabushiki Kaisha Video display device
WO2012106122A1 (en) * 2011-01-31 2012-08-09 Malvell World Trade Ltd. Systems and methods for performing color adjustments of pixels on a color display
EP2498499A3 (en) * 2011-03-08 2016-03-23 Dolby Laboratories Licensing Corporation Interpolation of color gamut for display on target display
CN115174881A (en) * 2022-07-15 2022-10-11 深圳市火乐科技发展有限公司 Color gamut mapping method and device, projection equipment and storage medium

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4710721B2 (en) * 2006-06-05 2011-06-29 富士ゼロックス株式会社 Color conversion apparatus and color conversion program
JPWO2010067488A1 (en) * 2008-12-10 2012-05-17 パナソニック株式会社 Color correction apparatus and color correction method
US20100214282A1 (en) 2009-02-24 2010-08-26 Dolby Laboratories Licensing Corporation Apparatus for providing light source modulation in dual modulator displays
CN101873480A (en) * 2010-02-10 2010-10-27 宇龙计算机通信科技(深圳)有限公司 Color difference signal correction method and device of streaming medium live broadcast
EP2569949B1 (en) * 2010-05-13 2018-02-21 Dolby Laboratories Licensing Corporation Gamut compression for video display devices
WO2012122104A2 (en) 2011-03-09 2012-09-13 Dolby Laboratories Licensing Corporation High contrast grayscale and color displays
US9135864B2 (en) 2010-05-14 2015-09-15 Dolby Laboratories Licensing Corporation Systems and methods for accurately representing high contrast imagery on high dynamic range display systems
JP5404546B2 (en) * 2010-07-16 2014-02-05 株式会社ジャパンディスプレイ Driving method of image display device
CN102985906B (en) * 2010-07-16 2015-12-02 惠普发展公司,有限责任合伙企业 Color based on color profile adjustment display device exports
JP5140206B2 (en) * 2010-10-12 2013-02-06 パナソニック株式会社 Color signal processing device
TWI538473B (en) 2011-03-15 2016-06-11 杜比實驗室特許公司 Methods and apparatus for image data transformation
EP2518719B1 (en) * 2011-04-08 2016-05-18 Dolby Laboratories Licensing Corporation Image range expansion control methods and apparatus
SI3595281T1 (en) 2011-05-27 2022-05-31 Dolby Laboratories Licensing Corporation Scalable systems for controlling color management comprising varying levels of metadata
US8897559B2 (en) * 2011-09-21 2014-11-25 Stmicroelectronics (Grenoble 2) Sas Method, system and apparatus modify pixel color saturation level
US9024961B2 (en) 2011-12-19 2015-05-05 Dolby Laboratories Licensing Corporation Color grading apparatus and methods
KR102118309B1 (en) 2012-09-19 2020-06-03 돌비 레버러토리즈 라이쎈싱 코오포레이션 Quantum dot/remote phosphor display system improvements
CN104981861B (en) * 2013-02-14 2017-04-12 三菱电机株式会社 Signal conversion device and method
CN105009193B (en) 2013-03-08 2019-01-11 杜比实验室特许公司 Technology for the dual modulation displays converted with light
JP6288943B2 (en) 2013-05-20 2018-03-07 三星ディスプレイ株式會社Samsung Display Co.,Ltd. Video display device
JP5811228B2 (en) * 2013-06-24 2015-11-11 大日本印刷株式会社 Image processing apparatus, display apparatus, image processing method, and image processing program
KR102019679B1 (en) 2013-08-28 2019-09-10 삼성디스플레이 주식회사 Data processing apparatus, display apparatus including the same, and method for gamut mapping
JP6389728B2 (en) * 2013-10-22 2018-09-12 株式会社ジャパンディスプレイ Display device and color conversion method
WO2015148244A2 (en) * 2014-03-26 2015-10-01 Dolby Laboratories Licensing Corporation Global light compensation in a variety of displays
JP6236188B2 (en) 2014-08-21 2017-11-22 ドルビー ラボラトリーズ ライセンシング コーポレイション Dual modulation technology with light conversion
EP3119086A1 (en) * 2015-07-17 2017-01-18 Thomson Licensing Methods and devices for encoding/decoding videos
KR102534810B1 (en) 2016-03-02 2023-05-19 주식회사 디비하이텍 Apparatus and method for adjusting color image in display device
WO2017171766A1 (en) * 2016-03-31 2017-10-05 Hewlett Packard Enterprise Development Lp Identifying outlying values in matrices
US10070109B2 (en) * 2016-06-30 2018-09-04 Apple Inc. Highlight recovery in images
EP4072137A1 (en) * 2016-10-05 2022-10-12 Dolby Laboratories Licensing Corporation Source color volume information messaging
EP3340165A1 (en) * 2016-12-20 2018-06-27 Thomson Licensing Method of color gamut mapping input colors of an input ldr content into output colors forming an output hdr content
EP3367659A1 (en) * 2017-02-28 2018-08-29 Thomson Licensing Hue changing color gamut mapping
CN108335271B (en) * 2018-01-26 2022-03-18 努比亚技术有限公司 Image processing method and device and computer readable storage medium
JP6879268B2 (en) * 2018-06-18 2021-06-02 株式会社Jvcケンウッド Color correction device
US10855964B2 (en) 2018-08-29 2020-12-01 Apple Inc. Hue map generation for highlight recovery
US11100620B2 (en) 2018-09-04 2021-08-24 Apple Inc. Hue preservation post processing for highlight recovery
CN110459176A (en) * 2019-08-16 2019-11-15 合肥工业大学 A kind of gamut conversion method of displayer
CN113029363B (en) * 2019-12-24 2022-08-16 Oppo广东移动通信有限公司 Detection method, device and equipment of mixed light source and storage medium
CN111312141A (en) * 2020-02-18 2020-06-19 京东方科技集团股份有限公司 Color gamut adjusting method and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040021672A1 (en) * 2001-06-26 2004-02-05 Osamu Wada Image display system, projector, image processing method, and information recording medium
US20040170319A1 (en) * 2003-02-28 2004-09-02 Maurer Ron P. System and method of gamut mapping image data
WO2005109854A1 (en) * 2004-05-11 2005-11-17 Koninklijke Philips Electronics N.V. Method for processing color image data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7986291B2 (en) * 2005-01-24 2011-07-26 Koninklijke Philips Electronics N.V. Method of driving displays comprising a conversion from the RGB colour space to the RGBW colour space

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040021672A1 (en) * 2001-06-26 2004-02-05 Osamu Wada Image display system, projector, image processing method, and information recording medium
US20040170319A1 (en) * 2003-02-28 2004-09-02 Maurer Ron P. System and method of gamut mapping image data
WO2005109854A1 (en) * 2004-05-11 2005-11-17 Koninklijke Philips Electronics N.V. Method for processing color image data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PIERRE DE GREEF ET AL: "46.2: LifePix Image Enhancement for Mobile Displays", 2005 SID INTERNATIONAL SYMPOSIUM. BOSTON, MA, MAY 24 - 27, 2005, SID INTERNATIONAL SYMPOSIUM, SAN JOSE, CA : SID, US, 24 May 2005 (2005-05-24), pages 1488 - 1491, XP007012331 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110187735A1 (en) * 2008-08-29 2011-08-04 Sharp Kabushiki Kaisha Video display device
WO2012106122A1 (en) * 2011-01-31 2012-08-09 Malvell World Trade Ltd. Systems and methods for performing color adjustments of pixels on a color display
US8767002B2 (en) 2011-01-31 2014-07-01 Marvell World Trade Ltd. Systems and methods for performing color adjustment of pixels on a color display
US9349345B2 (en) 2011-01-31 2016-05-24 Marvell World Trade Ltd. Systems and methods for performing color adjustment of pixels on a color display
EP2498499A3 (en) * 2011-03-08 2016-03-23 Dolby Laboratories Licensing Corporation Interpolation of color gamut for display on target display
CN115174881A (en) * 2022-07-15 2022-10-11 深圳市火乐科技发展有限公司 Color gamut mapping method and device, projection equipment and storage medium
CN115174881B (en) * 2022-07-15 2024-02-13 深圳市火乐科技发展有限公司 Color gamut mapping method, device, projection equipment and storage medium

Also Published As

Publication number Publication date
CN101543084A (en) 2009-09-23
US20100020242A1 (en) 2010-01-28
EP2123056A1 (en) 2009-11-25
US8441498B2 (en) 2013-05-14

Similar Documents

Publication Publication Date Title
US8441498B2 (en) Device and method for processing color image data
JP4668986B2 (en) Color image data processing method
US10761371B2 (en) Display device
US9710890B2 (en) Joint enhancement of lightness, color and contrast of images and video
KR100989351B1 (en) Systems and methods for selective handling of out-of-gamut color conversions
US8233098B2 (en) Gamut adaptation
KR101481984B1 (en) Method and apparatus for image data transformation
US8379971B2 (en) Image gamut mapping
KR101348369B1 (en) Color conversion method and apparatus for display device
US8238654B2 (en) Skin color cognizant GMA with luminance equalization
KR101263809B1 (en) Preferential Tone Scale for Electronic Displays
CN114999363A (en) Color shift correction method, device, equipment, storage medium and program product
WO2012140551A1 (en) Generation of image signals for a display
Kim et al. Illuminant-adaptive color reproduction for mobile display
Kim New display concept for realistic reproduction of high-luminance colors
Chorin 72.2: Invited Paper: Color Processing for Wide Gamut and Multi‐Primary Displays

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200780044143.0

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07849188

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2007849188

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 12516785

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE