COLOR COORDINATE SYSTEMS AND IMAGE EDITING
Sergey N. Bezryadin
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] For the US designation, the present application is a continuation-in-part of each of the following US patent applications, which are incorporated herein by reference:
1. No. 11/321,443, filed December 28, 2005 by Sergey N. Bezryadin;
2. No. 11/322,111, filed December 28, 2005 by Sergey N. Bezryadin;
3. No. 11/377,161, filed March 16, 2006 by Sergey N. Bezryadin;
4. No. 11/376,837, filed March 16, 2006 by Sergey N. Bezryadin;
5. No. 11/377,591, filed March 16, 2006 by Sergey N. Bezryadin;
6. No. 11/494,393, filed July 26, 2006 by Sergey N. Bezryadin;
7. No. 11/432,221, filed May 10, 2006 by Sergey N. Bezryadin.
The present application also claims Paris Convention priority of the seven applications listed above.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to digital representation, processing, storage and transmission of images, including editing of polychromatic and monochromatic color images. A digital representation of an image can be stored in a storage device (e.g. a computer memory, a digital video recorder, or some other device). Such representation can be transmitted over a network, and can be used to display the image on a computer monitor, a television screen, a printer, or some other device. The image can be edited using a suitable computer program.
[0003] Color is a sensation caused by electromagnetic radiation (light) entering a human eye. The light causing the color sensation is called "color stimulus". Color depends on the radiant power and spectral composition of the color stimulus, but different stimuli can cause the same color sensation. Therefore, a large number of colors can be reproduced ("matched") by mixing just three "primary" color stimuli, e.g. a Red, a Blue and a Green. The primary stimuli can be produced by three "primary" light beams which, when mixed and reflected from an ideal diffuse surface, produce a desired color. The
color can be represented by its coordinates, which specify the intensities of the primary light beams. For example, in linear RGB color coordinate systems, a color S is represented by coordinates R, G, B which define the intensities of the respective Red, Green and Blue primary light beams needed to match the color S. If P(λ) is the radiance (i.e. the energy per unit of time per unit wavelength) of a light source generating the color S, then the RGB coordinates can be computed as:
R = ∫₀^∞ P(λ) r̄(λ) dλ
G = ∫₀^∞ P(λ) ḡ(λ) dλ    (1)
B = ∫₀^∞ P(λ) b̄(λ) dλ
where r̄(λ), ḡ(λ), b̄(λ) are "color matching functions" (CMF's). For each fixed wavelength λ, the values r̄(λ), ḡ(λ), b̄(λ) are respectively the R, G and B values needed to match the color produced by a monochromatic light of the wavelength λ of a unit radiance. The color matching functions are zero outside of the visible range of the λ values, so the integration limits in (1) can be replaced with the limits of the visible range. The integrals in (1) can be replaced with sums if the radiance P(λ) is specified as power at discrete wavelengths. Fig. 1 illustrates the color matching functions for the 1931 CIE RGB color coordinate system for a 2° field. (CIE stands for "Commission Internationale de l'Eclairage".) See D. Malacara, "Color Vision and Colorimetry: theory and applications" (2002), and Wyszecki & Stiles, "Color Science: Concepts and Methods, Quantitative Data and Formulae" (2nd Ed. 2000), both incorporated herein by reference.
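For illustration, the integrals in (1) can be approximated by the discrete sums mentioned above. A minimal sketch in Python, using made-up sample values rather than real CIE data (the function name `tristimulus` and the 10 nm step are our assumptions):

```python
# Sketch: computing a tristimulus value as a discrete sum, assuming the
# radiance P and a color matching function are sampled at uniform
# wavelength steps. The data below is illustrative, not real CIE data.

def tristimulus(power, cmf, step_nm):
    """Approximate the integral of P(lambda)*cmf(lambda) d(lambda) by a sum."""
    return sum(p * c for p, c in zip(power, cmf)) * step_nm

# Hypothetical samples at 10 nm steps over part of the visible range.
power = [0.0, 0.5, 1.0, 0.8, 0.2]   # placeholder radiance samples
r_bar = [0.0, 0.1, 0.3, 0.2, 0.0]   # placeholder matching-function samples
R = tristimulus(power, r_bar, 10.0)
```

The same sum, with the ḡ or b̄ samples in place of r̄, yields the G or B coordinate.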
[0004] The RGB system of Fig. 1 is called linear because, as shown by equations (1), the R, G, and B values are linear in P(λ). In a linear system, the intensities such as R, G, B are called "tristimulus values".
[0005] As seen from Fig. 1, the function r̄(λ) can be negative, so the R coordinate can be negative. If R is negative, this means that when the color S is mixed with |R| units of the Red primary, the resulting color matches the mixture of G units of the Green primary with B units of the Blue primary.
[0006] New linear color coordinate systems can be obtained as non-degenerate linear transformations of other systems. For example, the 1931 CIE XYZ color coordinate system for a 2° field is obtained from the CIE RGB system of Fig. 1 using a linear transformation
(X, Y, Z)ᵀ = A_RGB-XYZ × (R, G, B)ᵀ    (2)
where A_RGB-XYZ is a 3×3 matrix. This XYZ system does not correspond to real, physical primaries. The color matching functions x̄(λ), ȳ(λ), z̄(λ) for this XYZ system are shown in Fig. 2. These color matching functions are defined by the same matrix A_RGB-XYZ:
(x̄(λ), ȳ(λ), z̄(λ))ᵀ = A_RGB-XYZ × (r̄(λ), ḡ(λ), b̄(λ))ᵀ    (3)
The tristimulus values X, Y, Z can be computed from the color matching functions in the usual way:
X = ∫₀^∞ P(λ) x̄(λ) dλ
Y = ∫₀^∞ P(λ) ȳ(λ) dλ    (4)
Z = ∫₀^∞ P(λ) z̄(λ) dλ
[0007] There are also non-linear color coordinate systems. One example is a nonlinear sRGB system standardized by International Electrotechnical Commission (IEC) as IEC 61966-2-1. The sRGB coordinates can be converted to the XYZ coordinates (4) or the CIE RGB coordinates (1). Another example is HSB (Hue, Saturation, Brightness). The HSB system is based on sRGB. In the HSB system, the colors can be visualized as points of a vertical cylinder. The Hue coordinate is an angle on the cylinder's horizontal circular cross section. The pure Red color corresponds to Hue=0°; the pure Green to Hue=120°; the pure Blue to Hue=240°. The angles between 0° and 120° correspond to mixtures of the Red and the Green; the angles between 120° and 240° correspond to mixtures of the Green and the Blue; the angles between 240° and 360° correspond to mixtures of the Red and the Blue. The radial distance from the center indicates the color's
Saturation, i.e. the amount of White (White means here that R=G=B). At the circumference, the Saturation is maximal, which means that the White amount is 0 (this means that at least one of the R, G, and B coordinates is 0). At the center, the Saturation is 0 because the center represents the White color (R=G=B). The Brightness is measured along the vertical axis of the cylinder, and is defined as max(R,G,B).
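The HSB coordinates described above correspond to the HSV model implemented by Python's standard `colorsys` module, which can serve as a quick illustration (note that `colorsys` expresses Hue as a fraction of a full turn rather than in degrees):

```python
# Sketch: the HSB (a.k.a. HSV) coordinates described above, via the
# standard-library colorsys module. Brightness is max(R,G,B) and hue is
# returned as a fraction of a full turn, so 120 degrees comes out as 1/3.

import colorsys

h, s, b = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)   # pure Green
hue_degrees = h * 360.0
```

For pure Green this yields Hue=120°, Saturation=1 (no White), and Brightness=1, matching the description above.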
[0008] Different color coordinate systems are suitable for different purposes. For example, the sRGB system is convenient for rendering color on certain types of monitors which recognize the sRGB coordinates and automatically convert these coordinates into color. The HSB system is convenient for some color editing operations including brightness adjustments.
[0009] Brightness can be thought of as a degree of intensity of a color stimulus. Brightness corresponds to our sensation of an object being "bright" or "dim". Brightness has been represented as the Y value of the XYZ system of Fig. 2, or as the maximum of the R, G and B coordinates of the sRGB coordinate system. Other representations also exist. Hue is the attribute of a color perception denoted by blue, green, yellow, red, purple, and so on. See the Wyszecki & Stiles book cited above, page 487. Saturation can be thought of as a measure of the white amount in the color.
[0010] Contrast can be thought of as the brightness difference between the brightest and the dimmest portions of the image or part of the image.
[0011] Exemplary contrast editing techniques are described in William K. Pratt, "DIGITAL IMAGE PROCESSING" (3rd ed. 2001), pages 243-252, incorporated herein by reference.
[0012] Sharpness relates to object boundary definition. The image is sharp if the object boundaries are well defined. The image is blurry if it is not sharp.
[0013] It is desirable to obtain color coordinate systems which facilitate brightness editing and other types of image editing. For example, it may be desirable to highlight an image area, i.e. to increase its brightness, without making large changes to other image areas. Conversely, it may be desirable to shadow (i.e. to dim) a selected image area without making large changes in other image areas. It is also desirable to obtain color coordinate systems which facilitate transforming a polychromatic image into a monochromatic image (for example, in order to display a polychromatic image using a
monochromatic printer or monitor, or to achieve a desired visual effect). It may also be desirable to change the hue or saturation, for example to change the hue of a monochromatic image.
SUMMARY
[0014] This section summarizes some features of the invention. The invention is not limited to these features. The invention is defined by the appended claims, which are incorporated into this section by reference.
[0015] Some embodiments of the present invention provide novel color coordinate systems and novel image editing techniques suitable for novel and/or conventional color coordinate systems.
[0016] In some embodiments of the present invention, the contrast is edited as follows. Let us suppose that a digital image consists of a number of pixels. Let us enumerate these pixels as p_1, p_2, ... (The invention is not limited to any pixel enumeration.) Let B(p_i) denote the brightness at the pixel p_i (i=1, 2, ...). The contrast editing can be performed by changing the brightness B(p_i) to the value
B*(p_i) = Bavg(p_i) × [B(p_i)/Bavg(p_i)]^ε
where ε is a positive constant other than 1, and Bavg(p_i) is a weighted average (e.g. the mean) of the brightness values B in an image region R(p_i) containing the pixel p_i. Of note, this equation means that:
B*(p_i)/Bavg(p_i) = [B(p_i)/Bavg(p_i)]^ε
[0017] The invention is not limited to such embodiments. For example, the brightness can be edited according to the equation:
B*(p_i) = Bavg(p_i) × f(B(p_i)/Bavg(p_i))
where f is a predefined strictly increasing non-identity function, and Bavg(p_i) is a function of the B values of image portions in an image region R(p_i) containing the portion p_i.
[0018] Some embodiments use a color coordinate system in which the brightness B is one of the coordinates. In some embodiments, the brightness B can be changed without
changing the chromaticity coordinates. For a linear color coordinate system with coordinates T1, T2, T3, the chromaticity coordinates are defined as:
T1/(T1+T2+T3), T2/(T1+T2+T3), T3/(T1+T2+T3).
In some embodiments, a non-linear color coordinate system is used in which the brightness B is defined by a square root of a quadratic polynomial in tristimulus values,
e.g. by the value
B = √(T1² + T2² + T3²)
where T1, T2, T3 are tristimulus values. In some embodiments, B may or may not be one of the coordinates, but one of the coordinates is defined by B alone or in combination with the sign of one or more of T1, T2, T3.
[0019] In some embodiments, the color coordinate system has the following coordinates:
B = √(T1² + T2² + T3²)
S2 = T2/B
S3 = T3/B
In this coordinate system, if the B coordinate is changed, e.g. multiplied by some number k, and S2 and S3 are unchanged, the color modification corresponds to multiplying the tristimulus values by k. Such color modification does not change the chromaticity coordinates
T1/(T1+T2+T3), T2/(T1+T2+T3), T3/(T1+T2+T3).
Therefore, the (B, S2, S3) coordinate system facilitates color editing when it is desired not to change the chromaticity coordinates (so that no color shift would occur). Of note, there is a known color coordinate system xyY which also allows changing only one coordinate Y without changing the chromaticity coordinates. The xyY system is defined from the XYZ coordinates of Fig. 2 as follows: the Y coordinate of the xyY system is the same as the Y coordinate of the XYZ system; x=X/(X+Y+Z); y=Y/(X+Y+Z).
In the xyY system, if Y is changed but x and y remain unchanged, then the chromaticity coordinates are unchanged. The xyY system differs from the (B, S2, S3) system in that the Y coordinate is a linear coordinate and is a tristimulus value, while the B coordinate is a non-linear function of tristimulus values (and of the power distribution P(λ)).
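A minimal sketch of the (B, S2, S3) coordinates in Python (the helper names are ours; recovering T1 from the coordinates assumes T1 ≥ 0):

```python
# Sketch of the (B, S2, S3) coordinates for tristimulus values (T1, T2, T3).
# Scaling B by k while keeping S2, S3 fixed scales all tristimulus values
# by k, which leaves the chromaticity coordinates Ti/(T1+T2+T3) unchanged.

import math

def to_bs2s3(t1, t2, t3):
    b = math.sqrt(t1 * t1 + t2 * t2 + t3 * t3)
    return b, t2 / b, t3 / b

def from_bs2s3(b, s2, s3):
    # T1 is recovered only up to sign; we assume T1 >= 0 here.
    t1 = math.sqrt(max(b * b - (s2 * b) ** 2 - (s3 * b) ** 2, 0.0))
    return t1, s2 * b, s3 * b

b, s2, s3 = to_bs2s3(3.0, 4.0, 12.0)          # B = 13 for this triple
t1, t2, t3 = from_bs2s3(2.0 * b, s2, s3)      # doubling B doubles T1, T2, T3
```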
[0020] Linear transformations of such color coordinate systems can be used to obtain other novel color coordinate systems.
[0021] The techniques described above can be used to adjust either the global contrast, i.e. when the region R(p_i) is the whole image, or the local contrast, when the region R(p_i) is only a part of the image. In some embodiments, the region R(p_i) contains 10% to 30% of the image in the local contrast adjustments, and other percentages are possible.
[0022] The inventor has observed that the same techniques can be used to change the image sharpness if the region R(p_i) is small, e.g. 1% of the image or less. In some embodiments, the region R(p_i) is contained in a rectangle of at most 31 pixels by 31 pixels, with the center at pixel p_i. Even smaller outer rectangles can be used, e.g. 21×21 pixels, or 11×11 pixels, and other rectangles. The image may contain thousands, millions, or some other number of pixels.
[0023] Also, the inventor has observed that it is sometimes desirable to change the global dynamic range of an image without making large changes to the local dynamic range. A dynamic range is the ratio of the maximum brightness Bmax to the minimum brightness Bmin. (The brightness can be defined in a number of ways some of which are discussed above and below.) For the global dynamic range, the maximum and minimum brightness values are computed over the whole image (or a large portion of the image). For a local dynamic range, the maximum and minimum brightness values are computed over a small area. Below, "dynamic range" of an image is presumed global unless stated to the contrary. Suppose, for example, that an image has a (global) dynamic range
of 3000, and the image must be displayed on a monitor capable of a maximum dynamic range of only 300. This could be achieved simply by dividing the brightness B(p_i) at each pixel p_i by 10. However, some portions of the image may be adequately displayable without any change to their brightness, and dividing the brightness by 10 at these portions may result in image deterioration. For example, suppose that the brighter image portions can be displayed on the monitor without any brightness change, while the dimmer portions are outside of the monitor's range. Dividing the brightness by 10 would not improve the display of the dimmer portions. It could also cause deterioration of the brighter portions. It may be desirable to brighten (highlight) the dimmer portions without making large changes to the brighter portions so that the global
dynamic range DR* (after editing) would become 300. The dynamic range of small image areas ("local dynamic range") should preferably undergo little modification.
[0024] In some embodiments, the image editing is performed by changing the brightness value B(p_i) at each pixel p_i to a value B*(p_i) such that:
B*(p_i) = f(B(p_i), Bavg(p_i))    (4A)
where Bavg(p_i) is some average brightness value (e.g. the mean brightness) in an image region R(p_i) containing the pixel p_i, and f is a predefined function such that for all i, the function f(B(p_i), y) is strictly monotonic in y for each B(p_i), and for each Bavg(p_i), the function f(x, Bavg(p_i)) is strictly increasing in x. For example, in some embodiments,
B*(p_i) = B(p_i) × [B0/Bavg(p_i)]^ε    (4B)
where B0 and ε are some constants.
[0025] Consider again the example above, with dimmer areas outside of the monitor's range. It is desirable to "highlight" the dimmer image areas, i.e. to increase their brightness, so that the global dynamic range would become 300, but the brightness of the bright image areas should not change much. In some embodiments, this is accomplished via the image editing according to (4B). The value B0 is set to the maximum brightness value Bmax over the initial image, and ε is set to a positive value, e.g. 1/2. When the average brightness Bavg(p_i) is close to the maximum brightness B0, the brightness does not change much (percentage-wise), i.e. B*(p_i)/B(p_i) is close to 1. Therefore, the brighter image areas are almost unchanged. When the value Bavg(p_i) is well below the maximum B0 (i.e. in the dimmer areas), the coefficient (B0/Bavg(p_i))^ε is large, so the image is significantly brightened (highlighted) in these areas. In some embodiments, the local dynamic range is almost unchanged. Indeed, for two pixels p_i and p_j, the equation (4B) provides:
B*(p_i)/B*(p_j) = [B(p_i)/B(p_j)] × (Bavg(p_j)/Bavg(p_i))^ε    (4C)
If p_i and p_j are near each other, then Bavg(p_i) and Bavg(p_j) are likely to be close to each other. Hence, the coefficient (Bavg(p_j)/Bavg(p_i))^ε is close to 1, so the maximum of the values B(p_i)/B(p_j) over a small area will not change much. (The local dynamic range depends of course on how the "small area" is defined. In some embodiments, the area is defined so that the average brightness Bavg does not change much in the area, but other definitions are also possible. The invention is not limited to embodiments with small changes of the local dynamic range.)
[0026] If it is desirable to reduce the brightness of the brighter areas while leaving the
dimmer areas unchanged or almost unchanged, one can select B0 to be the minimum brightness. To reduce the brightness of the dimmer areas while leaving the brighter areas almost unchanged, one can select B0 to be the maximum brightness and ε to be negative. Other options are also possible. In many embodiments, the global dynamic range is decreased if ε is positive, and increased if ε is negative. Of note, when ε is negative, equation (4B) can be written as:
B*(p_i) = B(p_i) × [Bavg(p_i)/B0]^|ε|    (4D)
where |ε| is the absolute value of ε.
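A sketch of the highlighting rule (4B) in Python; the one-dimensional brightness list and the uniform averaging window for Bavg are simplifications for illustration, and the function name is ours:

```python
# Sketch of rule (4B): B* = B * (B0 / Bavg)**eps, applied per pixel to a
# 1-D list of brightness values. B0 is the maximum brightness, and Bavg
# is a simple sliding-window mean (a simplification for this sketch).

def highlight(brightness, radius, eps):
    b0 = max(brightness)                       # B0 = maximum brightness
    n = len(brightness)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        bavg = sum(brightness[lo:hi]) / (hi - lo)
        out.append(brightness[i] * (b0 / bavg) ** eps)
    return out

edited = highlight([1.0, 1.0, 100.0, 100.0], radius=0, eps=0.5)
```

With `radius=0` the window degenerates to the pixel itself, so this example simply compresses the global dynamic range from 100 to 10: the dim pixels are lifted to 10 while the bright pixels stay at 100.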
[0027] Color editing (4B)-(4D) can be combined with other types of editing performed in separate steps or simultaneously. For example, highlighting of an image area can be combined with brightening the whole image (multiplying the brightness by a constant k>1) via the following transformation:
B*(p_i) = k × B(p_i) × [B0/Bavg(p_i)]^ε
[0028] To convert a color image to a monochromatic image (i.e. a polychromatic image to a monochromatic image) in the coordinate system (B, S2, S3), or to change the color (e.g. the hue) of a monochromatic image, the coordinates S2 and S3 can be set to predefined values representing the desired color (e.g. white for a black-and-white monochromatic image), and the value B can be left unchanged for each pixel. As a result, the brightness will be unchanged. Alternatively, the B coordinate can be processed to change the image brightness or contrast or achieve some other brightness-related effect.
[0029] Another embodiment uses a color coordinate system BCH in which the B coordinate is defined as above (e.g. by the value B = √(T1² + T2² + T3²), or the value B and the sign of one or more of T1, T2, T3). The other coordinates C and H are such that:
cos C = T1/B
tan H = T3/T2
In some embodiments, B represents the brightness, C represents the chroma (saturation), and H represents the hue. A color-to-monochromatic image conversion, or changing the color of a monochromatic image, can be achieved by setting C and H to predefined values representing the desired color. The B coordinate can be unchanged, or can be changed as needed for brightness-related processing.
[0030] Linear transformations of such color coordinate systems can be used to obtain
other color coordinate systems with easy polychromatic to monochromatic conversion.
[0031] The invention is not limited to the features and advantages described above. The invention is not limited to editing an image to fit it into a dynamic range of a monitor, a printer, or some other display device. For example, the image editing may be performed to improve the image quality or for any other artistic, esthetic, or other purposes. Other features are described below. The invention is defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Figs. 1 and 2 are graphs of color matching functions for prior art color coordinate systems.
[0033] Fig. 3 illustrates some color coordinate systems according to some embodiments of the present invention.
[0034] Figs. 4 and 5 are block diagrams illustrating color editing according to some embodiments of the present invention.
[0035] Fig. 6 illustrates an image edited according to some embodiments of the present invention.
[0036] Fig. 7 illustrates an image extension at the image boundaries for image editing according to some embodiments of the present invention.
[0037] Figs. 8, 9 are brightness graphs illustrating contrast editing according to some embodiments of the present invention.
[0038] Fig. 10 is a flowchart of an image editing method according to some embodiments of the present invention.
[0039] Fig. 11 is a block diagram illustrating color editing according to some embodiments of the present invention.
[0040] Figs. 12-14 are brightness graphs illustrating image editing according to some embodiments of the present invention.
[0041] Figs. 15, 16 illustrate image values transformed via image editing according to
some embodiments of the present invention.
[0042] Figs. 17 and 18 are block diagrams illustrating color editing according to some embodiments of the present invention.
DESCRIPTION OF SOME EMBODIMENTS
[0043] The embodiments described in this section illustrate but do not limit the invention. The invention is defined by the appended claims.
[0044] Some embodiments of the present invention use color coordinate systems Bef and BCH which can be defined, for example, as follows. First, a linear color coordinate system DEF is defined as a linear transformation of the 1931 CIE XYZ color coordinate system of Fig. 2:
(D, E, F)ᵀ = A_XYZ-DEF × (X, Y, Z)ᵀ    (5)
where:
             |  0.205306   0.712507   0.467031 |
A_XYZ-DEF =  |  1.853667  -1.279659  -0.442859 |    (6)
             | -0.365451   1.011998  -0.610425 |
It has been found that for many computations, adequate results are achieved if the elements of matrix A_XYZ-DEF are rounded to four digits or fewer after the decimal point, i.e. the matrix elements can be computed with an error Err<0.00005. Larger errors can also be tolerated in some embodiments. The DEF coordinate system corresponds to color matching functions d̄(λ), ē(λ), f̄(λ) which can be obtained from x̄(λ), ȳ(λ), z̄(λ) using the same matrix A_XYZ-DEF:
(d̄(λ), ē(λ), f̄(λ))ᵀ = A_XYZ-DEF × (x̄(λ), ȳ(λ), z̄(λ))ᵀ    (7)
By definition of the color matching functions and the tristimulus values,
D = ∫₀^∞ P(λ) d̄(λ) dλ
E = ∫₀^∞ P(λ) ē(λ) dλ    (8)
F = ∫₀^∞ P(λ) f̄(λ) dλ
As explained above, the integration can be performed over the visible range only.
[0045] As seen in Fig. 2, the color matching functions x̄(λ), ȳ(λ), z̄(λ) are never negative. It follows from equations (4) that the X, Y, Z values are never negative. Since the first row of matrix A_XYZ-DEF has only positive coefficients, the function d̄(λ) is never negative, and the D value is also never negative.
[0046] When D>0 and E=F=O, the color is white or a shade of gray. Such colors coincide, up to a constant multiple, with the CIE D65 white color standard.
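For illustration, applying the matrix (6) to the XYZ coordinates of a D65-like white point should yield E and F near zero, consistent with the observation above (the helper name `xyz_to_def` and the rounded white-point values are our assumptions):

```python
# Sketch: applying the XYZ-to-DEF matrix (6) to an XYZ triple. For white
# and gray colors (aligned with the D65 standard), E and F come out near
# zero while D stays positive.

A_XYZ_DEF = [
    [ 0.205306,  0.712507,  0.467031],
    [ 1.853667, -1.279659, -0.442859],
    [-0.365451,  1.011998, -0.610425],
]

def xyz_to_def(x, y, z):
    return tuple(row[0] * x + row[1] * y + row[2] * z for row in A_XYZ_DEF)

# Approximate XYZ coordinates of the D65 white point.
d, e, f = xyz_to_def(0.9505, 1.0, 1.0890)
```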
[0047] If a color is produced by a monochromatic radiation with
(this is a red color), then F=0 and E>0.
[0048] The color matching functions d̄(λ), ē(λ), f̄(λ) form an orthonormal system in the function space L² on [0,∞) (or on any interval containing the visible range of the λ values if the color matching functions are zero outside of this range), that is:
∫₀^∞ d̄(λ)ē(λ) dλ = ∫₀^∞ d̄(λ)f̄(λ) dλ = ∫₀^∞ ē(λ)f̄(λ) dλ = 0
∫₀^∞ [d̄(λ)]² dλ = ∫₀^∞ [ē(λ)]² dλ = ∫₀^∞ [f̄(λ)]² dλ = K    (9)
where K is a positive constant defined by the measurement units for the wavelength λ and the radiance P(λ). The units can be chosen so that K=1.
[0049] The integrals in (9) can be replaced with sums if the CMF's are defined at discrete λ values, i.e.:
Σλ d̄(λ)ē(λ) = Σλ d̄(λ)f̄(λ) = Σλ ē(λ)f̄(λ) = 0
Σλ [d̄(λ)]² = Σλ [ē(λ)]² = Σλ [f̄(λ)]² = K    (10)
where the sums are taken over a discrete set of the λ values. The constant K can be different than in (9). Color matching functions will be called orthonormal herein if they satisfy the equations (9) or (10).
[0050] If S1 and S2 are two colors with DEF coordinates (D1,E1,F1) and (D2,E2,F2), a dot product of these colors can be defined as follows:
<S1,S2> = D1×D2 + E1×E2 + F1×F2    (11)
Thus, the DEF coordinate system can be thought of as a Cartesian coordinate system having mutually orthogonal axes D, E, F (Fig. 3), with the same measurement unit for each of these axes.
[0051] The dot product (11) does not depend on the color coordinate system as long as the color coordinate system is orthonormal in the sense of equations (9) or (10) and its CMF's are linear combinations of x̄(λ), ȳ(λ), z̄(λ) (and hence of d̄(λ), ē(λ), f̄(λ)). More particularly, let T1, T2, T3 be tristimulus values in a color coordinate system whose CMF's T̄1, T̄2, T̄3 belong to the linear span Span(d̄(λ), ē(λ), f̄(λ)) and satisfy the conditions (9) or (10). For the case of equations (9), this means that:
∫₀^∞ T̄1(λ)T̄2(λ) dλ = ∫₀^∞ T̄1(λ)T̄3(λ) dλ = ∫₀^∞ T̄2(λ)T̄3(λ) dλ = 0
∫₀^∞ [T̄1(λ)]² dλ = ∫₀^∞ [T̄2(λ)]² dλ = ∫₀^∞ [T̄3(λ)]² dλ = K
with the same constant K as in (9). The discrete case (10) is similar. Suppose the colors S1, S2 have the T1T2T3 coordinates (T1.1, T2.1, T3.1) and (T1.2, T2.2, T3.2) respectively. Then the dot product (11) is the same as
<S1,S2> = T1.1×T1.2 + T2.1×T2.2 + T3.1×T3.2    (12)
[0052] The brightness B of a color S can be represented as the length (the norm) of the vector S:
B = ||S|| = √(<S,S>) = √(D² + E² + F²)    (13)
The Bef color coordinate system is defined as follows:
B = √(D² + E² + F²)    (14)
e = E/B
f = F/B
If B=0 (absolute black color), then e and f can be left undefined or can be defined in any way, e.g. as zeroes.
[0053] Since D is never negative, the D, E, F values can be determined from the B, e, f values as follows:
D = √(B² − (e×B)² − (f×B)²) = B√(1 − e² − f²)    (15)
E = e×B
F = f×B
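A sketch of the conversions (14)-(15) in Python (the helper names are ours; D is taken non-negative as noted above):

```python
# Sketch of the Bef <-> DEF conversions of equations (14) and (15).

import math

def def_to_bef(d, e_, f_):
    b = math.sqrt(d * d + e_ * e_ + f_ * f_)
    if b == 0.0:                       # absolute black: e, f undefined;
        return 0.0, 0.0, 0.0           # define them as zeroes here
    return b, e_ / b, f_ / b

def bef_to_def(b, e, f):
    # D is never negative, so the positive square root is the right branch.
    d = b * math.sqrt(max(1.0 - e * e - f * f, 0.0))
    return d, e * b, f * b

b, e, f = def_to_bef(2.0, 1.0, 2.0)    # B = 3 for this triple
d, e_, f_ = bef_to_def(b, e, f)        # round-trips to (2, 1, 2)
```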
[0054] The Bef system is convenient for brightness editing because the brightness can be changed by changing the B coordinate and leaving e and f unchanged. Thus, if it is desired to change the brightness from some value B to a value k×B for some factor k, the new Bef color coordinates B*, e*, f* can be computed as:
B* = k×B    (16)
e* = e
f* = f
Here k>1 if the brightness is to be increased, and k<1 if the brightness is to be decreased. The transformation (16) corresponds to multiplying the D, E and F values by k:
D* = k×D    (17)
E* = k×E
F* = k×F
This transformation in turn corresponds to multiplying the color's radiance P(λ) by k (see equations (4)). The radiance multiplication by k is a good model for the intuitive understanding of brightness as a measure of the intensity of the color stimulus. Also, there is no color shift in the transformation (16), i.e. the color's chromatic perception does not change.
[0055] The brightness can be changed over the whole image or over a part of the image as desired. The brightness transformation can be performed using computer equipment known in the art. For example, in Fig. 4, a computer processor 410 receives the color coordinates B, e, f (from a memory, a network, or some other source) and also receives the factor k (which can be defined by a user via some input device, e.g. a keyboard or a mouse). The processor outputs the k×B, e, f coordinates. In some embodiments, this color editing is controlled by a user who watches the color on a monitor 420 before and after editing. To display the color on monitor 420, the processor 410 may have to convert the color from the Bef coordinate system to some other coordinate system suitable for the monitor 420, e.g. sRGB or some other system as defined by the monitor specification. The color conversion can be performed by converting the color between Bef and DEF as specified by equations (14) and (15), and between DEF and the XYZ system of Fig. 2 by equations (5) and (6). The color
conversion between the XYZ coordinate system and the sRGB can be performed using known techniques.
[0056] Transformation (16) is a convenient way to perform expocorrection in a digital photographic camera (processor 410 can be part of the camera), or to obtain a result equivalent to expocorrection when editing a digital image.
[0057] Another color coordinate system that facilitates the brightness editing is the spherical coordinate system for the DEF space. This coordinate system BCH (Brightness, Chroma, Hue) is defined as follows (see also Fig. 3):
B = √(D² + E² + F²)    (18)
C ("chroma") is the angle between the color S and the D axis;
H ("hue") is the angle between (i) the orthogonal projection S_EF of the vector S on the EF plane and (ii) the E axis.
[0058] The term "chroma" has been used to represent a degree of saturation (see Malacara's book cited above). Recall that in the HSB system, the Saturation coordinate of a color represents the white amount in the color. In the BCH system, the C coordinate of a color is a good representation of saturation (and hence of chroma) because the C coordinate represents how much the color deviates from the white color D65 represented by the D axis (E=F=0).
[0059] The H coordinate of the BCH system represents the angle between the projection S_EF and the red color represented by the E axis, and thus the H coordinate is a good representation of hue.
[0060] Transformation from BCH to DEF can be performed as follows:
D = B×cos C    (19)
E = B×sin C×cos H
F = B×sin C×sin H
[0061] Transformation from DEF to BCH can be performed as follows. The B coordinate can be computed as in (18). The C and H computations depend on the range of these angles. Any suitable ranges can be chosen. In some embodiments, the angle C is in the range [0, π/2], and hence
C = cos⁻¹(D/B)    (20)
In some embodiments, the angle H can be computed from the relationship:
tan H = F/E    (21A)
or
tan H = f/e    (21B)
where e and f are the Bef coordinates (14). For example, any of the following relations can be used:
H = tan⁻¹(F/E) + α    (22)
H = tan⁻¹(f/e) + α
where α depends on the range of H. In some embodiments, the angle H is measured from the positive direction of the E axis, and is in the range from 0 to 2π or −π to π. In some embodiments, one of the following computations is used:
H = arctg(E, F)    (23)
H = arctg(e, f)
where arctg is computed as shown in pseudocode in Appendix 1 at the end of this section before the claims.
[0062] Transformation from BCH to Bef can be performed as follows:
B = B    (24)
e = sin C×cos H
f = sin C×sin H
Transformation from Bef to BCH can be performed as follows:
B = B    (25)
C = cos⁻¹(√(1 − e² − f²)), or C = sin⁻¹(√(e² + f²))
H = tan⁻¹(f/e) + α (see (22)), or H = arctg(e, f) (see Appendix 1).
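A sketch of the BCH conversions in Python, using `math.atan2` in place of the arctg routine of Appendix 1 (an assumption about its behavior), so that H lands in (−π, π]:

```python
# Sketch: BCH -> DEF via equation (19), and DEF -> BCH via (18), (20)
# and an atan2-based hue (standing in for the arctg of Appendix 1).

import math

def bch_to_def(b, c, h):
    return (b * math.cos(c),
            b * math.sin(c) * math.cos(h),
            b * math.sin(c) * math.sin(h))

def def_to_bch(d, e, f):
    b = math.sqrt(d * d + e * e + f * f)
    c = math.acos(d / b)               # C in [0, pi/2] since D >= 0
    h = math.atan2(f, e)               # H in (-pi, pi]
    return b, c, h

d, e, f = bch_to_def(2.0, math.pi / 3, math.pi / 4)
b, c, h = def_to_bch(d, e, f)          # round-trips to (2, pi/3, pi/4)
```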
[0063] As with Bef, the BCH system has the property that a change in the B coordinate without changing C and H corresponds to multiplying the tristimulus values D, E, F by some constant k (see equations (17)). The brightness editing is therefore simple to perform. In Fig. 5, the brightness editing is performed as in Fig. 4, with the C and H coordinates used instead of e and f.
[0064] The Bef and BCH systems are also suitable for contrast editing. In some embodiments, the contrast is enhanced by increasing the brightness difference between brighter and dimmer image portions. For the sake of illustration, suppose the image consists of a two-dimensional set of pixels p_i (i = 1, 2, ...) with coordinates (x_i, y_i), as shown in Fig. 6 for an image 610. Let B(p_i) denote the brightness value (the B coordinate) at the pixel p_i, and B*(p_i) denote the modified brightness to be computed to edit the contrast. Let Bavg(p_i) be some average brightness value, for example, the mean of the brightness values in some region R(p_i) containing the pixel p_i:
Bavg(p_i) = (1/N(p_i)) × Σ_{p∈R(p_i)} B(p)    (26)
where N(p_i) is the number of the pixels in the region R(p_i). In the Bef system, for each pixel p_i, the new coordinates B*(p_i), e*(p_i), f*(p_i) for the contrast adjustment are computed from the original coordinates B(p_i), e(p_i), f(p_i) as follows:
B*(p_i) = Bavg(p_i) × [B(p_i)/Bavg(p_i)]^ε    (27.1)
e*(p_i) = e(p_i)    (27.2)
f*(p_i) = f(p_i)    (27.3)
Here, ε is some positive constant other than 1. Of note, the equation (27.1) implies that:
B*(p_i)/Bavg(p_i) = [B(p_i)/Bavg(p_i)]^ε    (28)
Thus, if ε > 1, the contrast increases because the brightness range is increased over the image. If ε < 1, the contrast decreases.
[0065] The contrast adjustment can be performed over pixels p_i of a part of the image rather than the entire image. Also, different ε values can be used for different pixels p_i. Alternatively, the same ε value can be used for all the pixels.
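A sketch of the contrast rule (27.1) in Python on a one-dimensional brightness list, with a simple sliding-window mean standing in for Bavg (the function name is ours):

```python
# Sketch of rule (27.1): B* = Bavg * (B / Bavg)**eps. With eps > 1, pixel
# brightness is pushed away from the local mean (contrast increases);
# with eps < 1 it is pulled toward the mean (contrast decreases).

def adjust_contrast(brightness, radius, eps):
    n = len(brightness)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        bavg = sum(brightness[lo:hi]) / (hi - lo)
        out.append(bavg * (brightness[i] / bavg) ** eps)
    return out

# A radius covering the whole image gives a global contrast adjustment:
# here Bavg = 2.5 everywhere, and eps = 2 widens the range [1, 4] to
# [0.4, 6.4].
edited = adjust_contrast([1.0, 1.0, 4.0, 4.0], radius=3, eps=2.0)
```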
[0066] Similarly, in the BCH system, the B coordinate is computed as in (27.1):
B*(p_i) = Bavg(p_i) × [B(p_i)/Bavg(p_i)]^ε    (29.1)
and the C and H coordinates are unchanged:
C*(p_i) = C(p_i)    (29.2)
H*(p_i) = H(p_i)    (29.3)
[0067] In some embodiments, the region R(p_i) is the entire image. Thus, the value Bavg(p_i) is constant, and can be computed only once for all the pixels p_i. In other embodiments, different regions R(p_i) are used for different pixels. In Fig. 6, the region R(p_i) is a square centered at the pixel p_i and having a side 5 pixels long (the length measured in pixels can be different along the x and y axes if a different pixel density is present at the two axes). In some embodiments, the region R(p_i) is a non-square rectangle centered at p_i, or a circle or an ellipse centered at p_i. In some embodiments, the regions R(p_i) have the same size and the same geometry for different pixels p_i, except possibly at the image boundary.
[0068] In some embodiments, the regions R(p_i) corresponding to pixels p_i at the image boundary have fewer pixels than the regions corresponding to inner pixels p_i. In other embodiments, the image is extended (or just the brightness values are extended) to allow the regions R(p_i) to have the same size for all the pixels p_i. Different extensions are possible, and one extension is shown in Fig. 7 for the square regions R(p_i) of Fig. 6. The image 610 has n rows (y=0,...,n−1) and m columns (x=0,...,m−1). The image is extended as follows. First, in each row y, the brightness values are extended by two pixels to the left (x=−1, −2) and two pixels to the right (x=m, m+1). The brightness values are extended by reflection around the original image boundaries, i.e. around the columns x=0 and x=m−1. For example, B(−1,0)=B(1,0), B(m,0)=B(m−2,0), and so on. Then in each column (including the extension columns x=−2, −1, m, m+1), the brightness values are extended by two pixels up and two pixels down using reflection. Fig. 7 shows the pixel coordinates for each pixel and the brightness value for each extension pixel.
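The two-pixel reflective extension described above can be sketched for a single row as follows (the helper name is ours):

```python
# Sketch of the reflective extension: a row of m brightness values gains
# r pixels on each side, mirrored about the first and last samples, e.g.
# B(-1) = B(1), B(-2) = B(2), B(m) = B(m-2), B(m+1) = B(m-3).

def extend_row(row, r=2):
    left = [row[k] for k in range(r, 0, -1)]      # B(-1)=B(1), B(-2)=B(2)
    right = [row[-2 - k] for k in range(r)]       # B(m)=B(m-2), B(m+1)=B(m-3)
    return left + row + right

extended = extend_row([10, 20, 30, 40])
```

The full two-dimensional extension applies the same reflection first to every row and then to every column of the row-extended image.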
[0069] Efficient algorithms exist for calculating the mean values Bavg(pi) for a plurality of adjacent pixels pi for rectangular regions R(pi). One such algorithm is illustrated in pseudocode in Appendix 2 at the end of this section.
[0070] Figs. 8 and 9 are brightness graphs illustrating the brightness before and after editing near a color boundary for one example. For simplicity, the image is assumed to be one-dimensional. Each pixel p has a single coordinate x. The coordinate x is the coordinate of some predetermined point in the pixel, e.g. the left edge of the pixel (the x coordinates are discrete values). The color boundary occurs at some pixel x=x0. The original brightness B(x)=a for pixels x<x0, B(x)=b for x>x0, where a<b. Each region R(pi)=R(x) is an interval [x−R, x+R] of some constant radius R with the center at the point x.
[0071] In Fig. 8, ε>1. If x>x0+R or x<x0−R, the average Bavg(x)=B(x). Hence (see equation (28)), B*(x)=B(x). As x decreases from x0+R towards x0, the average Bavg(x) decreases, so B*(x) increases hyperbolically:
B*(x) = c1·(c2 + c3·x)^(1−ε) (29A)
where c1, c2, c3 are some constants. When x increases from x0−R towards x0, the average Bavg(x) increases, so B*(x) decreases hyperbolically according to equation (29A), with possibly some other constants c1, c2, c3.
[0072] In Fig. 9, ε<1. Again, if x>x0+R or x<x0−R, then B*(x)=B(x). As x decreases from x0+R towards x0, the average Bavg(x) decreases, so B*(x) decreases parabolically according to equation (29A). When x increases from x0−R towards x0, the average Bavg(x) increases, so B*(x) increases parabolically according to equation (29A), with possibly some other constants c1, c2, c3. The B* function does not have to be continuous at x=x0.
[0073] In some embodiments, Bavg is a weighted average:
Bavg(pi) = (1/N(pi)) Σ_{p∈R(pi)} w(p,pi)·B(p) (29B)
where w(p,pi) are some weights. In some embodiments, each value w(p,pi) is completely defined by (x−xi, y−yi) where (x,y) are the coordinates of the pixel p and (xi,yi) are the coordinates of the pixel pi. In some embodiments, all the weights are positive. In some embodiments, the sum of the weights is 1. The sum of the weights can also differ from 1 to obtain a brightness shift. The brightness shift and the contrast change can thus be performed at the same time, with less computation than would be needed to perform the two operations separately.
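As a sketch of the weighted average (29B), assuming here that the normalizing factor N(pi) is the sum of the weights over the region; the function name is illustrative:

```cpp
#include <cstddef>
#include <vector>

// Weighted average of the brightness values B(p) over a region,
// with weights w(p, p_i); the denominator (the sum of the weights)
// plays the role of N(p_i).
double weightedAverage(const std::vector<double>& B,
                       const std::vector<double>& w) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < B.size(); ++i) {
        num += w[i] * B[i];
        den += w[i];
    }
    return num / den;
}
```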
[0074] In some embodiments, the function in equation (28) is replaced with some other predefined non-identity function f. In some embodiments, the function f is strictly increasing. In some embodiments, f(1)=1, but this is not necessary if a brightness shift is desired. The brightness shift and the contrast change can thus be performed at the same time, with less computation than would be needed to perform the two operations separately.
[0075] The contrast editing techniques described above can be used to adjust either the global contrast, i.e. when the region R(pi) is the whole image, or the local contrast, when the region R(pi) is only a part of the image. In some embodiments, the region R(pi) contains 10% to 30% of the image in the local contrast adjustments.
[0076] The same techniques can be used to change the image sharpness if the region R(pi) is small, e.g. 1% of the image or less. In some embodiments, the region R(pi) is contained in a rectangle of at most 31 pixels by 31 pixels, with the center at pixel pi. Even smaller outer rectangles can be used, e.g. 21×21 pixels, or 11×11 pixels, and other rectangles. The image may contain thousands, millions, or some other number of pixels. In Fig. 10, a computer system (such as shown in Fig. 4 or 5 for example) receives an image editing command at step 1010. In a non-limiting example, the command can be issued by a user via a graphical user interface. The command may specify contrast or sharpness editing. If the command is for contrast editing (step 1020), the computer system sets the size of the region R(pi) to a suitable value (e.g. 10% to 30% of the image), or defines the region R(pi) to be a large specific region (e.g. the whole image). If the command is for sharpness editing (step 1030), the computer system sets the size of the region R(pi) to a small value (e.g. 1% of the image, or a rectangle of 11 pixels by 9 pixels), or defines the region R(pi) to be a small specific region. At step 1040, an editing algorithm is executed as described above with the region or regions R(pi) as defined at step 1020 or 1030. Step 1040, or steps 1020-1040, can be repeated for different pixels pi. The image processing system can be made simpler and/or smaller due to the same algorithm being used for the contrast and sharpness editing.
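Steps 1020-1030 can be sketched as a region-size selection; the 20% area fraction and the 11-pixel rectangle below are example values from the text, and all names are illustrative:

```cpp
#include <algorithm>

enum class Command { Contrast, Sharpness };

// Pick the side (in pixels) of a square region R(p_i) from the
// command type: a large region for contrast, a small one for sharpness.
int regionSide(Command cmd, int width, int height) {
    if (cmd == Command::Sharpness)
        return 11; // e.g. an 11 x 11 rectangle
    // Contrast: grow an odd side until the square covers about 20%
    // of the image area (an odd side keeps R(p_i) centered at p_i).
    double target = 0.20 * width * height;
    int side = 1;
    while (double(side + 2) * (side + 2) <= target)
        side += 2;
    return std::min({side, width, height});
}
```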
[0077] The contrast and sharpness editing can also be performed on monochromatic images. These images are formed with a single color of possibly varying intensity. In such images, the brightness B represents the color's intensity. In some embodiments, the brightness B is the only color coordinate. The same techniques can be employed for this case as described above except that the equations (27.2), (27.3), (29.2), (29.3) can be ignored.
[0078] The Bef and BCH systems are also suitable for changing an image's dynamic range according to equations (4A)-(4E). (Of note, in Figs. 4 and 5, the dynamic range is unchanged since B*max/B*min=Bmax/Bmin.) In Fig. 11, the brightness is changed as in equation (4B) with the C and H coordinates remaining unchanged. Alternatively, the Bef system can be used, with B edited according to (4B) and with the e and f coordinates unchanged. For the sake of illustration, suppose the image consists of a two-dimensional set of pixels pi (i=1, 2, ...) with coordinates (xi,yi), as shown in Fig. 6 for image 610. Let B(pi) denote the brightness value (the B coordinate) at the pixel pi, and B*(pi) denote the modified brightness. Bavg(pi) may be some average brightness value, for example, the mean of the brightness values in some region R(pi) containing the pixel pi as in (26). In equation (4B):
B0 is some pre-defined brightness level;
Bavg(pi) is some average brightness value, e.g. the mean brightness over a region R(pi) containing the pixel pi as in (26), or a weighted average as described below in connection with equation (36);
ε is a pre-defined constant. If it is desired to increase the brightness at the pixels pi at which Bavg(pi)<B0, then ε is positive (in some embodiments, ε is in the interval (0,1)). Good results can be achieved for B0 being the maximum brightness and 0<ε<1/2. If it is desired to reduce the brightness (e.g. to deepen the shadow) at the pixels pi at which Bavg(pi)<B0, then ε is negative (e.g. in the interval (−1,0)).
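A minimal sketch of the transformation (4B); the function name is illustrative, and the remaining coordinates (C and H, or e and f) would be passed through unchanged:

```cpp
#include <cmath>

// Equation (4B): B*(p) = B(p) * (B0 / Bavg(p))^eps.
double editBrightness(double B, double Bavg, double B0, double eps) {
    return B * std::pow(B0 / Bavg, eps);
}
```

With B0 = Bmax, eps = 1/2, and Bavg near B0/4, the factor is 2, as in the discussion of Fig. 12.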
[0079] In Fig. 6, the region R(pi) is a square centered at the pixel pi and having a side 5 pixels long (the length measured in pixels can be different along the x and y axes if a different pixel density is present at the two axes). In some embodiments, the region R(pi) is a non-square rectangle centered at pi, or a circle or an ellipse centered at pi. In some embodiments, the regions R(pi) have the same size and the same geometry for different pixels pi, except possibly at the image boundary. The region R(pi) does not have to be symmetric or centered at pi.
[0080] In some embodiments, the regions R(pi) corresponding to pixels pi at the image boundary have fewer pixels than the regions corresponding to inner pixels pi. In other embodiments, the image is extended using any one of the methods discussed above in connection with contrast editing (see Fig. 7 for example).
[0081] In some embodiments, the regions R(pi) are chosen such that the brightness does not change much in each region (e.g. such that the region does not cross a boundary at which the derivative of the brightness changes by more than a predefined value).
[0082] Fig. 12 illustrates the brightness values B and B* along an axis x assuming a one-dimensional image. For simplicity, the brightness is shown as if changing continuously, i.e. the pixels are assumed to be small. The original brightness B(x)=B(pi) is shown by thin lines, and the modified brightness B* by thick lines. The value B0 is chosen to equal Bmax (the maximum brightness over the image), and ε is about 1/2. In this example, the minimum brightness value Bmin is about B0/4. The average Bavg(x) is the mean brightness in some neighborhood R(x) of the point x.
[0083] The original brightness B has large jumps at points x=x2 and x=x3. It is assumed that R(x) is small relative to the values x2−x1, x3−x2, and x4−x3. When x is between x1 and x2, the original brightness B is near Bmin, i.e. about B0/4. The value Bavg is also near B0/4, so B* is about 2B. The difference (B2*−B1*) is about double the difference (B2−B1) between the original brightness values. Therefore, the image details become more visible in the dimmest area.
[0084] When x2<x<x3, the original brightness B is near the value 2Bmin=B0/2. As x approaches x2 from the left, the brightness average Bavg increases due to the contribution of the pixels to the right of x2, so the ratio B*/B tends to decrease. When x increases past x2, this ratio further decreases. When x>x3, the original brightness B is close to the maximum value B0, so the modified brightness B* is close to the original brightness. Therefore, the brightest area is almost unchanged.
[0085] The coefficient (B0/Bavg(pi))^ε in (4B) is a function of Bavg, so this coefficient changes smoothly over the image. As a result, the visual effect is highlighting of the entire dimmer areas rather than just individual dim pixels (a generally dim area may include individual bright pixels). The local dynamic range therefore does not change much. The smoothness degree can be varied by varying the size of the region R(pi) over which the average Bavg is computed.
[0086] In Fig. 13, the regions R(pi) are chosen not to cross the boundaries x=x1, x=x2, x=x3, x=x4. Therefore, when x approaches x2 from the left, the brightness average Bavg(x) is not affected by the brightness values to the right of x2, so the ratio B*/B does not change much. More generally, this ratio does not change much in each of the intervals (x1,x2), (x2,x3), (x3,x4).
[0087] In Fig. 14, B0 is set to the minimum brightness value Bmin. This type of editing is useful when the monitor displays the dimmest areas adequately but the brightest areas are out of the monitor's dynamic range. The transformation (4B) can be used with a positive ε, e.g. ε=1/2. The brightness is reduced in image areas in which Bavg>B0. The brightness is reduced more in the brighter areas, less in the dimmer areas. The regions R(pi) are as in Fig. 12, but they could be chosen as in Fig. 13 or in some other manner.
[0088] Fig. 15 illustrates how the values B0 and ε can be chosen in some embodiments. Suppose an image editing system (e.g. a computer) is provided with a maximum brightness Bmax and a minimum brightness Bmin. These can be, for example, the minimum and maximum brightness of the original image or image portion, or the minimum and maximum brightness to be edited, or the minimum and maximum brightness of a system which created the original image. The image editing system is also provided with a target maximum brightness B*max and a target minimum brightness B*min. Substituting the values Bmax, Bmin, B*max, B*min into (4B), and assuming for simplicity that Bavg(pi)=B(pi) when B(pi) is equal to Bmin or Bmax, we obtain:
B*max = Bmax·(B0/Bmax)^ε (30A)
B*min = Bmin·(B0/Bmin)^ε (30B)
These two equations can be solved for B0 and ε. (For example, the equation (30A) can be divided by (30B) to eliminate B0 and solve for ε, and then the ε solution can be substituted into (30A) or (30B) to find B0.) The solution is:
ε = (log DR − log DR*) / log DR (31A)
B0 = (B*max)^(1/ε) · (Bmax)^(1−1/ε) (31B)
Here DR=Bmax/Bmin, and DR*=B*max/B*min. The logarithm can be to any base.
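The solution (31A)-(31B) can be sketched directly (names illustrative):

```cpp
#include <cmath>

// Equation (31A): eps = (log DR - log DR*) / log DR.
double solveEps(double DR, double DRstar) {
    return (std::log(DR) - std::log(DRstar)) / std::log(DR);
}

// Equation (31B): B0 = (B*max)^(1/eps) * (Bmax)^(1 - 1/eps).
double solveB0(double BmaxStar, double Bmax, double eps) {
    return std::pow(BmaxStar, 1.0 / eps) * std::pow(Bmax, 1.0 - 1.0 / eps);
}
```

For example, DR = 100 and DR* = 10 give ε = 1/2.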
[0089] In Fig. 16, the image editing system is provided a starting dynamic range DR and a target dynamic range DR*. It is desired to obtain the values B0 and ε so as to convert some predefined brightness value B1 to a predefined brightness value B1*. Using the equations (30A), (30B), we obtain the value ε as in (31A). Substituting B1 and B1* into (4B), and assuming for simplicity that Bavg(pi)=B(pi) when B(pi) is equal to B1, we obtain:
B1* = B1·(B0/B1)^ε
Using the expression (31A) for ε, we obtain:
B0 = (B1*)^(1/ε) · (B1)^(1−1/ε)
[0090] In (4B), the value B0 can be any value, not necessarily the minimum or maximum brightness. The brightness values below B0 are modified as in Fig. 12 in some embodiments, and the brightness values above B0 as in Fig. 14. In some embodiments, B0 is chosen as the best brightness value for a given display device, i.e. the value at which (or near which) the display device provides the best image quality. The transformation (4B) with ε>0 tends to squeeze the brightness values closer to B0. When ε<0, the transformation (4B) tends to spread the brightness values wider above and/or below B0.
[0091] Some embodiments use equations other than (4B). Note equation (4A). In some embodiments,
B*(pi) = B(pi)·f(Bavg(pi)) (34)
where f(y) is a non-negative function strictly monotonic in y. In some embodiments,
B*(pi) = f(B(pi), Bavg(pi)) (35)
where f(x,y) is a non-negative function which is strictly monotonic in y for each fixed x=B(pi). In some embodiments, the function is chosen to be strictly decreasing in y. Hence, as Bavg increases, the modified brightness B* decreases for a given original brightness value B. In other embodiments, the function is chosen to be strictly increasing in y for each fixed x. In some embodiments, f(x,y) is strictly increasing in x for each fixed y. In some embodiments, f(x,B0)=x for all x for a predefined value B0. Hence, if the image contains pixels with an average brightness Bavg=B0, their brightness is unchanged. In some embodiments, f(x,B0)=k·x where k is a predefined constant, i.e. the brightness is multiplied by the predefined constant whenever Bavg=B0. The transformation (35) can be applied to an entire image (e.g. a rectangular array of pixels) or a part of the image.
[0092] In some embodiments, Bavg is a weighted average:
Bavg(pi) = (1/N(pi)) Σ_{p∈R(pi)} w(p,pi)·B(p) (36)
where w(p,pi) are some weights. In some embodiments, each value w(p,pi) is completely defined by (x−xi, y−yi) where (x,y) are the coordinates of the pixel p and (xi,yi) are the coordinates of the pixel pi. In some embodiments, all the weights are positive. In some embodiments, the sum of the weights is 1. The sum of the weights can also differ from 1 to obtain a brightness shift. The brightness shift and the dynamic range change can thus be performed at the same time, with less computation than would be needed to perform the two operations separately.
[0093] The editing can also be performed on monochromatic images. These images are formed with a single color of possibly varying intensity. In such images, the
brightness B represents the color's intensity. In some embodiments, the brightness B is the only color coordinate.
[0094] The invention is not limited to the Bef and BCH coordinate systems, or the brightness values given by equation (13). In some embodiments, the brightness B is defined in some other way such that multiplication of the brightness by a value k in some range of positive values corresponds to multiplication of all the tristimulus values by the same value. In some embodiments, the chromaticity coordinates remain unchanged. For example, the brightness can be defined by the equation (49) discussed below.
[0095] The Bef and BCH systems are also convenient for performing hue and saturation transformations. In these transformations, the brightness B is unchanged. The hue transformation corresponds to rotating the color vector S (Fig. 3) around the D axis. The chroma coordinate C is unchanged for the hue transformation. Suppose the color vector rotation is to be performed by an angle φ, i.e. the angle H is changed to H+φ. In the BCH system, this transformation is given by the following equations:
B*=B (37)
C*=C
H*=H+φ
while H*>Hmax do H*=H*−2π
while H*<Hmin do H*=H*+2π
The H* computation places the result into the proper range from Hmin to Hmax, e.g. from −π to π or from 0 to 2π. If the H* value is outside the interval, then 2π is added or subtracted until an H* value in the interval is obtained. These additions and subtractions can be replaced with more efficient techniques, such as division modulo 2π, as known in the art. In some embodiments, the ">" sign is replaced by "≥" and "<" by "≤" (depending on whether the point Hmin or Hmax is in the allowable range of the H values).
[0096] In the Bef system, the hue transformation is given by the following equations:
B*=B (38)
e* = e×cos φ + f×sin φ
f* = f×cos φ − e×sin φ
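The rotation (38) can be sketched as follows (function name illustrative); it is the BCH transformation (37) expressed on the (e, f) pair:

```cpp
#include <cmath>
#include <utility>

// Equation (38): hue rotation by an angle phi in the Bef system;
// B is unchanged, and (e, f) rotate as a two-dimensional vector.
std::pair<double, double> rotateHue(double e, double f, double phi) {
    return { e * std::cos(phi) + f * std::sin(phi),   // e*
             f * std::cos(phi) - e * std::sin(phi) }; // f*
}
```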
[0097] For a given hue H, some embodiments specify a maximum saturation value Cmax(H). This can be done for one or more hue values H, possibly for all the hue values. If C exceeds Cmax(H), the corresponding color (B,C,H) may be invisible to a human being, or may be impossible to display on a given display device. Thus, for a given H, the value Cmax(H) may be specified as the maximum saturation visible to humans, or the maximum saturation reproducible on a display device, or as some value below the maximum visible or reproducible saturation. In some embodiments, if the hue is changed as in (37), then the saturation is changed so that C* does not exceed Cmax(H*). For example:
B*=B (38A)
H*=H+φ
while H*>Hmax do H*=H*−2π
while H*<Hmin do H*=H*+2π
C*=C×Cmax(H*)/Cmax(H)
C* is computed after H*. If C did not exceed Cmax(H), then C* will not exceed Cmax(H*).
[0098] A saturation transformation can be specified as a change in the chroma angle C, e.g. multiplying C by some positive constant k. The hue and the brightness are unchanged. Since D is never negative, the C coordinate is at most π/2. The maximum C value Cmax(H) can be a function of the H coordinate. In the BCH system, the saturation transformation can be performed as follows:
B*=B (39)
C*=k×C; if C*>Cmax(H) then C*=Cmax(H)
H*=H
[0099] In the Bef system, it is convenient to specify the desired saturation transformation by specifying the desired change in the e and f coordinates. The saturation change corresponds to rotating the S vector in the plane perpendicular to the EF plane. The angle H remains unchanged. Hence, the ratio f/e is unchanged, so the saturation transformation can be specified by specifying the ratio e*/e=f*/f. Denoting this ratio by k, the transformation can be performed as follows:
B*=B (40)
e*=k×e
f*=k×f
Equations (14), (15) imply that
e² + f² ≤ 1 (41)
Therefore, the values e*, f* may have to be clipped. In some embodiments, this is done as follows:
g = √((e*)² + (f*)²); if g>1, then e*=e*/g and f*=f*/g
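A sketch of the scaling (40) followed by the clipping implied by (41); the normalization factor g is reconstructed here as the length of the scaled (e*, f*) vector and may differ in detail from the original:

```cpp
#include <cmath>
#include <utility>

// Equation (40): e* = k*e, f* = k*f; then clip so that e*^2 + f*^2 <= 1.
std::pair<double, double> scaleSaturation(double e, double f, double k) {
    double es = k * e, fs = k * f;
    double g = std::sqrt(es * es + fs * fs);
    if (g > 1.0) { es /= g; fs /= g; } // clip back onto the unit disk
    return { es, fs };
}
```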
[00100] The Bef and BCH systems are also convenient for converting a color image to a monochromatic image (i.e. a polychromatic image to a monochromatic image), or for changing the color (e.g. the hue) of a monochromatic image. In these transformations, the brightness B can be unchanged. The transformation is performed in the Bef system by setting the e and f coordinates to some constant values e0 and f0, as shown in Fig. 17. More particularly, computer processor 410 receives the color coordinates B, e, f and also receives the values e0, f0. The processor outputs the coordinates B, e0, f0 for each pixel:
B*=B (42.1)
e*=e0
f*=f0
[00101] In the BCH system, the monochromatic transformation is given by the following equations:
B*=B (42.2)
C*=C0
H*=H0
where C0, H0 are some input values corresponding to the desired color. Computer processor 410 (Fig. 18) receives the color coordinates B, C, H and also receives the values C0, H0. The processor outputs the coordinates B, C0, H0 for each pixel according to (42.2).
[00102] In some embodiments, the transformation (42.1) or (42.2) can be combined with some brightness editing. For example, in at least a portion of the image, the output brightness B* can be computed as in (16) for some input value k, or as in (27.1) or (29A). Other brightness editing types are also possible. See e.g. U.S. patent application no. 11/377,161, entitled "EDITING (INCLUDING CONTRAST AND SHARPNESS
EDITING) OF DIGITAL IMAGES", filed on March 16, 2006 by Sergey N. Bezryadin and incorporated herein by reference.
[00103] The invention includes systems and methods for color image editing and display. The Bef and BCH color coordinates can be transmitted in a data carrier such as a wireless or wired network link, a computer readable disk, or other types of computer readable media. The invention includes computer instructions that program a computer system to perform the brightness editing and color coordinate system conversions.
Computer programs with such computer instructions can be transmitted in a data carrier such as a wireless or wired network link, a computer readable disk, or other types of computer readable media. Some embodiments of the invention use hardwired circuitry instead of, or together with, software programmable circuitry.
[00104] In some embodiments, the Bef or BCH color coordinate system can be replaced with their linear transforms, e.g. the coordinates (B+e, e, f) or (2B, 2e, 2f) can be used instead of (B, e, f). The angle H can be measured from the E axis or some other position. The angle C can also be measured from some other position. The invention is not limited to the order of the coordinates. The invention is not limited to DEF, XYZ, or any other color coordinate system as the initial coordinate system. In some embodiments, the orthonormality conditions (9) or (10) are replaced with quasi-orthonormality conditions, i.e. the equations (9) or (10) hold only approximately. More particularly, CMFs t1(λ), t2(λ), t3(λ) will be called herein quasi-orthonormal with an error at most ε if they satisfy the following conditions:
1. each of ∫0∞ t1(λ)t2(λ)dλ, ∫0∞ t1(λ)t3(λ)dλ, ∫0∞ t2(λ)t3(λ)dλ is in the interval [−ε, ε], and
2. each of ∫0∞ [t1(λ)]²dλ, ∫0∞ [t2(λ)]²dλ, ∫0∞ [t3(λ)]²dλ is in the interval [K−ε, K+ε]
for positive constants K and ε. In some embodiments, ε is 0.3K, or 0.1K, or some other value at most 0.3K, or some other value. Alternatively, the CMFs will be called quasi-orthonormal with an error at most ε if they satisfy the following conditions:
1. each of Σλ T1(λ)T2(λ), Σλ T1(λ)T3(λ), Σλ T2(λ)T3(λ) is in the interval [−ε, ε], and
2. each of Σλ [T1(λ)]², Σλ [T2(λ)]², Σλ [T3(λ)]² is in the interval [K−ε, K+ε]
for positive constants K and ε. In some embodiments, ε is 0.3K, or 0.1K, or some other value at most 0.3K, or some other value. Orthonormal functions are quasi-orthonormal, but the reverse is not always true. If ε=0.1K, the functions will be called 90%-orthonormal. More generally, the functions will be called n%-orthonormal if ε is (100−n)% of K. For example, for 70%-orthonormality, ε=0.3K.
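The summation form of the test can be sketched as follows (names illustrative); T1, T2, T3 hold the CMF samples over the wavelengths λ:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Quasi-orthonormality with error at most eps: every cross sum must
// lie in [-eps, eps] and every self sum in [K - eps, K + eps].
bool quasiOrthonormal(const std::vector<double>& T1,
                      const std::vector<double>& T2,
                      const std::vector<double>& T3,
                      double K, double eps) {
    double c12 = 0, c13 = 0, c23 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (std::size_t i = 0; i < T1.size(); ++i) {
        c12 += T1[i] * T2[i];
        c13 += T1[i] * T3[i];
        c23 += T2[i] * T3[i];
        s1  += T1[i] * T1[i];
        s2  += T2[i] * T2[i];
        s3  += T3[i] * T3[i];
    }
    auto cross = [eps](double v) { return std::fabs(v) <= eps; };
    auto self  = [K, eps](double v) { return std::fabs(v - K) <= eps; };
    return cross(c12) && cross(c13) && cross(c23)
        && self(s1) && self(s2) && self(s3);
}
```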
[00105] The invention is not limited to the orthonormal or quasi-orthonormal CMFs or to any particular white color representation. For example, in some embodiments, the following color coordinate system (S1, S2, S3) is used for color editing:
S1 = √(X² + Y² + Z²) (43)
S2 = X/S1
S3 = Y/S1
The XYZ tristimulus values in (43) can be replaced with linear RGB tristimulus values or with some other tristimulus values T1, T2, T3, e.g.:
S1 = √(T1² + T2² + T3²) (44)
S2 = T2/S1
S3 = T3/S1
If T1 can be negative, then the sign of T1 can be provided in addition to the coordinates. Alternatively, the sign of T1 can be incorporated into one of the S1, S2, and/or S3 values, for example:
S1 = sign(T1)×√(T1² + T2² + T3²) (45)
The brightness editing can still be performed by multiplying the S1 coordinate by k, with the S2 and S3 values being unchanged.
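Equation (45) can be sketched as follows (function name illustrative):

```cpp
#include <cmath>

// Equation (45): S1 = sign(T1) * sqrt(T1^2 + T2^2 + T3^2).
double s1WithSign(double T1, double T2, double T3) {
    double mag = std::sqrt(T1 * T1 + T2 * T2 + T3 * T3);
    return (T1 < 0.0) ? -mag : mag;
}
```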
[00106] The BCH coordinate system can be constructed from linear systems other than DEF, including orthonormal and non-orthonormal, normalized and non-normalized systems. In some embodiments, a coordinate system is used with coordinates (S1, S2, S3), where S1 is defined as in (44) or (45), and the coordinates S2 and S3 are defined in a way similar to (20)-(23), e.g.:
S2 = cos⁻¹(T1/B) (46)
S3 = tan⁻¹(T3/T2) + α
where α is as in (22), or
S3 = arctg(T2, T3) (47)
In some embodiments, the value B is the square root of some other quadratic polynomial in T1, T2, T3:
B = √(g11·T1² + g22·T2² + g33·T3² + g12·T1·T2 + g13·T1·T3 + g23·T2·T3) (48)
wherein g11, g22, g33, g12, g13, g23 are predefined constants, and g11, g22, g33 are not equal to zero (e.g. g11, g22, g33 are positive). Linear transforms result in different mathematical expressions of the coordinates without necessarily changing the values of the coordinates. For example, the coordinate system may use coordinates S1, S2, S3 or their linear transform, wherein S1 is defined by the value B, or S1 is defined by B and the sign of a predefined function of one or more of T1, T2, T3. In some embodiments,
B = √([α1(T1,T2,T3)]² + [α2(T1,T2,T3)]² + [α3(T1,T2,T3)]²) (49)
wherein
α1(T1,T2,T3) = α11×T1 + α12×T2 + α13×T3
α2(T1,T2,T3) = α21×T1 + α22×T2 + α23×T3
α3(T1,T2,T3) = α31×T1 + α32×T2 + α33×T3
wherein α11, α12, α13, α21, α22, α23, α31, α32, α33 are predefined numbers such that the following matrix Λ is non-degenerate:
Λ = | α11 α12 α13 |
    | α21 α22 α23 |
    | α31 α32 α33 |
In some embodiments, the coordinate S1 is defined by the value B and by a sign of a predefined function of one or more of T1, T2, T3. The function can be one of α1(T1,T2,T3), α2(T1,T2,T3), α3(T1,T2,T3). Clearly, the values
T1* = α1(T1,T2,T3)
T2* = α2(T1,T2,T3)
T3* = α3(T1,T2,T3)
are also tristimulus values. In some embodiments, the coordinate S2 is defined by a value β(T1,T2,T3)/B, wherein
β(T1,T2,T3) = β1×T1 + β2×T2 + β3×T3
wherein β1, β2, β3 are predefined numbers at least one of which is not equal to zero, or the coordinate S2 is defined by the value β(T1,T2,T3)/B and the sign of a predefined function of one or more of T1, T2, T3. The coordinate S3 is defined by a value γ(T1,T2,T3)/B, wherein
γ(T1,T2,T3) = γ1×T1 + γ2×T2 + γ3×T3
wherein γ1, γ2, γ3 are predefined numbers at least one of which is not equal to zero, and γ(T1,T2,T3) is not a multiple of β(T1,T2,T3), or the coordinate S3 is defined by the value γ(T1,T2,T3)/B and the sign of a predefined function of one or more of T1, T2, T3. In some embodiments, a value cos S2 is defined by a value β(T1,T2,T3)/B, wherein
β(T1,T2,T3) = β1×T1 + β2×T2 + β3×T3
wherein β1, β2, β3 are predefined numbers at least one of which is not equal to zero, or the value cos S2 is defined by the value β(T1,T2,T3)/B and the sign of a predefined function of one or more of T1, T2, T3. In some embodiments, a value tan S3 is defined by a value γ(T1,T2,T3)/δ(T1,T2,T3), wherein
γ(T1,T2,T3) = γ1×T1 + γ2×T2 + γ3×T3
δ(T1,T2,T3) = δ1×T1 + δ2×T2 + δ3×T3
wherein γ1, γ2, γ3 are predefined numbers at least one of which is not equal to zero, and δ1, δ2, δ3 are predefined numbers at least one of which is not equal to zero, or the value tan S3 is defined by the value γ(T1,T2,T3)/δ(T1,T2,T3) and the sign of a predefined function of one or more of T1, T2, T3. In some embodiments, β(T1,T2,T3) is one of α1(T1,T2,T3), α2(T1,T2,T3), α3(T1,T2,T3); γ(T1,T2,T3) is another one of α1(T1,T2,T3), α2(T1,T2,T3), α3(T1,T2,T3); and δ(T1,T2,T3) is the third one of α1(T1,T2,T3), α2(T1,T2,T3), α3(T1,T2,T3). In some embodiments, any color whose tristimulus values T1, T2, T3 are such that α1(T1,T2,T3) is not zero and α2(T1,T2,T3)=α3(T1,T2,T3)=0, is a white or a shade of gray. This can be D65 or some other white color. In some embodiments, a monochromatic red color of a predefined wavelength has tristimulus values T1, T2, T3 such that α2(T1,T2,T3) is positive and α3(T1,T2,T3)=0. In some embodiments, α1(T1,T2,T3), α2(T1,T2,T3), α3(T1,T2,T3) are tristimulus values corresponding to 70%-orthonormal color matching functions. In some embodiments, up to a constant multiple, Λ is an orthonormal matrix, i.e. ΛΛᵀ=I, where Λᵀ is the transpose of Λ, and I is the identity matrix:
I = | 1 0 0 |
    | 0 1 0 |
    | 0 0 1 |
We will also call the matrix Λ orthonormal if ΛΛᵀ=c×I where c is a non-zero number. If the elements of ΛΛᵀ are each within c×n/100 of the respective elements of the matrix c×I for some c and n, then the matrix Λ will be called (100−n)%-orthonormal. For example, if the elements of ΛΛᵀ are each within 0.3×c of the respective elements of the matrix c×I (i.e. for each element λij of the matrix ΛΛᵀ, |λij−eij|≤0.3×c where eij is the corresponding element of the matrix c×I), the matrix Λ will be called 70%-orthonormal. In some embodiments, the matrix Λ is 70%-orthonormal or better (e.g. 90%-orthonormal).
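The (100−n)%-orthonormality test for the matrix Λ can be sketched as follows (names illustrative):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// A matrix is (100-n)%-orthonormal for a constant c if every element
// of A*A^T is within c*n/100 of the corresponding element of c*I.
bool percentOrthonormal(const Mat3& A, double c, double n) {
    double tol = c * n / 100.0;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            double aat = 0.0;
            for (int k = 0; k < 3; ++k)
                aat += A[i][k] * A[j][k]; // (A * A^T)_{ij}
            double target = (i == j) ? c : 0.0;
            if (std::fabs(aat - target) > tol)
                return false;
        }
    return true;
}
```

With n = 30 this is the 70%-orthonormality test of the text.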
[00107] The tristimulus values T1, T2, T3, or the corresponding color matching functions, do not have to be normalized in any way. In some prior art systems, the color matching functions are normalized to require that the integral of each color matching function be equal to 1. In the XYZ system of Fig. 2, there is a normalizing condition X=Y=Z for a pre-defined, standard white color with a constant spectral power distribution function P(λ). The invention is not limited by any such requirements.
[00108] The invention is not limited to the two-dimensional images (as in Fig. 6). The invention is applicable to pixel (bitmap) images and to vector graphics. Colors can be specified separately for each pixel, or a color can be specified for an entire object or some other image portion. The invention is not limited to a particular way to specify the brightness transformation or any other transformations. For example, the brightness transformation can be specified by a value to be added to the brightness (B*=B+k) or in some other way. Other embodiments and variations are within the scope of the invention, as defined by the appended claims.
[00109] APPENDIX 1 - FUNCTION arctg(a,b)
Here is pseudocode written in a C-like language (C is a programming language):

inline float arctg( const float a, const float b )
{
    // FLT_EPSILON is the minimum positive number eps
    // such that 1+eps is different from 1 on the computer.
    // abs() is the absolute value.
    float res ;
    if( FLT_EPSILON * abs(b) < abs(a) )
    {
        res = tan⁻¹( b / a ) ;
        if( a < 0 )
        {
            res += π ;
        }
        else if( b < 0 )
        {
            res += 2π ;
        }
    }
    else
    {
        if( b < 0 )
        {
            res = (3/2)π ;
        }
        else if( b > 0 )
        {
            res = (1/2)π ;
        }
        else
        {
            res = 0 ;
        }
    }
    return res ;
}
[00110] APPENDIX 2 - Calculation of Means Bavg(pi) for rectangular regions R(pi)

void CalculatePartialSum(
    float *pB,    //InOut
    int nLength,  //In
    int nRadius   //In
    )
{
    // Copy the row into a buffer extended by nRadius pixels on each
    // side, using reflection around the row ends (see Fig. 7).
    float *Temp = new float [nLength + 2 * nRadius] ;
    int offset = nRadius ;
    for( int i = 0; i < nLength; ++i )
    {
        Temp[offset++] = pB[i] ;
    }
    int iLeft = nRadius ;
    int iRight = nLength - 2 ;
    for( int i = 0; i < nRadius; ++i )
    {
        Temp[i] = pB[iLeft--] ;
        Temp[offset++] = pB[iRight--] ;
    }
    // Compute the sum of the first window of 2*nRadius+1 values,
    // then slide the window across the row.
    iLeft = 0 ;
    iRight = 2 * nRadius + 1 ;
    float Sum = Temp[0] ;
    for( int i = 1; i < iRight; ++i )
    {
        Sum += Temp[i] ;
    }
    pB[0] = Sum ;
    for( int i = 1; i < nLength; ++i )
    {
        Sum += Temp[iRight++] - Temp[iLeft++] ;
        pB[i] = Sum ;
    }
    delete [] Temp ;
    return ;
}
void CalculateMeanBrightness(
    float *pMeanB,      //Out
    image *pImage,      //In
    int nWidth,         //In
    int nHeight,        //In
    int nRadiusWidth,   //In
    int nRadiusHeight   //In
    )
{
    int length = nWidth * nHeight ;
    for( int i = 0; i < length; ++i )
    {
        pMeanB[i] = Brightness( pImage[i] ) ;
    }
    // Partial sums along the rows.
    int offset = 0 ;
    for( int y = 0; y < nHeight; ++y )
    {
        CalculatePartialSum( pMeanB + offset, nWidth, nRadiusWidth ) ;
        offset += nWidth ;
    }
    // Partial sums along the columns, then normalize by the region size.
    float *Temp = new float [nHeight] ;
    float kNorm = 1.f / (2 * nRadiusWidth + 1) / (2 * nRadiusHeight + 1) ;
    for( int x = 0; x < nWidth; ++x )
    {
        offset = x ;
        for( int y = 0; y < nHeight; ++y )
        {
            Temp[y] = pMeanB[offset] ;
            offset += nWidth ;
        }
        CalculatePartialSum( Temp, nHeight, nRadiusHeight ) ;
        offset = x ;
        for( int y = 0; y < nHeight; ++y )
        {
            pMeanB[offset] = Temp[y] * kNorm ;
            offset += nWidth ;
        }
    }
    delete [] Temp ;
    return ;
}