WO2008056140A2 - Detecting illumination in images - Google Patents

Detecting illumination in images

Info

Publication number
WO2008056140A2
WO2008056140A2 (application PCT/GB2007/004247)
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
relations
pixel
filtered
Prior art date
Application number
PCT/GB2007/004247
Other languages
French (fr)
Other versions
WO2008056140A3 (en)
Inventor
Graham D. Finlayson
Mark Samuel Drew
Clement Fredembach
Original Assignee
University Of East Anglia
Priority date
Filing date
Publication date
Priority claimed from GB0622251A external-priority patent/GB0622251D0/en
Priority claimed from GB0710786A external-priority patent/GB0710786D0/en
Application filed by University Of East Anglia filed Critical University Of East Anglia
Priority to JP2009535795A priority Critical patent/JP5076055B2/en
Priority to US12/514,079 priority patent/US8385648B2/en
Priority to GB0909767A priority patent/GB2456482B/en
Publication of WO2008056140A2 publication Critical patent/WO2008056140A2/en
Publication of WO2008056140A3 publication Critical patent/WO2008056140A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/143Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/88Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20004Adaptive image processing
    • G06T2207/20012Locally adaptive

Definitions

  • mappings from unfiltered to filtered pixel values are simply a set of N scalars, one for each of N lights.
  • N = 3 mappings for 3 lights: the mappings from unfiltered to filtered camera responses are given by the three scalars {1, 1/3, 1/2}.
  • the first mapping is designated by '*1', where *1 means we can multiply the image pixel by 1 to predict the corresponding filtered output, and so on.
  • We call these map sets A, B and C. We now apply each of these mapping sets in turn. E.g., if we test set A, consisting of the candidate mappings *1 and *1/3, then we apply *1 to the whole image I and compare errors against the actually observed filtered responses f, and then also apply *1/3 to the whole image I and compare errors against f.
  • For mapping set A, whichever of the two mappings, *1 or *1/3, results in the least error at a pixel causes that pixel to be labelled as associated with the first, or the second, mapping. (Associations of whole regions are described below.)
  • mapping set B means the scalar multiplier set {*1/3, *1/2}
  • mapping set C means the scalar multiplier set {*1, *1/2}.
  • the map set that best predicts filtered counterparts from image RGBs can then be used to directly partition the image into regions lit by different lights. According to this invention pixels, or regions, assigned the same map will be assumed to be illuminated by the same light.
  • the chromagenic idea can still be applied by seeking a best mapping between the two images and assigning the majority of pixels best transformed under the mapping found to one label, and all others to a second label. For example, a 'robust' statistical procedure finds the best mapping from one image to the other provided that at least half the image (plus 1 pixel) is approximately associated with that mapping. Pixels not associated correctly are 'outliers' and belong to the second label.
  • robust mapping can proceed in a hierarchical manner, going on to find the best mapping in just the second-label region, and descending further until there is no good labelling for individual pixels. Region-labelling is then brought into play (see below).
  • q_k = ∫_ω Q_k(λ) E(λ) S(λ) dλ   (1), where the integral is evaluated over ω, the visible spectrum. It is useful to combine the triplet of sensor responses q_k into a single vector, which we denote by q (underscoring denotes a vector quantity).
  • Λ(E) is a 3 × N matrix mapping reflectance weights to RGB responses.
  • the ij'th term of this Lighting Matrix is given by: Λ_ij(E) = ∫_ω Q_i(λ) E(λ) S_j(λ) dλ
  • the linear model basis sets for light and reflectance, used in (2), are generally determined using Principal Component Analysis [9] or Characteristic Vector Analysis [10], in which case the model dimensions D_E and D_S are found to be 3 (for daylights) and 6 to 8 (for reflectances). Given that there are only 3 measurements at each pixel, these large model dimensions cast doubt on the solvability of colour constancy. However, looking at (3) we see that image formation is in reality predicated on a (light dependent) Lighting Matrix multiplying a reflectance weight vector. While we have no knowledge of E(λ) or S(λ), we do see that the linearity of (1) is preserved: if we add two lights together we add the respective lighting matrices.
  • the algorithm works in 2 stages.
  • In a preprocessing step we pre-calculate the relations, one for each of N illuminants, that map RGBs to filtered counterparts. For example, we find a set of N 3×3 matrix transforms.
  • In the operation phase we take a chromagenic pair of images, i.e. two images, one unfiltered and one filtered. The illumination is unknown for the new, testing pair.
  • We then apply each of the precomputed relations, and the relation that best maps RGBs to filtered counterparts is used to index and hence estimate the prevailing illuminant colour [7].
  • the chromagenic method for illuminant estimation is as follows:
  • Q_i and Q_i^F represent the matrices of unfiltered and filtered sensor responses for the s surfaces, under the i'th light; superscript + denotes pseudo-inverse [15].
  • This generates a best least-squares transform, but the method is not limited to least-squares (e.g., robust methods could be used), nor is the method limited to linear (i.e., matrix) transforms. Operation: Given P surfaces in a new, test, image we have 3 × P measured image RGB matrices Q and Q^F. Then the task of finding the best estimate of the scene illuminant E_est(λ) is solved by finding the index i in our set of N illuminants that generates the least sum of squared errors:
  • i_est = arg min_i (err_i)   (i = 1, 2, ..., N)   (9)
  • err_i is some simple scalar function, e.g. the sum of absolute values of vector components, or the square root of the sum of squared components. If I_k is a region there is scope to make a region-level assignment: a function bestlabel() must choose which label to assign to region k, of all the up to m labels assigned to pixels in region k.
  • An obvious candidate for the function bestlabel() is the mode function. E.g., if region k has 100 pixels and, of those 100, 90 have a relation label i, then the mode is i and the overall label for the region should also be i. Another candidate would be the label minimising the overall error in mapping unfiltered to filtered pixels in that region k.
  • Q_i(λ) might be a sensor response function or a sensor multiplied by a filter
  • the means by which we relate the first p responses to the remaining q -p responses can be written in several general forms.
  • the unfiltered responses are related to filtered responses by a 3 × 3 matrix transform. More generally, this map could be any function of the form f: ℝ³ → ℝ³ (a function that maps a 3-dimensional input to a 3-dimensional output).
  • the mapping function f: ℝ^(q-p) → ℝ^p.
  • P projects the q vector onto some q - p dimensional plane. Subtracting the projected vector from the original then makes a suitable distance measure.
  • Because the position of the q vector of responses measured by a camera depends strongly on illumination and weakly on reflectance, we can use the position in q space to measure the likelihood of this response occurring under a given light.
  • This likelihood can be calculated in many ways including testing the relationship between the first q -p responses to the last p responses (using linear or non linear functions and any arbitrary distance measure).
  • the position of the q vector can be used directly, and this includes calculating the proximity to a given plane or computing a probabilistic or other measure.
  • the information that is needed to measure whether a q vector is consistent with a given light can be precalculated or can be calculated based on the statistics of the image itself.
  • Many scenes are lit by a single light or by two lights. Often in the outdoor environment there is a single light. As well, there are often two lights: the sun+sky (non- shadow) and the sky alone (shadow). Similarly, indoors at night we may light the room by a single incandescent bulb. Yet, during the day many office environments are a combination of artificial light from above the desk and natural light coming through the window. Indeed, it is hard to think of normal circumstances when m is much larger than 2.
  • Figure 1 illustrates this process where there are just 3 relations (mappings) and instead of matrices the relations are simple scalar multipliers.
  • Figure 2 shows typical results of an optimisation Eq. (11) applied at the pixel level.
  • Figure 2(a) shows the original image; since it has shadows there are clearly two lights present in the scene. Figure 2(b) represents noisy, pixel-based detection.
  • Figure 3(a) shows the segmentation arrived at by the standard Mean Shift algorithm. It will be noted that there are many regions in the image: that is, we have oversegmented the image vis-a-vis our present objective, namely disambiguating shadowed from non-shadowed regions. This is important to note, as we wish to be sure that the segmentation of the input image has not merged regions which are lit by different lights (the degree of segmentation is controllable using parameters of the Mean Shift algorithm, and this applies to other edge-preserving segmentation algorithms as well).
  • Figure 3(b) is the result of a region-based illuminant detection procedure.
  • Given the regions obtained using the Mean Shift segmentation in Figure 3(a), we then go on to assign output labels as in Eq. (13).
  • In this variant of Eq. (13), for each region we count the proportion of '0's and '1's and assign the majority value to the entire region. The result shown in Figure 3(b) makes it clear that we have obtained an excellent segmentation of the lights present in the scene.
  • Figure 3 represents clean determination of shadow areas.
  • the method consists of using pre-determined transforms of pairs of images from unfiltered to filtered versions, where a chromagenic filter is utilised.
  • sets of m mappings are applied at the pixel or region level to the image pairs, to best generate an assignment of labels.
  • m or fewer assignments of labels can be determined by regression or similar method applied to the image pairs in a hierarchical manner.
  • the region-based approach generates cleaner illumination segmentations, in general.
  • this specification includes images with differing filtering characteristics.
  • a conventional digital camera and a camera with a yellow filter are used.
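The mode-based region labelling sketched in the bullets above can be written in a few lines. This is an illustrative sketch only: the function name bestlabel and the boolean-mask region representation are assumptions, not taken from the patent.

```python
import numpy as np

def bestlabel(pixel_labels, region_mask):
    """Mode-based region label: of the per-pixel illuminant labels
    falling inside a region, return the one occurring most often."""
    region = pixel_labels[region_mask]
    values, counts = np.unique(region, return_counts=True)
    return values[counts.argmax()]

# Toy region of 100 pixels: 90 carry relation label 1, 10 carry label 0,
# so the mode (and hence the whole-region label) is 1.
labels = np.array([1] * 90 + [0] * 10)
mask = np.ones(100, dtype=bool)
print(bestlabel(labels, mask))  # 1
```

The alternative candidate mentioned in the text, choosing the label that minimises the region's total mapping error, would simply replace the mode with a per-label error sum.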

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An image having m light sources, with m preferably equalling 2 or 3, is segmented into different regions, each of which is lit by only one of the m light sources, by obtaining paired images with different filtering, for example a filtered and an unfiltered image, applying to the image pairs sets of m pre-computed mappings at the pixel or region level, and selecting the most appropriate. The rendering of the information in the image may be adjusted accordingly.

Description

DETECTING ILLUMINATION IN IMAGES
Much of computer vision, image processing and imaging in general is predicated on the assumption that there is a single prevailing illuminant lighting a scene. However, often there are multiple lights. Common examples include outdoor scenes with cast and attached shadows, indoor office environments which are typically lit by skylight and artificial illumination, and the spot-lighting used in commercial premises and galleries. Relative to these mixed lighting conditions, many imaging algorithms (based on the single light assumption) can fail. Examples of failure include the inability to track objects as they cross a shadow boundary (or tracking a shadow rather than the object), an incorrect colour balance being chosen in image reproduction (e.g., when printing photographs) and an incorrect rendering of the information captured in a scene. The last problem is particularly acute when images containing strong shadows are reproduced. Operating under the single light assumption, the imaging practitioner can choose either to make the image brighter (seeing into the shadows) at the cost of compressing the detail in the lighter image areas, or conversely to keep the bright areas intact but not bring out the shadow detail. Indeed, many photographs are a poor facsimile of the scenes we remember because our own visual system treats shadow and highlight regions in a spatially adaptive way in order to arrive at quite a different perceptual image.
There is a good deal of work in the literature on identifying illumination change in images. Most of the previous approaches work by comparing pixels (or regions) that are spatially adjacent. Rubin and Richards [1] argue that the RGBs across a shadow edge have a certain well defined relationship; when this relationship does not hold, the edge is not a shadow edge. Freeman et al. [2] learn the statistics of reflectance and illumination edges and have some success at classifying edges in images. Finlayson et al. in [3] show how a single grey scale image can be formed from a colour image such that there are no edges due to illumination. Comparing edges in this image with those in the colour image is used as a means for shadow edge detection. Moreover, in [4] Fredembach and Finlayson consider how local edges can be integrated to identify coherent shadow regions. Re-integration of edges, with edges on shadow boundaries bridged across the boundary, results in a colour image without shadows, to a good degree. In these methods, shadow edges are key. While this approach can work well, it is far from perfect, and moreover is tailored to the shadow detection problem, as opposed to the illumination detection problem. Further, a region-based rather than edge-based method would provide more evidence indicating shadows.
Aspects of the present invention seek to provide a method for segmenting illumination in images.
According to a first aspect of the present invention, there is provided a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, and applying sets of m pre-computed mappings at the pixel or region level to the image pairs.
The images may be paired images with different filtering, e.g. filtered and unfiltered images.
The present invention is based on the realisation that the relationship between image colours (e.g. RGBs) and corresponding image colours captured through a coloured filter depends on illumination. Methods according to the present invention determine the number of distinct relations present in an image and filtered image pair and, by assigning a relation to each pixel or region, identify which parts of the image correspond to different colours of light. The method works: for an RGB camera and R, G or B filtered counterparts; for an RGB camera and a second set of one or more sensor responses (e.g. C, M and/or Y); for any camera that takes a primary multispectral image (with two or more sensors) and a secondary multispectral image (with one or more sensors); and, given a camera with N spectral sensitivities capturing a primary image, by examining the relationship between the image and a second image which has M sensor measurements.
For example, if we take an n sensor camera, the first m measurements can be related to the remaining n-m sensors, and so the relation could be an (n-m) x m matrix. For n=6 and m=3 we have a 3x3 matrix relation. Relationships can be computed based on the image data or can be precomputed in a training stage. Relations can be assigned to pixels or regions (and so regions or pixels identified as particular illuminants) using a robust statistical optimisation procedure or using a simple search procedure. The search procedure involves two stages. First, a set of m relations is chosen from the larger set of all N possible relations. Second, the appropriateness of the chosen m-set is computed for the primary and secondary image pair. The m-set that is the most appropriate overall determines which pixels or regions are lit by which lights.
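As a concrete sketch of the training stage, a linear relation between unfiltered and filtered responses can be pre-computed for each training light with a least-squares fit via the pseudo-inverse. The function name and data layout below are illustrative assumptions (NumPy shown), not the patent's implementation.

```python
import numpy as np

def fit_relation(primary, secondary):
    """Fit a linear relation T so that secondary ~= primary @ T.T.

    primary:   s x 3 matrix of unfiltered responses (one row per surface)
    secondary: s x 3 matrix of filtered counterparts under the same light
    Returns the 3 x 3 least-squares transform; robust or non-linear
    fits could be substituted, as the text notes.
    """
    return secondary.T @ np.linalg.pinv(primary.T)

# Synthetic check: data generated by a known relation is recovered.
rng = np.random.default_rng(0)
T_true = rng.random((3, 3))
rgb = rng.random((50, 3))        # 50 surfaces under one light
filtered = rgb @ T_true.T        # filtered responses via the relation
T_est = fit_relation(rgb, filtered)
print(np.allclose(T_est, T_true))  # True
```

Repeating this fit over the N training illuminants yields the set of N candidate relations used by the search procedure.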
The appropriateness of a given relation is determined by how well it predicts a particular secondary image response vector given a corresponding primary response vector. If relations are linear transforms, the responses from the primary image are mapped by each of the m relations in the m-set, and the one which generates a set of outputs closest to the secondary image outputs is deemed most appropriate. In general a relation is the most appropriate if it is more likely (when tested against the other m-1 candidates) in some mathematical sense.
Assuming that the primary and secondary images combined have q measurements per pixel, we can calculate likelihood by means of many methods. These methods test the relationship between the first q-p responses and the last p responses (using linear or non-linear functions and any arbitrary distance measure). Equally, the position of the q vector can be used directly, and this includes calculating the proximity to a given plane or computing a probabilistic or other measure. The information that is needed to measure whether a q vector is consistent with a given light can be precalculated or can be calculated based on the statistics of the image itself.
The appropriateness of relations can be calculated for pixels or regions.
Statistical analysis can be used to evaluate which is the best relation to apply to a set of pixels. The relation that best models a set of pixels could be the relationship which was found to be appropriate the most often.
Alternatively, the appropriateness of a set of relations is calculated across an image. The difference between the actual secondary response and that predicted by the relation is summed up across the image. This gives a score for the goodness of a particular relation m-set. The m-set of relations that best accounts for the image data is found by search.
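The two-stage search described above (choose an m-set from the N candidate relations, then score how well the best per-pixel relation in that set predicts the secondary image) might be sketched as follows. All names and the squared-error score are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def best_m_set(primary, secondary, relations, m=2):
    """Exhaustive search for the m-set of relations best accounting
    for a primary/secondary image pair.

    primary, secondary: p x 3 arrays of pixel responses
    relations: list of N candidate 3 x 3 matrices, one per light
    Returns (chosen relation indices, per-pixel label into the m-set).
    """
    # Squared prediction error of every relation at every pixel (N x p).
    errs = np.stack([((primary @ T.T - secondary) ** 2).sum(axis=1)
                     for T in relations])
    best = (np.inf, None, None)
    for subset in combinations(range(len(relations)), m):
        sub = errs[list(subset)]          # m x p errors for this m-set
        score = sub.min(axis=0).sum()     # best relation per pixel, summed
        if score < best[0]:
            best = (score, subset, sub.argmin(axis=0))
    _, chosen, labels = best
    return chosen, labels

# Two simulated lights (relations 0.5*I and 2*I) plus an unused decoy.
relations = [0.5 * np.eye(3), 2.0 * np.eye(3), np.eye(3)]
primary = np.ones((10, 3))
secondary = np.vstack([0.5 * np.ones((5, 3)), 2.0 * np.ones((5, 3))])
chosen, labels = best_m_set(primary, secondary, relations, m=2)
print(chosen)  # (0, 1): the pair of true lights wins the search
```

Pixels sharing a label are then taken to be lit by the same light.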
Images of scenes with cast shadows have two illuminants: direct light plus sky, and sky only (i.e. for the shadowed regions). According to the present invention the shadow and non-shadow areas are found by testing the appropriateness of all pairs of relations in turn.
Indoor scenes with overhead illumination and light from the window, which may be direct and shadowed, allow for three possible lights. According to the present invention, regions are classified to one of three lights by testing the appropriateness of all triples of relations in turn.
According to a second aspect of the present invention, there is provided a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, finding a best mapping between the images and assigning the majority of pixels best transformed under the mapping found to a first label and others to a second label.
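A minimal one-dimensional sketch of this second aspect, with the median per-pixel ratio standing in for a proper robust regression; the tolerance and the function name are assumptions.

```python
import numpy as np

def two_label_segmentation(primary, secondary, tol=0.05):
    """Split pixels into two labels: those fitting the single best
    (majority) scalar mapping from primary to secondary get label 1,
    the outliers get label 2."""
    ratios = secondary / primary        # per-pixel candidate mappings
    majority = np.median(ratios)        # robust to up to half outliers
    inlier = np.abs(ratios - majority) < tol
    return np.where(inlier, 1, 2)

# Three pixels mapped by *1/3 (the majority) and two by *1/2:
primary = np.array([3.0, 3.0, 3.0, 4.0, 4.0])
secondary = np.array([1.0, 1.0, 1.0, 2.0, 2.0])
print(two_label_segmentation(primary, secondary))  # [1 1 1 2 2]
```

As the description notes, the same step can recurse on the outlier set to peel off further lights hierarchically.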
According to a third aspect of the present invention, there is provided a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, undertaking a chromagenic preprocessing step for a plurality N of illuminants (where N>m) to produce N relations ℜ, and determining for each pixel or region which of the m relations of an m-element subset R best maps the two images.
The images may have different filtering, for example filtered and unfiltered images.
According to a fourth aspect of the present invention, there is provided a method of improving the accuracy of information in an image employing the steps of a method according to the first, second and third aspects, and accordingly adjusting the rendering of the information in the image.
According to a fifth aspect of the present invention, there is provided an image treatment system comprising means for detecting illumination in an image and means for implementing the steps according to the first, second, third or fourth aspects to adjust the treatment of the images.
The present invention relates to a method for identifying illumination in images. In particular, given an input region with m different lights present, a method is disclosed for segmenting an image into different regions where each region is lit by only one of the m lights. The method starts with the chromagenic camera discussed in [5, 6, 7]. A chromagenic camera takes two pictures of a scene: the first is a conventional RGB image but the second is taken with the same camera using a coloured filter placed in front of the camera optical system. The chromagenic idea also extends to other camera architectures (see [5, 8] for a discussion); e.g., a camera that has more than 3 sensors can be considered chromagenic in terms of the general theory. In previous work two results were shown. First, that the relationship between RGBs and filtered RGBs depends on the illumination: different lights lead to different relationships. Second, that using (precomputed) relations alone it was possible to estimate the light colour present in the scene. Moreover, the chromagenic approach was shown to provide a more accurate estimate of the illuminant colour than other methods [7]. While the chromagenic approach worked well on average for the single-prevailing-light illuminant estimation problem, it is in itself not directly applicable for multiple light detection. Indeed, the performance of the chromagenic algorithm for illuminant estimation diminished if only a small proportion of the input pixels were used. Therefore, if many of the input pixels are illuminated by other lights, chromagenic illumination estimation will decrease in accuracy.
In this work we assume only that we have pre-computed, for a particular camera, a set ℜ of N relations that might plausibly map RGBs to filtered counterparts, for N different lights. If there are in fact m lights in a newly presented image then in this invention we can find the set of m relations which best predicts our data. Here, each pixel, or region, is associated with the relation (one of the m) that best maps the pixel, or region, to its filtered counterpart. These m relations are found from among the N pre-computed relations established for this camera. Once found, the pixels, or regions, associated with the same relation are assumed to be illuminated by the same light. Of course, if we find that only a subset of the m relations is used, we conclude that there are fewer lights than originally hypothesised. This is an important point: pragmatically, an algorithm can reasonably assume that there will be up to m lights in any given scene only if, when there are in fact fewer than m lights, the algorithm reports that this is so. For example, in many circumstances it will suffice to assume that there are two or fewer lights. We seek a method which detects two lights when two lights are present but detects a single illumination when the scene is lit by a single light.
Preferred embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, of which:
Fig. 1 illustrates a method in accordance with a preferred embodiment of the present invention;
Figs. 2a and 2b respectively show an original image and its estimated illuminations obtained by a method in accordance with the present invention;
Fig. 3a shows an initial segmentation of an image with the present invention; and
Fig. 3b shows the result of a region-based illuminant detection procedure using the information represented in Fig. 2b and Fig. 3a.
We illustrate a method in accordance with the invention schematically in Figure 1. Pre-calculated mappings (here represented by scalar multiplication factors 1, 1/3 and 1/2) are applied to unfiltered pixel values {3, 4} to best match the filtered counterparts {1, 2}. Here we have a very simple image with two regions which, for the purposes of this example, are represented by single numbers (perhaps this image is the cast shadow of an aeroplane flying over the ground). The top image, I, has the region labelled '3' representing a shadow. In this diagrammatic representation we do not illustrate the algorithm using colour, which the method actually uses in practice. Instead, we simplify to a single scalar value at each pixel for illustrative purposes. In Figure 1, the labels '1', '2', '3', '4' are the labels for each region and are also the pixel values: e.g., in the shadow region of image I, the scalar value of the image pixels is the number 3. A corresponding filtered image, again with 2 regions ('1' and '2'), is shown and is denoted I^F.
For these scalar-valued images, our pre-computed set ℜ of mappings from unfiltered to filtered pixel values is simply a set of N scalars, one for each of N lights. Now suppose in this example we have predetermined a set of N = 3 mappings for 3 lights, and the mappings from unfiltered to filtered camera responses are given by the three scalars {1, 1/3, 1/2}. On the right-hand side these three possible mappings are shown: the first mapping is designated by '*1', meaning we multiply the image pixel by 1 to predict the corresponding filtered output, and so on. As there are three scaling factors (in the general case these are mappings) there are '3 choose 2' = N!/((N - 2)!2!) = 3!/(1!2!) = 3 possible combinations of two maps. In the diagram we label these map sets A, B and C. We now apply each of these mapping sets in turn. E.g., if we test set A, consisting of the candidate mappings *1 and *1/3, then we apply *1 to the whole image I and compare errors against the actually observed filtered responses I^F, and then also apply *1/3 to the whole image I and again compare errors against I^F. At the pixel level, for this mapping set A, whichever of the two mappings, *1 or *1/3, results in the least error at a pixel determines whether that pixel is labelled as associated with the first, or the second, mapping. (Associations of whole regions are described below.)
We carry on as above with the alternate mapping sets B, meaning the scalar multiplier set {*1/3, *1/2}, and C, meaning the scalar multiplier set {*1, *1/2}. Now we need a method for deciding which mapping set is the best overall. If we follow the line labelled A we see the calculations involved in estimating the goodness of this map set. For the input pixel 3 in image I, from mapping set A we can apply a map of either *1 or *1/3. Applying each map in turn we arrive at the two leftmost leaf nodes in Figure 1: we calculate the predicted filtered response (i.e., we apply the map to the pixel value) and then calculate the error from the actually observed filtered image I^F by subtracting the actual pixel response. In terms of the example we calculate 3*1 - 1 and 3*1/3 - 1 (equalling errors of 2 and 0 respectively for pixels in region '3' mapped to region '1'). Since 0 is less than 2, the mapping *1/3 is chosen as associated with this pixel. Looking at the next leftmost pair of tree nodes, we go through the same procedure for the second pixel, '4'. Again in this case we see that *1/3 better predicts the actual filtered output (though not exactly). Based on relation set A alone we would conclude that both pixels '3' and '4' are best mapped to their filtered counterparts using the same relation, *1/3, and so at this stage we would conclude that both pixels were captured under the same light. If relation set A best modelled our data overall, then our hypothesis that there were two lights would be wrong (only one relation, and hence one light, was found to be present).
Parsing the rest of the tree, we see that the middle relation branch (relation set B) results in the smallest total of absolute difference between predicted and actual responses. (In this simple example the difference is actually exactly 0). Moreover, we see that pixels '3' and '4' are mapped with two different relations, respectively *l/3 and *l/2. So, in this case we would conclude that each pixel is captured with respect to a different illuminant. Simple though this example is, it in essence captures the key steps in our invention.
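The toy calculation of Figure 1 can be written out directly in code. The sketch below is purely illustrative: the pixel values, the three scalar mappings and the exhaustive pair search are exactly those of the example above.

```python
from itertools import combinations

# Toy chromagenic pair from Figure 1: two "pixels" with scalar values.
unfiltered = [3, 4]   # image I (the region labels are also the pixel values)
filtered = [1, 2]     # filtered image I^F

# Pre-computed scalar mappings, one per candidate light.
mappings = [1.0, 1 / 3, 1 / 2]

best_error, best_pair, best_labels = None, None, None
for pair in combinations(mappings, 2):          # map sets A, B and C
    total, labels = 0.0, []
    for p, pf in zip(unfiltered, filtered):
        # For each pixel pick whichever map of the pair predicts I^F best.
        errs = [abs(p * m - pf) for m in pair]
        k = errs.index(min(errs))
        labels.append(pair[k])
        total += errs[k]
    if best_error is None or total < best_error:
        best_error, best_pair, best_labels = total, pair, labels

# The winning set is {1/3, 1/2}: pixel '3' maps with *1/3 and pixel '4'
# with *1/2, with zero total error, so two lights are detected.
```

Because the winning pair assigns the two pixels to two different relations, the two-light hypothesis is confirmed, exactly as argued in the text.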
Of course, in real image processing an image will have k pixels or regions. Each pixel will typically be described by an RGB triplet, as will the corresponding filtered counterpart. The relations that predict how camera responses are mapped to filtered counterparts are multidimensional, not scalar, functions. For example, the relations could be 3 x 3 matrix transforms or more complex non-linear maps. Moreover, there are many more maps (N = 50 to 100 are used in our experiments) than in Figure 1 and so there are many more map sets to consider. But in essence the computation is the same. For every pixel, or region, in the image we find the map (belonging to a set of 2 maps if we are considering m = 2) that best maps the RGB(s) to the filtered counterpart(s). We then calculate a prediction error for the whole image. This process is repeated for all possible map sets. The map set that best predicts filtered counterparts from image RGBs can then be used to directly partition the image into regions lit by different lights. According to this invention, pixels, or regions, assigned the same map will be assumed to be illuminated by the same light.
If we do not happen to have available any pre-computed mappings from unfiltered to filtered responses, then the chromagenic idea can still be applied by seeking a best mapping between the two images and assigning the majority of pixels best transformed under the mapping found to one label, and all others to a second label. For example, a 'robust' statistical procedure finds the best mapping from one image to the other provided that at least half the image (plus 1 pixel) is approximately associated with that mapping. Pixels not associated correctly are 'outliers' and belong to the second label. In fact, robust mapping can proceed in a hierarchical manner, going on to find the best mapping in just the second-label region, and descending further until there is no good labelling for individual pixels. Region-labelling is then brought into play (see below).
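One way to realise such a robust labelling is a RANSAC-style procedure: repeatedly fit a 3 x 3 map to small random pixel subsets, keep the map explaining the most pixels, and give the outliers the second label. The sketch below is an illustration only; the synthetic data, the inlier threshold and the trial count are invented choices, not part of the invention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented chromagenic pair: 200 pixels, two lights, each light with its
# own 3x3 map from unfiltered RGB to filtered RGB (noise-free toy data).
rgb = rng.random((200, 3))
M1, M2 = np.diag([0.9, 0.7, 0.5]), np.diag([0.4, 0.8, 0.6])
lit_by_2 = np.arange(200) >= 140                 # minority light: 60 pixels
filt = np.where(lit_by_2[:, None], rgb @ M2.T, rgb @ M1.T)

# RANSAC-style search: fit a 3x3 map to random 4-pixel subsets and keep
# the map explaining the most pixels; the remaining pixels are outliers.
best_inliers = None
for _ in range(200):
    idx = rng.choice(200, size=4, replace=False)
    M, *_ = np.linalg.lstsq(rgb[idx], filt[idx], rcond=None)
    err = np.linalg.norm(rgb @ M - filt, axis=1)
    inliers = err < 1e-6                         # strict: data are noise-free
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Inliers get one illumination label, outliers the other.
labels = np.where(best_inliers, 0, 1)
```

On real data the threshold would be set from the noise level, and the procedure could recurse on the outlier set as the text describes.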
There is a subtlety in our approach: we use chromagenic theory only to find regions lit by different lights but do not estimate the colour of the lights themselves. This might appear a little strange. After all, each pixel, or region, is associated with a single relation and each relation is defined (in a training stage) to be the map that takes RGBs to filtered counterparts for a particular light. We might (wrongly) conclude that once we have identified which regions are lit by the same light we also know the colour of the light for those regions. We do not know the light colours because chromagenic illuminant estimation tends to work best when there is large colour diversity in a scene. Often the total number of pixels found to be (say) in a shadow is a relatively small proportion of the size of the image. In this case the best relation mapping RGBs to filtered counterparts might be that for the wrong illuminant. This would pose a problem if our goal were to estimate the colour of the light; but here we use the relations only as a means for discriminating between illuminants.
Chromagenic theory will now be discussed. Let us denote light, reflectance and sensor as E(λ), S(λ) and Q_k(λ), where k indexes R, G, B. For Lambertian surfaces, image formation can be written as:
q_k = ∫_ω Q_k(λ)E(λ)S(λ)dλ    (1)
where the integral is evaluated over ω, the visible spectrum. It is useful to combine the triplet of sensor responses q_k into a single vector, which we denote q (a vector quantity).
Now, let us introduce linear models for light and surface:
E(λ) ≈ Σ_{i=1..D_E} ε_i E_i(λ),    S(λ) ≈ Σ_{j=1..D_S} σ_j S_j(λ)    (2)
where E_i(λ), i = 1..D_E, form an approximate basis set for illuminants and S_j(λ), j = 1..D_S, form an approximate basis set for surfaces; the weights ε_i and σ_j give the best fit of particular lights and surfaces to these basis sets. Then the image formation equation (Eq. (1)) can be succinctly written as:
q = Λ(ε)σ    (3)
where Λ(ε) is a 3 x D_S matrix mapping reflectance weights to RGB responses. The kj'th term of this Lighting Matrix is given by:
Λ(ε)_kj = ∫_ω Q_k(λ) Σ_{i=1..D_E} ε_i E_i(λ) S_j(λ)dλ    (4)
One formulation of the colour constancy problem is as follows: given a set of measured response vectors q, how can we recover the reflectance and illumination characteristics, i.e. recover σ and ε?
The linear model basis sets for light and reflectance used in (2) are generally determined using Principal Component Analysis [9] or Characteristic Vector Analysis [10], in which case the model dimensions D_E and D_S are found to be 3 (for daylights) and 6 to 8 (for reflectances). Given that there are only 3 measurements at each pixel, these large model dimensions cast doubt on the solubility of colour constancy. However, looking at (3) we see that image formation is in reality predicated on a (light-dependent) Lighting Matrix multiplying a reflectance weight vector. While we have no direct knowledge of E(λ) or S(λ), we do see that the linearity of (1) is preserved: if we add two lights together we add the respective lighting matrices. It follows that the dimensionality of light and surface, viewed from the perspective of image formation, depends on how well a set of D_E 3 x D_S Lighting Matrices interacting with D_S x 1 weight vectors models observed image RGBs. By reasoning in this way, Marimont and Wandell [11] demonstrated that a very good model of image formation is possible with only D_E = 3 (three lighting matrices) and D_S = 3 (three degrees of freedom in reflectance). This is encouraging because the model numbers are small. However, they are still not small enough to enable us to decouple light and reflectance. To see why, suppose we have a single illuminant and s reflectances, providing us with 3s measurements and 3s + 3 unknowns. Even after observing that there is a scalar indeterminacy between surface lightness and illuminant brightness (since they multiply each other), so that the unknowns number 3s + 2, the unknowns still outnumber the measurements: i.e., 3s < 3s + 2.
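As a numerical check on Eqs. (1)-(4), the short sketch below builds a lighting matrix and confirms that q = Λ(ε)σ reproduces the directly integrated responses. The sensor curves and basis functions here are invented for illustration; they are not the measured PCA bases of [9, 10, 11].

```python
import numpy as np

wl = np.linspace(400, 700, 301)                  # wavelength samples (nm)
dw = wl[1] - wl[0]

# Invented example data: 3 Gaussian sensors, D_E = 3 illuminant basis
# functions and D_S = 3 surface basis functions.
Q = np.stack([np.exp(-((wl - c) / 40.0) ** 2) for c in (450, 550, 610)])
E_basis = np.stack([np.ones_like(wl), wl / 700.0, (wl / 700.0) ** 2])
S_basis = np.stack([np.ones_like(wl), np.cos(wl / 90.0), np.sin(wl / 90.0)])

eps = np.array([1.0, -0.3, 0.2])                 # illuminant weights
sigma = np.array([0.5, 0.2, -0.1])               # surface weights
E = eps @ E_basis                                # Eq. (2): E(wl)
S = sigma @ S_basis                              # Eq. (2): S(wl)

# Eq. (1): q_k = integral of Q_k E S (simple Riemann sum).
q_direct = (Q * E * S).sum(axis=1) * dw

# Eq. (4): Lighting Matrix entries Lambda(eps)_kj = integral of Q_k E S_j.
Lam = np.einsum('kw,w,jw->kj', Q, E, S_basis) * dw

# Eq. (3): q = Lambda(eps) sigma reproduces the directly computed responses.
q_model = Lam @ sigma
```

The agreement is exact (up to floating point) because image formation is linear in the surface weights, which is precisely the point made in the text.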
Suppose now, however, that we observe the s surfaces under two lights. We now have 6s measurements and, for two or more surfaces, more knowns than unknowns: 6s > 3s + 5 (i.e., 5 = 2 x 3 - 1: two lights with 3 unknowns each, minus the brightness indeterminacy). Indeed, a number of authors [12, 13, 14] have presented algorithms which can algebraically solve the colour constancy problem in this case. Implicit in these approaches is the idea that RGBs are mapped across illumination by a 3 x 3 linear map:
q_2 = Λ(ε_2)[Λ(ε_1)]^(-1) q_1    (5)
Finlayson [14] observed that because we can always generate the same RGBs (through a judicious choice of sigma weights) under any light, we can only hope to solve the two-light constancy problem if the 3 x 3 linear transform is unique. Indeed, for most sensors, lights and surfaces, uniqueness was shown to hold in a simplified approximation model and the two-light constancy problem was thus shown to be soluble. However, one of the flaws in this approach is the requirement of having available images of the same surfaces seen under two lights, an impractical requirement in general.
In chromagenic theory, rather than capturing a scene under two different lights we instead simulate a second light by placing a filter in front of the camera to generate an additional image. We can write the new filtered responses as:
q_k^F = ∫_ω Q_k(λ)F(λ)E(λ)S(λ)dλ, k = R, G, B    (6)
Let us define a filtered illuminant as E^F(λ) = E(λ)F(λ)    (7)
Then (6) becomes
q_k^F = ∫_ω Q_k(λ)E^F(λ)S(λ)dλ, k = R, G, B    (8)
where the superscript F denotes dependence on a coloured filter. From an equation-counting perspective we now have enough knowns to solve for our unknowns: we simply take two pictures of every scene, one filtered and one not. Importantly, it was shown in [7] that, assuming 3 degrees of freedom in light and surface is an accurate enough portrayal of nature, the transform mapping RGBs to filtered counterparts uniquely defines the illuminant colour. This result led to the chromagenic theory of illuminant estimation.
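The step from Eq. (6) to Eq. (8) is only a regrouping of terms inside the integral; a short numerical sketch makes this concrete (the sensor, filter, light and surface spectra below are all invented for illustration):

```python
import numpy as np

wl = np.linspace(400, 700, 301)
dw = wl[1] - wl[0]

# Invented spectra: one sensor curve, a filter, a light and a surface.
Qk = np.exp(-((wl - 550) / 40.0) ** 2)     # sensor response Q_k(wl)
F = 0.2 + 0.8 * (wl - 400) / 300.0         # filter transmittance F(wl)
E = np.ones_like(wl)                       # equi-energy illuminant E(wl)
S = 0.5 + 0.3 * np.sin(wl / 60.0)          # surface reflectance S(wl)

# Eq. (6): filtered response with the filter kept separate.
q_f_eq6 = (Qk * F * E * S).sum() * dw

# Eqs. (7)-(8): fold the filter into a "filtered illuminant" E^F = E F.
EF = E * F
q_f_eq8 = (Qk * EF * S).sum() * dw
```

Filtering the camera is thus mathematically equivalent to recapturing the scene under a second, filtered, illuminant.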
The algorithm works in 2 stages. In a preprocessing step we pre-calculate the relations, one for each of N illuminants, that map RGBs to filtered counterparts. For example, we find a set of N 3 x 3 matrix transforms. In the operation phase, we take a chromagenic pair of images: two images, one unfiltered and one filtered. The illumination of the new, test, pair is unknown. We then apply each of the precomputed relations, and the relation that best maps RGBs to filtered counterparts is used to index, and hence estimate, the prevailing illuminant colour [7].
The chromagenic method for illuminant estimation is as follows:
Preprocessing: For a database of N lights E(λ) and s surfaces S(λ), calculate T_i = Q_i^F Q_i^+, where Q_i and Q_i^F represent the matrices of unfiltered and filtered sensor responses for the s surfaces under the i'th light; superscript + denotes pseudo-inverse [15]. This generates a best least-squares transform, but the method is not limited to least-squares (e.g., robust methods could be used), nor is the method limited to linear (i.e., matrix) transforms.
Operation: Given P surfaces in a new, test, image we have 3 x P measured image RGB matrices Q and Q^F. Then the task of finding the best estimate of the scene illuminant E_est(λ) is solved by finding the index i in our set of N illuminants that generates the least sum of squared errors:
i_est = arg min_i (err_i), i = 1, 2, ..., N    (9)
with err_i ≡ ||T_i Q - Q^F||
It is worth remarking that in the simplest approach the transform matrices are defined by regression (e.g., the Moore-Penrose inverse uses least-squares regression). Therefore, illuminant relations, implemented as 3 x 3 matrices, do not perfectly transform RGBs to filtered counterparts. This modest imprecision has two consequences which bear on the method of the present invention disclosed below. First, to accurately estimate the best transform we need a large set of surfaces (since we wish the relations to apply for all surfaces). Second, if we attempt to estimate the light colour from a small set of surfaces then we might wrongly estimate the illuminant: the best transform for a set of red patches might be different from the best transform for a large set of colours (reds, greens, whites etc.).
Thus, when we run the chromagenic algorithm on an image that has only a small set of surfaces, we will find a relation according to the algorithm presented above, but this relation may in fact index the wrong light colour.
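The two-stage chromagenic estimation algorithm can be sketched as follows. Everything numeric below (the lights, their 3 x 3 maps and the training surfaces) is an invented stand-in; only the structure of the computation, pseudo-inverse preprocessing followed by least-squares-error indexing, follows the text.

```python
import numpy as np

rng = np.random.default_rng(1)

N_LIGHTS, N_TRAIN, N_TEST = 10, 24, 8

# Invented ground truth: one 3x3 unfiltered-to-filtered map per light.
true_maps = [np.eye(3) + 0.3 * rng.standard_normal((3, 3))
             for _ in range(N_LIGHTS)]

# Preprocessing: regress filtered onto unfiltered responses per light,
# T_i = Q_i^F Q_i^+ (pseudo-inverse, i.e. least squares).
Q_train = rng.random((3, N_TRAIN))             # 3 x s unfiltered responses
T = []
for M in true_maps:
    QF_train = M @ Q_train
    T.append(QF_train @ np.linalg.pinv(Q_train))

# Operation: pick the relation with least squared error on a new image.
true_light = 7
Q_test = rng.random((3, N_TEST))
QF_test = true_maps[true_light] @ Q_test
errs = [np.linalg.norm(Ti @ Q_test - QF_test) for Ti in T]
i_est = int(np.argmin(errs))                   # index of estimated light
```

With noise-free synthetic data the correct light index is always recovered; the text's caveat about small surface sets applies once the relations are only approximate.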
A preferred embodiment of a method according to the present invention will now be described. Given a chromagenic image pair, i.e., RGBs along with corresponding filtered counterparts, we can determine which pixels, or regions, are illuminated by the same lights. Below we define our approach formally, assuming there can be m lights in an image. In practice m ≤ 2 will be appropriate for most images and therefore we set m = 2 when we outline the particular implementation of our algorithm discussed in the next section. Let us begin by assuming that for N lights we carry out the chromagenic preprocessing step and solve for the N relations ℜ that best map RGBs to filtered counterparts. Here, however, we do not necessarily assume that the relation is a 3 x 3 matrix transform but rather, for generality, assume an arbitrary function f: ℑ^3 → ℑ^3, where ℑ is the set of possible integers in a colour image (for example, for 16-bit colour channels, ℑ is the set [0..65535]). Suppose we now select an m-element subset R ⊂ ℜ. Taking each pixel, or region, in turn we determine which of the m relations best maps the RGB(s) to the filtered counterpart(s). Once each pixel, or region, is assigned a single relation it is a simple matter to calculate how well the set of m relations R accounts for our data. Of course there are many possible m-element subsets R of ℜ. Mathematically, the set of all m-element subsets of ℜ is denoted ℜ(m) and we call this set the m-set of ℜ. That R ∈ ℜ(m) which best describes the relation between image and filtered counterpart overall is then found through an optimisation procedure (which is essentially a searching algorithm). This effectively finds the m best mappings, and thus an m-level labelling of pixels. E.g., in the case m = 2 this amounts to a binary labelling of pixels. This labelling could arise, for example, from shadowed and unshadowed regions.
Before we can write down this optimisation mathematically, we need to introduce a little more notation. Let ℜ = {f_1, f_2, ..., f_N} and let I_k and I_k^F denote the k'th pixel, or region, in the image and its filtered counterpart. The relation f_i can be thought of as a mathematical function, or computer algorithm, that maps an image to its filtered counterpart for a particular illuminant labelled i. Thus, if f_i is appropriate for the image region I_k, we would expect
f_i(I_k) = I_k^F    (10)
For a given relation set R we have to assign to each pixel, or region, I_k one of the m relations f_i, i ∈ 1, 2, ..., N, depending on which best predicts I_k^F. Remembering that ℜ(m) denotes the set of all m-element subsets of ℜ, and letting i_k ∈ 1, 2, ..., m denote which of the m relations best applies at the k'th pixel or region, we then must solve the following optimisation: General statement of optimisation:
min_{R, i_k} Σ_k || f_{i_k}(I_k) - I_k^F ||
with R ∈ ℜ(m), i_k ∈ {1, 2, ..., m}    (11)
If I_k is a single pixel then ||.|| is some simple scalar function, e.g. the sum of absolute values of vector components, or the square root of the sum of squared components. If I_k is a region there is scope to make ||.|| a more robust measure, e.g. the median deviation.
In the final step of the present method, we wish to identify different regions as belonging to different lights. After solving the optimisation (11), we arrive at the best overall set of mappings R and the best set of pixel labels i_k. This associates regions with labels for m lights directly by the relation indices i_k as follows: all the pixels, or areas of pixels, where i_k = 1 are taken as having been imaged under the same light, indexed by '1'. Similarly, all pixels or areas where i_k = 2 are taken as under another light, indexed by '2', and so on up to i_k = m.
To make our approach slightly more general, we allow the goodness-of-fit operation to be carried out pixelwise but assign lighting labels on a region-by-region basis. Suppose we compute an assignment of n regions, indexed by k, k = 1, 2, ..., n, in an image. Many algorithms exist for such a task; such an algorithm is referred to as a segmentation procedure. Let I_kj denote the j'th pixel in the k'th region. We now, initially, assign the relation labels i_kj by minimising:
Region-driven statement of optimisation:
min_{R, i_kj} Σ_k Σ_j || f_{i_kj}(I_kj) - I_kj^F ||
with R ∈ ℜ(m), k ∈ {1, 2, ..., n}, i_kj ∈ {1, 2, ..., m}    (12)
We can now assign labels to entire regions based on the fits to the underlying pixels:
i_k = bestlabel({i_kj : I_kj ∈ I_k})    (13)
Here, the function bestlabel() must choose which label to assign to region k from among the up to m labels assigned to the pixels I_kj in region k. An obvious candidate for bestlabel() is the mode function. E.g., if I_k has 100 pixels and, of those 100, 90 have relation label i, then the mode is i and the overall label for the region should also be i. Another candidate would be the label minimising the overall error in mapping unfiltered to filtered pixels in region k.
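Equation (13) with the mode as bestlabel() can be sketched in a few lines. The per-pixel labels and the segment map below are invented for illustration:

```python
from collections import Counter

def bestlabel(pixel_labels):
    """Mode of the per-pixel relation labels within one region."""
    return Counter(pixel_labels).most_common(1)[0][0]

# Invented example: per-pixel relation labels (from Eq. (12)) and a
# segmentation assigning each pixel to one of two regions.
pixel_labels = [1, 1, 2, 1, 2, 2, 2, 2, 1, 2]
region_of = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

regions = sorted(set(region_of))
region_label = {
    k: bestlabel([l for l, r in zip(pixel_labels, region_of) if r == k])
    for k in regions
}
# Region 0 contains labels {1, 1, 2, 1, 2} -> mode 1; region 1 -> mode 2.
```

The error-minimising variant mentioned above would simply replace the Counter with a per-region sum of mapping errors for each candidate label.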
We remark that minimising (11) or (12) can be computationally laborious. The computational cost is proportional to the cardinality of the set ℜ(m). If, say, there are 50 relations in ℜ (a reasonable number to account for the range of typical lights [16]) then the cardinality of the m-set ℜ(m) is 50!/(m!(50 - m)!), which for m = 2, 3, 4, 5 equals 1225, 19600, 230300 and 2118760 respectively. A brute-force search is only really feasible for small m, i.e. m = 2 or m = 3.
If, of course, we allow all possible maps to be in ℜ (e.g. all possible 3 x 3 matrices) then our solution strategy will follow classical optimisation theory (and not the combinatorial approach suggested above). In the optimisation approach we would start with an initial guess of m good transforms and then seek to update these incrementally by minimising a cost function. For example, we might employ the widely used gradient descent method. Such differential optimisations tend to find locally, as opposed to globally, optimal solutions. Heuristic techniques such as simulated annealing might be used to find a global optimum.
To complete this section, we suggest other modifications of the basic algorithm which can be used for illuminant detection. First, though we have presented the basic theory assuming three RGB sensors and three filtered counterparts, embodiments of the present invention also cover the case where we have six arbitrary sensor response functions (they need not differ only by a filter). In this case the relation f() best maps the first three sensor responses to the second three. Further, we allow other means of arriving at multidimensional response data. For example, our method can detect shadows given a normal RGB image and a second image taken where a flash is used to illuminate the scene. In general, methods according to the present invention can be applied to any capture condition which might be written as:
q_k = ∫_ω Q_k(λ)E(λ)S(λ)dλ, k = 1, 2, ..., q    (14)
where Q_k(λ) might be a sensor response function or a sensor multiplied by a filter transmittance. Replacing Q_k(λ) by Q_k(λ)(E(λ) + E_flash(λ))/E(λ) accurately models the effect of adding flash light to a scene and is also covered by the present invention.
The number of sensors is also not important for the present invention. Indeed, given a q-sensor camera, our method will still work if p of the sensor responses, recorded for different lights and surfaces, are related to the remaining q - p responses by some function f(). In the embodiment presented in detail above, q = 6 and p = 3, but equally q and p could be any two numbers where p < q: q = 7 and p = 2, or q = 3 and p = 1. The last case draws attention to the fact that for a conventional RGB camera, we can relate the blue responses to the red and green responses in the manner described above. And, even though the relationship is less strong (e.g. the fit in (9) will have significant error), the method will still provide a degree of illumination detection.
Also, the means by which we relate the first p responses to the remaining q - p responses (for a q-response camera) can be written in several general forms. In the method described above, where q = 6 and p = 3, the unfiltered responses are related to filtered responses by a 3 x 3 matrix transform. More generally, this map could be any function of the form f: ℜ^3 → ℜ^3 (a function that maps a 3-dimensional input to a 3-dimensional output). For an arbitrary q (number of sensors) and p (number of dependent responses), the mapping function is f: ℜ^(q-p) → ℜ^p. We also point out that we can generalise how we compute the distances that we have thus far written as ||f(I_k^(q-p)) - I_k^p|| (where I_k^(q-p) and I_k^p denote the first q - p and remaining p responses, and the subscript k indexes the k'th pixel or region). We can do this in two ways. First, we can use an arbitrary definition of the magnitude function || ||: e.g. it could be the standard Euclidean distance, or it could be any reasonable distance function (such as one of the Minkowski family of norms). Second, we observe that if f(I_k^(q-p)) ≈ I_k^p then the q-vector lies in a particular part of q-dimensional space. For example, if f() is a p x (q - p) matrix transform then the q-vector of responses must lie on a (q - p)-dimensional plane embedded in q-space. Thus, rather than computing a relation f() directly and then calculating ||f(I_k^(q-p)) - I_k^p||, we could instead calculate the distance of the q-vector of responses to a (q - p)-dimensional plane. It follows that we might rewrite our fitting function as ||P(I_k) - I_k||, where P projects the q-vector onto some (q - p)-dimensional plane. Subtracting the projected vector from the original then gives a suitable distance measure.
We can extend this idea still further and write ||f(I_k^(q-p)) - I_k^p|| ≡ ||P_⊥(I_k)||, where P_⊥ projects the q-vector of responses onto the p-dimensional plane orthogonal to the (q - p)-dimensional plane where we expect I_k to lie. More generally, we might calculate the measure P(I_k) where P is a function that returns a small number when the response vector is likely for the illuminant under consideration. Here P could, for example, be some sort of probabilistic measure.
It is the preferred embodiment of this invention that we determine the fit, or likelihood, that a given q-vector of responses occurs for a given light in a preprocessing step. This might mean the 3 x 3 matrices best mapping RGBs to filtered counterparts for a given training set. Alternatively, for the other embodiments discussed, we could precalculate the best relations of the form f: ℜ^(q-p) → ℜ^p. Or, if we use the position of the response vectors directly, then we could precalculate the best-fitting plane, or precalculate a probabilistic model which ascribes a likelihood that given q-vectors occur under different lights. However, we note that the fit, or likelihood, that a given q-vector of responses occurs for a given light can also be computed within a single image by using the image statistics, and this too is covered by our invention. For example, for the case of 3 x 3 linear maps taking RGBs to filtered counterparts, and where there are just two lights present in a scene, we might find the pair of transforms that best accounts for the image data (one of the pair is applied at each pixel according to which light is present) by using robust statistics. We find the best 3 x 3 matrix that maps at least 50% of the image plus one pixel to corresponding filtered counterparts. The remaining pixels are treated as outliers and can be fit separately. The inliers and outliers determine which parts of the image are lit by the different lights. Our experiments indicate good illuminant detection in this case. Further, all the different combinations of distance measures and fitting functions described above could, in principle, be trained on the image data itself, using standard techniques.
To summarise, in methods according to the present invention, when the position of the q-vector of responses measured by a camera depends strongly on illumination and weakly on reflectance, we can use the position in q-space to measure the likelihood of this response occurring under a given light. This likelihood can be calculated in many ways, including testing the relationship between the first q - p responses and the last p responses (using linear or non-linear functions and any arbitrary distance measure). Equally, the position of the q-vector can be used directly, which includes calculating the proximity to a given plane or computing a probabilistic or other measure. The information needed to measure whether a q-vector is consistent with a given light can be precalculated or can be calculated from the statistics of the image itself.
There will now be described a method, working with real images, of finding image regions lit by two illuminants. Arguably, the m = 2 case is the most interesting and most common case. Many scenes are lit by a single light or by two lights. Often in the outdoor environment there is a single light; equally often there are two lights: the sun + sky (non-shadow) and the sky alone (shadow). Similarly, indoors at night we may light a room by a single incandescent bulb, yet during the day many office environments combine artificial light from above the desk with natural light coming through the window. Indeed, it is hard to think of normal circumstances where m is much larger than 2.
Let us, thus, implement the algorithm given in Eq. (11) for the m = 2 case. We begin by creating the set ℜ, which in this case consists of fifty 3 x 3 matrix transforms. These transforms were calculated by imaging a standard colour reference chart (the Macbeth ColorChecker [17]) under 50 lights, one at a time, with and without a coloured filter, using a Nikon D70 camera (which outputs linear (raw, unprocessed) images). The 50 lights were chosen to be representative of typical lights encountered every day and included: blue sky only, blue sky + sun, overcast sky, fluorescent light and incandescent illumination. The Macbeth ColorChecker has 24 different coloured patches and so we solved for each 3 x 3 transform by regressing the 24 unfiltered RGBs onto their filtered counterparts.
Now we run the algorithm. In the first pass, we start by making use of a pixel-based optimisation, using Eq. (11): we calculate the 2-set ℜ(2), the set of all subsets of ℜ with 2 elements. Because there are 50 transforms there are '50 choose 2' = 1225 combinations. For a given relation set R containing a particular pair of 3 x 3 matrices, we test which matrix best maps each image pixel to its filtered counterpart. As we do so, we calculate the discrepancy, or error, between the mapped RGBs and the actual filtered responses. We repeat this process over all 1225 combinations of two lights (and hence mappings); we determine the one pair of transforms, one of which is applied at each pixel, that best maps the unfiltered to the filtered image overall. Figure 1 illustrates this process for the case where there are just 3 relations (mappings) and, instead of matrices, the relations are simple scalar multipliers. Figure 2 shows typical results of the optimisation Eq. (11) applied at the pixel level. Figure 2(a) shows the original image; since it has shadows there are clearly two lights present in the scene. The result is a noisy, pixel-based detection.
Because a single transform is applied at each pixel, we can view the output of this process as a binary image. Denoting the matrix transform that best fits the data at a pixel as '0' (for the first transform) or '1' (for the second transform), we show our estimate of the illuminations present in the scene in Figure 2(b). While there is clearly some correspondence between shadowed and non-shadowed regions, and therefore our algorithm is working, the output is far from perfect: it resembles the correct answer, but appears corrupted by a high degree of noise.
Now let us apply the region-based label assignment given by the optimisation of Eq. (12) followed by Eq. (13). Using the Mean Shift algorithm [18], or any similar edge-preserving segmentation algorithm, we calculate an initial segmentation of the image. Figure 3(a) shows the segmentation arrived at by the standard Mean Shift algorithm. Note that there are many regions in the image: that is, we have oversegmented the image with respect to our present objective, namely disambiguating shadowed from non-shadowed regions. This is important, as we wish to be sure that the segmentation of the input image has not merged regions which are lit by different lights (the degree of segmentation is controllable via the parameters of the Mean Shift algorithm, and likewise for other edge-preserving segmentation algorithms).
Figure 3(b) is the result of the region-based illuminant detection procedure. We start with the output given in Figure 2(b). In conjunction with the regions obtained using the Mean Shift segmentation in Figure 3(a), we then assign output labels as in Eq. (13): in this variant, for each region we count the proportion of '0's and '1's and assign the majority label to the entire region. The result shown in Figure 3(b) makes it clear that we have obtained an excellent segmentation of the lights present in the scene.
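The majority-vote relabelling over segments can be sketched as follows; a minimal illustration under the assumption that the pixel labels and a segment map (e.g. from Mean Shift) are given as same-sized arrays:

```python
import numpy as np

def majority_relabel(pixel_labels, segments):
    """Region-based label assignment: each segment takes the majority
    of its per-pixel 0/1 labels.

    pixel_labels: (H, W) uint8 array of 0s and 1s from the pixel pass.
    segments:     (H, W) integer segment ids from the segmentation.
    """
    out = np.empty_like(pixel_labels)
    for s in np.unique(segments):
        mask = segments == s
        # Mean of 0/1 labels > 0.5 means '1' is the majority here;
        # an exact tie is broken in favour of label 0.
        out[mask] = 1 if pixel_labels[mask].mean() > 0.5 else 0
    return out
```

Because each segment is relabelled as a whole, isolated noisy pixels inside a region are overruled by their neighbours, which is what cleans up the result of the pixel-wise pass.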
Figure 3 represents clean determination of shadow areas.
Importantly, we have found this simple approach to illumination detection reliably delivers good results.
Thus, we have disclosed a method for segmenting illumination in images. The method uses pre-determined transforms of pairs of images from unfiltered to filtered versions, where a chromagenic filter is utilised. To determine a segmentation with m or fewer illuminant labels, sets of m mappings are applied at the pixel or region level to the image pairs, to generate the best assignment of labels. Alternatively, if no pre-calculated mappings are available, m or fewer assignments of labels can be determined by regression or a similar method applied to the image pairs in a hierarchical manner. The region-based approach generates cleaner illumination segmentations in general.
Where reference is made in this specification to a filtered and an unfiltered image, this includes images with differing filtering characteristics. One can use instead two filtered images with different filtering. Alternatively one can simply use two different cameras, for example cameras of different makes. In a specific example, a conventional digital camera and a camera with a yellow filter are used.
References
[1] J.M. Rubin and W.A. Richards. Color vision and image intensities: When are changes material? Biological Cybernetics, 45:215-226, 1982.
[2] M.F. Tappen, W.T. Freeman, and E.H. Adelson. Recovering intrinsic images from a single image. In Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[3] G.D. Finlayson, S.D. Hordley, and M.S. Drew. Removing shadows from images. In ECCV 2002: European Conference on Computer Vision, pages 4:823-836, 2002. Lecture Notes in Computer Science Vol. 2353.
[4] C. Fredembach and G.D. Finlayson. Hamiltonian path based shadow removal. In British Machine Vision Conf., 2005.
[5] G.D. Finlayson and P.M. Morovic. Human visual processing: Beyond 3 sensors. In IEE Int. Conf. on Visual Information Engg. (VIE2005), pages 1-7, 2005.
[6] G.D. Finlayson, S.D. Hordley, and P.M. Morovic. Chromagenic filter design. In 10th Congress of the Int. Colour Assoc. (AIC2005), 2005.
[7] G.D. Finlayson, S.D. Hordley, and P.M. Morovic. Colour constancy using the chromagenic constraint. In Computer Vision and Patt. Rec. (CVPR2005), 2005.
[8] G.D. Finlayson. Image recording apparatus employing a single CCD chip to record two digital optical images. US Patent 7,046,288, May 2006.
[9] J.P.S. Parkkinen, J. Hallikainen, and T. Jaaskelainen. Characteristic spectra of Munsell colors. J. Opt. Soc. Am. A, 6:318-322, 1989.
[10] L.T. Maloney and B.A. Wandell. Color constancy: a method for recovering surface spectral reflectance. J. Opt. Soc. Am. A, 3:29-33, 1986.
[11] D.H. Marimont and B.A. Wandell. Linear models of surface and illuminant spectra. J. Opt. Soc. Am. A, 9:1905-1913, 1992.
[12] M. D'Zmura and G. Iverson. Color constancy. I. Basic theory of two-stage linear recovery of spectral descriptions for lights and surfaces. J. Opt. Soc. Am. A, 10:2148-2165, 1993.
[13] M. Tsukada and Y. Ohta. An approach to color constancy using multiple images. In Int. Conf. on Computer Vision (ICCV90), 1990.
[14] G.D. Finlayson, M.S. Drew, and B.V. Funt. Diagonal transforms suffice for color constancy. In Int. Conf. on Computer Vision (ICCV93), 1993.
[15] G. Strang. Linear Algebra and its Applications. Harcourt Brace Jovanovich, 3rd edition, 1988.
[16] K. Barnard, L. Martin, B.V. Funt, and A. Coath. A data set for colour research. Color Research and Application, 27:147-151, 2002.
[17] C.S. McCamy, H. Marcus, and J.G. Davidson. A color-rendition chart. J. App. Photog. Eng., 2:95-99, 1976.
[18] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. PAMI, 24:603-619, 2002.

Claims

1. A method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, and applying sets of m pre-computed mappings at the pixel or region level to the image pairs.
2. A method according to claim 1, wherein an m-set is selected based on how well it predicts a particular secondary image response vector given a corresponding primary response vector.
3. A method according to claim 1, wherein the images are mapped using linear transforms and an m-set is selected which generates a new set of outputs which are closest to the secondary image outputs.
4. A method according to claim 1, wherein statistical analysis is used to select an m-set.
5. A method according to claim 1, wherein each m-set is evaluated across an image: the difference between the actual secondary response and that predicted by the m-set is summed across the image to assess the goodness of the m-set, and the best m-set is found by search.
6. A method according to any of claims 1 to 5, wherein the first image is produced by an RGB camera and the second image corresponds to filtered RGB counterparts.
7. A method according to any of claims 1 to 5, wherein the first image is produced by an RGB camera and the second image corresponds to a set of one or more different sensor responses.
8. A method according to claim 7, wherein the set of sensor responses is CMY.
9. A method according to any of claims 1 to 5, wherein the first image is a primary multispectral image taken by a camera with at least two sensors and the second image is a secondary multispectral image taken by at least one further sensor of the camera.
10. A method according to claim 1 wherein, taking corresponding pixels or regions of first and second images in turn, there is determined from a set of m relations which of the relations best maps the first image to the second image.
11. A method according to claim 1 further comprising the step of finding a best mapping between the images and assigning the majority of pixels best transformed under the mapping found to a first label and others to a second label.
12. A method according to claim 1 further comprising undertaking a chromagenic preprocessing step for a plurality N of illuminants (where N > m) to produce N relations SR, determining for each pixel or region which of the m relations of an m-element subset R best maps the two images.
13. A method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, finding a best mapping between the images and assigning the majority of pixels best transformed under the mapping found to a first label and others to a second label.
14. A method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, the method comprising the steps of obtaining paired images with different sets of spectral components, undertaking a chromagenic preprocessing step for a plurality N of illuminants (where N > m) to produce N relations SR, determining for each pixel or region which of the m relations of an m-element subset R best maps the two images.
15. A method according to any preceding claim wherein the images are paired images with different filtering.
16. A method according to claim 15 wherein the images are paired filtered and unfiltered images.
17. A method according to any preceding claim where m = 2.
18. A method according to any of claims 1 to 16, where m = 3.
19. A method of improving the accuracy of information in an image employing the steps of a method according to any preceding claim and accordingly adjusting the rendering of the information in the image.
20. An image treatment system comprising means for detecting illumination in an image and means for implementing the steps according to any preceding claim to adjust the treatment of the images.
PCT/GB2007/004247 2006-11-08 2007-11-08 Detecting illumination in images WO2008056140A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2009535795A JP5076055B2 (en) 2006-11-08 2007-11-08 Image illumination detection
US12/514,079 US8385648B2 (en) 2006-11-08 2007-11-08 Detecting illumination in images
GB0909767A GB2456482B (en) 2006-11-08 2007-11-08 Detecting illumination in images

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB0622251.7 2006-11-08
GB0622251A GB0622251D0 (en) 2006-11-08 2006-11-08 Detecting illumination in images
GB0710786A GB0710786D0 (en) 2007-06-05 2007-06-05 Detecting illumination in images
GB0710786.5 2007-06-05

Publications (2)

Publication Number Publication Date
WO2008056140A2 true WO2008056140A2 (en) 2008-05-15
WO2008056140A3 WO2008056140A3 (en) 2008-10-02

Family

ID=39027269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2007/004247 WO2008056140A2 (en) 2006-11-08 2007-11-08 Detecting illumination in images

Country Status (4)

Country Link
US (1) US8385648B2 (en)
JP (2) JP5076055B2 (en)
GB (1) GB2456482B (en)
WO (1) WO2008056140A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010220197A (en) * 2009-03-12 2010-09-30 Ricoh Co Ltd Device and method for detecting shadow in image
US20130063562A1 (en) * 2011-09-09 2013-03-14 Samsung Electronics Co., Ltd. Method and apparatus for obtaining geometry information, lighting information and material information in image modeling system

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8194975B2 (en) * 2009-06-29 2012-06-05 Tandent Vision Science, Inc. Use of an intrinsic image in face recognition
TR201101980A1 (en) * 2011-03-01 2012-09-21 Ulusoy İlkay An object-based segmentation method.
CN102184403B (en) * 2011-05-20 2012-10-24 北京理工大学 Optimization-based intrinsic image extraction method
JP6178321B2 2011-11-04 2017-08-09 Empire Technology Development LLC IR signal capture for images
US8509545B2 2011-11-29 2013-08-13 Microsoft Corporation Foreground subject detection
JP5382831B1 * 2013-03-28 2014-01-08 Axell Corporation Lighting device mapping apparatus, lighting device mapping method, and program
JP6446790B2 * 2014-02-21 2019-01-09 Ricoh Co., Ltd. Image processing apparatus, imaging apparatus, image correction method, and program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030016863A1 (en) * 2001-07-05 2003-01-23 Eastman Kodak Company Process of identification of shadows in an image and image obtained using the process
US7046288B1 (en) * 1998-06-27 2006-05-16 University Of East Anglia Image recording apparatus employing a single CCD chip to record two digital optical images

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7084907B2 (en) * 2001-01-15 2006-08-01 Nikon Corporation Image-capturing device
US6691051B2 (en) * 2001-08-14 2004-02-10 Tektronix, Inc. Transient distance to fault measurement
SE0402576D0 (en) * 2004-10-25 2004-10-25 Forskarpatent I Uppsala Ab Multispectral and hyperspectral imaging
WO2006081438A2 (en) * 2005-01-27 2006-08-03 Tandent Vision Science, Inc. Differentiation of illumination and reflection boundaries

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHENG LU ET AL: "Shadow Removal via Flash/Noflash Illumination" MULTIMEDIA SIGNAL PROCESSING, 2006 IEEE 8TH WORKSHOP ON, IEEE, PI, October 2006 (2006-10), pages 198-201, XP031011048 ISBN: 0-7803-9751-7 *
FINLAYSON G D ET AL: "4-Sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities" PROCEEDINGS OF THE EIGHT IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION. (ICCV). VANCOUVER, BRITISH COLUMBIA, CANADA, JULY 7 - 14, 2001, INTERNATIONAL CONFERENCE ON COMPUTER VISION, LOS ALAMITOS, CA : IEEE COMP. SOC, US, vol. VOL. 1 OF 2. CONF. 8, 7 July 2001 (2001-07-07), pages 473-480, XP010554126 ISBN: 0-7695-1143-0 *
FINLAYSON G D ET AL: "Colour Constancy Using the Chromagenic Constraint" COMPUTER VISION AND PATTERN RECOGNITION, 2005. CVPR 2005. IEEE COMPUTER SOCIETY CONFERENCE ON SAN DIEGO, CA, USA 20-26 JUNE 2005, PISCATAWAY, NJ, USA,IEEE, 20 June 2005 (2005-06-20), pages 1079-1086, XP010817463 ISBN: 0-7695-2372-2 *
FINLAYSON G ET AL: "Detecting Illumination in Images" COMPUTER VISION, 2007. ICCV 2007. IEEE 11TH INTERNATIONAL CONFERENCE ON, 14 October 2007 (2007-10-14), - 21 October 2007 (2007-10-21) pages 1-8, XP007904066 *
SALVADOR E ET AL: "Cast shadow segmentation using invariant color features" COMPUTER VISION AND IMAGE UNDERSTANDING, ACADEMIC PRESS, SAN DIEGO, CA, US, vol. 95, no. 2, August 2004 (2004-08), pages 238-259, XP004520275 ISSN: 1077-3142 *

Also Published As

Publication number Publication date
GB0909767D0 (en) 2009-07-22
JP5076055B2 (en) 2012-11-21
US8385648B2 (en) 2013-02-26
US20100098330A1 (en) 2010-04-22
GB2456482A (en) 2009-07-22
WO2008056140A3 (en) 2008-10-02
JP2010509666A (en) 2010-03-25
JP2012238317A (en) 2012-12-06
JP5301715B2 (en) 2013-09-25
GB2456482B (en) 2011-08-17

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07824480

Country of ref document: EP

Kind code of ref document: A2

ENP Entry into the national phase

Ref document number: 2009535795

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 0909767

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20071108

WWE Wipo information: entry into national phase

Ref document number: 0909767.6

Country of ref document: GB

122 Ep: pct application non-entry in european phase

Ref document number: 07824480

Country of ref document: EP

Kind code of ref document: A2

WWE Wipo information: entry into national phase

Ref document number: 12514079

Country of ref document: US