WO2008056140A2 - Detecting illumination in images - Google Patents
Detecting illumination in images
- Publication number
- WO2008056140A2 (PCT/GB2007/004247)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/143—Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20012—Locally adaptive
Definitions
- Much of computer vision, image processing and imaging in general is predicated on the assumption that there is a single prevailing illuminant lighting a scene. However, often there are multiple lights. Common examples include outdoor scenes with cast and attached shadows, indoor office environments which are typically lit by skylight and artificial illumination, and the spot-lighting used in commercial premises and galleries. Under these mixed lighting conditions, many imaging algorithms (based on the single-light assumption) can fail. Examples of failure include the inability to track objects as they cross a shadow boundary (or tracking a shadow rather than the object), an incorrect colour balance being chosen in image reproduction (e.g., when printing photographs), and an incorrect rendering of the information captured in a scene. The last problem is particularly acute when images containing strong shadows are reproduced.
- the imaging practitioner can choose either to make the image brighter (seeing into the shadows) at the cost of compressing the detail in the lighter image areas, or conversely to keep the bright areas intact but not bring out the shadow detail.
- many photographs are a poor facsimile of the scenes we remember because our own visual system treats shadow and highlight regions in a spatially adaptive way in order to arrive at quite a different perceptual image.
- aspects of the present invention seek to provide a method for segmenting illumination in images.
- a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, comprising the steps of obtaining paired images with different sets of spectral components, and applying sets of m pre-computed mappings at the pixel or region level to the image pairs.
- the images may be paired images with different filtering, e.g. filtered and unfiltered images.
- the present invention is based on the realisation that the relationship between image colours (e.g. RGBs) and corresponding image colours captured through a coloured filter depends on illumination.
- Methods according to the present invention determine the number of distinct relations present in an image and filtered-image pair and, by assigning a relation to each pixel or region, identify which parts of the image correspond to different colours of light. The method works: for an RGB camera and R, G or B filtered counterparts; for an RGB camera and a second set of one or more sensor responses (e.g. C, M and/or Y); for any camera that takes a primary multispectral image (with two or more sensors) and a secondary multispectral image (with one or more sensors); and, in general, for a camera with N spectral sensitivities capturing a primary image, by examining the relationship between that image and a second image which has M sensor measurements.
- the first m measurements can be related to the remaining n-m sensors, and so the relation could be an (n-m) x m matrix.
- Relationships can be computed based on the image data or can be precomputed in a training stage. Relations can be assigned to pixels or regions (and so regions or pixels identified as particular illuminants) using a robust statistical optimisation procedure or using a simple search procedure. The search procedure involves two stages. First, a set of m relations is chosen from the larger set of all N possible relations. Second, the appropriateness of the chosen m-set is computed for the primary and secondary image pair. The m-set that is the most appropriate overall determines which pixels or regions are lit by which lights.
- the appropriateness of a given relation is determined by how well it predicts a particular secondary image response vector given a corresponding primary response vector. If relations are linear transforms, the responses from the primary image are mapped by each of the m relations in the m-set, and the one which generates a set of outputs closest to the secondary image outputs is deemed most appropriate. In general a relation is the most appropriate if it is more likely (when tested against the other m−1 candidates) in some mathematical sense.
- Statistical analysis can be used to evaluate which is the best relation to apply to a set of pixels.
- the relation that best models a set of pixels could be the relationship which was found to be appropriate the most often.
- the appropriateness of a set of relations is calculated across an image.
- the difference between the actual secondary response and that predicted by the relation is summed up across the image. This gives a score for the goodness of a particular relation m-set.
- the m-set of relations that best accounts for the image data is found by search.
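The search described in the bullets above can be sketched in code. This is an illustrative reconstruction, not the patent's implementation: it assumes the relations are 3 x 3 matrices acting on RGB row vectors, and scores a candidate m-set by summing, over pixels, the smallest prediction error among the set's members.

```python
import itertools
import numpy as np

def score_mset(rgb, filtered, relations):
    """Sum, over pixels, of the smallest prediction error among the
    relations in the candidate m-set (each relation is a 3x3 matrix
    mapping an RGB row vector to its filtered counterpart)."""
    # errs[j, k] = error of relation j at pixel k
    errs = np.stack([np.linalg.norm(filtered - rgb @ R.T, axis=1)
                     for R in relations])
    return errs.min(axis=0).sum()

def best_mset(rgb, filtered, all_relations, m=2):
    """Exhaustive search over all m-subsets of the N candidate relations;
    returns the winning m-set and a per-pixel label (index into it)."""
    best = min(itertools.combinations(all_relations, m),
               key=lambda ms: score_mset(rgb, filtered, ms))
    errs = np.stack([np.linalg.norm(filtered - rgb @ R.T, axis=1)
                     for R in best])
    return best, errs.argmin(axis=0)
```

Pixels that receive the same label under the winning m-set are then taken to be lit by the same light.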
- Images of scenes with cast shadows have two illuminants: direct light plus sky, and sky only (i.e. in the shadowed regions).
- the shadow and non-shadow areas are found by testing the appropriateness of all pairs of relations in turn.
- a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, comprising the steps of obtaining paired images with different sets of spectral components, finding a best mapping between the images and assigning the majority of pixels best transformed under the mapping found to a first label and the others to a second label.
- a method for processing an image having a plurality m of light sources by segmenting the image into different regions, each of which is lit by only one of the m light sources, comprising the steps of obtaining paired images with different sets of spectral components, undertaking a chromagenic preprocessing step for a plurality N of illuminants (where N>m) to produce N relations, and determining for each pixel or region which of the m relations of an m-element subset R best maps the two images.
- the images may have different filtering, for example filtered and unfiltered images.
- according to a fourth aspect of the present invention there is provided a method of improving the accuracy of information in an image, employing the steps of a method according to the first, second or third aspects, and accordingly adjusting the rendering of the information in the image.
- an image treatment system comprises means for detecting illumination in an image and means for implementing the steps according to the first, second, third or fourth aspects to adjust the treatment of the images.
- the present invention relates to a method for identifying illumination in images.
- a method for segmenting an image into different regions where each region is lit by only one of the m lights.
- the method starts with the chromagenic camera discussed in [5, 6, 7].
- a chromagenic camera takes two pictures of a scene: the first is a conventional RGB image but the second is taken with the same camera using a coloured filter placed in front of the camera optical system.
- the chromagenic idea also extends to other camera architectures (see [5, 8] for a discussion); e.g., a camera that has more than 3 sensors can be considered chromagenic in terms of the general theory. In previous work two results were shown.
- Fig. 1 illustrates a method in accordance with a preferred embodiment of the present invention
- Figs. 2a and 2b respectively show an original image and its estimated illuminations obtained by a method in accordance with the present invention
- Fig. 3a shows an initial segmentation of an image with the present invention
- Fig. 3b shows the result of a region-based illuminant detection procedure using the information represented in Fig. 2b and Fig. 3a.
- the labels '1', '2', '3', '4' are the labels for each region and are also the pixel values: e.g., in the shadow region of image I, the scalar value of the image pixels is the number 3.
- a corresponding filtered image, again with 2 regions ('1' and '2'), is denoted f.
- mappings from unfiltered to filtered pixel values are simply a set of N scalars, one for each of N lights.
- N = 3 mappings for 3 lights
- the mappings from unfiltered to filtered camera responses are given by the three scalars {1, 1/3, 1/2}.
- the first mapping is designated by '*1', where *1 means we multiply the image pixel by 1 to predict the corresponding filtered output, and so on.
- map sets A, B and C: we now apply each of these mapping sets in turn. E.g., if we test set A, consisting of the candidate mappings *1 and *1/3, then we apply *1 to the whole image I and compare the errors against the actually observed filtered responses f, and then also apply *1/3 to the whole image I and compare errors against f.
- whichever of the two mappings, *1 or *1/3, results in the least error at a pixel determines whether that pixel is labelled as associated with the first or the second mapping. (Associations of whole regions are described below.)
- B, meaning the scalar multiplier set {*1/3, *1/2}
- C, meaning the scalar multiplier set {*1, *1/2}.
- the map set that best predicts filtered counterparts from image RGBs can then be used to directly partition the image into regions lit by different lights. According to this invention pixels, or regions, assigned the same map will be assumed to be illuminated by the same light.
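The scalar worked example above can be run end to end. The pixel values below are invented for illustration: two regions whose unfiltered values map to their filtered counterparts by *1 and *1/2 respectively, so map set C should win.

```python
import itertools

# candidate per-light multipliers {*1, *1/3, *1/2} as in the example
scalars = (1.0, 1/3, 1/2)
image    = [3.0, 3.0, 6.0, 6.0]   # unfiltered pixel values (invented)
filtered = [3.0, 3.0, 3.0, 3.0]   # filtered counterparts

def set_error(pair, image, filtered):
    # at each pixel keep the better of the two candidate multipliers
    return sum(min(abs(s * p - f) for s in pair)
               for p, f in zip(image, filtered))

# the three 2-sets correspond to map sets A, B and C in the text
best = min(itertools.combinations(scalars, 2),
           key=lambda pair: set_error(pair, image, filtered))
print(best)  # → (1.0, 0.5), i.e. map set C
```

Pixels whose least-error multiplier is *1 form one illumination region; those best fit by *1/2 form the other.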
- the chromagenic idea can still be applied by seeking a best mapping between the two images and assigning the majority of pixels best transformed under the mapping found to one label, and all others to a second label. For example, a 'robust' statistical procedure finds the best mapping from one image to the other provided that at least half the image (plus 1 pixel) is approximately associated with that mapping. Pixels not associated correctly are 'outliers' and belong to the second label.
- robust mapping can proceed in a hierarchical manner, going on to find the best mapping in just the second-label region, and descending further until there is no good labelling for individual pixels. Region-labelling is then brought into play (see below).
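One way to realise the robust, hierarchical procedure is sketched below for the scalar-mapping case. The median-ratio fit and the inlier tolerance are illustrative choices of robust estimator, not the patent's; they only require that more than half the pixels support the dominant mapping.

```python
import numpy as np

def robust_scalar_map(x, y, inlier_tol=0.05):
    """Illustrative robust fit: the median of the per-pixel ratios y/x is
    a mapping supported by at least half the pixels; pixels it maps badly
    are outliers."""
    s = np.median(y / x)
    inlier = np.abs(s * x - y) <= inlier_tol * np.abs(y)
    return s, inlier

def hierarchical_labels(x, y):
    """Label inliers of the dominant mapping 0, then re-fit on just the
    outlier (second-label) region and label it 1. A fuller version would
    keep descending while a good mapping can still be found."""
    labels = np.full(x.shape, -1)
    s0, in0 = robust_scalar_map(x, y)
    labels[in0] = 0
    rest = ~in0
    if rest.any():
        s1, _ = robust_scalar_map(x[rest], y[rest])  # mapping for region 2
        labels[rest] = 1
    return labels
```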
- q_k = ∫_ω Q_k(λ)E(λ)S(λ) dλ (1), where the integral is evaluated over ω, the visible spectrum. It is useful to combine the triplet of sensor responses q_k into a single vector, which we denote by q (underscoring denotes a vector quantity).
- Λ(E) is a 3 x N matrix mapping reflectance weights to RGB responses.
- the kj'th term of this Lighting Matrix is given by:
- the linear model basis sets for light and reflectance, used in (2), are generally determined using Principal Component Analysis [9] or Characteristic Vector Analysis [10], in which case the model dimensions D_E and D_S are found to be 3 (for daylights) and 6 to 8 (for reflectances). Given that there are only 3 measurements at each pixel, these large model dimensions cast doubt on the solubility of colour constancy. However, looking at (3) we see that image formation is in reality predicated on a (light-dependent) Lighting Matrix multiplying a reflectance weight vector. While we have no knowledge of E(λ) or S(λ), we do see that the linearity of (1) is preserved: if we add two lights together we add the respective Lighting Matrices.
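A discrete version of Eq. (1) makes the linearity remark concrete: the integral becomes a sum over sampled wavelengths, and adding two lights adds the responses (and, in matrix form, the corresponding Lighting Matrices). The sensor curves, lights and reflectance below are invented for illustration only.

```python
import numpy as np

# Q is 3 x W (sensor sensitivities), E is length-W (illuminant power),
# S is length-W (surface reflectance); all values are made up.
wl = np.linspace(400, 700, 31)                 # visible-spectrum samples (nm)
Q = np.stack([np.exp(-((wl - c) / 40) ** 2)    # three Gaussian sensors
              for c in (450, 550, 610)])
E1 = np.ones_like(wl)                          # flat "daylight"
E2 = wl / wl.max()                             # reddish light
S = 0.5 + 0.4 * np.sin(wl / 50)                # some reflectance

def response(Q, E, S):
    # q_k = sum over wavelengths of Q_k(w) E(w) S(w), cf. Eq. (1)
    return Q @ (E * S)

# linearity in the illuminant: adding two lights adds the responses
assert np.allclose(response(Q, E1 + E2, S),
                   response(Q, E1, S) + response(Q, E2, S))
```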
- the algorithm works in 2 stages.
- in a preprocessing step we pre-calculate the relations, one for each of N illuminants, that map RGBs to filtered counterparts. For example, we find a set of N 3 x 3 matrix transforms.
- in the operation phase we take a chromagenic pair of images: two images, one unfiltered and one filtered. The illumination is unknown for the new, test pair.
- We then apply each of the precomputed relations, and the relation that best maps RGBs to filtered counterparts is used to index and hence estimate the prevailing illuminant colour [7].
- the chromagenic method for illuminant estimation is as follows:
- Q_i and Q_i^F represent the matrices of unfiltered and filtered sensor responses for the s surfaces, under the i'th light; superscript + denotes pseudo-inverse [15].
- This generates a best least-squares transform, but the method is not limited to least-squares (e.g., robust methods could be used), nor is the method limited to linear (i.e., matrix) transforms. Operation: given P surfaces in a new, test, image we have 3 x P measured image RGB matrices Q and Q^F. The task of finding the best estimate of the scene illuminant E_est(λ) is then solved by finding the index i in our set of N illuminants that generates the least sum of squared errors:
- i_est = arg min_i (err_i), i = 1, 2, ..., N (9)
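The preprocessing and operation phases around Eq. (9) can be sketched as follows. This is a reconstruction from the description, assuming 3 x s response matrices and least-squares (pseudo-inverse) transforms; the function names are my own.

```python
import numpy as np

def precompute_relations(Q_list, QF_list):
    """Preprocessing: for each of the N training illuminants, the 3x3
    least-squares transform T_i = Q_i^F Q_i^+ mapping unfiltered to
    filtered responses (+ is the Moore-Penrose pseudo-inverse)."""
    return [QF @ np.linalg.pinv(Q) for Q, QF in zip(Q_list, QF_list)]

def estimate_illuminant(T_list, Q, QF):
    """Operation: index of the relation with the least summed squared
    error in mapping the test image's RGBs (3 x P matrix Q) to their
    filtered counterparts QF, cf. Eq. (9)."""
    errs = [np.sum((T @ Q - QF) ** 2) for T in T_list]
    return int(np.argmin(errs))
```

The winning index then indexes the training set of illuminants, giving the estimate of the prevailing light.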
- ||·|| is some simple scalar function, e.g. the sum of absolute values of vector components, or the square root of the sum of absolute values squared. If I_k is a region there is scope to make a better choice: function bestlabel() must choose which label to assign to region k, from among the up to m labels assigned to pixels I_k in region k.
- An obvious candidate for function bestlabel() is the mode function. E.g., if I_k has 100 pixels and, of those 100, 90 have relation label i, then the mode is i and the overall label for the region should also be i. Another candidate would be the label minimising the overall error in mapping unfiltered to filtered pixels in that region k.
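The mode candidate for bestlabel() is a one-liner; the 100-pixel region below is the hypothetical example from the text.

```python
from collections import Counter

def bestlabel(pixel_labels):
    """Mode candidate for bestlabel(): the relation label occurring most
    often among the per-pixel labels inside one region."""
    return Counter(pixel_labels).most_common(1)[0][0]

region = [1] * 90 + [0] * 10   # 90 of 100 pixels carry relation label 1
print(bestlabel(region))       # → 1
```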
- Q_k(λ) might be a sensor response function or a sensor multiplied by a filter
- the means by which we relate the first p responses to the remaining q − p responses can be written in several general forms.
- the unfiltered responses are related to filtered responses by a 3 x 3 matrix transform. More generally, this map could be any function of the form f: ℝ^3 → ℝ^3 (a function that maps a 3-dimensional input to a 3-dimensional output).
- the mapping function f: ℝ^(q−p) → ℝ^p.
- P projects the q-vector onto some q − p dimensional plane. Subtracting the projected vector from the original then gives a suitable distance measure.
- because the position of the q-vector of responses measured by a camera depends strongly on illumination and weakly on reflectance, we can use the position in q-space to measure the likelihood of that response occurring under a given light.
- this likelihood can be calculated in many ways, including testing the relationship between the first q − p responses and the last p responses (using linear or non-linear functions and any arbitrary distance measure).
- the position of the q-vector can also be used directly; this includes calculating the proximity to a given plane or computing a probabilistic or other measure.
- the information that is needed to measure whether a q vector is consistent with a given light can be precalculated or can be calculated based on the statistics of the image itself.
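The plane-proximity test described above can be sketched with an orthogonal projector: a response vector is consistent with a candidate light if it lies close to that light's plane. The basis B for each light's plane is assumed to have been precalculated (e.g. from training data); this sketch is illustrative, not the patent's implementation.

```python
import numpy as np

def plane_distance(v, B):
    """Distance of response vector v (length q) from the plane spanned by
    the columns of B: project with P = B B^+, then measure what is left
    over, i.e. ||v - Pv||."""
    P = B @ np.linalg.pinv(B)          # orthogonal projector onto span(B)
    return float(np.linalg.norm(v - P @ v))

def likeliest_light(v, planes):
    """Index of the candidate light whose plane is nearest to v."""
    return min(range(len(planes)), key=lambda i: plane_distance(v, planes[i]))
```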
- Many scenes are lit by a single light or by two lights. Often in the outdoor environment there is a single light. As well, there are often two lights: the sun+sky (non- shadow) and the sky alone (shadow). Similarly, indoors at night we may light the room by a single incandescent bulb. Yet, during the day many office environments are a combination of artificial light from above the desk and natural light coming through the window. Indeed, it is hard to think of normal circumstances when m is much larger than 2.
- Figure 1 illustrates this process where there are just 3 relations (mappings) and instead of matrices the relations are simple scalar multipliers.
- Figure 2 shows typical results of an optimisation Eq. (11) applied at the pixel level.
- Figure 2(a) shows the original image; since it has shadows there are clearly two lights present in the scene. Figure 2(b) represents noisy, pixel-based detection.
- Fig. 3(a) shows the segmentation arrived at by the standard Mean Shift algorithm. It will be noted that there are many regions in the image: that is, we have oversegmented the image vis-a-vis our present objective, namely disambiguating shadowed from non-shadowed regions. This is important to note, as we wish to be sure that the segmentation of the input image has not merged regions which are lit by different lights (the degree of segmentation is controllable using the parameters of the Mean Shift algorithm, and this applies to other edge-preserving segmentation algorithms as well).
- Figure 3(b) is the result of a region-based illuminant detection procedure.
- given the regions obtained using the Mean Shift segmentation in Figure 3(a), we then go on to assign output labels as in Eq. (13).
- in this variant of Eq. (13), for each region we count the proportion of '0's and '1's and assign the majority label to the entire region. The result shown in Figure 3(b) makes it clear that we have obtained an excellent segmentation of the lights present in the scene.
- Figure 3 represents clean determination of shadow areas.
- the method consists of using pre-determined transforms of pairs of images from unfiltered to filtered versions, where a chromagenic filter is utilised.
- sets of m mappings are applied at the pixel or region level to the image pairs, to best generate an assignment of labels.
- m or fewer assignments of labels can be determined by regression or similar method applied to the image pairs in a hierarchical manner.
- the region-based approach generates cleaner illumination segmentations, in general.
- this specification includes images with differing filtering characteristics.
- a conventional digital camera and a camera with a yellow filter are used.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Probability & Statistics with Applications (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Color Television Image Signal Generators (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009535795A JP5076055B2 (en) | 2006-11-08 | 2007-11-08 | Image illumination detection |
US12/514,079 US8385648B2 (en) | 2006-11-08 | 2007-11-08 | Detecting illumination in images |
GB0909767A GB2456482B (en) | 2006-11-08 | 2007-11-08 | Detecting illumination in images |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0622251.7 | 2006-11-08 | ||
GB0622251A GB0622251D0 (en) | 2006-11-08 | 2006-11-08 | Detecting illumination in images |
GB0710786A GB0710786D0 (en) | 2007-06-05 | 2007-06-05 | Detecting illumination in images |
GB0710786.5 | 2007-06-05 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2008056140A2 true WO2008056140A2 (en) | 2008-05-15 |
WO2008056140A3 WO2008056140A3 (en) | 2008-10-02 |
Family
ID=39027269
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2007/004247 WO2008056140A2 (en) | 2006-11-08 | 2007-11-08 | Detecting illumination in images |
Country Status (4)
Country | Link |
---|---|
US (1) | US8385648B2 (en) |
JP (2) | JP5076055B2 (en) |
GB (1) | GB2456482B (en) |
WO (1) | WO2008056140A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010220197A (en) * | 2009-03-12 | 2010-09-30 | Ricoh Co Ltd | Device and method for detecting shadow in image |
US20130063562A1 (en) * | 2011-09-09 | 2013-03-14 | Samsung Electronics Co., Ltd. | Method and apparatus for obtaining geometry information, lighting information and material information in image modeling system |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8194975B2 (en) * | 2009-06-29 | 2012-06-05 | Tandent Vision Science, Inc. | Use of an intrinsic image in face recognition |
TR201101980A1 (en) * | 2011-03-01 | 2012-09-21 | Ulusoy İlkay | An object-based segmentation method. |
CN102184403B (en) * | 2011-05-20 | 2012-10-24 | 北京理工大学 | Optimization-based intrinsic image extraction method |
JP6178321B2 (en) | 2011-11-04 | 2017-08-09 | エンパイア テクノロジー ディベロップメント エルエルシー | IR signal capture for images |
US8509545B2 (en) | 2011-11-29 | 2013-08-13 | Microsoft Corporation | Foreground subject detection |
JP5382831B1 (en) * | 2013-03-28 | 2014-01-08 | 株式会社アクセル | Lighting device mapping apparatus, lighting device mapping method, and program |
JP6446790B2 (en) * | 2014-02-21 | 2019-01-09 | 株式会社リコー | Image processing apparatus, imaging apparatus, image correction method, and program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030016863A1 (en) * | 2001-07-05 | 2003-01-23 | Eastman Kodak Company | Process of identification of shadows in an image and image obtained using the process |
US7046288B1 (en) * | 1998-06-27 | 2006-05-16 | University Of East Anglia | Image recording apparatus employing a single CCD chip to record two digital optical images |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7084907B2 (en) * | 2001-01-15 | 2006-08-01 | Nikon Corporation | Image-capturing device |
US6691051B2 (en) * | 2001-08-14 | 2004-02-10 | Tektronix, Inc. | Transient distance to fault measurement |
SE0402576D0 (en) * | 2004-10-25 | 2004-10-25 | Forskarpatent I Uppsala Ab | Multispectral and hyperspectral imaging |
WO2006081438A2 (en) * | 2005-01-27 | 2006-08-03 | Tandent Vision Science, Inc. | Differentiation of illumination and reflection boundaries |
-
2007
- 2007-11-08 GB GB0909767A patent/GB2456482B/en active Active
- 2007-11-08 JP JP2009535795A patent/JP5076055B2/en active Active
- 2007-11-08 US US12/514,079 patent/US8385648B2/en active Active
- 2007-11-08 WO PCT/GB2007/004247 patent/WO2008056140A2/en active Application Filing
-
2012
- 2012-07-06 JP JP2012152977A patent/JP5301715B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7046288B1 (en) * | 1998-06-27 | 2006-05-16 | University Of East Anglia | Image recording apparatus employing a single CCD chip to record two digital optical images |
US20030016863A1 (en) * | 2001-07-05 | 2003-01-23 | Eastman Kodak Company | Process of identification of shadows in an image and image obtained using the process |
Non-Patent Citations (5)
Title |
---|
CHENG LU ET AL: "Shadow Removal via Flash/Noflash Illumination" MULTIMEDIA SIGNAL PROCESSING, 2006 IEEE 8TH WORKSHOP ON, IEEE, PI, October 2006 (2006-10), pages 198-201, XP031011048 ISBN: 0-7803-9751-7 * |
FINLAYSON G D ET AL: "4-Sensor camera calibration for image representation invariant to shading, shadows, lighting, and specularities" PROCEEDINGS OF THE EIGHT IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION. (ICCV). VANCOUVER, BRITISH COLUMBIA, CANADA, JULY 7 - 14, 2001, INTERNATIONAL CONFERENCE ON COMPUTER VISION, LOS ALAMITOS, CA : IEEE COMP. SOC, US, vol. VOL. 1 OF 2. CONF. 8, 7 July 2001 (2001-07-07), pages 473-480, XP010554126 ISBN: 0-7695-1143-0 * |
FINLAYSON G D ET AL: "Colour Constancy Using the Chromagenic Constraint" COMPUTER VISION AND PATTERN RECOGNITION, 2005. CVPR 2005. IEEE COMPUTER SOCIETY CONFERENCE ON SAN DIEGO, CA, USA 20-26 JUNE 2005, PISCATAWAY, NJ, USA,IEEE, 20 June 2005 (2005-06-20), pages 1079-1086, XP010817463 ISBN: 0-7695-2372-2 * |
FINLAYSON G ET AL: "Detecting Illumination in Images" COMPUTER VISION, 2007. ICCV 2007. IEEE 11TH INTERNATIONAL CONFERENCE ON, 14 October 2007 (2007-10-14), - 21 October 2007 (2007-10-21) pages 1-8, XP007904066 * |
SALVADOR E ET AL: "Cast shadow segmentation using invariant color features" COMPUTER VISION AND IMAGE UNDERSTANDING, ACADEMIC PRESS, SAN DIEGO, CA, US, vol. 95, no. 2, August 2004 (2004-08), pages 238-259, XP004520275 ISSN: 1077-3142 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010220197A (en) * | 2009-03-12 | 2010-09-30 | Ricoh Co Ltd | Device and method for detecting shadow in image |
US20130063562A1 (en) * | 2011-09-09 | 2013-03-14 | Samsung Electronics Co., Ltd. | Method and apparatus for obtaining geometry information, lighting information and material information in image modeling system |
Also Published As
Publication number | Publication date |
---|---|
GB0909767D0 (en) | 2009-07-22 |
JP5076055B2 (en) | 2012-11-21 |
US8385648B2 (en) | 2013-02-26 |
US20100098330A1 (en) | 2010-04-22 |
GB2456482A (en) | 2009-07-22 |
WO2008056140A3 (en) | 2008-10-02 |
JP2010509666A (en) | 2010-03-25 |
JP2012238317A (en) | 2012-12-06 |
JP5301715B2 (en) | 2013-09-25 |
GB2456482B (en) | 2011-08-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8385648B2 (en) | Detecting illumination in images | |
Gijsenij et al. | Improving color constancy by photometric edge weighting | |
Kim et al. | Robust radiometric calibration and vignetting correction | |
Hwang et al. | Context-based automatic local image enhancement | |
CN105046701B (en) | Multi-scale salient target detection method based on construction graph | |
CN102572450A (en) | Three-dimensional video color calibration method based on scale invariant feature transform (SIFT) characteristics and generalized regression neural networks (GRNN) | |
US8611660B2 (en) | Detecting illumination in images | |
US20130129204A1 (en) | Illuminant Estimation | |
US10070111B2 (en) | Local white balance under mixed illumination using flash photography | |
CN115082328A (en) | Method and apparatus for image correction | |
Lu et al. | Color constancy using 3D scene geometry | |
EP3973500A1 (en) | Physics-based recovery of lost colors in underwater and atmospheric images under wavelength dependent absorption and scattering | |
Banić et al. | Using the red chromaticity for illumination estimation | |
Fredembach et al. | The bright-chromagenic algorithm for illuminant estimation | |
Drew et al. | Closed-form attitude determination under spectrally varying illumination | |
EP1886276A2 (en) | Illuminant estimation | |
CN108876849B (en) | Deep learning target identification and positioning method based on auxiliary identification | |
Mindru et al. | Model estimation for photometric changes of outdoor planar color surfaces caused by changes in illumination and viewpoint | |
Riaz et al. | Visibility restoration using generalized haze-lines | |
Kaur et al. | A comparative review of various illumination estimation based color constancy techniques | |
CN109993690A (en) | A kind of color image high accuracy grey scale method based on structural similarity | |
Gordan et al. | Pseudoautomatic lip contour detection based on edge direction patterns | |
CN114513612B (en) | AR photographing image light supplementing method and system based on machine learning | |
Kim et al. | Non-local haze propagation with an iso-depth prior | |
US20050163392A1 (en) | Color image characterization, enhancement and balancing process |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07824480; Country of ref document: EP; Kind code of ref document: A2 |
| | ENP | Entry into the national phase | Ref document number: 2009535795; Country of ref document: JP; Kind code of ref document: A |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 0909767; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20071108 |
| | WWE | Wipo information: entry into national phase | Ref document number: 0909767.6; Country of ref document: GB |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 07824480; Country of ref document: EP; Kind code of ref document: A2 |
| | WWE | Wipo information: entry into national phase | Ref document number: 12514079; Country of ref document: US |