IE20060559U1 - Automatic detection and correction of non-red flash eye defects - Google Patents


Info

Publication number
IE20060559U1
IE20060559U1 (application IE2006/0559A)
Authority
IE
Ireland
Prior art keywords
region
image
red
pixels
eye
Prior art date
Application number
IE2006/0559A
Other versions
IES84402Y1 (en)
Inventor
Bigioi Petronel
Ciuc Mihai
Capata Adrian
Original Assignee
Fotonation Vision Limited
Filing date
Publication date
Application filed by Fotonation Vision Limited filed Critical Fotonation Vision Limited
Publication of IE20060559U1 publication Critical patent/IE20060559U1/en
Publication of IES84402Y1 publication Critical patent/IES84402Y1/en


Classifications

    • G06T7/0081
    • G06T7/0097

Abstract

A method for detecting and correcting large and small non-red flash eye defects in an image is disclosed. The method comprises selecting pixels of the image which have a luminance above a threshold value and labelling neighbouring selected pixels as luminous regions. A number of geometrical filters, including a roundness filter, are applied to the luminous regions to remove false candidate luminous regions.

Description

Automatic Detection and Correction of Non-Red Flash Eye Defects

The present invention relates to a system and method for automatically detecting and correcting eye flash artefacts in a digital image, and in particular, white-eye flash defects.
WO 03/071484 A1, Pixology, discloses a variety of techniques for red-eye detection and correction in digital images. In particular, Pixology discloses detecting the "glint" of a red-eye defect and then analyzing the surrounding region to determine the full extent of the eye defect. US 6,873,743, Steinberg, discloses a similar technique where initial image segmentation is based on both a red chrominance component and a luminance component.
White-eye defects (white eyes) do not present the red hue that corresponds to the microscopic blood vessels inside the eyeballs which cause the more common red-eye defects. Neither does white eye correspond to the glint of the eye, which is a reflection of the flash from the external moist surface of the eye's lens, i.e. the cornea. White eye occurs more rarely, but under the same conditions as red eye, i.e. pictures taken with a flash in poor or low-light illumination conditions. In some cases, white eyes appear slightly golden by acquiring a yellowish hue.
The reasons for white eye include complex reflection patterns in which the light bounces between the vitreous humor, the lens, the aqueous humor and the cornea. The defect is exacerbated in cases where the light hits the eye at an angle, such as in a profile or semi-profile. Other examples may include the case of a dark iris, such as brown eyes, where the light reflects between the iris, the aqueous humor and the cornea.
There are two main types of white-eye, small and large. Small white eyes 10, as illustrated in Figure 1, appear on far distant subjects. They resemble luminous dots, and information in their neighbourhood about other facial features is poor and therefore unreliable. Large white eyes, as illustrated in Figure 2, are very well defined and one can rely on information around them.
According to the present invention, there is provided a method of detecting such artefacts.
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:

Figure 1 illustrates an image with small white-eye defects;
Figure 2 illustrates an image with a large white-eye defect;
Figure 3 depicts a flow diagram of the automatic detection and correction of small white-eye defects according to a first aspect of the present invention;
Figure 4 depicts a flow diagram of the automatic detection and correction of large white-eye defects according to a second aspect of the present invention;
Figure 5(a) illustrates a grey-level version of an image to be corrected;
Figure 5(b) illustrates an edge-image of the image of Figure 5(a) produced using a Sobel gradient; and
Figure 5(c) illustrates the most representative circle of the image of Figure 5(b) as produced using the Hough Transform.
A first aspect of the invention provides a method for automatic detection, and subsequent correction, of small white eyes. A flowchart illustrating an embodiment of this first aspect of the invention is shown in Figure 3. In this embodiment, an eye defect is said to be white or golden if it is bright, for example, in Lab colour space the local average luminance L is higher than 100, and is not too saturated, for example, in Lab colour space, the absolute value of the a and b parameters does not exceed 17 and preferably does not exceed 15.
Initially, the luminance of each pixel of an acquired image 250 to be corrected is determined, and a selection is made, 300, of all pixels whose luminance is larger than a threshold value. In the preferred embodiment, the acquired image is in RGB space, the intensity is calculated as I = max(R, G) and the intensity threshold value is 220. Also, in the preferred embodiment, to avoid highly-saturated colours (such as pure red or pure green), the saturation, computed as abs(R-G), is compared to a threshold of 35, and the pixel is discarded if higher. As such, only high-luminance pixels are retained, and these provide seeds for a subsequent region-growing procedure.
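By way of illustration, the preferred-embodiment thresholds above (intensity I = max(R, G) > 220 and saturation abs(R-G) <= 35) may be sketched as follows; the function name and array layout are illustrative assumptions, not part of the patent:

```python
import numpy as np

def select_seed_pixels(rgb, lum_thresh=220, sat_thresh=35):
    """Select high-luminance, low-saturation seed pixels.

    Sketch of the thresholding step 300: intensity I = max(R, G)
    must exceed lum_thresh, and the saturation proxy |R - G|
    must not exceed sat_thresh.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    intensity = np.maximum(r, g)
    saturation = np.abs(r - g)
    return (intensity > lum_thresh) & (saturation <= sat_thresh)

# A tiny 1x3 image: bright neutral pixel, dark pixel, bright pure-red pixel.
img = np.array([[[250, 245, 240], [40, 40, 40], [255, 0, 0]]], dtype=np.uint8)
mask = select_seed_pixels(img)
```

Only the bright neutral pixel survives: the dark pixel fails the luminance test and the pure-red pixel is rejected as highly saturated.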
In alternative implementations, the fomiula for luminance can be taken as the Y value for an image in YCbCr space. However it will be appreciated that luminance can be taken as the L value for an image in CIE-Lab space or indeed any other suitable measure can be employed.
The selected pixels are then labelled 310. This involves identifying selected pixels neighbouring other selected pixels and labelling them as luminous regions of connected selected pixels.
These luminous regions are then subjected to a plurality of geometrical filters 320 in order to remove luminous regions which are not suitable candidates for white eyes.
In the preferred embodiment, the regions first pass through a size filter 321, which removes regions whose size is greater than an upper limit. The upper limit is dependent on the size of the image and, in the preferred embodiment, is approximately 100 pixels for a megapixel image.
Filtered regions then pass through a shape filter 322, which removes suitably sized luminous regions which are not deemed round enough. The roundness of a luminous region is assessed by comparing the ratio of the two variances along the two principal axes with a given threshold. Regions comprising less than approximately 5-10 pixels are exempt from the shape filter, as for such small regions shape is irrelevant.
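The principal-axis variance ratio used by the shape filter may be sketched as follows; the eigen-decomposition of the coordinate covariance matrix is a standard way to obtain the two principal-axis variances, though the patent does not prescribe an implementation:

```python
import numpy as np

def variance_ratio(pixels):
    """Ratio of variances along the two principal axes of a pixel region.

    A ratio near 1 indicates a roughly round region; a large ratio
    indicates an elongated one. pixels: list of (row, col) coords.
    """
    pts = np.array(pixels, dtype=float)
    cov = np.cov(pts.T)                      # 2x2 coordinate covariance
    evals = np.sort(np.linalg.eigvalsh(cov))  # principal-axis variances
    return evals[1] / max(evals[0], 1e-9)

# A 3x3 square blob is round; a 1x9 line is elongated.
square = [(y, x) for y in range(3) for x in range(3)]
line = [(0, x) for x in range(9)]
ratio_square = variance_ratio(square)
ratio_line = variance_ratio(line)
```

A threshold on this ratio then rejects elongated regions while keeping compact, roughly circular ones.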
Filling factor 323 is a process that removes luminous regions bounding empty regions if certain criteria are met. In the preferred embodiment, the ratio of the area of the luminous region to the area of the bounded empty region is determined and, if this ratio is below a certain threshold, preferably 0.5, the luminous region is removed.
The remaining luminous regions are finally passed through a skin filter 324 and a face filter 325 to prevent white spots being mis-detected as white eyes on the basis that they neighbour something that is not characteristic of the human face or of skin colour. Skin around a white-eye tends to be under-illuminated and to turn slightly reddish. A wide palette of skin prototypes is maintained for comparison with the pixels of the luminous regions. For each luminous region, the ratio of pixels characteristic of human skin to pixels not characteristic of human skin, within a bounding box, is computed and compared to a threshold value. In the preferred embodiment, the threshold is quite restrictive at 85-90%.
Similarly, a wide palette of possible face colours is maintained for comparison with the pixels of the luminous regions. For each luminous region, the ratio of pixels characteristic of the human face to pixels not characteristic of the human face, within a bounding box, is computed and compared to a threshold value. In the preferred embodiment, the threshold is quite restrictive at 85-90%. If the imposed percentage is met or exceeded, the region proceeds to the step of region growing, 330.
Region growing 330 begins by selecting the brightest pixel of each successfully filtered luminous region as a seed. Each neighbour of the seed is examined to determine whether or not it is a valley point. A valley point is a pixel that has at least two neighbouring pixels with higher intensity values, located on both sides of the given pixel along one of its four main directions (horizontal, vertical and the two diagonals). As illustrated in Table 1, the central pixel with intensity 99 is a valley point because it has two neighbours in a given direction that both have greater intensity values. Table 2 illustrates a central pixel, 99, which is not a valley point because there is no such saddle configuration along any of the four main directions.
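The valley-point test just described may be sketched as follows; the function name is an illustrative assumption:

```python
def is_valley_point(img, y, x):
    """Check whether pixel (y, x) is a valley point: it has two
    higher-intensity neighbours on opposite sides along at least one
    of the four main directions (horizontal, vertical, two diagonals).
    """
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for dy, dx in directions:
        a, b = (y - dy, x - dx), (y + dy, x + dx)
        if all(0 <= p[0] < len(img) and 0 <= p[1] < len(img[0]) for p in (a, b)):
            if img[a[0]][a[1]] > img[y][x] and img[b[0]][b[1]] > img[y][x]:
                return True
    return False

# Central pixel 99 with brighter neighbours on both horizontal sides (a valley).
patch = [[90, 90, 90],
         [120, 99, 120],
         [90, 90, 90]]
# Central pixel 99 that is a local maximum (not a valley).
peak = [[90, 90, 90],
        [90, 99, 90],
        [90, 90, 90]]
valley = is_valley_point(patch, 1, 1)
not_valley = is_valley_point(peak, 1, 1)
```

The first patch mirrors the Table 1 situation (a saddle along the horizontal direction); the second mirrors Table 2, where no direction has brighter pixels on both sides.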
Starting from the seed, an aggregation process examines the seed pixel’s neighbours and adds these to the aggregated region provided that they are not valley points.
This examination and aggregation process continues until there are no non-valley neighbours left unchecked or until a maximum threshold size is reached. If a maximum threshold size is reached, the region is deemed not to be a white eye and no further testing is carried out on this region. The outcome of this stage is a number of aggregated regions, which have been grown from the brightest points of each previously defined and filtered luminous region and aggregated according to the valley-point algorithm. It will be seen, however, that in alternative implementations, aggregation could take place before filtering, and so the filters 320 could be applied to aggregated regions rather than luminous regions.
A number of computations are then carried out on these aggregated regions, 340.
The roundness of the aggregated region is calculated 341 as R = perimeter² / (4·π·area), where R ≥ 1. R = 1 for a perfect circle, and thus the larger the R value, the more elongated the shape.
White-eyes should be round, and so must be characterised by a value of R that does not exceed a certain threshold. In the preferred embodiment, the threshold value for R is a function of the eye's size: an eye is expected to be rounder as its size increases (the smaller the eye, the poorer the approximation of its shape by a circle, and the less accurate the circle representation in the discrete plane). Three thresholds are used in the preferred embodiment for a 2-megapixel image (these scale linearly for larger/smaller image sizes): R = 1.1 for large eyes (size between 65 and 100 pixels); R = 1.3 for medium-sized eyes (size between 25 and 65 pixels); and R = 1.42 for small eyes (size less than 25 pixels).
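The roundness measure and its size-dependent thresholds can be sketched directly from the formulas above; the helper names are illustrative assumptions:

```python
import math

def roundness(perimeter, area):
    """R = perimeter^2 / (4*pi*area); R = 1 for a perfect circle."""
    return perimeter ** 2 / (4 * math.pi * area)

def roundness_threshold(size):
    """Size-dependent thresholds quoted for a 2-megapixel image
    (assumed to scale linearly for other image sizes)."""
    if size >= 65:
        return 1.1   # large eye (65-100 pixels)
    if size >= 25:
        return 1.3   # medium-sized eye (25-65 pixels)
    return 1.42      # small eye (< 25 pixels)

# For an ideal circle of radius r: perimeter = 2*pi*r, area = pi*r^2,
# so R evaluates to exactly 1.
r = 5.0
R = roundness(2 * math.pi * r, math.pi * r * r)
```

A candidate region is accepted only when its R value does not exceed the threshold for its size class.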
The contrast of the aggregated regions is then computed 342 as the ratio of the average intensity of the valley points delimiting the aggregated region to the maximum intensity value inside the region, i.e. the intensity of the brightest seed point from step 330. As small white eyes occur normally in low illumination conditions, the contrast should be high.
Most small white-eyes have a yellowish hue, meaning that they have at least some pixels characterised by high values of the b component in Lab space. Therefore the maximum value of b, bmax, is a good discriminator between actual white-eyes and, for instance, eye glints or other point-like luminous reflections. In the preferred embodiment, the pixels being processed are in RGB colour space. In order to obtain a value for the b component, the aggregated regions are transformed from RGB colour space to Lab colour space.
The maximum value of the b component, bmax, in Lab colour space is then calculated and compared with a threshold, bthreshold, step 343. If bmax >= bthreshold, the average saturation in the region is then computed, 344. Otherwise, the aggregated region is deemed not to be a white-eye.
The average saturation in the aggregated region is computed as S = sqrt(a² + b²), 344. White-eyes are more coloured than other regions, and as such the region's average saturation must exceed a threshold in order for a candidate region to be declared a white-eye, 350.
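Steps 343-350 can be sketched as follows, assuming the region's pixels are already available as (L, a, b) triples; the specific threshold defaults here are illustrative assumptions, as the patent does not quote numeric values for them:

```python
import math

def average_saturation(lab_pixels):
    """Average S = sqrt(a^2 + b^2) over a region's Lab pixels."""
    return sum(math.sqrt(a * a + b * b) for _, a, b in lab_pixels) / len(lab_pixels)

def is_white_eye_candidate(lab_pixels, b_threshold=15, s_threshold=20):
    """Sketch of steps 343-350: require a high maximum b (yellowish
    hue) and a sufficiently high average saturation. Threshold values
    are assumptions for illustration only.
    """
    b_max = max(b for _, _, b in lab_pixels)
    if b_max < b_threshold:
        return False
    return average_saturation(lab_pixels) >= s_threshold

yellowish = [(95, 5, 25), (90, 3, 30)]   # (L, a, b) triples with high b
neutral = [(95, 1, 2), (90, 0, 3)]       # low b: e.g. a glint, not a white-eye
```

The yellowish region passes both tests; the near-neutral one is rejected at the bmax stage, as a glint would be.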
All aggregated regions passing the tests outlined above are labelled white-eyes and undergo a correction procedure 399 according to the preferred embodiment of the present invention.
The correction procedure comprises setting the intensity, L in Lab space, of the aggregated region's points to the average intensity of the valley points delimiting the region, as used in the contrast calculation, step 342. In the preferred embodiment, the whole aggregated region is then smoothed by applying a 3x3 averaging filter.
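The correction step may be sketched as follows, operating on the intensity channel only; the function name and the box-filter implementation are illustrative assumptions:

```python
import numpy as np

def correct_small_white_eye(intensity, region_mask, valley_points):
    """Correct a small white-eye: set the intensity of the region's
    pixels to the average intensity of the delimiting valley points,
    then smooth with a 3x3 averaging filter (edge-replicated borders).
    """
    out = intensity.astype(float).copy()
    valley_avg = np.mean([out[y, x] for y, x in valley_points])
    out[region_mask] = valley_avg
    # 3x3 box filter via padding and summed shifts.
    padded = np.pad(out, 1, mode='edge')
    smoothed = sum(padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0
    return smoothed

# A 5x5 patch with one bright white-eye pixel surrounded by valley points.
L_chan = np.full((5, 5), 100.0)
L_chan[2, 2] = 250.0                      # the white-eye pixel
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
valleys = [(1, 2), (3, 2)]                # delimiting valley points
corrected = correct_small_white_eye(L_chan, mask, valleys)
```

In this example the bright pixel is pulled down to the valley average (100), after which the averaging filter leaves the uniform patch unchanged.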
According to a further aspect of the present invention, there is provided a method for automatic detection and correction of large white eyes, as depicted in the flowchart of Figure 4. The main characteristic of large white eyes is that, being very well defined, their shape is round and they are well separated from the iris.
Referring to Figure 4, it can be seen that the first five stages of the large white-eye automatic detection process, thresholding 400, labelling 410, size filter 430, shape filter 440 and filling factor 450, are identical to those of the small white-eye automatic detection process described above. However, it will be seen that the threshold applied in the size filter 430 will be larger than for the corresponding step 321, and that different parameters may also be required for the other stages.
Nonetheless, once the luminous regions have passed through the geometrical filters 420, the next steps determine and analyse the edges of the suspected large white-eyes.
First, an intensity gradient of each luminous region is computed 460. The gradient is calculated from a grey-scale version of each luminous region as depicted in Figure 5(a).
A gradient is any function that has a high response at points where image variations are great, and conversely a low response in uniform areas. In the preferred embodiment, the gradient is computed by linear filtering with two kernels, one for the horizontal gradient, Gx, and one for the vertical gradient, Gy. The modulus of the gradient is then computed as G = sqrt(Gx² + Gy²) and is further thresholded to obtain edge points and produce a binary edge-image as depicted in Figure 5(b). In the preferred embodiment, step 460 is carried out using a simple Sobel gradient. However, it will be appreciated that any gradient function, such as Prewitt or Canny, may be used.
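The Sobel gradient and thresholding of step 460 can be sketched as follows; the naive per-pixel convolution loop is for clarity only, and the edge threshold value is an illustrative assumption:

```python
import numpy as np

def sobel_edge_map(gray, edge_thresh):
    """Compute the gradient modulus G = sqrt(Gx^2 + Gy^2) with Sobel
    kernels and threshold it into a binary edge image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                              # vertical-gradient kernel
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    g = gray.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(kx * patch)
            gy[y, x] = np.sum(ky * patch)
    modulus = np.sqrt(gx ** 2 + gy ** 2)
    return modulus > edge_thresh

# A vertical step edge: dark left half, bright right half.
gray = np.zeros((5, 6))
gray[:, 3:] = 255.0
edges = sobel_edge_map(gray, edge_thresh=100)
```

The edge map responds on the two columns adjacent to the step and is quiet in the uniform areas, matching the behaviour described in the text.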
Once the edges of the suspected large white-eye regions have been determined, a Hough Transform is performed on each gradient image, 470. A Hough Transform detects shapes that can be parameterised, for example lines, circles and ellipses, and is applied to binary images, usually computed as edge maps from intensity images. The Hough Transform is based on an alternative space to that of the image, called the accumulator space. Each point (x,y) in the original image contributes to all points in the accumulator space corresponding, in this case, to the possible circles that may be formed to contain the (x,y) point. Thus, all points corresponding to an existing circle in the original edge-image will contribute to the point in the accumulator space corresponding to that particular circle.
Next, the most representative circle, as produced by the Hough Transform, must be detected for each region, 480. This step comprises inspecting the points in the Hough accumulator space which have a significant value. This value is dependent on the number of points in the original edge image which contribute to each point in the accumulator space. If no representative circle is found, there is deemed to be no large white eye present in that region of the image. However, if a high-value point is found, then the corresponding circle in the original image is checked and a verification of the circle 490 is carried out.
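A minimal circle Hough transform in the spirit of steps 470-480 may be sketched as follows; the coarse angular sampling, integer accumulator cells and fixed radius list are simplifying assumptions for illustration:

```python
import math
from collections import Counter

def hough_circles(edge_points, radii, shape):
    """Each edge point votes for all (cx, cy, r) circle centres it
    could lie on; the accumulator cell with the most votes gives the
    most representative circle."""
    acc = Counter()
    h, w = shape
    for (y, x) in edge_points:
        for r in radii:
            for t in range(0, 360, 10):
                cx = int(round(x - r * math.cos(math.radians(t))))
                cy = int(round(y - r * math.sin(math.radians(t))))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[(cx, cy, r)] += 1
    return acc.most_common(1)[0]   # ((cx, cy, r), votes)

# Edge points lying (approximately) on a circle of radius 4 at (10, 10).
pts = {(10 + round(4 * math.sin(math.radians(t))),
        10 + round(4 * math.cos(math.radians(t)))) for t in range(0, 360, 15)}
best, votes = hough_circles(pts, radii=[3, 4, 5], shape=(20, 20))
```

Votes from the circle's edge points concentrate on the accumulator cell for the true centre and radius, which is exactly the "significant value" inspected in step 480.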
This involves checking for example whether the most representative circle encircles the original seed point for the luminous region and/or whether the average gradient along the circle exceeds a threshold.
If a circle of a luminous region is verified, the region is corrected, 499, by darkening the pixels in the interior of the circle. In the preferred embodiment, the intensity of the pixels is set to 50 and an averaging filter is applied.
Preferably, however, the correction also takes into account the possibility of the luminous region including a glint, which should not be darkened. In RGB space, glint candidates are selected as high-luminance pixels (min(R, G) >= 220 and max(R, G) == 255). If a very round (both in aspect ratio and elongation), luminous and desaturated region is found within the interior of a luminous region, its pixels are removed from the luminous region pixels to be corrected.
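The per-pixel glint-candidate test quoted above is straightforward to express directly; the function name is an illustrative assumption:

```python
def is_glint_candidate(r, g):
    """Glint test from the text: a high-luminance, desaturated pixel,
    selected as min(R, G) >= 220 and max(R, G) == 255 in RGB space."""
    return min(r, g) >= 220 and max(r, g) == 255

glint = is_glint_candidate(255, 240)       # bright and near-neutral
too_dim = is_glint_candidate(200, 255)     # min channel below 220
not_peak = is_glint_candidate(240, 240)    # neither channel saturated
```

Pixels passing this test, when they form a very round, desaturated sub-region, are excluded from the darkening correction.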
In the case where further eye-colour information is available, for example where person-recognition procedures are available with a database of previously captured images, the additional colour information stored with that person's information in the database can be advantageously incorporated into the correction of both large and small white-eye.
In some cases, the same eye may contain both red and non-red defects. In this case, the invention herein may be combined with known methods of detecting and correcting red-eye defects in digital images. For example, in one embodiment, luminous or aggregated white-eye regions may be grouped with neighbouring detected red-eye regions to create a single compound region. This compound region may then be used as a unit defect in the correction stage.
The present invention is not limited to the embodiments described herein, which may be amended or modified without departing from the scope of the present invention.

Claims (5)

Claims:
1. A method for detecting non-red flash eye defects in an image, said method comprising: defining one or more luminous regions in said image, each region having at least one pixel having a luminance above a luminance threshold value and a redness below a red threshold value; applying at least one filter to a region corresponding to each luminous region; calculating the roundness of a region corresponding to each luminous region; and, in accordance with said filtering and said roundness, determining whether said region corresponds to a non-red flash eye defect.
2. A method according to claim 1, wherein said defining comprises: selecting pixels of the image which have a luminance above a luminance threshold value and a redness below a red threshold value; and grouping neighbouring selected pixels into said one or more luminous regions.
3. A method as recited in claim 2 wherein said defining further comprises: selecting neighbouring pixels of the image with a redness above a second red threshold value indicative of red eye artifact; and grouping said neighbouring pixels of the image with a redness above said second red threshold with pixels of a neighbouring luminous region to create a combined flash eye defect region.
4. The method according to claim 3 wherein said method further comprises the step of correcting said combined flash eye defect region.
5. The method according to claim 2, wherein said image is in RGB space, said luminance threshold value is approximately 220 out of 255, said redness is a function of the difference between said R and said G values for a pixel, and said red threshold value is approximately 35 out of 255.
IE2006/0559A 2006-07-27 Automatic detection and correction of non-red flash eye defects IES84402Y1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US (United States of America) 14/02/2006

Publications (2)

Publication Number Publication Date
IE20060559U1 true IE20060559U1 (en) 2006-11-01
IES84402Y1 IES84402Y1 (en) 2006-11-01
