WO2010149220A1 - An apparatus - Google Patents

An apparatus

Info

Publication number
WO2010149220A1
Authority
WO
WIPO (PCT)
Prior art keywords
region
image
colour information
colour
eye
Prior art date
Application number
PCT/EP2009/058056
Other languages
French (fr)
Inventor
Meng Gang
Li Jiangwei
Wang Kongqiao
Original Assignee
Nokia Corporation
Priority date
Filing date
Publication date
Application filed by Nokia Corporation filed Critical Nokia Corporation
Priority to PCT/EP2009/058056 priority Critical patent/WO2010149220A1/en
Publication of WO2010149220A1 publication Critical patent/WO2010149220A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/62Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/62Retouching, i.e. modification of isolated colours only or in isolated picture areas only
    • H04N1/624Red-eye correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30216Redeye defect

Definitions

  • the present application relates to a method and apparatus.
  • the method and apparatus relate to image processing and in particular, but not exclusively limited to, some further embodiments relate to image processing for red-eye reduction.
  • red-eye is caused by a camera flash lamp flashing in poor light and makes eyes look "blood red" instead of their natural colour.
  • the flash light of a camera may be reflected back from a retina into the objective lens of the camera.
  • the reflected flash light is red because the retina absorbs all colours of the visible spectrum except red. Images with red-eye are considered unacceptable by users because red-eye "defects" severely distort the appearance of faces.
  • red-eye defects can be reduced by increasing the light level in an immediate environment because the subject's pupils contract and the reflective surface causing red-eye will be reduced.
  • a user of the camera may not have the equipment or access to raise the light levels in the immediate environment.
  • red-eye defects can be reduced by increasing the distance between the objective lens and the flash lamp.
  • the above automatic red-eye removal methods try to locate affected areas of red-eye defects purely by identifying areas having a redness over a threshold and eliminating the areas having excessive redness.
  • Figure 1 discloses examples of red-eye removal according to known red-eye reduction techniques, which use some of the above mentioned methods.
  • Figure 1 discloses some images 1a, 1b and 1c suffering from red-eye defects 2.
  • Red-eye removal techniques, as described above, are applied to the images 1, producing altered images 4 in figures 1d, 1e and 1f.
  • the altered images 4 each have areas 3 where known red-eye removal techniques have been applied after locating an eye in an image.
  • The different areas 3 provide different results when known red-eye reduction techniques are applied to them.
  • the other two images 1e and 1f suffer from incorrect red-eye removal. That is, some pixels not part of an eye in the image are altered and the resulting images have blue blurs around the immediate eye area. This can be seen, for example, in the darkened region about the eye 5 in figure 1e.
  • a method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; determining a reference point of the first region; and modifying the colour information of at least one image element of the first region according to the distance of the at least one image element from the reference point of the first region.
  • the method comprises determining the colour information for the plurality of elements.
  • the method further comprises determining the first and second thresholds on the basis of the colour information.
  • the determining of the first and second thresholds is based on a distribution of the colour information associated with the first, second and third regions of the image.
  • the determining of the first and second thresholds comprises comparing the relative distribution of the colour information for the plurality of elements.
  • the determining of the first, second and third regions comprises determining the first region where elements have colour information over the first threshold, determining the second region where elements have colour information between the first and second thresholds, and determining the third region where elements have colour information below the second threshold.
  • the method further comprises determining that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
  • the method further comprises initiating correction of the colour information of the first region when the colour information of the first region is greater than the second region.
  • the colour information is the redness of the elements of the image.
  • the colour information is the value of the red component of red green blue colour space.
  • the image comprises an eye.
  • the first region is an eye centre.
  • the second region is an eyelid.
  • the third region is a sclera.
  • the image comprises a red-eye colour defect.
  • the elements are pixels of a digital image.
  • the method comprises determining a centre portion and an outer portion of the first region.
  • the modifying comprises modifying the colour information of each element of the centre portion according to other colour information of each respective element.
  • the modifying comprises modifying the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
  • the modifying comprises reducing the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
  • the modifying of the colour information of the at least one element in the outer portion is based on a non-linear function of the distance.
  • the outer portion is a ring.
  • the first region is circular and / or elliptical.
  • the reference point is the centre of the first region.
  • the other colour information is the blueness and / or the greenness of the image.
  • the other colour information is the value of the blue component and / or green component of red green blue colour space.
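A minimal sketch of the distance-dependent modification described in the above aspects, assuming a Gaussian-style non-linear falloff and a blend of the red component toward the mean of the blue and green components (both the falloff function and the blend target are illustrative assumptions, not the application's exact formula):

```python
import math

def correct_pixel(rgb, dist, radius):
    """Reduce the red component of one pixel, with the magnitude of the
    correction decreasing non-linearly as the distance from the reference
    point grows. The Gaussian falloff and the blend toward the mean of the
    blue and green components ('other colour information') are assumptions.
    """
    r, g, b = rgb
    weight = math.exp(-(dist / radius) ** 2) if radius > 0 else 0.0
    target = (g + b) / 2.0  # replacement redness built from other colour info
    return (r + weight * (target - r), g, b)
```

At the reference point (dist = 0) the red component is fully replaced; at distances well beyond `radius` the pixel is left essentially unchanged, giving a smooth boundary between the corrected centre portion and the outer portion.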
  • an apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; and a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
  • a reference determiner configured to determine a reference point of an image region; and a modifier configured to modify colour information of at least one image element of the image region according to the distance of the at least one image element from the reference point of the image region.
  • a method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; and determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
  • an apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; and a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
  • a method comprising: determining a reference point of an image region comprising a portion of an eye; and modifying colour information of at least one image element of the portion of the eye according to a distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
  • an apparatus comprising: a determiner configured to determine a reference point of an image region comprising a portion of an eye; and a modifier configured to modify colour information of at least one image element of the portion of the eye according to the distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
  • According to a seventh aspect of the present invention there is provided an electronic device comprising apparatus according to any of the preceding aspects.
  • According to a ninth aspect of the present invention there is provided a computer readable medium comprising a computer program thereon, the computer program performing the method of any of the preceding aspects.
  • Figure 1 illustrates examples of red-eye reduction techniques
  • Figure 2 discloses a schematic representation of an apparatus according to some embodiments
  • Figure 3 discloses a more detailed schematic representation of the apparatus according to some embodiments.
  • Figure 4 discloses a flow diagram of the process according to some embodiments
  • Figure 5 discloses a more detailed flow diagram of the process according to some embodiments
  • Figure 6 discloses a schematic representation of the apparatus according to some further embodiments
  • Figure 7 discloses a flow diagram of the process according to some further embodiments.
  • Figure 8a discloses an image with red-eye defects
  • Figure 8b discloses a greyscale redness mask of an image with red-eye defects
  • Figure 8c discloses a graph of the distribution of pixel redness of an image with red-eye defects
  • Figure 8d discloses a binarized mapping of an image with red-eye defects
  • Figure 9a discloses an image with red-eye defects analysed according to some further embodiments.
  • Figure 9b discloses a graph of the magnitude of colour information correction versus distance according to some further embodiments.
  • Figure 9c discloses a modified image with red-eye correction according to some further embodiments.
  • Figure 2 discloses a schematic block diagram of an exemplary electronic device 10 or apparatus.
  • the electronic device is configured to perform red-eye reduction techniques according to some embodiments of the application.
  • the electronic device 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system.
  • the electronic device is a digital camera.
  • the electronic device 10 comprises an integrated camera module 11, which is linked to a processor 15.
  • the processor 15 is further linked to a display 12.
  • the processor 15 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 14 and to a memory 16.
  • the camera module 11 and / or the display 12 is separate from the electronic device and the processor receives signals from the camera module 11 via the transceiver 13 or another suitable interface.
  • the processor 15 may be configured to execute various program codes 17.
  • the implemented program codes 17, in some embodiments, comprise image capture digital processing or configuration code.
  • the implemented program codes 17 in some embodiments further comprise additional code for further processing of images.
  • the implemented program codes 17 may in some embodiments be stored for example in the memory 16 for retrieval by the processor 15 whenever needed.
  • the memory 16 in some embodiments may further provide a section 18 for storing data, for example data that has been processed in accordance with the application.
  • the camera module 11 comprises a camera 19 having a lens for focussing an image on to a digital image capture means such as a charge-coupled device (CCD).
  • the camera module 11 further comprises a flash lamp 20 for illuminating an object before capturing an image of the object.
  • the flash lamp is linked to the camera processor.
  • the camera 19 is also linked to a camera processor 21 for processing signals received from the camera.
  • the camera processor 21 is linked to camera memory 22 which may store program codes for the camera processor 21 to execute when capturing an image.
  • the implemented program codes (not shown) may in some embodiments be stored for example in the camera memory 22 for retrieval by the camera processor 21 whenever needed.
  • the camera processor 21 and the camera memory 22 are the processor 15 and the memory 16 respectively.
  • the apparatus capable of implementing the red-eye reduction technique may in some embodiments be implemented at least partially in hardware without the need for software or firmware.
  • the user interface 14 in some embodiments enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the electronic device 10, for example via the display 12.
  • the transceiver 13 enables a communication with other electronic devices, for example via a wireless communication network.
  • a user of the electronic device 10 may use the camera module 11 for capturing an image that is to be transmitted to some other electronic device or that is to be stored in the data section 18 of the memory 16.
  • a corresponding application in some embodiments may be activated to this end by the user via the user interface 14.
  • This application which may in some embodiments be run by the processor 15, causes the processor 15 to execute the code stored in the memory 16.
  • the processor 15 may then process the digital image in the same way as described with reference to Figures 4 and 5 and 7.
  • the resulting image may in some embodiments be provided to the transceiver 13 for transmission to another electronic device.
  • the digital image could be stored in the data section 18 of the memory 16, for instance for a later transmission or for a later presentation on the display 12 by the same electronic device 10.
  • the electronic device 10 may in some embodiments also receive a digital image with red-eye defects from another electronic device via its transceiver 13.
  • the processor 15 executes the processing program code stored in the memory 16.
  • the processor 15 may then in these embodiments process the received image with red-eye defects, and may process the digital image with red-eye defects in the same way as described with reference to Figures 4, 5 and 7.
  • Execution of the red-eye reduction processing program code could in some embodiments be triggered as well by an application that has been called by the user via the user interface 14.
  • FIG. 3 shows a schematic configuration for red-eye reduction apparatus for digital images including a camera module 11, a digital image processor 300, an eye location module 302, a masking module 304, an image filter module 306, a binarization module 308 and an image correction module 310.
  • red-eye reduction apparatus may comprise some but not all of the above parts.
  • the red-eye reduction apparatus may comprise only the digital image processor 300, where a digital image with red-eye defects from an external source is input to the digital image processor 300 with preconfigured structure and parameters, and the digital image processor 300 further outputs the processed image to an external image correction module 310.
  • the digital image processor 300 may be the 'core' element of the red-eye reduction apparatus and other parts or modules may be added or removed dependent on the application.
  • the modules represent processors configured to carry out the processes described below, which are located in the same, or different chipsets.
  • the digital image processor 300 is configured to carry out all the processes and Figure 3 exemplifies the analysis and modification of the digital image.
  • the camera module 11 receives light waves reflected off an object and converts them into digital electrical signals with digital capture means.
  • the camera module 11 may be any suitable image capture means.
  • the image capture means is a digital camera.
  • in some embodiments other types of cameras are used, such as an infrared camera.
  • FIG. 8a shows an enlarged portion of a digital image comprising an eye.
  • the pupil of the eye is red because the light from the flash lamp has been reflected back to the objective lens of the camera.
  • the camera module 11 may output the digital image data in the form of an electrical signal, which is passed to the digital image processor 300.
  • the digital image processor 300 may in some embodiments send the digital image to the eye location module 302.
  • the eye location module 302 may in these embodiments use known algorithms to locate the area of the eyes in a digital image, identifying a region of a face in the digital image which comprises at least one eye.
  • the eye location algorithm may be based on statistical methods or prior knowledge.
  • a statistical method may determine if an area of a digital image is an eye candidate or not and locate coarse eye positions by exhaustively searching in an area of the digital image containing a face. By repeating a location algorithm on different digital images, statistical learning can be achieved whereby the most likely positions of eye locations in the digital image containing a face are determined.
  • the locations of the eyes are determined using prior knowledge. For example, an eye location algorithm may determine the coarse location of eyes in a digital image on the basis that eyes are the darkest pixels and / or eyes have round edges on the upper face. However, as shown in Figure 1, the precise location of the eye may not be correctly identified.
  • the eye location module 302 may in some embodiments locate an eye in a digital image and crop the region containing the eye(s) from faces in the digital image.
  • Figure 8a discloses an example of a cropped eye area of an image.
  • the cropped eye area of the digital image may then in some embodiments be sent to the digital image processor 300.
  • in some embodiments the operation of eye location is not necessary because this operation is carried out in another electronic device; for example, in some embodiments a cropped eye region of the digital image is sent to the electronic device 10 using the transceiver 13.
  • the digital image processor 300 in some embodiments initiates an analysis of a plurality of elements of the cropped eye image. In some embodiments the plurality of elements are individual pixels of the digital image. In alternative embodiments, the plurality of elements comprises larger portions of the image, for example a plurality of groups of pixels.
  • the digital image processor 300 in some embodiments sends the cropped eye image region to the masking module 304.
  • the masking module 304 may in some embodiments analyse colour information of each pixel of the cropped eye image.
  • the masking module 304 determines the redness of each pixel in the cropped eye image.
  • the redness of the pixel is determined in red-green-blue (RGB) colour space.
  • other colour spaces are used to determine the redness of each pixel.
  • some embodiments may determine the redness of each pixel in one or more of the following colour spaces: RGB, cyan-magenta-yellow-key (CMYK), luma-in-phase-quadrature (YIQ) and YCbCr.
  • the masking module 304 in some embodiments determines the red component of each pixel in the cropped eye image as shown in step 404 of figure 4.
  • the masking module 304 determines the redness of each pixel using equation [1]:
  • r = 255 · R / (R + G + B + K) [1]
  • where r is the redness of a pixel, R is the red colour component in RGB colour space, B is the blue colour component in RGB colour space, G is the green colour component in RGB colour space and K is a small constant.
  • K is provided to avoid the case where the denominator of equation [1] equals zero.
  • the masking module 304 returns a value of close to 255 if a pixel is completely red, and close to 0 if the pixel is black.
  • the masking module 304 in some embodiments repeats the above determination of the redness of a pixel for all the pixels, e.g. a total of n pixels, in the cropped eye image region.
  • the repeat operation is shown in step 416.
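The per-pixel redness computation of equation [1] can be sketched as follows (the value chosen for the small constant K is an assumption):

```python
def redness(pixel, k=1.0):
    """Redness of one RGB pixel per equation [1]: r = 255*R / (R + G + B + K).

    K (k=1.0 here, an assumed value) keeps the denominator non-zero when
    the pixel is black (R = G = B = 0).
    """
    r, g, b = pixel
    return 255.0 * r / (r + g + b + k)
```

A completely red pixel, (255, 0, 0), yields a value close to 255, and a black pixel yields 0, matching the behaviour described above.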
  • the masking module 304 in some embodiments then maps the distribution of pixels against redness value, r, with a grayscale map.
  • Figure 8b shows a grayscale map of the redness of the pixels in the cropped eye image region. The higher the "redness" of a pixel within the cropped eye image region, the lighter the corresponding grayscale map pixel. The area most affected by the "red-eye" effect appears lightest in this image.
  • the masking module 304 in some embodiments sends the grayscale map to the digital image processor 300. In some embodiments, the step of determining the redness information is not necessary because the redness information may have previously been determined, for example by another electronic device.
  • the digital image processor 300 in some embodiments sends the grayscale map to the image filter module 306.
  • the image filter module 306 determines thresholds of redness based on the statistical distribution of pixel "redness" in the grayscale map.
  • the redness thresholds are in some embodiments determined in the image filter module using a maximum entropy self-adaptive histogram threshold method.
  • a statistical distribution of redness for the pixels of the cropped image is disclosed in Figure 8c.
  • Figure 8c shows the number of pixels having a particular redness. For instance, figure 8c shows that a first group of pixels 801 has a high redness value, a second group 802 has a medium redness value and a third group 803 has a low redness value.
  • the first group of pixels 801 are an eye centre 805 comprising red-eye defects, the second group of pixels 802 are an eyelid 806 and the third group of pixels are the sclera or white of the eye 804.
  • a global static redness threshold for determining the region associated with red-eye defects for all images leads to incorrect red-eye defect removal.
  • a red-eye reduction technique may be applied and produce colour errors on the image where non-red-eye regions are affected. For example, skin tones and lighting conditions vary from image to image, so one image may be generally redder than another.
  • a global static redness threshold for identifying red-eye defects may therefore provide unsatisfactory results.
  • some embodiments provide a redness threshold which is dynamically applied to the redness information of the grayscale map.
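One way to realise such a data-dependent threshold is Kapur-style maximum-entropy histogram thresholding. The sketch below finds a single threshold from a redness histogram; the self-adaptive method described here would apply the same idea to obtain both redness thresholds:

```python
import math

def max_entropy_threshold(hist):
    """Return the threshold t that maximises the sum of the entropies of
    the two classes {0..t} and {t+1..}, computed from a histogram of
    pixel counts per redness value (Kapur-style maximum entropy)."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    cum = 0.0
    for t in range(len(p) - 1):
        cum += p[t]
        if cum <= 0.0 or cum >= 1.0:
            continue  # one class would be empty
        h_low = -sum(q / cum * math.log(q / cum)
                     for q in p[: t + 1] if q > 0)
        h_high = -sum(q / (1 - cum) * math.log(q / (1 - cum))
                      for q in p[t + 1:] if q > 0)
        if h_low + h_high > best_h:
            best_h, best_t = h_low + h_high, t
    return best_t
```

On a histogram with well-separated redness groups, the chosen threshold falls between the groups, adapting to each image's distribution rather than using a global static value.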
  • the image filter module 306 in some embodiments compares the relative distribution of the redness of the pixels. In this way, the image filter module 306 determines the different regions of the eye and surrounding the eye based on certain assumptions.
  • the image filter module 306 in some embodiments identifies a plurality of pixels which have a low redness, which is identified as the sclera 804. The image filter determines that the pixels with the low redness are the sclera 804 because the sclera 804 is the least red relative to the rest of the eye.
  • the image filter module 306 in some embodiments further identifies a group of pixels which have the highest redness values, which is identified as the eye centre comprising red-eye defects 805.
  • the eye centre 805 comprises both the pupil and the iris.
  • the image filter module 306 determines a plurality of pixels of intermediate redness, i.e. having a redness between the redness of the sclera region and the redness of the eye centre region.
  • the region having an intermediate redness is identified as an eyelid region 806.
  • the image filter module 306 identifies separate thresholds of redness for filtering each pixel of the cropped eye image into the separate regions.
  • Figure 8c shows a first threshold of redness 851.
  • the first threshold of redness distinguishes between the eye centre 805 having red-eye defects and the eyelid 806 having intermediate redness.
  • Figure 8c further shows a second redness threshold 853.
  • the second redness threshold 853 distinguishes between the sclera region 804 and the eyelid region 806.
  • the image filter module 306 determines the redness thresholds as shown in step 405 in figure 4.
  • the image filter module 306 in some embodiments filters pixel i by analyzing the redness r of each pixel and determining which region the pixel i falls into by comparing the redness r to the first and second thresholds.
  • Figure 5 discloses the filtering in more detail.
  • the image filter module 306 in some embodiments then filters each pixel i in the cropped eye image as shown in step 406 of figure 4.
  • Step 406 in figure 5 corresponds to an expanded version of the step of filtering pixel i on the basis of the redness information as shown in figure 4.
  • the image filtering module 306 determines whether the pixel has a redness r below the first redness threshold as shown in step 502. If not, the image filter module 306 in some embodiments determines that the pixel is in the eye centre region 805 as shown in step 506. If the pixel has a redness r below the first redness threshold, the image filter module 306 in some embodiments determines whether the pixel has a redness below the second redness threshold as shown in step 504.
  • If the pixel has a redness below the second redness threshold, the image filter module 306 determines that the pixel is in the sclera 804 as shown in step 510. If the pixel has a redness above the second redness threshold, the image filter module 306 in some embodiments determines that the pixel is in the eyelid region as shown in step 508.
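The two-threshold filtering of steps 502-510 can be sketched as follows (here `t1` is the first, higher redness threshold and `t2` the second, lower one; region labels are illustrative):

```python
def classify_pixel(r, t1, t2):
    """Place one pixel with redness r into a region, as in figure 5:
    at or above t1 -> eye centre (step 506), between t2 and t1 -> eyelid
    (step 508), below t2 -> sclera (step 510)."""
    if r >= t1:
        return "eye centre"
    if r >= t2:
        return "eyelid"
    return "sclera"
```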
  • the image filter module 306 in some embodiments then repeats the filtering 406 for all the pixels in the cropped eye region as shown in step 424.
  • the image filter module 306 in some embodiments then sends the information of which pixels are in which regions to the digital image processor 300.
  • the digital image processor 300 in some embodiments then sends the information of which pixels are in which regions to the binarization module 308.
  • the binarization module 308 generates a mapping of which pixels are in a certain region.
  • Figure 8d shows a binarization mapping of the cropped eye image showing the most red pixels 808 requiring red-eye correction in white and other pixels in black 809.
  • the step of binarization may also generate noise elements 807, which are not part of the eye centre region. In terms of red-eye reduction, the binarization module generates a mapping of which pixels are in the eye centre region 805.
  • the binarization is performed to provide a simple reference of which pixels need to be corrected for red-eye defects.
  • the use of the first and second thresholds means that the step of binarization differentiates between the eye centre 805 and the eyelid 806.
  • the binarization module binarizes the pixels as shown in operation 408 of figure 4.
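The binarization of operation 408 reduces the region information to a simple reference mask. A sketch over a redness map, assuming the first threshold separates the eye centre from everything else:

```python
def binarize(redness_rows, t1):
    """Binarized mapping as in figure 8d: pixels whose redness is at or
    above the first threshold t1 (the eye centre 805) become 1 (white),
    all other pixels become 0 (black)."""
    return [[1 if r >= t1 else 0 for r in row] for row in redness_rows]
```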
  • the binarization module 308 in some embodiments sends the binarized cropped eye image to the digital image processor 300. As mentioned above, the step of binarization may generate noise elements 807.
  • the digital image processor 300 in some embodiments then sends the binarized mapping of the cropped eye image to a noise reduction module 312.
  • the noise reduction module 312 in some embodiments analyzes the information and removes pixels which are incorrectly filtered into the different eye regions.
  • the noise reduction module 312 in some embodiments applies morphological operations to the binarized mapping of the cropped eye image.
  • the morphological operations may comprise repeated image merging operations and splitting operations.
  • the morphological operations comprise one or more of the following image processing operations: dilation, erosion, opening and closing operations.
  • image dilation and erosion operations are used to "fill" small holes within regions of a digital image and unwanted "noise" pixels can be removed.
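A minimal sketch of these morphological operations on the binarized mapping (a 3x3 structuring element is assumed): opening removes isolated noise pixels such as the elements 807, while closing fills small holes within a region.

```python
def _window(img, y, x):
    """Values of the 3x3 neighbourhood of (y, x) that lie inside the image."""
    h, w = len(img), len(img[0])
    return [img[y + dy][x + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if 0 <= y + dy < h and 0 <= x + dx < w]

def dilate(img):
    """Pixel becomes 1 if any pixel in its 3x3 neighbourhood is 1."""
    return [[1 if any(_window(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def erode(img):
    """Pixel stays 1 only if every in-bounds neighbour is 1."""
    return [[1 if all(_window(img, y, x)) else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def opening(img):
    """Erosion then dilation: removes isolated noise pixels."""
    return dilate(erode(img))

def closing(img):
    """Dilation then erosion: fills small holes within regions."""
    return erode(dilate(img))
```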
  • without noise reduction, the red-eye correction would be applied to the noise elements, which would result in an unsatisfactory image because some parts of the resulting corrected image would have a reduced redness where correction is not necessary.
  • in some embodiments the step of noise reduction is not necessary because there will be few or no noise elements introduced into the binarized mapping of the cropped eye image.
  • the noise reduction module 312 in some embodiments returns the binarized mapping of the cropped eye image to the digital image processor 300.
  • the digital image processor 300 determines the eye centre region which requires red-eye defect correction as shown in step 414.
  • the digital image processor 300 is able to segment an eye image into the three regions of sclera 804, eye centre 805 and eyelid 806. Furthermore, the digital image processor 300 can handle coarse initial locations of eyes in an image and reliably position red-eye defects.
  • the digital image processor 300 determines whether red-eye correction is necessary as shown in step 418.
  • the digital image processor 300 determines the mean value of the red components in the eye centre region.
  • the digital image processor 300 determines whether the mean red component of the eye centre region is greater than the mean red component of the eyelid region. If this is the case then the digital image processor 300 determines that there are red-eye defects in the image and red-eye defect correction is necessary. This is because generally in a normal eye image the eyelid is redder than the eye centre region; therefore, if this is not the case, the image contains red-eye defects.
  • the image processor 300 alternatively determines the mean redness of the eye centre region and compares it to the mean redness of the eyelid region.
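The comparison of step 418 can be sketched as below. The function and mask names are illustrative, and the sketch assumes an H x W x 3 RGB image array with boolean region masks taken from the binarized mapping:

```python
import numpy as np

def needs_red_eye_correction(image: np.ndarray,
                             eye_centre_mask: np.ndarray,
                             eyelid_mask: np.ndarray) -> bool:
    """Step 418: in a normal eye image the eyelid is redder than the
    eye centre, so a redder eye centre signals a red-eye defect.
    `image` is H x W x 3 in RGB order; the masks are boolean."""
    mean_red_centre = image[..., 0][eye_centre_mask].mean()
    mean_red_eyelid = image[..., 0][eyelid_mask].mean()
    return mean_red_centre > mean_red_eyelid
```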
  • operation 418 is not necessary.
  • the digital image processor 300 has prior knowledge that a digital image comprises red-eye defects.
  • detection of red-eye defects is carried out in another electronic device.
  • if the digital image processor 300 determines that red-eye defect correction is necessary, the digital image processor sends the binarized mapping of the cropped eye image and the original image to an image correction module 310.
  • FIG. 6 discloses a more detailed version of the image correction module 310.
  • the image correction module 310 comprises an image correction controller 606.
  • the image correction controller 606 is linked to an asymptotic recovery correction module 602 and a fixed correction module 604.
  • the image correction controller 606 in some embodiments determines the eye centre region 805 as shown in step 702.
  • the eye centre region 805 is determined from the binarized mapping of the cropped eye image.
  • the image correction controller 606 in some embodiments then determines a reference point of the eye centre region 805 as shown in step 704. In some embodiments the image correction controller determines a centre point 902 in figure 9a. The image correction controller determines the centre point 902 by determining the mean centre of all the pixels which are in the eye centre region 805.
  • the image correction controller determines the mean centre (x0, y0) by using equations [2] and [3]: x0 = (1/n) Σ xi [2]; y0 = (1/n) Σ yi [3], wherein (x0, y0) is the coordinate of the mean centre of the eye centre region, n is the total number of pixels and (xi, yi) is the coordinate of a pixel in the centre region.
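Equations [2] and [3] amount to averaging pixel coordinates over the region; a minimal sketch, assuming a boolean NumPy mask for the eye centre region 805 (the function name is illustrative):

```python
import numpy as np

def mean_centre(region_mask: np.ndarray) -> tuple:
    """Equations [2] and [3]: the mean centre (x0, y0) is the average
    of the coordinates of all n pixels in the eye centre region."""
    ys, xs = np.nonzero(region_mask)   # row index = y, column index = x
    return float(xs.mean()), float(ys.mean())
```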
  • the image correction controller 606 determines an inner portion 908 and an outer portion 910 of the eye centre region as shown in step 706.
  • the eye centre region 805 comprises the pupil and the iris, and the red-eye defects may not be uniform across the eye centre region. Typically, the inner portion 908 of the eye centre region may be seriously affected by red-eye defects. However, the outer region of the eye and the outer portion 910 of the eye centre region are less affected by red-eye defects.
  • the image correction controller 606 determines the dimensions of the ring-like outer portion 910 using equations [4] and [5] as shown in step 706: r_min = min over (xi, yi) in B of d((xi, yi), (x0, y0)) [4]; r_max = max over (xi, yi) in B of d((xi, yi), (x0, y0)) [5]
  • B is the eye centre region
  • (xi, yi) is a point within B and
  • d(·,·) is the Euclidean distance between two points.
  • the geometric centre is determined from equations [2] and [3] in step 704.
  • r_min and r_max are the radii of the inner and outer circles of B respectively.
  • the inner circle 904 and the outer circle 906 are shown in figure 9a.
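Equations [4] and [5] can be sketched as follows, assuming the mean centre from equations [2] and [3] and a boolean region mask; the function name is illustrative:

```python
import numpy as np

def ring_radii(region_mask: np.ndarray, centre) -> tuple:
    """Equations [4] and [5]: r_min and r_max are the smallest and
    largest Euclidean distances from the mean centre (x0, y0) to any
    pixel (xi, yi) of the eye centre region B."""
    x0, y0 = centre
    ys, xs = np.nonzero(region_mask)
    d = np.hypot(xs - x0, ys - y0)
    return float(d.min()), float(d.max())
```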
  • the image correction controller 606 determines the distance of a pixel i from the centre point 902.
  • the image controller determines the distance using equation [6].
  • the image correction controller 606 determines the type of defect correction as shown in step 710.
  • the image correction controller 606 determines whether pixel i is in the inner portion 908 of the eye centre region. That is, the image correction controller 606 determines whether the distance of the pixel is less than r_min as shown in step 712.
  • if the pixel is within the inner portion 908, then the pixel is subject to a significant red-eye defect. As a result, the pixel is often saturated with red light.
  • the image correction controller 606 sends the colour information of the pixel to the fixed correction module 604 as shown in step 716.
  • the fixed correction module 604 modifies the red component value of the pixel using equation [7].
  • the pixel within the inner portion 908 is modified using the average value of the corresponding blue and green component values for the same pixel: R' = (G + B) / 2 [7], wherein R' is the modified red component value of the pixel, G is the green component value of the pixel and B is the blue component value of the pixel.
  • the fixed correction module 604 in some embodiments then returns the modified pixel colour information to the image correction controller 606.
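The fixed correction of equation [7] is a one-line computation; a minimal per-pixel sketch (the function name is illustrative):

```python
def fixed_correction(g: float, b: float) -> float:
    """Equation [7]: the saturated red component is replaced by the
    average of the green and blue components, R' = (G + B) / 2."""
    return (g + b) / 2.0
```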
  • the image correction controller 606 may instead determine that pixel i is in the outer portion 910 of the eye centre region. That is, the image correction controller 606 determines that the distance of the pixel is greater than r_min as shown in step 712.
  • the pixel is subject to a mild red-eye defect. Between r_min and r_max, in the outer portion 910, there exists a transition area where the red-eye defect gradually weakens because there is less reflected red light. Most colour information of the pixels in this region is useful, so the recovery algorithm for the outer portion 910 keeps most of the information but removes the noise due to the red-eye defects.
  • the image correction controller 606 in some embodiments sends the colour information of the pixel and the distance of the pixel from the mean centre point 902 to the asymptotic recovery correction module 602 as shown in step 714.
  • the asymptotic recovery correction module 602 modifies the red component of the pixel using equations [8] and [9].
  • the pixel in the outer portion 910 is modified in dependence of the distance of the pixel from the mean centre point 902.
  • the magnitude of the correction used on the pixel is shown in figure 9b using equation [9]. That is, the original red component of the pixel is more reliable the further away the point is from the mean centre point 902 and so the information of the redness should be largely retained.
  • R' is the modified red component value for the pixel
  • R is the original red component value for the pixel
  • G is the green component value for the pixel
  • B is the blue component value for the pixel.
  • the recovery algorithms remove red-eye defects and keep useful colour information at the same time. This means that the corrected eye appearance looks natural. Indeed, the algorithm does not substantially interfere with the original image such that a smooth correction is achieved.
  • the asymptotic recovery correction module 602 in some embodiments returns the modified pixel colour information to the image correction controller 606.
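Equations [8] and [9] are not reproduced verbatim in this text, so the sketch below substitutes a simple linear blend that matches the described behaviour (full fixed correction at r_min, original red largely retained towards r_max); the weight function w is an assumption standing in for the patent's equations, and figure 9b only shows the correction magnitude falling with distance:

```python
def asymptotic_recovery(r: float, g: float, b: float,
                        d: float, r_min: float, r_max: float) -> float:
    """Blend from full fixed correction at the inner circle (d = r_min)
    to no correction at the outer circle (d = r_max). The linear weight
    w is an illustrative assumption, not the patent's equation [9]."""
    w = (d - r_min) / (r_max - r_min)   # 0 at r_min, 1 at r_max
    return w * r + (1.0 - w) * (g + b) / 2.0
```

A non-linear weight (as suggested by the claim wording) would follow the same pattern with a different w.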
  • the red-eye correction of step 710 is repeated for all the pixels in the eye centre region 805.
  • the image correction controller 606 modifies the original image by using the modified red component value of each pixel as determined in step 710. Thereafter the image correction controller 606 sends the modified image to the digital image processor 300.
  • Figure 9c discloses an example of a modified image with the red-eye reduction techniques disclosed herein applied to it.
  • the modified image with red-eye correction applied is sent to the display 12 for the user to view.
  • the modified image is stored in memory 16 or transmitted to another electronic device using the transceiver 13.
  • user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers.
  • various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto.
  • the embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware.
  • any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
  • Programs such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules.
  • the resultant design in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
  • circuitry or circuit may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry); (b) combinations of circuits and software (and/or firmware), such as and where applicable: (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions; and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware.
  • circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or other network device.
  • processor and memory may comprise but are not limited to in this application: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.


Abstract

A method comprises: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; determining a reference point of the first region; and modifying the colour information of at least one image element of the first region according to the distance of the at least one image element from the reference point of the first region.

Description

An Apparatus
The present application relates to a method and apparatus. In some embodiments the method and apparatus relate to image processing and in particular, but not exclusively, some further embodiments relate to image processing for red-eye reduction.
The phenomenon of red-eye is caused by a camera flash lamp flashing in poor light and makes eyes look "blood red" instead of their natural colour. The flash light of a camera may be reflected back from a retina into the objective lens of the camera. The reflected flash light is red because the retina absorbs all colours of the visible spectrum except red. Images with red-eye are considered unacceptable by users because red-eye "defects" severely distort the appearance of faces.
It is known that red-eye defects can be reduced by increasing the light level in an immediate environment because the subject's pupils contract and the reflective surface causing red-eye will be reduced. However a user of the camera may not have the equipment or access to raise the light levels in the immediate environment.
It is also known that red-eye defects can be reduced by increasing the distance from the objective lens and the flash lamp. However, it may be difficult to increase the distance between the objective lens and the flash lamp in compact cameras or electronic devices with an integrated camera because the maximum distance is limited by the size of the camera or electronic device.
Some alternative electronic based solutions to reduce red-eye defects are known. When processing face images to reduce red-eye defects, the eyes are located in the image.
For example it is known to select pixels that best represent the red-eye defects by hand and then to modify neighboring pixels to remove red hues. However, selecting pixels by hand is time consuming and can require separate software and / or hardware.
In another proposal it is known to provide an automated detection and correction method where user interactivity is significantly reduced. However, spatial regions containing eye colour defects have to be defined beforehand, which may still be time consuming.
In yet another proposal it is known to provide a method for reducing red-eye which is a fully automatic red-eye removal algorithm. The fully automatic methods use a masking module to analyze information of pixels in a digital image to retrieve potential areas affected by red-eye in an image.
The above automatic red-eye removal methods try to locate affected areas of red-eye defects purely by identifying areas having a particular redness over a threshold and eliminating the areas having excessive redness.
However, automated red-eye reduction methods are not consistent and while the fully automatic methods may reduce red-eye defects in some digital images, they fail to remove red-eye defects in other digital images.
Figure 1 discloses examples of red-eye removal according to known red-eye reduction techniques, which use some of the above mentioned methods. Figure 1 discloses some images 1a, 1b and 1c suffering from red-eye defects 2. Red-eye removal techniques, as described above, are applied to the images 1 producing altered images 4 in figures 1d, 1e and 1f.
The altered images 4 each have areas 3 where known red-eye removal techniques have been applied after locating an eye in an image. The different areas 3 provide different results when known red-eye reduction techniques are applied to them. Compared to the leftmost image 1d, the other two images 1e and 1f suffer from incorrect red-eye removal. That is, some pixels not part of an eye in the image are altered and the resulting images have blue blurs around the immediate eye area. This can be seen for example by the darkened region about the eye 5 in figure 1e.
In a first aspect of the present invention there is provided a method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; determining a reference point of the first region; and modifying the colour information of at least one image element of the first region according to the distance of the at least one image element from the reference point of the first region.
Preferably the method comprises determining the colour information for the plurality of elements.
Preferably the method further comprises determining the first and second thresholds on the basis of the colour information.
Preferably the determining of the first and second thresholds is based on a distribution of the colour information associated with the first, second and third regions of the image.
Preferably the determining of the first and second thresholds comprising comparing the relative distribution of the colour information for the plurality of elements.
Preferably the determining of the first, second and third regions comprises determining the first region when some of the elements have a colour information over the first threshold, determining the second region when some of the elements have a colour information between the first and second threshold and determining the third region when some of the elements have a colour information below the second threshold.
Preferably the method further comprises determining that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
Preferably the method further comprises initiating correction of the colour information of the first region when the colour information of the first region is greater than the second region.
Preferably wherein the colour information is the redness of the elements of the image.
Preferably wherein the colour information is the value of the red component of red green blue colour space.
Preferably wherein the image comprises an eye.
Preferably the first region is an eye centre, the second region is an eyelid and the third region is a sclera.
Preferably the image comprises a red-eye colour defect.
Preferably the elements are pixels of a digital image.
Preferably the method comprises determining a centre portion and an outer portion of the first region.
Preferably the modifying comprises modifying the colour information of each element of the centre portion according to other colour information of each respective element.
Preferably the modifying comprises modifying the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
Preferably the modifying comprises reducing the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
Preferably the modifying of the colour information of the at least one element in the outer portion is based on a non-linear function of the distance.
Preferably the outer portion is a ring.
Preferably the first region is circular and / or elliptical.
Preferably the reference point is the centre of the first region.
Preferably the other colour information is the blueness and / or the greenness of the image.
Preferably the other colour information is the value of the blue component and / or green component of red green blue colour space.
In a second aspect of the invention there is provided an apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; a reference determiner configured to determine a reference point of an image region; and a modifier configured to modify colour information of at least one image element of the image region according to the distance of the at least one image element from the reference point of the image region.
In a third aspect of the invention there is provided a method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; and determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
In a fourth aspect of the invention there is provided an apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; and a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
In a fifth aspect of the invention there is provided a method comprising: determining a reference point of an image region comprising a portion of an eye; and modifying colour information of at least one image element of the portion of the eye according to a distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
In a sixth aspect of the present invention there is provided an apparatus comprising: a determiner configured to determine a reference point of an image region comprising a portion of an eye; and a modifier configured to modify colour information of at least one image element of the portion of the eye according to the distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
In a seventh aspect of the present invention there is an electronic device comprising apparatus according to any of the preceding aspects.
In an eighth aspect of the present invention there is a chipset according to any of the preceding aspects.
In a ninth aspect of the present invention there is a computer readable medium comprising a computer program thereon, the computer program performing the method of any of the preceding aspects.
For a better understanding of the present application and as to how the same may be carried into effect, reference will now be made by way of example to the accompanying drawings in which:
Figure 1 illustrates examples of red-eye reduction techniques;
Figure 2 discloses a schematic representation of an apparatus according to some embodiments;
Figure 3 discloses more detailed schematic representation of the apparatus according to some embodiments;
Figure 4 discloses a flow diagram of the process according to some embodiments;
Figure 5 discloses a more detailed flow diagram of the process according to some embodiments;
Figure 6 discloses a schematic representation of the apparatus according to some further embodiments;
Figure 7 discloses a flow diagram of the process according to some further embodiments;
Figure 8a discloses an image with red-eye defects;
Figure 8b discloses a greyscale redness mask of an image with red-eye defects;
Figure 8c discloses a graph of the distribution of pixel redness of an image with red-eye defects;
Figure 8d discloses a binarized mapping of an image with red-eye defects;
Figure 9a discloses an image with red-eye defects analysed according to some further embodiments;
Figure 9b discloses a graph of the magnitude of colour information correction versus distance according to some further embodiments; and
Figure 9c discloses a modified image with red-eye correction according to some further embodiments.
The following describes apparatus and methods for the provision of improved red-eye reduction techniques. In this regard reference is first made to Figure 2 which discloses a schematic block diagram of an exemplary electronic device 10 or apparatus. The electronic device is configured to perform red-eye reduction techniques according to some embodiments of the application.
The electronic device 10 is in some embodiments a mobile terminal, mobile phone or user equipment for operation in a wireless communication system. In other embodiments, the electronic device is a digital camera. The electronic device 10 comprises an integrated camera module 11, which is linked to a processor 15. The processor 15 is further linked to a display 12. The processor 15 is further linked to a transceiver (TX/RX) 13, to a user interface (UI) 14 and to a memory 16. In some embodiments, the camera module 11 and / or the display 12 is separate from the electronic device and the processor receives signals from the camera module 11 via the transceiver 13 or another suitable interface.
The processor 15 may be configured to execute various program codes 17. The implemented program codes 17, in some embodiments, comprise image capture digital processing or configuration code. The implemented program codes 17 in some embodiments further comprise additional code for further processing of images. The implemented program codes 17 may in some embodiments be stored for example in the memory 16 for retrieval by the processor 15 whenever needed. The memory 16 in some embodiments may further provide a section 18 for storing data, for example data that has been processed in accordance with the application.
The camera module 11 comprises a camera 19 having a lens for focussing an image on to a digital image capture means such as a charge coupled device (CCD). The camera module 11 further comprises a flash lamp 20 for illuminating an object before capturing an image of the object. The flash lamp is linked to the camera processor 21. The camera 19 is also linked to the camera processor 21 for processing signals received from the camera. The camera processor 21 is linked to camera memory 22 which may store program codes for the camera processor 21 to execute when capturing an image. The implemented program codes (not shown) may in some embodiments be stored for example in the camera memory 22 for retrieval by the camera processor 21 whenever needed. In some embodiments the camera processor 21 and the camera memory 22 are the processor 15 and the memory 16 respectively. The apparatus capable of implementing the red-eye reduction technique may in some embodiments be implemented at least partially in hardware without the need of software or firmware.
The user interface 14 in some embodiments enables a user to input commands to the electronic device 10, for example via a keypad, and/or to obtain information from the electronic device 10, for example via the display 12. The transceiver 13 enables communication with other electronic devices, for example via a wireless communication network.
It is to be understood again that the structure of the electronic device 10 could be supplemented and varied in many ways.
A user of the electronic device 10 may use the camera module 11 for capturing an image that is to be transmitted to some other electronic device or that is to be stored in the data section 18 of the memory 16. A corresponding application in some embodiments may be activated to this end by the user via the user interface 14. This application, which may in some embodiments be run by the processor 15, causes the processor 15 to execute the code stored in the memory 16.
The processor 15 may then process the digital image in the same way as described with reference to Figures 4, 5 and 7.
The resulting image may in some embodiments be provided to the transceiver 13 for transmission to another electronic device. Alternatively, the digital image could be stored in the data section 18 of the memory 16, for instance for a later transmission or for a later presentation on the display 12 by the same electronic device 10.
The electronic device 10 may in some embodiments also receive a digital image with red-eye defects from another electronic device via its transceiver 13. In these embodiments, the processor 15 executes the processing program code stored in the memory 16. The processor 15 may then in these embodiments process the received image with red-eye defects, and may process the digital image with red-eye defects in the same way as described with reference to Figures 4, 5 and 7. Execution of the red-eye reduction processing program code could in some embodiments be triggered as well by an application that has been called by the user via the user interface 14.
It would be appreciated that the schematic structures described in figures 3 and 6 and the method steps in figures 4, 5 and 7 represent only a part of the operation of a complete system comprising some embodiments of the application as shown implemented in the electronic device shown in figure 2.
Figure 3 shows a schematic configuration for red-eye reduction apparatus for digital images including a camera module 11, digital image processor 300, an eye location module 302, a masking module 304, an image filter module 306, a binarization module 308 and an image correction module 310. In some embodiments of the application the red-eye reduction apparatus may comprise some but not all of the above parts. For example in some embodiments the red-eye reduction apparatus may comprise only the digital image processor 300, where a digital image with red-eye defects from an external source is input to the digital image processor 300 with preconfigured structure and parameters and the digital image processor 300 further outputs the processed image to an external image correction module 310.
In other embodiments of the invention the digital image processor 300 may be the 'core' element of the red-eye reduction apparatus and other parts or modules may be added or removed dependent on the application. In other embodiments the modules represent processors configured to carry out the processes described below, which are located in the same, or different chipsets. Alternatively, the digital image processor 300 is configured to carry out all the processes and Figure 3 exemplifies the analysis and modification of the digital image.
Where elements similar to those shown in Figure 2 are described, the same reference numbers are used. The camera module 11 receives light waves reflected off an object and converts them into digital electrical signals with digital capture means. The camera module 11 may be any suitable image capture means. In some embodiments, the image capture means is a digital camera. In some alternative embodiments other types of cameras are used, such as an infrared camera.
The capture of the image by the camera module 11 is shown with respect to figure 4 in step 400. The image may suffer from red-eye defects as shown in Figure 8a. Figure 8a shows an enlarged portion of a digital image comprising an eye. The pupil of the eye is red because the light from the flash lamp has been reflected back to the objective lens of the camera.
The camera module 11 may output the digital image data in the form of an electrical signal, which is passed to the digital image processor 300.
The digital image processor 300 may in some embodiments send the digital image to the eye location module 302. The eye location module 302 may in these embodiments use known algorithms to locate the area of the eyes in a digital image, identifying a region of a face in a digital image which comprises at least one eye.
In some embodiments, the eye location algorithm may be based on statistical methods or prior knowledge. A statistical method may determine whether an area of a digital image is an eye candidate or not and locate coarse eye positions by exhaustively searching in an area of the digital image containing a face. By repeating a location algorithm on different digital images, statistical learning can be achieved whereby the most likely positions of eye locations in the digital image containing a face are determined. Alternatively or additionally, the locations of the eyes are determined with prior knowledge. For example, an eye location algorithm may determine the coarse location of eyes in a digital image on the basis that eyes are the darkest pixels and / or eyes have round edges on the upper face. However, as shown in Figure 1, the precise location of the eye may not be correctly identified. Accurate location of eyes in a digital image is difficult and known eye location algorithms attempt only to roughly estimate the location of the eye. This imprecise location of the eye within the digital image may be a source of incorrect red-eye defect removal. The rough location estimates allow red-eye removal techniques to be applied to both red-eye defects and neighbouring skin regions, for example eyelids.
The eye location module 302 may in some embodiments locate an eye location in a digital image and crop the region containing the eye(s) from faces in the digital image. Figure 8a discloses an example of a cropped eye area of an image. The cropped eye area of the digital image may then in some embodiments be sent to the digital image processor 300. In some embodiments, the operation of eye location is not necessary because this operation is carried out in another electronic device; for example, in some embodiments a cropped eye region of the digital image is sent to the electronic device 10 using the transceiver 13.
The location of the eye area in the digital image is shown in Figure 4 by step 402.
The digital image processor 300 in some embodiments initiates an analysis for a plurality of elements of the cropped eye image. In some embodiments the plurality of elements are individual pixels of the digital image. In alternative embodiments, the plurality of elements comprises larger portions of the image, for example a plurality of groups of pixels.
The digital image processor 300 in some embodiments sends the cropped eye image region to the masking module 304. The masking module 304 may in some embodiments analyse colour information of each pixel of the cropped eye image. In particular, the masking module 304 determines the redness of each pixel in the cropped eye image. The redness of the pixel is determined in red-green-blue (RGB) colour space. However, in some embodiments other colour spaces are used to determine the redness of each pixel. For example some embodiments may determine the redness of each pixel in one or more of the following colour spaces: RGB, cyan-magenta-yellow-key (CMYK), Luma-In phase-Quadrature (YIQ) and YCbCr.
The masking module 304 in some embodiments determines the red component of each pixel in the cropped eye image as shown in step 404 of figure 4. The masking module 304 determines the redness of each pixel using equation [1]:

r = 255 · R / (R + G + B + K)   [1]

where r is the redness of a pixel, R is the red colour component in RGB colour space, B is the blue colour component in RGB colour space, G is the green colour component in RGB colour space and K is a small constant. The constant K is provided to avoid the case where the denominator of equation [1] equals zero.
In this way, the masking module 304 returns a value close to 255 if a pixel is completely red, and close to 0 if the pixel is black.
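By way of illustration, the redness measure of equation [1] may be sketched as follows. The function name and the value of the constant K are illustrative assumptions rather than part of the disclosure.

```python
K = 1e-3  # small constant avoiding a zero denominator in equation [1]

def redness(R, G, B, k=K):
    """Return the redness r = 255 * R / (R + G + B + K) of one RGB pixel."""
    return 255.0 * R / (R + G + B + k)
```

A fully red pixel yields a value close to 255, while a black pixel yields a value close to 0, as described above.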
The masking module 304 in some embodiments repeats the above determination of the redness of a pixel for all the pixels, e.g. a total of n pixels, in the cropped eye image region. The repeat operation is shown in step 416.
The masking module 304 in some embodiments then maps the distribution of pixels against redness value, r, with a grayscale map. Figure 8b shows a grayscale map of the redness of the pixels in the cropped eye image region. The higher the "redness" of a pixel within the cropped eye image region, the lighter the corresponding grayscale map pixel is. The area most affected by the "red-eye" effect appears the lightest in this image. The masking module 304 in some embodiments sends the grayscale map to the digital image processor 300. In some embodiments, the step of determining the redness information is not necessary because the redness information may have previously been determined, for example by another electronic device.

The digital image processor 300 in some embodiments sends the grayscale map to the image filter module 306. The image filter module 306 in some embodiments determines thresholds of redness based on the statistical distribution of pixel "redness" in the grayscale map. The redness thresholds are in some embodiments determined in the image filter module using a maximum entropy self-adaptive histogram threshold method. A statistical distribution of redness for the pixels of the cropped image is disclosed in Figure 8c. In particular, Figure 8c shows the number of pixels having a particular redness. For instance, figure 8c shows a first group of pixels 801 having a high redness value, a second group of pixels 802 having a medium redness value and a third group of pixels 803 having a low redness value. The first group of pixels 801 is an eye centre 805 comprising red-eye defects, the second group of pixels 802 is an eyelid 806 and the third group of pixels is the sclera or white of the eye 804.
A global static redness threshold for determining the region associated with red-eye defects for all images leads to incorrect red-eye defect removal. In particular, if a region of an image surrounding the eye is particularly red, a red-eye reduction technique may be applied and produce colour errors on the image where non-red-eye regions are affected. For example, skin tones and lighting conditions vary from image to image, so that one image may be generally redder than another. A global static redness threshold for identifying red-eye defects may therefore provide unsatisfactory results.
Advantageously, some embodiments provide a redness threshold which is dynamically applied to the redness information of the grayscale map.
The image filter module 306 in some embodiments compares the relative distribution of the redness of the pixels. In this way, the image filter module 306 determines the different regions of the eye and surrounding the eye based on certain assumptions. The image filter module 306 in some embodiments identifies a plurality of pixels which have a low redness, which are identified as the sclera 804. The image filter module determines that the pixels with the low redness are the sclera 804 because the sclera 804 is the least red relative to the rest of the eye.
The image filter module 306 in some embodiments further identifies a group of pixels which have the highest redness values, which is identified as the eye centre comprising red-eye defects 805. The eye centre 805 comprises both the pupil and the iris.
The image filter module 306 in some embodiments then determines a plurality of pixels of intermediate redness, having a redness between the redness of the region of the sclera and the redness of the region of the eye centre. The region having an intermediate redness is identified as an eyelid region 806.
In this way, the image filter module 306 identifies separate thresholds of redness for filtering each pixel of the cropped eye image into the separate regions. Figure 8c shows a first threshold of redness 851. The first threshold of redness distinguishes between the eye centre 805 having red-eye defects and the eyelid 806 having intermediate redness. Figure 8c further shows a second redness threshold 853. The second redness threshold 853 distinguishes between the sclera region 804 and the eyelid region 806.
The image filter module 306 in some embodiments determines the redness thresholds as shown in step 405 in figure 4.
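The disclosure names a maximum entropy self-adaptive histogram threshold method without specifying it further; one common formulation of that idea is a Kapur-style criterion, which picks the threshold maximising the summed entropies of the two classes on either side of it. The sketch below is therefore an illustrative assumption, not the patent's exact method.

```python
import numpy as np

def max_entropy_threshold(values, bins=256):
    """Pick the threshold t maximising the summed entropies of the
    classes below and above t (a Kapur-style criterion)."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        lo, hi = p[:t], p[t:]
        w_lo, w_hi = lo.sum(), hi.sum()
        if w_lo == 0 or w_hi == 0:
            continue  # one class is empty; no valid split here
        q_lo = lo[lo > 0] / w_lo          # normalised class distributions
        q_hi = hi[hi > 0] / w_hi
        h = -(q_lo * np.log(q_lo)).sum() - (q_hi * np.log(q_hi)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

Applied twice (to the full distribution and to the sub-distribution below the first threshold, say), such a criterion could yield the first and second redness thresholds of Figure 8c.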
The image filter module 306 in some embodiments filters pixel i by analyzing the redness r for each pixel and determining which region the pixel i falls into by comparing the redness r to the first and second thresholds. Figure 5 discloses the filtering in more detail.
The image filter module 306 in some embodiments then filters each pixel i in the cropped eye image as shown in step 406 of figure 4.
Where the process steps are the same as in previous figures, figure 5 uses the same reference numbers as previously mentioned. Step 406 in figure 5 corresponds to an expanded version of the step of filtering pixel i on the basis of the redness information as shown in figure 4. The image filter module 306 in some embodiments determines whether the pixel has a redness r below the first redness threshold as shown in step 502. If not, the image filter module 306 in some embodiments determines that the pixel is in the eye centre region 805 as shown in step 506. If the pixel has a redness r below the first redness threshold, the image filter module 306 in some embodiments determines whether the pixel has a redness below the second redness threshold as shown in step 504. If the pixel has a redness below the second redness threshold, the image filter module 306 in some embodiments determines that the pixel is in the sclera 804 as shown in step 510. If the pixel has a redness above the second redness threshold, the image filter module 306 in some embodiments determines that the pixel is in the eyelid region as shown in step 508.
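The per-pixel filtering of steps 502 to 510 may be sketched as follows, assuming a first threshold t1 greater than a second threshold t2; the function and label names are illustrative.

```python
EYE_CENTRE, EYELID, SCLERA = "eye_centre", "eyelid", "sclera"

def classify(r, t1, t2):
    """Assign one pixel's redness r to a region, following steps 502-510,
    given first threshold t1 and second threshold t2 (t1 > t2)."""
    if r >= t1:   # not below the first threshold: eye centre (step 506)
        return EYE_CENTRE
    if r < t2:    # below the second threshold: sclera (step 510)
        return SCLERA
    return EYELID # between the thresholds: eyelid (step 508)
```

Repeating this classification over every pixel of the cropped eye region corresponds to the loop of step 424.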
The image filter module 306 in some embodiments then repeats the filtering 406 for all the pixels in the cropped eye region as shown in step 424.
The image filter module 306 in some embodiments then sends the information of which pixels are in which regions to the digital image processor 300.
The digital image processor 300 in some embodiments then sends the information of which pixels are in which regions to the binarization module 308. The binarization module 308 generates a mapping of which pixels are in a certain region. Figure 8d shows a binarization mapping of the cropped eye image showing the most red pixels 808 requiring red-eye correction in white and other pixels 809 in black. The step of binarization may also generate noise elements 807, which are not part of the eye centre region. In terms of red-eye reduction, the binarization module generates a mapping of which pixels are in the eye centre region 805. The binarization is performed to provide a simple reference of which pixels need to be corrected for red-eye defects. Advantageously, the use of the first and second thresholds means that the step of binarization differentiates between the eye centre 805 and the eyelid 806. The binarization module binarizes the pixels as shown in operation 408 of figure 4.
The binarization module 308 in some embodiments sends the binarized cropped eye image to the digital image processor 300. As mentioned above, the step of binarization may generate noise elements 807.
The digital image processor 300 in some embodiments then sends the binarized mapping of the cropped eye image to a noise reduction module 312.
The noise reduction module 312 in some embodiments analyzes the information and removes pixels which are incorrectly filtered into the different eye regions. The noise reduction module 312 in some embodiments applies morphological operations to the binarized mapping of the cropped eye image. The morphological operations may comprise repeated image merging operations and splitting operations. In some embodiments the morphological operations comprise one or more of the following image processing operations: dilation, erosion, opening and closing operations. Advantageously, image dilation and erosion operations can be used to "fill" small holes within regions of a digital image and to remove unwanted "noise" pixels.
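One of the operations named above, a binary opening (erosion followed by dilation) with a 3×3 structuring element, may be sketched as follows; this is one possible illustration, not the specific morphological sequence used by the noise reduction module 312.

```python
import numpy as np

def erode(mask):
    """3x3 binary erosion: a pixel survives only if its whole 3x3
    neighbourhood is set; image borders are treated as background."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any neighbour is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def open_mask(mask):
    """Opening = erosion then dilation; removes isolated noise pixels
    while preserving larger connected regions."""
    return dilate(erode(mask))
```

An isolated noise element 807 is smaller than the structuring element, so the erosion removes it entirely, while a larger eye centre region survives the opening essentially intact.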
If the noise elements 807 are not removed, the red-eye correction will be applied to the noise elements, which will result in an unsatisfactory image because some parts of a resulting corrected image will have a reduced redness where correction is not necessary. In some embodiments, the step of noise reduction is not necessary because there will be few or no noise elements introduced into the binarized mapping of the cropped eye image.
The noise reduction module 312 in some embodiments returns the binarized mapping of the cropped eye image to the digital image processor 300. In this way, the digital image processor 300 determines the eye centre region which requires red-eye defect correction as shown in step 414. Advantageously, the digital image processor 300 is able to segment an eye image into the three regions of sclera 804, eye centre 805 and eyelid 806. Furthermore, the digital image processor 300 can handle coarse initial locations of eyes in an image and reliably position red-eye defects.
The digital image processor 300 then in some embodiments determines whether red-eye correction is necessary as shown in step 418. The digital image processor 300 in some embodiments determines the mean value of the red components in the eye centre region. The digital image processor 300 in some embodiments then determines whether the mean red component of the eye centre region is greater than the mean red component of the eyelid region. If this is the case then the digital image processor 300 determines that there are red-eye defects in the image and red-eye defect correction is necessary. This is because generally in a normal eye image the eyelid is redder than the eye centre region; therefore, if this is not the case, the image contains red-eye defects. In some embodiments the image processor 300 alternatively determines the mean redness of the eye centre region and compares it to the mean redness of the eyelid region.
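The decision of step 418 may be sketched as a comparison of mean red components over the two region masks; the function name and the boolean-mask representation of the regions are illustrative assumptions.

```python
import numpy as np

def needs_correction(red, centre_mask, eyelid_mask):
    """Red-eye correction is deemed necessary when the mean red
    component inside the eye centre exceeds that of the eyelid
    (in a normal eye the eyelid is the redder of the two)."""
    return red[centre_mask].mean() > red[eyelid_mask].mean()
```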
In some alternative embodiments, operation 418 is not necessary. For example, the digital image processor 300 may have prior knowledge that a digital image comprises red-eye defects. In other embodiments detection of red-eye defects is carried out in another electronic device.
If the digital image processor 300 in some embodiments determines that red-eye defect correction is necessary, the digital image processor sends the binarized mapping of the cropped eye image and the original image to an image correction module 310.
Figure 6 discloses a more detailed version of the image correction module 310. The image correction module 310 comprises an image correction controller 606. The image correction controller 606 is linked to an asymptotic recovery correction module 602 and a fixed correction module 604. The image correction controller 606 in some embodiments determines the eye centre region 805 as shown in step 702. The eye centre region 805 is determined from the binarized mapping of the cropped eye image.
The image correction controller 606 in some embodiments then determines a reference point of the eye centre region 805 as shown in step 704. In some embodiments the image correction controller determines a centre point 902, shown in figure 9a. The image correction controller determines the centre point 902 by determining the mean centre of all the pixels which are in the eye centre region 805.
The image correction controller determines the mean centre (x0, y0) using equations [2] and [3]:

x0 = (1/n) Σ xi   [2]

y0 = (1/n) Σ yi   [3]

wherein (x0, y0) is the coordinate of the mean centre of the eye centre region, n is the total number of pixels and (xi, yi) is the coordinate of a pixel in the centre region.
The image correction controller 606 then determines an inner portion 908 and an outer portion 910 of the eye centre region as shown in step 706.
The eye centre region 805 comprises the pupil and the iris, and the red-eye defects may not be uniform across the eye centre region. In this way, the inner portion 908 of the eye centre region may be seriously affected by red-eye defects. However, the outer region of the eye and the outer portion 910 of the eye centre region are less affected by red-eye defects.
The image correction controller 606 determines the dimensions of the ring-like outer portion 910 using equations [4] and [5] as shown in step 706:

r_min = min over (xi, yi) in B of ||(xi, yi) − (x0, y0)||   [4]

r_max = max over (xi, yi) in B of ||(xi, yi) − (x0, y0)||   [5]

wherein B is the eye centre region, (xi, yi) is a point within B and ||·|| is the Euclidean distance between two points. The geometric centre is determined from equations [2] and [3] in step 704. r_min and r_max are the radii of the inner and outer circles of B respectively. The inner circle 904 and the outer circle 906 are shown in figure 9a.
The image correction controller 606 then determines the distance r of a pixel i from the centre point 902. The image correction controller determines the distance using equation [6]:

r = ||(xi, yi) − (x0, y0)|| = sqrt((xi − x0)^2 + (yi − y0)^2)   [6]
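The geometry of equations [2] to [6] may be sketched together as follows, assuming the eye centre pixels are supplied as a list of (x, y) coordinates; the function name is illustrative.

```python
import numpy as np

def centre_and_radii(coords):
    """Given the (x, y) coordinates of the n eye centre pixels, return
    the mean centre (x0, y0) of equations [2]-[3] and the inner and
    outer radii r_min, r_max of equations [4]-[5]."""
    coords = np.asarray(coords, dtype=float)
    x0, y0 = coords.mean(axis=0)                       # equations [2], [3]
    d = np.hypot(coords[:, 0] - x0, coords[:, 1] - y0)  # equation [6] per pixel
    return (x0, y0), d.min(), d.max()                  # equations [4], [5]
```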
The image correction controller 606 in some embodiments then determines the type of defect correction as shown in step 710. The image correction controller 606 in some embodiments determines whether pixel i is in the inner portion 908 of the eye centre region. That is, the image correction controller 606 determines whether the distance of the pixel is less than r_min as shown in step 712.
If the pixel is within the inner portion 908, then the pixel is subject to a significant red-eye defect. In this way, the pixel is often saturated with red light.
The image correction controller 606 sends the colour information of the pixel to the fixed correction module 604 as shown in step 716. The fixed correction module 604 in some embodiments modifies the red component value of the pixel using equation [7]. The pixel within the inner portion 908 is modified using the average value of the corresponding blue and green component values for the same pixel:

R' = (G + B) / 2   [7]

wherein R' is the modified red component value of the pixel, G is the green component value of the pixel and B is the blue component value of the pixel. The fixed correction module 604 in some embodiments then returns the modified pixel colour information to the image correction controller 606.
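The fixed correction of equation [7] may be sketched as a one-line replacement of a saturated inner-portion pixel's red value; the function name is illustrative.

```python
def fixed_correct(R, G, B):
    """Equation [7]: the red value of a saturated inner-portion pixel
    is replaced by the average of its green and blue components."""
    return (G + B) / 2.0
```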
Alternatively, in some embodiments the image correction controller 606 may determine that pixel i is in the outer portion 910 of the eye centre region. That is, the image correction controller 606 determines that the distance of the pixel is greater than r_min as shown in step 712.
If the pixel is within the outer portion 910, then the pixel is subject to a mild red-eye defect. Between r_min and r_max, in the outer portion 910, there exists a transition area where the red-eye defect gradually weakens because there is less reflected red light. Most colour information of the pixels in this region is useful, so the recovery algorithm for the outer portion 910 keeps most of the information but removes the noise due to the red-eye defects.
The image correction controller 606 in some embodiments sends the colour information of the pixel and the distance of the pixel from the mean centre point 902 to the asymptotic recovery correction module 602 as shown in step 714. The asymptotic recovery correction module 602 in some embodiments modifies the red component of the pixel using equations [8] and [9]. The pixel in the outer portion 910 is modified in dependence on the distance of the pixel from the mean centre point 902. The magnitude of the correction applied to the pixel, given by equation [9], is shown in figure 9b. That is, the original red component of the pixel is more reliable the further the point is from the mean centre point 902, and so the redness information should be largely retained.
α = (r − r_min) / (r_max − r_min)   [8]

R' = √α · R + (1 − √α) · (G + B) / 2   [9]

where α is a weighting factor based on the distance r calculated in equation [6], R' is the modified red component value for the pixel, R is the original red component value for the pixel, G is the green component value for the pixel and B is the blue component value for the pixel.
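The asymptotic recovery of equations [8] and [9] may be sketched as follows; at r = r_max the original red value is fully retained, while at r = r_min the result coincides with the fixed correction (G + B)/2. The function name is illustrative.

```python
import math

def asymptotic_correct(R, G, B, r, r_min, r_max):
    """Equations [8]-[9]: blend the original red value with the
    average of green and blue according to how far into the outer
    ring the pixel lies; pixels nearer r_max keep more of R."""
    alpha = (r - r_min) / (r_max - r_min)       # equation [8]
    w = math.sqrt(alpha)
    return w * R + (1.0 - w) * (G + B) / 2.0    # equation [9]
```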
In this way, the recovery algorithms remove red-eye defects and keep useful colour information at the same time. This means that the corrected eye appearance looks natural. Indeed, the algorithm does not substantially interfere with the original image such that a smooth correction is achieved.
The asymptotic recovery correction module 602 in some embodiments returns the modified pixel colour information to the image correction controller 606.
The red-eye correction of step 710 is repeated for all the pixels in the eye centre region 805.
The image correction controller 606 in some embodiments modifies the original image by using the modified red component value of each pixel as determined in step 710. Thereafter the image correction controller 606 sends the modified image to the digital image processor 300. Figure 9c discloses an example of a modified image with the red-eye reduction techniques disclosed herein applied to it.
The modified image with red-eye correction applied is sent to the display 10 for the user to view. Alternatively, the modified image is stored in memory 16 or transmitted to another electronic device using the transceiver 13.
It shall be appreciated that the term user equipment is intended to cover any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices or portable web browsers. In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
The embodiments of this invention may be implemented by computer software executable by a data processor of the mobile device, such as in the processor entity, or by hardware, or by a combination of software and hardware. Further in this regard it should be noted that any blocks of the logic flow as in the Figures may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions. The software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASIC), gate level circuits and processors based on multi-core processor architecture, as non-limiting examples. Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSlI, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.
As used in this application, the term circuitry or circuit may refer to all of the following: (a) hardware-only circuit implementations (such as implementations in only analogue and/or digital circuitry), (b) combinations of circuits and software (and/or firmware), such as and where applicable: (i) a combination of processor(s) or (ii) portions of processor(s)/software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry would also cover an implementation of merely a processor (or multiple processors) or portion of a processor and its (or their) accompanying software and/or firmware. The term circuitry would also cover, for example and if applicable to the particular claim element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in server, a cellular network device, or other network device.
The terms processor and memory may comprise but are not limited to in this application: (1) one or more microprocessors, (2) one or more processor(s) with accompanying digital signal processor(s), (3) one or more processor(s) without accompanying digital signal processor(s), (4) one or more special-purpose computer chips, (5) one or more field-programmable gate arrays (FPGAs), (6) one or more controllers, (7) one or more application-specific integrated circuits (ASICs), or detector(s), processor(s) (including dual-core and multiple-core processors), digital signal processor(s), controller(s), receiver, transmitter, encoder, decoder, memory (and memories), software, firmware, RAM, ROM, display, user interface, display circuitry, user interface circuitry, user interface software, display software, circuit(s), antenna, antenna circuitry, and circuitry.
The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention as defined in the appended claims.

Claims
1. A method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; determining a reference point of the first region; and modifying the colour information of at least one image element of the first region according to the distance of the at least one image element from the reference point of the first region.
2. A method according to claim 1 wherein the method comprises determining the colour information for the plurality of elements.
3. A method according to claims 1 or 2 wherein the method further comprises determining the first and second thresholds on the basis of the colour information.
4. A method according to claim 3 wherein the determining of the first and second thresholds is based on a distribution of the colour information associated with the first, second and third regions of the image.
5. A method according to claims 3 or 4 wherein the determining of the first and second thresholds comprises comparing the relative distribution of the colour information for the plurality of elements.
6. A method according to any of the preceding claims wherein the determining of the first, second and third regions comprises determining the first region when some of the elements have colour information over the first threshold, determining the second region when some of the elements have colour information between the first and second thresholds and determining the third region when some of the elements have colour information below the second threshold.
7. A method according to any of the preceding claims wherein the method further comprises determining that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
8. A method according to claim 7 wherein the method further comprises initiating correction of the colour information of the first region when the colour information of the first region is greater than the second region.
9. A method according to any of the preceding claims wherein the colour information is the redness of the elements of the image.
10. A method according to any of the preceding claims wherein the colour information is the value of the red component of red green blue colour space.
11. A method according to any of the preceding claims wherein the image comprises an eye.
12. A method according to any of the preceding claims wherein the first region is an eye centre, the second region is an eyelid and the third region is a sclera.
13. A method according to claim 12 wherein the image comprises a red-eye colour defect.
14. A method according to any of the preceding claims wherein the elements are pixels of a digital image.
15. A method according to any of the preceding claims wherein the method comprises determining a centre portion and an outer portion of the first region.
16. A method according to claim 15 wherein the modifying comprises modifying the colour information of each element of the centre portion according to other colour information of each respective element.
17. A method according to claims 15 or 16 wherein the modifying comprises modifying the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
18. A method according to claim 17 wherein the modifying comprises reducing the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
19. A method according to claims 17 or 18 wherein the modifying of the colour information of the at least one element in the outer portion is based on a non-linear function of the distance.
20. A method according to any of claims 15 to 19 wherein the outer portion is a ring.
21. A method according to any of claims 15 to 20 wherein the first region is circular and / or elliptical.
22. A method according to any of claims 15 to 21 wherein the reference point is the centre of the first region.
23. A method according to claims 16 and 17 wherein the other colour information is the blueness and / or the greenness of the image.
24. A method according to claims 16 and 17 wherein the other colour information is the value of the blue component and / or green component of red green blue colour space.
25. An apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image; a reference determiner configured to determine a reference point of an image region; and a modifier configured to modify colour information of at least one image element of the image region according to the distance of the at least one image element from the reference point of the image region.
26. An apparatus according to claim 25 wherein the apparatus further comprises a colour information determiner configured to determine the colour information for the plurality of elements.
27. An apparatus according to claims 25 or 26 wherein the apparatus further comprises a threshold determiner configured to determine the first and second thresholds on the basis of the colour information.
28. An apparatus according to claim 27 wherein the threshold determiner is further configured to determine the first and second thresholds based on a distribution of the colour information associated with the first, second and third regions of the image.
29. An apparatus according to claims 27 or 28 wherein the threshold determiner is configured to compare the relative distribution of the colour information for the plurality of elements.
30. An apparatus according to any of claims 25 to 29 wherein the region determiner is configured to determine the first region when some of the elements have a colour information over the first threshold, determine the second region when some of the elements have a colour information between the first and second threshold and determine the third region when some of the elements have a colour information below the second threshold.
31. An apparatus according to any of claims 25 to 29 wherein the region determiner is configured to determine that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
32. An apparatus according to claim 31 wherein the apparatus further comprises an initiator configured to initiate correction of the colour information of the first region when the colour information of the first region is greater than that of the second region.
33. An apparatus according to any of claims 25 to 32 wherein the colour information is the redness of the elements of the image.
34. An apparatus according to any of claims 25 to 33 wherein the colour information is the value of the red component of red green blue colour space.
35. An apparatus according to any of claims 25 to 34 wherein the image comprises an eye.
36. An apparatus according to any of claims 25 to 35 wherein the first region is an eye centre, the second region is an eyelid and the third region is a sclera.
37. An apparatus according to claim 36 wherein the eye centre comprises a red-eye defect.
38. An apparatus according to any of claims 25 to 37 wherein the elements are pixels of a digital image.
39. An apparatus according to any of claims 25 to 38 wherein the determiner is further configured to determine a centre portion and an outer portion of the first region.
40. An apparatus according to claim 39 wherein the modifier is configured to modify the colour information of each element of the centre portion according to other colour information of each respective element.
41. An apparatus according to claims 39 or 40 wherein the modifier is configured to modify the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
42. An apparatus according to claim 41 wherein the modifier is configured to reduce the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
43. An apparatus according to claims 41 or 42 wherein the modifier is configured to modify the colour information of the at least one element in the outer portion based on a non-linear function of the distance.
44. An apparatus according to any of claims 39 to 43 wherein the outer portion is a ring.
45. An apparatus according to any of claims 39 to 44 wherein the image region is circular and / or elliptical.
46. An apparatus according to any of claims 39 to 45 wherein the reference point is the centre of the first region.
47. An apparatus according to claims 40 and 41 wherein the other colour information is the blueness and / or the greenness of the image.
48. An apparatus according to claims 40 and 41 wherein the other colour information is the value of the blue component and / or green component of red green blue colour space.
49. A method comprising: filtering a plurality of elements of an image with a first threshold of colour information; filtering the plurality of elements at a second threshold of the colour information; and determining a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
50. A method according to claim 49 wherein the method comprises determining the colour information for the plurality of elements.
51. A method according to claims 49 or 50 wherein the method further comprises determining the first and second thresholds on the basis of the colour information.
52. A method according to claim 51 wherein the determining of the first and second thresholds is based on a distribution of the colour information associated with the first, second and third regions of the image.
53. A method according to claims 51 or 52 wherein the determining of the first and second thresholds comprises comparing the relative distribution of the colour information for the plurality of elements.
54. A method according to any of claims 49 to 53 wherein the determining of the first, second and third regions comprises determining the first region when some of the elements have a colour information over the first threshold, determining the second region when some of the elements have a colour information between the first and second threshold and determining the third region when some of the elements have a colour information below the second threshold.
55. A method according to any of claims 49 to 54 wherein the method further comprises determining that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
56. A method according to claim 55 wherein the method further comprises initiating correction of the colour information of the first region when the colour information of the first region is greater than that of the second region.
57. A method according to any of claims 49 to 56 wherein the colour information is the redness of the elements of the image.
58. A method according to any of claims 49 to 57 wherein the colour information is the value of the red component of red green blue colour space.
59. A method according to any of claims 49 to 58 wherein the image comprises an eye.
60. A method according to any of claims 49 to 59 wherein the first region is an eye centre, the second region is an eyelid and the third region is a sclera.
61. A method according to any of claims 49 to 60 wherein the image comprises a red-eye defect.
62. A method according to any of claims 49 to 61 wherein the elements are pixels of a digital image.
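The two-threshold segmentation of claims 49 to 62 can be sketched as follows. This is a minimal illustration only, assuming the colour information is a per-pixel redness value and that the first threshold exceeds the second; the function names, thresholds and the flat list representation are assumptions for the sketch, not taken from the claims.

```python
# Hypothetical sketch of the two-threshold segmentation described in
# claims 49 to 62: pixels whose redness exceeds t1 form the first region
# (e.g. eye centre), pixels between t2 and t1 the second (e.g. eyelid),
# and pixels below t2 the third (e.g. sclera). All names and values are
# illustrative assumptions, not taken from the patent text.

def segment_regions(red_values, t1, t2):
    """Partition pixel indices into three regions by two redness thresholds."""
    assert t1 > t2, "the first threshold must exceed the second"
    first, second, third = [], [], []
    for i, r in enumerate(red_values):
        if r > t1:
            first.append(i)       # candidate eye-centre pixels
        elif r > t2:
            second.append(i)      # candidate eyelid pixels
        else:
            third.append(i)       # candidate sclera pixels
    return first, second, third

def should_correct(red_values, first, second):
    """Per claims 55 and 56: initiate correction only when the mean redness
    of the first region exceeds that of the second region."""
    if not first or not second:
        return False
    mean = lambda idx: sum(red_values[i] for i in idx) / len(idx)
    return mean(first) > mean(second)
```

In use, the thresholds themselves would be derived from the distribution of redness values (claims 51 to 53), for example by locating valleys in a redness histogram; fixed values are shown here only for concreteness.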
63. An apparatus comprising: a first filter configured to filter a plurality of elements of an image with a first threshold of colour information; a second filter configured to filter the plurality of elements at a second threshold of the colour information; and a region determiner configured to determine a first region of the image, a second region of the image and a third region of the image on the basis of filtered elements of the image.
64. An apparatus according to claim 63 wherein the apparatus further comprises a colour information determiner configured to determine the colour information for the plurality of elements.
65. An apparatus according to claims 63 or 64 wherein the apparatus further comprises a threshold determiner configured to determine the first and second thresholds on the basis of the colour information.
66. An apparatus according to claim 65 wherein the threshold determiner is further configured to determine the first and second thresholds based on a distribution of the colour information associated with the first, second and third regions of the image.
67. An apparatus according to claims 65 or 66 wherein the threshold determiner is configured to compare the relative distribution of the colour information for the plurality of elements.
68. An apparatus according to any of claims 63 to 67 wherein the region determiner is configured to determine the first region when some of the elements have a colour information over the first threshold, determine the second region when some of the elements have a colour information between the first and second threshold and determine the third region when some of the elements have a colour information below the second threshold.
69. An apparatus according to any of claims 63 to 68 wherein the region determiner is configured to determine that a mean value of the colour information for the first region is greater than a mean value of the colour information for the second region.
70. An apparatus according to claim 69 wherein the apparatus further comprises an initiator configured to initiate correction of the colour information of the first region when the colour information of the first region is greater than that of the second region.
71. An apparatus according to any of claims 63 to 70 wherein the colour information is the redness of the elements of the image.
72. An apparatus according to any of claims 63 to 71 wherein the colour information is the value of the red component of red green blue colour space.
73. An apparatus according to any of claims 63 to 72 wherein the image comprises an eye.
74. An apparatus according to any of claims 63 to 73 wherein the first region is an eye centre, the second region is an eyelid and the third region is a sclera.
75. An apparatus according to claim 74 wherein the eye centre comprises a red-eye defect.
76. An apparatus according to any of claims 63 to 75 wherein the elements are pixels of a digital image.
77. A method comprising: determining a reference point of an image region comprising a portion of an eye; and modifying colour information of at least one image element of the portion of the eye according to a distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
78. A method according to claim 77 wherein the method comprises determining a centre portion and an outer portion of the image region.
79. A method according to claim 78 wherein the modifying comprises modifying the colour information of each element of the centre portion according to other colour information of each respective element.
80. A method according to claims 78 or 79 wherein the modifying comprises modifying the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
81. A method according to claim 80 wherein the modifying comprises reducing the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
82. A method according to claims 80 or 81 wherein the modifying of the colour information of the at least one element in the outer portion is based on a non-linear function of the distance.
83. A method according to any of claims 78 to 82 wherein the outer portion is a ring.
84. A method according to any of claims 77 to 83 wherein the image region is circular and / or elliptical.
85. A method according to any of claims 78 to 84 wherein the reference point is the centre of the image region.
86. A method according to any of claims 77 to 85 wherein the colour information is the redness of the elements of the image.
87. A method according to claims 80 and 81 wherein the other colour information is the blueness and / or the greenness of the image.
88. A method according to claims 80 and 81 wherein the other colour information is the value of the blue component and / or green component of red green blue colour space.
89. A method according to any of claims 77 to 88 wherein the colour information is the value of the red component of red green blue colour space.
90. A method according to any of claims 77 to 89 wherein the image region is an eye centre.
91. A method according to claim 90 wherein the eye centre comprises a red-eye colour defect.
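The distance-weighted correction of claims 77 to 91 can be sketched as below. This is an illustrative assumption-laden sketch: the claims require only that the modification depend on the distance from the reference point, use other colour channels, shrink with distance, and follow a non-linear function in the outer ring; the Gaussian-like falloff, the `(g + b) // 2` replacement value and the radii are choices made here for the example, not specified by the patent.

```python
import math

# Hypothetical sketch of the distance-weighted red-eye correction in
# claims 77 to 91: pixels in the centre portion have their red value
# replaced with a value derived from the blue and green channels, while
# pixels in the outer ring are blended with a weight that falls off
# non-linearly with distance from the reference point. The falloff
# function and radii are illustrative assumptions only.

def correct_red_eye(pixels, centre, r_inner, r_outer):
    """pixels: dict mapping (x, y) -> (r, g, b); returns a corrected dict."""
    cx, cy = centre
    out = {}
    for (x, y), (r, g, b) in pixels.items():
        d = math.hypot(x - cx, y - cy)     # distance from the reference point
        target = (g + b) // 2              # replacement from other colour channels
        if d <= r_inner:
            new_r = target                 # centre portion: full replacement
        elif d <= r_outer:
            # outer ring: non-linear (Gaussian-like) falloff with distance,
            # so the correction fades smoothly into the surrounding iris
            w = math.exp(-4.0 * ((d - r_inner) / (r_outer - r_inner)) ** 2)
            new_r = round(w * target + (1 - w) * r)
        else:
            new_r = r                      # outside the image region: unchanged
        out[(x, y)] = (new_r, g, b)
    return out
```

Blending rather than hard replacement in the outer ring avoids a visible seam at the boundary of the corrected region, which is the practical motivation for making the modification a decreasing, non-linear function of distance.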
92. An apparatus comprising: a determiner configured to determine a reference point of an image region comprising a portion of an eye; and a modifier configured to modify colour information of at least one image element of the portion of the eye according to the distance of the at least one image element from the reference point of the image region such that colour defects of the portion of the eye are corrected.
93. An apparatus according to claim 92 wherein the determiner is further configured to determine a centre portion and an outer portion of the image region.
94. An apparatus according to claim 93 wherein the modifier is configured to modify the colour information of each element of the centre portion according to other colour information of each respective element.
95. An apparatus according to claims 93 or 94 wherein the modifier is configured to modify the colour information of each element in the outer portion according to the distance from the reference point, the colour information and other colour information of each respective element.
96. An apparatus according to claim 95 wherein the modifier is configured to reduce the magnitude of the modification of the colour information as the distance of an element from the reference point increases.
97. An apparatus according to claims 95 or 96 wherein the modifier is configured to modify the colour information of the at least one element in the outer portion based on a non-linear function of the distance.
98. An apparatus according to any of claims 93 to 97 wherein the outer portion is a ring.
99. An apparatus according to any of claims 92 to 98 wherein the image region is circular and / or elliptical.
100. An apparatus according to any of claims 93 to 99 wherein the reference point is the centre of the image region.
101. An apparatus according to any of claims 92 to 100 wherein the colour information is the redness of the elements of the image.
102. An apparatus according to claims 94 and 95 wherein the other colour information is the blueness and / or the greenness of the image.
103. An apparatus according to claims 94 and 95 wherein the other colour information is the value of the blue component and / or green component of red green blue colour space.
104. An apparatus according to any of claims 92 to 103 wherein the colour information is the value of the red component of red green blue colour space.
105. An apparatus according to any of claims 92 to 104 wherein the image region is an eye centre.
106. An apparatus according to claim 105 wherein the eye centre comprises a red-eye colour defect.
107. An electronic device comprising apparatus as claimed in any of claims 25 to 48, 63 to 76 and 92 to 106.
108. A chipset comprising apparatus as claimed in any of claims 25 to 48, 63 to 76 and 92 to 106.
109. A computer readable medium having a computer program stored thereon, the computer program, when executed, performing the method of any of claims 1 to 24, 49 to 62 and 77 to 91.
PCT/EP2009/058056 2009-06-26 2009-06-26 An apparatus WO2010149220A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2009/058056 WO2010149220A1 (en) 2009-06-26 2009-06-26 An apparatus


Publications (1)

Publication Number Publication Date
WO2010149220A1 (en) 2010-12-29

Family

ID=41478966


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020150292A1 (en) * 2001-04-13 2002-10-17 O'callaghan Andrais Redeye reduction of digital images
WO2003071484A1 (en) * 2002-02-22 2003-08-28 Pixology Software Limited Detection and correction of red-eye features in digital images
US6728401B1 (en) * 2000-08-17 2004-04-27 Viewahead Technology Red-eye removal using color image processing
US20040213476A1 (en) * 2003-04-28 2004-10-28 Huitao Luo Detecting and correcting red-eye in a digital image
JP2006059092A (en) * 2004-08-19 2006-03-02 Noritsu Koki Co Ltd Catchlight synthesis method
US20070140589A1 (en) * 2005-11-07 2007-06-21 Canon Kabushiki Kaisha Image processing method and apparatus thereof
US20070252906A1 (en) * 2006-04-28 2007-11-01 Ulichney Robert A Perceptually-derived red-eye correction



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09779975

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09779975

Country of ref document: EP

Kind code of ref document: A1