CA2405270A1 - Method of image defect detection and correction - Google Patents
Method of image defect detection and correction
- Publication number
- CA2405270A1 (application CA002405270A)
- Authority
- CA
- Canada
- Prior art keywords
- red
- eye
- classified
- image
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30216—Redeye defect
Description
Method of Image Defect Detection and Correction
Introduction
The present invention relates to a method to detect and remove red-eye in digital images without user intervention. The method also allows for optional user intervention to increase detection of red-eye and reduce the occurrence of false positives.
Red-eye is acknowledged as one of the biggest problems within consumer photography.
Despite numerous attempts to solve the problem, as yet no definitive solution that provides automatic red-eye removal has been introduced to the market.
Red-eye is a common problem occurring in photographs of people and animals taken in dimly lit areas with a flash. Red-eye results when the light from a photographic flash enters the pupil of the eye and bounces off the capillaries of the retina.
Most flash pictures are taken in relative darkness, when people's pupils are dilated, which allows light to reflect off the capillaries and return to the camera. The capillaries are filled with blood and produce a reflection with a red glow. Typically this happens when the flash is directly above the lens and the subject is looking into the camera. If the pupils are dilated sufficiently, red-eye can occur even if the subject is not looking directly into the camera.
Functional Overview
The image defect detection and replacement method has four basic functions:
1) Automated Operation - This operation is for the automated detection of red-eye in an image and the replacement of the red color with grey-scale pixels.
The resulting image from this operation is termed the "Corrected Image". The Automated Operation must be completed before any of the following three operations can be performed.
2) Decrease Sensitivity Operation - This operation is for the correction of a false positive caused by the Automated Operation. Upon viewing the Corrected Image the user may want to return the original color of a false positive object that has been incorrectly classified as a red-eye and re-colored.
With a single user request one object is removed from the group of objects classified as red-eye. The object removed is the one with the least statistical likelihood of actually being a red-eye. When this is done the original color of the object is returned in the Corrected Image.
3) Increase Sensitivity Operation - This operation is for the detection and re-coloring of a red-eye that was missed by the Automated Operation. Upon viewing the Corrected Image the user may see a red-eye that was not detected and re-colored. With a single user request one object is classified as a red-eye from the group of objects segmented and not already classified as red-eye.
The object added is the one with the highest statistical likelihood of actually being a red-eye. When this is done the red color of the object is replaced by grey-scale pixels in the Corrected Image.
4) Manual Override Operation - This operation is for the manual correction of a missed red-eye or a false positive. Upon viewing the Corrected Image the user may observe a red-eye that was not detected and re-colored or they may observe a false positive object that has been incorrectly classified as a red-eye and re-colored. First the user selects the observed object. If the object is a false positive, it is returned to its original color. If the object is an undetected red-eye and it is a segmented object, it will be re-colored using grey-scale pixels. The user may choose to use this operation instead of the Decrease Sensitivity Operation or the Increase Sensitivity Operation. It can be used to correct a specific false positive or re-color a specific red-eye.
Each of these four operations is described in more detail in the following sections.
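As a rough illustration of how these four operations and the state they share might be organised in software, the following Python skeleton is a hypothetical sketch only: the class name, attribute names and method signatures are assumptions and are not specified in the patent.

```python
# Hypothetical skeleton: the Automated Operation runs first and retains the masks and
# per-object classification grades that the other three operations work from.
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class RedEyeCorrector:
    image: np.ndarray                                        # original RGB image
    corrected: Optional[np.ndarray] = None                   # the "Corrected Image"
    segmentation_masks: list = field(default_factory=list)   # one Segmentation Mask per resolution
    redeye_masks: list = field(default_factory=list)         # one Red-eye Mask per resolution
    grades: dict = field(default_factory=dict)               # object id -> red-eye grade

    def automated_operation(self): ...                       # must be called before the others
    def decrease_sensitivity(self): ...                      # undo the least likely detection
    def increase_sensitivity(self): ...                      # promote the most likely missed object
    def manual_override(self, x: int, y: int): ...           # toggle the object under pixel (x, y)
```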
AUTOMATED OPERATION
The automated detection and removal of red-eye in digital photographic images is illustrated in the flow chart of Figure 1. Segmentation, feature extraction and classification take place at a number of different resolutions to cover the wide range of possible red-eye sizes in digital photographs. Multiple resolutions are used for speed optimization as full resolution is not needed for the detection of medium and large red-eyes. Although any number of resolutions could be used, for the purposes of this description three resolutions are used, namely full resolution, half resolution, and quarter resolution. These three resolutions are indicated in Figure 1 by the three separate paths of the flow chart that originate from the "Input Image" module. For each resolution, small round red objects are segmented using a tophat operation performed on the image generated by subtracting the original green component image from the original red component image. This produces three "Segmentation Masks". Next the features of the segmented objects in each Segmentation Mask are extracted. The features extracted describe each segmented object's color, shape, and texture as well as the color and texture of the region surrounding the object. Next each segmented object is classified based on its feature set. The classification technique used is based on the object's feature space quadratic distance to a training red-eye feature cluster. If the distance is closer than a given threshold the object is classified as a red-eye, while if it is further away it is classified as a non-red-eye. The classification process produces three "Red-eye Masks", one each for small, medium and large occurrences of red-eye. The quarter and half resolution Red-eye Masks are then resized to full-size. The three full-size Red-eye Masks are logically "OR'ed" together to produce a "Final Red-eye Mask". Finally the areas classified as red-eye are re-colored using grey-scale to produce the Corrected Image, where the grey-scale values used are equal to the average of the actual green and blue pixels in the original image.
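The paragraph above can be read as a pipeline, sketched below in Python with numpy and scipy.ndimage. The helper names, the toy feature vector, the training cluster statistics (`cluster_mean`, `cluster_inv_cov`) and the thresholds are assumptions made for illustration; the patent does not disclose concrete feature definitions or values, so this is a sketch of the structure, not the patented implementation.

```python
# Minimal sketch of the multi-resolution detection and re-colouring pipeline.
import numpy as np
from scipy import ndimage


def segment_red_objects(red, green, tophat_size=15, threshold=20):
    """Segment small, round, red objects: tophat of the (R - G) image, then threshold."""
    redness = red.astype(np.int16) - green.astype(np.int16)
    tophat = ndimage.white_tophat(redness, size=tophat_size)
    return tophat > threshold                            # boolean Segmentation Mask


def extract_features(obj_mask, image):
    """Toy feature vector (mean R, G, B over the object plus its pixel count); the method
    described above uses colour, shape and texture of the object and its surroundings."""
    pix = image[obj_mask]
    return np.array([pix[:, 0].mean(), pix[:, 1].mean(), pix[:, 2].mean(), obj_mask.sum()])


def quadratic_distance(features, cluster_mean, cluster_inv_cov):
    """Feature-space quadratic distance to the training red-eye cluster."""
    d = features - cluster_mean
    return float(d @ cluster_inv_cov @ d)


def classify_objects(seg_mask, image, cluster_mean, cluster_inv_cov, dist_threshold):
    """Label segmented objects and keep those close enough to the training cluster."""
    labels, n = ndimage.label(seg_mask)
    redeye_mask = np.zeros_like(seg_mask)
    for obj_id in range(1, n + 1):
        obj = labels == obj_id
        feats = extract_features(obj, image)
        if quadratic_distance(feats, cluster_mean, cluster_inv_cov) < dist_threshold:
            redeye_mask |= obj
    return redeye_mask


def automated_operation(image, cluster_mean, cluster_inv_cov, dist_threshold):
    """image: H x W x 3 uint8 RGB. Returns (corrected_image, final_redeye_mask)."""
    h, w, _ = image.shape
    final_mask = np.zeros((h, w), dtype=bool)
    for scale in (1, 2, 4):                              # full, half, quarter resolution
        small = image[::scale, ::scale]                  # crude down-sampling for the sketch
        seg = segment_red_objects(small[..., 0], small[..., 1])
        redeye = classify_objects(seg, small, cluster_mean, cluster_inv_cov, dist_threshold)
        # Resize back to full size and OR into the Final Red-eye Mask.
        full = ndimage.zoom(redeye.astype(np.uint8), scale, order=0)[:h, :w].astype(bool)
        final_mask |= full
    corrected = image.copy()
    # Re-colour red-eye pixels with grey equal to the average of the green and blue channels.
    grey = ((image[..., 1].astype(np.uint16) + image[..., 2]) // 2).astype(np.uint8)
    for c in range(3):
        corrected[..., c][final_mask] = grey[final_mask]
    return corrected, final_mask
```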
DECREASE SENSITIVITY OPERATION
Following the Automated Operation, upon viewing the Corrected Image the user may want to remove a false positive object that has been incorrectly classified as a red-eye and re-colored. The three Red-eye Mask images generated by the Automated Operation are retained in memory (until the next time the Automated Operation is performed) as well as each segmented object's red-eye classification "grade" -- where object classification grade is related to the probability that the object is a red-eye (inversely related to the object's feature space distance to the training red-eye cluster). When the Decrease Sensitivity Operation is called, of all the objects classified as red-eye, the object with the lowest probability of being a red-eye is re-classified as a non-red-eye, removed from the appropriate Red-eye Mask image and returned to its original color in the Corrected Image.
This function can only be called after the Automated Operation has been called and can be called as often as requested.
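A minimal sketch of this step, continuing the assumptions above: the objects classified as red-eye are represented as a list of records carrying a full-size boolean mask and a "grade" (higher meaning more likely to be a red-eye), standing in for the retained Red-eye Masks and classification grades.

```python
# Hypothetical representation: each object is a dict with 'grade' and 'mask' keys.
def decrease_sensitivity(redeye_objects, corrected, original):
    """Re-classify the least likely red-eye as a non-red-eye and restore its colour."""
    if not redeye_objects:
        return None                                      # nothing left to undo
    weakest = min(redeye_objects, key=lambda o: o["grade"])
    redeye_objects.remove(weakest)                       # re-classify as non-red-eye
    corrected[weakest["mask"]] = original[weakest["mask"]]   # restore original colour
    return weakest
```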
INCREASE SENSITIVITY OPERATION
Following the Automated Operation, upon viewing the Corrected Image the user may see a red-eye that was not detected. The three Segmentation Mask images generated by the Automated Operation are retained in memory (until the next time the Automated Operation is performed) as well as each object's red-eye classification "grade" as described above in the Decrease Sensitivity Operation. When the Increase Sensitivity Operation is called, of all the objects not already classified as red-eye, the object with the highest probability of being a red-eye is classified as red-eye, added to the appropriate Red-eye Mask and the Corrected Image re-colored using grey scale as described above in the Automated Operation section. This function can only be called after the Automated Operation has been called.
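A matching sketch for this step, under the same hypothetical data layout: segmented objects not yet classified as red-eye are held as candidates, and promoting one re-colours its pixels with the grey-scale rule (average of green and blue) used by the Automated Operation.

```python
# Hypothetical representation: candidate and red-eye objects are dicts with 'grade' and 'mask'.
import numpy as np


def increase_sensitivity(candidate_objects, redeye_objects, corrected, original):
    """Classify the most likely remaining candidate as a red-eye and re-colour it."""
    if not candidate_objects:
        return None                                      # nothing left to promote
    best = max(candidate_objects, key=lambda o: o["grade"])
    candidate_objects.remove(best)
    redeye_objects.append(best)                          # classify as red-eye
    m = best["mask"]
    grey = ((original[..., 1].astype(np.uint16) + original[..., 2]) // 2).astype(np.uint8)
    for c in range(3):
        corrected[..., c][m] = grey[m]                   # grey-scale re-colouring
    return best
```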
MANUAL OVERRIDE OPERATION
Following the Automated Operation, upon viewing the Corrected Image the user may see a red-eye that was not detected, or they may see a false positive object that has been incorrectly classified as a red-eye and re-colored. When the Manual Override Operation is called it must be passed the coordinates of a pixel on either an undetected red-eye or a false positive object.
The function will determine if the coordinates passed to it are on an object currently classified as red-eye (indicating that the user considers it to be a false positive) and if so it will remove the object from the appropriate Red-eye Mask image and change the object's classification grade so that it is relatively low. This indicates for the purposes of any further operations that the object is very unlikely to be a red-eye (since the user is indicating that the object is not a red-eye). Finally the object is returned to its original color in the Corrected Image.
If the coordinates passed into the function are on an object currently classified as a non-red-eye (indicating that the user considers it to be an undetected red-eye) then each Segmentation Mask (see Figure 1) is checked in descending order of resolution to determine if the coordinates are on a segmented object that is present in that mask. Once a segmented object containing the selected coordinates is found, no further Segmentation Masks are checked. The segmented object is classified as red-eye and added to the appropriate Red-eye Mask (see Figure 1). The object's classification grade is increased to indicate for the purposes of any further operations that the object is highly likely to be a red-eye (since the user has just indicated this). Finally the object is re-colored in the Corrected Image using grey scale as described above in the Automated Operation section.
This function can only be called after the Automated Operation has been called and can be called as often as requested.
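A sketch of the Manual Override logic, again under the hypothetical data layout used above. It assumes the segmented objects of each resolution have been resized to full-size boolean masks so a pixel coordinate can be tested directly, with the highest-resolution list searched first.

```python
# Hypothetical representation: objects are dicts with 'grade' and full-size 'mask';
# segmented_objects_by_resolution is ordered full, half, quarter resolution.
import numpy as np


def manual_override(x, y, redeye_objects, segmented_objects_by_resolution,
                    corrected, original, low_grade=0.0, high_grade=1.0):
    # Case 1: the pixel lies on an object already classified as red-eye -> false positive.
    for obj in redeye_objects:
        if obj["mask"][y, x]:
            redeye_objects.remove(obj)
            obj["grade"] = low_grade                     # now treated as very unlikely
            corrected[obj["mask"]] = original[obj["mask"]]   # restore original colour
            return obj
    # Case 2: search each resolution's segmented objects, highest resolution first.
    for objects in segmented_objects_by_resolution:
        for obj in objects:
            if obj["mask"][y, x]:
                obj["grade"] = high_grade                # now treated as highly likely
                redeye_objects.append(obj)
                grey = ((original[..., 1].astype(np.uint16)
                         + original[..., 2]) // 2).astype(np.uint8)
                for c in range(3):
                    corrected[..., c][obj["mask"]] = grey[obj["mask"]]
                return obj                               # stop after the first match
    return None                                          # coordinates on no segmented object
```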
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002405270A CA2405270A1 (en) | 2002-10-10 | 2002-10-10 | Method of image defect detection and correction |
US10/682,364 US20040114829A1 (en) | 2002-10-10 | 2003-10-10 | Method and system for detecting and correcting defects in a digital image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA002405270A CA2405270A1 (en) | 2002-10-10 | 2002-10-10 | Method of image defect detection and correction |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2405270A1 (en) | 2004-04-10 |
Family
ID=32331607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002405270A Abandoned CA2405270A1 (en) | 2002-10-10 | 2002-10-10 | Method of image defect detection and correction |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2405270A1 (en) |
- 2002-10-10 CA CA002405270A patent/CA2405270A1/en not_active Abandoned
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7847840B2 (en) | Detecting red eye filter and apparatus using meta-data | |
US20050031224A1 (en) | Detecting red eye filter and apparatus using meta-data | |
CN111080577B (en) | Fundus image quality evaluation method, fundus image quality evaluation system, fundus image quality evaluation apparatus, and fundus image storage medium | |
JP4966021B2 (en) | Method and apparatus for optimizing red eye filter performance | |
US20130044243A1 (en) | Red-Eye Filter Method and Apparatus | |
JP2011503704A (en) | Detection of red-eye defects in digital images | |
JP2007097178A (en) | Method for removing "red-eyes" by face detection | |
WO2001071421A1 (en) | Red-eye correction by image processing | |
JP4982567B2 (en) | Artifact removal for images taken with flash | |
JP3510040B2 (en) | Image processing method | |
CA2405270A1 (en) | Method of image defect detection and correction | |
JP3709656B2 (en) | Image processing device | |
JP2010219870A (en) | Image processor and image processing method | |
IE20050040U1 (en) | Red-eye filter method and apparatus using pre-acquisition information | |
IES84150Y1 (en) | Red-eye filter method and apparatus using pre-acquisition information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued |