US20110122297A1 - Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images
- Publication number: US20110122297A1 (application US 12/947,731)
- Authority: US (United States)
- Legal status: Granted
Classifications
- G06V 40/193 — Recognition of biometric patterns; eye characteristics: preprocessing, feature extraction
- G06T 5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T 5/94 — Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
- G06T 7/90 — Image analysis; determination of colour characteristics
- G06T 7/97 — Image analysis; determining parameters from multiple pictures
- G06T 2207/20008 — Adaptive image processing; globally adaptive
- G06T 2207/30201 — Subject of image: face
- G06T 2207/30216 — Subject of image: redeye defect
Definitions
- the present invention relates to digital image processing, and more particularly to a method and apparatus for red-eye detection in an acquired digital image.
- Red-eye is a phenomenon in flash photography where a flash is reflected within a subject's eye and appears in a photograph as a red dot where the black pupil of the subject's eye would normally appear.
- the unnatural glowing red of an eye is due to internal reflections from the vascular membrane behind the retina, which is rich in blood vessels.
- This objectionable phenomenon is well understood to be caused in part by a small angle between the flash of the camera and the lens of the camera. This angle has decreased with the miniaturization of cameras with integral flash capabilities. Additional contributors include the relative closeness of the subject to the camera and ambient light levels.
- a method for red-eye detection in an acquired digital image including acquiring one or more preview or other reference images without a flash, and determining any red regions that exist within said one or more reference images.
- a main image is acquired with a flash of approximately a same scene as the one or more reference images.
- the main image is analyzed to determine any candidate red eye defect regions that exist within the main image. Any red regions determined to exist within the one or more reference images are compared with any candidate red eye defect regions determined to exist within the main image.
- Candidate red eye defect regions within the main image are removed as candidates if any corresponding red regions are determined also to exist within the one or more reference images.
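This comparison step can be sketched as follows. The sketch assumes the red regions of both images are available as boolean masks on the same pixel grid (i.e., the reference image has already been registered and resampled to the main image); the function name and the overlap threshold are illustrative, not taken from the patent.

```python
import numpy as np
from scipy import ndimage

def remove_prior_red_candidates(candidate_mask, reference_red_mask,
                                overlap_thresh=0.5):
    """Drop red-eye candidates that were already red without the flash.

    candidate_mask: bool array of red-eye candidate pixels in the flash
    main image. reference_red_mask: bool array of red pixels in the
    no-flash reference image, assumed registered to the same grid.
    """
    labels, n_regions = ndimage.label(candidate_mask)
    keep = np.zeros_like(candidate_mask)
    for region in range(1, n_regions + 1):
        region_mask = labels == region
        # Fraction of this candidate region that was red before the flash.
        overlap = (region_mask & reference_red_mask).sum() / region_mask.sum()
        # A region that was red in the reference image is not a
        # flash-induced defect, so it is removed as a candidate.
        if overlap < overlap_thresh:
            keep |= region_mask
    return keep
```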
- the analyzing may include applying a chain of one or more red-eye filters to said main image.
- the red-eye filter chain may include: (i) a pixel locator and segmentor; (ii) a shape analyser; (iii) a falsing analyser; and (iv) a pixel modifier.
- the pixel locator and segmentor may include a pixel transformer.
- a third acquired image may be corrected based on any red eye defect regions determined in the main image.
- the third acquired image may include a sub-sampled copy of the main image.
- the analyzing may include recognizing one or more faces or types of faces, or combinations thereof, in the main image.
- the method may include correcting the main image based on analysis of at least one preview image and determining a degree of blur; a degree of dust contamination; a color imbalance; a white imbalance; a gamma error; a texture characteristic error; or noise characteristics, or combinations thereof.
- the method may also include recognizing a same face or type of face, or both, in a second image as in the main image, and applying one or more same corrective processes to the second image as the main image.
- the method may include determining correcting to apply to the main image based on analysis of at least one preview image.
- the correcting may include (i) contrast normalization and image sharpening; (ii) image color adjustment and tone scaling; (iii) exposure adjustment and digital fill flash; (iv) brightness adjustment with color space matching; (v) image auto-gamma determination with image enhancement; (vi) image enhancement; or (vii) face based image enhancement, or combinations thereof.
- the method may include, responsive to determining to apply correcting to the main image, also determining that such correcting cannot be beneficially applied to the main image, and disabling the correcting and providing an indication of such to a user.
- the determining to apply correcting may include interacting with a user to determine the corrections to be made and/or performing a color space transformation.
- the method may include determining a sequence in which to apply more than one correcting action to the main image.
- the method may be performed in whole or in part within a portable digital camera.
- a digital image acquisition device including one or more optics and a sensor for acquiring digital images including main images and relatively low resolution preview images, a processor, and a processor readable medium having stored thereon digital code for programming the processor to perform any of the methods of red eye detection or other image correction described herein.
- FIG. 1( a ) shows a prior art in-camera redeye detection system
- FIG. 1( b ) shows an improved redeye detection system according to an embodiment of the present invention
- FIG. 2( a ) is a flowchart illustrating the operation of the system of FIG. 1( b );
- FIG. 2( b ) is a flowchart illustrating an alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( c ) illustrates another alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( d ) is a flowchart illustrating a further alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( e ) is a flowchart illustrating a still further alternative mode of operation of the system of FIG. 1( b );
- FIG. 3 shows the redeye filter chain of FIG. 1( b ) in more detail
- FIG. 4( a ) illustrates the operation of portions of FIGS. 2( a ), 2 ( b ), 2 ( d ) & 2 ( e ) in more detail;
- FIG. 4( b ) illustrates an alternative implementation of FIG. 4( a );
- FIG. 4( c ) is a flowchart illustrating the operation of a portion of the system of FIG. 1( b );
- FIGS. 5( a ) and 5 ( b ) illustrate the operation of a red-eye filter chain according to an embodiment of the present invention.
- the data that are used to process the main image come not solely from the image itself, but instead or additionally from one or more separate “reference” images.
- Reference images provide supplemental meta data, and in particular supplemental visual data to an acquired image, or main image.
- the reference image can be a single instance, or in general, a collection of one or more images varying from each other.
- the so-defined reference image(s) provides additional information that may not be available as part of the main image.
- An example of a spatial collection is multiple sensors located in different positions relative to each other.
- An example of a temporal distribution is a video stream.
- the reference image differs from the main captured image, and multiple reference images differ from each other, in various potential manners which can be based on one or a combination of permutations in time (temporal), position (spatial), optical characteristics, resolution, and spectral response, among other parameters.
- the reference image is captured before and/or after the main captured image, and preferably just before and/or just after the main image. Examples may include preview video, a pre-exposed image, and a post-exposed image. In certain embodiments, such a reference image uses the same optical system as the acquired image, while in other embodiments a wholly different optical system is used, or an optical system that uses one or more different optical components such as a lens, an optical detector and/or a program component.
- a reference image may differ in the location of secondary sensor or sensors, thus providing spatial disparity.
- the images may be taken simultaneously or proximate to or in temporal overlap with a main image.
- the reference image may be captured using a separate sensor located away from the main image sensor.
- the system may use a separate optical system, or may split a single optical system across a plurality of sensors or a plurality of sub-pixels of a same sensor. As digital optical systems become smaller, dual or multi sensor capture devices will become more ubiquitous. Some added registration and/or calibration may typically be involved when two optical systems are used.
- one or more reference images may also be captured using different spectral responses and/or exposure settings.
- One example includes an infra red sensor to supplement a normal sensor or a sensor that is calibrated to enhance specific ranges of the spectral response such as skin tone, highlights or shadows.
- one or more reference images may also be captured using different capture parameters such as exposure time, dynamic range, contrast, sharpness, color balance, white balance or combinations thereof based on any image parameters the camera can manipulate.
- one or more reference images may also be captured using a secondary optical system with a differing focal length, depth of field, depth of focus, exit pupil, entry pupil, aperture, or lens coating, or combinations thereof based on any optical parameters of a designed lens.
- one or more reference images may also capture a portion of the final image in conjunction with other differentials.
- Such examples include capturing a reference image that includes only the center of the final image, or capturing only the region of faces from the final image. This allows saving capture time and space while keeping as reference important information that may be useful at a later stage.
- Reference images may also be captured using varying attributes as defined herein of nominally the same scene recorded onto different parts of a same physical sensor.
- one optical subsystem focuses the scene image onto a small area of the sensor, while a second optical subsystem focuses the scene image, e.g., the main image, onto a much larger area of the sensor.
- This has the advantage that it involves only one sensor and one post-processing section, although the two independently acquired scene images will be processed separately, i.e., by accessing the different parts of the sensor array.
- This approach has another advantage, which is that a preview optical system may be configured so it can change its focal point slightly, and during a capture process, a sequence of preview images may be captured by moving an optical focus to different parts of the sensor.
- multiple preview images may be captured while a single main image is captured.
- An advantageous application of this embodiment would be motion analysis.
- Getting data from a reference image in a preview or postview process is akin to obtaining meta data, as distinct from the image processing that is performed using that meta data. That is, the data used for processing a main image, e.g., to enhance its quality, is gathered from one or more preview or postview images, while the primary source of image data is contained within the main image itself.
- This preview or postview information can be useful as clues for capturing and/or processing the main image, whether it is desired to perform red-eye detection and correction, face tracking, motion blur processing, dust artefact correction, illumination or resolution enhancement, image quality determination, foreground/background segmentation, and/or another image enhancement processing technique.
- the reference image or images may be saved as part of the image header for post processing in the capture device, or alternatively after the data is transferred onto an external computation device. In some cases, the reference image may only be used if the post processing software determines that there is missing data, damaged data or a need to replace portions of the data.
- the reference image may also be saved as a differential of the final image.
- Examples include a differential compression or removal of all portions that are identical or that can be extracted from the final image.
- a face detection process may first find faces, find eyes in a face, and check if the pupils are red, and if red pupils are found, then the red color pupils are corrected, e.g., by changing their color to black.
- Another red-eye process may involve first finding red in a digital image, checking whether the red pixels are contained in a face, and checking whether the red pixels are in the pupil of an eye. Depending on the quality of face detection available, one or the other of these may be preferred. Either of these may be performed using one or more preview or postview images, or otherwise using a reference image, rather than or in combination with, checking the main image itself.
- a red-eye filter may be based on use of acquired preview, postview or other reference image or images, and can determine whether a region may have been red prior to applying a flash.
- the post processing may determine that the subject's eyes were closed or semi closed. If there exists a reference image that was captured time-wise either a fraction of a second before or after such blinking, the region of the eyes from the reference image can replace the blinking eye portion of the final image.
- the camera may store as the reference image only high resolution data of the Region of Interest (ROI) that includes the eye locations to offer such retouching.
- Multiple reference images may be used, for example, in a face detection process, e.g., a selected group of preview images may be used. By having multiple images to choose from, the process is more likely to have a more optimal reference image to operate with.
- a face tracking process generally utilizes two or more images anyway, beginning with the detection of a face in at least one of the images. This provides an enhanced sense of confidence that the process provides accurate face detection and location results.
- a perfect image of a face may be captured in a reference image, while a main image may include an occluded profile or some other less than optimal feature.
- the reference image By using the reference image, the person whose profile is occluded may be identified and even have her head rotated and unblocked using reference image data before or after taking the picture. This can involve upsampling and aligning a portion of the reference image, or just using information as to color, shape, luminance, etc., determined from the reference image. A correct exposure on a region of interest or ROI may be extrapolated using the reference image.
- the reference image may include a lower resolution or even subsampled resolution version of the main image or another image of substantially a same scene as the main image.
- Meta data that is extracted from one or more reference images may be advantageously used in processes involving face detection, face tracking, red-eye, dust or other unwanted image artefact detection and/or correction, or other image quality assessment and/or enhancement process.
- meta data e.g., coordinates and/or other characteristics of detected faces, may be derived from one or more reference images and used for main image quality enhancement without actually looking for faces in the main image.
- a reference image may also be used to incorporate multiple emotions of a single subject into a single object. Such emotions may be used to create more comprehensive data of the person, such as smile, frown, wink, and/or blink. Alternatively, such data may also be used in post-process editing, where the various emotions can be cut-and-pasted to replace portions between the captured and the reference image. An example may include switching between a smile and a sincere look based on the same image.
- the reference image may be used for creating a three-dimensional representation of the image which can allow rotating subjects or the creation of three dimensional representations of the scene such as holographic imaging or lenticular imaging.
- a reference image may include an image that differs from a main image in that it may have been captured at a different time before or after the main image.
- the reference image may have spatial differences such as movements of a subject or other object in a scene, and/or there may be a global movement of the camera itself.
- the reference image may, preferably in many cases, have lower resolution than the main image, thus saving valuable processing time, bytes, bitrate and/or memory. There may nevertheless be applications wherein a higher resolution reference image is useful, and reference images may also have the same resolution as the main image.
- the reference image may differ from the main image in a planar sense, e.g., the reference image can be infrared or Gray Scale, or include a two bit per color scheme, while the main image may be a full color image.
- Other parameters may differ such as illumination, while generally the reference image, to be useful, would typically have some common overlap with the main image, e.g., the reference image may be of at least a similar scene as the main image, and/or may be captured at least somewhat closely in time with the main image.
- Some cameras have a pair of CCDs, which may have been designed to solve the problem of having a single zoom lens.
- a reference image can be captured at one CCD while the main image is being simultaneously captured with the second CCD, or two portions of a same CCD may be used for this purpose.
- the reference image is neither a preview nor a postview image, yet the reference image is a different image than the main image, and has some temporal or spatial overlap, connection or proximity with the main image.
- a same or different optical system may be used, e.g., lens, aperture, shutter, etc., while again this would typically involve some additional calibration.
- Such a dual mode system may include an IR sensor, enhanced dynamic range, and/or special filters that may assist in various algorithms or processes.
- a blurred image may be used in combination with a non-blurred image to produce a final image having a non-blurred foreground and a blurred background.
- Both images may be deemed reference images which are each partly used to form a main final image, or one may be deemed a reference image having a portion combined into a main image. If two sensors are used, one could save a blurred image at the same time that the other takes a sharp image, while if only a single sensor is used, then the same sensor could take a blurred image followed by taking a sharp image, or vice-versa.
- a map of systematic dust artefact regions may be acquired using one or more reference images.
- Reference images may also be used to disqualify or supplement images which have unsatisfactory features such as faces with blinks, occlusions, or frowns.
- a method for distinguishing between foreground and background regions of a digital image of a scene includes capturing first and second images of nominally the same scene and storing the captured images in DCT-coded format. These images may include a main image and a reference image, and/or simply first and second images either of which images may comprise the main image.
- the first image may be taken with the foreground more in focus than the background, while the second image may be taken with the background more in focus than the foreground.
- Regions of the first image may be assigned as foreground or background according to whether the sum of selected high order DCT coefficients decreases or increases for equivalent regions of the second image.
- one or more processed images based on the first image or the second image, or both are rendered at a digital rendering device, display or printer, or combinations thereof.
- respective regions of two images of nominally the same scene are said to be equivalent if, in the case where the two images have the same resolution, the two regions correspond to substantially the same part of the scene, or if, in the case where one image has a greater resolution than the other, the part of the scene corresponding to the region of the higher resolution image is substantially wholly contained within the part of the scene corresponding to the region of the lower resolution image.
- the two images are brought to the same resolution by sub-sampling the higher resolution image or upsampling the lower resolution image, or a combination thereof.
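Assuming both images are grayscale numpy arrays whose resolutions differ by an integer factor, the sub-sampling route might be sketched as block averaging; this is an illustrative stand-in, not the patent's own sub-sampler:

```python
import numpy as np

def match_resolution(img_a, img_b):
    """Bring two grayscale images of the same scene to a common grid by
    block-averaging the higher-resolution one down to the lower one.
    Assumes the resolution ratio is an integer in each dimension."""
    if img_a.shape[0] < img_b.shape[0]:
        img_a, img_b = img_b, img_a  # ensure img_a is the larger image
    fy = img_a.shape[0] // img_b.shape[0]
    fx = img_a.shape[1] // img_b.shape[1]
    h, w = img_b.shape[0] * fy, img_b.shape[1] * fx
    # Average each fy-by-fx block of the large image into one pixel.
    small = (img_a[:h, :w]
             .reshape(img_b.shape[0], fy, img_b.shape[1], fx)
             .mean(axis=(1, 3)))
    return small, img_b
```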
- the two images are preferably also aligned, resized or otherwise processed to bring them into overlap with respect to whatever parameters are relevant for matching.
- the two images may not be identical to each other due to slight camera movement or movement of subjects and/or objects within the scene.
- An additional stage of registering the two images may be utilized.
- the first image may be a relatively high resolution image
- the second image may be a relatively low resolution pre- or post-view version of the first image
- the processing may be done in the camera as post processing, or externally in a separate device such as a personal computer or a server computer.
- both images can be stored.
- two DCT-coded images can be stored in volatile memory in the camera for as long as they are being used for foreground/background segmentation and final image production.
- both images may be preferably stored in non-volatile memory.
- the lower resolution image may be stored as part of the file header of the higher resolution image.
- in some cases, regions of the image are stored as two separate regions. Such cases include foreground regions that may surround faces in the picture.
- processing can be performed just on the region including and surrounding the face to increase the accuracy of delimiting the face from the background.
- Inherent frequency information in DCT blocks is used: the sum of high order DCT coefficients for a DCT block serves as an indicator of whether the block is in focus. Blocks whose high order frequency coefficients drop when the main subject moves out of focus are taken to be foreground, with the remaining blocks representing background or border areas. Since the image acquisition and storage process in a digital camera typically codes captured images in DCT format as an intermediate step of the process, the method can be implemented in such cameras without substantial additional processing.
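A minimal sketch of this block-level test follows; the 8×8 block size matches JPEG's DCT blocks, but the low-order cutoff and the use of scipy's dctn on pixel data (rather than reading coefficients straight from the JPEG stream, as the patent suggests a camera could) are illustrative choices:

```python
import numpy as np
from scipy.fft import dctn

def high_order_dct_energy(gray, block=8, low_order=2):
    """Per-block sum of high-order DCT coefficient magnitudes, used as a
    sharpness indicator for each 8x8 block of a grayscale image."""
    h = gray.shape[0] // block * block
    w = gray.shape[1] // block * block
    energy = np.zeros((h // block, w // block))
    # Coefficients with index sum >= low_order count as "high order".
    high = np.add.outer(np.arange(block), np.arange(block)) >= low_order
    for by in range(0, h, block):
        for bx in range(0, w, block):
            coeffs = dctn(gray[by:by + block, bx:bx + block], norm='ortho')
            energy[by // block, bx // block] = np.abs(coeffs[high]).sum()
    return energy

def foreground_mask(img_fg_focused, img_bg_focused):
    """Blocks whose high-order energy drops when focus moves off the main
    subject are taken to be foreground; the rest are background/border."""
    return (high_order_dct_energy(img_fg_focused)
            > high_order_dct_energy(img_bg_focused))
```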
- a method is also provided for determining an orientation of an image relative to a digital image acquisition device based on a foreground/background analysis of two or more images of a scene.
- FIG. 1( a ) illustrates a prior art in-camera redeye system.
- a main image is acquired 105 from a sensor subsystem.
- This image is further processed 110 based on image acquisition parameters such as ambient lighting, length of exposure, usage of pre-flash and flash, lens focal length & aperture settings, etc.
- This image processing is pre-calibrated during the design of the camera and, due to the non-linear relationships between the various acquisition parameters, it typically involves a significant amount of empirical testing using as broad a range of image capture conditions as is practical.
- although modern digital cameras have much improved auto-focus and auto-exposure algorithms, it is still possible to capture images of non-optimal quality, either through incorrect camera settings or through encountering conditions which are not fully accounted for by the empirical calibration process for that camera.
- the main acquired and processed image is normally committed to non-volatile storage in camera memory, or in an onboard storage card 170 .
- the possibility of redeye defects implies that the image should first be passed through an in-camera redeye filter 90 .
- a more detailed description of such a filter can be found in U.S. Pat. No. 6,407,777 to DeLuca herein incorporated by reference.
- a pixel locator filter 92 which detects candidate eye-defect pixels based on a color analysis and then groups said pixels into redeye candidate regions;
- a shape analyzer filter 94 which determines if an eye candidate region is acceptable in terms of geometry, size and compactness, and further analyzes neighbouring features such as eyebrows and iris regions; and
- a falsing filter 98 which eliminates candidate regions based on a wide range of criteria. Any candidate regions which survive the falsing filter are then modified by a pixel modifier 96 and the corrected image 170 - 2 may then be stored in the main image store 170 .
- This prior art system typically will also feature a sub-sampler which can generate lower resolution versions 170 - 3 of the main acquired and processed image 170 - 1 .
- This sub-sampling unit may be implemented either in software or in hardware, and is primarily incorporated in modern digital cameras to facilitate the generation of thumbnail images for the main camera display.
- FIG. 1( b ) illustrates a preferred embodiment of a red-eye detection system according to the present invention.
- the system improves on the prior art by providing an additional image analysis prefilter 130 and an image compensation prefilter 135 to the prior art imaging chain to reduce the overall incidence of errors in the redeye detection process 90 for non-optimally acquired images.
- the image analysis prefilter 130 combines one or more techniques for determining image quality. Such techniques are well known to one familiar with the art of image processing, and in particular image editing and enhancement. Thus, the prefilter provides an in-camera analysis of a number of characteristics of an acquired, processed image with a view to determining if these characteristics lie within acceptable limits. It will be clear to those skilled in the art that the exact combination of analysis techniques will be dependent on the characteristics of the non-optimally acquired images generated by a particular digital camera. In addition, the determination of which image quality issues need to be addressed is primarily dependent on the effect of such characteristics on the red eye filter 90 .
- (i) a low-end digital camera may omit complex noise filtering circuitry on its sensor, as it is targeted at cost-sensitive markets, and may employ low quality optics for similar reasons; thus it may be susceptible to a greater degree of image noise and exhibit a poor dynamic range for white and color balance; (ii) a high-end professional camera will have a much greater dynamic range for color and white balance but may require more sophisticated image analysis to compensate for motion blur, sensor dust and other image distortions that are of concern to professional photographers.
- One subsystem of the image analysis prefilter is a blur analyzer 130 - 1 , which performs an image analysis to determine blurred regions within a digital image; this can operate on either the full size main image 170 - 1 or one or more sub-sampled copies of the image 170 - 3 .
- One technique for in-camera blur detection is outlined in US patent application 2004/0120598 to Feng which describes a computationally efficient means to determine blur by analysis of DCT coefficients in a JPEG image.
- the analyser provides a measure of the blur in the supplied image(s) to be used later in the prefilter 135 . This measure could be as simple as an index between 0 and 1 indicating the degree of blur. However, it could also indicate which regions of the image are blurred and the extent to which these are blurred.
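The patent does not prescribe the blur measure itself (it cites Feng's DCT-based approach). As a hedged stand-in, a common sharpness proxy such as the variance of the Laplacian can be folded into the kind of 0-to-1 blur index described here; the calibration constant is illustrative:

```python
import numpy as np
from scipy import ndimage

def blur_index(gray, ref_sharpness=100.0):
    """Map image sharpness to a 0..1 blur index (1.0 = fully blurred).

    Variance of the Laplacian is a common sharpness proxy; this is a
    stand-in for the DCT-coefficient measure of Feng (US 2004/0120598),
    not a reproduction of it. ref_sharpness is a tunable calibration
    constant for the camera in question.
    """
    sharpness = ndimage.laplace(gray.astype(float)).var()
    return float(np.clip(1.0 - sharpness / ref_sharpness, 0.0, 1.0))
```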
- a further subsystem of the image analysis prefilter is a dust analyzer 130 - 2 .
- the problems caused by dust on imaging devices are well known in the prior art. In the context of the present invention it is important to track the location and severity of dust particles as these may interfere with the correct detection of eye-defects when the two forms of defect overlap.
- Of particular relevance are techniques where the detection of defects in a digital image is based solely on analysis of the digital image and that do not directly relate to the image acquisition process.
- U.S. Pat. No. 6,233,364 to Krainiouk et al. discloses determining anomalous image regions based on the difference between the gradient of an image at a set of grid points and the local mean of the image gradient.
- U.S. Pat. No. 6,266,054 to Lawton et al. discloses automating the removal of narrow elongated distortions from a digital image utilizing the characteristics of image regions bordering the distortion.
- US patent application 2003/0039402 and WIPO patent application WO-03/019473 both to Robins et al. disclose detecting defective pixels by applying a median filter to an image and subtracting the result from the original image to obtain a difference image. This is used to construct at least one defect map and as such provide a measure of the effect of dust on an image supplied to the subsystem 130 - 2 .
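A minimal sketch of this median-filter difference technique, with illustrative filter size and threshold:

```python
import numpy as np
from scipy import ndimage

def defect_map(gray, size=3, thresh=30):
    """Flag defective pixels by median-filtering the image and
    thresholding the difference image, in the spirit of
    US 2003/0039402 (Robins et al.)."""
    median = ndimage.median_filter(gray, size=size)
    diff = np.abs(gray.astype(int) - median.astype(int))
    return diff > thresh  # boolean defect map
```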
- U.S. Pat. No. 6,035,072 to Read discloses mapping defects or dirt, which affect an image acquisition device.
- a plurality of images are processed and stationary components which are common between images are detected and assigned a high probability of being a defect.
- Additional techniques which are employed to modify defect probability include median filtering, sample area detection and dynamic adjustment of scores.
- This dynamic defect detection process allows defect compensation, defect correction and alerting an operator of the likelihood of defects, but from the point of view of the preferred embodiment, it is the map which is produced which indicates to the prefilter 135 the degree to which the supplied images are affected by dust and/or defects.
- Additional subsystems of the image analysis prefilter are a white balance analyzer 130 - 3 , a color balance analyzer 130 - 4 , and a gamma/luminance analyzer 130 - 5 .
- each of these provides, for example, an indicator of the degree to which each of these characteristics deviates from optimal and by which the supplied image might be corrected.
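The patent leaves the internals of these analyzers open. As one plausible sketch, a gray-world test yields the sort of deviation indicator and corrective gains that the white balance analyzer 130 - 3 might supply to the prefilter 135 :

```python
import numpy as np

def white_balance_deviation(rgb):
    """Gray-world indicator of white-balance error.

    Under a neutral illuminant the R, G and B channel means should
    agree, so their spread measures how far the image deviates from
    optimal and suggests per-channel corrective gains.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means         # corrective gain per channel
    deviation = np.abs(gains - 1).max()  # 0.0 = perfectly balanced
    return deviation, gains
```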
- Those skilled in the art will realize that such techniques are practiced in a digital camera as part of corrective image processing based on acquisition settings 110 .
- Prior art techniques which can be employed in embodiments of the present invention also exist for post-processing of an acquired image to enhance its appearance.
- U.S. Pat. No. 6,249,315 to Holm teaches how a spatially blurred and sub-sampled version of an original image can be used to obtain statistical characteristics of a scene or original image.
- this information is combined with the tone reproduction curves and other characteristics of an output device or media to provide an enhancement strategy for digital images, whereas in the preferred embodiment, an analysis prefilter employing the technique of Holm preferably provides the color characteristics of the supplied image to the prefilter 135 .
- U.S. Pat. No. 6,192,149 to Eschback et al. discloses improving the quality of a printed image by automatically determining the image gamma and then adjusting the gamma of a printer to correspond to that of the image.
- While Eschback is concerned with enhancing the printed quality of a digital image and not the digital image itself, it does teach a means for automatically determining the gamma of a digital image and as such can be used in an analysis pre-filter in embodiments of the present invention.
- U.S. Pat. No. 6,101,271 to Yamashita et al. discloses implementing a gradation correction to an RGB image signal which allows image brightness to be adjusted without affecting the image hue and saturation.
- a further subsystem of the image analysis prefilter is an image texture analyzer 130 - 6 which allows texture information to be gathered from the acquired and processed main image. This information can be useful in determining different regions within an image and, when combined with information derived from other image analysis filters such as the blur analyzer 130 - 1 or a noise analyzer 130 - 7 , can enable automatic enhancement of an image by applying deblurring or denoising techniques.
- US patent application 2002/0051571 to Jackway et al discloses texture analysis for digital images.
- US patent application 2002/0090133 to Kim et al discloses measuring color-texture distances within a digital image, thus offering improved segmentation of regions within digital images.
- a further subsystem of the image analysis prefilter is the noise analyzer 130 - 7 which produces a measure of the effect of noise on the image supplied to the subsystem 130 - 7 .
- a further illustrative subsystem of the image analysis prefilter 130 is an object/region analyzer 130 - 8 which allows localized analysis of image regions.
- One particular region which will invariably be found in an image with eye-defects is a human face region. The detection of a face region in an image with eye-defects is simplified as described in US patent application 2004/0119851 to Kaku. Again, an analysis pre-filter employing Kaku would therefore provide indicators of where face regions are to be found in a supplied image to the pre-filter 135 .
- the last illustrative subsystem of the image analysis prefilter 130 is a face recognition analyzer 130 - 9 , which includes a database of pre-determined data obtained from training performed on a personal image collection (not shown) loaded onto the digital camera, in order to recognize a person associated with a determined region, preferably acquired by the analyzer 130 - 8 , and to provide an indicator of the person or persons whose faces have been recognized in an image.
- the face recognition analyzer 130 - 9 may provide an indicator of the types of any faces recognized in the image provided to the pre-filter 130 - 9 , for example, a child or adult face, or African, Asian or Caucasian face.
- the analyser 130 - 9 comprises a set of classifiers which enable multiple sets of face (and/or non-face) data to be combined to provide improved recognition of persons found in an image.
- the types of classifiers used can be based on skin color, age characteristics, eye-shape and/or eye-brow thickness, the person's hair and/or clothing, poses associated with a person and/or whether or not a person may be wearing makeup, such as eye-shadow or lipstick, or glasses, as preferably obtained from the training performed on the personal image collection.
- An advantage of a face recognition analyzer 130 - 9 as an element of the image analysis prefilter is that it enables additional image processing modules to perform face and peripheral region analysis, which will enable a determination of known persons within an image.
- a more detailed description of the preferred person recognizer 135 - 2 a is provided in co-pending application Ser. No. 11/027,001, filed Dec. 29, 2004, and hereby incorporated by reference.
- an additional database component containing classifier signatures associated with known persons is preferably included. This database will typically be derived from a personal collection of images maintained by the owner of a digital camera and, in most typical embodiments, these will be stored off-camera.
- the image analysis prefilter may also incorporate a module to separate background and foreground regions of an image (not shown).
- Such a module is described in co-pending application entitled “Foreground/Background Segmentation in Digital Images With Differential Exposure Calculations”, serial number not yet assigned (FN-122), filed Aug. 30, 2005, hereby incorporated by reference, and may be advantageously employed to reduce the area of an image to which a redeye filter is applied, thus speeding up the execution time.
- the image is not necessarily corrected, or the filter chain is not necessarily adapted but the method of application of the filter chain to the image is altered.
- a combination of the image correction analyzer 135 - 2 and a redeye subfilter database 135 - 3 : (i) interpret the results of the image analysis performed by the image analysis prefilter 130 ; (ii) if corrective image processing is active, determine an optimal correction strategy for application to the acquired, processed image, or a subsampled copy thereof; (iii) if adaption of the redeye filter chain is implemented, determine any parameter/filter conflicts and further determine an optimal adaption of the redeye filter chain (described later); and (iv) if both corrective image processing and filter adaption are active, determine an optimal combination of each.
- a customized redeye filter set stored as a set of rules in the database 135 - 3 may be applied to the image.
- For example, children and babies are particularly susceptible to redeye. They are also more prone to certain types of redeye, e.g. “bright-eye”, where the eye is almost completely white with only a reddish periphery, which can often be more difficult to analyze and correct.
- racial or ethnic characteristics can cause differences in the color characteristics of the redeye phenomenon. For example, Asian people often exhibit a dull reddish or even “brownish” form of redeye, while persons of Indian descent often exhibit redeye effects with a distinctly “purplish” hue. The extent of these characteristics may vary somewhat from individual to individual.
- the filter parameters may be changed on the basis of skin color in that a distinctive set of prototype values could be available for each person; or age characteristics, to enable a higher tolerance of certain color and/or luminance-based filters; eye-shape and/or eye-brow thickness which are person specific; and/or whether or not a person is wearing glasses, which can introduce strong glints resulting in detection errors for standard filter sets.
- the filter order may be changed depending on the ‘identity’ of the person in the image, i.e. whether or not the person is wearing makeup and/or glasses. For example, if a person is wearing eye shadow and/or lipstick, certain skin filters might not be applied. Instead, alternative filters could be used to determine a uniform color/texture in place of the normal skin filter.
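One way such person-specific customization could be organized is a rule lookup keyed on recognized traits; the trait names and parameter values below are hypothetical, purely to illustrate the mechanism:

```python
# Illustrative rule table: recognized traits map to subfilter overrides.
# Trait names and values are hypothetical, not taken from the patent.
SUBFILTER_RULES = {
    "child":         {"color_tolerance": 1.4, "enable_bright_eye": True},
    "wears_glasses": {"glint_filter": "relaxed"},
    "wears_makeup":  {"skin_filter": None, "texture_filter": "uniform_color"},
}

def adapt_filter_params(base_params, recognized_traits):
    """Merge per-person overrides into the default redeye subfilter
    parameters before the filter chain is executed."""
    params = dict(base_params)
    for trait in recognized_traits:
        params.update(SUBFILTER_RULES.get(trait, {}))
    return params
```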
- the actual corrective image processing 135 - 1 will typically be implemented as a library of image processing algorithms which may be applied in a variety of sequences and combinations to be determined by the image correction analyzer 135 - 2 . In many digital cameras some of these algorithms will have partial or full hardware support thus improving the performance of the compensation prefilter 135 .
- the analysis prefilter 130 can operate on a subsampled copy of the main image 170 - 3 .
- the detection phase of the redeye filter 90 can be applied to a subsampled copy of the main image 170 - 3 , although not necessarily of the same resolution.
- corrective image processing is used by the image compensation prefilter it will also be applied to a subsampled copy of the main image 170 - 3 .
- This has significant benefits with respect to computation speed and computing resources, making it particularly advantageous for in-camera embodiments.
- the image correction analyzer 135 - 2 may not always be able to determine an optimal correction strategy for an acquired, processed image due to conflicts between image processing algorithms, or between the filter adaptions required for the redeye filter chain. In other instances, where a strategy can be determined but the image correction analyzer 135 - 2 is aware that the strategy is marginal and may not improve image quality, it may be desirable to obtain user input. Thus the image correction analyzer 135 - 2 may generate a user indication 140 and in certain embodiments may also employ additional user interaction to assist in the image correction and redeye filter processes.
- FIG. 2 a to FIG. 2 e illustrate several alternative embodiments of the present invention which are described as follows:
- a step of the redeye filter of FIG. 1( a ) is comparison to a known value, to determine if the pixel is, in simplified terms, red or not.
- the value of the pixel to compare with is {R′,G′,B′}.
- the two steps above of correcting and comparing may be combined simply by transforming the static value of {R′,G′,B′} based on the inverse of the correction transformation.
- the entire image is not corrected, but the comparison behaves as if the image had been corrected.
- the complexity and number of necessary steps compared to the original algorithm is exactly the same, with the added value that the algorithm now takes into account the sub-optimal quality of the image.
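A sketch of this shortcut, assuming the correction is a simple per-channel gamma transform (the transform, reference value and tolerance are illustrative): rather than correcting every pixel and then comparing against {R′,G′,B′}, the reference value is passed once through the inverse transform, so the per-pixel test runs on the uncorrected image. For a nonlinear correction and a fixed tolerance this approximates, rather than exactly reproduces, the corrected-domain decision:

```python
import numpy as np

def is_red_like(pixels, ref_rgb, gamma, tol=40.0):
    """Compare uncorrected pixels against an inverse-transformed reference.

    Correcting the image would apply v -> 255 * (v / 255) ** (1 / gamma)
    to every pixel. Instead, the inverse v -> 255 * (v / 255) ** gamma is
    applied once to the single reference value {R',G',B'}, moving the
    comparison into the uncorrected domain at no per-pixel cost.
    """
    inv_ref = 255.0 * (np.asarray(ref_rgb, dtype=float) / 255.0) ** gamma
    dist = np.linalg.norm(pixels.astype(float) - inv_ref, axis=-1)
    return dist < tol  # boolean mask of "red" pixels
```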
- FIG. 2( d ) illustrates a combination of the embodiments described in 2 ( b ) and 2 ( c ).
- This embodiment is identical to the previous embodiments except that, if subfilter compensation is not possible 252 , it incorporates two additional steps: determining if corrective image processing can be applied 206 and, if this is possible, applying said corrective image processing 208 .
- subfilter adaption is preferred to corrective image processing as it requires practically no computational resources, but only changes the input parameters of the subfilters which comprise the redeye filter chain and the composition and order-of-execution of the chain itself. However in certain circumstances correction of the original acquired image by image processing means may provide more reliable redeye detection, or be desirable as an end in itself.
- FIG. 2( e ) describes an alternative variation of the algorithm. This is identical to the embodiment of FIG. 2( a ) except that after determining if corrective image processing is possible 206 , corrective image processing is applied to both the main acquired image 170 - 1 and a subsampled copy 170 - 3 thereof, step 208 - 1 . A second additional step then saves the corrected acquired image 170 - 2 , in the main image store 170 , step 209 , and a user indication 140 is generated to inform the camera user that an improved image is available. Additional steps may be added to allow the user to select between original 170 - 1 and corrected images 170 - 2 if so desired.
- redeye detection 92 , 94 , 98 is applied to the corrected subsampled copy of the main acquired image and the redeye correction 96 is applied to the corrected copy of the main acquired image.
- corrective image processing would not be applied to the full-sized main image 170 - 1 so that the redeye correction would be applied to the uncorrected main image.
- FIG. 3 shows the principal subfilter categories which exist within the main redeye filter 90 . While each of the component filters will be referred to in sequence, it will be appreciated that where appropriate more than one of these filters may be applied at a given time, and the decisions above to modify the filter chain can include a decision not only as to which filters may be executed in a sequence, but also as to which filters can be applied in parallel sequences. As described above, the pixel transformer filter 92 - 0 allows global pixel-level transformations of images during color determining and pixel grouping operations.
- pixel color filters 92 - 1 which perform the initial determining of whether a pixel has a color indicative of a flash eye defect; a region segmentor 92 - 2 which segments pixels into candidate redeye groupings; and regional color filters 92 - 3 , color correlation filters 92 - 4 , and color distribution filters 92 - 5 which operate on candidate regions based on these criteria.
- the pixel locator and region segmenter 92 contains two additional functional blocks which do not contribute directly to the color determining and segmentation operations but are nevertheless intertwined with the operation of the pixel locator and region segmenter.
- the resegmentation engine 92 - 6 is a functional block which is particularly useful for analyzing difficult eye defects. It allows the splitting 92 - 6 a and regrouping 92 - 6 b of borderline candidate regions based on a variety of threshold criteria.
- a shape analyzer 94 next applies a set of subfilters to determine if a particular candidate grouping is physically compatible with known eye-defects.
- some basic geometric filters are first applied 94 - 1 followed by additional filters to determine region compactness 94 - 2 and boundary continuity 94 - 3 . Further determining is then performed based on region size 94 - 4 , and a series of additional filters then determine if neighbouring features exist which are indicative of eye shape 94 - 5 , eyebrows 94 - 6 and iris regions 94 - 7 .
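These geometric screens might be sketched as follows for a single candidate region; the compactness measure 4πA/P² (1.0 for a perfect circle) and the thresholds are illustrative, not taken from the patent:

```python
import numpy as np
from scipy import ndimage

def passes_shape_filters(region_mask, min_compactness=0.5,
                         size_range=(10, 400)):
    """Basic geometric screening of one candidate region: size bounds
    plus compactness 4*pi*A/P**2, since flash eye defects are expected
    to be small and roughly circular."""
    area = int(region_mask.sum())
    if not size_range[0] <= area <= size_range[1]:
        return False
    # Perimeter estimate: region pixels that border the background.
    eroded = ndimage.binary_erosion(region_mask)
    perimeter = int((region_mask & ~eroded).sum())
    compactness = 4 * np.pi * area / max(perimeter, 1) ** 2
    return compactness >= min_compactness
```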
- the redeye filter may additionally use anthropometric data to assist in the accurate determining of such features.
- a falsing analyzer 98 which contains a range of subfilter groups which eliminate candidate regions based on a range of criteria including lips filters 98 - 1 , face region filters 98 - 2 , skin texture filters 98 - 3 , eye-glint filters 98 - 4 , white region filters 98 - 5 , region uniformity filters 98 - 6 , skin color filters 98 - 7 , and eye-region falsing filters 98 - 8 . Further to these standard filters a number of specialized filters may also be included as part of the falsing analyzer 98 .
- a category of filter based on the use of acquired preview images 98 - 9 which can determine if a region was red prior to applying a flash.
- This particular filter may also be incorporated as part of the initial region determining process 92 , as described in co-pending U.S. application Ser. No. 10/919,226 from August, 2004 entitled “Red-Eye Filter Method And Apparatus” herein incorporated by reference.
- An additional category of falsing filter employs image metadata determined from the camera acquisition process 98 - 10 . This category of filter can be particularly advantageous when combined with anthropometric data as described in PCT Application No. PCT/EP2004/008706.
- a user confirmation filter 98 - 11 which can be optionally used to request a final user input at the end of the detection process. This filter can be activated or disabled based on how sub-optimal the quality of an acquired image is.
- the pixel modifier 96 is essentially concerned with the correction of confirmed redeye regions. Where an embodiment of the invention incorporates a face recognition module 130 - 9 then the pixel modifier may advantageously employ data from an in-camera known person database (not shown) to indicate aspects of the eye color of a person in the image. This can have great benefit as certain types of flash eye-defects in an image can destroy indications of original eye color.
- an additional component of the redeye filter 90 is a filter chain adapter 99 .
- This component is responsible for combining, and sequencing the subfilters of the redeye filter 90 and for activating each filter with a set of input parameters corresponding to the parameter list(s) 99 - 1 supplied from the image compensation prefilter 135 .
- While the pixel locator & region segmenter 92 , the shape analyzer 94 and the falsing analyzer 98 are illustrated as separate components, it is not intended to exclude the possibility that subfilters from these components may be applied in out-of-order sequences.
- regions which pass all the falsing filters except for the region uniformity filter 98 - 6 may be returned to the resegmentation engine 92 - 6 to determine if the region was incorrectly segmented.
- a subfilter from the pixel locator and region segmentor 92 may be used to add an additional capability to the falsing analysis 98 .
- FIG. 4 shows in more detail the operation of the image analysis 130 and image compensation prefilters 135 .
- the operation of the compensation prefilter 135 and more particularly the operation of the image correction analyzer 135 - 2 has been separated into two functional modes:
- FIG. 4( a ) illustrates the workflow for the determining and performing of corrective image processing (so corresponding generally to steps 206 , 208 of FIGS. 2( a ), ( b ), ( d ) and ( e )), while
- FIG. 4( b ) describes the determining and performing of filter chain adaption, including determining if a single chain, or a combination of multiple filter chains, will compensate for the non-optimal image characteristics determined by the image analysis prefilter 130 (so corresponding generally to steps 250 , 252 and 254 of FIGS. 2( c ) and 2( d )).
- FIG. 4( c ) illustrates an exemplary embodiment of the workflow of the image analysis prefilter 130 .
- the image correction analyzer 135 - 2 first loads an image characteristic list 401 obtained from the image analysis prefilter 130 . This list will allow the correction analyzer to quickly determine if a simple image correction is required or if a number of image characteristics will require correction 402 . In the case of a single characteristic the correction analyzer can immediately apply the relevant corrective image processing 412 followed by some tests of the corrected image 414 to ensure that image quality is at least not deteriorated by the applied corrective technique. If these tests are passed 416 then the image can be passed on to the redeye filter 90 for eye defect correction. Otherwise, if corrective image processing has failed the sanity tests 416 then an additional test may be made to determine if filter chain adaption is possible 422 .
- If filter chain adaption is possible, the algorithm will initiate the workflow described in FIG. 4( b ) for determining the required filter chain adaptions 450 . If corrective image processing has failed 416 and filter chain adaption is not possible 422 , then the correction analyzer will disable the redeye filter for this image 220 and provide a user indication to that effect 140 , after which it will pass control back to the main in-camera application 224 .
- the user indication may be interactive and may provide an option to allow the normal redeye filter process to proceed on the uncorrected image, or alternatively offer additional user-selectable choices for additional image analysis and/or correction strategies.
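- As a hedged illustration of this portion of the FIG. 4( a ) workflow, the short Python sketch below models the single-characteristic path, the sanity test and the fall-back to filter chain adaption or to disabling the redeye filter; the helper functions and return codes are placeholders assumed for the example.

```python
# Hedged sketch of the FIG. 4(a) style decision flow: apply corrective
# processing for a single non-optimal characteristic, sanity-check the result,
# and fall back to filter chain adaption or to disabling the red-eye filter.
# (The multi-characteristic path of steps 404-410 is omitted for brevity.)

def analyze_quality(image):
    return 0.9   # stand-in goodness score

def process_image(image, characteristics, corrections, adaption_possible):
    if len(characteristics) == 1:
        name = characteristics[0]
        corrected = corrections[name](image)
        if analyze_quality(corrected) >= analyze_quality(image):
            return "run_redeye_filter", corrected        # sanity test passed
    if adaption_possible:
        return "adapt_filter_chain", image               # FIG. 4(b) path
    return "disable_redeye_filter", image                # notify the user

corrections = {"exposure": lambda im: im}                # placeholder correction
print(process_image("raw-image", ["exposure"], corrections, True))
```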
- Where a number of image characteristics require correction, the next step in the workflow process is to determine the primary image deficiency 404 . After this has been successfully determined from the image characteristics list, the next step is to determine the interdependencies between this primary correction and the secondary image characteristics. Typically there will be more than one approach to correcting the primary image characteristic, and the correction analyzer must next determine the effects of these alternative correction techniques on the secondary image characteristics 406 before correction can be initiated.
- In some cases the correction analyzer may determine that these interdependencies cannot be resolved 408 .
- In that case an additional test is next made to determine if filter chain adaption is possible 422 .
- If so, the algorithm will initiate the workflow described in FIG. 4( b ) for determining the required filter chain adaptions 450 . If corrective image processing has failed 416 and filter chain adaption is not possible 422 , then the correction analyzer will disable the redeye filter for this image 220 and provide a user indication to that effect 140 , after which it will pass control back to the main in-camera application 224 .
- Where the interdependencies can be resolved, the correction analyzer next proceeds to determine the image processing chain 410 .
- this step may incorporate the determining of additional corrective techniques which can further enhance the primary correction technique which has been determined.
- the correction analyzer will, essentially, loop back through steps 404 , 406 , and 408 for each additional correction technique until it has optimized the image processing chain.
- the determining of step 408 will require access to a relatively complex knowledgebase 135 - 4 . In the present embodiment this is implemented as a series of look-up-tables (LUTs) which may be embedded in the non-volatile memory of a digital camera.
- The content of the knowledgebase is highly dependent on (i) the image characteristics determined by the image analysis prefilter, (ii) the correction techniques available to the compensation prefilter and (iii) the camera within which the invention operates.
- said knowledgebase will differ significantly from one embodiment to another.
- said knowledgebase can be easily updated by a camera manufacturer and, to some extent, modified by an end-user.
- various embodiments would store, or allow updating of the knowledgebase from (i) a compact flash or other memory card; (ii) a USB link to a personal computer; (iii) a network connection for a networked/wireless camera and (iv) from a mobile phone network for a camera which incorporates the functionality of a mobile phone.
- the knowledgebase may reside on a remote server and may respond to requests from the camera for the resolving of a certain set of correction interdependencies.
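- The following minimal Python sketch illustrates, under stated assumptions, how such a look-up-table style knowledgebase might map a primary deficiency and a secondary characteristic to a preferred correction technique; the table entries are invented for illustration and would in practice be tuned per camera model.

```python
# Sketch of a look-up-table style correction knowledgebase.  The table maps a
# (primary deficiency, secondary characteristic) pair to a preferred
# correction technique; the entries below are illustrative assumptions.

KNOWLEDGEBASE = {
    ("underexposure", "high_noise"): "gamma_boost",      # avoid amplifying noise
    ("underexposure", "none"): "exposure_compensation",
    ("color_cast", "low_contrast"): "white_balance_then_contrast",
}

def resolve(primary, secondary="none"):
    technique = KNOWLEDGEBASE.get((primary, secondary))
    if technique is None:
        raise LookupError("interdependency cannot be resolved")
    return technique

print(resolve("underexposure", "high_noise"))   # -> gamma_boost
```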
- An example of image characteristics determined by the image analysis prefilter is a person or type of person recognised by the analyzer 130 - 9 .
- Where a person or type of person has been recognized using the face recognition analyzer 130 - 9 , it is preferred to determine whether a customized redeye filter set is available and whether it has been loaded onto the camera. If this data is not available, or if a person could not be recognized from a detected face, a generic filter set will be applied to the detected face region. If a person is recognized, the redeye filter will be modified according to a customised profile loaded on the camera and stored in the database 135 - 3 . In general, this profile is based on an analysis of previous images of the recognised person or type of person and is designed to optimise both the detection and correction of redeye defects for the individual or type of person.
- Once determined, the corrective image processing chain is applied to the image 412 and a number of sanity checks are applied 414 to ensure that the image quality is not degraded by the correction process 416 . If these tests fail then it may be that the determined interdependencies were marginal or that an alternative image processing strategy is still available 418 . If this is so then the image processing chain is modified 420 and corrective image processing is reapplied 412 . This loop may continue until all alternative image processing chains have been exhausted. It is further remarked that the entire image processing chain need not be applied each time. For example, if the difference between image processing chains is a single filter then a temporary copy of the input image to that filter is kept and that filter is simply reapplied with different parameter settings.
- If step 418 determines that all corrective measures have been tried, the workflow will next move to step 422 , which determines if filter chain adaption is possible. Now returning to step 416 , if the corrective image processing is applied successfully then the image is passed on to the redeye filter 90 .
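- A hedged sketch of this retry behaviour is given below in Python: alternative chains are tried until one passes the sanity test, and intermediate results are cached so that, when two chains share a common prefix of identically-parameterised filters, only the differing filter is re-run; the chain representation and helper names are assumptions for illustration.

```python
# Try alternative corrective chains until one passes the sanity test.
# Each chain is a list of (name, function, params); intermediate results are
# cached by the prefix of steps applied so shared prefixes are not recomputed.

def run_chain(image, chain, cache):
    applied = ()
    for name, func, params in chain:
        step_key = (name, tuple(sorted(params.items())))
        applied = applied + (step_key,)
        if applied in cache:
            image = cache[applied]          # prefix already computed earlier
        else:
            image = func(image, params)
            cache[applied] = image
    return image

def try_chains(image, chains, passes_sanity):
    cache = {}
    for chain in chains:
        corrected = run_chain(image, chain, cache)
        if passes_sanity(corrected):
            return corrected
    return None                             # all alternative chains exhausted

brighten = lambda im, p: im + p["gain"]
chains = [[("brighten", brighten, {"gain": 10})],
          [("brighten", brighten, {"gain": 30})]]
print(try_chains(100, chains, passes_sanity=lambda im: im >= 125))   # -> 130
```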
- FIG. 4( b ) describes an alternative embodiment of the correction analyzer 135 - 2 which determines if filter chain adaption is possible and then modifies the redeye filter appropriately. Initially the image characteristics list is loaded 401 and for each characteristic a set of filters which require adaption is determined 452 . This is achieved through referencing the external database 135 - 3 and the comments and discussion provided in the context of the image correction knowledgebase 135 - 4 apply equally here.
- the correction analyzer must determine which filters overlap a plurality of image characteristics 454 and, additionally determine if there are conflicts between the filter adaptions required for each of the plurality of image characteristics 456 . If such conflicts exist the correction analyzer must next decide if they can be resolved 460 .
- To provide a simple illustrative example, consider two image characteristics which both require an adaption of the threshold of the main redness filter in order to compensate for the measured non-optimality of each: say the first characteristic requires the threshold to be lowered by 10% and the second requires it to be lowered by 15%.
- The correction analyzer must next determine from the knowledgebase the result of compensating for the first characteristic with a lowered threshold of 15% rather than the initially requested 10%. Such an adjustment will normally be an inclusive one and the correction analyzer may determine that the conflict can be resolved by adapting the threshold of the main redness filter to 15%. However it might also determine that the additional 5% reduction in said threshold will lead to an unacceptable increase in false positives during the redeye filtering process and that this particular conflict cannot be simply resolved.
- Where a conflict cannot be resolved in this way, an alternative is to determine if the adaptions are separable 466 . If they are separable, that implies that two distinct redeye filter processes can be run with different filter chains and the results of the two detection processes can be merged prior to correcting the defects. In the case of the example provided above, this implies that one detection process would be run to compensate for the first image characteristic with a threshold of 10% and a second detection process would be run for the second image characteristic with a threshold of 15%. The results of the two detection processes will then be combined in either an exclusive or an inclusive manner depending on the separability determination obtained from the subfilter database 135 - 3 . In embodiments where a face recognition module 130 - 9 is employed, a separate detection process may be determined and selectively applied to the image for each known person.
- Where the filter adaptions do not conflict, or where any conflicts can be resolved, the correction analyzer will prepare a single filter chain parameter list 462 which will then be loaded 464 to the filter chain adapter 99 of the redeye filter 90 illustrated in FIG. 3 .
- Where the adaptions are separable, the correction analyzer instead prepares a number of parameter lists 468 for the filter chain adapter, which are then loaded 464 as in the previous case. The redeye filter 90 is then applied.
- Where the adaptions can be neither resolved nor separated, the correction analyzer will then make a determination if image processing compensation might be possible 422 . If so, then the image processing compensation workflow of FIG. 4( a ) may be additionally employed 400 . If it is determined that image processing compensation is not possible, then the correction analyzer will disable the redeye filter for this image 220 and provide a user indication to that effect 140 , after which it will pass control back to the main in-camera application 224 .
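- The following Python sketch illustrates, with assumed thresholds and a stand-in detection routine, the two outcomes discussed above for FIG. 4( b ): a resolvable conflict collapses to a single, more inclusive filter chain, while a non-resolvable but separable conflict is handled by running two detection passes and merging their results.

```python
# Sketch of the conflict handling described for FIG. 4(b): two characteristics
# request different adaptions of the same redness threshold.  detect() is a
# stand-in for the actual red-eye detection process.

def detect(regions, threshold):
    # Placeholder: regions whose redness meets the threshold are "found".
    return {r for r in regions if r[1] >= threshold}

def adapt_and_detect(regions, requested, resolvable, inclusive_merge=True):
    if resolvable:
        threshold = min(requested)              # single, more inclusive chain
        return detect(regions, threshold)
    passes = [detect(regions, t) for t in requested]   # separable detection runs
    return set.union(*passes) if inclusive_merge else set.intersection(*passes)

regions = [("eye_a", 0.82), ("eye_b", 0.88), ("lip", 0.70)]
print(adapt_and_detect(regions, requested=[0.80, 0.85], resolvable=False))
```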
- FIG. 4( c ) describes the workflow of the image analysis prefilter 130 illustrated in FIG. 1( b ).
- This performs an image processing analysis of at least one image characteristic according to at least one of a plurality of image processing techniques.
- the output of this analysis should be a simple measure of goodness of the analyzed image characteristic.
- said measure is a percentage of the optimum for said characteristic.
- 100% represents perfect quality for the measured image characteristic; values above 95% represent negligible image distortions/imperfections in said characteristic; values above 85% represent noticeable, but easily correctable distortions/imperfections and values above 60% represent major distortions/imperfections which require major image processing to correct the image characteristic. Values below 60% imply that the image is too badly distorted to be correctable.
- the first step in this workflow is to load or, if it is already loaded in memory, to access the image to be analyzed.
- the analysis prefilter next analyzes a first characteristic of said image 482 and determines a measure of goodness. Now if said characteristic is above a first threshold (95%) 486 then it is marked as not requiring corrective measures 487 in the characteristic list. If it is below said first threshold, but above a second threshold (85%) 488 then it is marked as requiring secondary corrective measures 489 . If it is below said second threshold, but above a third threshold (60%) 490 then it is marked as requiring primary corrective measures 491 and if below said third threshold 492 it is marked as uncorrectable 493 .
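- The banding just described can be summarised by the small Python sketch below, which maps a raw goodness percentage to a correction category; the 95%, 85% and 60% thresholds are taken from the text above, while the category names are merely illustrative.

```python
# Map a raw goodness score (percentage of the optimum for a characteristic)
# to one of the four categories used by the analysis prefilter.

def classify_characteristic(goodness_percent):
    if goodness_percent > 95:
        return "no_correction_needed"
    if goodness_percent > 85:
        return "secondary_correction"
    if goodness_percent > 60:
        return "primary_correction"
    return "uncorrectable"

for score in (98, 90, 70, 40):
    print(score, classify_characteristic(score))
```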
- In embodiments of the present invention which combine corrective image processing with filter chain adaption, there may be two distinct sets of thresholds, one relating to the correctability using image processing techniques and the second relating to the degree of compensation possible using filter chain adaption.
- certain filters may advantageously scale their input parameters directly according to the measure of goodness of certain image characteristics.
- One example is the redness threshold of the main color filter which, over certain ranges of values, may be scaled directly according to a measure of excessive “redness” in the color balance of a non-optimally acquired image.
- the image characteristic list may additionally include the raw measure of goodness of each image characteristic.
- In such embodiments the raw measure of goodness will be exported from the image analysis prefilter 130 and the threshold based determining of FIG. 4( c ) will be performed within the correction analyzer 135 - 2 , in which case threshold values may be determined from the image correction knowledgebase 135 - 4 .
- the main loop continues by determining if the currently analyzed characteristic is the last image characteristic to be analyzed 496 . If not it returns to analyzing the next image characteristic 482 . If it is the last characteristic it then passes the image characteristics list to the image compensation prefilter 494 and returns control to the main camera application 224 . It should be remarked that in certain embodiments that a plurality of image characteristics may be grouped together and analyzed concurrently, rather than on a one-by-one basis. This may be preferable if several image characteristics have significant overlap in the image processing steps required to evaluate them. It may also be preferable where a hardware co-processor or DSP unit is available as part of the camera hardware and it is desired to batch run or parallelize the computing of image characteristics on such hardware subsystems.
- A third principal embodiment of the present invention has already been briefly described. This is the use of a global pixel-level transformation of the image within the redeye filter itself and relies on the corrective image processing, as determined by the correction analyzer 135 - 2 , being implementable as a global pixel-level transformation of the image.
- Those skilled in the art will realize that such a requirement implies that certain of the image analyzer elements which comprise the image analysis prefilter 130 are not relevant to this embodiment. For example, dust analysis, object/region analysis, noise analysis and certain forms of image blur cannot be corrected by such transformations. However many other image characteristics are susceptible to such transformations. Further, we remark that this alternative embodiment may be combined with the other two principal embodiments of the invention to complement each other.
- In FIG. 5( a ) we illustrate an exemplary embodiment of the red pixel locating and red region segmenting workflow which occurs within the redeye filter as steps 92 - 1 and 92 - 2 .
- This workflow has been modified to incorporate a global pixel-level transformation 92 - 0 of the image as an integral element of the color determining and region grouping steps of the redeye filter. It is implicit in this embodiment that the correction analyzer has determined that a global pixel level transformation can achieve the required image compensation.
- The image to be processed by the redeye filter is first loaded 502 and the labeling LUT for the region grouping process is initialized 504 . Next the current pixel and pixel neighbourhoods are initialized 506 .
- FIG. 5( b ) shows a diagrammatic representation of a 4-pixel neighborhood 562 , shaded light gray in the figure and containing the three upper pixels and the pixel to the left of the current pixel 560 , shaded dark gray in the figure.
- This 4-pixel neighborhood is used in the labeling algorithm of this exemplary embodiment.
- A look-up table (LUT) is defined to hold correspondence labels.
- Returning to step 506 , we see that after initialization is completed the next step in the workflow of FIG. 5( a ) is to begin a recursive iteration through all the pixels of an image in a raster-scan from top-left to bottom-right.
- the first operation on each pixel is to apply the global pixel transformation 508 .
- The loaded image is an RGB bitmap and the global pixel transformation is of the form P(R,G,B)→P(R′,G′,B′), where the red, green and blue values of the current pixel, P(R,G,B), are mapped to a shifted set of color space values, P(R′,G′,B′).
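- As a minimal sketch, and assuming a simple per-channel gain as the compensating transform, the Python below shows a global pixel-level mapping of the form P(R,G,B)→P(R′,G′,B′); the gain values are arbitrary examples and the actual transform would be the one determined by the correction analyzer.

```python
def make_global_transform(gain_r=1.0, gain_g=1.0, gain_b=1.0):
    """Return a function mapping P(R,G,B) to P(R',G',B') via per-channel gains."""
    def transform(pixel):
        r, g, b = pixel
        clip = lambda v: max(0, min(255, round(v)))
        return (clip(r * gain_r), clip(g * gain_g), clip(b * gain_b))
    return transform

# Example: assumed gains to pull down an excessively warm color balance.
shift = make_global_transform(gain_r=0.9, gain_b=1.1)
print(shift((200, 120, 90)))
```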
- the workflow next determines if the current pixel satisfies membership criteria for a candidate redeye region 510 . Essentially this implies that the current pixel has color properties which are compatible with an eye defect; this does not necessarily imply that the pixel is red as a range of other colors can be associated with flash eye defects. If the current pixel satisfies membership criteria for a segment 510 , i.e., if it is sufficiently “red”, then the algorithm checks for other “red” pixels in the 4-pixel neighborhood 512 .
- If there are no other “red” pixels in the 4-pixel neighborhood, the current pixel is assigned membership of the current label 530 .
- the LUT is then updated 532 and the current label value is incremented 534 . If there are other “red” pixels in the 4-pixel neighborhood then the current pixel is given membership in the segment with the lowest label value 514 and the LUT is updated accordingly 516 .
- a test is then performed to determine if it is the last pixel in the image 518 . If the current pixel is the last pixel in the image then a final update of the LUT is performed 540 .
- If it is not the last pixel, the next image pixel is obtained by incrementing the current pixel pointer 520 and returning to step 508 , and is processed in the same manner.
- Once the final image pixel is processed and the final LUT is completed 540 , all of the pixels with segment membership are sorted into a labeled-segment table of potential red-eye segments 542 .
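- The raster-scan labeling loop of FIG. 5( a ) can be sketched as follows in Python, assuming a simplified redness test in place of the actual membership criteria; the global transform is passed in as a function, the 4-pixel neighbourhood is the left pixel and the three upper pixels, and the correspondence LUT is resolved in the final pass.

```python
# Sketch of the raster-scan labeling workflow: transform each pixel, test
# membership, group connected "red" pixels via the 4-pixel neighbourhood, and
# resolve label equivalences through a correspondence LUT in a final pass.

def is_red(pixel):
    r, g, b = pixel
    return r > 120 and r > 1.5 * g and r > 1.5 * b   # simplified criterion

def label_red_segments(image, transform=lambda p: p):
    """image: list of rows, each a list of (R, G, B) tuples."""
    height, width = len(image), len(image[0])
    labels = [[0] * width for _ in range(height)]
    lut = {}            # correspondence LUT: label -> representative label
    next_label = 1

    def find(lab):
        while lut[lab] != lab:
            lab = lut[lab]
        return lab

    for y in range(height):
        for x in range(width):
            if not is_red(transform(image[y][x])):
                continue
            # 4-pixel neighbourhood: left and the three pixels above.
            neighbours = [(y, x - 1), (y - 1, x - 1), (y - 1, x), (y - 1, x + 1)]
            neighbour_labels = [labels[ny][nx] for ny, nx in neighbours
                                if 0 <= ny < height and 0 <= nx < width
                                and labels[ny][nx] > 0]
            if not neighbour_labels:
                labels[y][x] = next_label          # start a new segment
                lut[next_label] = next_label
                next_label += 1
            else:
                lowest = min(find(l) for l in neighbour_labels)
                labels[y][x] = lowest
                for l in neighbour_labels:          # record equivalences
                    lut[find(l)] = lowest
    # Final pass: resolve the LUT and collect pixels per segment.
    segments = {}
    for y in range(height):
        for x in range(width):
            if labels[y][x]:
                segments.setdefault(find(labels[y][x]), []).append((x, y))
    return segments

row_a = [(230, 40, 40), (225, 35, 30), (20, 20, 20)]
row_b = [(30, 30, 30), (220, 45, 38), (215, 50, 42)]
print(label_red_segments([row_a, row_b]))   # one connected red segment
```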
- All Categories may be Global Correction or Local Region Based.
- U.S. Pat. No. 6,421,468 to Ratnakar et al. discloses sharpening an image by transforming the image representation into a frequency-domain representation and by selectively applying scaling factors to certain frequency domain characteristics of an image. The modified frequency domain representation is then back-transformed into the spatial domain and provides a sharpened version of the original image.
- U.S. Pat. No. 6,393,148 to Bhaskar discloses automatic contrast enhancement of an image by increasing the dynamic range of the tone levels within an image without causing distortion or shifts to the color map of said image.
- US patent application 2002/0105662 to Patton et al. discloses modifying a portion of an image in accordance with colormetric parameters. More particularly it discloses the steps of (i) identifying a region representing skin tone in an image; (ii) displaying a plurality of renderings for said skin tone; (iii) allowing a user to select one of said renderings and (iv) modifying the skin tone regions in the images in accordance with the rendering of said skin tone selected by the user.
- US patent application 2003/0052991 to Stavely et al. discloses simulating fill flash in digital photography.
- In Stavely, a digital camera shoots a series of photographs of a scene at various focal distances. These pictures are subsequently analyzed to determine the distances to different objects in the scene. Then regions of these pictures have their brightness selectively adjusted based on the aforementioned distance calculations and are then combined to form a single photographic image.
- US patent application 2001/0031142 to Whiteside is concerned with a scene recognition method and a system using brightness and ranging mapping. It uses auto-ranging and brightness measurements to adjust image exposure to ensure that both background and foreground objects are correctly illuminated in a digital image.
- Example patents include U.S. Pat. No. 6,473,199 to Gilman et al. which describes a method for correcting for exposure in a digital image and includes providing a plurality of exposure and tone scale correcting nonlinear transforms and selecting the appropriate nonlinear transform from the plurality of nonlinear transforms and transforming the digital image to produce a new digital image which is corrected for exposure and tone scale.
- U.S. Pat. No. 5,991,456 to Rahman et al. describes a method of improving a digital image.
- the image is initially represented by digital data indexed to represent positions on a display.
- the digital data is indicative of an intensity value Ii (x,y) for each position (x,y) in each i-th spectral band.
- the intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band.
- Each surround function Fn (x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition.
- a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
- a difference is determined between the max and min block values for each sector. If this difference exceeds a pre-determined threshold the sector is marked active.
- A histogram of weighted counts of active sectors against average luminance sector values is plotted, and the histogram is shifted using pre-determined criteria so that the average luminance sector values of interest will fall within a destination window corresponding to the tonal reproduction capability of a destination application or output device.
- Another area of image enhancement in the prior art relates to brightness adjustment and color matching between color spaces.
- U.S. Pat. No. 6,459,436 to Kumada et al. describes transforming image data from device-dependent color spaces to device-independent Lab color spaces and back again. Image data is initially captured in a color space representation which is dependent on the input device. This is subsequently converted into a device independent color space. Gamut mapping (hue restoration) is performed in the device independent color space and the image data may then be mapped back to a second device-dependent color space.
- U.S. Pat. No. 6,268,939 to Klassen et al. is also concerned with correcting luminance and chrominance data in digital color images.
- More specifically Klassen is concerned with optimizing the transformations between device dependent and device independent color spaces by applying subsampling of the luminance and chrominance data.
- Another patent in this category is U.S. Pat. No. 6,192,149 to Eschback et al. which discloses improving the quality of a printed image by automatically determining the image gamma and then adjusting the gamma of a printer to correspond to that of the image.
- While Eschback is concerned with enhancing the printed quality of a digital image and not the digital image itself, it does teach a means for automatically determining the gamma of a digital image. This information could be used to directly adjust image gamma, or used as a basis for applying other enhancements to the original digital image.
- U.S. Pat. No. 6,101,271 to Yamashita et al. discloses implementing a gradation correction to an RGB image signal which allows image brightness to be adjusted without affecting the image hue and saturation.
- U.S. Pat. No. 6,516,154 to Parulski et al. discloses suggesting improvements to a digital image after it has been captured by a camera.
- the user may crop, re-size or adjust color balance before saving a picture; alternatively the user may choose to re-take a picture using different settings on the camera.
- the suggestion of improvements is made by the camera user-interface.
- Parulski does not teach the use of image analysis and corrective image processing to automatically initiate in-camera corrective actions upon an acquired digital image.
- Lin et al. discloses automatically improving the appearance of faces in images based on automatically detecting such faces in the digital image. Lin describes modification of lightness contrast and color levels of the image to produce better results.
- Certain embodiments compensate for sub-optimally acquired images where degradations in the acquired image may affect the correct operation of redeye detection, prior to or in conjunction with applying the detection and correction stage.
- Certain embodiments improve the overall success rate and reduce the false positive rate of red eye detection and reduction by compensating for non-optimally acquired images by performing image analysis on the acquired image and determining and applying corrective image processing based on said image analysis prior to or in conjunction with applying one or many redeye detection filters to the acquired image.
- Such corrections or enhancements may include applying global or local color space conversion, exposure compensation, noise reduction, sharpening, blurring or tone reproduction transformations.
- image analysis is performed on a sub-sampled copy of the main acquired image where possible, enhancing the performance of this invention inside devices with limited computational capability such as hand held devices and in particular digital cameras or printers.
- the pre-filtering process is optimized by applying when possible, as determined from the image analysis, the image transformations at the pixel level during the redeye detection process thus compensating for non-optimally acquired images without requiring that corrective image processing be applied to the full resolution image.
- The redeye filter chain is configured for optimal performance based on image analysis of an acquired image to enhance the execution of the red eye detection and reduction process. Such configuration takes place in the form of variable parameters for the algorithm and variable ordering and selection of sub-filters in the process.
- Certain embodiments operate uniformly on both pixels which are members of a defect and its bounding region thus avoiding the need to determine individually if pixels in the neighborhood of said defect are members of the defect and to subsequently apply correcting algorithms to such pixels on an individual basis.
- Variables that could significantly affect the success of the red-eye detection algorithm, such as noise, color shifts, incorrect exposure, blur, over-sharpening, etc., may be pre-eliminated before performing the detection process, thus improving the success rate.
- these variables may be pre-accounted for by changing the parameters for the detection process, thus improving the performance and the success rate.
- An advantage provided herein is that by bringing images into a known and better defined image quality, the criteria for detection can be tightened and narrowed down, thus providing higher accuracy both in the positive detection and reduction in the false detection.
- a further advantage provided herein is that by accounting for the reasons for suboptimal image quality the parameters for the detection and correction algorithm may be modified, thus providing higher accuracy both in the positive detection and reduction in the false detection without the need to modify the image.
- An additional advantage provided herein is that misclassification of pixels and regions belonging to defect areas is reduced if not altogether avoided, which means a reduction of undetected correct positives.
- An additional advantage provided herein is that color misclassification of pixels and regions belonging to non-defect areas is reduced if not avoided, which means a reduction of false positives.
- a further advantage provided herein is that certain embodiments can be implemented to run sufficiently fast and accurately to allow individual images in a batch to be analyzed and corrected in real-time prior to printing.
- Yet a further advantage of preferred embodiments of the present invention is that they have a sufficiently low requirement for computing power and memory resources to allow them to be implemented inside digital cameras as part of the post-acquisition processing step.
- a further advantage provided herein is that certain embodiments are not limited in their detection of red-eye defects by requirements for clearly defined skin regions matching a human face.
- a further advantage provided herein is the ability to concatenate image quality transformations and red eye detection to improve overall performance.
Description
- This application is a Continuation application that claims priority to U.S. patent application Ser. No. 12/142,134, filed Jun. 19, 2008, which claims priority to U.S. provisional patent application 60/945,558, filed Jun. 21, 2007, and which is a continuation-in-part (CIP) of U.S. patent application Ser. No. 11/233,513, filed Sep. 21, 2005, now U.S. Pat. No. 7,587,085, which is a continuation-in-part (CIP) which claims the benefit of priority to U.S. patent application Ser. No. 11/182,718, filed Jul. 15, 2005, now abandoned, which is a CIP of U.S. application Ser. No. 11/123,971, filed May 6, 2005, now U.S. Pat. No. 7,436,998, and which is a CIP of U.S. application Ser. No. 10/976,336, filed Oct. 28, 2004, now U.S. Pat. No. 7,536,036.
- This application is related to U.S. patent application Ser. No. 11/573,713, filed Feb. 14, 2007, which claims priority to U.S. provisional patent application No. 60/773,714, filed Feb. 14, 2006, and to PCT application no. PCT/EP2006/008229, filed Aug. 15, 2006.
- This application also is related to 11/024,046, filed Dec. 27, 2004, which is a CIP of U.S. patent application Ser. No. 10/608,772, filed Jun. 26, 2003.
- This application also is related to PCT/US2006/021393, filed Jun. 2, 2006, which is a CIP of Ser. No. 10/608,784, filed Jun. 26, 2003.
- This application also is related to U.S. application Ser. No. 10/985,657, filed Nov. 10, 2004.
- This application also is related to U.S. application Ser. No. 11/462,035, filed Aug. 2, 2006, which is a CIP of U.S. application Ser. No. 11/282,954, filed Nov. 18, 2005.
- This application also is related to U.S. patent application Ser. No. 11/460,218, filed Jul. 26, 2006, which claims priority to U.S. provisional patent application Ser. No. 60/776,338, filed Feb. 24, 2006.
- This application also is related to U.S. patent application Ser. No. 12/063,089, filed Feb. 6, 2008, which is a CIP of U.S. Ser. No. 11/766,674, filed Jun. 21, 2007, which is a CIP of U.S. Ser. No. 11/753,397, which is a CIP of U.S. Ser. No. 11/765,212, filed Aug. 11, 2006, now U.S. Pat. No. 7,315,631.
- This application also is related to U.S. patent application Ser. No. 11/674,650, filed Feb. 13, 2007, which claims priority to U.S. provisional patent application Ser. No. 60/773,714, filed Feb. 14, 2006.
- This application is related to U.S. Ser. No. 11/836,744, filed Aug. 9, 2007, which claims priority to U.S. provisional patent application Ser. No. 60/821,956, filed Aug. 9, 2006.
- This application is also related to U.S. Ser. Nos. 12/140,048, 12/140,125, 12/140,532, 12/140,827, 12/140,950, and 12/141,042.
- Each of these priority and related applications is hereby incorporated by reference.
- The present invention relates to digital image processing, and more particularly to a method and apparatus for red-eye detection in an acquired digital image.
- Red-eye is a phenomenon in flash photography where a flash is reflected within a subject's eye and appears in a photograph as a red dot where the black pupil of the subject's eye would normally appear. The unnatural glowing red of an eye is due to internal reflections from the vascular membrane behind the retina, which is rich in blood vessels. This objectionable phenomenon is well understood to be caused in part by a small angle between the flash of the camera and the lens of the camera. This angle has decreased with the miniaturization of cameras with integral flash capabilities. Additional contributors include the relative closeness of the subject to the camera and ambient light levels.
- Digital cameras are becoming more popular and smaller in size. U.S. Pat. No. 6,407,777 to DeLuca describes a method and apparatus where a red eye filter is digitally implemented in the capture device. The success or failure of such filter relies on the quality of the detection and correction process.
- Most algorithms that involve image analysis and classification are statistical in nature. There is therefore a need to develop tools which will improve the probability of successful detection while reducing the probability of false detection and maintaining optimal execution, especially in limited computational devices such as digital cameras. In many cases knowledge of image characteristics such as image quality may affect the design parameters and decisions the detection and correction software needs to implement. For example, an image with suboptimal exposure may deteriorate the overall detection of red-eye defects.
- Thus, what is needed is a method of improving the efficiency and/or success rate of algorithms for detecting and reducing red-eye phenomenon.
- A method for red-eye detection in an acquired digital image is provided including acquiring one or more preview or other reference images without a flash, and determining any red regions that exist within said one or more reference images. A main image is acquired with a flash of approximately a same scene as the one or more reference images. The main image is analyzed to determine any candidate red eye defect regions that exist within the main image. Any red regions determined to exist within the one or more reference images are compared with any candidate red eye defect regions determined to exist within the main image. Candidate red eye defect regions within the main image are removed as candidates if any corresponding red regions are determined also to exist within the one or more reference images.
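- As a hedged illustration of the comparison step just summarised, the Python sketch below discards any candidate red-eye region of the flash image whose location was already red in the non-flash reference image; the centre-distance matching and the tolerance value are assumptions for the example, since real matching would also account for subsampling and alignment between the reference and main images.

```python
# Remove candidate red-eye regions of the flash image that were already red in
# the non-flash reference image: flash-induced red-eye cannot pre-date the flash.

def centres_close(region_a, region_b, tolerance=5):
    (ax, ay), (bx, by) = region_a["centre"], region_b["centre"]
    return abs(ax - bx) <= tolerance and abs(ay - by) <= tolerance

def filter_candidates(main_candidates, preview_red_regions):
    kept = []
    for candidate in main_candidates:
        pre_existing = any(centres_close(candidate, ref)
                           for ref in preview_red_regions)
        if not pre_existing:
            kept.append(candidate)      # plausible flash-induced defect
    return kept

preview_regions = [{"centre": (40, 60)}]                     # e.g. a red garment
candidates = [{"centre": (41, 59)}, {"centre": (120, 80)}]   # from flash image
print(filter_candidates(candidates, preview_regions))        # keeps (120, 80)
```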
- The analyzing may include applying a chain of one or more red-eye filters to said main image. The red-eye filter chain may include: (i) a pixel locator and segmentor; (ii) a shape analyser; (iii) a falsing analyser; and (iv) a pixel modifier. The pixel locator and segmentor may include a pixel transformer.
- A third acquired image may be corrected based on any red eye defect regions determined in the main image. The third acquired image may include a sub-sampled copy of the main image.
- The analyzing may include recognizing one or more faces or types of faces, or combinations thereof, in the main image.
- The method may include correcting the main image based on analysis of at least one preview image and determining a degree of blur; a degree of dust contamination; a color imbalance; a white imbalance; a gamma error; a texture characteristic error; or noise characteristics, or combinations thereof. The method may also include recognizing a same face or type of face, or both, in a second image as in the main image, and applying one or more same corrective processes to the second image as the main image.
- The method may include determining correcting to apply to the main image based on analysis of at least one preview image. The correcting may include (i) contrast normalization and image sharpening; (ii) image color adjustment and tone scaling; (iii) exposure adjustment and digital fill flash; (iv) brightness adjustment with color space matching; (v) image auto-gamma determination with image enhancement; (vi) image enhancement; or (vii) face based image enhancement, or combinations thereof.
- The method may include, responsive to determining to apply correcting to the main image, also determining that such correcting cannot be beneficially applied to the main image, and disabling the correcting and providing an indication of such to a user.
- The determining to apply correcting may include interacting with a user to determine the corrections to be made and/or performing a color space transformation.
- The method may include determining a sequence in which to apply more than one correcting action to the main image.
- The method may be performed in whole or in part within a portable digital camera.
- A digital image acquisition device is also provided, including one or more optics and a sensor for acquiring digital images including main images and relatively low resolution preview images, a processor, and a processor readable medium having stored thereon digital code for programming the processor to perform any of the methods of red eye detection or other image correction described herein.
-
FIG. 1( a ) shows a prior art in-camera redeye detection system;
- FIG. 1( b ) shows an improved redeye detection system according to an embodiment of the present invention;
- FIG. 2( a ) is a flowchart illustrating the operation of the system of FIG. 1( b );
- FIG. 2( b ) is a flowchart illustrating an alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( c ) illustrates another alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( d ) is a flowchart illustrating a further alternative mode of operation of the system of FIG. 1( b );
- FIG. 2( e ) is a flowchart illustrating a still further alternative mode of operation of the system of FIG. 1( b );
- FIG. 3 shows the redeye filter chain of FIG. 1( b ) in more detail;
- FIG. 4( a ) illustrates the operation of portions of FIGS. 2( a ), 2( b ), 2( d ) & 2( e ) in more detail;
- FIG. 4( b ) illustrates an alternative implementation of FIG. 4( a );
- FIG. 4( c ) is a flowchart illustrating the operation of a portion of the system of FIG. 1( b ); and
- FIGS. 5( a ) and 5( b ) illustrate the operation of a red-eye filter chain according to an embodiment of the present invention.
- Several embodiments are described herein that use information obtained from reference images for processing a main image. That is, the data used to process the main image do not come solely from the image itself, but instead or also from one or more separate “reference” images.
- Reference images provide supplemental meta data, and in particular supplemental visual data to an acquired image, or main image. The reference image can be a single instance, or in general, a collection of one or more images varying from each other. The so-defined reference image(s) provides additional information that may not be available as part of the main image.
- An example of a spatial collection may be multiple sensors all located in different positions relative to each other. An example of a temporal distribution can be a video stream.
- The reference image differs from the main captured image, and the multiple reference images differ from each other in various potential manners which can be based on one or combination of permutations in time (temporal), position (spatial), optical characteristics, resolution, and spectral response, among other parameters.
- One example is temporal disparity. In this case, the reference image is captured before and/or after the main captured image, and preferably just before and/or just after the main image. Examples may include preview video, a pre-exposed image, and a post-exposed image. In certain embodiments, such a reference image uses the same optical system as the acquired image, while in other embodiments a wholly different optical system is used, or an optical system that differs in one or more optical components such as a lens, an optical detector and/or a program component.
- Alternatively, a reference image may differ in the location of a secondary sensor or sensors, thus providing spatial disparity. The images may be taken simultaneously or proximate to or in temporal overlap with a main image. In this case, the reference image may be captured using a separate sensor located away from the main image sensor. The system may use a separate optical system, or a single optical system may be split across a plurality of sensors or a plurality of sub-pixels of a same sensor. As digital optical systems become smaller, dual or multi sensor capture devices will become more ubiquitous. Some added registration and/or calibration may typically be involved when two optical systems are used.
- Alternatively, one or more reference images may also be captured using different spectral responses and/or exposure settings. One example includes an infra red sensor to supplement a normal sensor or a sensor that is calibrated to enhance specific ranges of the spectral response such as skin tone, highlights or shadows.
- Alternatively, one or more reference images may also be captured using different capture parameters such as exposure time, dynamic range, contrast, sharpness, color balance, white balance or combinations thereof based on any image parameters the camera can manipulate.
- Alternatively, one or more reference images may also be captured using a secondary optical system with a differing focal length, depth of field, depth of focus, exit pupil, entry pupil, aperture, or lens coating, or combinations thereof based on any optical parameters of a designed lens.
- Alternatively, one or more reference images may also capture a portion of the final image in conjunction with other differentials. Such example may include capturing a reference image that includes only the center of the final image, or capturing only the region of faces from the final image. This allows saving capture time and space while keeping as reference important information that may be useful at a later stage.
- Reference images may also be captured using varying attributes as defined herein of nominally the same scene recorded onto different parts of a same physical sensor. As an example, one optical subsystem focuses the scene image onto a small area of the sensor, while a second optical subsystem focuses the scene image, e.g., the main image, onto a much larger area of the sensor. This has the advantage that it involves only one sensor and one post-processing section, although the two independently acquired scene images will be processed separately, i.e., by accessing the different parts of the sensor array. This approach has another advantage, which is that a preview optical system may be configured so it can change its focal point slightly, and during a capture process, a sequence of preview images may be captured by moving an optical focus to different parts of the sensor. Thus, multiple preview images may be captured while a single main image is captured. An advantageous application of this embodiment would be motion analysis.
- Getting data from a reference image in a preview or postview process is akin to obtaining meta data rather than the image-processing that is performed using the meta data. That is, the data used for processing a main image, e.g., to enhance its quality, is gathered from one or more preview or postview images, while the primary source of image data is contained within the main image itself. This preview or postview information can be useful as clues for capturing and/or processing the main image, whether it is desired to perform red-eye detection and correction, face tracking, motion blur processing, dust artefact correction, illumination or resolution enhancement, image quality determination, foreground/background segmentation, and/or another image enhancement processing technique. The reference image or images may be saved as part of the image header for post processing in the capture device, or alternatively after the data is transferred on to an external computation device. In some cases, the reference image may only be used if the post processing software determines that there is missing data, damaged data or need to replace portions of the data.
- In order to maintain storage and computation efficiency, the reference image may also be saved as a differential of the final image. Example may include a differential compression or removal of all portions that are identical or that can be extracted from the final image.
- In one example involving red-eye correction, a face detection process may first find faces, find eyes in a face, and check if the pupils are red, and if red pupils are found, then the red color pupils are corrected, e.g., by changing their color to black. Another red-eye process may involve first finding red in a digital image, checking whether the red pixels are contained in a face, and checking whether the red pixels are in the pupil of an eye. Depending on the quality of face detection available, one or the other of these may be preferred. Either of these may be performed using one or more preview or postview images, or otherwise using a reference image, rather than or in combination with, checking the main image itself. A red-eye filter may be based on use of acquired preview, postview or other reference image or images, and can determine whether a region may have been red prior to applying a flash.
- Another known problem involves involuntary blinking. In this case, the post processing may determine that the subject's eyes were closed or semi closed. If there exists a reference image that was captured time-wise either a fraction of a second before or after such blinking, the region of the eyes from the reference image can replace the blinking eye portion of the final image.
- In some cases as defined above, the camera may store as the reference image only high resolution data of the Region of Interest (ROI) that includes the eye locations to offer such retouching.
- Multiple reference images may be used, for example, in a face detection process, e.g., a selected group of preview images may be used. By having multiple images to choose from, the process is more likely to have a more optimal reference image to operate with. In addition, a face tracking process generally utilizes two or more images anyway, beginning with the detection of a face in at least one of the images. This provides an enhanced sense of confidence that the process provides accurate face detection and location results.
- Moreover, a perfect image of a face may be captured in a reference image, while a main image may include an occluded profile or some other less than optimal feature. By using the reference image, the person whose profile is occluded may be identified and even have her head rotated and unblocked using reference image data before or after taking the picture. This can involve upsampling and aligning a portion of the reference image, or just using information as to color, shape, luminance, etc., determined from the reference image. A correct exposure on a region of interest or ROI may be extrapolated using the reference image. The reference image may include a lower resolution or even subsampled resolution version of the main image or another image of substantially a same scene as the main image.
- Meta data that is extracted from one or more reference images may be advantageously used in processes involving face detection, face tracking, red-eye, dust or other unwanted image artefact detection and/or correction, or other image quality assessment and/or enhancement process. In this way, meta data, e.g., coordinates and/or other characteristics of detected faces, may be derived from one or more reference images and used for main image quality enhancement without actually looking for faces in the main image.
- A reference image may also be used to include multiple emotions of a single subject into a single object. Such emotions may be used to create more comprehensive data of the person, such as smile, frown, wink, and/or blink. Alternatively, such data may also be used to post process editing where the various emotions can be cut-and-pasted to replace between the captured and the reference image. An example may include switching between a smile to a sincere look based on the same image.
- Finally, the reference image may be used for creating a three-dimensional representation of the image which can allow rotating subjects or the creation of three dimensional representations of the scene such as holographic imaging or lenticular imaging.
- A reference image may include an image that differs from a main image in that it may have been captured at a different time before or after the main image. The reference image may have spatial differences such as movements of a subject or other object in a scene, and/or there may be a global movement of the camera itself. The reference image may, preferably in many cases, have lower resolution than the main image, thus saving valuable processing time, bytes, bitrate and/or memory, and there may be applications wherein a higher resolution reference image may be useful, and reference images may have a same resolution as the main image. The reference image may differ from the main image in a planar sense, e.g., the reference image can be infrared or Gray Scale, or include a two bit per color scheme, while the main image may be a full color image. Other parameters may differ such as illumination, while generally the reference image, to be useful, would typically have some common overlap with the main image, e.g., the reference image may be of at least a similar scene as the main image, and/or may be captured at least somewhat closely in time with the main image.
- Some cameras (e.g., the Kodak V570, see http://www.dcviews.com/_kodak/v570.htm) have a pair of CCDs, which may have been designed to solve the problem of having a single zoom lens. A reference image can be captured at one CCD while the main image is being simultaneously captured with the second CCD, or two portions of a same CCD may be used for this purpose. In this case, the reference image is neither a preview nor a postview image, yet the reference image is a different image than the main image, and has some temporal or spatial overlap, connection or proximity with the main image. A same or different optical system may be used, e.g., lens, aperture, shutter, etc., while again this would typically involve some additional calibration. Such a dual mode system may include an IR sensor, enhanced dynamic range, and/or special filters that may assist in various algorithms or processes.
- In the context of blurring processes, i.e., either removing camera motion blur or adding blur to background sections of images, a blurred image may be used in combination with a non-blurred image to produce a final image having a non-blurred foreground and a blurred background. Both images may be deemed reference images which are each partly used to form a main final image, or one may be deemed a reference image having a portion combined into a main image. If two sensors are used, one could save a blurred image at the same time that the other takes a sharp image, while if only a single sensor is used, then the same sensor could take a blurred image followed by taking a sharp image, or vice-versa. A map of systematic dust artefact regions may be acquired using one or more reference images.
- Reference images may also be used to disqualify or supplement images which have unsatisfactory features such as faces with blinks, occlusions, or frowns.
- A method is provided for distinguishing between foreground and background regions of a digital image of a scene. The method includes capturing first and second images of nominally the same scene and storing the captured images in DCT-coded format. These images may include a main image and a reference image, and/or simply first and second images either of which images may comprise the main image. The first image may be taken with the foreground more in focus than the background, while the second image may be taken with the background more in focus than the foreground. Regions of the first image may be assigned as foreground or background according to whether the sum of selected high order DCT coefficients decreases or increases for equivalent regions of the second image. In accordance with the assigning, one or more processed images based on the first image or the second image, or both, are rendered at a digital rendering device, display or printer, or combinations thereof.
- This method lends itself to efficient in-camera implementation due to the relatively less-complex nature of calculations utilized to perform the task.
- In the present context, respective regions of two images of nominally the same scene are said to be equivalent if, in the case where the two images have the same resolution, the two regions correspond to substantially the same part of the scene, or if, in the case where one image has a greater resolution than the other image, the part of the scene corresponding to the region of the higher resolution image is substantially wholly contained within the part of the scene corresponding to the region of the lower resolution image. Preferably, the two images are brought to the same resolution by sub-sampling the higher resolution image or upsampling the lower resolution image, or a combination thereof. The two images are preferably also aligned, resized or otherwise processed to bring them into overlap with respect to whatever parameters are relevant for matching.
- Even after subsampling, upsampling and/or alignment, the two images may not be identical to each other due to slight camera movement or movement of subjects and/or objects within the scene. An additional stage of registering the two images may be utilized.
- Where the first and second images are captured by a digital camera, the first image may be a relatively high resolution image, and the second image may be a relatively low resolution pre- or post-view version of the first image.
- Where the image is captured by a digital camera, the processing may be done in the camera as post processing, or externally in a separate device such as a personal computer or a server computer. In such a case, both images can be stored. In the former embodiment, two DCT-coded images can be stored in volatile memory in the camera for as long as they are being used for foreground/background segmentation and final image production. In the latter embodiment, both images may preferably be stored in non-volatile memory. In the case of lower resolution pre- or post-view images, the lower resolution image may be stored as part of the file header of the higher resolution image.
- In some cases only selected regions of the image are stored as two separated regions. Such cases include foreground regions that may surround faces in the picture. In one embodiment, if it is known that the images contain a face, as determined, for example, by a face detection algorithm, processing can be performed just on the region including and surrounding the face to increase the accuracy of delimiting the face from the background.
- Inherent frequency information in the DCT blocks is used: the sum of the high order DCT coefficients for a DCT block is taken as an indicator of whether the block is in focus or not. Blocks whose high order frequency coefficients drop when the main subject moves out of focus are taken to be foreground, with the remaining blocks representing background or border areas. Since the image acquisition and storage process in a digital camera typically codes captured images in DCT format as an intermediate step of the process, the method can be implemented in such cameras without substantial additional processing.
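- A minimal Python sketch of this block classification is given below, operating on already-decoded 8×8 DCT coefficient blocks as would be available from DCT-coded storage; the cutoff used to define “high order” coefficients is an assumption for illustration.

```python
# Classify each 8x8 DCT block: compare the high-order coefficient energy in the
# foreground-focused image with that of the background-focused image; blocks
# whose high-frequency energy drops are assigned to the foreground.

def high_order_energy(block, cutoff=4):
    """Sum |coefficient| for frequencies with u + v >= cutoff in an 8x8 block."""
    return sum(abs(block[u][v])
               for u in range(8) for v in range(8) if u + v >= cutoff)

def classify_blocks(blocks_foreground_focus, blocks_background_focus):
    """Both inputs: dicts mapping (block_x, block_y) -> 8x8 coefficient lists."""
    assignment = {}
    for pos, block1 in blocks_foreground_focus.items():
        e1 = high_order_energy(block1)
        e2 = high_order_energy(blocks_background_focus[pos])
        assignment[pos] = "foreground" if e2 < e1 else "background"
    return assignment

sharp_block = [[0] * 8 for _ in range(8)]
sharp_block[3][4] = 12          # some high-frequency content
flat_block = [[0] * 8 for _ in range(8)]
print(classify_blocks({(0, 0): sharp_block}, {(0, 0): flat_block}))
```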
- This technique is useful in cases where differentiation created by camera flash, as described in U.S. application Ser. No. 11/217,788, published as 2006/0039690, incorporated by reference (see also U.S. Ser. No. 11/421,027) may not be sufficient. The two techniques may also be advantageously combined to supplement one another.
- Methods are provided that lend themselves to efficient in-camera implementation due to the computationally less rigorous nature of calculations used in performing the task in accordance with embodiments described herein.
- A method is also provided for determining an orientation of an image relative to a digital image acquisition device based on a foreground/background analysis of two or more images of a scene.
-
FIG. 1 illustrates a prior art in-camera redeye system. Within the camera 100 a main image is acquired 105 from a sensor subsystem. This image is further processed 110 based on image acquisition parameters such as ambient lighting, length of exposure, usage of pre-flash and flash, lens focal length & aperture settings, etc. This image processing is pre-calibrated during the design of the camera and, due to the non-linear relationships between the various acquisition parameters, it typically involves a significant amount of empirical testing using as broad a range of image capture conditions as is practical. Thus, even though modern digital cameras have much improved auto-focus and auto-exposure algorithms it is still possible to capture images of non-optimal quality either through incorrect camera settings or through encountering conditions which are not fully accounted for by the empirical calibrations process for that camera. - After this image processing is completed the main acquired and processed image is normally committed to non-volatile storage in camera memory, or in an
onboard storage card 170. However if the image was captured using a flash then the possibility of redeye defects implies that the image should first be passed through an in-camera redeye filter 90. A more detailed description of such a filter can be found in U.S. Pat. No. 6,407,777 to DeLuca herein incorporated by reference. Briefly, it comprises (i) a pixel locator filter 92 which detects candidate eye-defect pixels based on a color analysis and then groups said pixels into redeye candidate regions; (ii) a shape analyzer filter 94 which determines if an eye candidate region is acceptable in terms of geometry, size and compactness and further analyzes neighbouring features such as eyebrows and iris regions; and (iii) a falsing filter 98 which eliminates candidate regions based on a wide range of criteria. Any candidate regions which survive the falsing filter are then modified by a pixel modifier 96 and the corrected image 170-2 may then be stored in the main image store 170. - This prior art system typically will also feature a sub-sampler which can generate lower resolution versions 170-3 of the main acquired and processed image 170-1. This sub-sampling unit may be implemented either in software or in hardware and is, primarily, incorporated in modern digital cameras to facilitate the generation of thumbnail images for the main camera display.
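- As a rough, hypothetical illustration of the four-stage chain just described (and not a reproduction of the DeLuca filter itself), the following sketch strings together placeholder versions of the pixel locator 92, shape analyzer 94, falsing filter 98 and pixel modifier 96; the colour test, the size limits and the image representation (a dict mapping pixel coordinates to RGB tuples) are assumptions made purely for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateRegion:
    pixels: list = field(default_factory=list)   # (x, y) coordinates

def pixel_locator(image):
    """Group pixels whose colour is indicative of a flash eye defect (92)."""
    # Illustrative colour test only; a real filter uses calibrated colour spaces.
    candidates = [(x, y) for (x, y), (r, g, b) in image.items()
                  if r > 150 and r > 2 * g and r > 2 * b]
    return [CandidateRegion(pixels=candidates)] if candidates else []

def shape_analyzer(regions):
    """Keep regions whose size/compactness could plausibly be eye-like (94)."""
    return [reg for reg in regions if 4 <= len(reg.pixels) <= 400]

def falsing_filter(regions):
    """Eliminate candidates failing wider criteria such as skin context (98)."""
    return regions   # placeholder: all survivors pass in this sketch

def pixel_modifier(image, regions):
    """Darken confirmed defect pixels (96)."""
    for reg in regions:
        for xy in reg.pixels:
            r, g, b = image[xy]
            image[xy] = (min(g, b), g, b)   # crude attenuation of the red channel
    return image

def redeye_filter(image):
    return pixel_modifier(image, falsing_filter(shape_analyzer(pixel_locator(image))))

# Usage sketch: four neighbouring reddish pixels form one candidate region.
sample = {(0, 0): (210, 60, 60), (1, 0): (205, 70, 65),
          (0, 1): (200, 55, 58), (1, 1): (198, 58, 60)}
corrected = redeye_filter(dict(sample))
```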
-
FIG. 1(b) illustrates a preferred embodiment of a red-eye detection system according to the present invention. The system improves on the prior art by providing an additional image analysis prefilter 130 and an image compensation prefilter 135 to the prior art imaging chain to reduce the overall incidence of errors in the redeye detection process 90 for non-optimally acquired images. - The
image analysis prefilter 130 combines one or more techniques for determining image quality. Such techniques are well known to those skilled in the art of image processing and in particular image editing and enhancement. Thus, the prefilter provides an in-camera analysis of a number of characteristics of an acquired, processed image with a view to determining if these characteristics lie within acceptable limits. It will be clear to those skilled in the art that the exact combination of analysis techniques will be dependent on the characteristics of the non-optimally acquired images generated by a particular digital camera. In addition, the determination of which image quality issues need to be addressed is primarily dependent on the effect of such characteristics on the red eye filter 90. Thus, as illustrative examples: (i) a low-end digital camera may omit complex noise filtering circuitry on its sensor as it is targeted at cost-sensitive markets and may employ low quality optics for similar reasons. Thus it may be susceptible to a greater degree of image noise and exhibit a poor dynamic range for white and color balance; (ii) a high-end professional camera will have a much greater dynamic range for color and white balance but may require more sophisticated image analysis to compensate for motion blur, sensor dust and other image distortions that are of concern to professional photographers. - Accordingly we shall provide some examples of image analysis techniques for exemplary purposes only and it will be understood that these are not intended to limit the techniques which may be utilized in implementing the present invention.
- One subsystem of the image analysis prefilter is a blur analyzer 130-1, which performs an image analysis to determine blurred regions within a digital image—this can operate on either the full-size main image 170-1 or one or more sub-sampled copies of the image 170-3. One technique for in-camera blur detection is outlined in US patent application 2004/0120598 to Feng which describes a computationally efficient means to determine blur by analysis of DCT coefficients in a JPEG image. In common with the other sub-systems of the
prefilter 130, the analyser provides a measure of the blur in the supplied image(s) to be used later in the prefilter 135. This measure could be as simple as an index between 0 and 1 indicating the degree of blur. However, it could also indicate which regions of the image are blurred and the extent to which these are blurred. - A further subsystem of the image analysis prefilter is a dust analyzer 130-2. The problems caused by dust on imaging devices are well known in the prior art. In the context of the present invention it is important to track the location and severity of dust particles as these may interfere with the correct detection of eye-defects when the two forms of defect overlap. Of particular relevance are techniques where the detection of defects in a digital image is based solely on analysis of the digital image and that do not directly relate to the image acquisition process. For example U.S. Pat. No. 6,233,364 to Krainiouk et al. discloses determining anomalous image regions based on the difference between the gradient of an image at a set of grid points and the local mean of the image gradient. This technique generates few false positives in "noisy" regions of an image such as those representing leaves in a tree, or pebbles on a beach. U.S. Pat. No. 6,125,213 to Morimoto discloses detecting potential defect or "trash" regions within an image based on a comparison of the quadratic differential value of a pixel with a pre-determined threshold value. In addition, Morimoto discloses correcting "trash" regions within an image by successively interpolating from the outside of the "trash" region to the inside of this region—although this does not need to be performed by the subsystem 130-2. U.S. Pat. No. 6,266,054 to Lawton et al. discloses automating the removal of narrow elongated distortions from a digital image utilizing the characteristics of image regions bordering the distortion. US patent application 2003/0039402 and WIPO patent application WO-03/019473, both to Robins et al., disclose detecting defective pixels by applying a median filter to an image and subtracting the result from the original image to obtain a difference image. This is used to construct at least one defect map and as such provides a measure of the effect of dust on an image supplied to the subsystem 130-2.
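- The median-filter difference idea attributed to Robins et al. above might be sketched as follows; the kernel size and threshold are assumed values and the code is illustrative only.

```python
import numpy as np
from scipy.ndimage import median_filter

def defect_map(gray_image, kernel=3, threshold=30):
    """Flag pixels that depart strongly from their local median.

    Subtracting a median-filtered copy from the original highlights
    isolated defective pixels such as dust shadows or dead sensor elements.
    """
    smoothed = median_filter(gray_image, size=kernel)
    difference = np.abs(gray_image.astype(np.int16) - smoothed.astype(np.int16))
    return difference > threshold          # boolean defect map

def dust_severity(gray_image):
    """Scalar indicator of how strongly dust/defects affect the supplied image."""
    return defect_map(gray_image).mean()   # fraction of pixels flagged
```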
- U.S. Pat. No. 6,035,072 to Read discloses mapping defects or dirt, which affect an image acquisition device. A plurality of images are processed and stationary components which are common between images are detected and assigned a high probability of being a defect. Additional techniques which are employed to modify defect probability include median filtering, sample area detection and dynamic adjustment of scores. This dynamic defect detection process allows defect compensation, defect correction and alerting an operator of the likelihood of defects, but from the point of view of the preferred embodiment, it is the map which is produced which indicates to the
prefilter 135 the degree to which the supplied images are affected by dust and/or defects. - Additional subsystems of the image analysis prefilter are a white balance analyzer 130-3, a color balance analyzer 130-4, and a gamma/luminance analyzer 130-5. In the embodiment, each of these provides, for example, an indicator of the degree to which each of these characteristics deviates from optimal and by which the supplied image might be corrected. Those skilled in the art will realize that such techniques are practiced in a digital camera as part of corrective image processing based on
acquisition settings 110. Prior art techniques which can be employed in embodiments of the present invention also exist for post-processing of an acquired image to enhance its appearance. Some representative examples are now described: - U.S. Pat. No. 6,249,315 to Holm teaches how a spatially blurred and sub-sampled version of an original image can be used to obtain statistical characteristics of a scene or original image. In Holm, this information is combined with the tone reproduction curves and other characteristics of an output device or media to provide an enhancement strategy for digital images, whereas in the preferred embodiment, an analysis prefilter employing the technique of Holm preferably provides the color characteristics of the supplied image to the
prefilter 135. - U.S. Pat. No. 6,268,939 to Klassen et al. teaches correcting luminance and chrominance data in digital color images. Specifically, Klassen is concerned with optimizing the transformations between device dependent and device independent color spaces by applying subsampling of the luminance and chrominance data.
- U.S. Pat. No. 6,192,149 to Eschback et al. discloses improving the quality of a printed image by automatically determining the image gamma and then adjusting the gamma of a printer to correspond to that of the image. Although Eschback is concerned with enhancing the printed quality of a digital image and not the digital image itself, it does teach a means for automatically determining the gamma of a digital image and as such can be used in an analysis pre-filter in embodiments of the present invention. U.S. Pat. No. 6,101,271 to Yamashita et al. discloses implementing a gradation correction to an RGB image signal which allows image brightness to be adjusted without affecting the image hue and saturation.
- A further subsystem of the image analysis prefilter is an image texture analyzer 130-6 which allows texture information to be gathered from the acquired and processed main image. This information can be useful both in determining different regions within an image and, when combined with information derived from other image analysis filters such as the blur analyzer 130-1 or a noise analyzer 130-7, it can enable automatic enhancement of an image by applying deblurring or denoising techniques. US patent application 2002/0051571 to Jackway et al. discloses texture analysis for digital images. US patent application 2002/0090133 to Kim et al. discloses measuring color-texture distances within a digital image, thus offering improved segmentation for regions within digital images.
- A further subsystem of the image analysis prefilter is the noise analyzer 130-7 which produces a measure of the effect of noise on the image supplied to the subsystem 130-7. A further illustrative subsystem of the
image analysis prefilter 130 is an object/region analyzer 130-8 which allows localized analysis of image regions. One particular region which will invariably be found in an image with eye-defects is a human face region. The detection of a face region in an image with eye-defects is simplified as described in US patent application 2004/0119851 to Kaku. Again, an analysis pre-filter employing Kaku would therefore provide indicators of where face regions are to be found in a supplied image to the pre-filter 135. - The last illustrative subsystem of the
image analysis prefilter 130 is a face recognition analyzer 130-9 which includes a database of pre-determined data, obtained from training performed on a personal image collection (not shown) loaded onto the digital camera, in order to recognize a person associated with a determined region preferably acquired by the analyzer 130-8 and to provide an indicator of the person or persons whose faces have been recognized in an image. Alternatively, the face recognition analyzer 130-9 may provide an indicator of the types of any faces recognized in the image provided to the pre-filter 130-9, for example, a child or adult face, or African, Asian or Caucasian face. - In one embodiment, the analyser 130-9 comprises a set of classifiers which enable multiple sets of face (and/or non-face) data to be combined to provide improved recognition of persons found in an image. The types of classifiers used can be based on skin color, age characteristics, eye-shape and/or eye-brow thickness, the person's hair and/or clothing, poses associated with a person and/or whether or not a person may be wearing makeup, such as eye-shadow or lipstick, or glasses, as preferably obtained from the training performed on the personal image collection.
- One particular advantage of employing a face recognition analyzer 130-9 as an element of the image analysis prefilter is that it enables additional image processing modules to perform face and peripheral region analysis which will enable a determination of known persons within an image. A more detailed description of the preferred person recognizer 135-2a is provided in co-pending application Ser. No. 11/027,001, filed Dec. 29, 2004, and hereby incorporated by reference. For the person recognizer 135-2a to function more effectively an additional database component containing classifier signatures associated with known persons is preferably included. This database will typically be derived from a personal collection of images maintained by the owner of a digital camera and, in most typical embodiments, these will be stored off-camera. Further details on the creation and management of exemplary embodiments of such image collections and associated off-camera and in-camera databases are given in co-pending applications Ser. No. 10/764,339, filed Jan. 22, 2004, and Ser. No. 11/027,001, filed Dec. 29, 2004, which are hereby incorporated by reference.
- The image analysis prefilter may also incorporate a module to separate background and foreground regions of an image (not shown). Such a module is described in co-pending application entitled "Foreground/Background Segmentation in Digital Images With Differential Exposure Calculations", serial number not yet assigned (FN-122), filed Aug. 30, 2005, hereby incorporated by reference, and may be advantageously employed to reduce the area of an image to which a redeye filter is applied, thus speeding up the execution time. In such a case the image is not necessarily corrected, or the filter chain is not necessarily adapted, but the method of application of the filter chain to the image is altered.
- Turning now to the
image compensation prefilter 135. In the present embodiment, a combination of an image correction analyzer 135-2 and a redeye subfilter database 135-3: (i) interprets the results of the image analysis performed by the image analysis prefilter 130; (ii) if corrective image processing is active, determines an optimal correction strategy for application to the acquired, processed image, or a subsampled copy thereof; (iii) if adaption of the redeye filter chain is implemented, determines any parameter/filter conflicts and further determines an optimal adaption of the redeye filter chain (described later); and (iv) if both corrective image processing and filter adaption are active, determines an optimal combination of each. - For example, if the analyzer 130-9 has recognized one or more persons or types of persons in an image, a customized redeye filter set stored as a set of rules in the database 135-3 may be applied to the image. To understand how such customization can improve the performance of a redeye filter we cite some examples of known aspects of the redeye phenomenon which are person specific.
- For example, children and babies are particularly susceptible to redeye. They are also more prone to certain types of redeye, e.g. "bright-eye" where the eye is almost completely white with only a reddish periphery, which can often be more difficult to analyze and correct.
- In addition, racial or ethnic characteristics can cause differences in the color characteristics of the redeye phenomenon. For example, Asian people often exhibit a dull reddish or even "brownish" form of redeye, while persons of Indian descent often exhibit redeye effects with a distinctly "purplish" hue. The extent of these characteristics may vary somewhat from individual to individual.
- As such, knowledge of the type of person in an image can be used by the analyzer 135-2 to determine the filters, the order of the filters and/or the filter parameters to be applied to an image. For example, the filter parameters may be changed on the basis of skin color, in that a distinctive set of prototype values could be available for each person; or age characteristics, to enable a higher tolerance of certain color and/or luminance-based filters; eye-shape and/or eye-brow thickness, which are person specific; and/or whether or not a person is wearing glasses, which can introduce strong glints resulting in detection errors for standard filter sets. Similarly, the filter order may be changed depending on the 'identity' of the person in the image, i.e. whether or not the person is wearing makeup and/or glasses. For example, if a person is wearing eye shadow and/or lipstick, certain skin filters might not be applied. Instead, alternative filters could be used to determine a uniform color/texture in place of the normal skin filter.
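- Purely as an illustration of how such person-specific rules might be encoded in a subfilter database such as 135-3, a small profile structure is sketched below; the field names, profile tags and values are assumptions and are not taken from the disclosure.

```python
# Hypothetical structure for person-specific redeye filter profiles;
# all field names and values are illustrative assumptions.
DEFAULT_PROFILE = {
    "redness_threshold": 0.60,
    "apply_skin_filter": True,
    "filter_order": ["pixel_color", "shape", "skin", "glint", "falsing"],
}

PERSON_PROFILES = {
    "child": {"redness_threshold": 0.45,          # more tolerant: bright-eye cases
              "extra_filters": ["bright_eye"]},
    "wearing_glasses": {"extra_filters": ["glint_suppression"]},
    "wearing_makeup": {"apply_skin_filter": False,
                       "extra_filters": ["uniform_texture"]},
}

def build_filter_config(recognized_tags):
    """Merge the default profile with the profiles of any recognized tags."""
    config = dict(DEFAULT_PROFILE)
    config["extra_filters"] = []
    for tag in recognized_tags:
        profile = PERSON_PROFILES.get(tag, {})
        config["extra_filters"] += profile.get("extra_filters", [])
        for key, value in profile.items():
            if key != "extra_filters":
                config[key] = value
    return config

# e.g. build_filter_config(["child", "wearing_glasses"]) lowers the redness
# threshold and adds a glint-suppression stage before the standard chain runs.
```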
- The actual corrective image processing 135-1 will typically be implemented as a library of image processing algorithms which may be applied in a variety of sequences and combinations to be determined by the image correction analyzer 135-2. In many digital cameras some of these algorithms will have partial or full hardware support thus improving the performance of the
compensation prefilter 135. - It was already remarked that the
analysis prefilter 130 can operate on a subsampled copy of the main image 170-3. In the same way the detection phase of the redeye filter 90 can be applied to a subsampled copy of the main image 170-3, although not necessarily of the same resolution. Thus where corrective image processing is used by the image compensation prefilter, it will also be applied to a subsampled copy of the main image 170-3. This has significant benefits with respect to computation speed and computing resources, making it particularly advantageous for in-camera embodiments. - The image correction analyzer 135-2 may not always be able to determine an optimal correction strategy for an acquired, processed image due to conflicts between image processing algorithms, or between the filter adaptions required for the redeye filter chain. In other instances, a strategy can be determined but the image correction analyzer 135-2 may be aware that the strategy is marginal and may not improve image quality; in such cases it may be desirable to obtain user input. Thus the image correction analyzer 135-2 may generate a
user indication 140 and in certain embodiments may also employ additional user interaction to assist in the image correction and redeye filter processes. -
FIG. 2(a) to FIG. 2(e) illustrate several alternative embodiments of the present invention which are described as follows: -
- (i) In
FIG. 2(a) an acquired, processed main image, or alternatively a subsampled copy thereof, is initially loaded, step 201, to respective sub-systems of the analysis prefilter 130, step 202. These produce their measurements, and a determination of whether any of the image quality characteristics lie within or outside acceptable thresholds is made by the image correction analyser 135-2, step 204. If image quality is within acceptable limits for each of the image characteristics analyzed then the redeye filter 90 can be applied normally and no corrective image processing is required. However, if certain image characteristics do lie outside acceptable tolerances then additional analysis is performed by the analyser 135-2 to determine if corrective image processing can be applied 206. If some of the analyzed image characteristics lie too far outside acceptable thresholds, or if a disadvantageous combination of image characteristics is determined, it may not be possible to correct the image reliably prior to applying the redeye filter. Thus the filter 90 can be disabled 220, a user indication 140 can be provided and processing is completed for this particular image 224, without performing the red eye correction, or the process is performed with a lower probability of success. However, if the image can be repaired, 206-YES, the image is corrected, step 208, prior to executing the red eye algorithm 90. In the preferred embodiment, the process of correcting the image, 208, may be performed on the full resolution image, or alternatively on a subsampled image or a copy of the image. The exact nature and possibilities for such corrections, 208, whether local or global, are described later. In any case, the corrected image needs only be stored temporarily and can be discarded after red-eye processing is complete, 209. It should be noted that performing the pre-filtering, 208, on the image does not mean that the actual red-eye detection and reduction algorithm, 90, has to be modified to account for possible variability. Nonetheless, as image quality supplied to the filter 90 is improved, the red eye algorithm can use tighter parameters and more well defined restrictions as to the nature of the red eye features that are to be identified, so producing improved results. - (ii)
FIG. 2(b) corresponds with FIG. 2(a) except that it includes an additional determining step, 240, which follows the determination that corrective image processing is possible, 206. This additional step determines if the corrective image processing to be applied to the image can be provided by a globally applied transformation of the image pixels. The most popular global transformations are matrix multiplication or lookup table transformations. For example, the analysis provided by filters 130-3 . . . 130-5 may indicate to the analyser 135-2 that the principal cause of image non-optimality is a reddish color cast. In this case, a simple transformation of the red image component, R→R′, is sufficient to compensate for the image non-optimality. Another example is an image that is underexposed, where a tone reproduction curve (TRC) needs to be corrected. Global transformations have the advantage of being relatively computationally efficient and with a potential to be highly optimized. In addition, such transformations may be performed within the redeye filter 90 itself, for example, as part of the pixel locator and region segmentation process 92 described in more detail later in relation to FIGS. 3 and 5, so reducing the overhead involved in performing this correction. For the moment, it is sufficient to say that in step 242, a pixel transformation within the pixel locator and region segmentor 92 of the red-eye filter is configured. It will also be seen that the steps 240, 242 may be performed as an alternative to the other correction step 208, in parallel with other corrections or in series with other corrections prior to execution of the red-eye filter 90. - (iii) In
FIG. 2(c), instead of corrective image processing to compensate for a non-optimally acquired image, the analyser 135-2 adapts the redeye filter chain to provide image compensation for the redeye detection process. The subfilters affected by the out-of-tolerance image characteristics are first determined, step 250. Typically this determining step will involve the image correction analyzer 135-2 obtaining the relevant data from an in-camera data repository such as the redeye subfilter database 135-3. After the affected subfilters have been determined 250, the next step is to determine if subfilter compensation is possible 252. This will depend on the different image characteristics which are outside acceptable thresholds and the relevant sets of redeye subfilters affected by each out-of-tolerance image characteristic. If filter chain adaption is possible then the filter chain is modified 254 and the redeye filter is applied 90. If subfilter compensation is not possible due to filter- or parameter-based conflicts then steps 220, 140, and 224 are performed as in the previous embodiments. The subfilter determining process is further described in FIG. 4(b) and an overview of the redeye subfilter matrix is given in FIG. 3.
- The following example illustrates the concept of applying the results of the analysis stage to modify the filter chain of the correction process and the red eye detection process as opposed to modification of the image pixels. It is assumed that a pixel {R₀,G₀,B₀} after the needed correction,
step 208, is transformed to pixel value {R₁,G₁,B₁} by a transformation T: T[{R₀,G₀,B₀}]={R₁,G₁,B₁}. For illustrative purposes, we assume that the first stage of the red eye detection algorithm, as defined in block 92 of FIG. 1(a), is a comparison to a known value, to determine if the pixel is, in simplified terms, red or not. The value with which the pixel is compared is {R′,G′,B′}. However, the two steps above of correcting and comparing may be combined simply by transforming the static value of {R′,G′,B′} based on the inverse of the correction transformation. Thus, the preliminary preparatory stage will be: {R″,G″,B″}=T⁻¹[{R′,G′,B′}] and the pixel by pixel comparison, as adapted, step 254, to the necessary transformations will comprise the following test: IF {R₀,G₀,B₀} ≥ {R″,G″,B″}. By doing so, the entire image is not corrected, but the comparison behaves as if the image had been corrected. The complexity and number of necessary steps is exactly the same as in the original algorithm, with the added value that the algorithm now takes the sub-optimal quality of the image into account.
- T[{R₀,G₀,B₀}] α {R′,G′,B′} = {R₀,G₀,B₀} α T⁻¹[{R′,G′,B′}] = {R₀,G₀,B₀} α {R″,G″,B″}
- where α denotes the relationship between the objects.
- Of course, such adaptation may be more complex than the simplified example above, and may include change of multiple values in the algorithm or change in the order the various filters are applied, or change in the weight of the various filters. However, the improvement in performance may justify the added architectural complexity.
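- The adaptation described above, in which the comparison constant rather than the image is transformed, can be sketched as follows; the diagonal correction matrix and the simplified redness test are assumptions made purely for the example, and the equivalence of the two tests holds for per-channel (diagonal, positive) transformations.

```python
import numpy as np

def make_redness_test(reference_rgb, correction_matrix):
    """Fold a global correction T into the pixel test by transforming the
    comparison constant once with the inverse of T, instead of correcting
    every pixel of the image."""
    inv = np.linalg.inv(correction_matrix)
    adapted_reference = inv @ np.asarray(reference_rgb, dtype=np.float64)

    def is_red(pixel_rgb):
        # For a per-channel correction this is equivalent to testing
        # T[pixel] against the original reference value.
        return bool(np.all(np.asarray(pixel_rgb, dtype=np.float64) >= adapted_reference))
    return is_red

# Usage sketch: a diagonal matrix modelling a mild red-channel correction.
T = np.diag([1.1, 1.0, 1.0])
test = make_redness_test(reference_rgb=(150, 60, 60), correction_matrix=T)
print(test((140, 70, 70)))   # same per-pixel cost as the unadapted test
```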
- (iv)
FIG. 2(d) illustrates a combination of the embodiments described in 2(b) and 2(c). This embodiment is identical to the previous embodiments except that, if subfilter compensation is not possible 252, it incorporates two additional steps: a step to determine if corrective image processing can be applied 206 and, if this is possible, a second step 208 to apply said corrective image processing. Note that subfilter adaption is preferred to corrective image processing as it requires practically no computational resources, but only changes the input parameters of the subfilters which comprise the redeye filter chain and the composition and order-of-execution of the chain itself. However, in certain circumstances correction of the original acquired image by image processing means may provide more reliable redeye detection, or be desirable as an end in itself. - (v)
FIG. 2(e) describes an alternative variation of the algorithm. This is identical to the embodiment of FIG. 2(a) except that after determining if corrective image processing is possible 206, corrective image processing is applied to both the main acquired image 170-1 and a subsampled copy 170-3 thereof, step 208-1. A second additional step then saves the corrected acquired image 170-2 in the main image store 170, step 209, and a user indication 140 is generated to inform the camera user that an improved image is available. Additional steps may be added to allow the user to select between original 170-1 and corrected images 170-2 if so desired. In this embodiment, redeye detection and redeye correction 96 are applied to the corrected copy of the main acquired image. In other embodiments corrective image processing would not be applied to the full-sized main image 170-1 so that the redeye correction would be applied to the uncorrected main image. -
FIG. 3 shows the principal subfilter categories which exist within the main redeye filter 90. While each of the component filters will be referred to in sequence, it will be appreciated that where appropriate more than one of these filters may be applied at a given time and the decisions above to modify the filter chain can include a decision not only as to which filters may be executed in a sequence, but also which filters can be applied in parallel sequences. As described above, the pixel transformer filter 92-0 allows global pixel-level transformations of images during color determining and pixel grouping operations. Also, within the pixel locator and region segmenter 92 we find pixel color filters 92-1 which perform the initial determining if a pixel has a color indicative of a flash eye defect; a region segmentor 92-2 which segments pixels into candidate redeye groupings; regional color filters 92-3, color correlation filters 92-4, and color distribution filters 92-5 which operate on candidate regions based on these criteria. In addition the pixel locator and region segmenter 92 contains two additional functional blocks which do not contribute directly to the color determining and segmentation operations but are nevertheless intertwined with the operation of the pixel locator and region segmenter. The resegmentation engine 92-6 is a functional block which is particularly useful for analyzing difficult eye defects. It allows the splitting 92-6a and regrouping 92-6b of borderline candidate regions based on a variety of threshold criteria. - After candidate eye-defect groupings have been determined by the
segmenter 92, a shape analyzer 94 next applies a set of subfilters to determine if a particular candidate grouping is physically compatible with known eye-defects. Thus some basic geometric filters are first applied 94-1 followed by additional filters to determine region compactness 94-2 and boundary continuity 94-3. Further determining is then performed based on region size 94-4, and a series of additional filters then determine if neighbouring features exist which are indicative of eye shape 94-5, eyebrows 94-6 and iris regions 94-7. In certain embodiments of the present invention the redeye filter may additionally use anthropometric data to assist in the accurate determining of such features. - Now the remaining candidate regions are passed to a
falsing analyzer 98 which contains a range of subfilter groups which eliminate candidate regions based on a range of criteria including lips filters 98-1, face region filters 98-2, skin texture filters 98-3, eye-glint filters 98-4, white region filters 98-5, region uniformity filters 98-6, skin color filters 98-7, and eye-region falsing filters 98-8. Further to these standard filters a number of specialized filters may also be included as part of the falsing analyzer 98. In particular we mention a category of filter based on the use of acquired preview images 98-9 which can determine if a region was red prior to applying a flash. This particular filter may also be incorporated as part of the initial region determining process 92, as described in co-pending U.S. application Ser. No. 10/919,226 from August, 2004 entitled "Red-Eye Filter Method And Apparatus" herein incorporated by reference. An additional category of falsing filter employs image metadata determined from the camera acquisition process 98-10. This category of filter can be particularly advantageous when combined with anthropometric data as described in PCT Application No. PCT/EP2004/008706. Finally an additional category of filter is a user confirmation filter 98-11 which can be optionally used to request a final user input at the end of the detection process. This filter can be activated or disabled based on how sub-optimal the quality of an acquired image is. - The
pixel modifier 96 is essentially concerned with the correction of confirmed redeye regions. Where an embodiment of the invention incorporates a face recognition module 130-9 then the pixel modifier may advantageously employ data from an in-camera known person database (not shown) to indicate aspects of the eye color of a person in the image. This can have great benefit as certain types of flash eye-defects in an image can destroy indications of original eye color. - In the preferred embodiment, an additional component of the
redeye filter 90 is a filter chain adapter 99. This component is responsible for combining and sequencing the subfilters of the redeye filter 90 and for activating each filter with a set of input parameters corresponding to the parameter list(s) 99-1 supplied from the image compensation prefilter 135.
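- A minimal sketch of how a filter chain adapter might sequence subfilters according to a supplied parameter list such as 99-1 is given below; the data layout and the stand-in subfilters are assumptions made purely for illustration.

```python
def run_filter_chain(image, subfilters, parameter_list):
    """Sequence the named subfilters, each with its own parameter set.

    `subfilters` maps a name to a callable f(image, regions, **params) that
    returns an updated list of candidate regions; `parameter_list` is an
    ordered list of (name, params) pairs as supplied by the compensation
    prefilter. Both layouts are assumptions made for this sketch.
    """
    regions = []
    for name, params in parameter_list:
        regions = subfilters[name](image, regions, **params)
        if not regions:        # nothing left to falsify or correct
            break
    return regions

# Usage sketch with trivial stand-in subfilters.
chain = [("pixel_color", {"redness_threshold": 0.55}),
         ("shape",       {"min_size": 4, "max_size": 400})]
filters = {
    "pixel_color": lambda img, regs, redness_threshold: [{"seed": True}],
    "shape":       lambda img, regs, min_size, max_size: regs,
}
print(run_filter_chain(image=None, subfilters=filters, parameter_list=chain))
```
- Finally, it is remarked in the context of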
FIG. 3 that although the pixel locator and region segmenter 92, the shape analyzer 94 and the falsing analyzer 98 are illustrated as separate components it is not intended to exclude the possibility that subfilters from these components may be applied in out-of-order sequences. As an illustrative example, regions which pass all the falsing filters except for the region uniformity filter 98-6 may be returned to the resegmentation engine 92-6 to determine if the region was incorrectly segmented. Thus a subfilter from the pixel locator and region segmentor 92 may be used to add an additional capability to the falsing analysis 98. -
FIG. 4 shows in more detail the operation of the image analysis 130 and image compensation prefilters 135. In this example the operation of the compensation prefilter 135, and more particularly the operation of the image correction analyzer 135-2, has been separated into two functional modes: FIG. 4(a) illustrates the workflow for determining and performing corrective image processing (so corresponding generally to the corrective image processing steps of FIGS. 2(a), (b), (d) and (e)), while FIG. 4(b) describes the determining and performing of filter chain adaption, including determining if a single chain, or a combination of multiple filter chains, will compensate for the non-optimal image characteristics determined by the image analysis prefilter 130 (so corresponding generally to steps 250, 252 and 254 of FIGS. 2(c) and 2(d)). FIG. 4(c) illustrates an exemplary embodiment of the workflow of the image analysis prefilter 130. - In
FIG. 4(a) the image correction analyzer 135-2 first loads an image characteristic list 401 obtained from the image analysis prefilter 130. This list will allow the correction analyzer to quickly determine if a simple image correction is required or if a number of image characteristics will require correction 402. In the case of a single characteristic the correction analyzer can immediately apply the relevant corrective image processing 412 followed by some tests of the corrected image 414 to ensure that image quality is at least not deteriorated by the applied corrective technique. If these tests are passed 416 then the image can be passed on to the redeye filter 90 for eye defect correction. Otherwise, if corrective image processing has failed the sanity tests 416 then an additional test may be made to determine if filter chain adaption is possible 422. In this case the algorithm will initiate the workflow described in FIG. 4(b) for determining the required filter chain adaptions 450. If corrective image processing has failed 416 and filter chain adaption is not possible 422 then the correction analyzer will disable the redeye filter for this image 220, and provide a user indication to that effect 140 after which it will pass control back to the main in-camera application 224. Note that in certain embodiments the user indication may be interactive and may provide an option to allow the normal redeye filter process to proceed on the uncorrected image, or alternatively offer additional user-selectable choices for additional image analysis and/or correction strategies. - Now returning to the determining step between single and multiple image
characteristics requiring correction 402, we now describe the correction approach for multiple image characteristics. Typically an image which was non-optimally acquired will suffer from one major deficiency and a number of less significant deficiencies. We will refer to these as primary and secondary image deficiencies. The next step in the workflow process is to determine the primary image deficiency 404. After this has been successfully determined from the image characteristics list the next step is to determine the interdependencies between this primary correction required and said secondary image characteristics. Typically there will be more than one approach to correcting the primary image characteristic and the correction analyzer must next determine the effects of these alternative correction techniques on the secondary image characteristics 406 before correction can be initiated. If any of the secondary characteristics are likely to deteriorate significantly and all alternative correction techniques for the primary image characteristic are exhausted then the correction analyzer may determine that these interdependencies cannot be resolved 408. In the present embodiment an additional test is next made to determine if filter chain adaption is possible 422. In this case the algorithm will initiate the workflow described in FIG. 4(b) for determining the required filter chain adaptions 450. If corrective image processing has failed 416 and filter chain adaption is not possible 422 then the correction analyzer will disable the redeye filter for this image 220, and provide a user indication to that effect 140 after which it will pass control back to the main in-camera application 224. - Given that the secondary interdependencies can be resolved 408, the correction analyzer next proceeds to determine the
image processing chain 410. In certain embodiments this step may incorporate the determining of additional corrective techniques which can further enhance the primary correction technique which has been determined. In such an embodiment the correction analyzer will, essentially, loop back through the preceding determining steps. In practice, the determining step 408 will require access to a relatively complex knowledgebase 135-4. In the present embodiment this is implemented as a series of look-up-tables (LUTs) which may be embedded in the non-volatile memory of a digital camera. The content of the knowledgebase is highly dependent on (i) the image characteristics determined by the image analysis prefilter, (ii) the correction techniques available to the compensation prefilter and (iii) the camera within which the invention operates. Thus it will be evident to those skilled in the art that the knowledgebase will differ significantly from one embodiment to another. It is also desirable that said knowledgebase can be easily updated by a camera manufacturer and, to some extent, modified by an end-user. Thus various embodiments would store, or allow updating of, the knowledgebase from (i) a compact flash or other memory card; (ii) a USB link to a personal computer; (iii) a network connection for a networked/wireless camera and (iv) a mobile phone network for a camera which incorporates the functionality of a mobile phone. In other alternative embodiments, where the camera is networked, the knowledgebase may reside on a remote server and may respond to requests from the camera for the resolving of a certain set of correction interdependencies. - An example of image characteristics determined by the image analysis prefilter is a person or type of person recognised by the analyzer 130-9. Once a person or type of person has been recognized using the face recognition analyzer, 130-9, it is preferred to determine whether a customized redeye filter set is available and if it has been loaded onto the camera. If this data is not available, or if a person could not be recognized from a detected face, a generic filter set will be applied to the detected face region. If a person is recognized, the redeye filter will be modified according to a customised profile loaded on the camera and stored in the database 135-3. In general, this profile is based on an analysis of previous images of the recognised person or type of person and is designed to optimise both the detection and correction of redeye defects for the individual or type of person.
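- One way the interdependency knowledgebase 135-4 referred to above might be organised is as a set of look-up tables keyed by the primary image deficiency; the entries, names and structure below are purely illustrative assumptions and are not taken from the disclosure.

```python
# Hypothetical knowledgebase: for each primary deficiency, candidate
# correction techniques and the secondary characteristics each is known to
# degrade. All names and entries are illustrative assumptions.
KNOWLEDGEBASE = {
    "underexposure": [
        {"technique": "tone_curve_boost", "degrades": {"noise"}},
        {"technique": "digital_fill_flash", "degrades": {"white_balance"}},
    ],
    "red_color_cast": [
        {"technique": "channel_matrix_correction", "degrades": set()},
    ],
}

def resolve_interdependencies(primary, secondary_issues):
    """Pick a correction for the primary deficiency that does not worsen
    any of the out-of-tolerance secondary characteristics (cf. steps 404-408)."""
    for candidate in KNOWLEDGEBASE.get(primary, []):
        if not (candidate["degrades"] & set(secondary_issues)):
            return candidate["technique"]
    return None   # interdependencies cannot be resolved; try filter chain adaption

# e.g. resolve_interdependencies("underexposure", ["noise"]) skips the tone
# curve boost and selects the digital fill flash technique instead.
```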
- In particular, certain types of flash eye defects may completely destroy the iris color of an eye. This can generally not be restored by conventional image processing. However, if a simple model of a person's eye is available from the image correction knowledge base 135-4 which incorporates the appropriate geometric, dimensional and color information, then much improved systems and methods of redeye correction can be provided.
- Now once the corrective image processing chain has been determined it is applied to the
image 412 and a number of sanity checks are applied 412 to ensure that the image quality is not degraded by the correction process 416. If these tests fail then it may be that the determined interdependencies were marginal or that an alternative image processing strategy is still available 418. If this is so then the image processing chain is modified 420 and corrective image processing is reapplied 412. This loop may continue until all alternative image processing chains have been exhausted. It is further remarked that the entire image processing chain may not be applied each time. For example, if the difference between image processing chains is a single filter then a temporary copy of the input image to that filter is kept and said filter is simply reapplied with different parameter settings. If, however, step 418 determines that all corrective measures have been tried it will next move to step 422 which determines if filter chain adaption is possible. Now returning to step 416, if the corrective image processing is applied successfully then the image is passed on to the redeye filter 90. -
FIG. 4(b) describes an alternative embodiment of the correction analyzer 135-2 which determines if filter chain adaption is possible and then modifies the redeye filter appropriately. Initially the image characteristics list is loaded 401 and for each characteristic a set of filters which require adaption is determined 452. This is achieved through referencing the external database 135-3, and the comments and discussion provided in the context of the image correction knowledgebase 135-4 apply equally here. - Now once the filter lists for each image characteristic have been determined the correction analyzer must determine which filters overlap a plurality of
image characteristics 454 and, additionally, determine if there are conflicts between the filter adaptions required for each of the plurality of image characteristics 456. If such conflicts exist the correction analyzer must next decide if they can be resolved 460. To provide a simple illustrative example we consider two image characteristics which both require an adaption of the threshold of the main redness filter in order to compensate for the measured non-optimality of each. If the first characteristic requires a lowering of the redness threshold by, say, 10% and the second characteristic requires a lowering of the same threshold by, say, 15% then the correction analyzer must next determine from the knowledgebase the result of compensating for the first characteristic with a lowered threshold of 15% rather than the initially requested 10%. Such an adjustment will normally be an inclusive one and the correction analyzer may determine that the conflict can be resolved by adapting the threshold of the main redness filter to 15%. However it might also determine that the additional 5% reduction in said threshold will lead to an unacceptable increase in false positives during the redeye filtering process and that this particular conflict cannot be simply resolved. - If such filter conflicts cannot be simply resolved an alternative is to determine if they are separable 466. If they are separable that implies that two distinct redeye filter processes can be run with different filter chains and the results of the two detection processes can be merged prior to correcting the defects. In the case of the example provided above this implies that one detection process would be run to compensate for a first image characteristic with a threshold of 10% and a second detection process will be run for the second image characteristic with a threshold of 15%. The results of the two detection processes will then be combined in either an exclusive or an inclusive manner depending on the separability determination obtained from the subfilter database 135-3. In embodiments where a face recognition module 130-9 is employed, a separate detection process may be determined and selectively applied to the image for each known person.
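- The two separable detection passes in the example above might be combined along the following lines; the detection function and base threshold are assumptions made for illustration, with the inclusive merge taking the union of the two result sets and the exclusive merge their intersection.

```python
def run_separable_detections(image, detect, base_threshold=0.6,
                             adjustments=(0.10, 0.15), inclusive=True):
    """Run one detection pass per conflicting adjustment and merge the results.

    `detect(image, threshold)` is assumed to return a set of candidate region
    identifiers. An inclusive merge keeps regions found by either pass (union);
    an exclusive merge keeps only regions confirmed by both (intersection).
    """
    results = [detect(image, base_threshold * (1.0 - adj)) for adj in adjustments]
    merged = results[0]
    for extra in results[1:]:
        merged = merged | extra if inclusive else merged & extra
    return merged
```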
- Returning to step 460, we see that if filter conflicts can be resolved, the correction analyzer will prepare a single filter
chain parameter list 462 which will then be loaded 464 to the filter chain adapter 99 of the redeye filter 90 illustrated in FIG. 3. Alternatively, if filter conflicts cannot be resolved, but are determined to be separable 466 the correction analyzer prepares a number of parameter lists 468 for the filter chain adapter which are then loaded 464 as in the previous case. The redeye filter is then applied 90. - However, if filter conflicts cannot be resolved and are not separable the correction analyzer will then make a determination if image processing compensation might be possible 422. If so then the image processing compensation workflow of
FIG. 4(a) may be additionally employed 400. If it is determined that image processing compensation is not possible then the correction analyzer will disable the redeye filter for this image 220, and provide a user indication to that effect 140 after which it will pass control back to the main in-camera application 224. -
FIG. 4(c) describes the workflow of the image analysis prefilter 130 illustrated in FIG. 1(b). This performs an image processing analysis of at least one image characteristic according to at least one of a plurality of image processing techniques. Preferably, the output of this analysis should be a simple measure of goodness of the analyzed image characteristic. For the purposes of an exemplary discussion we suppose that said measure is a percentage of the optimum for said characteristic. Thus 100% represents perfect quality for the measured image characteristic; values above 95% represent negligible image distortions/imperfections in said characteristic; values above 85% represent noticeable, but easily correctable distortions/imperfections and values above 60% represent major distortions/imperfections which require major image processing to correct the image characteristic. Values below 60% imply that the image is too badly distorted to be correctable. - The first step in this workflow is to load or, if it is already loaded in memory, to access the image to be analyzed. The analysis prefilter next analyzes a first characteristic of said
image 482 and determines a measure of goodness. Now if said characteristic is above a first threshold (95%) 486 then it is marked as not requiring corrective measures 487 in the characteristic list. If it is below said first threshold, but above a second threshold (85%) 488 then it is marked as requiring secondary corrective measures 489. If it is below said second threshold, but above a third threshold (60%) 490 then it is marked as requiring primary corrective measures 491 and if below said third threshold 492 it is marked as uncorrectable 493. Now it is remarked that for some embodiments of the present invention which combine corrective image processing with filter chain adaption there may be two distinct sets of thresholds, one relating to the correctability using image processing techniques and the second relating to the degree of compensation possible using filter chain adaption. We further remark that, for image compensation through filter chain adaption, certain filters may advantageously scale their input parameters directly according to the measure of goodness of certain image characteristics. As an illustrative example consider the redness threshold of the main color filter which, over certain ranges of values, may be scaled directly according to a measure of excessive "redness" in the color balance of a non-optimally acquired image. Thus, the image characteristic list may additionally include the raw measure of goodness of each image characteristic. In an alternative embodiment only the raw measure of goodness will be exported from the image analysis prefilter 130 and the threshold based determining of FIG. 4(c) will be performed within the correction analyzer 135-2, in which case threshold values may be determined from the image correction knowledgebase 135-4.
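- The threshold-based marking of FIG. 4(c) can be summarised in a short sketch; the percentage boundaries are those given above, while the characteristic names and values are illustrative assumptions.

```python
def classify_characteristic(goodness_percent):
    """Map a goodness measure (% of optimum) to the markings used in FIG. 4(c)."""
    if goodness_percent > 95:
        return "no corrective measures required"   # 487
    if goodness_percent > 85:
        return "secondary corrective measures"     # 489
    if goodness_percent > 60:
        return "primary corrective measures"       # 491
    return "uncorrectable"                         # 493

# Usage sketch with assumed characteristic names and goodness values.
characteristics = {"white_balance": 92.0, "blur": 71.5, "noise": 55.0}
characteristic_list = {name: (value, classify_characteristic(value))
                       for name, value in characteristics.items()}
print(characteristic_list)
```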
- Now the main loop continues by determining if the currently analyzed characteristic is the last image characteristic to be analyzed 496. If not it returns to analyzing the
next image characteristic 482. If it is the last characteristic it then passes the image characteristics list to the image compensation prefilter 494 and returns control to the main camera application 224. It should be remarked that in certain embodiments a plurality of image characteristics may be grouped together and analyzed concurrently, rather than on a one-by-one basis. This may be preferable if several image characteristics have significant overlap in the image processing steps required to evaluate them. It may also be preferable where a hardware co-processor or DSP unit is available as part of the camera hardware and it is desired to batch run or parallelize the computing of image characteristics on such hardware subsystems. - A third principal embodiment of the present invention has already been briefly described. This is the use of a global pixel-level transformation of the image within the redeye filter itself and relies on the corrective image processing, as determined by the correction analyzer 135-2, being implementable as a global pixel-level transformation of the image. Those skilled in the art will realize that such a requirement implies that certain of the image analyzer elements which comprise the
image analysis prefilter 130 are not relevant to this embodiment. For example, dust analysis, object/region analysis, noise analysis and certain forms of image blur cannot be corrected by such transformations. However, many other image characteristics are susceptible to such transformations. Further, we remark that this alternative embodiment may be combined with the other two principal embodiments of the invention to complement each other. - In
FIG. 5(a) we illustrate an exemplary embodiment of the red pixel locating and red region segmenting workflow which occurs within the redeye filter as steps 92-1 and 92-2. This workflow has been modified to incorporate a global pixel-level transformation 92-0 of the image as an integral element of the color determining and region grouping steps of the redeye filter. It is implicit in this embodiment that the correction analyzer has determined that a global pixel level transformation can achieve the required image compensation. The image to be processed by the redeye filter is first loaded 502 and the labeling LUT for the region grouping process is initialized 504. Next the current pixel and pixel neighbourhoods are initialized 506. -
FIG. 5(b) shows a diagrammatic representation of a 4-pixel neighborhood 562, shaded light gray in the figure and containing the three upper pixels and the pixel to the left of the current pixel 560, shaded dark gray in the figure. This 4-pixel neighborhood is used in the labeling algorithm of this exemplary embodiment. A look-up table, LUT, is defined to hold correspondence labels. - Returning to step 506 we see that after initialization is completed the next step for the workflow of
FIG. 5(a) is to begin a recursive iteration through all the pixels of an image in a raster-scan from top-left to bottom-right. The first operation on each pixel is to apply the global pixel transformation 508. It is assumed that the loaded image is an RGB bitmap and the global pixel transformation is of the form: P(R,G,B)→P(R′,G′,B′), where the red, green and blue values of the current pixel, P(R,G,B), are mapped to a shifted set of color space values, P(R′,G′,B′). There are a number of advantages in performing this corrective transformation at the same time as the color determining and pixel grouping. In particular it is easier to optimize the computational performance of the algorithm which is important for in-camera implementations. Following step 508 the workflow next determines if the current pixel satisfies membership criteria for a candidate redeye region 510. Essentially this implies that the current pixel has color properties which are compatible with an eye defect; this does not necessarily imply that the pixel is red as a range of other colors can be associated with flash eye defects. If the current pixel satisfies membership criteria for a segment 510, i.e., if it is sufficiently "red", then the algorithm checks for other "red" pixels in the 4-pixel neighborhood 512. If there are no other "red" pixels, then the current pixel is assigned membership of the current label 530. The LUT is then updated 532 and the current label value is incremented 534. If there are other "red" pixels in the 4-pixel neighborhood then the current pixel is given membership in the segment with the lowest label value 514 and the LUT is updated accordingly 516. After the current pixel has been labeled as part of a "red" segment 512 or 530, or has been categorized as "non-red" during step 510, a test is then performed to determine if it is the last pixel in the image 518. If the current pixel is the last pixel in the image then a final update of the LUT is performed 540. Otherwise the next image pixel is obtained by incrementing the current pixel pointer 520 and returning to step 508 and is processed in the same manner. Once the final image pixel is processed and the final LUT completed 540, all of the pixels with segment membership are sorted into a labeled-segment table of potential red-eye segments 542. - With regard to the exemplary details of corrective image processing 135-1 which may be employed in the present invention we remark that a broad range of techniques exist for automatic or semi-automatic image correction and enhancement. For ease of discussion we can group these into 6 main subcategories as follows: (i) Contrast Normalization and Image Sharpening. (ii) Image Color Adjustment and Tone Reproduction Scaling. (iii) Exposure Adjustment and Digital Fill Flash. (iv) Brightness Adjustment with Color Space Matching; Image Auto-Gamma determination with Image Enhancement. (v) In-Camera Image Enhancement. (vi) Face Based Image Enhancement.
- All Categories may be Global Correction or Local Region Based.
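- Referring back to the raster-scan labeling workflow of FIG. 5(a) and the 4-pixel neighbourhood of FIG. 5(b) described above, the following is a compact sketch of that labeling scheme with the global pixel transformation 92-0 folded into the membership test; the transform and redness test are supplied by the caller, and the overall structure is an illustrative assumption rather than the patented implementation.

```python
import numpy as np

def label_red_segments(rgb_image, transform, is_red):
    """Raster-scan labeling over a 4-pixel neighbourhood with an inline transform.

    `transform(r, g, b)` plays the role of the global pixel-level
    transformation 92-0 and `is_red(r, g, b)` the membership test 510.
    """
    h, w, _ = rgb_image.shape
    labels = np.zeros((h, w), dtype=np.int32)     # 0 = not part of any segment
    lut = {0: 0}                                  # correspondence label LUT
    next_label = 1
    for y in range(h):
        for x in range(w):
            r, g, b = transform(*(int(v) for v in rgb_image[y, x]))   # step 508
            if not is_red(r, g, b):                                   # step 510
                continue
            # labels of the three upper neighbours and the left neighbour (562)
            neighbours = [labels[y - 1, x - 1] if y and x else 0,
                          labels[y - 1, x] if y else 0,
                          labels[y - 1, x + 1] if y and x + 1 < w else 0,
                          labels[y, x - 1] if x else 0]
            neighbours = [n for n in neighbours if n]
            if not neighbours:                                        # steps 530/534
                labels[y, x] = next_label
                lut[next_label] = next_label
                next_label += 1
            else:                                                     # steps 514/516
                smallest = min(neighbours)
                labels[y, x] = smallest
                for n in neighbours:
                    lut[n] = min(lut[n], smallest)
    for lab in sorted(lut):            # final LUT update 540: collapse label chains
        lut[lab] = lut[lut[lab]]
    return np.vectorize(lut.get)(labels), lut

# Usage sketch with an identity transform and a naive redness test.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = (200, 40, 40)
labels, lut = label_red_segments(img,
                                 transform=lambda r, g, b: (r, g, b),
                                 is_red=lambda r, g, b: r > 150 and r > 2 * g)
print(labels)
```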
- U.S. Pat. No. 6,421,468 to Ratnakar et al. discloses sharpening an image by transforming the image representation into a frequency-domain representation and by selectively applying scaling factors to certain frequency domain characteristics of an image. The modified frequency domain representation is then back-transformed into the spatial domain and provides a sharpened version of the original image. U.S. Pat. No. 6,393,148 to Bhaskar discloses automatic contrast enhancement of an image by increasing the dynamic range of the tone levels within an image without causing distortion or shifts to the color map of said image.
- US patent application 2002/0105662 to Patton et al. discloses modifying a portion of an image in accordance with colorimetric parameters. More particularly it discloses the steps of (i) identifying a region representing skin tone in an image; (ii) displaying a plurality of renderings for said skin tone; (iii) allowing a user to select one of said renderings and (iv) modifying the skin tone regions in the images in accordance with the rendering of said skin tone selected by the user. U.S. Pat. No. 6,438,264 to Gallagher et al. discloses compensating image color when adjusting the contrast of a digital color image including the steps of (i) receiving a tone scale function; (ii) calculating a local slope of the tone scale function for each pixel of the digital image; (iii) calculating a color saturation signal from the digital color image and (iv) adjusting the color saturation signal for each pixel of the color image based on the local tone scale slope. The image enhancements of Gallagher et al. are applied to the entire image and are based on a global tone scale function. Thus this technique may be implemented as a global pixel-level color space transformation. U.S. Pat. No. 6,249,315 to Holm teaches how a spatially blurred and sub-sampled version of an original image can be used to obtain statistical characteristics of a scene or original image. This information is combined with the tone reproduction curves and other characteristics of an output device or media to provide an enhancement strategy for optimized output of a digital image. All of this processing can be performed automatically, although Holm also allows for simple, intuitive manual adjustment by a user.
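- As a hypothetical illustration of the kind of global pixel-level color space transformation referred to above (and not a reproduction of any cited method), a per-channel lookup table or a 3×3 colour matrix might be applied as follows; the gain values are assumptions chosen purely for the example.

```python
import numpy as np

def build_red_cast_lut(red_gain=0.9):
    """Per-channel lookup table attenuating an assumed reddish colour cast."""
    identity = np.arange(256, dtype=np.float32)
    return np.stack([np.clip(identity * red_gain, 0, 255),   # R -> R'
                     identity,                               # G unchanged
                     identity], axis=1).astype(np.uint8)

def apply_lut(rgb_image, lut):
    """Apply a 256x3 per-channel LUT to an HxWx3 uint8 image."""
    out = np.empty_like(rgb_image)
    for c in range(3):
        out[..., c] = lut[rgb_image[..., c], c]
    return out

def apply_color_matrix(rgb_image, matrix):
    """Global correction expressed as a 3x3 matrix multiplication."""
    flat = rgb_image.reshape(-1, 3).astype(np.float32)
    corrected = np.clip(flat @ matrix.T, 0, 255)
    return corrected.reshape(rgb_image.shape).astype(np.uint8)
```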
- US patent application 2003/0052991 to Stavely et al. discloses simulating fill flash in digital photography. In Stavely a digital camera shoots a series of photographs of a scene at various focal distances. These pictures are subsequently analyzed to determine the distances to different objects in the scene. Regions of these pictures then have their brightness selectively adjusted based on the aforementioned distance calculations and are combined to form a single photographic image. US patent application 2001/0031142 to Whiteside is concerned with a scene recognition method and a system using brightness and ranging mapping. It uses auto-ranging and brightness measurements to adjust image exposure to ensure that both background and foreground objects are correctly illuminated in a digital image. Much of the earlier prior art is focused on the application of corrections and enhancements to the entire image, rather than to selected regions of an image, and thus discusses the correction of image exposure and tone scale as opposed to fill flash. Example patents include U.S. Pat. No. 6,473,199 to Gilman et al., which describes a method for correcting exposure in a digital image that includes providing a plurality of exposure- and tone-scale-correcting nonlinear transforms, selecting the appropriate nonlinear transform from the plurality of nonlinear transforms, and transforming the digital image to produce a new digital image which is corrected for exposure and tone scale. U.S. Pat. No. 5,991,456 to Rahman et al. describes a method of improving a digital image. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value Ii(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band. Each surround function Fn(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation. However, these approaches do not teach the concept of regional analysis and regional adjustment of image intensity or exposure levels. U.S. Pat. No. 5,818,975 to Goodwin et al. teaches area-selective exposure adjustment. Goodwin describes how a digital image can have the dynamic range of its scene brightness reduced to suit the available dynamic brightness range of an output device by separating the scene into two regions: one with a high brightness range and one with a low brightness range. A brightness transform is derived for each region to reduce the brightness of the first region and to boost the brightness of the second region; the two regions are then recombined to form an enhanced version of the original image for the output device. This technique is analogous to an early implementation of digital fill flash. Another example is U.S. Pat. No. 5,724,456 to Boyack et al., which teaches brightness adjustment of images using digital scene analysis. Boyack partitions the image into blocks and larger groups of blocks, known as sectors, and determines an average luminance value for each block. A difference is determined between the maximum and minimum block values for each sector. If this difference exceeds a pre-determined threshold the sector is marked active.
A histogram of weighted counts of active sectors against average luminance sector values is then plotted, and the histogram is shifted, using pre-determined criteria, so that the average luminance sector values of interest fall within a destination window corresponding to the tonal reproduction capability of a destination application or output device.
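- The block/sector analysis summarized above might be sketched as follows, under several simplifying assumptions (block size, sector size, activity threshold and histogram binning are placeholders, and the final shift of the histogram into a destination window is omitted):

```python
# Illustrative block/sector luminance analysis in the spirit of the scheme
# described above; all sizes and thresholds are placeholders.
import numpy as np

def active_sector_histogram(gray, block=16, sector=4, spread_thresh=40, bins=32):
    """gray: 2-D array in [0, 255]. Returns a histogram of active-sector mean luminances."""
    h, w = gray.shape
    by, bx = h // block, w // block
    # Average luminance of each block.
    blocks = gray[:by * block, :bx * block].reshape(by, block, bx, block).mean(axis=(1, 3))
    hist = np.zeros(bins)
    for sy in range(0, by - sector + 1, sector):           # sectors are groups of blocks
        for sx in range(0, bx - sector + 1, sector):
            sec = blocks[sy:sy + sector, sx:sx + sector]
            if sec.max() - sec.min() > spread_thresh:      # mark the sector "active"
                hist[int(sec.mean() / 256 * bins)] += 1    # weighted count (weight = 1 here)
    return hist
```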
- Another area of image enhancement in the prior art relates to brightness adjustment and color matching between color spaces. For example, U.S. Pat. No. 6,459,436 to Kumada et al. describes transforming image data from device-dependent color spaces to device-independent Lab color spaces and back again. Image data is initially captured in a color space representation which is dependent on the input device. This is subsequently converted into a device-independent color space. Gamut mapping (hue restoration) is performed in the device-independent color space and the image data may then be mapped back to a second device-dependent color space. U.S. Pat. No. 6,268,939 to Klassen et al. is also concerned with correcting luminance and chrominance data in digital color images. More specifically, Klassen is concerned with optimizing the transformations between device-dependent and device-independent color spaces by applying sub-sampling of the luminance and chrominance data. Another patent in this category is U.S. Pat. No. 6,192,149 to Eschbach et al., which discloses improving the quality of a printed image by automatically determining the image gamma and then adjusting the gamma of a printer to correspond to that of the image. Although Eschbach is concerned with enhancing the printed quality of a digital image and not the digital image itself, it does teach a means for automatically determining the gamma of a digital image. This information could be used to directly adjust image gamma, or used as a basis for applying other enhancements to the original digital image. U.S. Pat. No. 6,101,271 to Yamashita et al. discloses implementing a gradation correction to an RGB image signal which allows image brightness to be adjusted without affecting the image hue and saturation.
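- As an illustration of automatic image-gamma determination of the kind mentioned above (a crude heuristic, not Eschbach's algorithm), one can pick the gamma that maps the image's mean luminance toward mid-grey and then apply, or pass on, the corresponding gradation correction:

```python
# Crude illustrative auto-gamma estimate and gradation correction; the
# mid-grey target and clipping range are placeholders.
import numpy as np

def estimate_gamma(gray):
    """Pick a gamma that maps the mean of a [0, 255] image toward mid-grey."""
    mean = np.clip(gray.mean() / 255.0, 0.01, 0.99)
    return np.log(0.5) / np.log(mean)

def apply_gamma(gray, gamma):
    """Simple gradation correction; could equally be handed off to a printer driver."""
    return 255.0 * (np.asarray(gray, dtype=float) / 255.0) ** gamma
```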
- U.S. Pat. No. 6,516,154 to Parulski et al. discloses suggesting improvements to a digital image after it has been captured by a camera. The user may crop, re-size or adjust color balance before saving a picture; alternatively the user may choose to re-take a picture using different settings on the camera. The suggestion of improvements is made by the camera user-interface. However Parulski does not teach the use of image analysis and corrective image processing to automatically initiate in-camera corrective actions upon an acquired digital image.
- In US patent application 2002/0172419, Lin et al. disclose automatically improving the appearance of faces in images based on automatically detecting such faces in the digital image. Lin describes modification of lightness contrast and color levels of the image to produce better results.
- Additional methods of face-based image enhancement are described in co-pending U.S. application Ser. No. 11/024,046, which is hereby incorporated by reference.
- Any of the embodiments described herein may be combined in whole or in part with any of the features described in the following in alternative embodiments.
- There is provided in certain embodiments a method and apparatus for red-eye detection in an acquired digital image as claimed in the appended claims.
- Certain embodiments compensate for sub-optimally acquired images where degradations in the acquired image may affect the correct operation of redeye detection, prior to or in conjunction with applying the detection and correction stage.
- Certain embodiments improve the overall success rate and reduce the false positive rate of red-eye detection and reduction by compensating for non-optimally acquired images: image analysis is performed on the acquired image, and corrective image processing is determined and applied based on said image analysis prior to, or in conjunction with, applying one or more redeye detection filters to the acquired image. Such corrections or enhancements may include applying global or local color space conversion, exposure compensation, noise reduction, sharpening, blurring or tone reproduction transformations.
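- A high-level, non-authoritative sketch of how such analysis-driven correction might be sequenced ahead of the red-eye filter(s); the analysis statistics, thresholds and corrections below are invented for illustration:

```python
# Illustrative orchestration: analyze the acquired image, apply any indicated
# corrective processing, then run the red-eye detection filter(s).
# All thresholds, corrections and helper names are hypothetical.
import numpy as np

def prefilter_then_detect(image, detect_redeye):
    """image: uint8 RGB array; detect_redeye: any callable taking a corrected uint8 image."""
    analysis = {
        "mean_luma": float(image.mean()),
        "color_cast": float(image[..., 0].mean() - image[..., 2].mean()),   # R minus B
    }
    corrected = image.astype(float)
    if analysis["mean_luma"] < 60:                     # under-exposed: simple exposure boost
        corrected = np.clip(corrected * 1.5, 0, 255)
    if abs(analysis["color_cast"]) > 25:               # strong cast: grey-world style balance
        corrected *= corrected.mean() / corrected.mean(axis=(0, 1), keepdims=True)
        corrected = np.clip(corrected, 0, 255)
    return detect_redeye(corrected.astype(np.uint8)), analysis
```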
- In certain embodiments, image analysis is performed on a sub-sampled copy of the main acquired image where possible, which enhances the performance of this invention inside devices with limited computational capability, such as hand-held devices and, in particular, digital cameras or printers.
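- A minimal sketch of performing the analysis on a sub-sampled copy of the main image; the stride-based decimation and the particular statistics gathered are examples only:

```python
# Illustrative sub-sampled analysis: decimate the main image and gather the
# statistics used to choose corrections, keeping the cost low on-device.
import numpy as np

def analyze_subsampled(image, target_width=320):
    """Gather correction-steering statistics from a decimated copy of the main image."""
    step = max(1, image.shape[1] // target_width)
    thumb = image[::step, ::step].astype(float)        # cheap decimation, no pre-filtering
    return {
        "mean_luma": float(thumb.mean()),
        "contrast": float(thumb.std()),
        "red_bias": float(thumb[..., 0].mean() / max(thumb[..., 2].mean(), 1.0)),
    }
```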
- In certain embodiments, the pre-filtering process is optimized by applying the image transformations at the pixel level during the redeye detection process when possible, as determined from the image analysis, thus compensating for non-optimally acquired images without requiring that corrective image processing be applied to the full-resolution image.
- In certain embodiments, the redeye filter chain is configured for optimal performance based on image analysis of an acquired image, to enhance the execution of the red-eye detection and reduction process. Such configuration takes the form of variable parameters for the algorithm and variable ordering and selection of sub-filters in the process.
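- One way to picture a red-eye filter chain whose parameters, sub-filter selection and ordering are driven by the image-analysis results; the sub-filter names and configuration rules below are hypothetical:

```python
# Illustrative configurable red-eye filter chain: sub-filters are selected,
# ordered and parameterized from the image-analysis results.
def configure_filter_chain(analysis, filters):
    """filters: dict of name -> callable(region, **params). Returns an ordered chain."""
    chain = [("color_test", {"red_thresh": 1.6 if analysis["mean_luma"] < 60 else 2.0})]
    if analysis.get("contrast", 0) > 30:                # add an edge test for busy scenes
        chain.append(("edge_test", {"min_gradient": 8}))
    chain.append(("geometry_test", {"min_roundness": 0.4}))
    if analysis.get("red_bias", 1.0) > 1.3:             # reddish cast: run the color test last
        chain.append(chain.pop(0))
    return [(filters[name], params) for name, params in chain if name in filters]

def run_chain(region, chain):
    """A candidate region survives only if every configured sub-filter accepts it."""
    return all(f(region, **params) for f, params in chain)
```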
- Certain embodiments operate uniformly on both the pixels which are members of a defect and those of its bounding region, thus avoiding the need to determine individually whether pixels in the neighborhood of said defect are members of the defect and to subsequently apply correcting algorithms to such pixels on an individual basis.
- Using certain embodiments, variables that could significantly affect the success of the red-eye detection algorithm, such as noise, color shifts, incorrect exposure, blur, over-sharpening, etc., may be pre-eliminated before performing the detection process, thus improving the success rate.
- Alternatively, or in addition, these variables may be pre-accounted for by changing the parameters of the detection process, thus improving the performance and the success rate.
- An advantage provided herein is that by bringing images to a known and better-defined image quality, the criteria for detection can be tightened and narrowed down, thus providing higher accuracy both in positive detection and in the reduction of false detections.
- A further advantage provided herein is that by accounting for the causes of sub-optimal image quality, the parameters for the detection and correction algorithm may be modified, thus providing higher accuracy both in positive detection and in the reduction of false detections without the need to modify the image.
- An additional advantage provided herein is that misclassification of pixels and regions belonging to defect areas is reduced, if not altogether avoided, which means a reduction in undetected true positives.
- An additional advantage provided herein is that color misclassification of pixels and regions belonging to non-defect areas is reduced, if not avoided, which means a reduction in false positives.
- A further advantage provided herein is that certain embodiments can be implemented to run sufficiently fast and accurately to allow individual images in a batch to be analyzed and corrected in real-time prior to printing.
- Yet a further advantage of preferred embodiments of the present invention is that they have a sufficiently low requirement for computing power and memory resources to allow them to be implemented inside digital cameras as part of the post-acquisition processing step.
- Yet a further advantage provided herein is that certain embodiments have a sufficiently low requirement for computing power and memory resources to allow them to be implemented as a computer program on a hand-held personal digital assistant (PDA), mobile phone or other digital appliance suitable for picture display.
- A further advantage provided herein is that certain embodiments are not limited in their detection of red-eye defects by requirements for clearly defined skin regions matching a human face.
- A further advantage provided herein is the ability to concatenate image quality transformations and red eye detection to improve overall performance.
- The present invention is not limited to the embodiments described above herein, which may be amended or modified without departing from the scope of the present invention as set forth in the appended claims, and structural and functional equivalents thereof.
- In methods that may be performed according to preferred embodiments herein and that may have been described above and/or claimed below, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.
- In addition, all references cited above herein, in addition to the background and summary of the invention sections, are hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components.
Claims (21)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/947,731 US7953251B1 (en) | 2004-10-28 | 2010-11-16 | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images |
US13/113,648 US8135184B2 (en) | 2004-10-28 | 2011-05-23 | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/976,336 US7536036B2 (en) | 2004-10-28 | 2004-10-28 | Method and apparatus for red-eye detection in an acquired digital image |
US11/123,971 US7436998B2 (en) | 2004-10-28 | 2005-05-06 | Method and apparatus for red-eye detection in an acquired digital image based on image quality pre and post filtering |
US11/182,718 US20060093238A1 (en) | 2004-10-28 | 2005-07-15 | Method and apparatus for red-eye detection in an acquired digital image using face recognition |
US11/233,513 US7587085B2 (en) | 2004-10-28 | 2005-09-21 | Method and apparatus for red-eye detection in an acquired digital image |
US94555807P | 2007-06-21 | 2007-06-21 | |
US12/142,134 US8320641B2 (en) | 2004-10-28 | 2008-06-19 | Method and apparatus for red-eye detection using preview or other reference images |
US12/947,731 US7953251B1 (en) | 2004-10-28 | 2010-11-16 | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/142,134 Continuation US8320641B2 (en) | 2004-10-28 | 2008-06-19 | Method and apparatus for red-eye detection using preview or other reference images |
US13/113,648 Continuation US8135184B2 (en) | 2004-10-28 | 2011-05-23 | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/113,648 Continuation US8135184B2 (en) | 2004-10-28 | 2011-05-23 | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110122297A1 true US20110122297A1 (en) | 2011-05-26 |
US7953251B1 US7953251B1 (en) | 2011-05-31 |
Family
ID=46330309
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/142,134 Active 2028-02-06 US8320641B2 (en) | 2004-10-28 | 2008-06-19 | Method and apparatus for red-eye detection using preview or other reference images |
US12/947,731 Active US7953251B1 (en) | 2004-10-28 | 2010-11-16 | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images |
US13/113,648 Active US8135184B2 (en) | 2004-10-28 | 2011-05-23 | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/142,134 Active 2028-02-06 US8320641B2 (en) | 2004-10-28 | 2008-06-19 | Method and apparatus for red-eye detection using preview or other reference images |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/113,648 Active US8135184B2 (en) | 2004-10-28 | 2011-05-23 | Method and apparatus for detection and correction of multiple image defects within digital images using preview or other reference images |
Country Status (1)
Country | Link |
---|---|
US (3) | US8320641B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090196466A1 (en) * | 2008-02-05 | 2009-08-06 | Fotonation Vision Limited | Face Detection in Mid-Shot Digital Images |
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US20130057926A1 (en) * | 2006-10-17 | 2013-03-07 | Samsung Electronics Co., Ltd | Image compensation in regions of low image contrast |
US20130259322A1 (en) * | 2012-03-31 | 2013-10-03 | Xiao Lin | System And Method For Iris Image Analysis |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
Families Citing this family (92)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US7792970B2 (en) | 2005-06-17 | 2010-09-07 | Fotonation Vision Limited | Method for establishing a paired connection between media devices |
US7970182B2 (en) | 2005-11-18 | 2011-06-28 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7269292B2 (en) | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US8369650B2 (en) | 2003-09-30 | 2013-02-05 | DigitalOptics Corporation Europe Limited | Image defect map creation using batches of digital images |
JP4121026B2 (en) | 2004-01-21 | 2008-07-16 | 富士フイルム株式会社 | Imaging apparatus and method, and program |
US8320641B2 (en) | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
US7965875B2 (en) | 2006-06-12 | 2011-06-21 | Tessera Technologies Ireland Limited | Advances in extending the AAM techniques from grayscale to color images |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
JP5547730B2 (en) | 2008-07-30 | 2014-07-16 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Automatic facial and skin beautification using face detection |
US8077926B2 (en) * | 2009-01-14 | 2011-12-13 | Himax Technologies Limited | Method of motion detection using adaptive threshold |
EP2421249A4 (en) * | 2009-04-16 | 2013-02-20 | Panasonic Corp | Imaging device, external flash detection method, program, and integrated circuit |
US8594383B2 (en) * | 2009-05-27 | 2013-11-26 | Hewlett-Packard Development Company, L.P. | Method and apparatus for evaluating printed images |
WO2010141337A2 (en) * | 2009-06-03 | 2010-12-09 | Kla-Tencor Corporation | Adaptive signature detection |
US8670597B2 (en) | 2009-08-07 | 2014-03-11 | Google Inc. | Facial recognition with social network aiding |
US9087059B2 (en) | 2009-08-07 | 2015-07-21 | Google Inc. | User interface for presenting search results for multiple regions of a visual query |
US9135277B2 (en) | 2009-08-07 | 2015-09-15 | Google Inc. | Architecture for responding to a visual query |
US8811742B2 (en) | 2009-12-02 | 2014-08-19 | Google Inc. | Identifying matching canonical documents consistent with visual query structural information |
US9183224B2 (en) * | 2009-12-02 | 2015-11-10 | Google Inc. | Identifying matching canonical documents in response to a visual query |
US9176986B2 (en) | 2009-12-02 | 2015-11-03 | Google Inc. | Generating a combination of a visual query and matching canonical document |
US20110128288A1 (en) * | 2009-12-02 | 2011-06-02 | David Petrou | Region of Interest Selector for Visual Queries |
US8805079B2 (en) | 2009-12-02 | 2014-08-12 | Google Inc. | Identifying matching canonical documents in response to a visual query and in accordance with geographic information |
US9405772B2 (en) * | 2009-12-02 | 2016-08-02 | Google Inc. | Actionable search results for street view visual queries |
US8977639B2 (en) | 2009-12-02 | 2015-03-10 | Google Inc. | Actionable search results for visual queries |
US9852156B2 (en) | 2009-12-03 | 2017-12-26 | Google Inc. | Hybrid use of location sensor data and visual query to return local listings for visual query |
US8339471B2 (en) | 2009-12-31 | 2012-12-25 | DigitalOptics Corporation Europe Limited | Auto white balance algorithm using RGB product measure |
US20110216157A1 (en) | 2010-03-05 | 2011-09-08 | Tessera Technologies Ireland Limited | Object Detection and Rendering for Wide Field of View (WFOV) Image Acquisition Systems |
JP4875762B2 (en) * | 2010-05-26 | 2012-02-15 | シャープ株式会社 | Image processing apparatus, image display apparatus, and image pickup apparatus |
US8830360B1 (en) * | 2010-08-25 | 2014-09-09 | Sri International | Method and apparatus for optimizing image quality based on scene content |
US9389774B2 (en) | 2010-12-01 | 2016-07-12 | Sony Corporation | Display processing apparatus for performing image magnification based on face detection |
TWI455043B (en) * | 2011-03-01 | 2014-10-01 | Hon Hai Prec Ind Co Ltd | System and method for avoiding to capture red eye images using camera |
US8860816B2 (en) | 2011-03-31 | 2014-10-14 | Fotonation Limited | Scene enhancements in off-center peripheral regions for nonlinear lens geometries |
US8896703B2 (en) | 2011-03-31 | 2014-11-25 | Fotonation Limited | Superresolution enhancment of peripheral regions in nonlinear lens geometries |
US8687840B2 (en) * | 2011-05-10 | 2014-04-01 | Qualcomm Incorporated | Smart backlights to minimize display power consumption based on desktop configurations and user eye gaze |
WO2012169174A1 (en) * | 2011-06-08 | 2012-12-13 | パナソニック株式会社 | Image processing device and image processing method |
JP5713885B2 (en) * | 2011-12-26 | 2015-05-07 | キヤノン株式会社 | Image processing apparatus, image processing method, program, and storage medium |
US9294667B2 (en) | 2012-03-10 | 2016-03-22 | Digitaloptics Corporation | MEMS auto focus miniature camera module with fixed and movable lens groups |
JP5833049B2 (en) * | 2012-05-30 | 2015-12-16 | 富士フイルム株式会社 | Image processing method, image processing apparatus, and image processing program |
US8935246B2 (en) | 2012-08-08 | 2015-01-13 | Google Inc. | Identifying textual terms in response to a visual query |
US9001268B2 (en) | 2012-08-10 | 2015-04-07 | Nan Chang O-Film Optoelectronics Technology Ltd | Auto-focus camera module with flexible printed circuit extension |
US9007520B2 (en) | 2012-08-10 | 2015-04-14 | Nanchang O-Film Optoelectronics Technology Ltd | Camera module with EMI shield |
US9242602B2 (en) | 2012-08-27 | 2016-01-26 | Fotonation Limited | Rearview imaging systems for vehicle |
US9633263B2 (en) * | 2012-10-09 | 2017-04-25 | International Business Machines Corporation | Appearance modeling for object re-identification using weighted brightness transfer functions |
US9124762B2 (en) | 2012-12-20 | 2015-09-01 | Microsoft Technology Licensing, Llc | Privacy camera |
US9055207B2 (en) | 2012-12-31 | 2015-06-09 | Digitaloptics Corporation | Auto-focus camera module with MEMS distance measurement |
US8977077B2 (en) * | 2013-01-21 | 2015-03-10 | Apple Inc. | Techniques for presenting user adjustments to a digital image |
CN103974058A (en) * | 2013-01-24 | 2014-08-06 | 鸿富锦精密工业(深圳)有限公司 | Image noise analysis system and method |
US9196084B2 (en) | 2013-03-15 | 2015-11-24 | Urc Ventures Inc. | Determining object volume from mobile device images |
US10402846B2 (en) | 2013-05-21 | 2019-09-03 | Fotonation Limited | Anonymizing facial expression data with a smart-cam |
US9336583B2 (en) | 2013-06-17 | 2016-05-10 | Cyberlink Corp. | Systems and methods for image editing |
US9002085B1 (en) * | 2013-10-22 | 2015-04-07 | Eyenuk, Inc. | Systems and methods for automatically generating descriptions of retinal images |
JP6000929B2 (en) * | 2013-11-07 | 2016-10-05 | 株式会社ソニー・インタラクティブエンタテインメント | Information processing device |
US9316808B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with a low sag aspheric lens element |
US10545314B1 (en) | 2014-03-16 | 2020-01-28 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low lateral chromatic aberration |
US9995910B1 (en) | 2014-03-16 | 2018-06-12 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with high MTF |
US9726859B1 (en) | 2014-03-16 | 2017-08-08 | Navitar Industries, Llc | Optical assembly for a wide field of view camera with low TV distortion |
US9316820B1 (en) | 2014-03-16 | 2016-04-19 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low astigmatism |
US9091843B1 (en) | 2014-03-16 | 2015-07-28 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low track length to focal length ratio |
US10386604B1 (en) | 2014-03-16 | 2019-08-20 | Navitar Industries, Llc | Compact wide field of view digital camera with stray light impact suppression |
US9494772B1 (en) | 2014-03-16 | 2016-11-15 | Hyperion Development, LLC | Optical assembly for a wide field of view point action camera with low field curvature |
US10139595B1 (en) | 2014-03-16 | 2018-11-27 | Navitar Industries, Llc | Optical assembly for a compact wide field of view digital camera with low first lens diameter to image diagonal ratio |
US9824271B2 (en) * | 2014-06-25 | 2017-11-21 | Kodak Alaris Inc. | Adaptable eye artifact identification and correction system |
US10192134B2 (en) | 2014-06-30 | 2019-01-29 | Microsoft Technology Licensing, Llc | Color identification using infrared imaging |
CN104700353B (en) * | 2015-02-11 | 2017-12-05 | 小米科技有限责任公司 | Image filters generation method and device |
US9805280B2 (en) * | 2015-02-16 | 2017-10-31 | Netspark Ltd. | Image analysis systems and methods |
CN104873172A (en) * | 2015-05-11 | 2015-09-02 | 京东方科技集团股份有限公司 | Apparatus having physical examination function, and method, display apparatus and system thereof |
US10122996B2 (en) * | 2016-03-09 | 2018-11-06 | Sony Corporation | Method for 3D multiview reconstruction by feature tracking and model registration |
US10403037B1 (en) | 2016-03-21 | 2019-09-03 | URC Ventures, Inc. | Verifying object measurements determined from mobile device images |
US9495764B1 (en) | 2016-03-21 | 2016-11-15 | URC Ventures, Inc. | Verifying object measurements determined from mobile device images |
US10346953B2 (en) * | 2016-08-23 | 2019-07-09 | Microsoft Technology Licensing, Llc | Flash and non-flash images in flash artifact removal |
US10401598B2 (en) | 2017-01-26 | 2019-09-03 | Navitar, Inc. | Lens attachment for a high etendue modular zoom lens |
US10275648B2 (en) * | 2017-02-08 | 2019-04-30 | Fotonation Limited | Image processing method and system for iris recognition |
JP2018148318A (en) * | 2017-03-02 | 2018-09-20 | キヤノン株式会社 | Image processing device, control method of the same, and program |
US10186049B1 (en) | 2017-03-06 | 2019-01-22 | URC Ventures, Inc. | Determining changes in object structure over time using mobile device images |
JP2019070870A (en) | 2017-10-05 | 2019-05-09 | カシオ計算機株式会社 | Image processing device, image processing method and program |
JP7087331B2 (en) * | 2017-10-05 | 2022-06-21 | カシオ計算機株式会社 | Image processing equipment, image processing methods and programs |
GB2570447A (en) * | 2018-01-23 | 2019-07-31 | Canon Kk | Method and system for improving construction of regions of interest |
US11961216B2 (en) | 2019-04-17 | 2024-04-16 | Shutterfly, Llc | Photography session assistant |
US10839502B2 (en) | 2019-04-17 | 2020-11-17 | Shutterfly, Llc | Photography session assistant |
US11048917B2 (en) * | 2019-07-31 | 2021-06-29 | Baidu Usa Llc | Method, electronic device, and computer readable medium for image identification |
US12080104B2 (en) * | 2019-09-12 | 2024-09-03 | Semiconductor Energy Laboratory Co., Ltd. | Classification method |
US11450012B2 (en) * | 2019-10-31 | 2022-09-20 | Kla Corporation | BBP assisted defect detection flow for SEM images |
US11989899B2 (en) | 2021-02-09 | 2024-05-21 | Everypoint, Inc. | Determining object structure using physically mounted devices with only partial view of object |
US11282291B1 (en) | 2021-02-09 | 2022-03-22 | URC Ventures, Inc. | Determining object structure using fixed-location cameras with only partial view of object |
TWI775356B (en) * | 2021-03-19 | 2022-08-21 | 宏碁智醫股份有限公司 | Image pre-processing method and image processing apparatus for fundoscopic image |
US11741618B2 (en) | 2021-03-22 | 2023-08-29 | Everypoint, Inc. | Performing object modeling by combining visual data from images with motion data of the image acquisition device |
US11798136B2 (en) | 2021-06-10 | 2023-10-24 | Bank Of America Corporation | Automated teller machine for detecting security vulnerabilities based on document noise removal |
US12067748B2 (en) * | 2021-09-28 | 2024-08-20 | International Business Machines Corporation | Selection of image label color based on image understanding |
Citations (97)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4047187A (en) * | 1974-04-01 | 1977-09-06 | Canon Kabushiki Kaisha | System for exposure measurement and/or focus detection by means of image senser |
US4317991A (en) * | 1980-03-12 | 1982-03-02 | Honeywell Inc. | Digital auto focus system utilizing a photodetector array |
US4367027A (en) * | 1980-03-12 | 1983-01-04 | Honeywell Inc. | Active auto focus system improvement |
US4448510A (en) * | 1981-10-23 | 1984-05-15 | Fuji Photo Film Co., Ltd. | Camera shake detection apparatus |
US4638364A (en) * | 1984-10-30 | 1987-01-20 | Sanyo Electric Co., Ltd. | Auto focus circuit for video camera |
US4796043A (en) * | 1985-09-13 | 1989-01-03 | Minolta Camera Kabushiki Kaisha | Multi-point photometric apparatus |
US5008946A (en) * | 1987-09-09 | 1991-04-16 | Aisin Seiki K.K. | System for recognizing image |
US5018017A (en) * | 1987-12-25 | 1991-05-21 | Kabushiki Kaisha Toshiba | Electronic still camera and image recording method thereof |
US5051770A (en) * | 1986-01-20 | 1991-09-24 | Scanera S.C. | Image processing device for controlling the transfer function of an optical system |
US5111231A (en) * | 1989-07-27 | 1992-05-05 | Canon Kabushiki Kaisha | Camera system |
US5150432A (en) * | 1990-03-26 | 1992-09-22 | Kabushiki Kaisha Toshiba | Apparatus for encoding/decoding video signals to improve quality of a specific region |
US5227837A (en) * | 1989-05-12 | 1993-07-13 | Fuji Photo Film Co., Ltd. | Photograph printing method |
US5278923A (en) * | 1992-09-02 | 1994-01-11 | Harmonic Lightwaves, Inc. | Cascaded optical modulation system with high linearity |
US5280530A (en) * | 1990-09-07 | 1994-01-18 | U.S. Philips Corporation | Method and apparatus for tracking a moving object |
US5291234A (en) * | 1987-02-04 | 1994-03-01 | Asahi Kogaku Kogyo Kabushiki Kaisha | Auto optical focus detecting device and eye direction detecting optical system |
US5305048A (en) * | 1991-02-12 | 1994-04-19 | Nikon Corporation | A photo taking apparatus capable of making a photograph with flash by a flash device |
US5311240A (en) * | 1992-11-03 | 1994-05-10 | Eastman Kodak Company | Technique suited for use in multi-zone autofocusing cameras for improving image quality for non-standard display sizes and/or different focal length photographing modes |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
US5353058A (en) * | 1990-10-31 | 1994-10-04 | Canon Kabushiki Kaisha | Automatic exposure control apparatus |
US5384912A (en) * | 1987-10-30 | 1995-01-24 | New Microtime Inc. | Real time video image processing system |
US5384615A (en) * | 1993-06-08 | 1995-01-24 | Industrial Technology Research Institute | Ambient depth-of-field simulation exposuring method |
US5430809A (en) * | 1992-07-10 | 1995-07-04 | Sony Corporation | Human face tracking system |
US5432863A (en) * | 1993-07-19 | 1995-07-11 | Eastman Kodak Company | Automated detection and correction of eye color defects due to flash illumination |
US5450504A (en) * | 1992-05-19 | 1995-09-12 | Calia; James | Method for finding a most likely matching of a target facial image in a data base of facial images |
US5488429A (en) * | 1992-01-13 | 1996-01-30 | Mitsubishi Denki Kabushiki Kaisha | Video signal processor for detecting flesh tones in am image |
US5493409A (en) * | 1990-11-29 | 1996-02-20 | Minolta Camera Kabushiki Kaisha | Still video camera having a printer capable of printing a photographed image in a plurality of printing modes |
US5496106A (en) * | 1994-12-13 | 1996-03-05 | Apple Computer, Inc. | System and method for generating a contrast overlay as a focus assist for an imaging device |
US5543952A (en) * | 1994-09-12 | 1996-08-06 | Nippon Telegraph And Telephone Corporation | Optical transmission system |
US5633678A (en) * | 1995-12-20 | 1997-05-27 | Eastman Kodak Company | Electronic still camera for capturing and categorizing images |
US5638139A (en) * | 1994-04-14 | 1997-06-10 | Texas Instruments Incorporated | Motion adaptive scan-rate conversion using directional edge interpolation |
US5638136A (en) * | 1992-01-13 | 1997-06-10 | Mitsubishi Denki Kabushiki Kaisha | Method and apparatus for detecting flesh tones in an image |
US5652669A (en) * | 1994-08-12 | 1997-07-29 | U.S. Philips Corporation | Optical synchronization arrangement |
US5680481A (en) * | 1992-05-26 | 1997-10-21 | Ricoh Corporation | Facial feature extraction method and apparatus for a neural network acoustic and visual speech recognition system |
US5706362A (en) * | 1993-03-31 | 1998-01-06 | Mitsubishi Denki Kabushiki Kaisha | Image tracking apparatus |
US5710833A (en) * | 1995-04-20 | 1998-01-20 | Massachusetts Institute Of Technology | Detection, recognition and coding of complex objects using probabilistic eigenspace analysis |
US5715325A (en) * | 1995-08-30 | 1998-02-03 | Siemens Corporate Research, Inc. | Apparatus and method for detecting a face in a video image |
US5724456A (en) * | 1995-03-31 | 1998-03-03 | Polaroid Corporation | Brightness adjustment of images using digital scene analysis |
US5745668A (en) * | 1993-08-27 | 1998-04-28 | Massachusetts Institute Of Technology | Example-based image analysis and synthesis using pixelwise correspondence |
US5764803A (en) * | 1996-04-03 | 1998-06-09 | Lucent Technologies Inc. | Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences |
US5764790A (en) * | 1994-09-30 | 1998-06-09 | Istituto Trentino Di Cultura | Method of storing and retrieving images of people, for example, in photographic archives and for the construction of identikit images |
US5771307A (en) * | 1992-12-15 | 1998-06-23 | Nielsen Media Research, Inc. | Audience measurement system and method |
US5774747A (en) * | 1994-06-09 | 1998-06-30 | Fuji Photo Film Co., Ltd. | Method and apparatus for controlling exposure of camera |
US5774754A (en) * | 1994-04-26 | 1998-06-30 | Minolta Co., Ltd. | Camera capable of previewing a photographed image |
US5774591A (en) * | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
US5774129A (en) * | 1995-06-07 | 1998-06-30 | Massachusetts Institute Of Technology | Image analysis and synthesis networks using shape and texture information |
US5781650A (en) * | 1994-02-18 | 1998-07-14 | University Of Central Florida | Automatic feature detection and age classification of human faces in digital images |
US5802208A (en) * | 1996-05-06 | 1998-09-01 | Lucent Technologies Inc. | Face recognition using DCT-based feature vectors |
US5812193A (en) * | 1992-11-07 | 1998-09-22 | Sony Corporation | Video camera system which automatically follows subject changes |
US5818975A (en) * | 1996-10-28 | 1998-10-06 | Eastman Kodak Company | Method and apparatus for area selective exposure adjustment |
US5870138A (en) * | 1995-03-31 | 1999-02-09 | Hitachi, Ltd. | Facial image processing |
US5905807A (en) * | 1992-01-23 | 1999-05-18 | Matsushita Electric Industrial Co., Ltd. | Apparatus for extracting feature points from a facial image |
US5911139A (en) * | 1996-03-29 | 1999-06-08 | Virage, Inc. | Visual image database search engine which allows for different schema |
US5915980A (en) * | 1997-09-29 | 1999-06-29 | George M. Baldock | Wiring interconnection system |
US5966549A (en) * | 1997-09-09 | 1999-10-12 | Minolta Co., Ltd. | Camera |
US6016354A (en) * | 1997-10-23 | 2000-01-18 | Hewlett-Packard Company | Apparatus and a method for reducing red-eye in a digital image |
US6028960A (en) * | 1996-09-20 | 2000-02-22 | Lucent Technologies Inc. | Face feature analysis for automatic lipreading and character animation |
US6035074A (en) * | 1997-05-27 | 2000-03-07 | Sharp Kabushiki Kaisha | Image processing apparatus and storage medium therefor |
US6053268A (en) * | 1997-01-23 | 2000-04-25 | Nissan Motor Co., Ltd. | Vehicle environment recognition system |
US6061055A (en) * | 1997-03-21 | 2000-05-09 | Autodesk, Inc. | Method of tracking objects with an imaging device |
US6072094A (en) * | 1997-08-06 | 2000-06-06 | Merck & Co., Inc. | Efficient synthesis of cyclopropylacetylene |
US6097470A (en) * | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
US6101271A (en) * | 1990-10-09 | 2000-08-08 | Matsushita Electrial Industrial Co., Ltd | Gradation correction method and device |
US6108437A (en) * | 1997-11-14 | 2000-08-22 | Seiko Epson Corporation | Face recognition apparatus, method, system and computer readable medium thereof |
US6115052A (en) * | 1998-02-12 | 2000-09-05 | Mitsubishi Electric Information Technology Center America, Inc. (Ita) | System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence |
US6121953A (en) * | 1997-02-06 | 2000-09-19 | Modern Cartoons, Ltd. | Virtual reality system for sensing facial movements |
US6173068B1 (en) * | 1996-07-29 | 2001-01-09 | Mikos, Ltd. | Method and apparatus for recognizing and classifying individuals based on minutiae |
US6184926B1 (en) * | 1996-11-26 | 2001-02-06 | Ncr Corporation | System and method for detecting a human face in uncontrolled environments |
US6188777B1 (en) * | 1997-08-01 | 2001-02-13 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
US6192149B1 (en) * | 1998-04-08 | 2001-02-20 | Xerox Corporation | Method and apparatus for automatic detection of image target gamma |
US6240198B1 (en) * | 1998-04-13 | 2001-05-29 | Compaq Computer Corporation | Method for figure tracking using 2-D registration |
US6246790B1 (en) * | 1997-12-29 | 2001-06-12 | Cornell Research Foundation, Inc. | Image indexing using color correlograms |
US6246779B1 (en) * | 1997-12-12 | 2001-06-12 | Kabushiki Kaisha Toshiba | Gaze position detection apparatus and method |
US6249315B1 (en) * | 1997-03-24 | 2001-06-19 | Jack M. Holm | Strategy for pictorial digital image processing |
US6252976B1 (en) * | 1997-08-29 | 2001-06-26 | Eastman Kodak Company | Computer program product for redeye detection |
US6263113B1 (en) * | 1998-12-11 | 2001-07-17 | Philips Electronics North America Corp. | Method for detecting a face in a digital image |
US6267939B1 (en) * | 1997-07-22 | 2001-07-31 | Huntsman Corporation Hungary Vegyipari Termelo-Fejleszto Reszvenytarsasag | Absorbent composition for purifying gases which contain acidic components |
US6282317B1 (en) * | 1998-12-31 | 2001-08-28 | Eastman Kodak Company | Method for automatic determination of main subjects in photographic images |
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
US6349373B2 (en) * | 1998-02-20 | 2002-02-19 | Eastman Kodak Company | Digital image management system having method for managing images according to image groups |
US6351556B1 (en) * | 1998-11-20 | 2002-02-26 | Eastman Kodak Company | Method for automatically comparing content of images for classification into events |
US6393148B1 (en) * | 1999-05-13 | 2002-05-21 | Hewlett-Packard Company | Contrast enhancement of an image using luminance and RGB statistical metrics |
US6400830B1 (en) * | 1998-02-06 | 2002-06-04 | Compaq Computer Corporation | Technique for tracking objects through a series of images |
US6404900B1 (en) * | 1998-06-22 | 2002-06-11 | Sharp Laboratories Of America, Inc. | Method for robust human face tracking in presence of multiple persons |
US6407777B1 (en) * | 1997-10-09 | 2002-06-18 | Deluca Michael Joseph | Red-eye filter method and apparatus |
US6421468B1 (en) * | 1999-01-06 | 2002-07-16 | Seiko Epson Corporation | Method and apparatus for sharpening an image by scaling elements of a frequency-domain representation |
US6438264B1 (en) * | 1998-12-31 | 2002-08-20 | Eastman Kodak Company | Method for compensating image color when adjusting the contrast of a digital color image |
US6438234B1 (en) * | 1996-09-05 | 2002-08-20 | Swisscom Ag | Quantum cryptography device and method |
US6441854B2 (en) * | 1997-02-20 | 2002-08-27 | Eastman Kodak Company | Electronic camera with quick review of last captured image |
US6456732B1 (en) * | 1998-09-11 | 2002-09-24 | Hewlett-Packard Company | Automatic rotation, cropping and scaling of images for printing |
US6504951B1 (en) * | 1999-11-29 | 2003-01-07 | Eastman Kodak Company | Method for detecting sky in images |
US6504942B1 (en) * | 1998-01-23 | 2003-01-07 | Sharp Kabushiki Kaisha | Method of and apparatus for detecting a face-like region and observer tracking display |
US6516154B1 (en) * | 2001-07-17 | 2003-02-04 | Eastman Kodak Company | Image revising camera and method |
US6526156B1 (en) * | 1997-01-10 | 2003-02-25 | Xerox Corporation | Apparatus and method for identifying and tracking objects with view-based representations |
US6526161B1 (en) * | 1999-08-30 | 2003-02-25 | Koninklijke Philips Electronics N.V. | System and method for biometrics-based facial feature extraction |
US6529630B1 (en) * | 1998-03-02 | 2003-03-04 | Fuji Photo Film Co., Ltd. | Method and device for extracting principal image subjects |
US6549641B2 (en) * | 1997-10-30 | 2003-04-15 | Minolta Co., Inc. | Screen image observing device and method |
US7689009B2 (en) * | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
Family Cites Families (348)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5451556A (en) * | 1977-09-29 | 1979-04-23 | Canon Inc | Distance measuring apparatus |
US4168510A (en) * | 1978-01-16 | 1979-09-18 | Cbs Inc. | Television system for displaying and recording paths of motion |
JPS58199307A (en) | 1982-05-18 | 1983-11-19 | Olympus Optical Co Ltd | Focusing detecting device |
US4673276A (en) * | 1984-02-24 | 1987-06-16 | Canon Kabushiki Kaisha | Blur detecting device for a camera |
US4970683A (en) | 1986-08-26 | 1990-11-13 | Heads Up Technologies, Inc. | Computerized checklist with predetermined sequences of sublists which automatically returns to skipped checklists |
US4975969A (en) | 1987-10-22 | 1990-12-04 | Peter Tal | Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system utilizing the same |
JP2713978B2 (en) | 1988-05-13 | 1998-02-16 | キヤノン株式会社 | Automatic focusing device for camera |
US4970663A (en) | 1989-04-28 | 1990-11-13 | Avid Technology, Inc. | Method and apparatus for manipulating digital video data |
US5231674A (en) * | 1989-06-09 | 1993-07-27 | Lc Technologies, Inc. | Eye tracking method and apparatus |
US5063603A (en) | 1989-11-06 | 1991-11-05 | David Sarnoff Research Center, Inc. | Dynamic method for recognizing objects and image processing system therefor |
US5164831A (en) | 1990-03-15 | 1992-11-17 | Eastman Kodak Company | Electronic still camera providing multi-format storage of full and reduced resolution images |
US5161204A (en) | 1990-06-04 | 1992-11-03 | Neuristics, Inc. | Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices |
US5274714A (en) | 1990-06-04 | 1993-12-28 | Neuristics, Inc. | Method and apparatus for determining and organizing feature vectors for neural network recognition |
US5164992A (en) | 1990-11-01 | 1992-11-17 | Massachusetts Institute Of Technology | Face recognition system |
US5262820A (en) | 1991-05-27 | 1993-11-16 | Minolta Camera Kabushiki Kaisha | Camera having a blur detecting device |
JP2790562B2 (en) | 1992-01-06 | 1998-08-27 | 富士写真フイルム株式会社 | Image processing method |
JPH06178261A (en) | 1992-12-07 | 1994-06-24 | Nikon Corp | Digital still camera |
US6798834B1 (en) | 1996-08-15 | 2004-09-28 | Mitsubishi Denki Kabushiki Kaisha | Image coding apparatus with segment classification and segmentation-type motion prediction circuit |
US5835616A (en) | 1994-02-18 | 1998-11-10 | University Of Central Florida | Face detection using templates |
US5852669A (en) | 1994-04-06 | 1998-12-22 | Lucent Technologies Inc. | Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video |
US6714665B1 (en) | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
US6426779B1 (en) | 1995-01-04 | 2002-07-30 | Sony Electronics, Inc. | Method and apparatus for providing favorite station and programming information in a multiple station broadcast system |
US6128398A (en) | 1995-01-31 | 2000-10-03 | Miros Inc. | System, method and application for the recognition, verification and similarity ranking of facial or other object patterns |
BR9607845A (en) | 1995-03-22 | 1999-11-30 | Idt Deutschland Gmbh | Method and apparatus for coordinating movement determination across multiple frames. |
US5844573A (en) | 1995-06-07 | 1998-12-01 | Massachusetts Institute Of Technology | Image compression by pointwise prototype correspondence using shape and texture information |
US5912980A (en) * | 1995-07-13 | 1999-06-15 | Hunke; H. Martin | Target acquisition and tracking |
US5842194A (en) | 1995-07-28 | 1998-11-24 | Mitsubishi Denki Kabushiki Kaisha | Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions |
US5850470A (en) | 1995-08-30 | 1998-12-15 | Siemens Corporate Research, Inc. | Neural network for locating and recognizing a deformable object |
US5963670A (en) | 1996-02-12 | 1999-10-05 | Massachusetts Institute Of Technology | Method and apparatus for classifying and identifying images |
US6151073A (en) | 1996-03-28 | 2000-11-21 | Fotonation, Inc. | Intelligent camera flash system |
US6188776B1 (en) | 1996-05-21 | 2001-02-13 | Interval Research Corporation | Principle component analysis of images for the automatic location of control points |
JP2907120B2 (en) | 1996-05-29 | 1999-06-21 | 日本電気株式会社 | Red-eye detection correction device |
US5991456A (en) | 1996-05-29 | 1999-11-23 | Science And Technology Corporation | Method of improving a digital image |
US5978519A (en) | 1996-08-06 | 1999-11-02 | Xerox Corporation | Automatic image cropping |
US20030118216A1 (en) | 1996-09-04 | 2003-06-26 | Goldberg David A. | Obtaining person-specific images in a public venue |
US5852823A (en) | 1996-10-16 | 1998-12-22 | Microsoft | Image classification and retrieval system using a query-by-example paradigm |
US6765612B1 (en) | 1996-12-09 | 2004-07-20 | Flashpoint Technology, Inc. | Method and system for naming images captured by a digital camera |
US6125213A (en) | 1997-02-17 | 2000-09-26 | Canon Kabushiki Kaisha | Image processing method, an image processing apparatus, and a storage medium readable by a computer |
US7057653B1 (en) | 1997-06-19 | 2006-06-06 | Minolta Co., Ltd. | Apparatus capable of image capturing |
US6009209A (en) | 1997-06-27 | 1999-12-28 | Microsoft Corporation | Automated removal of red eye effect from a digital image |
US7548238B2 (en) * | 1997-07-02 | 2009-06-16 | Nvidia Corporation | Computer graphics shader systems and methods |
AUPO798697A0 (en) | 1997-07-15 | 1997-08-07 | Silverbrook Research Pty Ltd | Data processing method and apparatus (ART51) |
KR19990030882A (en) | 1997-10-07 | 1999-05-06 | 이해규 | Digital still camera with adjustable focus position and its control method |
US7042505B1 (en) * | 1997-10-09 | 2006-05-09 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus |
US7352394B1 (en) * | 1997-10-09 | 2008-04-01 | Fotonation Vision Limited | Image modification based on red-eye filter analysis |
US7630006B2 (en) | 1997-10-09 | 2009-12-08 | Fotonation Ireland Limited | Detecting red eye filter and apparatus using meta-data |
US7738015B2 (en) * | 1997-10-09 | 2010-06-15 | Fotonation Vision Limited | Red-eye filter method and apparatus |
US6128397A (en) | 1997-11-21 | 2000-10-03 | Justsystem Pittsburgh Research Center | Method for finding all frontal faces in arbitrarily complex visual scenes |
JPH11175699A (en) * | 1997-12-12 | 1999-07-02 | Fuji Photo Film Co Ltd | Picture processor |
US6268939B1 (en) | 1998-01-08 | 2001-07-31 | Xerox Corporation | Method and apparatus for correcting luminance and chrominance data in digital color images |
US6148092A (en) | 1998-01-08 | 2000-11-14 | Sharp Laboratories Of America, Inc | System for detecting skin-tone regions within an image |
US6278491B1 (en) * | 1998-01-29 | 2001-08-21 | Hewlett-Packard Company | Apparatus and a method for automatically detecting and reducing red-eye in a digital image |
US6483521B1 (en) | 1998-02-02 | 2002-11-19 | Matsushita Electric Industrial Co., Ltd. | Image composition method, image composition apparatus, and data recording media |
US6556708B1 (en) | 1998-02-06 | 2003-04-29 | Compaq Computer Corporation | Technique for classifying objects within an image |
JPH11231358A (en) | 1998-02-19 | 1999-08-27 | Nec Corp | Optical circuit and its production |
JP3657769B2 (en) | 1998-03-19 | 2005-06-08 | 富士写真フイルム株式会社 | Image processing method and image processing apparatus |
US6567983B1 (en) | 1998-04-10 | 2003-05-20 | Fuji Photo Film Co., Ltd. | Electronic album producing and viewing system and method |
US6301370B1 (en) | 1998-04-13 | 2001-10-09 | Eyematic Interfaces, Inc. | Face recognition from video images |
JP2000048184A (en) | 1998-05-29 | 2000-02-18 | Canon Inc | Method for processing image, and method for extracting facial area and device therefor |
AUPP400998A0 (en) | 1998-06-10 | 1998-07-02 | Canon Kabushiki Kaisha | Face detection in digital images |
US6496607B1 (en) | 1998-06-26 | 2002-12-17 | Sarnoff Corporation | Method and apparatus for region-based allocation of processing resources and control of input image formation |
US6362850B1 (en) | 1998-08-04 | 2002-03-26 | Flashpoint Technology, Inc. | Interactive movie creation from one or more still images in a digital imaging device |
DE19837004C1 (en) | 1998-08-14 | 2000-03-09 | Christian Eckes | Process for recognizing objects in digitized images |
GB2341231A (en) | 1998-09-05 | 2000-03-08 | Sharp Kk | Face detection in an image |
US6285410B1 (en) * | 1998-09-11 | 2001-09-04 | Mgi Software Corporation | Method and system for removal of flash artifacts from digital images |
US6134339A (en) | 1998-09-17 | 2000-10-17 | Eastman Kodak Company | Method and apparatus for determining the position of eyes and for correcting eye-defects in a captured frame |
US6606398B2 (en) | 1998-09-30 | 2003-08-12 | Intel Corporation | Automatic cataloging of people in digital photographs |
JP3291259B2 (en) | 1998-11-11 | 2002-06-10 | キヤノン株式会社 | Image processing method and recording medium |
DK1138011T3 (en) | 1998-12-02 | 2004-03-08 | Univ Manchester | Determination of facial subspace |
US6473199B1 (en) | 1998-12-18 | 2002-10-29 | Eastman Kodak Company | Correcting exposure and tone scale of digital images captured by an image capture device |
US6396599B1 (en) * | 1998-12-21 | 2002-05-28 | Eastman Kodak Company | Method and apparatus for modifying a portion of an image in accordance with colorimetric parameters |
JP2000197050A (en) | 1998-12-25 | 2000-07-14 | Canon Inc | Image processing unit and its method |
US6463163B1 (en) | 1999-01-11 | 2002-10-08 | Hewlett-Packard Company | System and method for face detection using candidate image region selection |
US7038715B1 (en) | 1999-01-19 | 2006-05-02 | Texas Instruments Incorporated | Digital still camera with high-quality portrait mode |
AUPP839199A0 (en) | 1999-02-01 | 1999-02-25 | Traffic Pro Pty Ltd | Object recognition & tracking system |
US6778216B1 (en) | 1999-03-25 | 2004-08-17 | Texas Instruments Incorporated | Method and apparatus for digital camera real-time image correction in preview mode |
US7106374B1 (en) | 1999-04-05 | 2006-09-12 | Amherst Systems, Inc. | Dynamically reconfigurable vision system |
JP2000324437A (en) | 1999-05-13 | 2000-11-24 | Fuurie Kk | Video database system |
US6993157B1 (en) | 1999-05-18 | 2006-01-31 | Sanyo Electric Co., Ltd. | Dynamic image processing method and device and medium |
US6760485B1 (en) | 1999-05-20 | 2004-07-06 | Eastman Kodak Company | Nonlinearly modifying a rendered digital image |
US6967680B1 (en) | 1999-05-28 | 2005-11-22 | Microsoft Corporation | Method and apparatus for capturing images |
US7248300B1 (en) | 1999-06-03 | 2007-07-24 | Fujifilm Corporation | Camera and method of photographing good image |
US6879705B1 (en) | 1999-07-14 | 2005-04-12 | Sarnoff Corporation | Method and apparatus for tracking multiple objects in a video sequence |
US6501857B1 (en) | 1999-07-20 | 2002-12-31 | Craig Gotsman | Method and system for detecting and classifying objects in an image |
US6545706B1 (en) | 1999-07-30 | 2003-04-08 | Electric Planet, Inc. | System, method and article of manufacture for tracking a head of a camera-generated image of a person |
JP4378804B2 (en) | 1999-09-10 | 2009-12-09 | ソニー株式会社 | Imaging device |
WO2001028238A2 (en) | 1999-10-08 | 2001-04-19 | Sarnoff Corporation | Method and apparatus for enhancing and indexing video and audio signals |
US6937773B1 (en) | 1999-10-20 | 2005-08-30 | Canon Kabushiki Kaisha | Image encoding method and apparatus |
US6792135B1 (en) | 1999-10-29 | 2004-09-14 | Microsoft Corporation | System and method for face detection through geometric distribution of a non-intensity image property |
EP1107166A3 (en) | 1999-12-01 | 2008-08-06 | Matsushita Electric Industrial Co., Ltd. | Device and method for face image extraction, and recording medium having recorded program for the method |
US6754389B1 (en) | 1999-12-01 | 2004-06-22 | Koninklijke Philips Electronics N.V. | Program classification using object tracking |
KR100343223B1 (en) | 1999-12-07 | 2002-07-10 | 윤종용 | Apparatus for eye and face detection and method thereof |
US6516147B2 (en) | 1999-12-20 | 2003-02-04 | Polaroid Corporation | Scene recognition method and system using brightness and ranging mapping |
US20030035573A1 (en) | 1999-12-22 | 2003-02-20 | Nicolae Duta | Method for learning-based object detection in cardiac magnetic resonance images |
JP2001186323A (en) | 1999-12-24 | 2001-07-06 | Fuji Photo Film Co Ltd | Identification photograph system and picture on processing method |
US7043465B2 (en) | 2000-02-24 | 2006-05-09 | Holding B.E.V.S.A. | Method and device for perception of an object by its shape, its size and/or its orientation |
US6940545B1 (en) | 2000-02-28 | 2005-09-06 | Eastman Kodak Company | Face detecting camera and method |
US6807290B2 (en) | 2000-03-09 | 2004-10-19 | Microsoft Corporation | Rapid computer modeling of faces for animation |
US7106887B2 (en) | 2000-04-13 | 2006-09-12 | Fuji Photo Film Co., Ltd. | Image processing method using conditions corresponding to an identified person |
US6301440B1 (en) | 2000-04-13 | 2001-10-09 | International Business Machines Corp. | System and method for automatically setting image acquisition controls |
US20020150662A1 (en) | 2000-04-19 | 2002-10-17 | Dewis Mark Lawrence | Ethyl 3-mercaptobutyrate as a flavoring or fragrance agent and methods for preparing and using same |
JP4443722B2 (en) | 2000-04-25 | 2010-03-31 | 富士通株式会社 | Image recognition apparatus and method |
US6944341B2 (en) | 2000-05-01 | 2005-09-13 | Xerox Corporation | Loose gray-scale template matching for image processing of anti-aliased lines |
EP1158786A3 (en) | 2000-05-24 | 2005-03-09 | Sony Corporation | Transmission of the region of interest of an image |
US6700999B1 (en) | 2000-06-30 | 2004-03-02 | Intel Corporation | System, method, and apparatus for multiple face tracking |
US6747690B2 (en) | 2000-07-11 | 2004-06-08 | Phase One A/S | Digital camera with integrated accelerometers |
US6564225B1 (en) | 2000-07-14 | 2003-05-13 | Time Warner Entertainment Company, L.P. | Method and apparatus for archiving in and retrieving images from a digital image library |
AUPQ896000A0 (en) | 2000-07-24 | 2000-08-17 | Seeing Machines Pty Ltd | Facial image processing system |
JP4469476B2 (en) | 2000-08-09 | 2010-05-26 | パナソニック株式会社 | Eye position detection method and eye position detection apparatus |
US7313289B2 (en) * | 2000-08-30 | 2007-12-25 | Ricoh Company, Ltd. | Image processing method and apparatus and computer-readable storage medium using improved distortion correction |
JP4140181B2 (en) | 2000-09-08 | 2008-08-27 | 富士フイルム株式会社 | Electronic camera |
US6900840B1 (en) | 2000-09-14 | 2005-05-31 | Hewlett-Packard Development Company, L.P. | Digital camera and method of using same to view image in live view mode |
US6965684B2 (en) | 2000-09-15 | 2005-11-15 | Canon Kabushiki Kaisha | Image processing methods and apparatus for detecting human eyes, human face, and other objects in an image |
US7038709B1 (en) | 2000-11-01 | 2006-05-02 | Gilbert Verghese | System and method for tracking a subject |
JP4590717B2 (en) | 2000-11-17 | 2010-12-01 | ソニー株式会社 | Face identification device and face identification method |
US7099510B2 (en) | 2000-11-29 | 2006-08-29 | Hewlett-Packard Development Company, L.P. | Method and system for object detection in digital images |
US6975750B2 (en) | 2000-12-01 | 2005-12-13 | Microsoft Corp. | System and method for face recognition using synthesized training images |
JP2002175538A (en) * | 2000-12-08 | 2002-06-21 | Mitsubishi Electric Corp | Device and method for portrait generation, recording medium with portrait generating program recorded thereon, terminal for communication, and communication method by terminal for communication |
US6654507B2 (en) | 2000-12-14 | 2003-11-25 | Eastman Kodak Company | Automatically producing an image of a portion of a photographic image |
US6697504B2 (en) | 2000-12-15 | 2004-02-24 | Institute For Information Industry | Method of multi-level facial image recognition and system using the same |
GB2370438A (en) | 2000-12-22 | 2002-06-26 | Hewlett Packard Co | Automated image cropping using selected compositional rules. |
US6847377B2 (en) * | 2001-01-05 | 2005-01-25 | Seiko Epson Corporation | System, method and computer program converting pixels to luminance levels and assigning colors associated with luminance levels in printer or display output devices |
US7034848B2 (en) | 2001-01-05 | 2006-04-25 | Hewlett-Packard Development Company, L.P. | System and method for automatically cropping graphical images |
JP4167401B2 (en) * | 2001-01-12 | 2008-10-15 | 富士フイルム株式会社 | Digital camera and operation control method thereof |
DE50112268D1 (en) | 2001-02-09 | 2007-05-10 | Imaging Solutions Ag | Digital local image property control using masks |
GB2372658A (en) | 2001-02-23 | 2002-08-28 | Hewlett Packard Co | A method of creating moving video data from a static image |
US7027621B1 (en) | 2001-03-15 | 2006-04-11 | Mikos, Ltd. | Method and apparatus for operator condition monitoring and assessment |
US20020136433A1 (en) | 2001-03-26 | 2002-09-26 | Koninklijke Philips Electronics N.V. | Adaptive facial recognition system and method |
US6915011B2 (en) | 2001-03-28 | 2005-07-05 | Eastman Kodak Company | Event clustering of images using foreground/background segmentation |
US6873743B2 (en) * | 2001-03-29 | 2005-03-29 | Fotonation Holdings, Llc | Method and apparatus for the automatic real-time detection and correction of red-eye defects in batches of digital images or in handheld appliances |
US6760465B2 (en) | 2001-03-30 | 2004-07-06 | Intel Corporation | Mechanism for tracking colored objects in a video sequence |
US6859565B2 (en) * | 2001-04-11 | 2005-02-22 | Hewlett-Packard Development Company, L.P. | Method and apparatus for the removal of flash artifacts |
US6987892B2 (en) * | 2001-04-19 | 2006-01-17 | Eastman Kodak Company | Method, system and software for correcting image defects |
JP2002334338A (en) | 2001-05-09 | 2002-11-22 | National Institute Of Advanced Industrial & Technology | Device and method for object tracking and recording medium |
US20020172419A1 (en) | 2001-05-15 | 2002-11-21 | Qian Lin | Image enhancement using face detection |
US7031523B2 (en) * | 2001-05-16 | 2006-04-18 | Siemens Corporate Research, Inc. | Systems and methods for automatic scale selection in real-time imaging |
US6847733B2 (en) | 2001-05-23 | 2005-01-25 | Eastman Kodak Company | Retrieval and browsing of database images based on image emphasis and appeal |
TW505892B (en) | 2001-05-25 | 2002-10-11 | Ind Tech Res Inst | System and method for promptly tracking multiple faces |
JP4177598B2 (en) | 2001-05-25 | 2008-11-05 | 株式会社東芝 | Face image recording apparatus, information management system, face image recording method, and information management method |
AUPR541801A0 (en) | 2001-06-01 | 2001-06-28 | Canon Kabushiki Kaisha | Face detection in colour images with complex background |
US20020181801A1 (en) | 2001-06-01 | 2002-12-05 | Needham Bradford H. | Feature-based image correction |
US7068841B2 (en) | 2001-06-29 | 2006-06-27 | Hewlett-Packard Development Company, L.P. | Automatic digital image enhancement |
US6980691B2 (en) | 2001-07-05 | 2005-12-27 | Corel Corporation | Correction of “red-eye” effects in images |
GB0116877D0 (en) | 2001-07-10 | 2001-09-05 | Hewlett Packard Co | Intelligent feature selection and pan zoom control |
JP2003030647A (en) * | 2001-07-11 | 2003-01-31 | Minolta Co Ltd | Image processor, image processing method and program |
US6832006B2 (en) | 2001-07-23 | 2004-12-14 | Eastman Kodak Company | System and method for controlling image compression based on image emphasis |
US20030023974A1 (en) | 2001-07-25 | 2003-01-30 | Koninklijke Philips Electronics N.V. | Method and apparatus to track objects in sports programs and select an appropriate camera view |
US20030039402A1 (en) * | 2001-08-24 | 2003-02-27 | Robins David R. | Method and apparatus for detection and removal of scanned image scratches and dust |
EP1293933A1 (en) * | 2001-09-03 | 2003-03-19 | Agfa-Gevaert AG | Method for automatically detecting red-eye defects in photographic image data |
EP1288858A1 (en) * | 2001-09-03 | 2003-03-05 | Agfa-Gevaert AG | Method for automatically detecting red-eye defects in photographic image data |
US6993180B2 (en) | 2001-09-04 | 2006-01-31 | Eastman Kodak Company | Method and system for automated grouping of images |
US7027619B2 (en) | 2001-09-13 | 2006-04-11 | Honeywell International Inc. | Near-infrared method and system for use in face detection |
WO2003028377A1 (en) | 2001-09-14 | 2003-04-03 | Vislog Technology Pte Ltd. | Apparatus and method for selecting key frames of clear faces through a sequence of images |
US7262798B2 (en) | 2001-09-17 | 2007-08-28 | Hewlett-Packard Development Company, L.P. | System and method for simulating fill flash in photography |
US7298412B2 (en) | 2001-09-18 | 2007-11-20 | Ricoh Company, Limited | Image pickup device, automatic focusing method, automatic exposure method, electronic flash control method and computer program |
US7133070B2 (en) * | 2001-09-20 | 2006-11-07 | Eastman Kodak Company | System and method for deciding when to correct image-specific defects based on camera, scene, display and demographic data |
US7324246B2 (en) * | 2001-09-27 | 2008-01-29 | Fujifilm Corporation | Apparatus and method for image processing |
US7433089B2 (en) * | 2001-09-27 | 2008-10-07 | Fujifilm Corporation | Image processor |
US7110569B2 (en) | 2001-09-27 | 2006-09-19 | Koninklijke Philips Electronics N.V. | Video based detection of fall-down and other events |
JP2003107365A (en) * | 2001-09-28 | 2003-04-09 | Pentax Corp | Observation device with photographing function |
US7130864B2 (en) | 2001-10-31 | 2006-10-31 | Hewlett-Packard Development Company, L.P. | Method and system for accessing a collection of images in a database |
KR100421221B1 (en) | 2001-11-05 | 2004-03-02 | 삼성전자주식회사 | Illumination invariant object tracking method and image editing system adopting the method |
US7162101B2 (en) | 2001-11-15 | 2007-01-09 | Canon Kabushiki Kaisha | Image processing apparatus and method |
US7130446B2 (en) | 2001-12-03 | 2006-10-31 | Microsoft Corporation | Automatic detection and tracking of multiple individuals using multiple cues |
US7688349B2 (en) | 2001-12-07 | 2010-03-30 | International Business Machines Corporation | Method of detecting and tracking groups of people |
US7050607B2 (en) | 2001-12-08 | 2006-05-23 | Microsoft Corp. | System and method for multi-view face detection |
TW535413B (en) | 2001-12-13 | 2003-06-01 | Mediatek Inc | Device and method for processing digital video data |
US7221809B2 (en) | 2001-12-17 | 2007-05-22 | Genex Technologies, Inc. | Face recognition system and method |
US7167519B2 (en) | 2001-12-20 | 2007-01-23 | Siemens Corporate Research, Inc. | Real-time video object generation for smart cameras |
US6560029B1 (en) * | 2001-12-21 | 2003-05-06 | Itt Manufacturing Enterprises, Inc. | Video enhanced night vision goggle |
US7035467B2 (en) | 2002-01-09 | 2006-04-25 | Eastman Kodak Company | Method and system for processing images for themed imaging services |
US7289664B2 (en) * | 2002-01-17 | 2007-10-30 | Fujifilm Corporation | Method of detecting and correcting the red eye |
JP2003219225A (en) | 2002-01-25 | 2003-07-31 | Nippon Micro Systems Kk | Device for monitoring moving object image |
US7362354B2 (en) | 2002-02-12 | 2008-04-22 | Hewlett-Packard Development Company, L.P. | Method and system for assessing the photo quality of a captured image in a digital still camera |
US20030161506A1 (en) * | 2002-02-25 | 2003-08-28 | Eastman Kodak Company | Face detection computer program product for redeye correction |
US7254257B2 (en) | 2002-03-04 | 2007-08-07 | Samsung Electronics Co., Ltd. | Method and apparatus of recognizing face using component-based 2nd-order principal component analysis (PCA)/independent component analysis (ICA) |
US7146026B2 (en) | 2002-06-04 | 2006-12-05 | Hewlett-Packard Development Company, L.P. | Image correction system and method |
US6959109B2 (en) | 2002-06-20 | 2005-10-25 | Identix Incorporated | System and method for pose-angle estimation |
US6801642B2 (en) | 2002-06-26 | 2004-10-05 | Motorola, Inc. | Method and apparatus for limiting storage or transmission of visual information |
AU2003280516A1 (en) | 2002-07-01 | 2004-01-19 | The Regents Of The University Of California | Digital processing of video images |
US7227976B1 (en) | 2002-07-08 | 2007-06-05 | Videomining Corporation | Method and system for real-time facial image enhancement |
US20040208385A1 (en) * | 2003-04-18 | 2004-10-21 | Medispectra, Inc. | Methods and apparatus for visually enhancing images |
US7020337B2 (en) | 2002-07-22 | 2006-03-28 | Mitsubishi Electric Research Laboratories, Inc. | System and method for detecting objects in images |
JP2004062565A (en) | 2002-07-30 | 2004-02-26 | Canon Inc | Image processor and image processing method, and program storage medium |
US7110575B2 (en) | 2002-08-02 | 2006-09-19 | Eastman Kodak Company | Method for locating faces in digital color images |
US7035462B2 (en) | 2002-08-29 | 2006-04-25 | Eastman Kodak Company | Apparatus and method for processing digital images having eye color defects |
US7397969B2 (en) * | 2002-08-30 | 2008-07-08 | Fujifilm Corporation | Red eye compensation method, image processing apparatus and method for implementing the red eye compensation method, as well as printing method and printer |
US20040041121A1 (en) | 2002-08-30 | 2004-03-04 | Shigeyoshi Yoshida | Magnetic loss material and method of producing the same |
EP1398733A1 (en) | 2002-09-12 | 2004-03-17 | GRETAG IMAGING Trading AG | Texture-based colour correction |
US7194114B2 (en) | 2002-10-07 | 2007-03-20 | Carnegie Mellon University | Object finder for two-dimensional images, and system for determining a set of sub-classifiers composing an object finder |
KR100473598B1 (en) | 2002-11-04 | 2005-03-11 | 삼성전자주식회사 | System and method for detecting veilde face image |
US7154510B2 (en) | 2002-11-14 | 2006-12-26 | Eastman Kodak Company | System and method for modifying a portrait image in response to a stimulus |
GB2395778A (en) | 2002-11-29 | 2004-06-02 | Sony Uk Ltd | Face detection |
GB2395264A (en) | 2002-11-29 | 2004-05-19 | Sony Uk Ltd | Face detection in images |
US7394969B2 (en) * | 2002-12-11 | 2008-07-01 | Eastman Kodak Company | System and method to compose a slide show |
JP3954484B2 (en) | 2002-12-12 | 2007-08-08 | 株式会社東芝 | Image processing apparatus and program |
US7082157B2 (en) | 2002-12-24 | 2006-07-25 | Realtek Semiconductor Corp. | Residual echo reduction for a full duplex transceiver |
EP1577705B1 (en) * | 2002-12-25 | 2018-08-01 | Nikon Corporation | Blur correction camera system |
JP4178949B2 (en) * | 2002-12-27 | 2008-11-12 | 富士ゼロックス株式会社 | Image processing apparatus, image processing method, and program thereof |
KR100946888B1 (en) * | 2003-01-30 | 2010-03-09 | 삼성전자주식회사 | Device and method for correcting a skew of image |
US7120279B2 (en) | 2003-01-30 | 2006-10-10 | Eastman Kodak Company | Method for face orientation determination in digital color images |
US7162076B2 (en) | 2003-02-11 | 2007-01-09 | New Jersey Institute Of Technology | Face detection method and apparatus |
JP2004274720A (en) | 2003-02-18 | 2004-09-30 | Fuji Photo Film Co Ltd | Data conversion apparatus and data conversion program |
US7039222B2 (en) | 2003-02-28 | 2006-05-02 | Eastman Kodak Company | Method and system for enhancing portrait images that are processed in a batch mode |
US7508961B2 (en) | 2003-03-12 | 2009-03-24 | Eastman Kodak Company | Method and system for face detection in digital images |
US7103227B2 (en) | 2003-03-19 | 2006-09-05 | Mitsubishi Electric Research Laboratories, Inc. | Enhancing low quality images of naturally illuminated scenes |
US20040228505A1 (en) | 2003-04-14 | 2004-11-18 | Fuji Photo Film Co., Ltd. | Image characteristic portion extraction method, computer readable medium, and data collection and processing device |
US7609908B2 (en) | 2003-04-30 | 2009-10-27 | Eastman Kodak Company | Method for adjusting the brightness of a digital image utilizing belief values |
DE60314851D1 (en) | 2003-05-19 | 2007-08-23 | St Microelectronics Sa | Image processing method for numerical images with exposure correction by detection of skin areas of the object |
JP2004350130A (en) | 2003-05-23 | 2004-12-09 | Fuji Photo Film Co Ltd | Digital camera |
WO2007142621A1 (en) | 2006-06-02 | 2007-12-13 | Fotonation Vision Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US9129381B2 (en) * | 2003-06-26 | 2015-09-08 | Fotonation Limited | Modification of post-viewing parameters for digital images using image region or feature information |
US7920723B2 (en) | 2005-11-18 | 2011-04-05 | Tessera Technologies Ireland Limited | Two stage detection for photographic eye artifacts |
US7317815B2 (en) | 2003-06-26 | 2008-01-08 | Fotonation Vision Limited | Digital image processing composition using face detection information |
US7587085B2 (en) * | 2004-10-28 | 2009-09-08 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US7440593B1 (en) * | 2003-06-26 | 2008-10-21 | Fotonation Vision Limited | Method of improving orientation and color balance of digital images using face detection information |
US8553949B2 (en) | 2004-01-22 | 2013-10-08 | DigitalOptics Corporation Europe Limited | Classification and organization of consumer digital images using workflow, and face detection and recognition |
US20060093238A1 (en) * | 2004-10-28 | 2006-05-04 | Eran Steinberg | Method and apparatus for red-eye detection in an acquired digital image using face recognition |
US7680342B2 (en) | 2004-08-16 | 2010-03-16 | Fotonation Vision Limited | Indoor/outdoor classification in digital images |
US7574016B2 (en) | 2003-06-26 | 2009-08-11 | Fotonation Vision Limited | Digital image processing using face detection information |
US8593542B2 (en) | 2005-12-27 | 2013-11-26 | DigitalOptics Corporation Europe Limited | Foreground/background separation using reference images |
US8989453B2 (en) | 2003-06-26 | 2015-03-24 | Fotonation Limited | Digital image processing using face detection information |
US8363951B2 (en) | 2007-03-05 | 2013-01-29 | DigitalOptics Corporation Europe Limited | Face recognition training method and apparatus |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US7362368B2 (en) | 2003-06-26 | 2008-04-22 | Fotonation Vision Limited | Perfecting the optics within a digital image acquisition device using face detection |
US7269292B2 (en) * | 2003-06-26 | 2007-09-11 | Fotonation Vision Limited | Digital image adjustable compression and resolution using face detection information |
US7636486B2 (en) | 2004-11-10 | 2009-12-22 | Fotonation Ireland Ltd. | Method of determining PSF using multiple instances of a nominally similar scene |
US7471846B2 (en) | 2003-06-26 | 2008-12-30 | Fotonation Vision Limited | Perfecting the effect of flash within an image acquisition devices using face detection |
US8948468B2 (en) | 2003-06-26 | 2015-02-03 | Fotonation Limited | Modification of viewing parameters for digital images using face detection information |
US7536036B2 (en) * | 2004-10-28 | 2009-05-19 | Fotonation Vision Limited | Method and apparatus for red-eye detection in an acquired digital image |
US7315630B2 (en) | 2003-06-26 | 2008-01-01 | Fotonation Vision Limited | Perfecting of digital image rendering parameters within rendering devices using face detection |
US7702236B2 (en) | 2006-02-14 | 2010-04-20 | Fotonation Vision Limited | Digital image acquisition device with built in dust and sensor mapping capability |
US7792335B2 (en) | 2006-02-24 | 2010-09-07 | Fotonation Vision Limited | Method and apparatus for selective disqualification of digital images |
US8170294B2 (en) * | 2006-11-10 | 2012-05-01 | DigitalOptics Corporation Europe Limited | Method of detecting redeye in a digital image |
US7844076B2 (en) * | 2003-06-26 | 2010-11-30 | Fotonation Vision Limited | Digital image processing using face detection and skin tone information |
US7565030B2 (en) | 2003-06-26 | 2009-07-21 | Fotonation Vision Limited | Detecting orientation of digital images using face detection information |
US7620218B2 (en) * | 2006-08-11 | 2009-11-17 | Fotonation Ireland Limited | Real-time face tracking with reference images |
US8498452B2 (en) | 2003-06-26 | 2013-07-30 | DigitalOptics Corporation Europe Limited | Digital image processing using face detection information |
US7616233B2 (en) | 2003-06-26 | 2009-11-10 | Fotonation Vision Limited | Perfecting of digital image capture parameters within acquisition devices using face detection |
US7606417B2 (en) * | 2004-08-16 | 2009-10-20 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations |
US7190829B2 (en) | 2003-06-30 | 2007-03-13 | Microsoft Corporation | Speedup of face detection in digital images |
US7274822B2 (en) | 2003-06-30 | 2007-09-25 | Microsoft Corporation | Face annotation for photo management |
JP3867687B2 (en) * | 2003-07-08 | 2007-01-10 | コニカミノルタフォトイメージング株式会社 | Imaging device |
EP1499111B1 (en) * | 2003-07-15 | 2015-01-07 | Canon Kabushiki Kaisha | Image sensiting apparatus, image processing apparatus, and control method thereof |
US7689033B2 (en) | 2003-07-16 | 2010-03-30 | Microsoft Corporation | Robust multi-view face detection methods and apparatuses |
US20050140801A1 (en) | 2003-08-05 | 2005-06-30 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus |
US9412007B2 (en) * | 2003-08-05 | 2016-08-09 | Fotonation Limited | Partial face detector red-eye filter method and apparatus |
US20050031224A1 (en) * | 2003-08-05 | 2005-02-10 | Yury Prilutsky | Detecting red eye filter and apparatus using meta-data |
JP2005094741A (en) * | 2003-08-14 | 2005-04-07 | Fuji Photo Film Co Ltd | Image pickup device and image synthesizing method |
JP2005078376A (en) | 2003-08-29 | 2005-03-24 | Sony Corp | Object detection device, object detection method, and robot device |
US7362210B2 (en) | 2003-09-05 | 2008-04-22 | Honeywell International Inc. | System and method for gate access control |
JP2005100084A (en) | 2003-09-25 | 2005-04-14 | Toshiba Corp | Image processor and method |
US7369712B2 (en) | 2003-09-30 | 2008-05-06 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on multiple occurrences of dust in images |
US7295233B2 (en) | 2003-09-30 | 2007-11-13 | Fotonation Vision Limited | Detection and removal of blemishes in digital images utilizing original images of defocused scenes |
US7424170B2 (en) | 2003-09-30 | 2008-09-09 | Fotonation Vision Limited | Automated statistical self-calibrating detection and removal of blemishes in digital images based on determining probabilities based on image analysis of single images |
US7590305B2 (en) | 2003-09-30 | 2009-09-15 | Fotonation Vision Limited | Digital camera with built-in lens calibration table |
JP2005110176A (en) * | 2003-10-02 | 2005-04-21 | Nikon Corp | Noise removing method, noise removing processing program, and noise removing device |
US7512286B2 (en) | 2003-10-27 | 2009-03-31 | Hewlett-Packard Development Company, L.P. | Assessing image quality |
JP2005128956A (en) | 2003-10-27 | 2005-05-19 | Pentax Corp | Subject determining program and digital camera |
US7274832B2 (en) | 2003-11-13 | 2007-09-25 | Eastman Kodak Company | In-plane rotation invariant object detection in digitized images |
US7596247B2 (en) | 2003-11-14 | 2009-09-29 | Fujifilm Corporation | Method and apparatus for object recognition using probability models |
JP2005182771A (en) * | 2003-11-27 | 2005-07-07 | Fuji Photo Film Co Ltd | Image editing apparatus, method, and program |
JP2005164475A (en) | 2003-12-04 | 2005-06-23 | Mitsutoyo Corp | Measuring apparatus for perpendicularity |
US7564994B1 (en) | 2004-01-22 | 2009-07-21 | Fotonation Vision Limited | Classification system for consumer digital images using automatic workflow and face detection and recognition |
JP4321287B2 (en) | 2004-02-10 | 2009-08-26 | ソニー株式会社 | Imaging apparatus, imaging method, and program |
JP4320272B2 (en) | 2004-03-31 | 2009-08-26 | 富士フイルム株式会社 | Specific area detection method, specific area detection apparatus, and program |
US7657060B2 (en) * | 2004-03-31 | 2010-02-02 | Microsoft Corporation | Stylization of video |
JP4340968B2 (en) | 2004-05-07 | 2009-10-07 | ソニー株式会社 | Image processing apparatus and method, recording medium, and program |
JP2006033793A (en) | 2004-06-14 | 2006-02-02 | Victor Co Of Japan Ltd | Tracking video reproducing apparatus |
JP4442330B2 (en) | 2004-06-17 | 2010-03-31 | 株式会社ニコン | Electronic camera and electronic camera system |
WO2006023046A1 (en) | 2004-06-21 | 2006-03-02 | Nevengineering, Inc. | Single image based multi-biometric system and method |
JP4574249B2 (en) | 2004-06-29 | 2010-11-04 | キヤノン株式会社 | Image processing apparatus and method, program, and imaging apparatus |
US7457477B2 (en) * | 2004-07-06 | 2008-11-25 | Microsoft Corporation | Digital photography with flash/no flash extension |
US7158680B2 (en) | 2004-07-30 | 2007-01-02 | Euclid Discoveries, Llc | Apparatus and method for processing video data |
KR100668303B1 (en) | 2004-08-04 | 2007-01-12 | 삼성전자주식회사 | Method for detecting face based on skin color and pattern matching |
JP4757559B2 (en) | 2004-08-11 | 2011-08-24 | 富士フイルム株式会社 | Apparatus and method for detecting components of a subject |
US7119838B2 (en) | 2004-08-19 | 2006-10-10 | Blue Marlin Llc | Method and imager for detecting the location of objects |
EP1812968B1 (en) * | 2004-08-25 | 2019-01-16 | Callahan Cellular L.L.C. | Apparatus for multiple camera devices and method of operating same |
US7502498B2 (en) | 2004-09-10 | 2009-03-10 | Available For Licensing | Patient monitoring apparatus |
JP4408779B2 (en) * | 2004-09-15 | 2010-02-03 | キヤノン株式会社 | Image processing device |
US7333963B2 (en) | 2004-10-07 | 2008-02-19 | Bernard Widrow | Cognitive memory and auto-associative neural network based search engine for computer and network located images and photographs |
WO2006040761A2 (en) * | 2004-10-15 | 2006-04-20 | Oren Halpern | A system and a method for improving the captured images of digital still cameras |
US7730406B2 (en) * | 2004-10-20 | 2010-06-01 | Hewlett-Packard Development Company, L.P. | Image processing system and method |
US8320641B2 (en) | 2004-10-28 | 2012-11-27 | DigitalOptics Corporation Europe Limited | Method and apparatus for red-eye detection using preview or other reference images |
JP4383399B2 (en) | 2004-11-05 | 2009-12-16 | 富士フイルム株式会社 | Detection target image search apparatus and control method thereof |
US7593603B1 (en) | 2004-11-30 | 2009-09-22 | Adobe Systems Incorporated | Multi-behavior image correction tool |
US7599521B2 (en) | 2004-11-30 | 2009-10-06 | Honda Motor Co., Ltd. | Vehicle vicinity monitoring apparatus |
US7734067B2 (en) | 2004-12-07 | 2010-06-08 | Electronics And Telecommunications Research Institute | User recognition system and method thereof |
DE102004062315A1 (en) | 2004-12-20 | 2006-06-29 | Mack Ride Gmbh & Co Kg | Water ride |
US20060006077A1 (en) | 2004-12-24 | 2006-01-12 | Erie County Plastics Corporation | Dispensing closure with integral piercing unit |
US7315631B1 (en) | 2006-08-11 | 2008-01-01 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
US7715597B2 (en) | 2004-12-29 | 2010-05-11 | Fotonation Ireland Limited | Method and component for image recognition |
US8503800B2 (en) | 2007-03-05 | 2013-08-06 | DigitalOptics Corporation Europe Limited | Illumination detection using classifier chains |
CN100358340C (en) | 2005-01-05 | 2007-12-26 | 张健 | Digital-camera capable of selecting optimum taking opportune moment |
JP4755490B2 (en) * | 2005-01-13 | 2011-08-24 | オリンパスイメージング株式会社 | Blur correction method and imaging apparatus |
US7454058B2 (en) | 2005-02-07 | 2008-11-18 | Mitsubishi Electric Research Lab, Inc. | Method of extracting and searching integral histograms of data samples |
US7620208B2 (en) | 2005-02-09 | 2009-11-17 | Siemens Corporate Research, Inc. | System and method for detecting features from images of vehicles |
JP4216824B2 (en) | 2005-03-07 | 2009-01-28 | 株式会社東芝 | 3D model generation apparatus, 3D model generation method, and 3D model generation program |
JP4639869B2 (en) | 2005-03-14 | 2011-02-23 | オムロン株式会社 | Imaging apparatus and timer photographing method |
US20060203106A1 (en) | 2005-03-14 | 2006-09-14 | Lawrence Joseph P | Methods and apparatus for retrieving data captured by a media device |
JP4324170B2 (en) | 2005-03-17 | 2009-09-02 | キヤノン株式会社 | Imaging apparatus and display control method |
US7801328B2 (en) | 2005-03-31 | 2010-09-21 | Honeywell International Inc. | Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing |
US7600214B2 (en) * | 2005-04-18 | 2009-10-06 | Broadcom Corporation | Use of metadata for seamless updates |
JP4519708B2 (en) | 2005-05-11 | 2010-08-04 | 富士フイルム株式会社 | Imaging apparatus and method, and program |
JP2006318103A (en) | 2005-05-11 | 2006-11-24 | Fuji Photo Film Co Ltd | Image processor, image processing method, and program |
JP4906034B2 (en) | 2005-05-16 | 2012-03-28 | 富士フイルム株式会社 | Imaging apparatus, method, and program |
JP2006338377A (en) * | 2005-06-02 | 2006-12-14 | Fujifilm Holdings Corp | Image correction method, apparatus, and program |
US20060280375A1 (en) | 2005-06-08 | 2006-12-14 | Dalton Dan L | Red-eye correction method and apparatus with user-adjustable threshold |
JP4498224B2 (en) | 2005-06-14 | 2010-07-07 | キヤノン株式会社 | Image processing apparatus and method |
JP2006350498A (en) | 2005-06-14 | 2006-12-28 | Fujifilm Holdings Corp | Image processor and image processing method and program |
JP2007006182A (en) | 2005-06-24 | 2007-01-11 | Fujifilm Holdings Corp | Image processing apparatus and method therefor, and program |
JP4573725B2 (en) | 2005-08-01 | 2010-11-04 | イーストマン コダック カンパニー | Imaging apparatus having a plurality of optical systems |
US7574069B2 (en) * | 2005-08-01 | 2009-08-11 | Mitsubishi Electric Research Laboratories, Inc. | Retargeting images for small displays |
US20070035628A1 (en) * | 2005-08-12 | 2007-02-15 | Kunihiko Kanai | Image-capturing device having multiple optical systems |
US7606392B2 (en) * | 2005-08-26 | 2009-10-20 | Sony Corporation | Capturing and processing facial motion data |
US20070047834A1 (en) * | 2005-08-31 | 2007-03-01 | International Business Machines Corporation | Method and apparatus for visual background subtraction with one or more preprocessing modules |
JP4429241B2 (en) | 2005-09-05 | 2010-03-10 | キヤノン株式会社 | Image processing apparatus and method |
JP4799101B2 (en) | 2005-09-26 | 2011-10-26 | 富士フイルム株式会社 | Image processing method, apparatus, and program |
JP2007094549A (en) | 2005-09-27 | 2007-04-12 | Fujifilm Corp | Image processing method, device and program |
US7555149B2 (en) | 2005-10-25 | 2009-06-30 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for segmenting videos using face detection |
US7747071B2 (en) * | 2005-10-27 | 2010-06-29 | Hewlett-Packard Development Company, L.P. | Detecting and correcting peteye |
US20070098303A1 (en) | 2005-10-31 | 2007-05-03 | Eastman Kodak Company | Determining a particular person from a collection |
JP4626493B2 (en) | 2005-11-14 | 2011-02-09 | ソニー株式会社 | Image processing apparatus, image processing method, program for image processing method, and recording medium recording program for image processing method |
US7599577B2 (en) | 2005-11-18 | 2009-10-06 | Fotonation Vision Limited | Method and apparatus of correcting hybrid flash artifacts in digital images |
US7953253B2 (en) | 2005-12-31 | 2011-05-31 | Arcsoft, Inc. | Face detection on mobile devices |
US7643659B2 (en) | 2005-12-31 | 2010-01-05 | Arcsoft, Inc. | Facial feature detection on mobile devices |
JP4970468B2 (en) | 2006-02-14 | 2012-07-04 | デジタルオプティックス・コーポレイション・ヨーロッパ・リミテッド | Image blur processing |
WO2007095553A2 (en) | 2006-02-14 | 2007-08-23 | Fotonation Vision Limited | Automatic detection and correction of non-red eye flash defects |
US7804983B2 (en) * | 2006-02-24 | 2010-09-28 | Fotonation Vision Limited | Digital image acquisition control and correction method and apparatus |
US7551754B2 (en) * | 2006-02-24 | 2009-06-23 | Fotonation Vision Limited | Method and apparatus for selective rejection of digital images |
IES20060564A2 (en) | 2006-05-03 | 2006-11-01 | Fotonation Vision Ltd | Improved foreground / background separation |
US7539533B2 (en) | 2006-05-16 | 2009-05-26 | Bao Tran | Mesh network monitoring appliance |
IES20070229A2 (en) | 2006-06-05 | 2007-10-03 | Fotonation Vision Ltd | Image acquisition method and apparatus |
US7965875B2 (en) | 2006-06-12 | 2011-06-21 | Tessera Technologies Ireland Limited | Advances in extending the AAM techniques from grayscale to color images |
US7515740B2 (en) | 2006-08-02 | 2009-04-07 | Fotonation Vision Limited | Face recognition with combined PCA-based datasets |
US7551800B2 (en) | 2006-08-09 | 2009-06-23 | Fotonation Vision Limited | Detection of airborne flash artifacts using preflash image |
US7916897B2 (en) | 2006-08-11 | 2011-03-29 | Tessera Technologies Ireland Limited | Face tracking for controlling imaging parameters |
US7403643B2 (en) * | 2006-08-11 | 2008-07-22 | Fotonation Vision Limited | Real-time face tracking in a digital image acquisition device |
JP4937673B2 (en) * | 2006-08-15 | 2012-05-23 | 富士通株式会社 | Semiconductor light-emitting element, manufacturing method thereof, and semiconductor light-emitting device |
US8055067B2 (en) | 2007-01-18 | 2011-11-08 | DigitalOptics Corporation Europe Limited | Color segmentation |
EP2115662B1 (en) | 2007-02-28 | 2010-06-23 | Fotonation Vision Limited | Separating directional lighting variability in statistical face modelling based on texture space decomposition |
EP2145288A4 (en) | 2007-03-05 | 2013-09-04 | Digitaloptics Corp Europe Ltd | Red eye false positive filtering using face location and orientation |
EP2123008A4 (en) | 2007-03-05 | 2011-03-16 | Tessera Tech Ireland Ltd | Face categorization and annotation of a mobile phone contact list |
US8212864B2 (en) | 2008-01-30 | 2012-07-03 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for using image acquisition data to detect and correct image defects |
JP5056625B2 (en) * | 2008-07-01 | 2012-10-24 | 富士通株式会社 | Circuit design apparatus and circuit design method |
US8081254B2 (en) | 2008-08-14 | 2011-12-20 | DigitalOptics Corporation Europe Limited | In-camera based method of detecting defect eye with high accuracy |
- 2008-06-19: US application US12/142,134 (patent US8320641B2), status: Active
- 2010-11-16: US application US12/947,731 (patent US7953251B1), status: Active
- 2011-05-23: US application US13/113,648 (patent US8135184B2), status: Active
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4047187A (en) * | 1974-04-01 | 1977-09-06 | Canon Kabushiki Kaisha | System for exposure measurement and/or focus detection by means of image senser |
US4317991A (en) * | 1980-03-12 | 1982-03-02 | Honeywell Inc. | Digital auto focus system utilizing a photodetector array |
US4367027A (en) * | 1980-03-12 | 1983-01-04 | Honeywell Inc. | Active auto focus system improvement |
US4448510A (en) * | 1981-10-23 | 1984-05-15 | Fuji Photo Film Co., Ltd. | Camera shake detection apparatus |
US4638364A (en) * | 1984-10-30 | 1987-01-20 | Sanyo Electric Co., Ltd. | Auto focus circuit for video camera |
US4796043A (en) * | 1985-09-13 | 1989-01-03 | Minolta Camera Kabushiki Kaisha | Multi-point photometric apparatus |
US5051770A (en) * | 1986-01-20 | 1991-09-24 | Scanera S.C. | Image processing device for controlling the transfer function of an optical system |
US5291234A (en) * | 1987-02-04 | 1994-03-01 | Asahi Kogaku Kogyo Kabushiki Kaisha | Auto optical focus detecting device and eye direction detecting optical system |
US5008946A (en) * | 1987-09-09 | 1991-04-16 | Aisin Seiki K.K. | System for recognizing image |
US5384912A (en) * | 1987-10-30 | 1995-01-24 | New Microtime Inc. | Real time video image processing system |
US5018017A (en) * | 1987-12-25 | 1991-05-21 | Kabushiki Kaisha Toshiba | Electronic still camera and image recording method thereof |
US5227837A (en) * | 1989-05-12 | 1993-07-13 | Fuji Photo Film Co., Ltd. | Photograph printing method |
US5111231A (en) * | 1989-07-27 | 1992-05-05 | Canon Kabushiki Kaisha | Camera system |
US5150432A (en) * | 1990-03-26 | 1992-09-22 | Kabushiki Kaisha Toshiba | Apparatus for encoding/decoding video signals to improve quality of a specific region |
US5280530A (en) * | 1990-09-07 | 1994-01-18 | U.S. Philips Corporation | Method and apparatus for tracking a moving object |
US6101271A (en) * | 1990-10-09 | 2000-08-08 | Matsushita Electrial Industrial Co., Ltd | Gradation correction method and device |
US5353058A (en) * | 1990-10-31 | 1994-10-04 | Canon Kabushiki Kaisha | Automatic exposure control apparatus |
US5493409A (en) * | 1990-11-29 | 1996-02-20 | Minolta Camera Kabushiki Kaisha | Still video camera having a printer capable of printing a photographed image in a plurality of printing modes |
US5305048A (en) * | 1991-02-12 | 1994-04-19 | Nikon Corporation | A photo taking apparatus capable of making a photograph with flash by a flash device |
US5638136A (en) * | 1992-01-13 | 1997-06-10 | Mitsubishi Denki Kabushiki Kaisha | Method and apparatus for detecting flesh tones in an image |
US5488429A (en) * | 1992-01-13 | 1996-01-30 | Mitsubishi Denki Kabushiki Kaisha | Video signal processor for detecting flesh tones in am image |
US5905807A (en) * | 1992-01-23 | 1999-05-18 | Matsushita Electric Industrial Co., Ltd. | Apparatus for extracting feature points from a facial image |
US5331544A (en) * | 1992-04-23 | 1994-07-19 | A. C. Nielsen Company | Market research method and system for collecting retail store and shopper market research data |
US5450504A (en) * | 1992-05-19 | 1995-09-12 | Calia; James | Method for finding a most likely matching of a target facial image in a data base of facial images |
US5680481A (en) * | 1992-05-26 | 1997-10-21 | Ricoh Corporation | Facial feature extraction method and apparatus for a neural network acoustic and visual speech recognition system |
US5430809A (en) * | 1992-07-10 | 1995-07-04 | Sony Corporation | Human face tracking system |
US5278923A (en) * | 1992-09-02 | 1994-01-11 | Harmonic Lightwaves, Inc. | Cascaded optical modulation system with high linearity |
US5311240A (en) * | 1992-11-03 | 1994-05-10 | Eastman Kodak Company | Technique suited for use in multi-zone autofocusing cameras for improving image quality for non-standard display sizes and/or different focal length photographing modes |
US5812193A (en) * | 1992-11-07 | 1998-09-22 | Sony Corporation | Video camera system which automatically follows subject changes |
US5771307A (en) * | 1992-12-15 | 1998-06-23 | Nielsen Media Research, Inc. | Audience measurement system and method |
US5706362A (en) * | 1993-03-31 | 1998-01-06 | Mitsubishi Denki Kabushiki Kaisha | Image tracking apparatus |
US5384615A (en) * | 1993-06-08 | 1995-01-24 | Industrial Technology Research Institute | Ambient depth-of-field simulation exposuring method |
US5432863A (en) * | 1993-07-19 | 1995-07-11 | Eastman Kodak Company | Automated detection and correction of eye color defects due to flash illumination |
US5748764A (en) * | 1993-07-19 | 1998-05-05 | Eastman Kodak Company | Automated detection and correction of eye color defects due to flash illumination |
US5745668A (en) * | 1993-08-27 | 1998-04-28 | Massachusetts Institute Of Technology | Example-based image analysis and synthesis using pixelwise correspondence |
US5781650A (en) * | 1994-02-18 | 1998-07-14 | University Of Central Florida | Automatic feature detection and age classification of human faces in digital images |
US5638139A (en) * | 1994-04-14 | 1997-06-10 | Texas Instruments Incorporated | Motion adaptive scan-rate conversion using directional edge interpolation |
US5774754A (en) * | 1994-04-26 | 1998-06-30 | Minolta Co., Ltd. | Camera capable of previewing a photographed image |
US5774747A (en) * | 1994-06-09 | 1998-06-30 | Fuji Photo Film Co., Ltd. | Method and apparatus for controlling exposure of camera |
US5652669A (en) * | 1994-08-12 | 1997-07-29 | U.S. Philips Corporation | Optical synchronization arrangement |
US5543952A (en) * | 1994-09-12 | 1996-08-06 | Nippon Telegraph And Telephone Corporation | Optical transmission system |
US5764790A (en) * | 1994-09-30 | 1998-06-09 | Istituto Trentino Di Cultura | Method of storing and retrieving images of people, for example, in photographic archives and for the construction of identikit images |
US5496106A (en) * | 1994-12-13 | 1996-03-05 | Apple Computer, Inc. | System and method for generating a contrast overlay as a focus assist for an imaging device |
US5724456A (en) * | 1995-03-31 | 1998-03-03 | Polaroid Corporation | Brightness adjustment of images using digital scene analysis |
US5870138A (en) * | 1995-03-31 | 1999-02-09 | Hitachi, Ltd. | Facial image processing |
US5710833A (en) * | 1995-04-20 | 1998-01-20 | Massachusetts Institute Of Technology | Detection, recognition and coding of complex objects using probabilistic eigenspace analysis |
US5774129A (en) * | 1995-06-07 | 1998-06-30 | Massachusetts Institute Of Technology | Image analysis and synthesis networks using shape and texture information |
US5715325A (en) * | 1995-08-30 | 1998-02-03 | Siemens Corporate Research, Inc. | Apparatus and method for detecting a face in a video image |
US5774591A (en) * | 1995-12-15 | 1998-06-30 | Xerox Corporation | Apparatus and method for recognizing facial expressions and facial gestures in a sequence of images |
US5633678A (en) * | 1995-12-20 | 1997-05-27 | Eastman Kodak Company | Electronic still camera for capturing and categorizing images |
US5911139A (en) * | 1996-03-29 | 1999-06-08 | Virage, Inc. | Visual image database search engine which allows for different schema |
US5764803A (en) * | 1996-04-03 | 1998-06-09 | Lucent Technologies Inc. | Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences |
US5802208A (en) * | 1996-05-06 | 1998-09-01 | Lucent Technologies Inc. | Face recognition using DCT-based feature vectors |
US6173068B1 (en) * | 1996-07-29 | 2001-01-09 | Mikos, Ltd. | Method and apparatus for recognizing and classifying individuals based on minutiae |
US6438234B1 (en) * | 1996-09-05 | 2002-08-20 | Swisscom Ag | Quantum cryptography device and method |
US6028960A (en) * | 1996-09-20 | 2000-02-22 | Lucent Technologies Inc. | Face feature analysis for automatic lipreading and character animation |
US5818975A (en) * | 1996-10-28 | 1998-10-06 | Eastman Kodak Company | Method and apparatus for area selective exposure adjustment |
US6184926B1 (en) * | 1996-11-26 | 2001-02-06 | Ncr Corporation | System and method for detecting a human face in uncontrolled environments |
US6526156B1 (en) * | 1997-01-10 | 2003-02-25 | Xerox Corporation | Apparatus and method for identifying and tracking objects with view-based representations |
US6053268A (en) * | 1997-01-23 | 2000-04-25 | Nissan Motor Co., Ltd. | Vehicle environment recognition system |
US6121953A (en) * | 1997-02-06 | 2000-09-19 | Modern Cartoons, Ltd. | Virtual reality system for sensing facial movements |
US6441854B2 (en) * | 1997-02-20 | 2002-08-27 | Eastman Kodak Company | Electronic camera with quick review of last captured image |
US6061055A (en) * | 1997-03-21 | 2000-05-09 | Autodesk, Inc. | Method of tracking objects with an imaging device |
US6249315B1 (en) * | 1997-03-24 | 2001-06-19 | Jack M. Holm | Strategy for pictorial digital image processing |
US6035074A (en) * | 1997-05-27 | 2000-03-07 | Sharp Kabushiki Kaisha | Image processing apparatus and storage medium therefor |
US6267939B1 (en) * | 1997-07-22 | 2001-07-31 | Huntsman Corporation Hungary Vegyipari Termelo-Fejleszto Reszvenytarsasag | Absorbent composition for purifying gases which contain acidic components |
US6445810B2 (en) * | 1997-08-01 | 2002-09-03 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
US6188777B1 (en) * | 1997-08-01 | 2001-02-13 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
US6072094A (en) * | 1997-08-06 | 2000-06-06 | Merck & Co., Inc. | Efficient synthesis of cyclopropylacetylene |
US6252976B1 (en) * | 1997-08-29 | 2001-06-26 | Eastman Kodak Company | Computer program product for redeye detection |
US5966549A (en) * | 1997-09-09 | 1999-10-12 | Minolta Co., Ltd. | Camera |
US5915980A (en) * | 1997-09-29 | 1999-06-29 | George M. Baldock | Wiring interconnection system |
US6407777B1 (en) * | 1997-10-09 | 2002-06-18 | Deluca Michael Joseph | Red-eye filter method and apparatus |
US6016354A (en) * | 1997-10-23 | 2000-01-18 | Hewlett-Packard Company | Apparatus and a method for reducing red-eye in a digital image |
US6549641B2 (en) * | 1997-10-30 | 2003-04-15 | Minolta Co., Inc. | Screen image observing device and method |
US6108437A (en) * | 1997-11-14 | 2000-08-22 | Seiko Epson Corporation | Face recognition apparatus, method, system and computer readable medium thereof |
US6246779B1 (en) * | 1997-12-12 | 2001-06-12 | Kabushiki Kaisha Toshiba | Gaze position detection apparatus and method |
US6246790B1 (en) * | 1997-12-29 | 2001-06-12 | Cornell Research Foundation, Inc. | Image indexing using color correlograms |
US6504942B1 (en) * | 1998-01-23 | 2003-01-07 | Sharp Kabushiki Kaisha | Method of and apparatus for detecting a face-like region and observer tracking display |
US6400830B1 (en) * | 1998-02-06 | 2002-06-04 | Compaq Computer Corporation | Technique for tracking objects through a series of images |
US6115052A (en) * | 1998-02-12 | 2000-09-05 | Mitsubishi Electric Information Technology Center America, Inc. (Ita) | System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence |
US6349373B2 (en) * | 1998-02-20 | 2002-02-19 | Eastman Kodak Company | Digital image management system having method for managing images according to image groups |
US6529630B1 (en) * | 1998-03-02 | 2003-03-04 | Fuji Photo Film Co., Ltd. | Method and device for extracting principal image subjects |
US6192149B1 (en) * | 1998-04-08 | 2001-02-20 | Xerox Corporation | Method and apparatus for automatic detection of image target gamma |
US6240198B1 (en) * | 1998-04-13 | 2001-05-29 | Compaq Computer Corporation | Method for figure tracking using 2-D registration |
US6097470A (en) * | 1998-05-28 | 2000-08-01 | Eastman Kodak Company | Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing |
US6404900B1 (en) * | 1998-06-22 | 2002-06-11 | Sharp Laboratories Of America, Inc. | Method for robust human face tracking in presence of multiple persons |
US6292575B1 (en) * | 1998-07-20 | 2001-09-18 | Lau Technologies | Real-time facial recognition and verification system |
US6456732B1 (en) * | 1998-09-11 | 2002-09-24 | Hewlett-Packard Company | Automatic rotation, cropping and scaling of images for printing |
US6351556B1 (en) * | 1998-11-20 | 2002-02-26 | Eastman Kodak Company | Method for automatically comparing content of images for classification into events |
US6263113B1 (en) * | 1998-12-11 | 2001-07-17 | Philips Electronics North America Corp. | Method for detecting a face in a digital image |
US6438264B1 (en) * | 1998-12-31 | 2002-08-20 | Eastman Kodak Company | Method for compensating image color when adjusting the contrast of a digital color image |
US6282317B1 (en) * | 1998-12-31 | 2001-08-28 | Eastman Kodak Company | Method for automatic determination of main subjects in photographic images |
US6421468B1 (en) * | 1999-01-06 | 2002-07-16 | Seiko Epson Corporation | Method and apparatus for sharpening an image by scaling elements of a frequency-domain representation |
US6393148B1 (en) * | 1999-05-13 | 2002-05-21 | Hewlett-Packard Company | Contrast enhancement of an image using luminance and RGB statistical metrics |
US6526161B1 (en) * | 1999-08-30 | 2003-02-25 | Koninklijke Philips Electronics N.V. | System and method for biometrics-based facial feature extraction |
US6504951B1 (en) * | 1999-11-29 | 2003-01-07 | Eastman Kodak Company | Method for detecting sky in images |
US6516154B1 (en) * | 2001-07-17 | 2003-02-04 | Eastman Kodak Company | Image revising camera and method |
US7689009B2 (en) * | 2005-11-18 | 2010-03-30 | Fotonation Vision Ltd. | Two stage detection for photographic eye artifacts |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8330831B2 (en) | 2003-08-05 | 2012-12-11 | DigitalOptics Corporation Europe Limited | Method of gathering visual meta data using a reference image |
US8682097B2 (en) | 2006-02-14 | 2014-03-25 | DigitalOptics Corporation Europe Limited | Digital image enhancement with reference images |
US20130057926A1 (en) * | 2006-10-17 | 2013-03-07 | Samsung Electronics Co., Ltd | Image compensation in regions of low image contrast |
US8896725B2 (en) | 2007-06-21 | 2014-11-25 | Fotonation Limited | Image capture device with contemporaneous reference image capture mechanism |
US9767539B2 (en) | 2007-06-21 | 2017-09-19 | Fotonation Limited | Image capture device with contemporaneous image correction mechanism |
US10733472B2 (en) | 2007-06-21 | 2020-08-04 | Fotonation Limited | Image capture device with contemporaneous image correction mechanism |
US20090196466A1 (en) * | 2008-02-05 | 2009-08-06 | Fotonation Vision Limited | Face Detection in Mid-Shot Digital Images |
US8494286B2 (en) | 2008-02-05 | 2013-07-23 | DigitalOptics Corporation Europe Limited | Face detection in mid-shot digital images |
US20130259322A1 (en) * | 2012-03-31 | 2013-10-03 | Xiao Lin | System And Method For Iris Image Analysis |
Also Published As
Publication number | Publication date |
---|---|
US20110221936A1 (en) | 2011-09-15 |
US20080317339A1 (en) | 2008-12-25 |
US8135184B2 (en) | 2012-03-13 |
US8320641B2 (en) | 2012-11-27 |
US7953251B1 (en) | 2011-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7953251B1 (en) | Method and apparatus for detection and correction of flash-induced eye defects within digital images using preview or other reference images | |
US7587085B2 (en) | Method and apparatus for red-eye detection in an acquired digital image | |
US8254674B2 (en) | Analyzing partial face regions for red-eye detection in acquired digital images | |
US20060093238A1 (en) | Method and apparatus for red-eye detection in an acquired digital image using face recognition | |
US7436998B2 (en) | Method and apparatus for red-eye detection in an acquired digital image based on image quality pre and post filtering | |
US10733472B2 (en) | Image capture device with contemporaneous image correction mechanism | |
EP1800259B1 (en) | Image segmentation method and system | |
JP4267688B2 (en) | Two-stage detection of photo eye artifacts | |
US8593542B2 (en) | Foreground/background separation using reference images | |
US8170350B2 (en) | Foreground/background segmentation in digital images | |
US8498496B2 (en) | Method and apparatus for filtering red and/or golden eye artifacts | |
M Corcoran et al. | Advances in the detection & repair of flash-eye defects in digital images-a review of recent patents | |
IES84135Y1 (en) | Method and apparatus for red-eye detection in an acquired digital image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: FOTONATION IRELAND LIMITED, IRELAND. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: STEINBERG, ERAN; BIGIOI, PETRONEL; CORCORAN, PETER; AND OTHERS; SIGNING DATES FROM 20080807 TO 20080904; REEL/FRAME: 025708/0186. Owner name: TESSERA TECHNOLOGIES IRELAND LIMITED, IRELAND. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: FOTONATION IRELAND LIMITED; REEL/FRAME: 025708/0425. Effective date: 20100531 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: DIGITALOPTICS CORPORATION EUROPE LIMITED, IRELAND. Free format text: CHANGE OF NAME; ASSIGNOR: TESSERA TECHNOLOGIES IRELAND LIMITED; REEL/FRAME: 027164/0840. Effective date: 20110713 |
| FPAY | Fee payment | Year of fee payment: 4 |
| AS | Assignment | Owner name: FOTONATION LIMITED, IRELAND. Free format text: CHANGE OF NAME; ASSIGNOR: DIGITALOPTICS CORPORATION EUROPE LIMITED; REEL/FRAME: 034524/0693. Effective date: 20140609 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 12 |
| AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA. Free format text: SECURITY INTEREST; ASSIGNORS: ADEIA GUIDES INC.; ADEIA IMAGING LLC; ADEIA MEDIA HOLDINGS LLC; AND OTHERS; REEL/FRAME: 063529/0272. Effective date: 20230501 |