US20180075586A1 - Ghost artifact removal system and method - Google Patents


Info

Publication number
US20180075586A1
Authority
US
United States
Prior art keywords
image
blocks
mask
absolute differences
exposure
Prior art date
Legal status
Granted
Application number
US15/261,819
Other versions
US9916644B1
Inventor
Sarvesh Swami
Donghui Wu
Timofey Uvarov
Current Assignee
Omnivision Technologies Inc
Original Assignee
Omnivision Technologies Inc
Priority date
Filing date
Publication date
Application filed by Omnivision Technologies Inc
Priority to US15/261,819 (granted as US9916644B1)
Assigned to OMNIVISION TECHNOLOGIES, INC. Assignors: WU, DONGHUI; SWAMI, SARVESH; UVAROV, TIMOFEY
Priority to TW106128907A (patent TWI658731B)
Priority to CN201710749433.0A (patent CN107809602B)
Application granted
Publication of US9916644B1
Publication of US20180075586A1
Legal status: Active


Classifications

    • G06T 5/004: Unsharp masking (image enhancement or restoration; deblurring; sharpening)
    • G06T 7/194: Segmentation involving foreground-background segmentation
    • H04N 25/61: Noise processing where the noise originates only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • G06T 5/75
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/11: Region-based segmentation
    • G06T 7/136: Segmentation involving thresholding
    • H04N 23/672: Focus control based on phase-difference signals from the electronic image sensor
    • H04N 23/741: Increasing the dynamic range of the image by compensating brightness variation in the scene
    • H04N 23/80: Camera processing pipelines; components thereof
    • H04N 25/589: Control of the dynamic range involving two or more exposures acquired sequentially with different integration times, e.g. short and long exposures
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20208: High dynamic range [HDR] image processing
    • G06T 2207/20221: Image fusion; image merging
    • H04N 25/11: Arrangement of colour filter arrays [CFA]; filter mosaics

Abstract

A method for removing a ghost artifact from a multiple-exposure image of a scene includes steps of generating and segmenting a difference mask, determining a lower threshold and an upper threshold, generating a refined mask, and generating a corrected image. The difference mask includes a plurality of absolute differences in luminance values between the multiple-exposure image and a first image of the scene. The segmenting step involves segmenting the difference mask into a plurality of blocks. The lower and upper thresholds are based on statistical properties of the blocks. The method generates the refined mask by mapping each absolute difference to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold. The corrected image is a weighted sum of the first image and the multiple-exposure image, with weights based on the refined mask.

Description

    BACKGROUND
  • Many consumer electronics products include at least one camera. These include tablet computers, mobile phones, and smart watches. In such products, and in digital still cameras themselves, high-dynamic-range (HDR) functionality enables consumers to produce images of scenes having a larger dynamic range of luminosity than is possible with cameras lacking such functionality.
  • For example, FIG. 1 depicts a camera 130 imaging a scene 120 having a high dynamic range of luminance. Scene 120 includes a person 121 in front of a window 122, through which a sunny scene 123 is visible. Camera 130 includes an imaging lens (not shown), an image sensor 132, a memory 110, and a microprocessor 140 communicatively coupled to the image sensor. Image sensor 132 includes a pixel array 134A and may include a color filter array (CFA) 136 thereon. Pixel array 134A includes a plurality of pixels 134, not shown in FIG. 1 for clarity of illustration. Each color filter of CFA 136 is aligned with a respective pixel 134 of pixel array 134A. The imaging lens images scene 120 onto image sensor 132. Image sensor 132 also includes circuitry 138 that includes at least one analog-to-digital converter.
  • Indoor lighting, not shown, illuminates the front of person 121 facing the camera, while sunlight illuminates sunny scene 123. In scene 120, person 121 and sunny scene 123 have respective luminosities 121L and 123L, not shown in FIG. 1. Since the sunlight is significantly brighter than the indoor lighting, luminosity 123L far exceeds luminosity 121L such that scene 120 has a high dynamic range of luminosity. Standard digital imaging enables capture of scene 120 using a single exposure time optimized for either luminosity 121L or 123L. When the exposure time is optimized for luminosity 121L, person 121 is properly exposed while sunny scene 123 is overexposed. When the exposure time is optimized for luminosity 123L, sunny scene 123 is properly exposed while person 121 is underexposed.
  • With HDR imaging, camera 130 captures multiple images of scene 120, each with a different exposure time, and stores them in memory 110. Microprocessor 140 processes the multiple images to form a composite HDR image 190. HDR images are prone to image artifacts resulting from movement, between capture of the multiple images, of either objects in scene 120 or of camera 130 itself. The artifacts, known as “ghosts,” appear as semi-transparent copies of a moving object trailing behind it. For example, HDR image 190 includes ghost artifacts 194 of the right hand of person 121.
  • SUMMARY OF THE INVENTION
  • The embodiments disclosed herein enable removal of ghost-artifacts from HDR images.
  • In an embodiment, a method for removing a ghost artifact from a multiple-exposure image of a scene is disclosed. The method includes steps of generating and segmenting a difference mask, determining a lower threshold and an upper threshold, generating a refined mask, and generating a corrected image. The difference mask includes a plurality of absolute differences between luminance values of the multiple-exposure image and luminance values of a first image of the scene. Each absolute difference corresponds to one of a respective plurality of pixel locations of the multiple-exposure image. In the segmenting step, the method segments the difference mask into a plurality of blocks. The lower and upper thresholds are both based on statistical properties of the plurality of blocks. The method generates the refined mask by mapping each of the plurality of absolute differences to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold. The corrected image is a weighted sum of the first image and the multiple-exposure image. The weights of the weighted sum are based on the refined mask.
  • In an embodiment, a ghost-artifact remover is disclosed for removing a ghost artifact from a multiple-exposure image of a scene. The ghost-artifact remover includes a memory and a microprocessor. The memory stores non-transitory computer-readable instructions and is adapted to store the multiple-exposure image. The microprocessor is adapted to execute the instructions to perform the steps of the above-disclosed method.
  • In an embodiment, a method is disclosed for determining an optimal block count into which to segment a difference mask generated from a difference of two images captured with a same image sensor. For each of a plurality of gray cards each having a different uniform reflectance, the method (1) captures a respective gray-card image, having a plurality of pixel values corresponding to a plurality of sensor pixels, by imaging the respective gray card onto the image sensor, (2) determines, from the plurality of pixel values, an average pixel value and a variance therefrom, and (3) determines a local-optimum sample size as a function of the average pixel value and the variance. The method determines a global sample size as a statistical average of the plurality of local-optimum sample sizes. The method also determines the optimal block count as an integer proximate a quotient of (a) a total number of the plurality of sensor pixels and (b) the global sample size.
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 depicts a camera imaging a scene having a high dynamic range of luminance.
  • FIG. 2 shows an embodiment of a ghost-artifact remover that may be implemented within the camera of FIG. 1.
  • FIG. 3 depicts a color filter array (CFA), which is an example of the CFA of the camera of FIG. 1.
  • FIGS. 4A and 4B depict a single-exposure image and a multiple-exposure image, respectively, of a same scene.
  • FIG. 5 depicts a difference mask that is an absolute difference between luminance value sets based on the images of FIGS. 4A and 4B.
  • FIG. 6 depicts a combined image that is a weighted sum of images of FIGS. 4A and 4B, and the difference mask of FIG. 5.
  • FIG. 7 depicts a thresholded mask generated by applying a single thresholding operation to the difference mask of FIG. 5.
  • FIG. 8 depicts a combined image that is a weighted sum of the images of FIGS. 4A and 4B, and the thresholded mask of FIG. 7.
  • FIG. 9 is a schematic pixel-value histogram of the difference mask of FIG. 5.
  • FIG. 10 depicts a dually-thresholded difference mask generated from difference mask of FIG. 5, in an embodiment.
  • FIG. 11 depicts a corrected image that is a weighted sum of the images of FIGS. 4A and 4B, and the dually-thresholded difference mask of FIG. 10.
  • FIG. 12 depicts an embodiment of a segmented difference mask, which is difference mask 500 segmented into a plurality of blocks.
  • FIG. 13 is a flowchart illustrating a method for determining an optimal block count into which to segment a difference mask, in an embodiment.
  • FIG. 14 depicts a schematic illustrating an implementation of the method of FIG. 13.
  • FIG. 15 is a flowchart illustrating a method for removing a ghost artifact from a multiple-exposure image of a scene, in an embodiment.
  • FIG. 16 is a flowchart illustrating optional steps of the method of FIG. 15 related to determining a lower threshold, in an embodiment.
  • FIG. 17 is a flowchart illustrating optional steps of the method of FIG. 15 related to determining an upper threshold, in an embodiment.
  • FIGS. 18A and 18B depict HDR images each formed from combining a same single-exposure image, a same multiple-exposure image, and different difference masks.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • FIG. 2 shows a ghost-artifact remover 200 that combines an image 201 and a multiple-exposure image 202 to generate a corrected image 238. Ghost-artifact remover 200 may be implemented within camera 130. Images 201 and 202 are of the same scene. Image 201 is, for example, a single-exposure image.
  • Ghost-artifact remover 200 includes a microprocessor 240 and a memory 210 that stores software 220 that includes machine-readable instructions. Microprocessor 240 may be a digital signal processor such as an image processor. Memory 210 may include one or both of volatile memory (e.g., SRAM, DRAM, or any combination thereof) and nonvolatile memory (e.g., FLASH, ROM, magnetic media, optical media, or any combination thereof).
  • Memory 210 and microprocessor 240 may function as memory 110 and microprocessor 140, respectively, of camera 130, FIG. 1. Microprocessor 240 is adapted to execute the instructions to perform functions of ghost-artifact remover 200 as described herein. Software 220 includes the following software modules: a luminance value generator 221, a mask generator 222, an image segmenter 224, a mask thresholder 226, a mask mapper 227, and an image fuser 228. Memory 210 is also shown storing one or more of image 201, multiple-exposure image 202, a first luminance value set 231A, a second luminance value set 231B, a difference mask 232A, and a refined mask 232B. Memory 210 also includes calibration data 234 for use by image segmenter 224. Software 220 may also include a calibrator 229 for generating calibration data 234. Memory 210 may store images 201 and 202 in either an image file format, such as JPEG and TIFF, or a raw image format, such as TIFF/EP and Digital Negative (DNG).
  • FIG. 3 depicts a CFA 336, which is an example of CFA 136 of camera 130. CFA 336 includes an interleaved array of color filter cells 301, 302, 303, and 304. Each color filter cell 301-304 is a two-by-two array of color filters, such as a Bayer cell 311, such that CFA 336 is a Bayer array. Each Bayer cell 311 includes one red color filter (“R”), two green color filters (“G”), and one blue color filter (“B”). While color filter cells 301-304 are structurally identical, they are differentiated herein because, as discussed below, image sensor pixels beneath each filter cell 301-304 have different exposure times when capturing a multiple-exposure image 202. Herein, a red pixel, a green pixel, and a blue pixel denote image sensor pixels aligned beneath a red color filter, a green color filter, and a blue color filter, respectively.
  • FIGS. 4A and 4B depict a single-exposure image 401 and a multiple-exposure image 402, respectively of a same scene captured by camera 130 that includes CFA 336 of FIG. 3. Images 401 and 402 are examples of single-exposure image 201 and a multiple-exposure image 202, respectively. Single-exposure image 401 results from camera 130 capturing the scene with pixels 134 beneath each color filter cell 301-304 having the same exposure time t401.
  • Multiple-exposure image 402 results from camera 130 capturing the scene with pixels beneath color filter cells 301-304 having respective exposure times tA, tB, tC, and tD. Consequently, multiple-exposure image 402 is an interleaved composite of four single-exposure images having different exposure times, which enables multiple-exposure image 402 to have a higher dynamic range than single-exposure image 401. The first, second, third, and fourth single-exposure images are captured by pixels 134 beneath color filter cells 301-304, respectively. Accordingly, the first, second, third, and fourth single-exposure images have lower resolution than single-exposure image 401. Hence, while multiple-exposure image 402 has a larger dynamic range than single-exposure image 401, it also has a lower resolution.
  • Multiple-exposure image 402 includes an artifact 403 that is not present in single-exposure image 401. Artifact 403 is an image of a person, which the Applicant intentionally located in the field of view of camera 130 when capturing multiple-exposure image 402 for purposes of illustrating the ghost-removal method of the present invention.
  • FIG. 5 depicts a difference mask 500 that is an absolute difference between respective luminance values of single-exposure image 401 and multiple-exposure image 402. Luminance values of single-exposure image 401 and multiple-exposure image 402 are examples of first luminance value set 231A and second luminance value set 231B, respectively. Difference mask 500 is an example of difference mask 232A stored in memory 210 of ghost-artifact remover 200, FIG. 2. The luminance values used to generate difference mask 500 are based on the following relationship between a luminance value Y and pixel values R, G, and B of the red, green, and blue sensor pixels used to capture images 401 and 402: Y = 0.30R + 0.59G + 0.11B. Without departing from the scope hereof, the coefficients of R, G, and B used to determine luminance value Y may vary from those presented. White regions of difference mask 500 denote minimum luminance differences, while black regions of difference mask 500 denote maximum luminance differences between images 401 and 402.
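The luminance relationship and the difference-mask computation above can be sketched as follows. This is an illustrative NumPy sketch; the function names and the assumption that images arrive as H×W×3 RGB arrays are not from the patent.

```python
import numpy as np

def luminance(rgb):
    """Y = 0.30 R + 0.59 G + 0.11 B, per the relationship above.
    `rgb` is assumed to be an H x W x 3 array of R, G, B planes."""
    return 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

def difference_mask(img_a, img_b):
    """Per-pixel absolute difference between the two images' luminance values."""
    return np.abs(luminance(img_a) - luminance(img_b))
```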
  • In the above example, luminance value sets 231A,B are generated from R, G, and B values of images 401 and 402. Images 401 and 402 result from demosaicing “raw” sensor pixel values from image sensor 132. Alternatively, luminance value sets 231A,B may be generated directly from raw sensor pixel values from image sensor 132, that is, independently of a demosaicing process. For example, when CFA 136 is a Bayer pattern, the raw sensor pixel values from image sensor 132 include pixel values corresponding to red, green, and blue pixels. Luminance value sets 231A,B may be generated from these pixel values and be independent of the demosaicing used to generate images 201 and 202.
  • FIG. 6 depicts a combined image 600, which is a weighted sum of single-exposure image 401, multiple-exposure image 402, and difference mask 500. By including both images 401 and 402, combined image 600 has both the high resolution of single-exposure image 401 and the high dynamic range of multiple-exposure image 402. Equation (1) is a mathematical representation of combined image 600, where data arrays M500, I401, I402, and I600 represent difference mask 500 and images 401, 402, and 600, respectively.

  • I600 = I401 (1 − M500) + I402 M500    Eq. (1)
  • Combined image 600 includes a ghost artifact 603, which is the remainder of artifact 403 of multiple-exposure image 402 after multiplication by difference mask 500 in Eq. (1) to yield combined image 600.
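The weighted combination of Eq. (1) can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the function name is an assumption, and the mask is assumed to hold weights in [0, 1].

```python
import numpy as np

def fuse(i_first, i_multi, mask):
    """Eq. (1): I = I_first * (1 - M) + I_multi * M.
    Where mask = 1 the multiple-exposure image dominates;
    where mask = 0 the first (single-exposure) image dominates."""
    return i_first * (1.0 - mask) + i_multi * mask
```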
  • Exposure time t401 of single-exposure image 401 is at least approximately equal to one of exposure times tA, tB, tC, and tD of multiple-exposure image 402. For example, exposure time t401 is within five percent of exposure time tA. Such similarity of exposure times enables optimal combination of images 401 and 402, as in Eq. (1) and similar equations (2) and (4) below.
  • Conventional methods replace the difference mask with a single-threshold mask to prevent ghost artifact 603 from appearing in combined image 600. FIG. 7 depicts a thresholded mask 700 generated by applying a single thresholding operation to difference mask 500. The single thresholding operation sets any pixel value below a threshold to zero (displayed as white in FIG. 7), such that thresholded mask 700 is difference mask 500 with selected noise removed.
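The single thresholding operation described above can be sketched as follows (an illustrative NumPy sketch; the function name is an assumption):

```python
import numpy as np

def single_threshold(diff, t):
    """Conventional single-threshold operation: set every absolute
    difference below threshold t to zero, leaving the rest unchanged."""
    out = diff.copy()
    out[out < t] = 0.0
    return out
```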
  • FIG. 8 depicts a combined image 800, which is a weighted sum of single-exposure image 401, multiple-exposure image 402, and thresholded mask 700. Equation (2) is a mathematical representation of combined image 800, where data arrays M700 and I800 represent thresholded mask 700 and combined image 800, respectively.

  • I800 = I401 (1 − M700) + I402 M700    Eq. (2)
  • Combined image 800 is cropped to emphasize the presence of a ghost artifact 803 therein, which is the remainder of artifact 403 of multiple-exposure image 402 after multiplication by thresholded mask 700 in Eq. (2) to yield combined image 800. The existence of ghost artifact 803 demonstrates a shortcoming of thresholded mask 700. An improved mask would eliminate most or all of artifact 403 and hence prevent ghost artifacts such as ghost artifact 603.
Applicant has determined that such an improved mask results from applying two threshold operations to difference mask 500. FIG. 9 is a schematic pixel-value histogram 900 of difference mask 500. As pixel values within difference mask 500 represent absolute differences between the luminance values used to generate it, pixel values of histogram 900 represent absolute differences. As discussed above, thresholded mask 700 sets the pixel values of all pixels having pixel values below a lower threshold, such as threshold 711, to zero.
  • By contrast, a dually-thresholded difference mask of the present invention imposes two threshold operations on difference mask 500. FIG. 10 depicts a dually-thresholded difference mask 1000 generated from difference mask 500. A first thresholding operation on difference mask 500 sets all pixels thereof having pixel values below a lower threshold 911 to zero. A second thresholding operation on difference mask 500 sets all pixels thereof having pixel values above an upper threshold 919 to zero. The first and second thresholding operations may be combined into a single thresholding operation. FIG. 9 shows lower threshold 911 to be higher than threshold 711 for illustrative purposes only.
  • The remaining, non-thresholded, absolute differences in difference mask 500 constitute an intermediate mask 510 (not shown) denoted by data array M510. Absolute differences of intermediate mask 510 range between and including lower threshold 911 and upper threshold 919, and hence have a smaller range than difference mask 500. Accordingly, mask mapper 227 maps each absolute difference of intermediate mask 510 to a value between (and optionally including) zero and one to yield dually-thresholded difference mask 1000, as shown in Eq. (3). In Eq. (3), LT911 and UT919 denote lower threshold 911 and upper threshold 919 respectively.

  • M1000 = [(M510 − LT911) / (UT919 − LT911)]^a    Eq. (3)
  • While exponent a equals one in the example of dually-thresholded difference mask 1000, exponent a may exceed one without departing from the scope hereof.
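The two thresholding operations and the Eq. (3) mapping can be sketched together as follows. This is an illustrative NumPy sketch under the patent's description: absolute differences below the lower threshold or above the upper threshold go to zero, and the remainder is mapped into [0, 1]; the function name is an assumption.

```python
import numpy as np

def refine_mask(diff, lower, upper, a=1.0):
    """Dually-thresholded mask per Eq. (3):
    M = ((d - lower) / (upper - lower)) ** a for lower <= d <= upper,
    and M = 0 elsewhere."""
    refined = np.zeros_like(diff, dtype=float)
    keep = (diff >= lower) & (diff <= upper)
    refined[keep] = ((diff[keep] - lower) / (upper - lower)) ** a
    return refined
```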
  • FIG. 11 depicts a corrected image 1100, which is a weighted sum of single-exposure image 401, multiple-exposure image 402, and dually-thresholded difference mask 1000. Equation (4) is a mathematical representation of corrected image 1100, where data arrays M1000 and I1100 represent dually-thresholded difference mask 1000 and corrected image 1100, respectively. Ghost-artifact remover 200 implements Eq. (4) via execution of image fuser 228.

  • I1100 = I401 (1 − M1000) + I402 M1000    Eq. (4)
  • Corrected image 1100 is cropped to emphasize the absence of a ghost artifact, such as ghost artifact 803 of combined image 800 generated from thresholded mask 700.
  • FIG. 12 depicts a segmented difference mask 1200, which is difference mask 500 segmented into a plurality of blocks 1202. In this example, the plurality of blocks 1202 includes nine blocks 1202(1-9). Segmenting difference mask 500 is a means of determining lower threshold 911 and upper threshold 919 used to generate dually-thresholded difference mask 1000. FIG. 12 includes statistics of pixel values (absolute differences) for each block 1202(1-9). As pixel values within each block 1202 represent absolute differences between the luminance values used to generate difference mask 500, pixel values of a difference mask are also referred to herein as absolute differences.
  • While segmented difference mask 1200 includes nine blocks 1202, a segmented image mask may include fewer or more blocks without departing from the scope hereof. For example, for the image sensor used to capture images 401 and 402, applicant has determined that an optimal block size occupies between eight percent and fifteen percent of the image size. These percentages correspond to approximately twelve blocks and seven blocks, respectively. Accordingly, the nine blocks of segmented difference mask 1200 are a suitable choice because they enable a square array of equally-sized and equally-oriented blocks.
  • While blocks 1202 of segmented difference mask 1200 are arranged in a square array (three-by-three in this case), blocks 1202 may be arranged in a non-square array, such as a one-by-nine array, without departing from the scope hereof. For example, in imaging applications where motion is likely to be confined to horizontal or vertical bands, blocks of a segmented image mask may be shaped accordingly.
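The segmentation into blocks and the per-block statistics shown in FIG. 12 can be sketched as follows. This is an illustrative NumPy sketch; the function name, the equal-sized tiling, and the (mean, variance, max) tuple layout are assumptions for illustration.

```python
import numpy as np

def block_stats(mask, rows=3, cols=3):
    """Segment a difference mask into a rows x cols array of blocks and
    return (mean, variance, max) of the absolute differences per block,
    in row-major order."""
    h, w = mask.shape
    stats = []
    for i in range(rows):
        for j in range(cols):
            blk = mask[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            stats.append((blk.mean(), blk.var(), blk.max()))
    return stats
```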
Applicant determined lower threshold 911 by averaging a top quantile of pixel values of blocks 1202 that have a variance σ in a range σmin ≤ σ ≤ σmax. In the example of determining lower threshold 911, such “noisy blocks” correspond to blocks 1202(2-5, 7-9), σmin = 0.01 and σmax = 200, and the top quantile of pixel values in blocks 1202(2-5, 7-9) corresponds to the maximum pixel value shown in FIG. 12. Hence, lower threshold 911 is

  • LT911 = (20 + 8.2 + 10 + 12 + 60 + 5 + 3)/7 ≈ 17.
  • Blocks 1202 that lack moving objects contain only noise, and their variance is lower than that of the blocks 1202 with moving objects. In this example, the low-variance or “noisy” blocks are blocks 1202(2-5, 7-9), while blocks 1202(1,6) include moving objects. In block 1202(1), the variance results from motion of the tree. In block 1202(6), the variance results from motion (or in this case, appearance) of artifact 403.
  • Applicant determined upper threshold 919 as an average of pixel values in blocks 1202 that have a mean pixel value exceeding lower threshold 911. In this example, such pixel blocks correspond to blocks 1202(1,6), such that upper threshold 919 is the average of respective average pixel values of blocks 1202(1,6): UT919=½(20.7+25.3)=23. Alternatively, upper threshold 919 may be computed simply as the average of pixel values in blocks 1202(1,6), which, when each block 1202 includes the same number of pixels, yields the same value as averaging respective average pixel values of individual blocks 1202(1,6).
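The threshold derivation above can be sketched as follows, operating on per-block (mean, variance, max) tuples such as those of FIG. 12. This is an illustrative sketch: it takes each block's maximum as the "top quantile" of the lower-threshold computation, as in the example, and the function name is an assumption.

```python
def thresholds(stats, var_min=0.01, var_max=200.0):
    """Lower threshold: average of the top (here, maximum) absolute
    difference of each low-variance "noisy" block, i.e. blocks with
    var_min <= variance <= var_max. Upper threshold: average of the means
    of the blocks whose mean exceeds the lower threshold."""
    noisy_tops = [top for (mean, var, top) in stats if var_min <= var <= var_max]
    lower = sum(noisy_tops) / len(noisy_tops)
    motion_means = [mean for (mean, var, top) in stats if mean > lower]
    upper = sum(motion_means) / len(motion_means)
    return lower, upper
```

Fed the FIG. 12 example numbers, this reproduces LT911 ≈ 17 and UT919 = 23.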
  • The effectiveness of dually-thresholded difference mask 1000 at preventing ghost artifacts in a combined image depends on optimal determination of thresholds 911 and 919, which in turn depends on optimal segmentation of a difference mask, such as difference mask 500. In difference mask 500, some absolute differences between single-exposure image 401 and multiple-exposure image 402 result from noise, while other absolute differences result from moving objects, such as artifact 403. Ideally, lower threshold 911 is determined to remove only noise-borne absolute differences, while upper threshold 919 is determined to remove only motion-borne absolute differences.
  • When blocks 1202 are too small, thresholding is sub-optimal because of false-positive errors in motion detection. That is, upper threshold 919 is too low such that, in addition to removing absolute differences corresponding to motion, the resulting dually-thresholded difference mask would also remove absolute differences not corresponding to motion. When blocks 1202 are too large, thresholding is sub-optimal because of false-negative errors in motion detection. That is, lower threshold 911 is too low such that non-thresholded noise-borne absolute differences impede thresholding of motion-borne absolute differences.
  • Optimal segmentation of a difference mask depends on the image sensor used to capture the images from which the difference mask was generated. For example, optimal segmentation of difference mask 500 depends on image sensor 132 used to capture images 401 and 402.
  • FIG. 13 is a flowchart illustrating a method 1300 for determining an optimal block count into which to segment a difference mask generated from a difference of two images captured with a same image sensor. FIG. 14 depicts a schematic 1400 illustrating an implementation of method 1300. Schematic 1400 includes a plurality of gray cards 1402(1, 2, . . . , N), camera 130, a memory 1410, and a processor 1440. Each gray card 1402 has a different uniform reflectance. Memory 1410 includes calibration software 1429, which is an example of calibrator 229, FIG. 2. Memory 1410 and processor 1440 are, for example, memory 210 and microprocessor 240 of ghost-artifact remover 200. Alternatively, memory 1410 and processor 1440 are part of a separate image-processing device, such as a personal computer. Method 1300 is, for example, implemented by processor 1440 executing calibration software 1429. FIGS. 13 and 14 are best viewed together in the following description.
  • Method 1300 includes steps 1310, 1320, and 1330, which are performed for each of a plurality of gray cards each having a different uniform reflectance. For example, steps 1310, 1320, and 1330 are performed for each gray card 1402.
  • In step 1310, method 1300 captures a respective gray-card image, having a plurality of pixel values corresponding to a plurality of sensor pixels, by imaging the respective gray card onto the image sensor. In an example of step 1310, camera 130 images one gray card 1402 onto image sensor 132 to produce a gray-card image 1411 having a plurality of pixel values 1412. Gray-card image 1411 is stored in memory 1410.
  • Method 1300 may include an optional step 1315, in which the gray-card image is at least one of (a) cropped and (b) modified to remove artifacts. In an example of step 1315, the gray-card image of gray card 1402(2) is cropped to remove portions thereof outside of a region 1403. Artifacts may be associated with the imaging lens of camera 130 used to image gray card 1402.
  • In step 1320, method 1300 determines, from the plurality of pixel values, an average pixel value and a variance therefrom. In an example of step 1320, calibration software 1429 determines mean pixel values 1421 and pixel-value variances 1422 corresponding to respective gray-card images 1411. Herein, mean pixel values 1421(1, 2, . . . , N) are also denoted by μ1, μ2, . . . , μN, respectively. Similarly, pixel-value variances 1422(1, 2, . . . , N) are also denoted by σ1², σ2², . . . , σN², respectively, where σi is the standard deviation of pixel values 1412(i).
  • In step 1330, method 1300 determines a local-optimum sample size as a function of the average and the variance. Step 1330 may include step 1332, in which method 1300 determines the local-optimum sample size that is proportional to a ratio of the variance to the average. In an example of step 1332, calibration software 1429 determines a pixel sample size 1430(i) corresponding to each gray-card image 1411(i). Herein, pixel sample size 1430(i) is also denoted by ni.
  • Pixel sample size 1430(i) corresponds to the number of pixel values 1412(i) required to compute a sample-average pixel value x̃i that deviates from the corresponding mean pixel value μi by less than a predetermined error W corresponding to a 95% confidence interval. For a Gaussian-distributed sample of image-sensor pixels with sample size ni, the standard deviation of the average pixel value x̃i is σ_x̃i = σi/√ni, and the 95% confidence interval is x̃i ± 2σi/√ni, such that the total error is W = 4σi/√ni. Total error W may be expressed as a product of a tolerance ε with respect to mean pixel value μi, W = εμi, where ε = 0.01, for example. Tolerance ε may be stored as a tolerance 1428 in memory 1410. Hence, pixel sample size 1430(i) may be expressed by Eq. (5):
  • ni = (4σi/(εμi))²   Eq. (5)
  • Equation (5) shows that pixel sample size 1430(i) increases with the squared ratio (σi/μi)² of pixel-value standard deviation to mean pixel value.
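Under the stated Gaussian assumption, Eq. (5) can be evaluated directly from the pixel values of one gray-card image. The sketch below is illustrative only; the function name is an assumption, and the default tolerance matches the ε = 0.01 example in the text.

```python
import numpy as np

def local_sample_size(pixel_values, tolerance=0.01):
    """Eq. (5): n_i = (4*sigma_i / (eps*mu_i))**2, rounded up to an integer.

    pixel_values: flat array of pixel values 1412(i) from one gray-card image.
    tolerance:    eps, the allowed relative error of the sample average
                  at ~95% confidence (0.01 in the patent's example).
    """
    mu = pixel_values.mean()        # mean pixel value, mu_i
    sigma = pixel_values.std()      # standard deviation, sigma_i
    return int(np.ceil((4.0 * sigma / (tolerance * mu)) ** 2))
```

For instance, a card whose pixels have mean 128 and standard deviation 3.2 gives ni = (4·3.2/(0.01·128))² = 100 pixels.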
  • In step 1340, method 1300 determines a global sample size as a statistical average of the plurality of local-optimum sample sizes. In an example of step 1340, calibration software 1429 determines a global sample size 1442. Global sample size 1442 may be a straight or weighted average of pixel sample sizes 1430.
  • In step 1350, method 1300 determines the optimal block count as an integer proximate a quotient of (a) a total number of the plurality of sensor pixels and (b) the global sample size. Integer N is, for example, the nearest integer greater than or less than the quotient. Pixel array 134A of camera 130 includes a total number of pixels M134. In an example of step 1350, calibration software 1429 determines block count 1450 as an integer proximate the quotient of M134 and global sample size 1442, that is, M134 divided by global sample size 1442. In the example of segmented difference mask 1200, FIG. 12, block count 1450 equals nine.
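Steps 1340 and 1350 reduce the per-card sample sizes to a single block count. A minimal sketch, assuming an unweighted mean for the "statistical average" (the text also permits a weighted combination) and nearest-integer rounding for "proximate":

```python
import numpy as np

def optimal_block_count(sample_sizes, total_pixels):
    """Step 1340: average the local-optimum sample sizes n_1..n_N.
    Step 1350: block count ~= total_pixels / global sample size."""
    global_size = float(np.mean(sample_sizes))   # statistical average
    return int(round(total_pixels / global_size))
```

For example, per-card sample sizes averaging 230,400 pixels for a 2,073,600-pixel (1080p) array yield a block count of nine, matching segmented difference mask 1200.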
  • FIG. 15 is a flowchart illustrating a method 1500 for removing a ghost artifact from a multiple-exposure image of a scene. Method 1500 is, for example, implemented by microprocessor 240 executing software 220 (FIG. 2). Steps 1501, 1502, 1505, and 1506 are optional.
  • In step 1501, method 1500 captures a first image. In an example of step 1501, camera 130 captures single-exposure image 401. This example of step 1501 may include steps of (a) converting, with one or more analog-to-digital converters of circuitry 138, each pixel charge to a respective first digital pixel value, (b) storing the first digital pixel values in memory 210 as image 401, and (c) computing, with microprocessor 240, the luminance values of image 401 from the first digital pixel values to yield first luminance value set 231A.
  • In step 1502, method 1500 captures a multiple-exposure image. In an example of step 1502, camera 130 captures multiple-exposure image 402. This example of step 1502 may include steps of (a) converting, with one or more analog-to-digital converters of circuitry 138, each pixel charge to a respective second digital pixel value, (b) storing the second digital pixel values in memory 210 as multiple-exposure image 402, and (c) computing, with microprocessor 240, the luminance values of image 402 from the second digital pixel values to yield second luminance value set 231B.
  • In step 1505, method 1500 calculates a first set of luminance values throughout the first image. In an example of step 1505, luminance value generator 221 (FIG. 2) calculates a first luminance value set 231A corresponding to some or all of the pixels of single-exposure image 401.
  • In step 1506, method 1500 calculates a second set of luminance values throughout the multiple-exposure image. In an example of step 1506, luminance value generator 221 calculates a second luminance value set 231B corresponding to some or all of the pixels of multiple-exposure image 402.
  • In step 1510, method 1500 generates a difference mask including a plurality of absolute differences between corresponding luminance values within the first and second luminance value sets. In an example of step 1510, mask generator 222 generates difference mask 500 (FIG. 5) defined by absolute differences between corresponding luminance values within the first and second sets of luminance values of single-exposure image 401 and multiple-exposure image 402, respectively. Given a first luminance value of a first pixel of a first image and a second luminance value of a second pixel of a second image, the two luminance values are "corresponding" when the first pixel and the second pixel have equal or nearly equal pixel coordinates. Two pixel coordinates are nearly equal when, for example, a relative difference in their horizontal (or vertical) position is less than one percent of the total number of pixels in the horizontal (or vertical) direction.
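Step 1510 amounts to an element-wise absolute difference of two aligned luminance arrays. A minimal NumPy sketch (the function name and dtypes are illustrative; the luminance computation itself is not reproduced in this excerpt):

```python
import numpy as np

def difference_mask(luma_single, luma_multi):
    """Step 1510: per-pixel absolute difference between corresponding
    luminance values of the single- and multiple-exposure images.

    Both arrays must have the same shape so pixels correspond.
    Casting to a signed type avoids wraparound for unsigned inputs."""
    return np.abs(luma_single.astype(np.int32) - luma_multi.astype(np.int32))
```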
  • In step 1520, method 1500 segments the difference mask into a plurality of blocks. In an example of step 1520, image segmenter 224 segments difference mask 500 into a plurality of blocks 1202 of segmented difference mask 1200 (FIG. 12).
  • In step 1530, method 1500 determines a lower threshold based on statistical properties of the plurality of blocks. In an example of step 1530, mask thresholder 226 determines lower threshold 911. Step 1530 may include at least one of steps 1610 and 1620 shown in FIG. 16.
  • In step 1610, method 1500 determines a top-quantile of absolute differences in blocks having a variance of absolute differences within a predetermined range. The top quantile of pixel values in a given block corresponds to the highest k pixel values of block 1202, where k is a positive integer less than the total number of sensor pixels in block 1202. In an example of step 1610, mask thresholder 226 determines a top-quantile of absolute differences in blocks 1202(2-5, 7-9).
  • In step 1620, method 1500 computes a statistical average of each top-quantile of absolute differences. In an example of step 1620, mask thresholder 226 computes a statistical average of each top-quantile of absolute differences in blocks 1202(2-5, 7-9). In the example of lower threshold 911, FIG. 9, the top quantile corresponds to k=1.
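Steps 1610 and 1620 can be sketched together as follows. The concrete variance bounds used to pick out the noise-dominated blocks, and the default k, are illustrative assumptions; the text states only that the variance falls within a predetermined range.

```python
import numpy as np

def lower_threshold(blocks, var_range=(0.0, 25.0), k=1):
    """Steps 1610/1620 (sketch): average the top-k absolute differences of
    each block whose variance lies in var_range (noise-dominated blocks).

    blocks: list of flat arrays of absolute differences, one per block.
    """
    tops = []
    for b in blocks:
        if var_range[0] <= np.var(b) <= var_range[1]:
            tops.extend(np.sort(b)[-k:])   # top quantile: highest k values
    return float(np.mean(tops))
```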
  • In step 1540, method 1500 determines an upper threshold based on statistical properties of the plurality of blocks. Step 1540 may include step 1542, in which method 1500 determines the upper threshold as a statistical average of absolute differences in a subset of the plurality of blocks. In an example of step 1540, mask thresholder 226 determines upper threshold 919, where the subset of blocks consists of blocks 1202(1) and 1202(6). Step 1540 may include at least one of steps 1700 shown in FIG. 17.
  • Steps 1700 include steps 1710, 1720, 1732, and 1734. In step 1710, method 1500 determines, for each of the plurality of blocks, a respective block-average equal to a statistical average of absolute differences in the block within a predetermined range. In an example of step 1710, absolute difference values range from zero to two hundred fifty-five (2⁸−1), and mask thresholder 226 determines a respective block-average of each block 1202 using only absolute differences less than or equal to 2⁷ (one hundred twenty-eight). Selecting such a lower range of attainable absolute differences avoids both absolute differences corresponding to noise and large absolute differences (e.g., from bright areas) that may yield a suboptimal upper threshold. A "large absolute difference" of a block is, for example, an absolute difference exceeding a median absolute difference of the block. FIG. 12 denotes block-averages of blocks 1202 as a "mean"; for example, the block-average of block 1202(1) is 20.71.
  • Step 1720 is a decision. When any of the block-averages exceeds a minimum value, method 1500 proceeds to step 1732. Otherwise, method 1500 proceeds to step 1734. The minimum value may be greater than or equal to the lower threshold. The minimum value may exceed the lower threshold to ensure a minimum range of absolute differences between the lower and upper thresholds. In an example of step 1720, mask thresholder 226 determines that blocks 1202(1) and 1202(6) have respective block-averages exceeding lower threshold 911.
  • In step 1732, method 1500 determines the upper threshold as a statistical average of block-averages that exceed the minimum value. In an example of step 1732, mask thresholder 226 determines upper threshold 919 as an average of the respective block-averages of blocks 1202(1) and 1202(6).
  • In step 1734, method 1500 determines the upper threshold as a statistical average of block-averages of blocks having a high variance of absolute differences relative to remaining blocks. As shown in the above examples of steps 1720 and 1732, this case does not apply to blocks 1202. However, if the respective block-averages of blocks 1202(1) and 1202(6) were less than a minimum value, e.g., lower threshold 911 (equal to seventeen), then method 1500 would proceed from step 1720 to step 1734. In such a case, mask thresholder 226 would determine the upper threshold as an average of blocks 1202 having, relative to other blocks, at least one of (i) a high mean absolute difference and (ii) a high variance of absolute differences. For example, the mean and variance of blocks 1202(1) and 1202(6) are significantly higher than the means and variances of other blocks 1202. Accordingly, both the example of step 1732 and the example of step 1734, applied to segmented difference mask 1200, determine the upper threshold to be upper threshold 919.
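The decision logic of steps 1710 through 1734 might be sketched as below. The range cap of 2⁷ follows the example in step 1710; treating "high variance relative to remaining blocks" as the two highest-variance blocks is an assumption, as is the function name.

```python
import numpy as np

def upper_threshold(blocks, lower, max_value=255):
    """Steps 1710-1734 (sketch).

    blocks: list of flat arrays of absolute differences, one per block.
    lower:  the lower threshold, used here as the minimum value."""
    # Step 1710: block-averages over a restricted lower range of values,
    # ignoring large differences that could skew the threshold.
    cap = max_value // 2 + 1                      # e.g. 128 for 8-bit masks
    means = [float(np.mean(b[b <= cap])) for b in blocks]

    # Step 1720: do any block-averages exceed the minimum value?
    qualifying = [m for m in means if m > lower]
    if qualifying:
        return float(np.mean(qualifying))         # step 1732
    # Step 1734: fall back to the highest-variance blocks (assumed: top two).
    top = np.argsort([np.var(b) for b in blocks])[-2:]
    return float(np.mean([means[i] for i in top]))
```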
  • In step 1550, method 1500 thresholds the difference mask, according to the lower threshold and the upper threshold, to yield a thresholded mask. In an example of step 1550, mask mapper 227 thresholds difference mask 500, according to lower threshold 911 and upper threshold 919, to yield intermediate mask 510.
  • In step 1560, method 1500 generates a refined mask by mapping each of a plurality of absolute differences, of the intermediate mask, to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold. In an example of step 1560, mask thresholder 226 generates dually-thresholded difference mask 1000 via Eq. (3).
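Eq. (3) is not reproduced in this excerpt. One mapping consistent with the description — differences at or below the lower threshold map to zero, those at or above the upper threshold map to one, with linear interpolation between — is sketched here; the linear ramp itself is an assumption.

```python
import numpy as np

def refine_mask(diff_mask, lower, upper):
    """Step 1560 (sketch): map each absolute difference to a refined
    weight in [0, 1] as a function of the two thresholds.
    NOTE: the linear ramp is an assumption; the patent's Eq. (3) may differ."""
    w = (diff_mask.astype(np.float64) - lower) / (upper - lower)
    return np.clip(w, 0.0, 1.0)
```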
  • In step 1570, method 1500 generates a corrected image as a weighted sum of the first image and the multiple-exposure image. The weights of the weighted sum are based on the refined mask. In an example of step 1570, image fuser 228 generates corrected image 1100 from images 401 and 402 per Eq. (4).
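Eq. (4) is likewise not shown in this excerpt. A per-pixel blend in which the refined weight selects the single-exposure pixel in motion regions (suppressing the ghost) is one plausible form; the exact weighting of Eq. (4) may differ.

```python
import numpy as np

def fuse_images(single_img, multi_img, refined):
    """Step 1570 (sketch): weighted sum of the two images, with weights
    taken from the refined mask. Where refined == 1 (motion), the
    single-exposure pixel is used; where refined == 0, the
    multiple-exposure pixel. NOTE: this form of Eq. (4) is an assumption."""
    # Broadcast the 2-D mask over color channels if the images are 3-D.
    w = refined[..., None] if single_img.ndim == 3 else refined
    return w * single_img + (1.0 - w) * multi_img
```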
  • FIGS. 18A and 18B depict, respectively, HDR image 1800 and HDR image 1820, each formed by combining the same single-exposure image and the same multiple-exposure image. HDR image 1800 was formed via a conventional single-threshold mask, while HDR image 1820 was formed using ghost-artifact remover 200 executing method 1500 of the present invention. Both HDR images 1800 and 1820 include a sleeve 1802 and a finger 1804. HDR image 1800 includes ghost artifacts 1812 and 1814 of sleeve 1802 and finger 1804, respectively. Analogous artifacts are not visible in HDR image 1820.
  • Features described above as well as those claimed below may be combined in various ways without departing from the scope hereof. The following examples illustrate some possible, non-limiting combinations:
  • (A1) denotes a method for removing a ghost artifact from a multiple-exposure image of a scene; the method includes steps of generating and segmenting a difference mask, determining a lower threshold and an upper threshold, generating a refined mask, and generating a corrected image. The difference mask includes a plurality of absolute differences between luminance values of the multiple-exposure image and luminance values of a first image of the scene. Each absolute difference corresponds to one of a respective plurality of pixel locations of the multiple-exposure image. In the segmenting step, the method segments the difference mask into a plurality of blocks. The lower threshold is based on statistical properties of the plurality of blocks. The upper threshold is a statistical average of absolute differences in a subset of the plurality of blocks. The method generates the refined mask by mapping each of the plurality of absolute differences to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold. The corrected image is a weighted sum of the first image and the multiple-exposure image. The weights of the weighted sum are based on the refined mask.
  • (A2) In any method denoted by (A1), the step of determining the lower threshold may include determining a plurality of noisy blocks as blocks having a variance of absolute differences within a predetermined range, and determining, in each noisy block, a top-quantile of absolute differences.
  • (A3) In the method denoted by (A2), the step of determining the lower threshold may further include computing a statistical average of each top-quantile of absolute differences.
  • (A4) In any method denoted by one of (A1) through (A3), in the step of determining an upper threshold, each block of the subset of blocks may have a mean absolute difference exceeding the lower threshold.
  • (A5) In any method denoted by one of (A1) through (A4), the step of determining an upper threshold may further include determining a statistical average of absolute differences in a subset of the plurality of blocks.
  • (A6) In any method denoted by (A5), in the step of determining an upper threshold, the subset of blocks may correspond to blocks in a top quantile of variance of absolute differences.
  • (A7) In any method denoted by one of (A5) and (A6), in the step of determining an upper threshold, the statistical average of absolute differences may include only a subset of absolute differences, between a minimum and a maximum of the plurality of absolute differences, that excludes noise and large absolute differences.
  • (A8) In any method denoted by one of (A1) through (A7), in which the first image is a single-exposure image having a first exposure time and the multiple-exposure image is formed from a plurality of images having a respective plurality of second exposure times, one of the second exposure times may be substantially equal to the first exposure time.
  • (A9) Any method denoted by one of (A1) through (A8) may further include capturing the first image with an image sensor and capturing the multiple-exposure image with the image sensor.
  • (A10) In any method denoted by (A9), in which the image sensor includes a plurality of sensor pixels each having a pixel charge corresponding to a respective intensity of light from the scene incident thereon, the step of capturing the first image may include converting, with an analog-to-digital converter, each pixel charge to a respective first digital pixel value, storing the first digital pixel values in a memory communicatively coupled to a microprocessor; and computing, with the microprocessor, the luminance values of the first image from the first digital pixel values. The step of capturing the multiple-exposure image may include converting, with an analog-to-digital converter, each pixel charge to a respective second digital pixel value, storing the second digital pixel values in a memory communicatively coupled to a microprocessor; and computing, with the microprocessor, the luminance values of the first image from the second digital pixel values.
  • (B1) denotes a method for determining an optimal block count into which to segment a difference mask generated from a difference of two images captured with a same image sensor. For each of a plurality of gray cards each having a different uniform reflectance, the method (1) captures a respective gray-card image, having a plurality of pixel values corresponding to a plurality of sensor pixels, by imaging the respective gray card onto the image sensor, (2) determines, from the plurality of pixel values, an average pixel value and a variance therefrom, and (3) determines a local-optimum sample size as a function of the average pixel value and the variance. The method determines a global sample size as a statistical average of the plurality of local-optimum sample sizes. The method also determines the optimal block count as an integer proximate a quotient of (a) a total number of the plurality of sensor pixels and (b) the global sample size.
  • (C1) A ghost-artifact remover, for removing a ghost artifact from a multiple-exposure image of a scene, includes a memory and a microprocessor. The memory stores non-transitory computer-readable instructions and is adapted to store the multiple-exposure image. The microprocessor is adapted to execute the instructions to perform the steps of any method denoted by one of (A1)-(A10).
  • Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present method and system, which, as a matter of language, might be said to fall therebetween.

Claims (20)

What is claimed is:
1. A method for removing a ghost artifact from a multiple-exposure image of a scene, the method comprising:
generating a difference mask including a plurality of absolute differences between luminance values of the multiple-exposure image and luminance values of a first image of the scene;
segmenting the difference mask into a plurality of blocks;
determining a lower threshold based on statistical properties of the plurality of blocks;
determining an upper threshold based on statistical properties of the plurality of blocks;
generating a refined mask by mapping each of the plurality of absolute differences to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold; and
generating a corrected image as a weighted sum of the first image and the multiple-exposure image, weights of the weighted sum being based on the refined mask.
2. The method of claim 1, the step of determining the lower threshold comprising:
determining a plurality of noisy blocks as blocks having a variance of absolute differences within a predetermined range; and
determining, in each noisy block, a top-quantile of absolute differences.
3. The method of claim 2, the step of determining the lower threshold further comprising computing a statistical average of each top-quantile of absolute differences.
4. The method of claim 1, in the step of determining an upper threshold, each block of the subset of the plurality of blocks having a mean absolute difference exceeding the lower threshold.
5. The method of claim 1, the step of determining an upper threshold further comprising determining a statistical average of absolute differences in a subset of the plurality of blocks.
6. The method of claim 5, in the step of determining an upper threshold, the subset of the plurality of blocks corresponding to blocks in a top quantile of variance of absolute differences.
7. The method of claim 5, in the step of determining an upper threshold, the statistical average of absolute differences including only a subset of absolute differences, between a minimum and a maximum of the plurality of absolute differences, that excludes noise and large absolute differences.
8. The method of claim 1, the first image being a single-exposure image having a first exposure time, the multiple-exposure image being formed from a plurality of images having a respective plurality of second exposure times, one of the second exposure times being substantially equal to the first exposure time.
9. The method of claim 1, further comprising:
capturing the first image with an image sensor; and
capturing the multiple-exposure image with the image sensor.
10. The method of claim 9, the image sensor including a plurality of sensor pixels each having a pixel charge corresponding to a respective intensity of light from the scene incident thereon,
the step of capturing the first image comprising:
converting, with an analog-to-digital converter, each pixel charge to a respective first digital pixel value;
storing the first digital pixel values in a memory communicatively coupled to a microprocessor; and
computing, with the microprocessor, the luminance values of the first image from the first digital pixel values; and
the step of capturing the multiple-exposure image comprising:
converting, with an analog-to-digital converter, each pixel charge to a respective second digital pixel value;
storing the second digital pixel values in a memory communicatively coupled to a microprocessor; and
computing, with the microprocessor, the luminance values of the multiple-exposure image from the second digital pixel values.
11. A method for determining optimal block count into which to segment a difference mask generated from a difference of two images captured with a same image sensor, the method comprising:
for each of a plurality of gray cards each having a different uniform reflectance:
capturing a respective gray-card image, having a plurality of pixel values corresponding to a plurality of sensor pixels, by imaging the respective gray card onto the image sensor;
determining, from the plurality of pixel values, an average pixel value and a variance therefrom;
determining a local-optimum sample size as a function of the average pixel value and the variance;
determining a global sample size as a statistical average of the plurality of local-optimum sample sizes;
determining the optimal block count as an integer proximate a quotient of (a) a total number of the plurality of sensor pixels and (b) the global sample size.
12. A ghost-artifact remover for removing a ghost artifact from a multiple-exposure image of a scene comprising:
a memory storing non-transitory computer-readable instructions and adapted to store the multiple-exposure image;
a microprocessor adapted to execute the instructions to:
generate a difference mask including a plurality of absolute differences between luminance values of the multiple-exposure image and luminance values of a first image of the scene;
segment the difference mask into a plurality of blocks;
determine a lower threshold based on statistical properties of the plurality of blocks;
determine an upper threshold as a statistical average of absolute differences in a subset of the plurality of blocks;
generate a refined mask by mapping each of the plurality of absolute differences to a respective one of a plurality of refined values, of the refined mask, equal to a function of the absolute difference, the lower threshold, and the upper threshold; and
generate a corrected image as a weighted sum of the first image and the multiple-exposure image, the weights of the weighted sum being based on the refined mask.
13. The ghost-artifact remover of claim 12, the microprocessor further adapted to execute the instructions to, when determining the lower threshold,
determine a plurality of noisy blocks as blocks having a variance of absolute differences within a predetermined range; and
determine in each noisy block, a top-quantile of absolute differences.
14. The ghost-artifact remover of claim 13 the microprocessor further adapted to execute the instructions to, when determining the lower threshold, compute a statistical average of each top-quantile of absolute differences.
15. The ghost-artifact remover of claim 12, each block of the subset of the plurality of blocks having a mean absolute difference exceeding the lower threshold.
16. The ghost-artifact remover of claim 12, when the lower threshold exceeds a mean absolute difference of each block, the subset of the plurality of blocks corresponding to blocks in a top quantile of variance of absolute differences.
17. The ghost-artifact remover of claim 12, the statistical average of absolute differences including only absolute differences in a predetermined lower range of attainable absolute differences.
18. The ghost-artifact remover of claim 12, the first image being a single-exposure image having a first exposure time, the multiple-exposure image formed from a plurality of images having a respective plurality of second exposure times, one of the second exposure times being substantially equal to the first exposure time.
19. The ghost-artifact remover of claim 12, the microprocessor further adapted to execute the instructions to:
capture the first image with an image sensor; and
capture the multiple-exposure image with the image sensor.
20. The ghost-artifact remover of claim 19, the image sensor including a plurality of sensor pixels each having a pixel charge corresponding to a respective intensity of light from the scene incident thereon, the microprocessor further adapted to execute the instructions to,
when capturing the first image:
converting, with an analog-to-digital converter, each pixel charge to a respective first digital pixel value;
storing the first digital pixel values in a memory communicatively coupled to a microprocessor; and
computing, with the microprocessor, the luminance values of the first image from the first digital pixel values; and,
when capturing the multiple-exposure image:
converting, with an analog-to-digital converter, each pixel charge to a respective second digital pixel value;
storing the second digital pixel values in a memory communicatively coupled to a microprocessor; and
computing, with the microprocessor, the luminance values of the multiple-exposure image from the second digital pixel values.
US15/261,819 2016-09-09 2016-09-09 Ghost artifact removal system and method Active US9916644B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/261,819 US9916644B1 (en) 2016-09-09 2016-09-09 Ghost artifact removal system and method
TW106128907A TWI658731B (en) 2016-09-09 2017-08-25 Ghost artifact removal system and method
CN201710749433.0A CN107809602B (en) 2016-09-09 2017-08-28 Remove the method and ghost artifact remover of terrible artifact

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/261,819 US9916644B1 (en) 2016-09-09 2016-09-09 Ghost artifact removal system and method

Publications (2)

Publication Number Publication Date
US9916644B1 US9916644B1 (en) 2018-03-13
US20180075586A1 true US20180075586A1 (en) 2018-03-15

Family

ID=61525690

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/261,819 Active US9916644B1 (en) 2016-09-09 2016-09-09 Ghost artifact removal system and method

Country Status (3)

Country Link
US (1) US9916644B1 (en)
CN (1) CN107809602B (en)
TW (1) TWI658731B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10178326B2 (en) * 2016-11-29 2019-01-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for shooting image and terminal device
CN110288630A (en) * 2019-06-27 2019-09-27 浙江工业大学 A kind of moving target ghost suppressing method of background modeling

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10863105B1 (en) * 2017-06-27 2020-12-08 Amazon Technologies, Inc. High dynamic range imaging for event detection and inventory management
US10546369B2 (en) * 2018-01-09 2020-01-28 Omnivision Technologies, Inc. Exposure level control for high-dynamic-range imaging, system and method
CN108419023B (en) * 2018-03-26 2020-09-08 华为技术有限公司 Method for generating high dynamic range image and related equipment
CN112840637B (en) 2018-09-07 2022-04-05 杜比实验室特许公司 Automatic exposure method
WO2020051361A1 (en) 2018-09-07 2020-03-12 Dolby Laboratories Licensing Corporation Auto exposure of image sensors based upon entropy variance
CN111127353B (en) * 2019-12-16 2023-07-25 重庆邮电大学 High-dynamic image ghost-removing method based on block registration and matching
CN113421195B (en) * 2021-06-08 2023-03-21 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8018999B2 (en) * 2005-12-05 2011-09-13 Arcsoft, Inc. Algorithm description on non-motion blur image generation project
US8406569B2 (en) * 2009-01-19 2013-03-26 Sharp Laboratories Of America, Inc. Methods and systems for enhanced dynamic range images and video from multiple exposures
US8774559B2 (en) * 2009-01-19 2014-07-08 Sharp Laboratories Of America, Inc. Stereoscopic dynamic range image sequence
US8520083B2 (en) * 2009-03-27 2013-08-27 Canon Kabushiki Kaisha Method of removing an artefact from an image
US8570396B2 (en) * 2009-04-23 2013-10-29 Csr Technology Inc. Multiple exposure high dynamic range image capture
US8525900B2 (en) * 2009-04-23 2013-09-03 Csr Technology Inc. Multiple exposure high dynamic range image capture
US8760537B2 (en) * 2010-07-05 2014-06-24 Apple Inc. Capturing and rendering high dynamic range images
US8599284B2 (en) * 2011-10-11 2013-12-03 Omnivision Technologies, Inc. High dynamic range sub-sampling architecture
CN103297701B (en) * 2012-02-27 2016-12-14 江苏思特威电子科技有限公司 Formation method and imaging device
CN202535464U (en) * 2012-02-27 2012-11-14 徐辰 Imaging device
US10255888B2 (en) * 2012-12-05 2019-04-09 Texas Instruments Incorporated Merging multiple exposures to generate a high dynamic range image
US8902336B2 (en) * 2013-01-30 2014-12-02 Altasens, Inc. Dynamic, local edge preserving defect pixel correction for image sensors with spatially arranged exposures
US9338349B2 (en) 2013-04-15 2016-05-10 Qualcomm Incorporated Generation of ghost-free high dynamic range images
US9131201B1 (en) * 2013-05-24 2015-09-08 Google Inc. Color correcting virtual long exposures with true long exposures
US9077913B2 (en) * 2013-05-24 2015-07-07 Google Inc. Simulating high dynamic range imaging with virtual long-exposure images
US20150009355A1 (en) 2013-07-05 2015-01-08 Himax Imaging Limited Motion adaptive cmos imaging system
US9123141B2 (en) * 2013-07-30 2015-09-01 Konica Minolta Laboratory U.S.A., Inc. Ghost artifact detection and removal in HDR image processing using multi-level median threshold bitmaps
CN104349066B (en) * 2013-07-31 2018-03-06 Huawei Device (Dongguan) Co., Ltd. Method and apparatus for generating high dynamic range images

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10178326B2 (en) * 2016-11-29 2019-01-08 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for shooting image and terminal device
CN110288630A (en) * 2019-06-27 2019-09-27 Zhejiang University of Technology Moving-target ghost suppression method based on background modeling

Also Published As

Publication number Publication date
TWI658731B (en) 2019-05-01
US9916644B1 (en) 2018-03-13
CN107809602B (en) 2019-11-15
TW201813371A (en) 2018-04-01
CN107809602A (en) 2018-03-16

Similar Documents

Publication Publication Date Title
US9916644B1 (en) Ghost artifact removal system and method
US10547772B2 (en) Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10425599B2 (en) Exposure selector for high-dynamic range imaging and associated method
US10699395B2 (en) Image processing device, image processing method, and image capturing device
US9883125B2 (en) Imaging systems and methods for generating motion-compensated high-dynamic-range images
EP2193656B1 (en) Multi-exposure pattern for enhancing dynamic range of images
US8374459B2 (en) Dynamic image compression method for human face detection
US10511785B2 (en) Temporally aligned exposure bracketing for high dynamic range imaging
US8660350B2 (en) Image segmentation devices and methods based on sequential frame image of static scene
US10572974B2 (en) Image demosaicer and method
CN104284096B (en) Method and system for multi-target automatic exposure and gain control based on pixel intensity distribution
EP2775719A1 (en) Image processing device, image pickup apparatus, and storage medium storing image processing program
EP1677516A1 (en) Signal processing system, signal processing method, and signal processing program
CN108921823A (en) Image processing method and apparatus, computer-readable storage medium, and electronic device
CN112532855B (en) Image processing method and device
US20120127336A1 (en) Imaging apparatus, imaging method and computer program
US9860456B1 (en) Bayer-clear image fusion for dual camera
EP3363193B1 (en) Device and method for reducing the set of exposure times for high dynamic range video imaging
US10546369B2 (en) Exposure level control for high-dynamic-range imaging, system and method
US9800796B1 (en) Apparatus and method for low dynamic range and high dynamic range image alignment
US11102422B2 (en) High-dynamic range image sensor and image-capture method
US11593918B1 (en) Gradient-based noise reduction
US20100104182A1 (en) Restoring and synthesizing glint within digital image eye features
US20200351417A1 (en) Image processing
TWI388201B (en) Image processing apparatus, image processing method, and digital camera using a mask to reduce noise

Legal Events

Date Code Title Description
AS Assignment

Owner name: OMNIVISION TECHNOLOGIES, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SWAMI, SARVESH;WU, DONGHUI;UVAROV, TIMOFEY;SIGNING DATES FROM 20161028 TO 20170225;REEL/FRAME:041584/0384

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4