US9270959B2 - Dynamic color shading correction - Google Patents
Dynamic color shading correction
- Publication number
- US9270959B2 (application US 14/099,603)
- Authority
- US
- United States
- Prior art keywords
- color shading
- color
- component
- shading
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
- H04N9/735
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
- H04N5/3572
- H04N9/045
Definitions
- the systems and methods disclosed herein relate generally to image capture devices, and more particularly, to correcting color distortion in captured images.
- Brightness shading, also known as vignetting, is a position-dependent decrease in the amount of light transmitted by an optical system near the periphery of the lens field-of-view (FOV), causing gradual darkening of an image at the edges. Vignetting affects both film and digital cameras, and can be effectively corrected by calibrating the lens roll-off distortion function of the camera.
- Color shading is similar in effect, but manifests as a shift in color near the edges of the sensor. Color shading distortion is specific to digital cameras.
- the spectral sensitivity of a typical charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) sensor is higher at the red end of the spectrum of visible light than at the blue end of the spectrum, and also extends considerably into the near infrared (IR) spectrum.
- the relatively high sensitivity of such sensors to IR light can cause errors in color reproduction. Therefore, in most digital sensors, the IR sensitivity is limited by a thin-film reflective IR filter at the face of the sensor that blocks the infrared wavelength while passing visible light.
- the transmittance of the IR filter shifts to shorter wavelengths as the angle of light incident on the filter increases. Accordingly, longer wavelengths (such as red light) can be blocked more at the edges of the image sensor due to larger incident light ray angles, resulting in a spatially non-uniform color temperature in the image.
- color shading artifacts are corrected by per-unit-calibration, which measures color shading profiles of an individual image capture device under a set of illuminants from images of a flat-field scene captured by the image capture device under each illuminant in the set.
- the inverse of the two-dimensional profile of color shading under each illuminant is stored as a correction table to compensate for color shading artifacts in images captured by the image capture device.
- the image capture device first employs a white balance algorithm to detect the scene illuminant from the captured data on the sensor and then selects the corresponding correction table to compensate for color shading distortion.
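The per-unit calibration flow described above can be sketched as follows; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def build_correction_table(flat_field, eps=1e-6):
    """Store the inverse of the normalized 2-D shading profile measured
    from a flat-field capture as a table of per-pixel gains."""
    profile = flat_field / (flat_field.max() + eps)   # center-normalized profile
    return 1.0 / np.maximum(profile, eps)             # inverse profile = gains

def correct_with_table(image, tables, illuminant):
    """Apply the table selected for the white-balance-detected illuminant."""
    return image * tables[illuminant]

# Calibrate on a synthetic flat-field capture that darkens toward one edge:
falloff = np.clip(1.0 - 0.4 * np.linspace(0, 1, 64)[None, :] ** 2, 0.1, 1.0)
flat_field = np.tile(falloff, (48, 1))
tables = {"D65": build_correction_table(flat_field)}

# A uniform gray scene distorted by the same shading is flattened again:
corrected = correct_with_table(0.8 * flat_field, tables, "D65")
```

Dividing out a single measured profile per illuminant is the essence of the per-unit approach; the paragraphs below explain why this breaks down under uncalibrated or mixed lighting.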
- Color shading artifacts in a captured image may deteriorate the quality of the image.
- existing methods for correcting color shading artifacts are complex, computationally expensive, sensor-dependent, and do not accurately correct color shading artifacts in scenes with multiple sources of illumination, scenes with objects having surfaces that vary the wavelength of incident light by reflection, or scenes with illumination sources that do not have a corresponding precalculated correction table.
- the color shading correction techniques described herein are calculated on the fly for any image sensor, and may be based on the illumination source type of an image scene as well as the objects in the captured image.
- vignetting and color shading artifacts may depend on the chief ray angle (CRA) of light striking the imaging sensor. This factor may make wide-angle lens designs in compact cameras more prone to shading distortions. While vignetting can be fixed relatively easily using calibration of the optical system, color shading is a more complicated phenomenon primarily caused by interaction of the incident angle of the light with the infrared (IR) reflective filter on the sensor. Color shading depends on the spectra of the scene illuminant as well as the surface reflectances of the objects being imaged and, therefore, cannot be fixed robustly using pre-calibration techniques.
- the infrared filters used to block unwanted infrared light from being captured by an image sensor generally have a steep cutoff at a chosen wavelength.
- the wavelength at which the cutoff occurs changes depending on the angle of incidence of the incoming light rays.
- the cutoff wavelength of a typical thin-film reflective IR filter is a function of the angle of the light arriving at the sensor, shifting monotonically towards the blue end of the spectrum with increase in the incident angle of light. Therefore, towards the edges of the camera field-of-view (FOV) where the chief ray angle (CRA) of the lens is greater, the IR filter cuts out more red light than at the center of the FOV.
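This blue shift can be approximated with the standard first-order interference-filter model, in which the cutoff wavelength scales with the internal refraction angle. The normal-incidence cutoff of 650 nm and effective index of 1.8 below are illustrative assumptions, not values from the patent:

```python
import math

def cutoff_wavelength_nm(theta_deg, lambda_0=650.0, n_eff=1.8):
    """First-order thin-film model of cutoff wavelength vs. angle of
    incidence. lambda_0 is the normal-incidence cutoff (nm); n_eff is
    the effective refractive index of the filter stack."""
    theta = math.radians(theta_deg)
    return lambda_0 * math.sqrt(1.0 - (math.sin(theta) / n_eff) ** 2)

# The cutoff moves monotonically toward shorter (bluer) wavelengths as the
# chief ray angle grows toward the edge of the FOV, so more red is blocked:
shifts = {cra: cutoff_wavelength_nm(cra) for cra in (0, 10, 20, 30)}
```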
- the response of an imaging sensor equipped with a reflective IR filter is spatially non-uniform, resulting in a visually unpleasing color shading in the captured image.
- Color shading is typically more severe for compact, wide-angle optical systems, for example imaging devices in mobile phones.
- the compact size of camera modules used in mobile phones, coupled with relatively wide angle lenses, means the lens is very close to the image sensor, which thus receives light at angles that can become quite steep at the corners and edges of the image. The result is a significant variation in the color response across the image.
- other physical phenomena e.g., lens vignetting, dependence of optical spatial crosstalk on CRA, and dependence of pixel quantum efficiency on the wavelengths of incident photons, may also contribute to color shading artifacts.
- the per-unit calibration method described above fails to provide a robust solution. For example, there is great variety in the number of possible illuminants with different wavelength spectra, making calibration of the color shading correction tables under all possible light sources costly and time-inefficient. Even if an image capture device has been calibrated to compensate for shading under all possible light sources, the illuminant classification determined by performing a white balance analysis on the captured scene statistics may be incorrect. A wrong white balance determination may result in selection of an incorrect correction table for compensating for color shading, and the incorrect correction table may provide only a partial correction of the shading artifact. Further, per-unit calibration correction methods are not capable of operating successfully in mixed lighting conditions.
- pre-calibrated per-unit correction tables are accurate only for the calibrated sensor and for other sensors which do not deviate too much from the calibrated sensor's reference in their shading characteristics. Additionally, since color shading is a function of the wavelength spectra of the illuminant as well as of the objects being illuminated, pre-calibrated tables may not provide highly accurate correction, as shading correction with pre-calibrated tables only accounts for dependence of the shading characteristics on the illuminant spectra. Finally, the exact angle at which the IR filter is mounted with respect to the sensor is subject to a spread of manufacturing mechanical tolerances, potentially leading to off-center shading that varies device-to-device. Accordingly, pre-calibrated tables may not provide accurate correction, as a pre-calibrated table may assume a different center location for the shading than actually occurs in a given device due to manufacturing tolerances.
- One aspect relates to a method in an electronic device for correcting color shading artifacts in a captured image, the method comprising receiving image data comprising the captured image and scene statistics, the scene statistics comprising a downsampled version of the captured image; accessing a reference table, wherein the reference table comprises shading correction data calibrated on a reference module under a typical illumination; correcting color shading in the scene statistics using the reference table; estimating color shading in the corrected scene statistics; updating the reference table based on the estimated color shading; and correcting color shading in the captured image using the updated reference table.
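The claimed sequence of steps can be sketched end to end. For simplicity, the sketch assumes the statistics, tables, and image share one resolution and one channel, and the helper passed in as `estimate_shading` is hypothetical:

```python
import numpy as np

def dcsc_correct(image, stats, reference_table, estimate_shading, eps=1e-6):
    """Correct the scene statistics with the reference table, estimate the
    residual color shading, fold it into an updated table, then correct
    the full captured image with that updated table."""
    corrected_stats = stats * reference_table          # preliminary correction
    shading = estimate_shading(corrected_stats)        # residual shading profile
    updated_table = reference_table / np.maximum(shading, eps)
    return image * updated_table, updated_table

# With no residual shading detected, the reference table is used unchanged:
stats = np.full((4, 6), 0.5)
ref = np.full((4, 6), 2.0)
out, table = dcsc_correct(np.full((4, 6), 0.5), stats, ref,
                          lambda s: np.ones_like(s))
```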
- Another aspect relates to a dynamic color shading correction apparatus comprising a correction table data repository configured to store a reference table, wherein the reference table comprises shading correction data calibrated on a reference module under a typical illumination; an initial color shading correction module configured to receive image data comprising a captured image and scene statistics, the scene statistics comprising a downsampled version of the captured image, and to perform preliminary color shading correction on the scene statistics using the reference table; a color shading estimation module configured to estimate color shading in the scene statistics; a table updating module configured to generate an updated table from the reference table and the estimated color shading; and a color shading correction module configured to correct color shading artifacts in the image data using the updated table.
- Another aspect relates to an iterative color shading estimation process comprising obtaining a plurality of hue components from scene statistics of a captured image, wherein the scene statistics represent a downsampled version of the captured image; initializing an iterative problem of solving for a color shading component value and an intrinsic color component value; performing a first iteration of the iterative problem; determining whether the color shading component value and the intrinsic color component value have converged; and if the color shading component value and the intrinsic color component value have not converged, performing an additional iteration of the iterative problem, and, if the color shading component value and the intrinsic color component value have converged, outputting the color shading component.
- Another aspect relates to an aggressive color shading estimation process comprising obtaining a plurality of hue components from scene statistics of a captured image, wherein the scene statistics represent a downsampled version of the captured image; detecting a plurality of partial gradients of a color shading component directly from at least one hue component of the scene statistics; and reconstructing the color shading component from the plurality of partial gradients.
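A one-dimensional sketch of this gradient-domain idea: small hue gradients are attributed to the slowly varying shading and integrated back, while large gradients (scene content edges) are rejected. The threshold and test signals are illustrative, not from the patent:

```python
import numpy as np

def shading_from_gradients(hue_row, grad_thresh=0.05):
    """Keep only small partial gradients (slowly varying shading), reject
    large ones (scene edges), then integrate to reconstruct the shading."""
    grads = np.diff(hue_row)
    kept = np.where(np.abs(grads) < grad_thresh, grads, 0.0)
    shading = np.concatenate(([0.0], np.cumsum(kept)))
    return shading - shading.mean()            # zero-mean shading component

x = np.linspace(-1.0, 1.0, 101)
true_shading = 0.2 * x ** 2                    # smooth shading profile
content = np.where(x > 0.3, 0.5, 0.0)          # a step edge from scene content
recovered = shading_from_gradients(true_shading + content)
```

The recovered component closely tracks the smooth profile while the 0.5 content step is excluded, which is the separation the aggressive process relies on.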
- FIG. 1A illustrates an example of light incident on a filter at different chief ray angles
- FIG. 1B illustrates example shifts in transmittance of light through the filter based on the incident angles illustrated in FIG. 1A ;
- FIG. 1C illustrates a grayscale approximation of the spatial color nonuniformity of a captured image resulting from the transmittance shifts illustrated in FIG. 1B ;
- FIG. 2 illustrates a schematic block diagram of an example system with dynamic color shading correction capabilities
- FIG. 3 illustrates a schematic block diagram of an embodiment of a dynamic color shading corrector
- FIG. 4 illustrates an embodiment of a dynamic color shading correction process
- FIG. 5 illustrates an embodiment of a color shading estimation process.
- Embodiments relate to systems and methods for correcting color shading in a captured digital image.
- the color shading correction techniques described herein provide a framework for dynamic color shading correction (DCSC), which can estimate the color shading of a captured image on the fly from the scene statistics of the captured image and can use the estimated shading for color shading correction.
- the DCSC framework can use a color shading estimation method to separate out the color shading component from the actual image content by its unique characteristics in the gradient domain, for example, the characteristic that the color shading component is generally a slowly varying function.
- the DCSC technique can use the color shading estimate to update a pre-calibrated color shading correction table to accurately compensate for the color non-uniformity in the captured image regardless of the scene illuminant or surface reflectances.
- the DCSC framework does not assume that the scene illuminant is included in the pre-calibrated correction table, nor does it rely on a white balance algorithm to attempt detection of the scene illuminant, and so it can accurately correct for the specific color non-uniformity in the captured image.
- the DCSC technique is computationally efficient and can provide color shading correction in real-time, and in some embodiments operates efficiently and in real-time when implemented on a mobile device.
- An embodiment of the color shading correction techniques may apply spatially-variant and color-channel-dependent gains to image pixels to dynamically compensate for a lack of color uniformity in a captured image.
- pixels farther away from the image center are multiplied with larger gains, while pixels at the image center get a unity gain.
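As a sketch, such a gain map can be generated with unity gain at the center and a radially increasing, channel-dependent gain toward the corners. The quadratic falloff model and the strength values are assumptions for illustration:

```python
import numpy as np

def radial_gain_map(h, w, strength):
    """Per-pixel gains: 1.0 at the image center, rising with the squared
    normalized radial distance (an illustrative falloff model)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((ys - cy) / cy) ** 2 + ((xs - cx) / cx) ** 2
    return 1.0 + strength * r2

# Channel-dependent correction: red is attenuated more strongly toward
# the edges in the scenario described above, so it receives a larger gain.
gain_red = radial_gain_map(33, 33, strength=0.40)
gain_blue = radial_gain_map(33, 33, strength=0.15)
```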
- the two-dimensional profile of color shading distortion in an image depends on the actual wavelengths of light photons arriving at the sensor and, therefore, depends not only on the scene illuminant but also on the surface reflectance of the objects being imaged.
- correction gains used for correcting color shading artifacts in an image should be updated in response to a change in either the image scene illuminant or the image scene content.
- One embodiment is provided with a pre-calculated correction table, calibrated on a reference device under typical illumination.
- examples may be described as a process, which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram.
- a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, or concurrently, and the process can be repeated. In addition, the order of the operations may be re-arranged.
- a process is terminated when its operations are completed.
- a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
- a process corresponds to a software function
- its termination corresponds to a return of the function to the calling function or the main function.
- FIG. 1A illustrates an example of light incident on a filter 108 at different chief ray angles.
- Filter 108 is positioned above an image sensor 110 .
- light ray 102 A is shown as being substantially normal to filter 108
- light ray 104 A has a greater angle of incidence than light ray 102 A
- light ray 106 A has a greater angle of incidence than light ray 104 A.
- the graphical representations of wavelengths 102 B, 104 B, 106 B illustrate how the spectra of light transmitted through the filter 108 to the sensor 110 shifts towards shorter wavelengths as the angle of incidence increases.
- FIG. 1B illustrates example shifts in transmittance of light through the filter as a function of wavelength based on the incident angles illustrated in FIG. 1A .
- Transmittance spectrum 112 corresponds to incident light 102 A at a substantially normal angle to the filter 108
- transmittance spectrum 114 corresponds to incident light 104 A
- transmittance spectrum 116 corresponds to incident light 106 A
- transmittance spectrum 118 is an example of the spectrum of light typically present in an image scene.
- the cut-on and cut-off wavelengths for light transmitted through the filter shifts towards shorter wavelengths as the angle of incidence increases, and therefore the sensor 110 receives different spectra of light in different regions of the sensor based on the incident angle of light on each portion of the sensor.
- FIG. 1C illustrates a grayscale approximation of an example illustration of the spatial color nonuniformity of a captured image resulting from the transmittance shifts illustrated in FIG. 1B .
- a center region 132 typically has warmer color tones than a cooler edge region 134 due to the transmittance shift based on incident light angle.
- FIG. 2 illustrates a high-level block diagram of an example system 200 with dynamic color shading correction capabilities, the system 200 having a set of components including a processor 220 linked to an imaging sensor 215 having a filter 260 .
- a working memory 205 , storage 210 , electronic display 225 , and memory 230 are also in communication with the processor 220 .
- System 200 may be a device such as cell phone, digital camera, tablet computer, personal digital assistant, or the like.
- System 200 may also be a more stationary device such as a desktop personal computer, video conferencing station, or the like that uses an internal or external camera for capturing images.
- System 200 can also be a combination of an image capture device and a separate processing device receiving image data from the image capture device.
- a plurality of applications may be available to the user on system 200 . These applications may include traditional photographic applications, capture of still images and video, dynamic color correction applications, and brightness shading correction applications, among others.
- the image capture system 200 includes the image sensor 215 for capturing images.
- the image sensor 215 can be, for example, a charge-coupled device (CCD), a complementary metal-oxide semiconductor (CMOS) sensor, or the like.
- the image sensor 215 may be coupled to the processor 220 to transmit a captured image to the image processor 220 .
- a filter 260 may be positioned near or within sensor 215 , for example an infrared cutoff filter designed to reflect or block mid-infrared wavelengths while passing visible light.
- the image processor 220 may be configured to perform various operations on a received captured image in order to output a high quality color corrected image, as will be described in more detail below.
- Processor 220 may be a general purpose processing unit or a processor specially designed for imaging applications. As shown, the processor 220 is connected to a memory 230 and a working memory 205. In the illustrated embodiment, the memory 230 stores an imaging sensor control module 235, dynamic color shading correction module 240, capture control module 245, and operating system 250. These modules include instructions that configure the processor to perform various image processing and device management tasks.
- Working memory 205 may be used by processor 220 to store a working set of processor instructions contained in the modules of memory 230. Alternatively, working memory 205 may also be used by processor 220 to store dynamic data created during the operation of device 200.
- the processor 220 is configured by several modules stored in the memory 230 .
- the imaging sensor control module 235 includes instructions that configure the processor 220 to adjust the focus position of imaging sensor 215.
- the imaging sensor control module 235 also includes instructions that configure the processor 220 to capture images with the imaging sensor 215. Therefore, processor 220, along with imaging sensor control module 235, imaging sensor 215, filter 260, and working memory 205, represents one means for capturing an image or sequence of images to be corrected for color shading.
- the dynamic color shading correction module 240 includes instructions that configure the processor 220 to correct color shading in a captured image.
- the dynamic color shading correction module 240 can estimate a color shading component in the captured image, and can use the estimated color shading component to generate a correction table to correct the color shading in the image.
- the dynamic color shading correction module 240 can store a pre-calibrated reference table, which is calibrated to correct color shading on a reference device under typical illumination.
- the pre-calibrated reference table can be stored in the data store 210 .
- the dynamic color shading correction module 240 can use the estimated color shading to dynamically update the pre-calibrated reference table to correct the color shading in a captured image.
- Capture control module 245 may include instructions that control the overall image capture functions of the system 200 .
- the capture control module 245 may include instructions that call subroutines to configure the processor 220 to capture image data of a target image scene using the imaging sensor 215 .
- Capture control module 245 may then call the dynamic color shading correction module 240 to correct color shading due to the filter 260 or other causes.
- Capture control module 245 may also call other processing modules not illustrated, for example a vignetting estimation and correction module.
- Operating system module 250 configures the processor 220 to manage the memory and processing resources of the system 200 .
- operating system module 250 may include device drivers to manage hardware resources such as the electronic display 225, storage 210, or imaging sensor 215. Therefore, in some embodiments, instructions contained in the image processing modules discussed above may not interact with these hardware resources directly, but instead interact through standard subroutines or APIs located in operating system component 250. Instructions within operating system 250 may then interact directly with these hardware components.
- the processor 220 may be further configured to control the display 225 to display the captured image to a user.
- the display 225 may be external to an imaging device including the image sensor 215 or may be part of the imaging device.
- the display 225 may also be configured to provide a view finder for a user prior to capturing an image, or may be configured to display a captured image stored in memory or recently captured by the user.
- the display 225 may comprise an LCD or LED screen, and may implement touch sensitive technologies.
- Processor 220 may write data to storage module 210 , for example data representing captured images, color shading estimation, and correction table data. While storage module 210 is represented graphically as a traditional disk device, those with skill in the art would understand that the storage module 210 may be configured as any storage media device.
- the storage module 210 may include a disk drive, such as a floppy disk drive, hard disk drive, optical disk drive or magneto-optical disk drive, or a solid state memory such as a FLASH memory, RAM, ROM, and/or EEPROM.
- the storage module 210 can also include multiple memory units, and any one of the memory units may be configured to be within the image capture device 200 , or may be external to the image capture system 200 .
- the storage module 210 may include a ROM memory containing system program instructions stored within the image capture system 200 .
- the storage module 210 may also include memory cards or high speed memories configured to store captured images which may be removable from the camera.
- FIG. 2 depicts a system comprising separate components, including a processor, imaging sensor, and memory.
- these separate components may be combined in a variety of ways to achieve particular design objectives.
- the memory components may be combined with processor components to save cost and improve performance.
- While FIG. 2 illustrates two memory components (a memory 230 comprising several modules and a separate working memory 205), one with skill in the art would recognize several embodiments utilizing different memory architectures.
- a design may utilize ROM or static RAM memory for the storage of processor instructions implementing the modules contained in memory 230 .
- processor instructions may be read at system startup from a disk storage device that is integrated into system 200 or connected via an external device port. The processor instructions may then be loaded into RAM to facilitate execution by the processor.
- working memory 205 may be a RAM memory, with instructions loaded into working memory 205 before execution by the processor 220 .
- FIG. 3 illustrates a schematic block diagram of an embodiment of a dynamic color shading corrector 240 .
- the dynamic color shading corrector 240 includes a preliminary correction module 305 , a color shading estimation module 310 , a table updating module 315 , and a color shading correction module 320 .
- dynamic color shading corrector 240 can be implemented in other image capture systems suitable for color shading correction.
- the preliminary correction module 305 receives image data and scene statistics 330 .
- the scene statistics can be a down-sampled version of the captured image, and in some embodiments can be a combination of four per-channel images for each Bayer channel (R, Gr, Gb, and B) of the captured image.
- the dynamic color shading corrector 240 can receive only image data including a captured image, and can additionally include a scene statistics module (not illustrated) to generate scene statistics from the captured image.
- the preliminary correction module 305 can be programmed to generate the scene statistics from image data of a captured image.
- the preliminary correction module 305 can use a pre-calibrated reference table to correct at least some of the color shading in the scene statistics, where the pre-calibrated reference table is calculated on a reference image capture device capturing an image of a flat field under known illuminants.
- the pre-calibrated reference table can be stored in the correction table data repository 325 .
- shading correction is performed in the Bayer image domain, and the table can include four sub-tables, each associated with one of the four Bayer channels. The correction may be performed by multiplying the per-channel sub-table of the reference table with the per-channel scene statistics for each Bayer channel.
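This per-channel multiplication can be sketched with the four sub-tables and statistics held in dictionaries keyed by Bayer channel; the data layout is an assumption for illustration:

```python
import numpy as np

BAYER = ("R", "Gr", "Gb", "B")

def apply_table(stats, table):
    """Element-wise multiply each per-channel sub-table with the matching
    per-channel scene statistics."""
    return {ch: stats[ch] * table[ch] for ch in BAYER}

stats = {ch: np.full((6, 8), 0.5) for ch in BAYER}
table = {ch: np.full((6, 8), g)
         for ch, g in zip(BAYER, (1.4, 1.0, 1.0, 1.2))}
corrected = apply_table(stats, table)
```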
- the preliminarily corrected scene statistics can be sent to the color shading estimation module 310 .
- the color shading estimation module 310 can be configured with instructions that determine a color shading component from the corrected scene statistics. For example, RGB corrected scene statistics can be transformed into hue components, and the color shading component can be estimated from the hue components.
- the hue components, for instance, can represent a linear combination in the logarithmic domain of a color shading component, representing the color shading gradient of the captured image, and an intrinsic color component, representing the true hue content of the captured image scene.
- the color shading estimation module 310 can recover the color shading component for a captured image. In embodiments correcting color shading in video images, the color shading estimation module 310 can recover the color shading component for some or each frame in the video image sequence. In some embodiments, two hue channels may be used, and two color shading components can be recovered.
- the table updating module 315 can receive data representing the color shading component or components from the color shading estimation module 310 and use the color shading components to dynamically update the pre-calibrated reference table to yield optimal color correction for the captured image.
- the table can be dynamically updated for optimal correction for each frame in the video sequence based on the color shading components of each frame.
- two color shading components may each correspond to one of a red hue channel and a blue hue channel.
- the table updating module 315 can divide the sub-table corresponding to the blue channel by the blue color shading component and can divide the sub-table corresponding to the red channel by the red color shading component.
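That update step can be sketched as a per-channel division, leaving the green sub-tables untouched; the shapes and names are illustrative:

```python
import numpy as np

def update_table(reference, shading_red, shading_blue, eps=1e-6):
    """Divide the red and blue sub-tables by the corresponding estimated
    shading components; the Gr and Gb sub-tables are unchanged."""
    updated = dict(reference)
    updated["R"] = reference["R"] / np.maximum(shading_red, eps)
    updated["B"] = reference["B"] / np.maximum(shading_blue, eps)
    return updated

reference = {ch: np.ones((4, 4)) for ch in ("R", "Gr", "Gb", "B")}
updated = update_table(reference,
                       shading_red=np.full((4, 4), 0.8),   # red shaded to 80%
                       shading_blue=np.full((4, 4), 0.5))  # blue shaded to 50%
```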
- the updated correction table can be stored in the correction table data repository 325 .
- the updated correction table can be output to the color shading correction module 320 along with the captured image data.
- the color shading correction module 320 can use the updated correction table to correct color shading in the captured image. This correction may be performed separately for each Bayer channel by an element-wise multiplication of the per-channel sub-table of the updated table with the per-channel image data.
- dynamic color shading corrector 240 is discussed in the context of the system 200 of FIG. 2 , it can be implemented on its own or in any system suitable for color shading correction.
- the process 400 may be implemented in an image capture device such as a camera that takes still photographs or in a video camera that captures a sequence of image frames.
- the process 400 may be implemented on a computing system, for example including a processor, integrated into the image capture device, or may be implemented on a separate computing device which receives image data from the image capture device.
- the process 400 may be used to correct color shading in images captured by digital image capture devices such as digital cameras, mobile phone cameras, web cameras, tablet computer cameras, and gaming console cameras, to name a few examples.
- the process 400 may provide advantages in particular for compact cameras with wide fields of view.
- the DCSC method is provided with a single pre-calibrated color shading correction table for a reference image sensor under one illuminant.
- the single color shading correction table, referred to herein as the reference table T⁰, may be calibrated on a reference image capture device under a typical illumination of a flat field.
- the DCSC process 400 may be performed in the Bayer-image domain, and accordingly a shading correction table may include four sub-tables, each sub-table associated with one of the four Bayer channels R, Gr, Gb, and B.
- image data is received, the image data comprising scene statistics of a captured image.
- Scene statistics refers to a down-sampled version of the captured image.
- Bayer-image statistics X can be denoted as a combination of four per-channel images X R , X Gr , X Gb , and X B , each in R m×n . Since color shading may vary very gradually in the spatial domain, it may be sufficient to base the estimate of the color shading profile on the scene statistics rather than on the full-resolution captured image, significantly reducing the computational complexity of the process.
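As an illustrative sketch (not part of the patent), the four per-channel scene-statistics images could be produced from a Bayer mosaic by sub-sampling the channels and block-averaging each one; the RGGB layout, block size, and function name here are assumptions:

```python
import numpy as np

def bayer_scene_statistics(raw, block=16):
    """Split a Bayer mosaic into its four channels (RGGB layout assumed)
    and block-average each channel into low-resolution scene statistics."""
    planes = {"R": raw[0::2, 0::2], "Gr": raw[0::2, 1::2],
              "Gb": raw[1::2, 0::2], "B": raw[1::2, 1::2]}
    stats = {}
    for name, ch in planes.items():
        h = ch.shape[0] // block * block   # crop to a multiple of the block size
        w = ch.shape[1] // block * block
        stats[name] = (ch[:h, :w]
                       .reshape(h // block, block, w // block, block)
                       .mean(axis=(1, 3)))
    return stats

stats = bayer_scene_statistics(np.random.rand(1024, 1280))
print(stats["R"].shape)  # (32, 40)
```

Each statistics image here is 16 x 16 times smaller per channel than the mosaic, which is the kind of reduction that makes estimating shading from statistics far cheaper than from the full-resolution image.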
- the image statistics may be captured at a higher spatial resolution than the correction tables in some embodiments, and in other embodiments the image statistics and correction tables may be the same resolution.
- step 415 in which the reference table is used to correct the image statistics.
- this correction may be performed separately for each Bayer channel by an element-wise multiplication of the per-channel reference table with the per-channel scene statistics.
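This per-channel element-wise multiplication is simple to express with array operations; a minimal sketch, where the shapes and channel names are assumptions:

```python
import numpy as np

CHANNELS = ("R", "Gr", "Gb", "B")

def correct_statistics(stats, table):
    """Correct scene statistics with a shading correction table by
    element-wise multiplication, separately for each Bayer channel."""
    return {ch: stats[ch] * table[ch] for ch in CHANNELS}

# Hypothetical 32x40 statistics and reference table.
stats = {ch: np.full((32, 40), 0.5) for ch in CHANNELS}
ref_table = {ch: np.full((32, 40), 1.2) for ch in CHANNELS}
corrected = correct_statistics(stats, ref_table)
print(corrected["R"][0, 0])  # 0.6
```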
- the process 400 estimates the color shading in the image statistics corrected by the reference table.
- a color shading estimation technique may be used to separate out a slow-varying shading function from the actual image content.
- the corrected scene statistics in the RGB Bayer domain may be transformed into hue components.
- the corrected scene statistics may be transformed into hue components H1 and H2.
- Each hue channel represents a linear combination (in logarithmic domain) of a color shading component and an intrinsic (actual) color component representing true hue content of the scene.
- the color shading estimation may recover the shading components from the observed hue components for the image captured.
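The transformation equations are not reproduced in this text, but since the two shading components are later described as R/G and B/G components combined additively in the logarithmic domain, one plausible form (an assumption, including the averaging of the two green channels) is H1 = log(R/G) and H2 = log(B/G):

```python
import numpy as np

def hue_components(stats, eps=1e-12):
    """Assumed hue transform: log-domain R/G and B/G ratios of the
    corrected scene statistics, averaging the Gr and Gb channels."""
    g = 0.5 * (stats["Gr"] + stats["Gb"])
    h1 = np.log(stats["R"] + eps) - np.log(g + eps)   # R/G hue channel
    h2 = np.log(stats["B"] + eps) - np.log(g + eps)   # B/G hue channel
    return h1, h2

# A perfectly gray (flat) scene yields zero hue in both channels.
flat = {ch: np.full((8, 8), 0.25) for ch in ("R", "Gr", "Gb", "B")}
h1, h2 = hue_components(flat)
```

In the log domain the multiplicative shading becomes an additive term, which is what lets the estimation treat the observed hue as shading plus intrinsic color.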
- a single-channel color shading estimation may be performed.
- the color shading may be a smooth function of the incident angle of light arriving at the image sensor. Since the pre-calibrated reference table is also smooth spatially, the color shading left over after correcting the scene statistics with the reference table may also be a slow-varying signal. Therefore, the shading component S may be a smooth mask with dense and small gradients.
- the intrinsic color component I of the scene statistics may be a piece-wise smooth pattern which contains a small number of non-zero gradients. Accordingly, we can recover the two additive components S and I from the observed hue H by minimizing the number of non-zero gradients in I and the magnitude of gradients in S.
- the measure of gradient sparsity on a two-dimensional (2D) map Z ∈ R m×n , i.e., the number of non-zero gradient components, can be defined as follows: C(Z) ≜ #{i | D x,i Z ≠ 0, D y,i Z ≠ 0}. (2)
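A direct, hypothetical implementation of the sparsity measure C(Z) counts the locations with a non-zero partial gradient; reading the comma in Eq. (2) as an "or", and using forward differences with replicated boundaries, are implementation assumptions:

```python
import numpy as np

def gradient_sparsity(Z, tol=1e-12):
    """Count locations where the 2D partial gradients of Z are non-zero
    (the measure C(Z) of Eq. (2)), up to a numerical tolerance."""
    dx = np.diff(Z, axis=1, append=Z[:, -1:])   # horizontal partial gradient
    dy = np.diff(Z, axis=0, append=Z[-1:, :])   # vertical partial gradient
    return int(np.count_nonzero((np.abs(dx) > tol) | (np.abs(dy) > tol)))

flat = np.ones((6, 6))                       # no gradients anywhere
step = np.ones((6, 6)); step[:, 3:] = 2.0    # one vertical edge
print(gradient_sparsity(flat), gradient_sparsity(step))  # 0 6
```

A smooth shading mask scores low on this measure only if its gradients are exactly zero, which is why the magnitude of S's gradients is penalized separately from the count of I's gradients.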
- the color shading estimation can recover the color shading component S and intrinsic color component I by performing the following minimization: min S,I M(S) + λC(I) subject to S + I = H. (3)
- ⁇ is a weight parameter directly controlling the significance of the gradient sparsity of I and M (S) ⁇ i (D x,i ⁇ right arrow over (S) ⁇ ) 2 +(D y,i ⁇ right arrow over (S) ⁇ ) 2 denotes the Sum-of-the-Squares-of-Gradient-Magnitudes (SSGM) on S. Accordingly, some embodiments of the color shading estimation enforce the smoothness of S by minimizing its squared gradient magnitudes, while seeking the piecewise smoothness (or gradient sparsity) of I by minimizing the number of its non-zero gradient components.
- the sparsity measure C(I) can be replaced by two different variants C 1 (I) and C 2 (I), which respectively define the sparsity of I in the gradient-magnitude and partial-gradient domains:
- the two additive components may be recovered from the observed hue by minimizing the number of non-zero gradients in the intrinsic color component and the magnitude of gradients in the shading component.
- Some embodiments may calculate the color shading estimate through an iterative process, while other embodiments may calculate it through a single iteration to reduce latency. For example, to calculate color shading correction tables in real time for some or all frames in a video sequence, the color shading may be estimated through a single iteration in order to reduce run time.
- a joint-channel color shading estimation may be performed to recover the R/G and B/G components of color shading from two observed hue channels in a joint manner.
- Such joint-channel color shading estimation may be calculated based on the fact that the intrinsic color components often contain non-zero gradients at the same locations.
- the separate-channel color shading estimation framework described above can be extended to recover the R/G and B/G components of color shading from the two observed hue channels in a joint manner.
- the joint-channel color shading estimation can take advantage of the fact that the intrinsic color components I 1 and I 2 often contain non-zero gradients at the same locations. Given the observed hue channels H 1 and H 2 , the joint-channel color shading estimation technique recovers its intrinsic component (I 1 , I 2 ) and color shading component (S 1 , S 2 ) by
- the process 400 dynamically updates the reference table.
- the reference table may be dynamically updated to yield the correction table optimal for correcting the image.
- the correction table may be calculated for the t-th frame in the video sequence, where t denotes the time of the captured frame.
- the table may be updated according to the following set of equations, where the division operations may be performed element-wise:
- T R t = T R 0 / S 1 t ,
- T Gr t = T Gr 0 ,
- T Gb t = T Gb 0 ,
- T B t = T B 0 / S 2 t . (7)
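The update of Eq. (7) divides the red and blue sub-tables element-wise by the estimated shading components while leaving the green sub-tables untouched; a sketch, assuming the shading components have already been converted to the linear (not logarithmic) domain:

```python
import numpy as np

def update_tables(ref, s1, s2):
    """Eq. (7): element-wise division of the R and B sub-tables of the
    reference table by the estimated shading components for frame t."""
    return {"R": ref["R"] / s1, "Gr": ref["Gr"],
            "Gb": ref["Gb"], "B": ref["B"] / s2}

channels = ("R", "Gr", "Gb", "B")
ref = {ch: np.full((4, 4), 1.5) for ch in channels}  # hypothetical reference table
s1 = np.full((4, 4), 1.25)   # estimated R/G shading component
s2 = np.full((4, 4), 0.75)   # estimated B/G shading component
t = update_tables(ref, s1, s2)
print(t["R"][0, 0], t["B"][0, 0])  # 1.2 2.0
```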
- the process 400 uses the updated reference table to correct color shading in the captured image. Accordingly, the DCSC process 400 does not require costly per-unit-calibration and is robust to module-to-module variations. In addition, the DCSC process 400 neither assumes that the illuminant associated with the captured image is included in the pre-calibration, nor relies on a white balance algorithm to give an accurate detection of the scene illuminant.
- FIG. 5 illustrates an embodiment of a color shading estimation process 500 .
- Some implementations of process 500 can take place at block 420 of FIG. 4 .
- the color shading estimation process 500 can be implemented alone or as a part of any technique suitable for color shading estimation or correction.
- the process 500 can be executed by any system suitable for color shading estimation.
- the process 500 begins at optional block 505 in which the color shading estimation module 310 determines the allowable latency for the process 500 .
- the color shading estimation module 310 may determine whether the input image data is a video sequence of images being corrected in real time, corresponding to a low amount of allowable latency.
- the amount of allowable latency may be higher.
- an embodiment of the iterative sub-process can take somewhere in the range of approximately 200 ms to approximately 300 ms, which can be considered a large latency for real-time applications of dynamic color shading correction.
- the color shading estimation module 310 determines whether to execute an iterative or aggressive color shading estimation sub-process.
- the iterative sub-process can require tens of iterations to converge, causing latency in dynamic color shading correction, while the aggressive sub-process can obtain a valid solution after only one iteration.
- the iterative color shading estimation process can be executed for higher quality results in circumstances associated with higher allowable amounts of latency, greater processing capabilities, or for joint-channel color shading estimation, in some implementations.
- the aggressive color shading estimation process can be executed in circumstances associated with lower allowable amounts of latency or in systems with lower processing capabilities, in some implementations.
- optional blocks 505 and 510 can be bypassed and the color shading estimation module 310 can be configured to perform one of the iterative and aggressive color shading estimation sub-processes.
- the iterative color shading estimation sub-process begins at block 530 in which the color shading estimation module 310 obtains the hue component or components from scene statistics representing a captured image.
- the color shading estimation module 310 initializes an iterative problem of solving for the color shading component and the intrinsic color component, for example as defined above in Eq. (3).
- some embodiments can apply an augmented Lagrangian method by introducing a Lagrangian multiplier Y ∈ R m×n and an over-regularization parameter β to transform the equality constraint of Eq. (3) into the augmented Lagrangian function as follows:
- the simplified augmented Lagrangian function can be minimized by iterating between the following two steps: Solve S k+1 , G x k+1 , G y k+1 , I k+1 by min L(S, G x , G y , I, Y k ). (11) Update Y k+1 ← Y k + β(S k+1 + I k+1 − H). (12)
- the color shading estimation module 310 can separately optimize the color shading component S and partial gradient parameters G x and G y as defined in Eq. (11) with the intrinsic color component I k and Lagrangian multiplier Y k fixed.
- the color shading component S can be optimized by minimizing:
- the color shading estimation module 310 can recover I with S k+1 , G x k+1 , G y k+1 , Y k fixed according to Eq. (11).
- the intrinsic color component I can be recovered by minimizing:
- the color shading estimation module 310 can update Y according to Eq. (12).
- the color shading estimation module 310 can determine whether S and I have converged. If S and I have not converged, then the process 500 loops back to block 540 to perform another iteration of optimization of Eq. (11) and Eq. (12). If S and I have converged, then the process 500 transitions to block 560 to output the color shading component S, for example for color shading correction.
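As a loose, simplified illustration of this alternation (not the patent's exact algorithm): the S-step below solves the quadratic smoothness subproblem in closed form in the Fourier domain under a periodic-boundary assumption, and the I-step is a crude stand-in for the l0 subproblem that keeps only large residual structure; the parameters lam and beta are illustrative:

```python
import numpy as np

def estimate_shading_iterative(H, lam=0.02, beta=2.0, iters=10):
    """Alternate a closed-form smooth update for S with a thresholded
    residual update for I, so that H is decomposed as roughly S + I."""
    m, n = H.shape
    # Eigenvalues of the discrete Laplacian under periodic boundaries (<= 0).
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m) - 2.0
    lap = wy[:, None] + wx[None, :]
    S = np.zeros_like(H)
    I = np.zeros_like(H)
    for _ in range(iters):
        # S-step: min_S sum|grad S|^2 + (beta/2)||S - (H - I)||^2, via FFT.
        S = np.real(np.fft.ifft2(np.fft.fft2(H - I) * beta / (beta - 2.0 * lap)))
        # I-step (stand-in for the l0 subproblem): keep large residual structure.
        R = H - S
        I = np.where((R - R.mean()) ** 2 > lam, R, R.mean())
    return S, I

S, I = estimate_shading_iterative(np.full((8, 8), 0.3))
```

On a constant hue field the scheme is stable: S converges to the field itself and I to zero, since there is no intrinsic structure to separate out.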
- the aggressive color shading estimation sub-process begins at block 515 in which the color shading estimation module 310 obtains the hue component or components from scene statistics representing a captured image.
- the intrinsic color component I has much larger gradient magnitudes than the color shading component S. This indicates that a roughly correct estimate of color shading can be obtained in most cases without tens of iterations between the piecewise-smoothness enforcement on I and the smooth filtering on S.
- the color shading estimation module 310 directly detects the partial gradients of the color shading component S from those of the hue channel H according to their values.
- the partial gradients (G x , G y ) of the color shading component S can be jointly recovered as:
- Ĝ x,i = D x,i H if (D x,i H)² is below a threshold, and 0 otherwise;
- Ĝ y,i = D y,i H if (D y,i H)² is below a threshold, and 0 otherwise. (17)
- the partial gradients at the bottom boundary and the right boundary can be recovered as follows:
- the color shading estimation module 310 reconstructs the color shading component using the detected partial gradients.
- the color shading component S can be recovered by two sequential steps, as shown by Eq. (19) and Eq. (20).
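The two steps can be sketched end-to-end: hard-threshold the partial gradients of the hue channel, keeping only the small gradients as shading gradients (cf. Eq. (17); the threshold value lam is an assumption), then integrate the kept gradient field back into a zero-mean S with a Fourier-domain Poisson solve, mirroring the Fourier and inverse Fourier transforms mentioned in the text:

```python
import numpy as np

def estimate_shading_aggressive(H, lam=1e-3, tau=1e-10):
    """One-shot shading estimate: threshold the gradients of H, then
    reconstruct S from the kept gradients via an FFT Poisson solve
    (periodic boundaries assumed); S is shifted to zero mean."""
    m, n = H.shape
    dx = np.diff(H, axis=1, append=H[:, :1])       # periodic forward differences
    dy = np.diff(H, axis=0, append=H[:1, :])
    gx = np.where(dx ** 2 <= lam, dx, 0.0)         # small gradients -> shading
    gy = np.where(dy ** 2 <= lam, dy, 0.0)
    # Divergence of the thresholded gradient field (periodic backward differences).
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m) - 2.0
    lap = wy[:, None] + wx[None, :]                # Laplacian eigenvalues
    F = np.fft.fft2(div) / (lap + tau * (lap == 0.0))
    F[0, 0] = 0.0                                  # fix the free mean
    S = np.real(np.fft.ifft2(F))
    return S - S.mean()

# A smooth, low-amplitude hue field is recovered (up to its mean) as shading,
# while a sharp step is rejected entirely by the gradient threshold.
x = 0.01 * np.sin(2.0 * np.pi * np.arange(16) / 16)
H = np.tile(x, (16, 1))
S = estimate_shading_aggressive(H)
```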
- the framework for dynamic color shading correction (DCSC) described herein can estimate the color shading on the fly from the scene statistics of a captured image and use the estimated shading for color shading correction.
- At the core of the DCSC framework is a color shading estimation method that separates out the color shading component from the actual image content by its unique characteristic in the gradient domain.
- An iterative color shading estimation process can solve this problem by applying the alternating direction method of multipliers.
- An aggressive color shading estimation process can be used to reduce the running time of the color shading estimation to be less than approximately 10 ms in some embodiments, which enables the DCSC to handle a dynamic scene.
- the technology is operational with numerous other general purpose or special purpose computing system environments or configurations.
- Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, processor-based systems, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
- instructions refer to computer-implemented steps for processing information in the system. Instructions can be implemented in software, firmware or hardware and include any type of programmed step undertaken by components of the system.
- a processor may be any conventional general purpose single- or multi-chip processor such as a Qualcomm® processor, a Pentium® processor, a Pentium® Pro processor, an 8051 processor, a MIPS® processor, a Power PC® processor, or an Alpha® processor.
- the processor may be any conventional special purpose processor such as a digital signal processor or a graphics processor.
- the processor typically has conventional address lines, conventional data lines, and one or more conventional control lines.
- each of the modules comprises various subroutines, procedures, definitional statements and macros.
- Each of the modules is typically separately compiled and linked into a single executable program. Therefore, the description of each of the modules is used for convenience to describe the functionality of the preferred system.
- the processes that are undergone by each of the modules may be arbitrarily redistributed to one of the other modules, combined together in a single module, or made available in, for example, a shareable dynamic link library.
- the system may be used in connection with various operating systems such as Linux®, UNIX® or Microsoft Windows®.
- the system may be written in any conventional programming language such as C, C++, BASIC, Pascal, or Java, and run under a conventional operating system.
- C, C++, BASIC, Pascal, Java, and FORTRAN are industry standard programming languages for which many commercial compilers can be used to create executable code.
- the system may also be written using interpreted languages such as Perl, Python or Ruby.
- the various illustrative logical blocks and modules described herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), or a field programmable gate array (FPGA).
- a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
- a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
- the functions and methods described may be implemented in hardware, software, or firmware executed on a processor, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium.
- Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
- a storage medium may be any available media that can be accessed by a computer.
- such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- any connection is properly termed a computer-readable medium.
- the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave
- the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.
- Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Description
C(Z) ≜ #{i | D x,i Z ≠ 0, D y,i Z ≠ 0}, (2)
where #{·} is the counting operator and D i Z ≜ [D x,i Z; D y,i Z] denotes the 2D partial gradients of Z at location i (note: D x,i , D y,i ∈ R 1×mn ).
where ∥·∥ 0 denotes the ℓ 0 -norm.
where [I 1 : I 2 ] ∈ R m×n×2 denotes a 3D cube consisting of two 2D layers I 1 and I 2 . The gradient sparsity measures for joint-channel color shading estimation are denoted as:
where D x ∈ R mn×mn is generated by concatenating D x,i , 1 ≤ i ≤ mn, along the vertical direction, and D y ∈ R mn×mn is generated in a similar way. By introducing the auxiliary gradient parameters G x and G y , where G x = D x I and G y = D y I, the gradient sparsity measures can be reformulated as:
which can be simplified using an operator splitting technique as follows:
Solve S k+1 , G x k+1 , G y k+1 , I k+1 by min L(S, G x , G y , I, Y k ). (11)
Update Y k+1 ← Y k + β(S k+1 + I k+1 − H). (12)
and the partial gradients Gx and Gy can be optimized by minimizing:
or can be separately recovered as:
where (Gx,i,j, Gy,i,j) denote the partial gradients of S at location (i, j).
where τ is set as a very small constant (e.g., 10 −10 ), the recovered color shading component S is shifted such that its mean is zero, and F and F −1 denote the Fourier and inverse Fourier transforms, respectively.
Claims (29)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/099,603 US9270959B2 (en) | 2013-08-07 | 2013-12-06 | Dynamic color shading correction |
PCT/US2014/049603 WO2015020958A2 (en) | 2013-08-07 | 2014-08-04 | Dynamic color shading correction |
CN201480043956.8A CN105453543B (en) | 2013-08-07 | 2014-08-04 | Dynamic color shading correction |
EP14756150.0A EP3031202B1 (en) | 2013-08-07 | 2014-08-04 | Dynamic color shading correction |
KR1020167004457A KR101688373B1 (en) | 2013-08-07 | 2014-08-04 | Dynamic color shading correction |
JP2016526184A JP6054585B2 (en) | 2013-08-07 | 2014-08-04 | Dynamic color shading correction |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361863353P | 2013-08-07 | 2013-08-07 | |
US14/099,603 US9270959B2 (en) | 2013-08-07 | 2013-12-06 | Dynamic color shading correction |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150042844A1 US20150042844A1 (en) | 2015-02-12 |
US9270959B2 true US9270959B2 (en) | 2016-02-23 |
Family
ID=52448331
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/099,603 Active 2034-02-06 US9270959B2 (en) | 2013-08-07 | 2013-12-06 | Dynamic color shading correction |
Country Status (6)
Country | Link |
---|---|
US (1) | US9270959B2 (en) |
EP (1) | EP3031202B1 (en) |
JP (1) | JP6054585B2 (en) |
KR (1) | KR101688373B1 (en) |
CN (1) | CN105453543B (en) |
WO (1) | WO2015020958A2 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8872853B2 (en) | 2011-12-01 | 2014-10-28 | Microsoft Corporation | Virtual light in augmented reality |
US9652892B2 (en) * | 2013-10-29 | 2017-05-16 | Microsoft Technology Licensing, Llc | Mixed reality spotlight |
GB2550070B (en) | 2015-09-18 | 2021-11-24 | Shanghai United Imaging Healthcare Co Ltd | System and method for computer tomography |
CN106551703B (en) * | 2015-09-30 | 2018-10-30 | 上海联影医疗科技有限公司 | Computer tomography method and computed tomography imaging system |
US10148873B2 (en) * | 2015-12-22 | 2018-12-04 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for motion adaptive fusion of optical images and depth maps acquired by cameras and depth sensors |
CN106886786A (en) * | 2017-02-24 | 2017-06-23 | 上海巽晔计算机科技有限公司 | A kind of effective image processing system |
EP3596695A1 (en) * | 2017-03-15 | 2020-01-22 | Flir Systems, Inc. | Systems and methods for reducing low-frequency non-uniformity in images |
KR102415509B1 (en) | 2017-11-10 | 2022-07-01 | 삼성전자주식회사 | Face verifying method and apparatus |
JP7249207B2 (en) * | 2019-05-28 | 2023-03-30 | シャープ株式会社 | Shading Correction Signal Generating Apparatus, MFP, and Shading Correction Signal Generating Method |
WO2021187432A1 (en) * | 2020-03-16 | 2021-09-23 | 日東電工株式会社 | Optical filter, method for manufacturing same, and optical module |
CN113905218B (en) * | 2021-05-25 | 2022-10-28 | 荣耀终端有限公司 | Color shading correction method, electronic device, chip system and storage medium |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030220741A1 (en) | 2002-05-23 | 2003-11-27 | Olympus Optical Co., Ltd. | Signal processing apparatus and signal processing program |
US20060087707A1 (en) | 2004-10-25 | 2006-04-27 | Konica Minolta Photo Imaging, Inc. | Image taking apparatus |
US20070146506A1 (en) | 2005-12-23 | 2007-06-28 | Microsoft Corporation | Single-image vignetting correction |
US20080101693A1 (en) * | 2006-10-26 | 2008-05-01 | Intelligence Frontier Media Laboratory Ltd | Video image based tracking system for identifying and tracking encoded color surface |
US20090153697A1 (en) | 2007-12-12 | 2009-06-18 | Anthony Michael King | Method For Providing Image Illumination Calibration For An Imaging Apparatus |
US20110090381A1 (en) | 2009-10-20 | 2011-04-21 | Apple Inc. | System and method for processing image data using an image processing pipeline of an image signal processor |
US20110149109A1 (en) * | 2009-12-21 | 2011-06-23 | Electronics And Telecommunications Research Institute | Apparatus and method for converting color of taken image |
US20120120255A1 (en) | 2009-07-21 | 2012-05-17 | Frederic Cao | Method for estimating a defect in an image-capturing system, and associated systems |
US20120249828A1 (en) * | 2011-03-28 | 2012-10-04 | Aptina Imaging Corporation | Apparataus and method of automatic color shading removal in cmos image sensors |
US20120263360A1 (en) * | 2011-04-15 | 2012-10-18 | Georgia Tech Research Corporation | Scatter correction methods |
US20130021484A1 (en) | 2011-07-20 | 2013-01-24 | Broadcom Corporation | Dynamic computation of lens shading |
US20130050529A1 (en) * | 2011-08-26 | 2013-02-28 | Casio Computer Co., Ltd. | Image processing device, image processing method and storage medium |
US20130322701A1 (en) * | 2012-05-30 | 2013-12-05 | Ricoh Company, Ltd. | Printer consistency measurement, evaluation and correction |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4603319B2 (en) * | 2004-09-01 | 2010-12-22 | パナソニック株式会社 | Image input device |
JP2008053931A (en) * | 2006-08-23 | 2008-03-06 | Fujifilm Corp | Imaging apparatus |
Non-Patent Citations (15)
Title |
---|
Aggarwal M., et al., "On Cosine-fourth and Vignetting Effects in Real Lenses," Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, vol. 1, pp. 472-479. IEEE, 2001. |
Agranov G., et al., "Crosstalk and Microlens Study in a Color Cmos Image Sensor," Electron Devices, IEEE Transactions on, 50(1): 4-11, 2003. |
Allen E., et al., "The Manual of Photography and Digital Imaging," Focal Press, 2010; TOC. |
Boyd S., et al., "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers," Foundations and Trends in Machine Learning, 3(1):1122, pp. 1-122, 2011. |
Glowinski, R., "Numerical Methods for Nonlinear Variational Problems," Springer-Verlag, 1984, Chapter VI, pp. 166-194. |
International Search Report and Written Opinion-PCT/US2014/049603-ISA/EPO-Feb. 25, 2015 (133537WO). |
Kim S.J., et al., "Robust Radiometric Calibration and Vignetting Correction," Pattern Analysis and Machine Intelligence, IEEE Transactions on, 30(4): 562-576, 2008. |
Lin Z., et al., "The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted low-rank Matrices," Technical Report UILU-ENG-09-2214, University of Illinois at Urbana-Champaign (UIUC), 2010. |
Partial International Search Report-PCT/US2014/049603-ISA/EPO-Nov. 6, 2014 (133537WO). |
Ray S. F., "Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging," 3rd Edition; Focal Press, 2002; TOC. |
Wang Y., et al., "A New Alternating Minimization Algorithm for Total Variation Image Reconstruction," SIAM J. Imaging Sciences, 1(3):248272, pp. 248-272, 2008. |
Xu L., et al., "Image Smoothing via I0 Gradient Minimization," ACM Transactions on Graphics (TOG)-Proceedings of ACM SIGGRAPH Asia, 11 pages, 30(6), 2011. |
Yang J., et al., "A Fast Alternating Direction Method for TVL1-L2 Signal Reconstruction from Partial Fourier Data," IEEE Journal of Selected Topics in Signal Processing, pp. 282-297, 2008. |
Yu W., et al., "Vignetting Distortion Correction Method for High Quality Digital Imaging," In Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, vol. 3, pp. 666-669. IEEE, 2004. |
Zheng Y., et al., "Single-image Vignetting Correction," Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(12):2243-2256, 2009. |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160155217A1 (en) * | 2014-11-21 | 2016-06-02 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
US9661290B2 (en) * | 2014-11-21 | 2017-05-23 | Samsung Electronics Co., Ltd. | Image processing apparatus and method |
US20190095713A1 (en) * | 2017-09-28 | 2019-03-28 | Gopro, Inc. | Scene classification for image processing |
US10970552B2 (en) * | 2017-09-28 | 2021-04-06 | Gopro, Inc. | Scene classification for image processing |
US11238285B2 (en) | 2017-09-28 | 2022-02-01 | Gopro, Inc. | Scene classification for image processing |
US11216981B2 (en) | 2019-07-26 | 2022-01-04 | Cnh Industrial America Llc | System and method for calibrating image data during an agricultural operation using a color indicator |
Also Published As
Publication number | Publication date |
---|---|
WO2015020958A3 (en) | 2015-04-09 |
KR101688373B1 (en) | 2016-12-20 |
JP6054585B2 (en) | 2016-12-27 |
WO2015020958A2 (en) | 2015-02-12 |
EP3031202A2 (en) | 2016-06-15 |
US20150042844A1 (en) | 2015-02-12 |
CN105453543B (en) | 2017-02-08 |
CN105453543A (en) | 2016-03-30 |
EP3031202B1 (en) | 2019-02-20 |
KR20160040596A (en) | 2016-04-14 |
JP2016525309A (en) | 2016-08-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9270959B2 (en) | Dynamic color shading correction | |
RU2543974C2 (en) | Auto-focus control using image statistics data based on coarse and fine auto-focus scores | |
US10298863B2 (en) | Automatic compensation of lens flare | |
RU2530009C1 (en) | Method and system for processing images with doubled image sensor | |
RU2537038C2 (en) | Automatic white balance processing with flexible colour space selection | |
JP5643320B2 (en) | Temporal filtering technology for image signal processing | |
US8531542B2 (en) | Techniques for acquiring and processing statistics data in an image signal processor | |
US8922704B2 (en) | Techniques for collection of auto-focus statistics | |
US8472712B2 (en) | System and method for applying lens shading correction during image processing | |
US9344636B2 (en) | Scene motion correction in fused image systems | |
AU2010308437A1 (en) | System and method for detecting and correcting defective pixels in an image sensor | |
CN107707789B (en) | Method, computing device and storage medium for providing a color high resolution image of a scene | |
US20220138964A1 (en) | Frame processing and/or capture instruction systems and techniques | |
WO2006112814A1 (en) | Edge-sensitive denoising and color interpolation of digital images | |
Koskiranta | Improving Automatic Imaging Algorithms with Dual Camera System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: QUALCOMM INCORPORATED, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHU, XIANBIAO;JIANG, XIAOYUN;SIDDIQUI, HASIB AHMED;SIGNING DATES FROM 20131126 TO 20131202;REEL/FRAME:031736/0149 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |