WO2010081229A1 - Multiplexed imaging - Google Patents

Multiplexed imaging

Info

Publication number
WO2010081229A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
image
components
filter
imaging array
Prior art date
Application number
PCT/CA2010/000055
Other languages
French (fr)
Inventor
Gordon Wetzstein
Ivo Bodo Ihrke
Wolfgang Heidrich
Original Assignee
The University Of British Columbia
Priority date
Filing date
Publication date
Application filed by The University Of British Columbia filed Critical The University Of British Columbia
Priority to US13/142,851 priority Critical patent/US8860856B2/en
Priority to EP10731003.9A priority patent/EP2387848B1/en
Priority to CN201080004934.2A priority patent/CN102282840B/en
Priority to JP2011545599A priority patent/JP5563597B2/en
Publication of WO2010081229A1 publication Critical patent/WO2010081229A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/21Indexing scheme for image data processing or generation, in general involving computational photography
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • the invention relates to imaging and has application to photography. Some embodiments of the invention relate to the acquisition of high dynamic range (HDR) images. Some embodiments of the invention relate to the acquisition of color images.
  • Various example embodiments of the invention provide cameras, camera systems, imaging arrays, automated image processing systems, and related methods.
  • One aspect of the invention provides methods for obtaining image data.
  • the methods comprise acquiring image data by exposing an imaging array to optical radiation and operating the imaging array.
  • Acquiring the image data comprises spatially modulating a response of the imaging array to each of a plurality of components of the optical radiation according to a corresponding basis function for an invertible transformation.
  • the method applies the transformation to the image data to yield transformed image data.
  • the transformed image data comprises spatially-separated image copies corresponding respectively to the plurality of components.
  • the method extracts the spatially-separated image copies from the transformed image data and applies an inverse of the transformation to each of the extracted image copies.
  • acquiring the image data comprises allowing the optical radiation to pass through an optical filter before interacting with pixels of the imaging array.
  • Another aspect provides an automated method for reconstructing pixel values for saturated pixels in an image.
  • the method comprises obtaining image data comprising a band-limited exposure of an image having some saturated pixels.
  • the exposure is spatially modulated.
  • the spatial modulation occurs at one or more and, in some embodiments, two or more spatial frequencies by functions that differently attenuate the exposure.
  • the method identifies the saturated pixels in the image data, computes a Fourier transform of the image data and sets up an optimization problem in which pixel values for the saturated components are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies.
  • the method numerically solves the optimization problem to obtain pixel values for the saturated pixels.
  • Another aspect of the invention provides imaging arrays comprising a filter wherein the filter transmissivity for each of a plurality of spectral bands varies spatially with a distinct spatial frequency.
  • Another aspect of the invention provides an automated image processing system comprising a processor and software instructions for execution by the processor.
  • the software instructions comprise instructions that configure the processor to: obtain image data comprising a band-limited exposure of an image having some saturated pixels wherein the exposure is spatially modulated at two or more spatial frequencies by functions that differently attenuate the exposure; identify the saturated pixels in the image data; compute a Fourier transform of the image data; set up an optimization problem in which pixel values for the saturated components are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies; and numerically solve the optimization problem to obtain pixel values for the saturated pixels.
  • Figure 1 is a flow chart illustrating a method for color imaging according to a first example embodiment of the invention.
  • Figure 2 illustrates the application of the method of Figure 1 to an example image.
  • Figures 3A, 3B and 3C show tiles that may be assembled to make filters according to example embodiments that have spatial variation at multiple spatial frequencies.
  • Figure 4 is a graph which permits comparison of the translucency of a filter according to an example embodiment of the invention to a prior art filter.
  • Figure 4A is a graph showing translucency of a filter according to an embodiment of the invention as a function of a number of color channels.
  • Figure 5 schematically illustrates the operation in the Fourier domain of a gradient filter according to an example embodiment.
  • Figures 6A and 6B respectively show an example one dimensional image signal for a scanline of an image and its Fourier transform in the absence of clipping (as a result of saturation or otherwise).
  • Figures 6C and 6D respectively show the same example one dimensional image signal for a scanline of an image and its Fourier transform in the presence of clipping.
  • Figure 7 is a flowchart illustrating a method for restoring saturated pixel values according to an example embodiment.
  • Figure 8 is a functional block diagram illustrating a camera system according to an example embodiment.
  • a basic implementation of the invention involves obtaining an exposure to optical radiation in which the response of pixels in an imaging array to different components of the optical radiation is modulated spatially according to two or more different basis functions for a transformation.
  • the transformation is a Fourier transformation
  • the basis functions may be sine or cosine functions.
  • the resulting image data is then transformed according to the transformation to yield transformed image data.
  • information corresponding to the different components is spatially separated. For example, making the exposure through a sum of cosines filter results in exact copies of the original image in higher spatial frequency regions of the Fourier transformed image. That information can be extracted by selecting a corresponding portion of the transformed image data.
  • An inverse transformation performed on the corresponding portion of the transformed image data yields component image data corresponding to one of the components.
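The transform, crop, and inverse-transform sequence just described can be sketched with numpy (the helper name, tile geometry, and the test pattern below are illustrative assumptions, not the patent's):

```python
import numpy as np

def extract_component(image, center, tile_shape):
    """Recover one spatially modulated component from a single exposure.

    `center` is the (row, col) position of the component's copy in the
    centered (fftshift-ed) 2-D Fourier transform; `tile_shape` is the
    size of the Fourier tile to crop. Both follow from the filter design.
    """
    F = np.fft.fftshift(np.fft.fft2(image))
    th, tw = tile_shape
    cy, cx = center
    tile = F[cy - th // 2:cy + th // 2, cx - tw // 2:cx + tw // 2]
    # Inverse transform of the cropped tile yields the component image
    # at reduced resolution (up to a known scale factor).
    return np.fft.ifft2(np.fft.ifftshift(tile)).real
```

For example, a scene modulated by the mask (1 + cos(2π f0 x))/2 places an attenuated copy of the scene's spectrum at frequency ±f0, where this helper can crop it.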
  • the transformation should have the property that multiplication by a function in the spatial domain corresponds to a convolution in the transformed domain.
  • One class of transformation that possesses this property is the Fourier transformation.
  • This basic implementation may be applied in various ways. In some embodiments, spatial modulation according to the basis functions is combined with spectral filtering. Such embodiments facilitate separation of an image into different color components.
  • the color components may be recombined to provide a color image. In an example embodiment the color components correspond at least to red, green and blue (R, G and B) colors.
  • Figure 1 is a flow chart illustrating a method 10 for color imaging according to a first example embodiment of the invention.
  • Figure 2 illustrates the application of method 10 to an example image 21.
  • Block 12 comprises operating an imaging array to obtain image data 22.
  • the technology used to implement the light-sensing pixels is not critical.
  • the pixels may be pixels of a CMOS light sensor, an active pixel sensor (APS) array, a charge-coupled device (CCD) array, etc. Each pixel produces an output that is a function of the light incident on the pixel during the exposure.
  • the imaging array comprises a filter that light passes through before being detected by light-sensing pixels or some other mechanism for spatially modulating a response of the light-sensing pixels to components of the image.
  • the filter is in the image plane. Each pixel therefore receives light that has passed through a corresponding location (x, y) on the filter.
  • the filter has a filter function f that varies with position. In general, f is a function of wavelength, λ, as well as the spatial coordinates x and y.
  • the filter function may, for example, be given by: f(x, y, λ) = Σ_i m_i(x, y) b_i(λ)
  • the b_i(λ) are basis functions that describe the color spectra and the m_i(x, y) are spatial modulation functions.
  • Each color spectrum b_i(λ) can be considered to represent a component of the light (which can be any optical radiation) incident on the imaging array.
  • the color spectra may be non-overlapping and may be color primaries but this is not mandatory. In the following example, the color spectra are non-overlapping color primaries and N is the number of color primaries.
  • the b_i(λ) may, for example, comprise bandpass filters or narrow-band notch filters.
  • other examples of b_i(λ) include filters that filter according to a high-pass and/or a low-pass filter characteristic.
  • in some embodiments, b_i(λ) are provided that pass red, green and blue light respectively.
  • in other embodiments, the b_i(λ) correspond to primary colors of a printing or display device.
  • Filters for application as described herein are not required to have optical densities or transmissivities that vary continuously with position on the filter.
  • the optical density or transmissivity may be constant over the area of any pixel of the imaging array used to acquire image data. This is not mandatory however.
  • the spatial frequencies in the filter are chosen so that the filter is spatially periodic with a period equal to a multiple of the pitch of pixels in the imaging array.
  • the filter may be constructed as an array of identical tiles.
  • Figure 3A shows a 5x5 filter array that may be used as a filter in an embodiment.
  • Figures 3B and 3C show example 4x4 filter arrays that may be used as filters in other example embodiments.
  • Such filters may be patterned directly onto an imaging array.
  • the filter array of Figure 3B encodes three spectral functions (red, green and blue filters) plus a monochromatic channel.
  • a filter like that of Figure 3B may be implemented so as to have a light transmittance of greater than 45%, for example, about 50%. As described below, it is possible to design filters of certain types as described herein that are 50% transmissive for an arbitrary number of channels.
  • the spatial modulation functions m_i(x, y) are basis functions for a transformation.
  • the transformation that will be used is a Fourier transform having basis functions that are cosines having different spatial frequencies.
  • the modulation functions may, for example, be given by: m_i(x, y) = cos(2π(k_1^i f_x^0 x + k_2^i f_y^0 y)), where f_x^0 and f_y^0 are fundamental spatial frequencies.
  • k_1^i and k_2^i are constants (that may be equal to one another, or not, for any one basis function). Either of k_1^i and k_2^i may be zero but k_1^i and k_2^i are not both zero for any one basis function.
  • the blue light component will be spatially modulated at a third spatial frequency (which depends upon the choices made for k_1^3 and k_2^3).
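Masks of this sum-of-cosines family might be generated as follows (a sketch; the function name, the fundamental frequencies, and the normalized non-negative (1 + cos)/2 realization are assumptions consistent with the description, not the patent's exact formulation):

```python
import numpy as np

def cosine_masks(shape, ks, fx0=0.25, fy0=0.25):
    """Normalized sum-of-cosines modulation masks.

    `ks` is a list of (k1, k2) integer pairs, one per component; fx0 and
    fy0 are assumed fundamental frequencies in cycles per pixel. Each
    mask is the non-negative form (1 + cos)/2, which transmits 50% of
    the light on average.
    """
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return [0.5 * (1.0 + np.cos(2 * np.pi * (k1 * fx0 * x + k2 * fy0 * y)))
            for k1, k2 in ks]
```

With fx0 = fy0 = 1/4, each mask repeats every 4 pixels, so the filter can be built as an array of identical 4x4 tiles as in Figures 3B and 3C.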
  • image data acquired in block 12 should be spatially band-limited.
  • the image data may be read out from the imaging array in any suitable manner.
  • a Fourier transform of the image data is determined.
  • Block 14 yields transformed image data 23.
  • the Fourier transformed image includes Fourier transforms of the red, green and blue components of the image; because these components were deliberately spatially modulated at different spatial frequencies, they occur at different spaced-apart locations in the transformed image data. In Figure 2 it can be seen that transformed image 23 has a number of different components 23A: the red, green and blue components are represented at different locations in the Fourier transformed image.
  • the red, green and blue components 24 of the Fourier transformed image are each extracted. This may be achieved by cropping the Fourier transformed image, referencing the corresponding portions of the Fourier transformed image using the areas in the image where it is known that the red, green and blue components will be located, or the like.
  • the areas in the transformed image data corresponding to the components 24 are known because the spatial frequencies with which the red, green and blue components were modulated are known.
  • spatial location corresponds to frequency
  • the transformed image is logically or physically divided into tiles and tiles in the transformed image are associated with the components 24.
  • block 12 may comprise spatially band-limiting the image (i.e. limiting the maximum spatial frequencies present in the originally captured image).
  • the optical system used to focus optical radiation on the imaging array in block 12 may be defocused slightly while acquiring the image; the optical system used to direct light onto the imaging array may include a diffuser, for example a holographic diffuser, in an optical path at or upstream from the imaging array which spreads the optical radiation slightly; an anti-aliasing filter may be provided at the imaging array; or the like.
  • Spatially band-limiting the image ensures that the image data will not include spatial frequencies high enough to cause data corresponding to different components of the image data to overlap in the transformed image data.
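For simulation or testing, the effect of this optical band-limiting can be approximated digitally by discarding Fourier coefficients above a cutoff (a sketch; in the camera the band-limiting happens optically, before capture):

```python
import numpy as np

def band_limit(image, cutoff):
    """Band-limit an image by zeroing Fourier coefficients whose radial
    frequency (in cycles across the image) exceeds `cutoff`. A digital
    stand-in for defocus, a diffuser, or an anti-aliasing filter."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = F.shape
    ky = np.arange(h) - h // 2
    kx = np.arange(w) - w // 2
    KX, KY = np.meshgrid(kx, ky)
    F[np.sqrt(KX**2 + KY**2) > cutoff] = 0.0
    return np.fft.ifft2(np.fft.ifftshift(F)).real
```

After this step the image contains no spatial frequencies high enough to make the Fourier-domain copies of different components overlap.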
  • the transformed image data is made up of a number of spatially separated copies in which each of the copies represents a Fourier transform of the component of the image data corresponding to one of the filter functions b_i(λ).
  • spectral information is optically transformed into spatial frequencies.
  • the Fourier transform creates multiple copies of the scene around the fundamental spatial frequencies of the Fourier transformation.
  • K_x and K_y are integers and the pair of K_x and K_y for any value of i is unique.
  • This provides three basis functions. In this case, in the Fourier transform of the image data, copies corresponding to the different basis functions will be centered in tiles of a 2x2 grid. The spatial frequencies corresponding to the centers of each copy are determined by the values chosen for the fundamental frequencies f_x^0 and f_y^0.
  • the 2D Fourier transform of the image data contains tiles that correspond to the 2D Fourier transform of the original signal, filtered by a specific spectral distribution given by the product of the color channel b_i(λ) and the spectral response of the sensor. This can be expressed mathematically as:
  • each channel can be reconstructed by cropping the corresponding Fourier tile and performing a two-dimensional inverse Fourier transform.
  • Block 16 may comprise extracting either one of the two copies for a channel or extracting both copies and combining them (for example, by adding).
  • the image components are combined to yield a reconstructed image 26 which may be in any suitable image format.
  • the image components may be combined to provide image data in a suitable format: JPEG, TIFF, GIF, PNG, BMP, or RAW (or any other suitable image format).
  • Reconstructed image 26 may be stored, forwarded, sent to a display device for display, sent to a printer for printing or applied in any other desired manner.
  • Method 10 can be performed with any suitable sensor array.
  • image acquisition for method 10 may be performed with a standard digital camera having a suitable filter applied to the imaging array.
  • This filter may be provided in place of a standard color filter (such as the Bayer pattern of red, green, and blue filter elements often provided in the imaging arrays of digital cameras).
  • the example embodiment described above applies a spatially varying, semi-transparent filter that follows a sum-of-cosine distribution of intensities.
  • the filter characteristics b_i do not necessarily filter according to color.
  • the filter characteristics could filter in whole or in part according to some other characteristic of the incident radiation, such as polarization.
  • the filter characteristics are not limited to passing single color components.
  • Transforms of image data and inverse transforms may be determined through the application of general purpose or special purpose programmed data processors and/or by way of a suitably configured logic pipeline (either hardwired, for example, in an application specific integrated circuit (ASIC) or provided in a configurable logic, such as a field programmable gate array (FPGA)).
  • block 14 may be performed by executing a fast Fourier transform (FFT) algorithm or a discrete Fourier Transform (DFT) algorithm.
  • Real filters can only attenuate light but cannot amplify light or produce "negative light". Where an optical filter is used to spatially modulate image components the particular material(s) selected for the filter may have other limitations, such as limited contrast or the like. A real filter that can produce satisfactory results may be achieved, for example, by renormalizing a filter as specified by Equation (4) for each pixel with a linear function.
  • let f_min(x, y) be the minimum transmission value of the filter for a position (x, y) over all wavelengths and let f_max(x, y) be the maximum transmission of the filter for any wavelength at position (x, y).
  • α and β may be chosen to fulfill additional constraints, for example a constraint on overall light transmission. Imaging with such a modified filter produces a modified sensor image S. The individual pixels of S can easily be mapped back into the original range, yielding an image s that can be processed as described above.
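A sketch of this per-position renormalization (here α and β are chosen so that transmissions exactly fill [0, 1]; the patent also allows choices meeting, for example, an overall-transmission constraint — the function name and array layout are assumptions):

```python
import numpy as np

def renormalize(f):
    """Map an ideal filter (which may take negative values) to realizable
    transmissions in [0, 1] via a per-position linear function a*f + b.

    `f` has shape (..., n_wavelengths), wavelength on the last axis.
    Returns (f_real, a, b); a captured value can be mapped back to the
    original range with f = (f_real - b) / a.
    """
    fmin = f.min(axis=-1, keepdims=True)  # minimum over all wavelengths
    fmax = f.max(axis=-1, keepdims=True)  # maximum over all wavelengths
    a = 1.0 / np.maximum(fmax - fmin, 1e-12)
    b = -fmin * a
    return a * f + b, a, b
```

Because a and b are stored, the modified sensor image can be mapped back into the original range before the Fourier-domain processing described above.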
  • Figure 4 shows a comparison of the translucency of a filter as described above (curve 30) to that of a filter (curve 31) based on the assorted-pixels approach described in Narasimhan, S., and Nayar, S. 2005. Enhancing Resolution along Multiple Imaging Dimensions using Assorted Pixels. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 4 (Apr.), 518-530. It can be seen that the filter as described herein is more light efficient, especially for a larger number of channels.
  • the light transmittance of a filter as described herein can be altered, for example, by increasing the ratio of β to α in Equation (5). This increases the DC term of the filter's Fourier transform (which corresponds to the mean light transmittance of the spatial filter).
  • the integral of a single normalized sinusoid is 50%.
  • the total transmissivity of a filter (or 'mask') as described herein can therefore be made to be half of the sum of the transmissivities for the individual primaries b_i(λ).
  • Figure 4 A shows that a filter according to such an embodiment can have a transmissivity of 50% for an arbitrary number of color channels (curve 32).
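The 50% figure is easy to verify numerically under the stated assumptions (N disjoint spectral bands, each gated by one normalized cosine mask; the sizes and frequencies below are illustrative):

```python
import numpy as np

x = np.arange(64)
for n_channels in (1, 3, 8):
    # one normalized cosine mask per spectral band, full cycles per tile
    masks = [0.5 * (1.0 + np.cos(2 * np.pi * (i + 1) * x / 64))
             for i in range(n_channels)]
    # each disjoint band sees exactly one mask, so overall transmission
    # is the mean of the per-band means: it stays at one half no matter
    # how many channels are multiplexed
    overall = np.mean([m.mean() for m in masks])
```

This is the behavior plotted as curve 32 in Figure 4A: transmissivity stays at 50% for an arbitrary number of color channels.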
  • the amount of information multiplexed into the Fourier image data may be increased by modulating the optical radiation according to sinusoids that are offset in phase when acquiring the image data.
  • a filter function may be given by:
  • All of the channels contribute to the copy in the central tile of the Fourier transform image data.
  • This 'DC' component of the Fourier transform image data may be processed by an inverse Fourier transform to yield a luminance image.
  • the luminance image tends to be relatively low in sensor noise.
  • the luminance image may be combined with or used with reconstructed images 25 in the creation of output image 26.
  • an RGB image obtained from reconstructed images 25 may be transformed into a space having a luminance channel (such as a YUV space) by multiplying by a suitable matrix.
  • the resulting luminance value may then be combined with or replaced by the luminance image. If desired a transformation back to RGB space or another desired color space may then be made.
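A sketch of this luminance swap using the BT.601 RGB-to-YUV matrix (one common choice; the description only requires some color space with a luminance channel, and the function name is an assumption):

```python
import numpy as np

# BT.601 full-range RGB <-> YUV matrices
RGB2YUV = np.array([[0.299, 0.587, 0.114],
                    [-0.14713, -0.28886, 0.436],
                    [0.615, -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def replace_luminance(rgb, luminance):
    """Swap the luminance channel of an RGB image (H, W, 3) for a
    separately reconstructed, lower-noise luminance image (H, W), then
    transform back to RGB."""
    yuv = rgb @ RGB2YUV.T
    yuv[..., 0] = luminance
    return yuv @ YUV2RGB.T
```

In the method described above, `luminance` would come from inverse-transforming the DC tile, which tends to be relatively low in sensor noise.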
  • the ratio between image resolution and the number of Fourier copies is non-fractional.
  • the number of pixels that make up a spatial tile equals the number of Dirac peaks in the Fourier domain. This can be achieved through appropriate selection of the fundamental frequencies f_x^0 and f_y^0 in the x- and y-directions.
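This alignment condition can be checked numerically: when the mask period divides the image size, the mask's DFT consists of exact Dirac peaks that land on single DFT bins; when the ratio is fractional, the peaks leak across many bins. A one-dimensional sketch with assumed sizes:

```python
import numpy as np

N, T = 64, 4                                      # image size, mask period; N/T integral
x = np.arange(N)
mask = 0.5 * (1.0 + np.cos(2 * np.pi * x / T))    # fundamental frequency 1/T
M = np.fft.fft(mask)
peaks = np.flatnonzero(np.abs(M) > 1e-6)          # exact Dirac peaks

# a fractional ratio (N/T = 12.8) spreads the spectrum over all bins
mask_frac = 0.5 * (1.0 + np.cos(2 * np.pi * x / 5))
leaked = np.count_nonzero(np.abs(np.fft.fft(mask_frac)) > 1e-6)
```

With N/T integral, the only nonzero bins are 0 and ±N/T (bin 48 is the negative frequency -16), so Fourier tiles line up exactly with DFT bins.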
  • Optical filter kernels may be designed to encode other image information in different spatial frequencies such that records of the image information are recorded at different spaced-apart locations in the Fourier domain.
  • the following example applies a filter that approximates a derivative or gradient of the spatial frequency of an image. This information has various applications.
  • δ is the Dirac delta function and ⊗ represents convolution.
  • a sine function is the function in the spatial domain that corresponds to a Dirac delta function in the Fourier domain. Therefore, copies in the Fourier domain that represent the first derivative of the Fourier transform can be produced by applying a spatial optical filter having the following form:
  • a filter is made by giving ε some small value.
  • a schematic one-dimensional illustration of the application of a filter like that defined in Equation (12) is shown in Figure 5. This filter can readily be generalized to two dimensions.
  • a filter which modulates the exposure of images with two sine waves having slightly different frequencies may be applied as described above to permit recovery of the two-dimensional Fourier gradient of a signal.
  • Saturation can be a particular difficulty in the acquisition of high dynamic range images. This is both because high dynamic range images are expected to reproduce details in shadow and/or highlight areas that a conventional image would not, and because high dynamic range images may be desired in cases where high image quality is desired. Saturation results in the loss of detail in highlight and/or shadow regions of an image.
  • One aspect of the invention provides methods for reconstructing saturated regions of images. As illustrated in Figures 6A to 6D, it is typical that saturation artificially introduces higher spatial frequencies into an image. These higher frequencies may be distributed over the entire frequency domain. Compare Figures 6A and 6B, which respectively show a band-limited signal made up of a single scan line taken from a high dynamic range image and its Fourier transform, to Figures 6C and 6D, showing the same signal clipped at an intensity level of 0.8 and its Fourier transform. It can be seen in Figure 6B that the spatial frequencies present in the signal of Figure 6A are all confined within a band 34. This is expected because the signal represented by Figure 6A is band-limited. By contrast, Figure 6D shows that the spatial frequencies present in the signal of Figure 6C are spread over a broad spectrum and include substantial high frequency components.
  • Figure 7 illustrates an example method 50 for reconstructing saturated portions of an image.
  • Method 50 generates monochrome images but can be generalized to generate color images as described below.
  • method 50 obtains a band-limited image at an imaging array.
  • Block 52 comprises applying a known spatial modulation to the image.
  • the spatial modulation may be imposed by passing incident optical radiation through a filter having a filter function comprising a sum of two or more sinusoids (e.g. cosines or sines) or more generally a spatial variation that is periodic with first and second spatial frequencies. Different ones of the sinusoids may have different amplitudes.
  • Block 52 yields image data.
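Block 52 can be simulated digitally to generate test data (a sketch; all parameters are assumed, and in a real system the modulation is imposed optically before the sensor clips):

```python
import numpy as np

def multiplexed_exposure(scene, f1=0.25, f2=0.125, a1=1.0, a2=0.25, clip=1.0):
    """Simulate a band-limited exposure through a mask summing two
    normalized cosines with different amplitudes (so the scene is
    recorded twice, with different attenuations), then clip at the
    sensor's saturation level."""
    h, w = scene.shape
    x = np.arange(w)
    # mask normalized so transmission lies in [0, 1]
    mask = 0.5 * (a1 * (1.0 + np.cos(2 * np.pi * f1 * x))
                  + a2 * (1.0 + np.cos(2 * np.pi * f2 * x))) / (a1 + a2)
    raw = scene * mask
    return np.minimum(raw, clip), raw >= clip   # image data, saturated-pixel mask
```

The returned saturation mask corresponds to the identification of saturated pixels in block 53.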
  • Block 53 identifies saturated pixels in the image data.
  • Block 54 determines a Fourier transform of the image data obtained in block 52. This results in a transformed image that includes a number of differently-scaled copies. In the absence of saturation the copies are spaced apart from one another in Fourier space. However, saturation introduces high frequencies and so, in the general case, these copies will be corrupted even if pixels under the least transmissive of the neutral density filters do not saturate.
  • the next part of method 50 can be understood by noting that we have two pieces of information about the nature of the image data obtained in block 52.
  • the first is that the original signal, before being modulated, is band-limited. Therefore, the captured image should not contain any high spatial frequencies.
  • the second is that the filter copies the original signal with varying attenuation coefficients.
  • An image L may be decomposed into a region in which the signal will be saturated in the image data, represented by L_sat, and a region in which the corresponding image data will not be saturated, represented by L_unsat.
  • L_unsat is equal to L but has zeros in all saturated image parts.
  • L_sat has zeros in all unsaturated pixels and unknown values elsewhere. Since the Fourier transform is linear, the same relation holds for the Fourier representations of the signal's components, namely:
  • a goal of reconstructing saturated portions of image data is to determine the unknown values in L_sat from the image data, or at least to determine values which result in acceptable image characteristics.
  • T_i is a tile describing a single copy of the signal in the Fourier domain and is given by:
  • s_i are the scaling factors for individual tiles in the Fourier domain as determined by the modulation applied to the signal and η represents sensor noise.
  • T_i can be written (neglecting s_i) as:
  • T_i = F_i{L_unsat} + F_i{L_sat} + η (16)
  • F_i is the Fourier transform that transforms a full-resolution image from the spatial domain into the subset of the frequency space that is spanned by tile i.
  • F_i may be obtained by applying a rectangular discrete Fourier transform (DFT) of size pq × mn.
  • In Equation (16), the term F_i{L_unsat} can readily be computed from the captured image data (neglecting the effect of sensor noise).
  • F_i{L_sat} includes the unknown values (the non-zero pixels of L_sat).
  • saturated values e.g. values at the maximum output of the imaging array, or values above a saturation threshold
  • saturation noise in the Fourier domain, including high frequency saturation noise outside of the frequency band of the band-limited signal.
  • the sensor noise η is independently distributed in the spatial domain and follows a Gaussian noise distribution in the per-pixel image intensities. With this assumption, F{η} has a uniform power spectrum with a Gaussian characteristic in each Fourier coefficient. This noise model is reasonable for many real image sensors for values above the noise level of the imaging array. Making a suitable assumption about the sensor noise (a suitable assumption in some cases could be that there is no sensor noise) facilitates the application of a quadratic error norm for optimization in Fourier space.
  • Equation (17) can be expressed as a linear system of equations having as unknowns values for the clipped pixels of L_sat.
  • the system of linear equations can be solved to yield a solution or approximate solution which tends to minimize the error measure.
  • R and F are matrices.
  • Matrix R will not generally be of full rank. This may be addressed by making a suitable assumption about the nature of the expected solution. For example, one can impose a condition on the nature of the combined signal L_sat + L_unsat. In this example, the condition imposes a spatial smoothness constraint, specifically a curvature minimizing term. Adding the additional constraint to Equation (18) yields Equation (19).
  • Γ is a regularizer, in this case a spatial curvature operator, and λ is a weighting factor.
  • a least-squares description of the error measure may be obtained by differentiating Equation (19) with respect to L_sat and setting the gradient to zero. This least squares description may be expressed as:
  • one side of Equation (20) is constant and is determined by the values in the unsaturated parts of the image data.
  • a set of values for the saturated pixels in L_sat may be obtained by solving Equation (20). This is done numerically. For example, the conjugate gradient for least squares algorithm described in Hansen, P. C. 1998. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, may be applied to obtain a solution to Equation (20). This algorithm is advantageous because it permits matrix multiplications to be implemented by fast image processing routines instead of constructing the actual matrices. Other suitable optimization algorithms may also be applied; many such algorithms are known.
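A minimal dense sketch of conjugate gradient for least squares with a Tikhonov-style term (illustrative only: the patent's solver works matrix-free with FFT-based products and a curvature regularizer, whereas this sketch forms the matrix explicitly and uses the identity as regularizer):

```python
import numpy as np

def cgls(A, b, reg=0.0, iters=100, tol=1e-12):
    """Minimize ||A x - b||^2 + reg * ||x||^2 by conjugate gradients on
    the normal equations, without forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()
    s = A.T @ r - reg * x
    p = s.copy()
    gamma = s @ s
    for _ in range(iters):
        if gamma < tol:          # gradient of the objective is ~0: done
            break
        q = A @ p
        alpha = gamma / (q @ q + reg * (p @ p))
        x += alpha * p
        r -= alpha * q
        s = A.T @ r - reg * x
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In the matrix-free setting, the products `A @ p` and `A.T @ r` are replaced by FFTs, crops, and masks, which is what makes the approach practical for full-size images.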
  • Method 50 applies the approach described above by, in block 55, constructing an optimization problem in which the unknown variables are the pixel values in saturated regions of the image data and applying an optimization algorithm to obtain best-fitting values for the unknown pixel values in block 56.
  • Reconstructed image data is obtained in block 57 by inserting the pixel values determined in block 56 into the saturated regions of the image data.
  • Some embodiments optionally comprise applying an estimation algorithm to estimate true values for the pixel values in saturated regions of the image data prior to applying the optimization algorithm in block 56.
  • the estimation could be based on the assumption that the pixel values will have a local maximum at or near a centroid of a saturated region and will vary smoothly from that local maximum to the values at the boundary of the saturated region.
  • the value of the maximum may be selected based on one or more of: a size of the saturated region (e.g. a distance from the centroid to the closest point on the boundary of the saturated region); and gradients of the pixel values near boundaries of the saturated region. Making an estimate (in general a sensible rough guess) as to likely pixel values in the saturated region may improve the rate of convergence of the optimization algorithm applied in block 56. The estimate may be generated automatically.
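The estimation heuristic described above might be sketched as follows for a single scanline. The function and the tent-shaped test signal are illustrative assumptions: the saturated run is filled by extending the gradients observed just outside the run and letting the two ramps meet at an interior peak, giving a smooth initial guess for the optimizer.

```python
import numpy as np

def estimate_saturated_run(y, lo, hi):
    """Estimate true values for the saturated run y[lo..hi] (inclusive) by
    extending the gradients observed just outside the run; the two linear
    ramps meet at a peak inside the region (a rough smoothness heuristic)."""
    gl = y[lo - 1] - y[lo - 2]          # slope entering from the left
    gr = y[hi + 1] - y[hi + 2]          # slope entering from the right
    i = np.arange(lo, hi + 1)
    left_ramp = y[lo - 1] + gl * (i - (lo - 1))
    right_ramp = y[hi + 1] + gr * ((hi + 1) - i)
    return np.minimum(left_ramp, right_ramp)

# Clipped tent signal: true peak 10, sensor clips at 7.
x = np.arange(21)
true = 10.0 - np.abs(x - 10)
clipped = np.minimum(true, 7.0)
est = estimate_saturated_run(clipped, 7, 13)   # indices where true >= 7
assert np.allclose(est, true[7:14])            # the tent is recovered exactly
```

For this piecewise-linear test case the heuristic recovers the true peak exactly; on real images it would only be a starting point for the optimization in block 56.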
  • a basic outline of method 50 is: • obtain a band-limited exposure of an image and, in doing so, apply one or more filter functions that spatially modulate the exposure at distinct spatial frequencies and attenuate the exposure by different amounts;
  • the image data may represent a dynamic range greater than that of an imaging array applied to obtain the original image data.
  • a method like method 50 may be performed for a luminance channel of an image.
  • a method like method 50 may be performed individually for different color components within an image. In the latter case, different color components may be spatially modulated during exposure at each of two or more spatial frequencies with a different average level of attenuation at each of the spatial frequencies.
  • Method 50 and variants thereof may also be practiced in cases where a spatial modulation is imposed by color filters that have two or more spatial frequencies such that, in the Fourier domain, images taken with such filters present two or more copies of the image.
  • Tiled filter arrays such as the commonly-used Bayer filter have this characteristic. Therefore, in an alternative embodiment, band-limited image data is acquired using an imaging array comprising a Bayer color filter. A Fourier transform of the resulting image data is determined. Saturated pixels in the image data are identified. An optimization involving unknown values for the saturated pixels is constructed using two or more of the copies from the Fourier domain, and the optimization is solved subject to suitable constraints to yield values for the saturated pixels.
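The claim that a Bayer-like tiled filter produces spectral copies can be checked numerically. The sketch below is an assumption-level illustration, not the patent's processing pipeline: it builds a checkerboard mask resembling the green sites of a Bayer mosaic and confirms that its spectrum consists of a DC term plus exactly one copy-carrier at the Nyquist corner.

```python
import numpy as np

N = 8
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
green = ((x + y) % 2 == 0).astype(float)   # checkerboard of green sites
F = np.fft.fft2(green)
# The mask equals 0.5 + 0.5*cos(pi*(x+y)), so its spectrum has exactly two
# nonzero coefficients: DC and a carrier at the Nyquist corner (N/2, N/2).
assert np.isclose(abs(F[0, 0]), N * N / 2)
assert np.isclose(abs(F[N // 2, N // 2]), N * N / 2)
nonzero = np.argwhere(np.abs(F) > 1e-9)
assert len(nonzero) == 2
```

Any image seen through such a mosaic therefore appears in the Fourier domain convolved with these two impulses, i.e. as a baseband copy plus a modulated copy, which is the property the reconstruction above exploits.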
  • all components of the image and/or all data for reconstruction of saturated areas may be obtained from a single exposure. Multiple exposures are not required.
  • processing to extract the various components of the images may be performed in various ways.
  • a camera or other image acquisition device incorporates logic for processing the captured image to extract image components as described herein and/or to extract the image components to perform further processing on the image components.
  • the captured image is transferred from an image acquisition device to another processor (which could be a personal computer, computer system, or the like) and some or all processing of the image data may be performed on the device to which the image data is downloaded. Processing, as described herein, may be performed automatically at a processor when images are transferred from a camera or other imaging device to the processor.
  • FIG. 8 illustrates a camera system 60 according to an example embodiment.
  • Camera system 60 comprises a camera 61 having an optical system 62 that focuses light onto an image plane 63 where the light can be imaged by an imaging array 64.
  • a blurring filter 66 and a filter 65 that applies spatial modulation to different image components are provided at imaging array 64.
  • a control system 68 operates imaging array 64 to obtain exposures of a scene and stores the resulting image data in a data store 67.
  • camera 61 is connected (or connectable) to a host processing system 70 that performs processing on the image data acquired by camera 61.
  • a single device provides functions of camera 61 and host processing system 70 or functions are allocated between camera 61 and host processing system 70 in some alternative manner.
  • Host processing system 70 comprises a Fourier transformation function 72 that computes a Fourier transform of image data retrieved from camera 61.
  • a data extraction component 74 is configured to extract Fourier transforms of different image components and to provide the extracted Fourier transforms to an inverse Fourier transform component 76.
  • An image encoding system 78 receives image components from inverse Fourier transform component 76, generates image data in a desired format from the image components and stores the resulting image data in a data store 79.
  • host processing system 70 comprises a display 80 and a printer 81.
  • a display driver 82 is configured to display on display 80 images corresponding to image data in data store 79.
  • a printer driver 83 is configured to print on printer 81 images corresponding to image data in data store 79.
  • an optimization system 85 receives image components from data extraction component 74 and generates values for the image data in saturated image regions. These values are provided to image encoding system 78 which incorporates them into the image data.
  • host processing system 70 comprises a general purpose computer system and the components of host processing system 70 are provided by software executing on one or more processors of the computer system. In other embodiments, at least some functional blocks of host processor system 70 are provided by hard-wired or configurable logic circuits. In other embodiments, at least some functional blocks of host processor system 70 are provided by a special purpose programmed data processor such as a digital signal processor or graphics processor.
  • Optical filters may be printed directly on an imaging array or provided in one or more separate layers applied to the imaging array.
  • the optical filters provided by way of a spatial light modulator in the light path to the imaging array may be changed during an exposure.
  • the spatial light modulator may be set to modulate incoming light with a first spatial frequency or a first set of spatial frequencies during a first part of the exposure and to switch to modulating the incoming optical radiation with a second spatial frequency or a second set of spatial frequencies during a second part of the exposure.
  • the first and second parts of the exposure are optionally of different lengths.
  • different copies in the Fourier domain provide temporal information and may also provide differently-exposed copies of the image that may be used for high dynamic range reconstruction, reconstructing saturated pixel values or the like.
  • the optical modulation of the spatial filter is rotated relative to the pixel grid of the imaging array.
  • different spectral copies have slightly different sub-pixel alignment. This can be used to recover the original image resolution by performing de-convolution with a small filter kernel corresponding to the optical low pass filter used for band-limiting.
  • the filter is rotated relative to a pixel array
  • certain types of filters may be implemented by selectively controlling the sensitivities of different sets of pixels in an imaging array. This may be done, for example, by setting the exposure times for different sets of pixels to be different or by using pixels of different sizes to vary light sensitivity across pixels. In either case, the sensitivity of the pixels may be caused to vary in a pattern having a spatial frequency that imposes a desired modulation of some image component in the image.
  • a camera provides a separate sensor for capturing information for recreating the luminance of saturated areas in an image.
  • the separate sensor may apply any of the methods as described above.
  • the separate sensor may have a relatively low resolution (although this is not mandatory) since glare tends to limit the effective spatial resolution of high dynamic range content.
  • Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method according to the invention for processing image data from a modulated exposure, as described herein.
  • one or more processors in a camera and/or an image processing system into which images from a camera are transferred may implement methods as described herein by executing software instructions in a program memory accessible to the processors.
  • the invention may also be provided in the form of a program product.
  • the program product may comprise any medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention.
  • Program products according to the invention may be in any of a wide variety of forms.
  • the program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, or the like.
  • the computer-readable instructions on the program product may optionally be compressed or encrypted.
  • a component e.g. a software module, processor, assembly, device, circuit, etc.
  • reference to that component should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

An imaging method comprises acquiring image data in which image components are spatially modulated at distinct spatial frequencies, transforming the image data into the Fourier domain and separating the image components in the Fourier domain. The image components may be transformed into the spatial domain. The image components may comprise different colors. In some embodiments saturated pixels are reconstructed by performing an optimization based on differences between image copies in the Fourier domain. Imaging apparatus may perform the imaging methods.

Description

MULTIPLEXED IMAGING
Cross Reference to Related Applications
[0001] This application claims priority from United States Patent Application No. 61/145689 filed on 19 January 2009 and entitled MULTIPLEXED IMAGING. For the purposes of the United States, this application claims the benefit under 35 U.S.C. §119 of United States Patent Application No. 61/145689 filed on 19 January 2009 and entitled MULTIPLEXED IMAGING which is hereby incorporated herein by reference.
Technical Field
[0002] The invention relates to imaging and has application to photography. Some embodiments of the invention relate to the acquisition of high dynamic range (HDR) images. Some embodiments of the invention relate to the acquisition of color images.
Summary of the Invention
[0003] Various example embodiments of the invention provide: cameras, camera systems,
imaging arrays for cameras, • methods for obtaining images,
methods for extracting multiple image characteristics from image data, and
apparatus for extracting multiple image characteristics from image data.
[0004] One aspect of the invention provides methods for obtaining image data. The methods comprise acquiring image data by exposing an imaging array to optical radiation and operating the imaging array. Acquiring the image data comprises spatially modulating a response of the imaging array to each of a plurality of components of the optical radiation according to a corresponding basis function for an invertible transformation. The method applies the transformation to the image data to yield transformed image data. The transformed image data comprises spatially-separated image copies corresponding respectively to the plurality of components. The method extracts the spatially-separated image copies from the transformed image data and applies an inverse of the transformation to each of the extracted image copies.
[0005] In some embodiments acquiring the image data comprises allowing the optical radiation to pass through an optical filter before interacting with pixels of the imaging array.
[0006] Another aspect provides an automated method for reconstructing pixel values for saturated pixels in an image. The method comprises obtaining image data comprising a band-limited exposure of an image having some saturated pixels. The exposure is spatially modulated. The spatial modulation occurs at one or more and, in some embodiments, two or more spatial frequencies by functions that differently attenuate the exposure. The method identifies the saturated pixels in the image data, computes a Fourier transform of the image data and sets up an optimization problem in which pixel values for the saturated components are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies. The method numerically solves the optimization problem to obtain pixel values for the saturated pixels.
[0007] Another aspect of the invention provides imaging arrays comprising a filter wherein the filter transmissivity for each of a plurality of spectral bands varies spatially with a distinct spatial frequency.
[0008] Another aspect of the invention provides an automated image processing system comprising a processor and software instructions for execution by the processor. The software instructions comprise instructions that configure the processor to: obtain image data comprising a band-limited exposure of an image having some saturated pixels wherein the exposure is spatially modulated at two or more spatial frequencies by functions that differently attenuate the exposure; identify the saturated pixels in the image data; compute a Fourier transform of the image data; set up an optimization problem in which pixel values for the saturated components are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies; and numerically solve the optimization problem to obtain pixel values for the saturated pixels.
[0009] Further aspects of the invention and features of specific embodiments of the invention are described below.
Brief Description of the Drawings
[0010] The accompanying drawings illustrate non-limiting example embodiments of the invention.
[0011] Figure 1 is a flow chart illustrating a method for color imaging according to a first example embodiment of the invention.
[0012] Figure 2 illustrates the application of the method of Figure 1 to an example image.
[0013] Figures 3A, 3B and 3C show tiles that may be assembled to make filters according to example embodiments that have spatial variation at multiple spatial frequencies.
[0014] Figure 4 is a graph which permits comparison of the translucency of a filter according to an example embodiment of the invention to a prior art filter.
[0015] Figure 4A is a graph showing translucency of a filter according to an embodiment of the invention as a function of a number of color channels.
[0016] Figure 5 schematically illustrates the operation in the Fourier domain of a gradient filter according to an example embodiment.
[0017] Figures 6A and 6B respectively show an example one dimensional image signal for a scanline of an image and its Fourier transform in the absence of clipping (as a result of saturation or otherwise). Figures 6C and 6D respectively show the same example one dimensional image signal for a scanline of an image and its Fourier transform in the presence of clipping.
[0018] Figure 7 is a flowchart illustrating a method for restoring saturated pixel values according to an example embodiment.
[0019] Figure 8 is a functional block diagram illustrating a camera system according to an example embodiment.
Description
[0020] Throughout the following description, specific details are set forth in order to provide a more thorough understanding of the invention. However, the invention may be practiced without these particulars. In other instances, well known elements have not been shown or described in detail to avoid unnecessarily obscuring the invention. Accordingly, the specification and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
[0021] A basic implementation of the invention involves obtaining an exposure to optical radiation in which the response of pixels in an imaging array to different components of the optical radiation is modulated spatially according to two or more different basis functions for a transformation. For example, where the transformation is a Fourier transformation the basis functions may be sine or cosine functions. The resulting image data is then transformed according to the transformation to yield transformed image data. In the transformed image data, information corresponding to the different components is spatially separated. For example, making the exposure through a sum-of-cosines filter results in exact copies of the original image in higher spatial frequency regions of the Fourier transformed image. That information can be extracted by selecting a corresponding portion of the transformed image data. An inverse transformation performed on the corresponding portion of the transformed image data yields component image data corresponding to one of the components.
[0022] To best achieve separation of the components in the transformed image data the transformation should have the property that multiplication by a function in the spatial domain corresponds to a convolution in the transformed domain. One class of transformation that possesses this property is the Fourier transformation.
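The modulation-then-separation behaviour described in these paragraphs can be illustrated in one dimension with a few lines of numpy; the signal, carrier frequency, and inspection thresholds below are arbitrary choices for the demonstration, not values from the patent.

```python
import numpy as np

N = 64
n = np.arange(N)
# Band-limited 1D "component": spatial frequencies 0 and 2 only.
signal = 1.0 + 0.5 * np.cos(2 * np.pi * 2 * n / N)
carrier_k = 16
# Spatially modulate the component with a cosine basis function.
modulated = signal * np.cos(2 * np.pi * carrier_k * n / N)
mags = np.abs(np.fft.fft(modulated))
# Multiplication by a cosine in the spatial domain convolves the spectrum
# with two impulses: copies of the component appear at +/- the carrier.
assert mags.argmax() in (carrier_k, N - carrier_k)
assert np.isclose(mags[carrier_k], mags[N - carrier_k])  # symmetric pair
# No energy remains near DC, so the shifted copies are fully separated.
assert mags[:5].max() < 1e-9
```

The two symmetric copies carry the same information (the spectrum of a real signal is Hermitian), which is why the description later notes that either copy of a pair, or both combined, can be used for reconstruction.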
[0023] This basic implementation may be applied in various ways. In some embodiments, spatial modulation according to the basis functions is combined with spectral filtering. Such embodiments facilitate separation of an image into different color components. The color components may be recombined to provide a color image. In an example embodiment the color components correspond at least to red, green and blue (R, G and B) colors.
[0024] Figure 1 is a flow chart illustrating a method 10 for color imaging according to a first example embodiment of the invention. Figure 2 illustrates the application of method 10 to an example image 21. Block 12 comprises operating an imaging array to obtain image data 22. The technology used to implement the light-sensing pixels is not critical. By way of non-limiting example, the pixels may be pixels of a CMOS light sensor, an active pixel sensor (APS) array, a charge-coupled device (CCD) array, etc. Each pixel produces an output that is a function of the light incident on the pixel during the exposure.
[0025] The imaging array comprises a filter that light passes through before being detected by light-sensing pixels or some other mechanism for spatially modulating a response of the light-sensing pixels to components of the image. In this example embodiment the filter is in the image plane. Each pixel therefore receives light that has passed through a corresponding location (x, y) on the filter. The filter has a filter function f that varies with position. In general, f is a function of wavelength, λ, as well as the spatial coordinates x and y. The image data s(x, y) is therefore given, in general, by:

s(x, y) = ∫ f(x, y, λ) τ(x, y, λ) I(x, y, λ) dλ    (1)

where τ(x, y, λ) is the response of the pixels of the sensing array to light; and I(x, y, λ) is the light irradiance on a sensor pixel. Where τ is the same for all pixels then τ can be given as τ(λ).
[0026] The filter function f may be given by:

f(x, y, λ) = Σ_{i=1}^{N} a_i(x, y) b_i(λ)    (2)

where b_i(λ) are basis functions that describe the color spectra. Each color spectrum b_i(λ) can be considered to represent a component of the light (which can be any optical radiation) incident on the imaging array. The color spectra may be non-overlapping and may be color primaries but this is not mandatory. In the following example, the color spectra are non-overlapping color primaries and N is the number of color primaries. b_i(λ) may, for example, comprise bandpass filters or narrow-band notch filters. In some embodiments b_i(λ) include filters that filter according to a high-pass and/or a low-pass filter characteristic. In some embodiments, b_i(λ) are provided that pass red, green and blue light respectively. In some embodiments, b_i(λ) correspond to primary colors of a printing or display device.
[0027] Filters for application as described herein are not required to have optical densities or transmissivities that vary continuously with position on the filter. The optical density or transmissivity may be constant over the area of any pixel of the imaging array used to acquire image data. This is not mandatory however.
[0028] In some embodiments, the spatial frequencies in the filter are chosen so that the filter is spatially periodic with a period equal to a multiple of the pitch of pixels in the imaging array. In such embodiments, the filter may be constructed as an array of identical tiles. For example, Figure 3A shows a 5x5 filter array that may be used as a filter in an embodiment and Figures 3B and 3C show example 4x4 filter arrays that may be used as filters in other example embodiments. Such filters may be patterned directly onto an imaging array.
[0029] The filter array of Figure 3B encodes three spectral functions (red, green and blue filters) plus a monochromatic channel. A filter like that of Figure 3B may be implemented so as to have a light transmittance of greater than 45%, for example, about 50%. As described below, it is possible to design filters of certain types as described herein that are 50% transmissive for an arbitrary number of channels.
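The tiling property of paragraph [0028] can be illustrated numerically. In this sketch the carrier frequency pairs are illustrative assumptions, chosen as integer multiples of one cycle per tile so that the resulting pattern repeats exactly every T pixels and could be assembled from identical T x T tiles.

```python
import numpy as np

T = 4                       # tile size in pixels
H = W = 16                  # imaging-array size (a multiple of the tile size)
x, y = np.meshgrid(np.arange(W), np.arange(H), indexing="xy")
# One (kx, ky) carrier pair per channel; integer multiples of 1/T cycles
# per pixel guarantee T-periodicity (the particular pairs are illustrative).
carriers = [(0, 1), (1, 0), (1, 1)]
filt = sum(np.cos(2 * np.pi * (kx * x + ky * y) / T) for kx, ky in carriers)
# Every frequency divides the tile size, so the filter repeats with period T
# in both directions and could be manufactured as an array of identical tiles.
assert np.allclose(filt[:T, :T], filt[T:2 * T, :T])
assert np.allclose(filt[:T, :T], filt[:T, T:2 * T])
```

In a physical filter each cosine term would additionally be weighted by its spectral function b_i(λ) and renormalized into the [0, 1] transmission range, as described later in the text.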
[0030] The a_i are basis functions for a transformation. In this example the transformation that will be used is a Fourier transform having basis functions that are cosines having different spatial frequencies. In this case, a_i can be given by:

a_i(x, y) = cos(2π k_1^i x + 2π k_2^i y)    (3)

where k_1^i and k_2^i are constants (that may be equal to one another, or not, for any one basis function). Either of k_1^i and k_2^i may be zero but k_1^i and k_2^i are not both zero for any one basis function.
[0031] Consider the case where a_1 corresponds to a filter b_1 which passes red light, a_2 corresponds to a filter b_2 which passes green light, and a_3 corresponds to a filter b_3 which passes blue light. When an exposure is taken, the red light component of the image is spatially modulated at a first spatial frequency (which depends upon the choices made for k_1^1 and k_2^1), the green light component will be spatially modulated at a second spatial frequency (which depends upon the choices made for k_1^2 and k_2^2), and the blue light component will be spatially modulated at a third spatial frequency (which depends upon the choices made for k_1^3 and k_2^3).
[0032] As described below, image data acquired in block 12 should be spatially band-limited.
[0033] The image data may be read out from the imaging array in any suitable manner. In block 14, a Fourier transform of the image data is determined. Block 14 yields transformed image data 23. In the Fourier transformed image data, different spatial frequencies are represented at different locations. Therefore, the Fourier transformed image includes Fourier transforms of the red, green and blue components of the image (which have deliberately been spatially modulated at different spatial frequencies) and these occur at different spaced-apart locations in the transformed image data. In Figure 2 it can be seen that transformed image 23 has a number of different components 23A. It can be seen that red, green and blue components are represented at different locations in the Fourier transformed image.
[0034] In block 16, the red, green and blue components 24 of the Fourier transformed image are each extracted. This may be achieved by cropping the Fourier transformed image, referencing the corresponding portions of the Fourier transformed image using the areas in the image where it is known that the red, green and blue components will be located, or the like. The areas in the transformed image data corresponding to the components 24 are known because the spatial frequencies with which the red, green and blue components were modulated are known. In the transformed image data, spatial location corresponds to frequency. In some embodiments the transformed image is logically or physically divided into tiles and tiles in the transformed image are associated with the components 24.
[0035] As mentioned above, block 12 may comprise spatially band-limiting the image (i.e. limiting the maximum spatial frequencies present in the originally captured image). This may be achieved in various ways. For example, the optical system used to focus optical radiation on the imaging array in block 12 may be defocussed slightly while acquiring the image; the optical system used to direct light onto the imaging array may include a diffuser, for example a holographic diffuser, in an optical path at or upstream from the imaging array which spreads the optical radiation slightly; an anti-aliasing filter may be provided at the imaging array; or the like. Spatially band-limiting the image ensures that the image data will not include spatial frequencies high enough to cause data corresponding to different components of the image data to overlap in the transformed image data. With such spatial band-limiting the transformed image data is made up of a number of spatially separated copies in which each of the copies represents a Fourier transform of the component of the image data corresponding to one of the filter functions b_i(λ).
[0036] When capturing a spatially band-limited scene through the filter defined by Equation (2) with a_i as defined in Equation (3), spectral information is optically transformed into spatial frequencies. Specifically, the Fourier transform creates multiple copies of the scene around the fundamental spatial frequencies of the Fourier transformation. One can choose spatial frequencies such that the resulting copies will be conveniently arranged in the transformed image data.
[0037] Consider the case where the spatial variation of the filter is given by:

f(x, y, λ) = Σ_i cos(2π k_x^i f_x^0 x + 2π k_y^i f_y^0 y) b_i(λ)    (4)

where k_x^i and k_y^i are integers and the pair (k_x^i, k_y^i) for any value of i is unique. A suitable set of basis functions can be obtained by allowing the values of k_x^i and k_y^i to each range over 0 to Q. For example, with Q=1, (k_x, k_y) = {(0,1), (1,0), (1,1)} (where (k_x, k_y) = (0,0) is trivial and excluded). This provides three basis functions. In this case, in the Fourier transform of the image data, copies corresponding to the different basis functions will be centered in tiles of a 2x2 grid. The spatial frequencies corresponding to the centers of each copy are determined by the values chosen for the fundamental frequencies f_x^0 and f_y^0.
[0038] The 2D Fourier transform of the image data contains tiles that correspond to the 2D Fourier transform of the original signal, filtered by a specific spectral distribution given by the product of color channel b_i(λ) and the spectral sensor response τ(λ). This can be expressed mathematically as:

F{L_s}(f_x, f_y) = Σ_i F{∫ L(x, y, λ) b_i(λ) τ(λ) dλ}(f_x − k_x^i f_x^0, f_y − k_y^i f_y^0)    (5)

where F represents the Fourier transform and L_s represents the image data. Hence, each channel can be reconstructed by cropping the corresponding Fourier tile and performing a two-dimensional inverse Fourier transform.
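Paragraphs [0037] and [0038] together suggest the following end-to-end sketch: two band-limited components are cosine-modulated and summed into one "sensor" image, and each is then recovered by shifting its Fourier copy to DC, cropping a band-limited tile, and inverse transforming. The image size, carrier frequencies, band limit, and test components are illustrative assumptions.

```python
import numpy as np

N, B = 64, 4                       # image size and band limit (cycles/frame)
x, y = np.meshgrid(np.arange(N), np.arange(N), indexing="xy")
# Two band-limited "color" components (all spatial frequencies well below B).
c1 = 1.0 + 0.5 * np.cos(2 * np.pi * x / N)
c2 = 1.0 + 0.5 * np.cos(2 * np.pi * y / N)
# Modulate each component with its own cosine carrier and sum, mimicking a
# single sensor exposure through the multiplexing filter.
s = c1 * np.cos(2 * np.pi * 16 * x / N) + c2 * np.cos(2 * np.pi * 16 * y / N)

def demux(sensor, kx, ky, band):
    """Recover one component: shift its Fourier copy to DC, crop a
    band-limited tile around it, and inverse-transform (the factor 2
    undoes the 1/2 that cosine modulation puts into each copy)."""
    S = np.fft.fft2(sensor)
    S = np.roll(S, shift=(-ky, -kx), axis=(0, 1))   # rows index y, columns x
    f = np.fft.fftfreq(sensor.shape[0], d=1.0 / sensor.shape[0])
    fy, fx = np.meshgrid(f, f, indexing="ij")
    S[~((np.abs(fx) <= band) & (np.abs(fy) <= band))] = 0  # crop the tile
    return 2.0 * np.real(np.fft.ifft2(S))

assert np.allclose(demux(s, 16, 0, B), c1, atol=1e-9)
assert np.allclose(demux(s, 0, 16, B), c2, atol=1e-9)
```

Because the components are band-limited well inside the spacing between carriers, the cropped tiles do not overlap and the recovery is exact up to floating-point error; with insufficient band-limiting the copies would alias into each other, which is why the description insists on an optical low-pass step.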
[0039] Due to the symmetry of the Fourier transform of a cosine function a pair of copies corresponds to each channel. The two copies in each pair encode the same information. Block 16 may comprise extracting either one of the two copies for a channel or extracting both copies and combining them (for example, by adding).
[0040] In block 18, an inverse Fourier transform is computed for each of the components
24 extracted from the Fourier transformed image data 23. This yields reconstructed images
25 for each color channel.
[0041] In block 19, the image components are combined to yield a reconstructed image 26 which may be in any suitable image format. For example, the image components may be combined to provide image data in a suitable: JPEG, TIFF, GIF, PNG, BMP, or RAW data format (or any other suitable image format). Reconstructed image 26 may be stored, forwarded, sent to a display device for display, sent to a printer for printing or applied in any other desired manner.
[0042] Method 10 can be performed with any suitable sensor array. For example, image acquisition for method 10 may be performed with a standard digital camera having a suitable filter applied to the imaging array. This filter may be provided in place of a standard color filter (such as the Bayer pattern of red, green, and blue filter elements often provided in the imaging arrays of digital cameras). The example embodiment described above applies a spatially varying, semi-transparent filter that follows a sum-of-cosine distribution of intensities.
[0043] The methods described above are not limited to three colors but may be practiced with any number of color components. Also, the filter characteristics bt do not necessarily filter according to color. The filter characteristics could filter in whole or in part according to some other characteristic of the incident radiation, such as polarization. Also, the filter characteristics are not limited to passing single color components. One could, for example, modulate at one spatial frequency a filter which passes light at a plurality of different wavelength bands and blocks light having wavelengths between the bands.
[0044] Transforms of image data and inverse transforms may be determined through the application of general purpose or special purpose programmed data processors and/or by way of a suitably configured logic pipeline (either hardwired, for example, in an application specific integrated circuit (ASIC) or provided in a configurable logic, such as a field programmable gate array (FPGA)). For example, in the example embodiment described above, block 14 may be performed by executing a fast Fourier transform (FFT) algorithm or a discrete Fourier Transform (DFT) algorithm.
[0045] Real filters can only attenuate light but cannot amplify light or produce "negative light". Where an optical filter is used to spatially modulate image components the particular material(s) selected for the filter may have other limitations, such as limited contrast or the like. A real filter that can produce satisfactory results may be achieved, for example, by renormalizing a filter as specified by Equation (4) for each pixel with a linear function.
[0046] For example, let f_min(x,y) be the minimum transmission value of the filter for a position (x,y) over all wavelengths and let f_max(x,y) be the maximum transmission of the filter for any wavelength at position (x,y). A physically realizable filter f̂(x,y,λ) can be defined as:

f̂(x,y,λ) = γ f(x,y,λ) + φ (6)

where:

γ = 1 / (f_max(x,y) − f_min(x,y)) (7)

and:

φ = − f_min(x,y) / (f_max(x,y) − f_min(x,y)) (8)
Different values of φ and γ are possible to fulfill additional constraints, for example a constraint on overall light transmission. Imaging with such a modified filter produces a modified sensor image ŝ. The individual pixels of ŝ can easily be mapped back into the original range, yielding an image s that can be processed as described above.
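The renormalization of Equations (6) to (8) can be sketched in a few lines of NumPy. This is an illustrative implementation under the assumption that the ideal filter is supplied as an (H, W, wavelength) array; the per-pixel γ and φ used here are one valid choice among many:

```python
import numpy as np

def realizable_filter(f):
    """Map an ideal filter f(x, y, lambda), which may take values outside
    [0, 1], onto a physically realizable transmission as in Equation (6).
    The per-pixel gamma and phi below are one valid choice; any pair that
    keeps gamma*f + phi inside [0, 1] would also work."""
    f_min = f.min(axis=2, keepdims=True)   # minimum over wavelengths
    f_max = f.max(axis=2, keepdims=True)   # maximum over wavelengths
    gamma = 1.0 / (f_max - f_min)
    phi = -f_min / (f_max - f_min)
    return gamma * f + phi, gamma, phi

def ideal_from_realizable(f_hat, gamma, phi):
    # Invert the linear remapping to return to the original range.
    return (f_hat - phi) / gamma
```

Because the remapping is linear and its coefficients are known, inverting it on the captured data is exact (up to sensor noise and quantization).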
[0047] One advantage of the methods described above over the use of conventional red, green, and blue filters arranged in a Bayer pattern or some other arrangement for obtaining color signals is that the filter can be more light-efficient than would be a standard filter capable of the same color separation. Figure 4 shows a comparison of the translucency of a filter as described above (curve 30) to that of a filter (curve 31) based on the assorted pixels approach as described in Narasimhan, S., and Nayar, S. 2005. Enhancing Resolution along Multiple Imaging Dimensions using Assorted Pixels. IEEE Transactions on Pattern Analysis and Machine Intelligence 27, 4 (Apr), 518-530. It can be seen that the filter as described herein is more light-efficient, especially for a larger number of channels. The light transmittance of a filter as described herein can be altered, for example, by increasing the ratio of φ to γ in Equation (5). This increases the DC term of the filter's Fourier transform (which corresponds to the mean light transmittance of the spatial filter).
[0048] The integral of a single normalized sinusoid is 50%. The total transmissivity of a filter (or 'mask') as described herein can therefore be made to be half of the sum of the transmissivities for the individual primaries b_i(λ). Figure 4A shows that a filter according to such an embodiment can have a transmissivity of 50% for an arbitrary number of color channels (curve 32).

[0049] The amount of information multiplexed into the Fourier image data may be increased by modulating the optical radiation according to sinusoids that are offset in phase when acquiring the image data. For example, a filter function may be given by:

f(x,y,λ) = Σ_{i=1}^{N} [ b¹_i(λ) cos(2π(f_x^i x + f_y^i y)) + b²_i(λ) sin(2π(f_x^i x + f_y^i y)) ] (9)
[0050] Providing two phase-shifted sinusoids at each spatial frequency permits encoding of two spectral functions b¹_i(λ) and b²_i(λ) with i ∈ {1 ... N} at one spatial frequency. The resulting Fourier transform is no longer real, but complex. If m and n identify tiles in the Fourier domain that would contain copies of the same information if the filter of Equation (4) were used, then the images filtered with spectra b¹_n(λ) and b²_n(λ) may be recovered from:

Re{T_m + T_n} and Im{T_m − T_n} (10)

respectively.
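A one-dimensional numerical sketch may clarify the recovery step. Below, two band-limited signals are modulated onto cosine and sine carriers at the same frequency; summing and differencing the two Fourier tiles separates them again. All names and parameters are illustrative, not taken from the patent:

```python
import numpy as np

N, k0, K = 256, 64, 8          # samples, carrier frequency, signal bandwidth
rng = np.random.default_rng(1)

def bandlimited(rng):
    # Random real signal whose spectrum is confined to |k| <= K.
    spec = np.zeros(N, complex)
    spec[0] = rng.normal()
    for k in range(1, K + 1):
        c = rng.normal() + 1j * rng.normal()
        spec[k], spec[N - k] = c, np.conj(c)   # Hermitian symmetry -> real
    return np.fft.ifft(spec).real

a, b = bandlimited(rng), bandlimited(rng)      # the two encoded components
x = np.arange(N)
carrier = 2 * np.pi * k0 * x / N
s = a * np.cos(carrier) + b * np.sin(carrier)  # phase-shifted multiplexing

S = np.fft.fft(s)
mask = np.zeros(N)
mask[:K + 1] = mask[N - K:] = 1                # keep one tile's bandwidth
Tm = mask * np.roll(S, -k0)    # tile centred on +k0: (A - iB)/2
Tn = mask * np.roll(S, +k0)    # tile centred on -k0: (A + iB)/2
a_rec = np.fft.ifft(Tm + Tn).real              # sum recovers A, hence a
b_rec = np.fft.ifft(1j * (Tm - Tn)).real       # i * difference recovers b
```

With k0 well clear of the band K, the two tiles do not overlap and the separation is exact up to floating-point error.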
[0051] All of the channels contribute to the copy in the central tile of the Fourier transform image data. This 'DC' component of the Fourier transform image data may be processed by an inverse Fourier transform to yield a luminance image. The luminance image tends to be relatively low in sensor noise. The luminance image may be combined with or used with reconstructed images 25 in the creation of output image 26. For example, an RGB image obtained from reconstructed images 25 may be transformed into a space having a luminance channel (such as a YUV space) by multiplying by a suitable matrix. The resulting luminance value may then be combined with or replaced by the luminance image. If desired, a transformation back to RGB space or another desired color space may then be made.

[0052] It can be appreciated that there is a trade-off between the number of channels being recorded and the spatial resolution of the channels. In some embodiments the ratio between image resolution and the number of Fourier copies is non-fractional. In such embodiments, the number of pixels that make up a spatial tile equals the number of Dirac peaks in the Fourier domain. This can be achieved through appropriate selection of the fundamental frequencies f_x⁰ and f_y⁰ in the x- and y-directions.
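The luminance substitution described above can be sketched in a few lines. The matrix below uses BT.601-style coefficients as one plausible choice of "suitable matrix"; the patent does not prescribe specific values:

```python
import numpy as np

# BT.601-style RGB <-> YUV conversion; the specific coefficients are an
# assumption for illustration, not taken from the patent.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def replace_luminance(rgb, luma):
    """Substitute the Y channel of an (H, W, 3) RGB image with a
    separately reconstructed luminance image (e.g. from the DC tile)
    and convert back to RGB."""
    yuv = rgb @ RGB2YUV.T          # forward transform
    yuv[..., 0] = luma             # swap in the low-noise luminance
    return yuv @ np.linalg.inv(RGB2YUV).T
```

Any invertible matrix whose first row yields a luminance channel could be used in the same way.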
[0053] The methods and apparatus described herein are not limited to extracting color information. For example, with suitable filters one can encode any or a combination of: spectral information, polarization information, temporal information, and dynamic range information. Optical filter kernels may be designed to encode other image information in different spatial frequencies such that records of the image information are recorded at different spaced-apart locations in the Fourier domain. The following example applies a filter that approximates a derivative or gradient of the spatial frequency of an image. This information has various applications.
[0054] Consider computing a derivative of a function by means of convolution. One can do this by providing two samples of the function spaced closely together and with inverse signs. This can be expressed (in Fourier space) as follows:

dF(ω)/dω = lim_{ε→0} [ (δ(ω − ε) − δ(ω + ε)) / (2ε) ] ⊗ F(ω) (11)

where δ is the Dirac delta function and ⊗ represents convolution. A sine function is the function in the spatial domain that corresponds to such a pair of oppositely-signed Dirac delta functions in the Fourier domain. Therefore, copies in the Fourier domain that represent the first derivative dF/dω can be produced by applying a spatial optical filter having the following form:

f(x) = sin(2πεx) / (2ε) (12)
and then performing a Fourier transform of the resulting image data. In practice, a filter is made by giving ε some small value. A schematic one-dimensional illustration of the application of a filter like that defined in Equation (12) is shown in Figure 5. This filter can readily be generalized to two dimensions. A filter which modulates the exposure of images with two sine waves having slightly different frequencies may be applied as described above to permit recovery of the two-dimensional Fourier gradient of a signal.
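The relationship underlying Equation (11) can be checked numerically: modulating a signal by a single sine produces exactly two oppositely-signed shifted copies of its spectrum, which is the central-difference stencil that approximates the spectral derivative. A small NumPy check (parameters are illustrative):

```python
import numpy as np

N = 128
rng = np.random.default_rng(0)
f = rng.normal(size=N)
F = np.fft.fft(f)

# Modulate by a single sine whose period is the full window, i.e. the
# carrier sits exactly one DFT bin from DC (epsilon = one bin).
n = np.arange(N)
g = f * np.sin(2 * np.pi * n / N)
G = np.fft.fft(g)

# F{f * sin} = (F[k-1] - F[k+1]) / (2i): two oppositely-signed shifted
# copies, i.e. the central-difference stencil of Equation (11).
expected = (np.roll(F, 1) - np.roll(F, -1)) / 2j
```

The identity holds exactly for a carrier aligned with a DFT bin; with a small ε the difference quotient approaches the true spectral derivative.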
[0055] One issue that arises in image acquisition is saturation. Saturation can be a particular difficulty in the acquisition of high dynamic range images. This is both because high dynamic range images have the capability of displaying details in shadow and/or highlight areas that a conventional image would not be expected to reproduce and because high dynamic range images may be desired in cases where high image quality is desired. Saturation results in the loss of detail in highlighted and/or shadow regions of an image.
[0056] One aspect of the invention provides methods for reconstructing saturated regions of images. As illustrated in Figures 6A to 6D, it is typical that saturation artificially introduces higher spatial frequencies into an image. These higher frequencies may be distributed over the entire frequency domain. Compare Figures 6A and 6B, which respectively show a band-limited signal made up of a single scan line taken from a high dynamic range image and its Fourier transform, to Figures 6C and 6D showing the same signal clipped at an intensity level of 0.8 and its Fourier transform. It can be seen in Figure 6B that the spatial frequencies present in the signal of Figure 6A are all confined within a band 34. This is expected because the signal represented by Figure 6A is band-limited. By contrast, Figure 6D shows that the spatial frequencies present in the signal of Figure 6C are spread over a broad spectrum and include substantial high frequency components.
[0057] Figure 7 illustrates an example method 50 for reconstructing saturated portions of an image. Method 50 generates monochrome images but can be generalized to generate color images as described below. In block 52 method 50 obtains a band-limited image at an imaging array. Block 52 comprises applying a known spatial modulation to the image. The spatial modulation may be imposed by passing incident optical radiation through a filter having a filter function comprising a sum of two or more sinusoids (e.g. cosines or sines) or more generally a spatial variation that is periodic with first and second spatial frequencies. Different ones of the sinusoids may have different amplitudes. Block 52 yields image data.
[0058] Block 53 identifies saturated pixels in the image data.
[0059] Block 54 determines a Fourier transform of the image data obtained in block 52. This results in a transformed image that includes a number of differently-scaled copies. In the absence of saturation the copies are spaced apart from one another in Fourier space. However, saturation introduces high frequencies and so, in the general case, these copies will be corrupted even if the least transmissive among the neutral density filters do not saturate.
[0060] The next part of method 50 can be understood by noting that we have two pieces of information about the nature of the image data obtained in block 52. The first is that the original signal, before being modulated, is band-limited. Therefore, the captured image should not contain any high spatial frequencies. The second is that the filter copies the original signal with varying attenuation coefficients.
[0061] An image may be decomposed into a region in which the signal will be saturated in the image data, represented by L_sat, and a region in which the corresponding image data will not be saturated, represented by L_unsat. L_unsat is equal to L, but has zeros in all saturated image parts. L_sat has zeros in all unsaturated pixels and unknown values elsewhere. Since the Fourier transform is linear, the same relation holds for the Fourier representations of the signal's components, namely:
F{L} = F{L_sat} + F{L_unsat} (13)
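Equation (13) is simply the linearity of the Fourier transform applied to the disjoint decomposition of the image. A quick NumPy illustration (sizes and the saturation threshold are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
L = rng.random((16, 16))
saturated = L > 0.8                      # demo mask of "clipped" pixels

L_sat = np.where(saturated, L, 0.0)      # unknown (clipped) region
L_unsat = np.where(saturated, 0.0, L)    # directly observed region

# Linearity of the Fourier transform over L = L_sat + L_unsat:
F_total = np.fft.fft2(L)
F_parts = np.fft.fft2(L_sat) + np.fft.fft2(L_unsat)
```

The two regions have disjoint support, so the decomposition is lossless and the spectra add exactly.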
A goal of reconstructing saturated portions of image data is to determine the unknown values in L_sat from the image data or at least to determine values which result in acceptable image characteristics.

[0062] An error measure in Fourier space that incorporates the known qualities of the signal can be expressed as follows:

Er = Σ_{i≠j} || s_j T_i − s_i T_j ||² (14)

where Er is the error measure and T_i is a tile describing a single copy of the signal in the Fourier domain and is given by:

T_i = s_i F_i{L} + η_i (15)

s_i are the scaling factors for individual tiles in the Fourier domain as determined by the modulation applied to the signal and η_i represents sensor noise.
[0063] From Equation (13), T_i can be written (neglecting s_i) as:

T_i = F_i{L_unsat} + F_i{L_sat} + η_i (16)

where F_i is the Fourier transform that transforms a full-resolution image from the spatial domain into the subset of the frequency space that is spanned by tile i. For example, where the original image is of size m×n and each tile in the Fourier transformed image has a size of p×q then F_i may be obtained by applying a rectangular discrete Fourier transform (DFT) of size pq × mn.
[0064] In Equation (16) the term F_i{L_unsat} can readily be computed from the captured image data (neglecting the effect of sensor noise). F_i{L_sat} includes the unknown values (the non-zero pixels of L_sat). We know that if these unknown values were accurately present in the image data instead of being clipped then the resulting signal would be band-limited. It is the replacement of these values in the image data with saturated values (e.g. values at the maximum output of the imaging array, or values above a saturation threshold) that causes saturation noise in the Fourier domain, including high frequency saturation noise outside of the frequency band of the band-limited signal.

[0065] Equations (14) and (16) can be combined to yield an expression of the error in terms of L_sat and L_unsat as follows:
Er = Σ_{i≠j} || s_j F_i{L_unsat} − s_i F_j{L_unsat} + s_j F_i{L_sat} − s_i F_j{L_sat} + s_j η_i − s_i η_j ||² (17)
[0066] If one desires to account for sensor noise then one can make reasonable assumptions regarding the form of the sensor noise. For example, one can assume that the sensor noise η_i is independently distributed in the spatial domain and follows a Gaussian noise distribution in the per-pixel image intensities. With this assumption, F{η_i} has a uniform power spectrum with a Gaussian characteristic in each Fourier coefficient. This noise model is reasonable for many real image sensors for values above the noise level of the imaging array. Making a suitable assumption about the sensor noise (a suitable assumption in some cases could be that there is no sensor noise) facilitates the application of a quadratic error norm for optimization in Fourier space.
[0067] Equation (17) can be expressed as a linear system of equations having as unknowns values for the clipped pixels of Lsat. The system of linear equations can be solved to yield a solution or approximate solution which tends to minimize the error measure.
[0068] As a simple example, consider the case where, during image acquisition, an optical filter is applied such that in the Fourier transform of the image data there is a DC peak and one copy in a higher frequency band. In this case, neglecting sensor noise, the task of minimizing the error measure can be written as a matrix equation as follows:

R F (L_sat + L_unsat) = 0 (18)
where R and F are matrices. Matrix R will not generally be of full rank. This may be addressed by making a suitable assumption about the nature of the expected solution. For example, one can impose a condition on the nature of the combined signal L_sat + L_unsat. In this example, the condition imposes a spatial smoothness constraint, specifically, a curvature minimizing term. Adding the additional constraint to Equation (18) yields a stacked system:

[ R F ; λΓ ] (L_sat + L_unsat) = 0 (19)

where Γ is a regularizer, in this case a spatial curvature operator, and λ is a weighting factor.
[0069] A least-squares description of the error measure may be obtained by differentiating Equation (19) with respect to L_sat and setting the gradient to zero. This least squares description may be expressed as:

((R F_s)ᵀ (R F_s) + λ Γ_sᵀ Γ_s) L_sat = −((R F_u)ᵀ (R F_u) + λ Γ_uᵀ Γ_u) L_unsat (20)

where F_s and Γ_s are the partial linear systems of the Fourier transform and the regularizer, respectively, acting on the saturated image regions and F_u and Γ_u are their counterparts acting on the unsaturated image regions. The right-hand side of Equation (20) is constant and is determined by the values in the unsaturated parts of the image data.
[0070] A set of values for the saturated pixels in L_sat may be obtained by solving Equation (20). This is done numerically. For example, the conjugate gradient for least squares (CGLS) algorithm as described in Hansen, P. C. 1998. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA may be applied to obtain a solution to Equation (20). This algorithm is advantageous because it permits matrix multiplications to be implemented by fast image processing routines instead of constructing the actual matrices. Other suitable optimization algorithms may also be applied. Many such algorithms are known.
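A toy one-dimensional version of this optimization can be written with a dense least-squares solve. The sketch below is not the patent's implementation (a CGLS solver would replace lstsq for real images); it recovers clipped samples of a band-limited signal by forcing the out-of-band DFT coefficients of the completed signal to zero, the one-dimensional analogue of matching the Fourier tiles:

```python
import numpy as np

# 1-D sketch: recover clipped samples of a band-limited signal by
# requiring out-of-band DFT coefficients of the completed signal to
# vanish.  N is prime so the partial Fourier system has full rank.
N, K = 67, 5                          # samples; bandwidth |k| <= K
rng = np.random.default_rng(3)
spec = np.zeros(N, complex)
spec[0] = 2.0
for k in range(1, K + 1):
    c = rng.normal() + 1j * rng.normal()
    spec[k], spec[N - k] = c, np.conj(c)   # Hermitian -> real signal
signal = np.fft.ifft(spec).real

clip = 0.8 * signal.max()
observed = np.minimum(signal, clip)       # simulated saturation
sat = observed >= clip                    # saturated sample positions

# Out-of-band DFT rows: frequencies K < k < N - K must evaluate to zero.
k_out = np.arange(K + 1, N - K)
A = np.exp(-2j * np.pi * np.outer(k_out, np.arange(N)) / N)
M = A[:, sat]                             # acts on the unknown samples
rhs = -A[:, ~sat] @ observed[~sat]        # known contribution moves right
x, *_ = np.linalg.lstsq(np.vstack([M.real, M.imag]),
                        np.concatenate([rhs.real, rhs.imag]), rcond=None)

restored = observed.copy()
restored[sat] = x                         # insert solved values
```

With noise-free data and a strictly band-limited signal the least-squares solution reproduces the clipped samples essentially exactly; noisy data would call for the regularized system of Equation (19).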
[0071] Method 50 applies the approach described above by, in block 55, constructing an optimization problem in which the unknown variables are the pixel values in saturated regions of the image data and applying an optimization algorithm to obtain best-fitting values for the unknown pixel values in block 56. Reconstructed image data is obtained in block 57 by inserting the pixel values determined in block 56 into the saturated regions of the image data.
[0072] Some embodiments optionally comprise applying an estimation algorithm to estimate true values for the pixel values in saturated regions of the image data prior to applying the optimization algorithm in block 56. For example, the estimation could be based on the assumption that the pixel values will have a local maximum at or near a centroid of a saturated region and will vary smoothly from that local maximum to the values at the boundary of the saturated region. In some embodiments, the value of the maximum may be selected based on one or more of: a size of the saturated region (e.g. a distance from the centroid to the closest point on the boundary of the saturated region); and gradients of the pixel values near boundaries of the saturated region. Making an estimate (in general a sensible rough guess) as to likely pixel values in the saturated region may improve the rate of convergence of the optimization algorithm applied in block 56. The estimate may be generated automatically.
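One simple way to generate such an estimate in one dimension is sketched below; `initial_guess` is a hypothetical helper (not named in the patent) that raises each saturated run above the clip level with a smooth bump peaking near the run's centroid and meeting the boundary continuously:

```python
import numpy as np

def initial_guess(observed, sat):
    """Rough starting point for the optimization (illustrative helper):
    inside each saturated run, add a parabolic bump on top of the clip
    level, scaled by the run's half-width, so values peak near the
    centroid and fall to the clip level at the run's edges."""
    guess = observed.astype(float).copy()
    n = len(sat)
    i = 0
    while i < n:
        if sat[i]:
            j = i
            while j < n and sat[j]:
                j += 1                    # saturated run is [i, j)
            t = np.linspace(-1.0, 1.0, j - i)
            half_width = (j - i) / 2.0
            guess[i:j] += half_width * (1.0 - t * t)
            i = j
        else:
            i += 1
    return guess
```

The particular bump shape and scaling are an assumption; any smooth guess consistent with the boundary values should help convergence similarly.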
[0073] Many variations in the implementation of method 50 are possible. A basic outline of method 50 is:
• obtain a band-limited exposure of an image and, in doing so, apply one or more filter functions that spatially modulate the exposure at distinct spatial frequencies and attenuate the exposure by different amounts;
• identify and demarcate saturated and unsaturated components of the resulting image data;
• compute the Fourier transform of the resulting image data;
• set up an optimization problem in which pixel values for the unsaturated components are known and an error measure to be minimized involves a difference between Fourier domain image copies corresponding to the different spatial frequencies which were imposed during exposure of the image;
• numerically solve the optimization problem to obtain pixel values for the saturated regions; and,
• insert the pixel values for the saturated regions into the image data.
The image data, as recreated by method 50, may represent a dynamic range greater than that of an imaging array applied to obtain the original image data.
[0074] While the above example reconstructs saturated pixels of a monochrome image, the same approach may be applied to reconstruct pixels in color images. There are a range of ways in which this may be done. For example, a method like method 50 may be performed for a luminance channel of an image. As another example, a method like method 50 may be performed individually for different color components within an image. In the latter case, different color components may be spatially modulated during exposure at each of two or more spatial frequencies with a different average level of attenuation at each of the spatial frequencies.
[0075] Method 50 and variants thereof may also be practiced in cases where a spatial modulation is imposed by color filters that have two or more spatial frequencies such that, in the Fourier domain, images taken with such filters present two or more copies of the image. Tiled filter arrays, such as the commonly-used Bayer filter, have this characteristic. Therefore, in an alternative embodiment, band-limited image data is acquired using an imaging array comprising a Bayer color filter. A Fourier transform of the resulting image data is determined. Saturated pixels in the image data are identified. An optimization involving unknown values for the saturated pixels is constructed using two or more of the copies from the Fourier domain, and the optimization is solved subject to suitable constraints to yield values for the saturated pixels.
[0076] Advantageously, in some embodiments, all components of the image and/or all data for reconstruction of saturated areas may be obtained from a single exposure. Multiple exposures are not required.
[0077] It can be appreciated that the invention may be implemented in a wide variety of ways. After image data has been acquired, as described herein, processing to extract the various components of the images, or other processing such as processing to reconstruct saturated regions of an image, may be performed in various ways. For example, in some embodiments, a camera or other image acquisition device incorporates logic for processing the captured image to extract image components as described herein and/or to extract the image components to perform further processing on the image components. In other embodiments, the captured image is transferred from an image acquisition device to another processor (which could be a personal computer, computer system, or the like) and some or all processing of the image data may be performed on the device to which the image data is downloaded. Processing, as described herein, may be performed automatically at a processor when images are transferred from a camera or other imaging device to the processor.
[0078] Figure 8 illustrates a camera system 60 according to an example embodiment. Camera system 60 comprises a camera 61 having an optical system 62 that focuses light onto an image plane 63 where the light can be imaged by an imaging array 64. A blurring filter 66 and a filter 65 that applies spatial modulation to different image components are provided at imaging array 64.
[0079] A control system 68 operates imaging array 64 to obtain exposures of a scene and stores the resulting image data in a data store 67. In this example embodiment, camera 61 is connected (or connectable) to a host processing system 70 that performs processing on the image data acquired by camera 61. This is not mandatory; in some embodiments, a single device provides functions of camera 61 and host processing system 70 or functions are allocated between camera 61 and host processing system 70 in some alternative manner.
[0080] Host processing system 70 comprises a Fourier transformation function 72 that computes a Fourier transform of image data retrieved from camera 61. A data extraction component 74 is configured to extract Fourier transforms of different image components and to provide the extracted Fourier transforms to an inverse Fourier transform component 76. [0081] An image encoding system 78 receives image components from inverse Fourier transform component 76, generates image data in a desired format from the image components and stores the resulting image data in a data store 79. In the illustrated embodiment, host processing system 70 comprises a display 80 and a printer 81. A display driver 82 is configured to display on display 80 images corresponding to image data in data store 79. A printer driver 83 is configured to print on printer 81 images corresponding to image data in data store 79.
[0082] In the illustrated embodiment, an optimization system 85 receives image components from data extraction component 74 and generates values for the image data in saturated image regions. These values are provided to image encoding system 78 which incorporates them into the image data.
[0083] In some embodiments, host processing system 70 comprises a general purpose computer system and the components of host processing system 70 are provided by software executing on one or more processors of the computer system. In other embodiments, at least some functional blocks of host processing system 70 are provided by hard-wired or configurable logic circuits. In other embodiments, at least some functional blocks of host processing system 70 are provided by a special purpose programmed data processor such as a digital signal processor or graphics processor.
[0084] A number of variations are possible in the practice of the invention. Optical filters may be printed directly on an imaging array or provided in one or more separate layers applied to the imaging array.
[0085] In some embodiments the optical filters are provided by way of a spatial light modulator in the light path to the imaging array. In such embodiments, the spatial frequencies with which incoming optical radiation is modulated may be changed during an exposure. For example, the spatial light modulator may be set to modulate incoming light with a first spatial frequency or a first set of spatial frequencies during a first part of the exposure and to switch to modulating the incoming optical radiation with a second spatial frequency or a second set of spatial frequencies during a second part of the exposure. The first and second parts of the exposure are optionally of different lengths. In such embodiments, different copies in the Fourier domain provide temporal information and may also provide differently-exposed copies of the image that may be used for high dynamic range reconstruction, reconstructing saturated pixel values or the like.
[0086] In some embodiments the optical modulation of the spatial filter is rotated relative to the pixel grid of the imaging array. As a result, different spectral copies have slightly different sub-pixel alignment. This can be used to recover the original image resolution by performing de-convolution with a small filter kernel corresponding to the optical low pass filter used for band-limiting.
[0087] In embodiments in which the filter is rotated relative to a pixel array, it can be desirable to compensate for the image of the filter itself in Fourier space. This can be done, for example, by taking a calibration image of a uniformly-illuminated white screen through the filter. Point spread functions of tiles of the filter in the Fourier domain may be obtained by Fourier-transforming the resulting image. Artefacts in the Fourier domain resulting from mis-alignment of the filter with pixels of the imaging array may be substantially removed using the calibration image.
[0088] As an alternative to an optical filter, which provides optical pre-modulation, certain types of filters may be implemented by selectively controlling the sensitivities of different sets of pixels in an imaging array. This may be done, for example, by setting the exposure times for different sets of pixels to be different or by using pixels of different sizes to vary light sensitivity across pixels. In either case, the sensitivity of the pixels may be caused to vary in a pattern having a spatial frequency that imposes a desired modulation of some image component in the image.
[0089] In some embodiments, a camera is provided that provides a separate sensor for capturing information for recreating the luminance of saturated areas in an image. The separate sensor may apply any of the methods as described above. The separate sensor may have a relatively low resolution (although this is not mandatory) since glare tends to limit the effect of spatial resolution of high dynamic range content.
[0090] Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method according to the invention for processing image data from a modulated exposure, as described herein. For example, one or more processors in a camera and/or an image processing system into which images from a camera are transferred may implement methods as described herein by executing software instructions in a program memory accessible to the processors. The invention may also be provided in the form of a program product. The program product may comprise any medium which carries a set of computer-readable instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, or the like. The computer-readable instructions on the program product may optionally be compressed or encrypted.
[0091] Where a component (e.g. a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a "means") should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
[0092] As will be apparent to those skilled in the art in the light of the foregoing disclosure, many alterations and modifications are possible in the practice of this invention without departing from the spirit or scope thereof. For example: • A number of different embodiments are illustrated herein; the features of different embodiments may be combined in alternative combinations to yield further embodiments.
Accordingly, the scope of the invention is to be construed in accordance with the substance defined by the following claims.

Claims

WHAT IS CLAIMED IS:
1. A method for obtaining image data, the method comprising: acquiring image data by exposing an imaging array to optical radiation and operating the imaging array, wherein acquiring the image data comprises spatially modulating a response of the imaging array to each of a plurality of components of the optical radiation according to a corresponding basis function for an invertible transformation; applying the transformation to the image data to yield transformed image data, the transformed image data comprising spatially-separated image copies corresponding respectively to the plurality of components; extracting the spatially-separated image copies from the transformed image data; and, applying an inverse of the transformation to each of the extracted image copies.
2. A method according to claim 1 wherein acquiring the image data comprises allowing the optical radiation to pass through an optical filter before interacting with pixels of the imaging array.
3. A method according to claim 2 wherein the optical filter has a spatial variation having at least first and second spatial frequencies.
4. A method according to claim 3 wherein the optical filter comprises tiles that repeat in a multiple of a pitch of pixels of the imaging array.
5. A method according to claim 2 wherein the optical filter has a transmissivity in excess of 45%.
6. A method according to claim 1 wherein the basis functions corresponding to the plurality of components are mutually orthogonal.
7. A method according to claim 6 wherein the components comprise primary colors.
8. A method according to claim 6 wherein the components comprise a red component, a blue component and a green component and acquiring the image data comprises modulating each of the red, blue and green components at a distinct spatial frequency.
9. A method according to claim 1 wherein acquiring the image data comprises band-limiting a spatial frequency of the optical radiation prior to allowing the optical radiation to interact with the imaging array.
10. A method according to claim 1 wherein a direction of the spatial modulation is not aligned with rows or columns of the imaging array.
11. A method according to claim 1 wherein the modulation is spatially periodic and has a spatial frequency that is different for each of the components.
12. A method according to claim 1 wherein the basis functions comprise sinusoids.
13. A method according to claim 11 wherein acquiring the image data comprises spatially modulating a response of the imaging array to a first one of the plurality of components of the optical radiation according to a first sinusoid having a first spatial frequency and spatially modulating a response of the imaging array to a second one of the plurality of components of the optical radiation according to a second sinusoid having the first spatial frequency and a phase difference of 1/4 wave with the first sinusoid.
14. A method according to claim 13 comprising extracting a pair of spatially-separated copies each corresponding to the first spatial frequency, computing a real part of a sum of the pair of spatially-separated copies and an imaginary part of a difference of the pair of spatially-separated copies, and applying the inverse of the transformation to the real part and the imaginary part.
15. A method according to claim 1 wherein the components comprise polarization states.
16. An automated method for reconstructing pixel values for saturated pixels in an image, the method comprising: obtaining image data comprising a band-limited exposure of an image having some saturated pixels wherein the exposure is spatially modulated at two or more spatial frequencies by functions that differently attenuate the exposure; identifying the saturated pixels in the image data; computing a Fourier transform of the image data; setting up an optimization problem in which pixel values for the saturated components are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies; and numerically solving the optimization problem to obtain pixel values for the saturated pixels.
17. A method according to claim 16 comprising inserting the pixel values for the saturated pixels into the image data.
18. A method according to claim 16 wherein the optimization problem comprises a spatial curvature constraint.
19. A method according to claim 16 wherein setting up the optimization problem comprises generating an initial estimate of the pixel values for the saturated pixels.
20. A method according to claim 19 wherein the initial estimate sets a pixel value at a centroid of a saturated region to a maximum value.
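The method of claims 16 to 20 exploits the fact that modulation places differently attenuated copies of the scene spectrum at distinct spatial frequencies; for the true (unclipped) exposure those copies agree, so the saturated pixel values can be recovered by minimizing the copy disagreement. The 1-D sketch below is our illustration under stated assumptions, not the patent's implementation: it uses a single cosine attenuation mask, poses the copy-agreement error as a linear least-squares problem in the unknown saturated values, and omits the spatial curvature constraint of claim 18. All parameter values and names are ours.

```python
import numpy as np

N, k0, W = 256, 64, 24                     # samples, modulation frequency, compared band half-width
x = np.arange(N)

# Band-limited scene and a spatially periodic attenuation mask (never fully opaque)
s = 2.0 + 1.5 * np.cos(2 * np.pi * 2 * x / N) + 1.0 * np.cos(2 * np.pi * 4 * x / N)
t = 0.75 + 0.25 * np.cos(2 * np.pi * k0 * x / N)
exposure = s * t                           # what the sensor would record without clipping

# Simulate sensor saturation and identify the saturated pixels
sat_level = 3.5
captured = np.minimum(exposure, sat_level)
sat_idx = np.flatnonzero(captured >= sat_level)

# The mask places scaled copies of S(k) at DC (gain 0.75) and at +-k0 (gain 0.125).
# For a candidate image v, the copies must agree: V[k]/0.75 == V[k+k0]/0.125 for |k| <= W.
# This is linear in the unknown saturated pixel values, so least squares suffices here.
ks = np.arange(-W, W + 1)
E = np.exp(-2j * np.pi * np.outer(ks, x) / N)            # Fourier rows at frequencies k
E_shift = np.exp(-2j * np.pi * np.outer(ks + k0, x) / N)  # Fourier rows at k + k0
R = E / 0.75 - E_shift / 0.125                           # copy-disagreement operator

known = captured.copy()
known[sat_idx] = 0.0                                     # contribution of unsaturated pixels
A = R[:, sat_idx]                                        # columns for the unknowns
rhs = -R @ known
A_ri = np.vstack([A.real, A.imag])                       # solve the complex system over the reals
rhs_ri = np.concatenate([rhs.real, rhs.imag])
u, *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)

restored = captured.copy()
restored[sat_idx] = u                                    # insert reconstructed values (cf. claim 17)
```

With the scene band-limited and the compared band kept clear of aliasing, the true exposure zeroes the disagreement, and the least-squares solution recovers the clipped values to numerical precision.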
21. An imaging array comprising a filter wherein the filter transmissivity for each of a plurality of spectral bands varies spatially with a distinct spatial frequency.
22. An imaging array according to claim 21 wherein the transmissivity for each of the spectral bands varies sinusoidally.
23. An imaging array according to claim 21 wherein the plurality of spectral bands comprise a band passing red light, a band passing blue light and a band passing green light.
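Claims 21 to 23 describe a filter whose transmissivity varies sinusoidally for each spectral band, with a distinct spatial frequency per band. A minimal sketch of generating such a pattern follows; the band-to-frequency assignments, array size, and names are our illustrative choices, not values from the patent.

```python
import numpy as np

H, Wd = 64, 64
yy, xx = np.mgrid[0:H, 0:Wd]

# One transmissivity pattern per band, each varying sinusoidally (claim 22)
# at its own distinct 2-D spatial frequency in cycles per image (claim 21)
band_freqs = {"red": (8, 0), "green": (0, 8), "blue": (8, 8)}
filters = {
    band: 0.5 + 0.5 * np.cos(2 * np.pi * (fy * yy / H + fx * xx / Wd))
    for band, (fy, fx) in band_freqs.items()
}
```

Each pattern stays within physically realizable transmissivities (0 to 1), and its Fourier transform concentrates at the assigned frequency, which is what separates the bands into distinct Fourier-domain copies.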
24. An automated image processing system comprising: a processor; software instructions for execution by the processor, the software instructions comprising instructions that configure the processor to: obtain image data comprising a band-limited exposure of an image having some saturated pixels wherein the exposure is spatially modulated at two or more spatial frequencies by functions that differently attenuate the exposure; identify the saturated pixels in the image data; compute a Fourier transform of the image data; set up an optimization problem in which pixel values for the saturated pixels are unknown and an error measure to be minimized comprises a difference between Fourier domain image copies corresponding to the two or more spatial frequencies; and numerically solve the optimization problem to obtain pixel values for the saturated pixels.
25. Apparatus having new and inventive features, combinations of features or sub-combinations of features as described herein.
26. Methods comprising new and inventive steps, acts, combinations of steps and/or acts or sub-combinations of steps and/or acts as described herein.
PCT/CA2010/000055 2009-01-19 2010-01-15 Multiplexed imaging WO2010081229A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US13/142,851 US8860856B2 (en) 2009-01-19 2010-01-15 Multiplexed imaging
EP10731003.9A EP2387848B1 (en) 2009-01-19 2010-01-15 Multiplexed imaging
CN201080004934.2A CN102282840B (en) 2009-01-19 2010-01-15 Multiplexed imaging
JP2011545599A JP5563597B2 (en) 2009-01-19 2010-01-15 Multiplexed imaging

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14568909P 2009-01-19 2009-01-19
US61/145,689 2009-01-19

Publications (1)

Publication Number Publication Date
WO2010081229A1 true WO2010081229A1 (en) 2010-07-22

Family

ID=42339365

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2010/000055 WO2010081229A1 (en) 2009-01-19 2010-01-15 Multiplexed imaging

Country Status (5)

Country Link
US (1) US8860856B2 (en)
EP (1) EP2387848B1 (en)
JP (3) JP5563597B2 (en)
CN (2) CN102282840B (en)
WO (1) WO2010081229A1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10298834B2 (en) 2006-12-01 2019-05-21 Google Llc Video refocusing
US10152524B2 (en) * 2012-07-30 2018-12-11 Spatial Digital Systems, Inc. Wavefront muxing and demuxing for cloud data storage and transport
US9858649B2 (en) 2015-09-30 2018-01-02 Lytro, Inc. Depth-based image blurring
JP6231284B2 (en) * 2013-02-21 2017-11-15 クラリオン株式会社 Imaging device
US20140267250A1 (en) * 2013-03-15 2014-09-18 Intermap Technologies, Inc. Method and apparatus for digital elevation model systematic error correction and fusion
CN104103037B (en) * 2013-04-02 2017-02-15 杭州海康威视数字技术股份有限公司 Image enhancement processing method and device
US10334151B2 (en) 2013-04-22 2019-06-25 Google Llc Phase detection autofocus using subaperture images
US9383259B2 (en) * 2013-08-29 2016-07-05 Nokia Technologies Oy Method, apparatus and computer program product for sensing of visible spectrum and near infrared spectrum
US9734601B2 (en) * 2014-04-04 2017-08-15 The Board Of Trustees Of The University Of Illinois Highly accelerated imaging and image reconstruction using adaptive sparsifying transforms
JP6626257B2 (en) 2015-03-03 2019-12-25 キヤノン株式会社 Image display device, imaging device, image display method, program, storage medium
US10444931B2 (en) 2017-05-09 2019-10-15 Google Llc Vantage generation and interactive playback
US10469873B2 (en) 2015-04-15 2019-11-05 Google Llc Encoding and decoding virtual reality video
US10546424B2 (en) 2015-04-15 2020-01-28 Google Llc Layered content delivery for virtual and augmented reality experiences
US10412373B2 (en) 2015-04-15 2019-09-10 Google Llc Image capture for virtual reality displays
US10440407B2 (en) 2017-05-09 2019-10-08 Google Llc Adaptive control for immersive experience delivery
US10275898B1 (en) 2015-04-15 2019-04-30 Google Llc Wedge-based light-field video capture
US10567464B2 (en) 2015-04-15 2020-02-18 Google Llc Video compression with adaptive view-dependent lighting removal
US10341632B2 (en) 2015-04-15 2019-07-02 Google Llc. Spatial random access enabled video system with a three-dimensional viewing volume
US10419737B2 (en) 2015-04-15 2019-09-17 Google Llc Data structures and delivery methods for expediting virtual reality playback
US11328446B2 (en) 2015-04-15 2022-05-10 Google Llc Combining light-field data with active depth data for depth map generation
US10540818B2 (en) 2015-04-15 2020-01-21 Google Llc Stereo image generation and interactive playback
US10565734B2 (en) 2015-04-15 2020-02-18 Google Llc Video capture, processing, calibration, computational fiber artifact removal, and light-field pipeline
US9979909B2 (en) * 2015-07-24 2018-05-22 Lytro, Inc. Automatic lens flare detection and correction for light-field images
US9681109B2 (en) * 2015-08-20 2017-06-13 Qualcomm Incorporated Systems and methods for configurable demodulation
JP6441771B2 (en) * 2015-08-27 2018-12-19 クラリオン株式会社 Imaging device
DE102015122415A1 (en) * 2015-12-21 2017-06-22 Connaught Electronics Ltd. Method for detecting a band-limiting malfunction of a camera, camera system and motor vehicle
US10275892B2 (en) 2016-06-09 2019-04-30 Google Llc Multi-view scene segmentation and propagation
US10679361B2 (en) 2016-12-05 2020-06-09 Google Llc Multi-view rotoscope contour propagation
US10514753B2 (en) * 2017-03-27 2019-12-24 Microsoft Technology Licensing, Llc Selectively applying reprojection processing to multi-layer scenes for optimizing late stage reprojection power
US10410349B2 (en) 2017-03-27 2019-09-10 Microsoft Technology Licensing, Llc Selective application of reprojection processing on layer sub-regions for optimizing late stage reprojection power
JP6834690B2 (en) * 2017-03-30 2021-02-24 コニカミノルタ株式会社 Image processing equipment and radiation imaging system
US10594945B2 (en) 2017-04-03 2020-03-17 Google Llc Generating dolly zoom effect using light field image data
US10255891B2 (en) 2017-04-12 2019-04-09 Microsoft Technology Licensing, Llc No miss cache structure for real-time image transformations with multiple LSR processing engines
US10474227B2 (en) 2017-05-09 2019-11-12 Google Llc Generation of virtual reality with 6 degrees of freedom from limited viewer data
US10354399B2 (en) 2017-05-25 2019-07-16 Google Llc Multi-view back-projection to a light-field
US10467733B2 (en) * 2017-07-27 2019-11-05 Raytheon Company Multiplexed high dynamic range images
US10545215B2 (en) 2017-09-13 2020-01-28 Google Llc 4D camera tracking and optical stabilization
US10965862B2 (en) 2018-01-18 2021-03-30 Google Llc Multi-camera navigation interface
KR102561101B1 (en) * 2018-02-19 2023-07-28 삼성전자주식회사 Holographic display apparatus providing expanded viewing window

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001061994A1 (en) * 2000-02-18 2001-08-23 Intelligent Pixels, Inc. Very low-power parallel video processor pixel circuit
US20080285052A1 (en) * 2007-02-21 2008-11-20 Canon Kabushiki Kaisha Shape measuring apparatus, exposure apparatus, and computer

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5848020A (en) * 1981-09-17 1983-03-19 Hitachi Ltd Optical system for image pickup
US4630105A (en) * 1984-07-31 1986-12-16 Rca Corporation Symmetric color encoding shift pattern for a solid-state imager camera and decoding scheme therefor
JPH0712218B2 (en) * 1987-07-15 1995-02-08 三菱電機株式会社 Color solid-state imaging device
JPH09154146A (en) * 1995-09-28 1997-06-10 Toshiba Corp Solid-state color image pickup device
JP3229195B2 (en) * 1996-03-22 2001-11-12 シャープ株式会社 Image input device
JPH11112977A (en) * 1997-09-30 1999-04-23 Sharp Corp Image-pickup compression system
JP3837881B2 (en) * 1997-11-28 2006-10-25 コニカミノルタホールディングス株式会社 Image signal processing method and electronic camera
JP2002197466A (en) * 2000-12-27 2002-07-12 Nec Corp Device and method for extracting object area, and recording medium with object area extraction program recorded thereon
JP2002277225A (en) * 2001-03-21 2002-09-25 Ricoh Co Ltd Lighting system for optical shape measuring instrument
JP3944726B2 (en) * 2002-09-25 2007-07-18 ソニー株式会社 Imaging apparatus and method
DE112004002777B4 (en) * 2004-03-03 2013-11-07 Mitsubishi Denki K.K. Optical encoder
JP2005354610A (en) * 2004-06-14 2005-12-22 Canon Inc Image processing apparatus, image processing method and image processing program
JP4974543B2 (en) 2005-08-23 2012-07-11 株式会社フォトニックラティス Polarization imaging device
JP2008206111A (en) * 2007-02-23 2008-09-04 Victor Co Of Japan Ltd Photographing apparatus and photographing method
JP2008294741A (en) * 2007-05-24 2008-12-04 Olympus Corp Imaging system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
See also references of EP2387848A4
Xin Li et al.: "Image Demosaicing: A Systematic Survey", SPIE-IS&T, vol. 6822, pages 68221J-1

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012060641A (en) * 2010-09-06 2012-03-22 Commissariat A L'energie Atomique Et Aux Energies Alternatives Digital raw image demosaic method, computer program thereof, and image sensor circuit or graphic circuit thereof
WO2021020821A1 (en) 2019-07-26 2021-02-04 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
EP3928504A4 (en) * 2019-07-26 2022-04-13 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
US11575865B2 (en) 2019-07-26 2023-02-07 Samsung Electronics Co., Ltd. Processing images captured by a camera behind a display
US11721001B2 (en) 2021-02-16 2023-08-08 Samsung Electronics Co., Ltd. Multiple point spread function based image reconstruction for a camera behind a display
US11722796B2 (en) 2021-02-26 2023-08-08 Samsung Electronics Co., Ltd. Self-regularizing inverse filter for image deblurring

Also Published As

Publication number Publication date
CN102282840A (en) 2011-12-14
JP2012515474A (en) 2012-07-05
US20110267482A1 (en) 2011-11-03
EP2387848A1 (en) 2011-11-23
JP2014039318A (en) 2014-02-27
CN103561193B (en) 2016-06-15
JP5563597B2 (en) 2014-07-30
EP2387848B1 (en) 2017-03-15
JP5722409B2 (en) 2015-05-20
EP2387848A4 (en) 2013-03-27
US8860856B2 (en) 2014-10-14
CN103561193A (en) 2014-02-05
CN102282840B (en) 2016-01-06
JP2015100127A (en) 2015-05-28

Similar Documents

Publication Publication Date Title
US8860856B2 (en) Multiplexed imaging
Zhuang et al. Hyperspectral mixed noise removal by ℓ1-norm-based subspace representation
US8238683B2 (en) Image processing method
Khashabi et al. Joint demosaicing and denoising via learned nonparametric random fields
US8068680B2 (en) Processing methods for coded aperture imaging
US20190141299A1 (en) Systems and methods for converting non-bayer pattern color filter array image data
EP0996293B1 (en) Colour image processing system
Alleysson et al. Color demosaicing by estimating luminance and opponent chromatic signals in the Fourier domain
Degraux et al. Generalized inpainting method for hyperspectral image acquisition
US10012953B2 (en) Method of reconstructing a holographic image and apparatus therefor
WO2012153532A1 (en) Image capture device
WO2020139493A1 (en) Systems and methods for converting non-bayer pattern color filter array image data
Sun et al. Design of four-band multispectral imaging system with one single-sensor
WO2011119893A2 (en) Method and system for robust and flexible extraction of image information using color filter arrays
US20180122046A1 (en) Method and system for robust and flexible extraction of image information using color filter arrays
Kawase et al. Demosaicking using a spatial reference image for an anti-aliasing multispectral filter array
De Lavarène et al. Practical implementation of LMMSE demosaicing using luminance and chrominance spaces
Paul et al. Maximum accurate medical image demosaicing using WRGB based Newton Gregory interpolation method
Asiq et al. Efficient colour filter array demosaicking with prior error reduction
US7023576B1 (en) Method and an apparatus for elimination of color Moiré
Alleysson et al. Frequency selection demosaicking: A review and a look ahead
Wecksung et al. Digital image processing at EG&G
Soulez et al. Joint deconvolution and demosaicing
Honda et al. Image processing by multiple aperture scanning
Saito et al. Sharpening-demosaicking method with a total-variation-based superresolution technique

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080004934.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10731003

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13142851

Country of ref document: US

ENP Entry into the national phase

Ref document number: 2011545599

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2010731003

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2010731003

Country of ref document: EP