WO2011119893A2 - Method and system for robust and flexible extraction of image information using color filter arrays - Google Patents

Publication number
WO2011119893A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
sensor
optical
filter array
Prior art date
Application number
PCT/US2011/029878
Other languages
French (fr)
Other versions
WO2011119893A3 (en
Inventor
Mritunjay Singh
Tripurari Singh
Original Assignee
Mritunjay Singh
Tripurari Singh
Priority date
Filing date
Publication date
Application filed by Mritunjay Singh, Tripurari Singh filed Critical Mritunjay Singh
Priority to EP11760265A priority Critical patent/EP2550808A2/en
Publication of WO2011119893A2 publication Critical patent/WO2011119893A2/en
Publication of WO2011119893A3 publication Critical patent/WO2011119893A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H04N25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • Embodiments of the present invention relate to multi-spectral imaging systems such as still cameras, video cameras, scanners and microscopes and more specifically to imaging systems that use fewer sensor elements than previous techniques for comparable image quality.
  • Images herein can be considered signals whose amplitude may represent some optical property such as intensity, color and polarization which may vary spatially but not significantly temporally during the relevant measurement period.
  • Light intensity is typically detected by photosensitive sensor elements, or photosites.
  • An image sensor is composed of a two dimensional regular tiling of these individual sensor elements.
  • Color imaging systems need to sample the image in at least three basic colors to synthesize a color image.
  • The term "basic colors" is used herein to refer to primary colors, secondary colors or any suitably selected set of colors.
  • Accordingly, all references to red, green and blue should be construed to apply to any set of basic colors.
  • Color sensing may be achieved by a variety of means such as, for example, (a) splitting the image into three identical copies, separately filtering each into the basic colors, and sensing each of them using separate image sensors, or (b) using a rotating filter disk to transmit images filtered in each of the basic colors onto the same image sensor.
  • A very popular design for capturing color images is to use a single sensor overlaid with a color filter array ("CFA").
  • This design yields red, green and blue images of equal resolution, or equivalently luminance and chrominance signals of equal bandwidth.
  • "Luminance" is defined as a weighted sum of basic color signals where all the weights are positive, while "chrominance" is defined as a weighted sum of basic color signals where at least one weight is negative.
  • The color stripe design is still used in high end cameras such as the Panavision Genesis Digital Camera, http://www.panavision.com/publish/2007/11/18Genesis.pdf, page 2, 2007.
  • Newer CFA designs by Bayer (see FIG. 4 and B.E. Bayer, "Color imaging array", July 20, 1976, US Patent 3,971,065) and others (see K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity", in Proc. of IEEE ICIP, pages II: 81-84, 2007, and L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties") make different trade-offs between luminance and chrominance bandwidths as well as the crosstalk between them.
  • FIG. 3 shows an exemplary monochrome image 310 and its spectral image 320.
  • The spectral image is obtained by taking the logarithm of the absolute value of the Fourier transform of the image.
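A minimal sketch of this computation in Python/NumPy (the patent's own Matlab code is not reproduced here; the fftshift centering and the small epsilon guarding log(0) are our choices):

```python
import numpy as np

def spectral_image(img):
    """Log of the absolute value of the 2D Fourier transform of an image,
    shifted so the DC term sits at the center of the spectral image."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    # A tiny epsilon avoids log(0) at frequencies carrying no energy.
    return np.log(np.abs(spectrum) + 1e-12)

# A constant image concentrates all of its energy at DC,
# so the spectral image peaks at the center.
s = spectral_image(np.ones((8, 8)))
peak = np.unravel_index(np.argmax(s), s.shape)
```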
  • An aggressive class of techniques for close packing of color component spectra employs adaptive directional techniques during image reconstruction. These techniques assume the color component spectra of small image patches to be sparse in at least one direction. They design their CFA to generate more than one copy of chrominance spectra (see B.E. Bayer, "Color imaging array", July 20, 1976, US Patent 3,971,065), implicitly or explicitly identify the cleanest copy during the image reconstruction step and use directional filtering to demultiplex them (see E. Dubois, "Frequency-domain methods for demosaicking of Bayer-sampled color images", IEEE Signal Processing Letters, 12(12):847-850, 2005 and K.
  • Each color of the CFA, C_i(n), i ∈ {r, g, b}, is the superposition of these carriers scaled by an appropriate real amplitude.
  • Each x^(k)(n), 0 ≤ k < m, can be viewed as a color component, with k = 0 giving the luminance signal and k ≥ 1 the chrominance signals.
  • A can be interpreted as the color transform matrix, and A⁺, the generalized inverse of A, can be interpreted as the inverse color transform.
  • FIG. 4 shows the Bayer CFA 410.
  • FIG. 5 illustrates how color information with its circular support is packed into the sensor's rectangular support. This can be most easily understood in terms of an alternative color space:
  • the central circle represents Luminance (L).
  • the four quarter circles at the vertices make up Chrominancel (CI).
  • the two semi-circles at the left and right edges make up the first copy of Chrominance2 (C2a).
  • the two semi-circles at the top and bottom edges make up the second copy of Chrominance2 (C2b).
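The packing described above can be checked numerically. The sketch below (Python/NumPy; the particular phase of the Bayer layout is an assumed convention) writes the green Bayer mask as a sum of Fourier carriers and verifies that the mask's spectrum has energy only at DC (the luminance term) and at the corner frequency (Chrominance1):

```python
import numpy as np

n = 16
i, j = np.mgrid[0:n, 0:n]

# Green sites of a Bayer mosaic on the (i+j)-even checkerboard
# (an assumed phase convention for the layout).
g = ((i + j) % 2 == 0).astype(float)

# The same mask written as a superposition of carriers:
# g = 1/2 + 1/2 * (-1)^(i+j), i.e. a DC term (luminance) plus the
# corner-frequency carrier (pi, pi) that modulates Chrominance1.
# The R and B masks similarly involve the carriers (-1)^i and (-1)^j,
# whose edge-midpoint frequencies carry the two copies of Chrominance2.
g_carrier = 0.5 + 0.5 * (-1.0) ** (i + j)

# Spectrum of the green mask: nonzero only at (0, 0) and (n/2, n/2).
f_g = np.fft.fft2(g)
```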
  • the present invention overcomes problems and limitations of prior imaging methods and systems by providing novel methods and systems for, among other things, sampling an image to obtain image data and processing image data.
  • One such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transform is applied to the sample set of data to produce a set of inferred original image data, wherein the generalized inverse does not use variational minimization and does not assume constant color ratios.
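As a toy sketch of such a one-shot linear recovery (Python/NumPy; the operator and its sizes are arbitrary stand-ins for the transform described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sampling transform: 6 photosite readings of 4 underlying image values.
a = rng.standard_normal((6, 4))
x = rng.standard_normal(4)       # original image data
y = a @ x                        # the received sample set

# Recovery via the Moore-Penrose generalized inverse: a direct linear
# solve, with no variational minimization and no assumption of
# constant color ratios.
x_hat = np.linalg.pinv(a) @ y
```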
  • Another such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transform that respects predetermined spectral constraints is applied to the sample set of data to produce a set of inferred original image data.
  • Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at sensor elements of the image sensor. Detected intensity values are read out from only a subset of sensor elements to increase speed, the subset of sensor elements being randomly chosen whereby aliasing is rendered similar to noise. The input image is inferred from the detected image intensity values.
  • a further such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the color filter array is random, whereby aliasing is rendered similar to noise.
  • the sensor elements are grouped into two or more subsets whose respective characteristics differ in at least one of the following respects: sensitivities, sensor element quantum efficiencies and electronic signal integration times.
  • the input image is inferred from the detected image intensity values.
  • Yet another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the sensor elements are arranged substantially in a jittered pattern.
  • the input image is inferred from the detected image intensity values, while at least some higher frequency components of the detected image intensity values are set to zero prior to image inference.
  • Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements,- projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the sensor elements are arranged substantially in a jittered pattern.
  • the input image is inferred from the detected image intensity values, where sparsity promotion is used to reconstruct a higher resolution image.
  • A method for sampling an image comprises projecting an image onto an image sensor comprising a plurality of photosensitive sensor elements arranged in a pattern obtained by jittering the pixel locations a small distance off of a regular lattice in a random way, and detecting image intensity values after a single exposure at each sensor element of the image sensor.
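A jittered pattern of the kind described (pixel locations perturbed a small distance off a regular lattice in a random way) can be generated as follows; the quarter-pixel jitter radius is our illustrative assumption, not a value from the text:

```python
import numpy as np

def jittered_lattice(rows, cols, jitter=0.25, seed=0):
    """Sensor-element positions: each site of a regular unit lattice is
    displaced by an independent random offset of at most `jitter`."""
    rng = np.random.default_rng(seed)
    i, j = np.mgrid[0:rows, 0:cols].astype(float)
    di = rng.uniform(-jitter, jitter, size=i.shape)
    dj = rng.uniform(-jitter, jitter, size=j.shape)
    return np.stack([i + di, j + dj], axis=-1)

pts = jittered_lattice(4, 4)
```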
  • the input image is inferred from the detected image intensity values.
  • Another such method reduces noise in an image.
  • This method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra and approximating the Poissonian photon shot noise at each photosite with a Gaussian of the same variance and zero mean.
  • the image intensity at each photosite is used as an approximation for the mean of the Poissonian at the photosite.
  • the Gaussian of the same variance and zero mean is combined with the Gaussian noise from other sources to get a combined Gaussian.
  • the maximum likelihood estimator of the combined Gaussian is computed.
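Under this Gaussian approximation the maximum likelihood estimator reduces to weighted least squares, with each photosite weighted by the inverse of its combined variance (intensity plus read-noise variance). A sketch with an illustrative linear forward model (all sizes and constants are our assumptions, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_unk = 200, 8
a = rng.random((n_obs, n_unk))          # illustrative linear forward model
x_true = rng.uniform(50.0, 200.0, n_unk)
mean = a @ x_true                       # true photosite means

read_sigma = 2.0
# Poissonian shot noise approximated by a zero-mean Gaussian whose
# variance equals the mean; adding the Gaussian read noise gives a
# combined Gaussian of variance (mean + read_sigma**2).
y = mean + rng.normal(0.0, np.sqrt(mean + read_sigma**2))

# The observed intensity y stands in for the (unknown) Poissonian mean,
# and the MLE of the combined Gaussian model is weighted least squares
# with weights 1 / variance.
w = 1.0 / (y + read_sigma**2)
aw = a * np.sqrt(w)[:, None]
x_mle, *_ = np.linalg.lstsq(aw, y * np.sqrt(w), rcond=None)
```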
  • A further such method also reduces noise in an image.
  • This method comprises capturing image data representative of intensities of the image in the spatial domain, transforming the image data into transformed data in another domain, and applying the Expectation Maximization algorithm to the inverse transformation of the data to compute its maximum likelihood estimator.
  • An additional such method computes a sparse representation of a signal in a basis. This method comprises inserting additional vectors from the basis into the sparse representation in multiple iterations and using a statistical estimator such as a maximum likelihood estimator to decide which additional vectors to include in each iteration.
  • Yet another such method processes an image. It comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transformation is applied to the sample set of data to produce a set of inferred original image data, wherein the sampling is done at spatial locations which are arranged substantially in a jittered pattern.
  • FIG 1 is a flowchart showing an exemplary method for sampling a color image with a CFA in accordance with an embodiment of the present invention.
  • FIG 2 is a schematic diagram of an exemplary color imaging system in accordance with an embodiment of the present invention.
  • FIG 3 shows a monochrome image and its spectral image.
  • FIG 4 shows the popular Bayer Color Filter Array.
  • FIG 5 illustrates amplitude modulation of an image effected by the Bayer CFA.
  • FIG 6 is a diagram of an exemplary randomized color filter array in accordance with an embodiment of the present invention.
  • FIG 7 illustrates amplitude modulation of an image effected by the CFA shown in FIG 6 in the Fourier domain.
  • FIG 8 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction by matrix inversion.
  • FIG 9 shows the results of a simulation of a simple exemplary embodiment of the present invention in Matlab.
  • FIG 10 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction with sparsity promotion.
  • the present invention works within the broad framework of Color Filter Array based color imaging systems.
  • FIG. 1 is a flowchart showing an exemplary method of color imaging, in accordance with an embodiment of the present invention.
  • a Color Filter Array is created.
  • the incident image is filtered through this CFA.
  • The filtered image is detected by an image sensor upon exposure.
  • the image is reconstructed from the image sensor output and the CFA pattern.
  • FIG. 2 is a schematic diagram of an imaging system, in accordance with an embodiment of the present invention.
  • Image 210 is focused by lens 220 onto Color Filter Array 230.
  • the filtered image is detected by image sensor 240.
  • the resulting plurality of sensed filtered image intensity values is sent to processor 250 where image reconstruction is performed.
  • In equation 11, O_i(Ω), i ∈ {r, g, b}, is a row vector obtained by appropriately rearranging the elements of C_i, i ∈ {r, g, b}, so as to effect the convolution of equation 9.
  • rank(A) ≤ min(|X|, |Y|) = |Y|, since |X| is k times |Y| in the case of k basic colors.
  • X can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases.
  • Thus, equation 10 has to be augmented with additional constraints.
  • Let x be a column vector formed by the concatenation of X_i, i ∈ {r, g, b}, and B a matrix similarly formed by the concatenation of B_i, i ∈ {r, g, b}.
  • Hence x can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases. Thus, equation 14 has to be augmented with B'x = γ (15), where γ is a vector typically composed of constants, usually zeros. Combining equations 14 and 15 we obtain equation 16.
  • From equation 16 we find that a basic color value at a pixel location can be computed as a weighted sum of elements of y'. Since the only image dependent values of y' are in its sub-vector y, this reduces to a space variant filter plus an optional constant. This constant is 0 if γ is a vector of zeros.
  • One practical consideration is that the space variant filter obtained from equation 16 has a large kernel, on the order of the size of the image.
  • Another practical consideration is the space required to store the space variant filter kernels. This can be addressed by using a periodic CFA formed by tiling the sensor with a pattern block, so that the number of filter kernels is reduced to the block size. Such tiling sufficiently preserves the random character of a random CFA as long as the block size is not too small. Rectangular blocks suffice for most applications but other, more complicated, shapes may also be employed.
  • Space variant filter kernels can be pre-computed and used for image reconstruction only if equation 12 or 15 can be set up independently of the image. This is the case for section “Image Model with Restricted Bandwidth” but not for sections “Image Model with Transform-coefficient Sparsity” and “Other Regularization Techniques for Image Reconstruction” . Hybrid approaches that combine space variant filters with adaptive algorithms are described in section "Image Model with Adaptive Color Space and Bandwidth.”
  • the rank of matrix A is determined by the choice of CFA.
  • a carefully tailored CFA with a small repeating pattern can have a rank close to or equal to ⁇ Y ⁇ .
  • Such CFAs have a large number of filter colors, complicating their manufacture. Filters that allow the transmission of more than one basic color are known as panchromatic filters. Panchromatic filters vary in the amount of transmission of each basic color and hence come in a large number of colors.
  • CFAs comprised of a random arrangement of basic color filters such as Red, Green and Blue (known collectively as "RGB") have rank close to |Y|. Furthermore, some random arrangements of the basic colors can have a rank equal to |Y|. Intuitively, this is due to the fact that the DFT of a random pattern is overwhelmingly likely to contain |Y| carrier frequencies of non-zero amplitude. Furthermore, practically all of these amplitudes are unique. Such CFAs are easier to manufacture than panchromatic ones. For more on eigenvalues of random matrices, see E.J. Candes, T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory, 2006 and references cited therein.
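The intuition about the DFT of a random pattern is easy to check numerically (a small sketch; the 16x16 size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16
# Random arrangement of R, G, B filters: each photosite passes one basic color.
choice = rng.integers(0, 3, size=(n, n))
masks = [(choice == c).astype(float) for c in range(3)]

# The DFT of each random mask is overwhelmingly likely to have a carrier
# of non-zero amplitude at nearly every one of the n*n frequencies.
nonzero = [np.count_nonzero(np.abs(np.fft.fft2(m)) > 1e-9) for m in masks]
```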
  • Randomized Panchromatic CFAs can also be built wherein colors are chosen randomly. Such CFAs also have rank concentrated close to ⁇ Y ⁇ . However, these are harder to manufacture than random basic-color filter arrays.
  • FIG. 6 shows an exemplary Color Filter Array 610, in accordance with an embodiment of the present invention.
  • Red, Green and Blue filters in approximately equal numbers are distributed in a random pattern.
  • FIG. 7 illustrates amplitude modulation of an image effected by the CFA shown in FIG. 6 in the Fourier domain.
  • Each circle represents the spectrum of a basic color signal modulated with one of the multitude of carriers. For better visibility, only a few of the carriers are shown.
  • A transform is called a λ-Capture transform if it has the following property when applied to a band-limited color image signal with color space dimensionality k: if the signal has circular Fourier domain support with as many independent Fourier coefficients in each color as the number of Fourier coefficients in the transformed signal, then λ% of the latter are independent linear combinations of the former.
  • The parameter λ of a λ-Capture transform controls the rank of the inverse transform and is a measure of the amount of color information captured in the transformed signal.
  • the color signal bandwidth restriction can also be applied in the Wavelet, Curvelet, Karhunen-Loeve and other bases where it results in many low value coefficients that can be approximated to zero.
  • Chrominance basis vectors that are orthogonal to the luminance basis vector defined above have similarly low high frequency content.
  • Color spaces can be adaptively chosen for different image neighborhoods so as to minimize high frequency content of chrominance signals. For example, this can be done by pre-multiplying each basic color by a factor inversely proportional to its strength in the neighborhood, reconstructing the thus normalized image followed by de-normalization to restore the original colors.
  • Signal bandwidth can be adaptively set for each image neighborhood.
  • Signal bandwidth along the direction of the edge is lower than in the direction perpendicular to it.
  • Edge sensing algorithms can be used to take advantage of this fact, for example, and set up equation 12 or 15 with a roughly oval spectrum whose major axis lies in the direction of the edge. This allows for the reconstruction of higher resolution images and makes the system of equations more overdetermined and thus robust to noise.
  • Image reconstruction then consists of selecting the most suitable color space and spectral shape for each neighborhood, and applying the corresponding space variant filter.
  • Spectral sparsity of natural images may be used to reduce the number of degrees of freedom.
  • most of the large amplitude transform coefficients (as used herein the term “transform coefficient” is to be construed as including “Fourier transform coefficient” , “Wavelet transform coefficient” , “Curvelet transform coefficient” , “Karhunen-Loeve transform coefficient” and coefficients of other vector bases spanning the space of image signals) of color components are solved for, and many of the low amplitude coefficients are neglected. If many of the low amplitude transform coefficients are first identified and set to zero, the resulting simplified inverse problem can be solved using any of the standard linear inverse problem techniques.
  • a random CFA is well suited to our objective of extracting the large valued Transform coefficients as it provides a random set of projections of the image. See E.J. Candes, T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" , IEEE Transactions on Information Theory, 2006. Random CFAs will be assumed for the rest of this section.
  • the low value transform coefficients can be identified by first reconstructing an appropriately bandwidth restricted image. This works because the neglected high frequencies have low magnitudes compared to the low frequencies (see D.L. Ruderman and W. Bialek, "Statistics of natural images: Scaling in the woods” , Physics Review Letters 73 (1994), no. 6, 814-817). Furthermore, the resulting error is randomized and the resulting dense spectral support of the error results in small value transform coefficients remaining small. Once the low value transform coefficients are identified and removed, higher frequency coefficients are reintroduced and the resulting linear problem re-solved. This process is repeated until all "large" transform coefficients are identified and solved for.
  • L1 minimization, also known as Basis Pursuit, is a standard Compressive Sensing technique to identify many of the high amplitude transform coefficients - and thereby the low amplitude transform coefficients. See Chen, Donoho, and Saunders, "Atomic decomposition by basis pursuit", SIAM J. Scientific Computing, vol. 20, pp. 33-61, 1999.
  • Matching Pursuit is a faster but suboptimal greedy algorithm for identifying the large valued transform coefficients. See Mallat and Zhang, "Matching Pursuits with Time-Frequency Dictionaries", IEEE Transactions on Signal Processing, December 1993, pp. 3397-3415. Variants include, but are not limited to, Orthogonal Matching Pursuit, Regularized Orthogonal Matching Pursuit and Simultaneous Orthogonal Matching Pursuit.
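A minimal version of the basic Matching Pursuit iteration (the plain algorithm, not any of the specific variants named above; unit-norm dictionary atoms are assumed) can be sketched as:

```python
import numpy as np

def matching_pursuit(dictionary, signal, n_iter):
    """Greedy sparse approximation: at each iteration pick the (unit-norm)
    atom most correlated with the residual, record its coefficient, and
    subtract its contribution from the residual."""
    residual = signal.astype(float)
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = int(np.argmax(np.abs(corr)))
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[:, k]
    return coeffs, residual

# With an orthonormal dictionary, MP recovers a 2-sparse signal in 2 steps.
d = np.eye(6)
x = np.array([0.0, 3.0, 0.0, -2.0, 0.0, 0.0])
c, r = matching_pursuit(d, d @ x, n_iter=2)
```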
  • the band limitedness of the image can be imposed on each iteration of the solution by projecting the iterate onto the corresponding convex set.
  • Transform sparsity image model can be used in conjunction with limited bandwidth image model to reconstruct the input image. This allows larger bandwidth images to be reconstructed than is possible with only the limited bandwidth model or, for that matter, with state of the art image reconstruction algorithms. This, in turn, allows for larger bandwidth OLPFs to be used or for OLPFs to be omitted altogether.
  • Noise has two primary components: a) additive Gaussian noise generated at the sensor due to thermal and electrical effects and b) Poissonian photon-shot noise which we approximate by a Gaussian of the same variance and zero mean.
  • Naive CFA designs when combined with simplistic matrix inversion for image reconstruction can often lead to amplification of this noise in the reconstructed image.
  • y_ν(Ω) = S(Ω) x + ν(Ω)   (19)
  • where y_ν(Ω) is the noisy signal output by the sensor for frequency Ω, ν(Ω) is a random variable representing noise, and S(Ω) is the corresponding row of the matrix S.
  • Equation 13 can then be rewritten as x_ν(Ω) = S⁻¹(Ω) y_ν(Ω), where x_ν(Ω) is the noisy reconstruction of x(Ω).
  • This scheme becomes susceptible to noise amplification if the choice of CFA is such as to make some elements of S⁻¹(Ω) large in magnitude or negative. Multiplication of signals by large values amplifies noise, while subtraction of strong signals to obtain a weak residual signal decreases SNR, as noise energy gets added even as the signal gets subtracted.
  • Noise amplification can be reduced by employing a slightly larger photosite count that results in a more overdetermined system of linear equations. Many reconstruction algorithms perform better on such over-determined systems of linear equations.
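The benefit of a more overdetermined system can be illustrated with a toy experiment (a sketch; the Gaussian operators and noise level are arbitrary choices, not the patent's model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_unk = 16
x = rng.standard_normal(n_unk)

def recon_error(n_eq):
    """Least-squares reconstruction error for a random n_eq x n_unk
    system observed through additive noise."""
    a = rng.standard_normal((n_eq, n_unk))
    y = a @ x + rng.normal(0.0, 0.1, n_eq)
    x_hat, *_ = np.linalg.lstsq(a, y, rcond=None)
    return np.linalg.norm(x_hat - x)

# Median error over repeated trials: a square (just-determined) system
# versus one with twice as many equations as unknowns.
err_square = np.median([recon_error(n_unk) for _ in range(50)])
err_over = np.median([recon_error(2 * n_unk) for _ in range(50)])
```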
  • Low amplitude carriers in the CFA result in low amplitude sidebands. These low amplitude sidebands are amplified by the inverse color transform of the reconstruction step, which amplifies noise as well. CFAs should be designed so that the sum of all carrier energies is high.
  • Enforcing spectral sparsity is an effective technique for suppressing noise. This is done using the techniques of section "Image Model with Transform-coefficient Sparsity". This leads to the linear system of equations being overdetermined and hence to a substantial improvement in Signal to Noise Ratio.
  • Statistical estimation techniques form another effective class of noise suppression techniques.
  • Statistical Estimation is the term used to describe techniques for estimating the parameters of a probabilistic distribution from a set of samples from that distribution. In the case of an over-determined system of equations, statistical estimation can be used to approximate the means of distributions that generated the sensor readings. Least squares regression is one of the simplest forms of statistical estimation.
  • Various other estimators may also be computed on over-determined systems such as the Maximum Likelihood Estimator (MLE), Maximum A Posteriori Probability (MAP) estimator, Best Linear Unbiased Estimator or a Minimum Variance Linear Estimator.
  • A Matlab simulation was performed wherein a CFA with equal numbers of Red, Green and Blue filters arranged in a random pattern was generated. The original color image was filtered through this CFA. Various amounts of white noise were added to this, and reconstruction was performed on the resultant image sensor output by inverting the linear transformation effected by the CFA in the Fourier domain.
  • Matlab is a product of The MathWorks, Inc., Natick, Massachusetts, U.S.A.
  • FIG. 8 shows a snippet of Matlab code used in the simulation. While we do not provide a complete code listing of the simulation, this snippet should be sufficient for anyone of ordinary skill in the art to reproduce our results.
  • the matrix inversion step of the reconstruction is performed by the Moore-Penrose pseudoinverse algorithm.
  • FIG. 9 shows the results of this simulation.
  • 910 is the original color image.
  • 920 is the image after being filtered by an exemplary randomized RGB filter.
  • 930 is the reconstructed color image. No noise was added, leading to practically perfect reconstruction (PSNR > 300 dB).
  • Low intensity signals may be drowned by noise and high intensity signals may cause sensor saturation.
  • the Dynamic Range of an imaging system is a measure of the range of intensities it can capture. Sensitivity, on the other hand, measures the system's responsiveness to low incident light. High dynamic range and increased low-light sensitivity are valuable characteristics.
  • Photosite sensitivities can be controlled by any suitable method including filter element transmittance, sensor element efficiency or electronic signal integration times. As before, we allow partial spectral overlaps to increase packing efficiency and use a regular pattern of panchromatic filters or a random arrangement of high sensitivity and regular sensitivity photosites of various colors. We can deal with photosites lost to saturation or under-exposure by reducing the number of transform coefficients solved for. If a random CFA is used this can be done gracefully, as described in section "Random CFAs and graceful Error Handling".
  • the CFA mentioned above also serves to improve image-capture in high ISO settings or of low dynamic range scenes. In these scenarios sensor element saturation and underexposure are not a problem and reduced luminance noise is the benefit.
  • Wavelength dependent refractive properties of camera optics can lead to geometric misalignment of the image in different colors. This is called Chromatic Aberration.
  • the framework of the current invention allows for the incorporation of an elegant means of computationally correcting for this optical problem during image reconstruction.
  • the framework of the present invention allows joint image reconstruction and chromatic aberration correction. This is particularly effective when color transforms that take advantage of correlations between basic colors such as red, green and blue are used, as is done in section "Image Model with Restricted Bandwidth” .
  • This is achieved by expressing the discrete samples of the undistorted image in terms of those of the observed distorted image in equation 14. This is done by first expressing the continuous signal in terms of the discretized samples and then applying the continuous domain geometric distortion of chromatic aberration to it before re-discretizing it. This expresses the discrete components of the undistorted image as image independent linear combinations of those of the observed distorted image.
  • Chromatic aberration can be described as a combination of warping in the image plane and defocusing. We derive an explicit expression below for the discrete domain linear transformation corresponding to the former. While there is no exact way of correcting for the defocusing problem, some correction may be achieved by a space varying sharpening filter such as described in Sing Bing Kang, "Automatic Removal of Chromatic Aberration from a Single Image” , 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007) , 18-23 June 2007.
  • The discrete domain image is obtained from the continuous domain through sampling with rectangular photosites. This sampling may be described as a convolution with a 2D rectangular function followed by 2D Dirac comb sampling:

    x_i = (x_i^c * H_Box) · H_Sample   (24)

  where x_i^c is the continuous domain image in color i ∈ {r, g, b} just before it gets filtered by the CFA and x_i is the same image discretized by the sensor.
  • x_i^c may be obtained from the ideal chromatic aberration free image x̂^c through the warping functions u_1i and u_2i. u_1i and u_2i are functions of u_1 and u_2 and are determined offline using existing methods for characterizing chromatic aberration:

    x_i^c(u_1, u_2) = x̂^c(u_1i, u_2i)   (27)

  • Equation 24 for x_i then becomes

    x_i = (x̂^c(u_1i, u_2i) * H_Box) · H_Sample   (28)

  which may be formally inverted (equation 29), where H_LPF is a low pass filter that recovers the continuous signal from its samples; a 2D sinc function is an example implementation.
  • Equation 24, when combined with equations 27 and 29, yields the image independent kernel

    CA(m, n) = ∫ du_1 du_2 ∫ dv_1 dv_2 H_LPF(v − n) H_Box^{-1}(u_i(u) − v) H_Box(m − u)   (34)

  where u_i(u) denotes the warped coordinates (u_1i, u_2i) and H_Box^{-1} denotes the formal inverse filter of H_Box. m and n may be rasterized to make them 1-dimensional, making this, in matrix notation, a linear transform expressing the discrete components of the undistorted image in terms of those of the observed distorted image.
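The warping component of chromatic aberration can be illustrated with a one dimensional sketch (Python/NumPy; the magnification factor, grid size, and the omission of the box filter are simplifying assumptions for illustration, not part of the disclosure). The discrete samples of the warped image are expressed as an image independent linear combination of the samples of the undistorted image, using a sinc low pass filter as H_LPF:

```python
import numpy as np

def warp_matrix(n_samples, warp):
    """Build the discrete linear operator CA[m, n] mapping samples of the
    undistorted signal to samples of the warped (aberrated) signal.
    `warp` maps an output sample position m to the source coordinate u(m)
    in the undistorted signal (a 1D analogue of the CA kernel; box
    filtering is omitted for brevity)."""
    m = np.arange(n_samples)
    n = np.arange(n_samples)
    # CA[m, n] = H_LPF(warp(m) - n): sinc interpolation of the continuous
    # signal reconstructed from the undistorted samples, evaluated at warp(m).
    return np.sinc(warp(m)[:, None] - n[None, :])

# A small linear magnification, as red/blue channels experience under
# lateral chromatic aberration (the 1.01 factor is hypothetical).
N = 64
CA = warp_matrix(N, lambda m: 1.01 * m)
x_undistorted = np.sin(2 * np.pi * 3 * np.arange(N) / N)
x_observed = CA @ x_undistorted                 # forward model
x_recovered = np.linalg.pinv(CA) @ x_observed   # generalized inverse
```

Because CA is image independent, it can be precomputed offline once the warp has been characterized.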
  • Random CFAs of basic colors are especially suited for chromatic aberration correction since they avoid large (electromagnetic) bandwidth panchromatic filters, resulting in sharper images in each basic color, and can gracefully handle loss of information in accordance with the section "Random CFAs and graceful Error Handling."
  • Aliasing of color components can be handled by arranging photosite locations in a "jittered pattern". We use this term to refer to any irregular arrangement, including one where photosites on a regular lattice are perturbed a small distance. This perturbation is random and smaller than the distance between any two neighboring lattice points.
  • The problem of adjacent photosites overlapping can be avoided by changing their size and geometry.
  • An example pattern, shown in FIG. 11, makes these perturbations along the x and y directions identical along the same column and row respectively.
  • A jittered sensor requires changes to equation 14 to account for non-regular lattice locations and the non-regular photosite sizes affecting box filtering that varies from photosite to photosite. This is a straightforward modification of sampling theory along the lines of section "Chromatic Aberration Correction." This modified sampling equation allows the capture of image frequencies beyond the Nyquist limit of a regular sensor of the same size and number of photosites. Informally, a jittered sensor lattice is roughly equivalent to a sensor with a higher Nyquist frequency but with a random set of photosites removed.
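A jittered lattice of the kind described above can be generated as follows (an illustrative sketch; the pitch and jitter amplitude are hypothetical values). Per the FIG. 11 pattern, x-offsets are shared along a column and y-offsets along a row, so neighboring photosites never cross:

```python
import numpy as np

def jittered_lattice(rows, cols, pitch=1.0, jitter=0.25, seed=0):
    """Perturb a regular sensor lattice by random sub-pitch offsets.
    `jitter` is the maximum displacement as a fraction of the pitch and
    must stay below 0.5 so neighboring photosites keep their order."""
    rng = np.random.default_rng(seed)
    dx = rng.uniform(-jitter, jitter, size=cols) * pitch  # one x-offset per column
    dy = rng.uniform(-jitter, jitter, size=rows) * pitch  # one y-offset per row
    xs = np.arange(cols) * pitch + dx
    ys = np.arange(rows) * pitch + dy
    X, Y = np.meshgrid(xs, ys)
    return np.stack([X, Y], axis=-1)  # (rows, cols, 2) photosite centers

pts = jittered_lattice(8, 8)
```

The returned coordinates feed directly into the modified sampling equation described above, replacing the regular lattice positions.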
  • Jittering may be used not just in color image sensors but in monochrome ones as well with the same corresponding benefits.
  • Aliasing is a problem in systems that speed up readout by skipping or binning photosites, because of the reduced sensor Nyquist rate. If a random set of photosites is read out instead of line skipping, and random sets of photosites are combined together instead of photosite binning, the image reconstruction problem becomes identical to the jittered sensor lattice case. Image reconstruction techniques described in section "Sensor Lattice Jittering for Increased Bandwidth and graceful Aliasing" can be used to ameliorate the aliasing problem.
  • CFA based color imaging adds the following elements to imaging in these other fields: firstly, the multiplexing of images in two or more colors onto one sensor image; secondly, the linear relations corresponding to filtering by a CFA; thirdly, correlations between the image in different colors; and fourthly, linear relations corresponding to the approximately circular band limitedness of the optical signal.
  • The Tikhonov regularization framework casts the signal reconstruction problem as an optimization problem and admits a variety of penalty functionals as its objective.
  • The standard formulation uses an energy minimizing term, which also leads to an approximate solution to equation 36.
  • Total Variation minimization also corresponds to a particular penalty functional within the Tikhonov regularization framework.
  • Truncated Singular Value Decomposition is another effective regularization scheme which may be used for color image reconstruction. These regularization techniques are applied to each of the multiplexed images in each basic color. For details see M. Bertero, P. Boccacci, "Introduction to Inverse Problems in Imaging", Taylor & Francis, ISBN 0750304359, hereby incorporated by reference in its entirety.
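The standard energy minimizing Tikhonov formulation can be sketched on a generic underdetermined linear system (the matrix sizes, noise level and λ value are hypothetical, standing in for the CFA sampling system):

```python
import numpy as np

def tikhonov_solve(B, y, lam=0.1):
    """Minimize ||B x - y||^2 + lam * ||x||^2 (the energy minimizing
    penalty). The normal equations give the regularized generalized
    inverse x = (B^T B + lam I)^(-1) B^T y."""
    n = B.shape[1]
    return np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ y)

# Underdetermined toy system standing in for CFA sampling: more unknown
# color samples than measured photosite values.
rng = np.random.default_rng(1)
B = rng.normal(size=(40, 60))
x_true = rng.normal(size=60)
y = B @ x_true + 0.01 * rng.normal(size=40)
x_hat = tikhonov_solve(B, y, lam=0.05)
```

Larger λ trades fidelity to the measurements for a lower-energy (smoother) solution; λ → 0 approaches the plain generalized inverse.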
  • The Bayesian framework may be used for regularization if a suitable prior is used. See Y. Hel-Or, "The canonical correlations of color images and their use for demosaicing", which computes a Maximum A Posteriori (MAP) estimate. We propose that this scheme may be improved by applying it to the full linear system, including not just the linear transform of the CFA, as they have done, but the linear constraints corresponding to the image's frequency band limitedness as well.
  • Conjugate Gradient and Steepest Descent are two iterative techniques that may be used on Bayesian or other non-linear regularization schemes.
  • Band limitedness and spectral sparsity constraints may be combined with any of the above techniques through the method of Projection Onto Convex Sets. In this scheme, at each iteration, the iterate is projected onto the set of band limited and spectrally sparse solutions. Solution positivity may also be imposed using a similar scheme.
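One such iteration, alternating projections onto the band limited set and the positive orthant, might look like this (the mask radius and image size are illustrative choices):

```python
import numpy as np

def pocs_step(x, lpf_mask):
    """One Projection Onto Convex Sets step for a 2D image estimate:
    project onto the band limited set (zero Fourier components outside
    the mask), then onto the positive orthant."""
    X = np.fft.fft2(x)
    x = np.fft.ifft2(X * lpf_mask).real  # band limitedness projection
    return np.clip(x, 0.0, None)         # positivity projection

# Circular low pass mask approximating the optical spectral support.
N = 32
fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
mask = (fx**2 + fy**2) <= 0.25**2

rng = np.random.default_rng(0)
x = rng.uniform(size=(N, N))
for _ in range(10):
    x = pocs_step(x, mask)
```

In practice this step is interleaved with the iterations of whichever solver (Conjugate Gradient, Steepest Descent, etc.) is driving the reconstruction.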
  • FIG. 10 shows a listing of Matlab code used in a simulation based on the sparsity promotion approach to image reconstruction. This listing should be sufficient for anyone of ordinary skill in the art with a familiarity with the GPSR solver (http://www.lx.it.pt/~mtf/GPSR/) to reproduce our results. With no noise added, a PSNR figure of 42 dB was obtained. With enough noise added so that the input image would have a PSNR of 39 dB, the reconstructed image had a PSNR of 34 dB.
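The simulation above uses the GPSR solver; as a self-contained stand-in (not the GPSR algorithm itself), a minimal iterative shrinkage-thresholding (ISTA) loop minimizes the same l1 penalized objective. Problem sizes, the sparse support and λ are hypothetical:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Minimal iterative shrinkage-thresholding for
    min_x 0.5 * ||A x - y||^2 + lam * ||x||_1, the sparsity promoting
    objective that solvers such as GPSR handle."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the quadratic term
        z = x - g / L
        # Soft thresholding: the proximal operator of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 80)) / np.sqrt(30)
x_true = np.zeros(80)
x_true[[5, 17, 42]] = [1.0, -0.8, 0.6]     # a sparse spectrum
y = A @ x_true                             # undersampled measurements
x_hat = ista(A, y, lam=0.01, n_iter=500)
```

In the image reconstruction setting A would combine the CFA sampling with the transform in which the image is sparse.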
  • The scalability of the present image reconstruction scheme depends on the specific Inverse Problem solution technique used.
  • For linear solution techniques such as matrix inversion or MLE calculation, the result is a space variant FIR filter.
  • The size of such filters can be reduced by windowing and associated techniques.
  • Scalability can also be achieved by the standard technique of blocking, wherein a pixel is reconstructed by considering only a small block of photosites around it. This works well because parts of an image that are far apart are largely independent and hence have little influence on each other.
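Blocking can be sketched as follows (tile and overlap sizes are illustrative; `solve_tile` stands in for any local reconstruction operator, such as a per-tile generalized inverse of the CFA sampling):

```python
import numpy as np

def process_blocked(y, solve_tile, tile=16, overlap=4):
    """Apply a per-tile reconstruction operator over overlapping blocks,
    keeping only each tile's core so boundary effects are discarded."""
    H, W = y.shape
    core = tile - overlap
    out = np.zeros_like(y, dtype=float)
    for r in range(0, H - tile + 1, core):
        for c in range(0, W - tile + 1, core):
            sol = solve_tile(y[r:r + tile, c:c + tile])
            h = overlap // 2
            # Write back only the core of the tile, trimming h pixels
            # from each side where the local solve is least reliable.
            out[r + h:r + tile - h, c + h:c + tile - h] = sol[h:-h, h:-h]
    return out

y = np.arange(64 * 64, dtype=float).reshape(64, 64)
z = process_blocked(y, solve_tile=lambda t: t)  # identity "reconstruction"
```

The overlap is what makes the per-tile operators agree where tiles meet; larger overlaps cost more computation but suppress blocking artifacts.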
  • The present invention may be used not just for still imaging but for video as well. Besides a trivial extension to multiple frames, algorithms that perform joint reconstruction of multiple frames leveraging their correlation may also be used.
  • The present invention may also be used in other situations where multi-spectral image sensor systems are limited by geometric constraints.
  • The present invention allows multi-spectral sampling to be folded into smaller sensors requiring smaller apertures without increased acquisition times.
  • The present invention may be used in conjunction with non-rectangular photosites and lattice arrangements including but not limited to hexagonal and octagonal schemes.
  • The present invention may be used in image scanners.
  • The present invention may be used in acquiring multi-spectral images in a different number of dimensions, including 1D and 3D.
  • The first system of equations is generated once and saved to file; each distinct overlapping tile needs to be computed and saved.
  • (Matlab listing of FIG. 10: low pass mask generation via GenerateEllipticalMask, assembly of the band limitedness and CFA sampling equations into the system matrix B, per-tile solution via the pseudoinverse pinv(B), stitching of overlapping tiles, and computation of the ME, MAE, MSE and PSNR metrics.)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

Methods and systems for robust and flexible extraction of image information using color filter arrays. Methods are provided comprising receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra. A generalized inverse of the transform augmented with regularization constraints is applied to the sample set of data to infer original image data. According to one approach the generalized inverse does not use variational minimization or assume constant color ratios. According to an alternative approach regularization can take the form of predetermined spectral constraints applied to the sample set of data to infer the original image data. According to another alternative approach regularization can promote spectral sparsity. Methods are also provided for creating an optical color filter array, for sampling an image, for reducing noise in an image, and for computing a sparse representation of an image signal. Systems for carrying out these and other methods are also provided.

Description

METHOD AND SYSTEM FOR ROBUST AND FLEXIBLE EXTRACTION OF IMAGE INFORMATION USING COLOR FILTER ARRAYS
BACKGROUND OF THE INVENTION Field of the Invention
[0001] Embodiments of the present invention relate to multi-spectral imaging systems such as still cameras, video cameras, scanners and microscopes and more specifically to imaging systems that use fewer sensor elements than previous techniques for comparable image quality.
Background Information
[0002] Images herein can be considered signals whose amplitude may represent some optical property such as intensity, color and polarization which may vary spatially but not significantly temporally during the relevant measurement period. In color imaging, light intensity typically is detected by photosensitive sensor elements or photosites. An image sensor is composed of a two dimensional regular tiling of these individual sensor elements. Color imaging systems need to sample the image in at least three basic colors to synthesize a color image. We use the term "basic colors" to refer to primary colors, secondary colors or any suitably selected set of colors. We exclude color difference signals, many of which are used in popular color spaces, from the definition of basic colors. Furthermore, all references to red, green and blue should be construed to apply to any set of basic colors.
[0003] We only consider systems that sense a "substantially local" transform in the spatial domain of the image signal, which is a transform whose value at a point depends only on the signal in its close neighborhood.
[0004] Color sensing may be achieved by a variety of means such as, for example, (a) splitting the image into three identical copies, separately filtering each into the basic colors, and sensing each of them using separate image sensors, or (b) using a rotating filter disk to transmit images filtered in each of the basic colors onto the same image sensor.
[0005] However, a very popular design for capturing color images is to use a single sensor overlaid with a color filter array ("CFA"). This includes the straightforward design wherein the value of each output pixel is determined by three sensing elements, one for each basic color, usually arranged in horizontal, vertical or diagonal stripes. This design yields red, green and blue images of equal resolution, or equivalently luminance and chrominance signals of equal bandwidth. "Luminance" is defined as a weighted sum of basic color signals where all the weights are positive, while "chrominance" is defined as a weighted sum of basic color signals where at least one weight is negative. The color stripe design is still used in high end cameras such as the Panavision Genesis Digital Camera, http://www.panavision.com/publish/2007/11/09/Genesis.pdf, page 2, 2007. Newer CFA designs by Bayer (see FIG. 4 and B.E. Bayer, "Color imaging array", July 20 1976, US Patent 3,971,065) and others (see K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity", in Proc. of IEEE ICIP, pages II: 81-84, 2007 and L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties") make different trade-offs between luminance and chrominance bandwidths as well as the crosstalk between them.
[0006] Most of the early CFAs, as well as their associated image reconstruction algorithms, were designs based on intuition and experimentation but lacking rigorous mathematical foundation. In the paper "Color demosaicing by estimating luminance and opponent chromatic signals in the Fourier domain", Proc. IS&T/SID 10th Color Imaging Conf., pages 331-336, 2002, D. Alleysson, S. Susstrunk, and J. Herault analyzed electromagnetic filtering performed by CFAs as amplitude modulation of the color signals in the spatial domain (as used herein the terms "demosaic" and "demosaick" are to be construed as input image reconstruction procedures and the terms "demosaicer" and "demosaicker" as input image reconstruction algorithms). This led to frequency domain image reconstruction techniques that viewed the problem as that of demultiplexing the luminance and chrominance signals via demodulation and filtering. See E. Dubois, "Frequency-domain methods for demosaicking of bayer-sampled color images", IEEE Signal Processing Letters, 12(12):847-850, 2005 and N. Lian, L. Chang, and Y.P. Tan, "Improved color filter array demosaicking by accurate luminance estimation", in IEEE International Conference on Image Processing, 2005, ICIP 2005, volume 1, 2005.
[0007] The complementary problem of designing CFAs with good frequency domain properties was attacked by D. Alleysson, S. Susstrunk, and J. Herault, "Linear demosaicing inspired by the human visual system" , IEEE Transactions on Image Processing, 14(4):439-449, 2005 wherein the doubling of the number of blue photosites in the Bayer CFA at the expense of Green photosites was suggested. This was followed by techniques to design CFAs directly in the frequency domain by K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity" in Proc. of IEEE ICIP, pages II: 81-84, 2007 and optimized by L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties" . These techniques fix the pattern of each basic color to consist of a small set of spatial "carriers" - two dimensional sinusoids with appropriate frequencies, phases and amplitudes - and sum over the three basic colors to arrive at the final pattern. This pattern is then overlaid on the sensor. When an image formed by the camera's lens is filtered by the CFA, it is modulated by each of the carrier frequencies. The overlap of the modulation products of the 3 primaries induces a color transform and leads to a multiplex of luminance and chrominance signals modulated at different frequencies. As long as there is limited cross-talk between the luminance and chrominance signals, and the color transform is invertible the original color image can be recovered.
[0008] An important consideration in the choice of sensor color space so far has been the high frequency content of chrominance signals. Well chosen color transforms result in chrominance signals with low high frequency content. This allows the chrominance signals to be placed close to each other and to the luminance signal in the Fourier domain without significant cross-talk. See Y. Hel-Or, "The canonical correlations of color images and their use for demosaicing" and K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity" in Proc. of IEEE ICIP, pages II: 81-84, 2007 and L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties" .
[0009] An important factor influencing the close packing of color component (luminance, chrominance) signals is the geometry of their spectra. Square and rectangular sampling lattices admit higher resolution along the diagonal directions than along horizontal or vertical directions. Optical systems, on the other hand, generate roughly equal resolution in all directions, thereby yielding images with nearly circular spectral support. When plotted, the 2D Fourier transform of a CFA is a rectangle while that of a color component of an optical image is a circle. This leads to the problem of efficiently packing circles into rectangles. FIG. 3 shows an exemplary monochrome image 310 and its spectral image 320. The spectral image is obtained by taking the logarithm of the absolute value of the Fourier transform of the image.
[0010] An aggressive class of techniques for close packing of color component spectra employs adaptive directional techniques during image reconstruction. These techniques assume the color component spectra of small image patches to be sparse in at least one direction. They design their CFA to generate more than one copy of chrominance spectra (see B.E. Bayer, "Color imaging array", July 20 1976, US Patent 3,971,065), implicitly or explicitly identify the cleanest copy during the image reconstruction step and use directional filtering to demultiplex them (see E. Dubois, "Frequency-domain methods for demosaicking of bayer-sampled color images", IEEE Signal Processing Letters, 12(12):847-850, 2005 and K. Hirakawa and T.W. Parks, "Adaptive homogeneity-directed demosaicing algorithm", IEEE Transactions on Image Processing, 14(3):360-369, 2005. Also see Ron Kimmel, "Demosaicing: Image reconstruction from color ccd samples", IEEE Trans. Image Processing, 8:1221-1228, 1999 and E. Chang, S. Cheung, and D.Y. Pan, "Color filter array recovery using a threshold-based variable number of gradients", in Proceedings of SPIE, volume 3650, page 36, 1999). The benefits of adaptive directional image reconstruction come at a heavy cost, though, since sensing edge directions from noisy sub-sampled images is a hard problem, and the non-linear nature of edge sensing makes noise reduction a non-separable step from image reconstruction.
[0011] Universal demosaickers have also been devised that can reconstruct images sampled by any CFA. This has allowed for easy experimentation with novel CFA designs, including patterns with basic colors arranged in a random pattern (as used herein the term "random" is to be construed as including "random" and "pseudo random").

CFA Design in the Fourier Domain
[0012] Consider a photosite located at n = [n_1 n_2] that filters incident light x(n) = [x_r(n) x_g(n) x_b(n)]^T through color filter array c(n) = [c_r(n) c_g(n) c_b(n)] and measures the resulting scalar signal y(n), where

    y(n) = c(n) · x(n)   (1)

[0013] Consider a set of real carrier sinusoids s^(k)(n), 1 ≤ k ≤ m, of unit amplitude, frequencies ω^(k) = [ω_1^(k), ω_2^(k)] and phases φ^(k) ∈ {0, π/2}, given by

    s^(k)(n) = cos(ω^(k) · n + φ^(k))   (2)

Each color of the CFA, c_i(n), i ∈ {r, g, b}, is the superposition of these carriers scaled by an appropriate real amplitude a_i^(k):

    c_i(n) = Σ_{k=1}^{m} a_i^(k) s^(k)(n)   (3)

[0014] The choice of carrier frequencies is a CFA design decision except for the zero frequency or DC component, whose presence is essential for all physically realizable CFAs. For this reason we set ω^(1) = [0 0]. It follows that a_i^(1) > 0, i ∈ {r, g, b}.

[0015] Once the sensor is exposed to image x(n) and its mosaiced output y(n) is captured, a signal processing step is needed to reconstruct x(n). Assuming the carrier frequencies ω^(k), 1 ≤ k ≤ m, are sufficiently separated so that sidebands centered about them do not overlap, each modulated signal can be recovered by multiplication with its respective carrier followed by convolution with a low pass filter h^(k). Formally,

    u^(k)(n) = (h^(k) * (s^(k) · y))(n)   (4)

Each u^(k)(n), 1 ≤ k ≤ m, can be viewed as a color component. Motivated by the fact that a_i^(1) > 0, i ∈ {r, g, b}, we loosely refer to u^(1) as the luminance signal and to u^(k), k > 1, as the chrominance signals.

[0016] It is important to note that if the carrier frequencies ω^(k), 1 ≤ k ≤ m, are insufficiently separated, so that the sidebands centered about them overlap, existing techniques do not prescribe any means of recovering them without crosstalk.

[0017] Since u(n) = [u^(1)(n) ... u^(m)(n)]^T is generated by the modulation of the incident image x(n), it can be written as

    u(n) = A x(n)   (5)

where A = [a_i^(k)] can be interpreted as the color transform matrix. Provided its rank is 3 or greater, x can be recovered by

    x(n) = A^+ u(n)   (6)

Here A^+, the generalized inverse of A, can be interpreted as the inverse color transform.

[0018] From the above discussion it is clear that the three decision variables for a CFA design are the carrier frequencies ω^(k), 1 ≤ k ≤ m, phases φ^(k), 1 ≤ k ≤ m, and amplitudes given by the matrix A.
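Equations 1 through 6 can be exercised end to end in a small simulation (Python/NumPy; the carrier frequencies, phases and amplitude matrix A are a hypothetical design, and a spatially constant test image is used so that the low pass filter of equation 4 reduces to a spatial mean):

```python
import numpy as np

N, m = 64, 3                               # sensor size and number of carriers

# Carrier frequencies in radians/sample; omega^(1) = [0, 0] is the
# mandatory DC carrier. The other two are a hypothetical design choice.
omegas = np.array([[0.0, 0.0], [np.pi, np.pi], [np.pi, 0.0]])
phis = np.zeros(m)

n1, n2 = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
S = np.cos(omegas[:, 0, None, None] * n1
           + omegas[:, 1, None, None] * n2
           + phis[:, None, None])          # s^(k)(n), equation 2

# Amplitudes a_i^(k): rows index carriers k, columns index colors r, g, b.
# A hypothetical full-rank choice whose DC row is strictly positive.
A = np.array([[0.25, 0.5, 0.25],
              [-0.25, 0.5, -0.25],
              [-0.25, 0.0, 0.25]])

C = np.einsum("ki,kxy->ixy", A, S)         # c_i(n), equation 3

x_const = np.array([0.6, 0.8, 0.3])        # a flat test image in r, g, b
y = np.einsum("ixy,i->xy", C, x_const)     # y(n) = c(n) . x(n), equation 1

# Demodulate (equation 4): multiply by each carrier, then low pass;
# for a flat image the low pass filter reduces to a spatial mean.
u = np.array([np.mean(S[k] * y) for k in range(m)])
x_rec = np.linalg.pinv(A) @ u              # x = A^+ u, equation 6
```

For a non-constant image the mean is replaced by a genuine low pass filter, and recovery holds only where the sidebands do not overlap, as discussed above.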
Bayer CFA in the Fourier Domain
[0019] For a Fourier domain analysis of the popular Bayer CFA see E. Dubois, "Frequency-domain methods for demosaicking of bayer-sampled color images", IEEE Signal Processing Letters, 12(12):847-850, 2005. FIG. 4 shows the Bayer CFA 410. FIG. 5 illustrates how color information with its circular support is packed into the sensor's rectangular support. This can be most easily understood in terms of an alternative color space:

    [L C1 C2]^T = (1/4) [1 2 1; -1 2 -1; -1 0 1] [r g b]^T   (7)
[0020] In FIG. 5, the central circle represents Luminance (L). The four quarter circles at the vertices make up Chrominancel (CI). The two semi-circles at the left and right edges make up the first copy of Chrominance2 (C2a). The two semi-circles at the top and bottom edges make up the second copy of Chrominance2 (C2b).
[0021] It is apparent from this figure that there is empty space between circles that goes unused. Such inefficiencies are unavoidable for existing CFAs that attempt to maintain at least one copy of the luminance or chrominance spectra free from crosstalk. This invention redresses such inefficiency, among other things.
Universal Demosaickers
[0022] A few so-called universal image reconstruction methods exist which work with arbitrary red, green and blue CFA patterns. These include one described in Condat, "Random patterns for color filter arrays with good spectral properties", Research Report of the IBB, Helmholtz Zentrum Munchen, no. 08-25, Sept. 2008, Munich, Germany, hereby incorporated by reference in its entirety, which uses variational minimization to infer the original image.
[0023] Lukac et al., "Universal demosaicing for imaging pipelines with a RGB color filter array", Pattern Recognition, vol. 38, pp. 2208-2212, 2005, hereby incorporated by reference in its entirety, presents a universal demosaicker which makes assumptions about the local constancy of color ratios.
[0024] Neither of these universal demosaickers uses spectral band-limitedness constraints in a systematic way.
BRIEF SUMMARY OF THE INVENTION
[0025] The present invention overcomes problems and limitations of prior imaging methods and systems by providing novel methods and systems for, among other things, sampling an image to obtain image data and processing image data.
[0026] One such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra. A generalized inverse of the transform is applied to the sample set of data to produce a set of inferred original image data, wherein the generalized inverse does not use variational minimization and does not assume constant color ratios.
[0027] Another such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra. A generalized inverse of the transform that respects predetermined spectral constraints is applied to the sample set of data to produce a set of inferred original image data.
[0028] Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at sensor elements of the image sensor. Detected intensity values are read out from only a subset of sensor elements to increase speed, the subset of sensor elements being randomly chosen whereby aliasing is rendered similar to noise. The input image is inferred from the detected image intensity values.
[0029] A further such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor. The color filter array is random, whereby aliasing is rendered similar to noise. The sensor elements are grouped into two or more subsets whose respective characteristics differ in at least one of the following respects: sensitivities, sensor element quantum efficiencies and electronic signal integration times. The input image is inferred from the detected image intensity values.
[0030] Yet another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor. The sensor elements are arranged substantially in a jittered pattern. The input image is inferred from the detected image intensity values, while at least some higher frequency components of the detected image intensity values are set to zero prior to image inference.
[0031] Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor. The sensor elements are arranged substantially in a jittered pattern. The input image is inferred from the detected image intensity values, where sparsity promotion is used to reconstruct a higher resolution image.
[0032] A method for sampling an image comprises projecting an image onto an image sensor comprising a plurality of photosensitive sensor elements arranged in a pattern obtained by jittering the pixel locations a small distance off of a regular lattice in a random way, and detecting image intensity values after a single exposure at each sensor element of the image sensor. The input image is inferred from the detected image intensity values.
[0033] Another such method is provided for reducing noise in an image. This method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra, and approximating the Poissonian photon shot noise at each photosite with a Gaussian of the same variance and zero mean. The image intensity at each photosite is used as an approximation for the mean of the Poissonian at that photosite. The zero-mean Gaussian is combined with the Gaussian noise from other sources to obtain a combined Gaussian, and the maximum likelihood estimator of the combined Gaussian is computed.
[0034] A further such method is provided for reducing noise in an image. This method comprises capturing image data representative of intensities of the image in the spatial domain, transforming the image data into transformed data in another domain, and applying the Expectation Maximization algorithm to the inverse transformation of the data to compute its maximum likelihood estimator.
[0035] An additional such method is provided for computing a sparse representation of a signal in a basis. This method comprises inserting additional vectors from the basis into the sparse representation in multiple iterations and using a statistical estimator such as a maximum likelihood estimator to decide which additional vectors to include in each iteration.
[0036] Yet another such method is provided for processing an image. It comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra. A generalized inverse of the transformation is applied to the sample set of data to produce a set of inferred original image data, wherein the sampling is done at spatial locations which are arranged substantially in a jittered pattern.
[0037] It is to be understood that this summary is provided as a means of generally determining what follows in the drawings and detailed description, and is not intended to limit the scope of the invention. Other methods and systems are disclosed and claimed herein and those described above should not be construed as exhaustive or limiting. Objects, features and advantages of the invention will be readily understood upon consideration of the following detailed description taken in conjunction with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0038] FIG. 1 is a flowchart showing an exemplary method for sampling a color image with a CFA in accordance with an embodiment of the present invention.
[0039] FIG. 2 is a schematic diagram of an exemplary color imaging system in accordance with an embodiment of the present invention.
[0040] FIG. 3 shows a monochrome image and its spectral image.
[0041] FIG. 4 shows the popular Bayer Color Filter Array.
[0042] FIG. 5 illustrates amplitude modulation of an image effected by the Bayer CFA.
[0043] FIG. 6 is a diagram of an exemplary randomized color filter array in accordance with an embodiment of the present invention.
[0044] FIG. 7 illustrates amplitude modulation of an image effected by the CFA shown in FIG. 6 in the Fourier domain.
[0045] FIG. 8 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction by matrix inversion.
[0046] FIG. 9 shows the results of a simulation of a simple exemplary embodiment of the present invention in Matlab.
[0047] FIG. 10 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction with sparsity promotion.
DETAILED DESCRIPTION OF THE INVENTION
The present invention works within the broad framework of Color Filter Array based color imaging systems.
[0048] FIG. 1 is a flowchart showing an exemplary method of color imaging, in accordance with an embodiment of the present invention. In step 110, a Color Filter Array is created. In step 120, the incident image is filtered through this CFA. In step 130, the filtered image is detected by an image sensor exposed to it. In step 140, the image is reconstructed from the image sensor output and the CFA pattern.
[0049] FIG. 2 is a schematic diagram of an imaging system, in accordance with an embodiment of the present invention. Image 210 is focused by lens 220 onto Color Filter Array 230. The filtered image is detected by image sensor 240. The resulting plurality of sensed filtered image intensity values is sent to processor 250 where image reconstruction is performed.
[0050] Existing CFA designs try to minimize partial spectral overlap between the different color components, or use empirically designed techniques to reconstruct the image. A novel feature of the present invention is the use of mathematically rigorous techniques that separate the color components during image reconstruction even if they have partial spectral overlap. As before, we define the overlap of two modulated spectra (including the degenerate 0 frequency) as partial if they have different carrier frequencies, and total otherwise.

A General Image Capture Problem Formulation
[0051] Consider a discrete image with $(N_1, N_2)$ pixels. Denote the R, G, B color planes of the image by $x_i, i \in \{r,g,b\}$ and those of the CFA that filters it by $c_i, i \in \{r,g,b\}$. Now, a photosite located at $n = (n_1, n_2)$, $1 \le n_1 \le N_1$, $1 \le n_2 \le N_2$, filters the incident light $x(n) = (x_r(n), x_g(n), x_b(n))$ through color filter array $c(n) = (c_r(n), c_g(n), c_b(n))$ and measures the resulting noise-free, scalar signal $y(n)$, where

$$y(n) = c(n) \cdot x(n) \qquad (8)$$

Taking the $(N_1, N_2)$ point 2D DFT of both sides of equation 8 we get

$$Y = \sum_{i \in \{r,g,b\}} C_i * X_i \qquad (9)$$

where $Y, C_i, X_i, i \in \{r,g,b\}$ are the 2D DFTs of $y, c_i, x_i$ respectively and $*$ represents 2D convolution. We define "rasterization" of a 2 dimensional signal into a vector as first casting each row of the signal as a vector and then concatenating these vectors in the order of the rows. In order to cast equation 9 in matrix form, we first define $\bar{Y}, \bar{X}_i, \bar{C}_i, i \in \{r,g,b\}$ as the rasterized column vector versions of $Y, X_i, C_i, i \in \{r,g,b\}$ respectively, and $\bar{X}$ the concatenation of $\bar{X}_r, \bar{X}_g, \bar{X}_b$. Furthermore, denote the DFT frequencies of the input image by $\omega = (\omega_1, \omega_2)$ and those of the CFA filtered image by $\Omega = (\Omega_1, \Omega_2)$, and their rasterized versions as $\bar{\omega}$ and $\bar{\Omega}$ respectively. Now equation 9 can be re-written in matrix form as

$$\bar{Y} = A \cdot \bar{X} \qquad (10)$$

where row $\bar{\Omega}$ of $A$ is the concatenation of $D_i(\bar{\Omega}), i \in \{r,g,b\}$,

$$A(\bar{\Omega}) = \left[ D_r(\bar{\Omega}) \;\; D_g(\bar{\Omega}) \;\; D_b(\bar{\Omega}) \right] \qquad (11)$$

and $D_i(\bar{\Omega}), i \in \{r,g,b\}$ is a row vector obtained by appropriately rearranging the elements of $\bar{C}_i, i \in \{r,g,b\}$ so as to effect the convolution of equation 9.

[0052] In order to solve equation 10, we require $\mathrm{rank}(A) = |\bar{X}|$. However, $\mathrm{rank}(A) \le \min(|\bar{X}|, |\bar{Y}|)$, and $|\bar{X}|$ is $k$ times the number of sensor elements $|\bar{Y}|$ in the case of $k$ basic colors. Hence $\bar{X}$ can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases. Thus, equation 10 has to be augmented with

$$\bar{\Gamma} = A' \cdot \bar{X} \qquad (12)$$

where $\bar{\Gamma}$ is a vector typically composed of constants, usually zeros. Combining equations 10, 12 we obtain

$$\bar{X} = S^{-1} \cdot \bar{Y}' \qquad (13)$$

where $\bar{Y}' = \begin{bmatrix} \bar{Y} \\ \bar{\Gamma} \end{bmatrix}$ is the concatenation of $\bar{Y}, \bar{\Gamma}$, $S = \begin{bmatrix} A \\ A' \end{bmatrix}$ is the concatenation of $A, A'$, and $S^{-1}$ is the generalized inverse of $S$.
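The relationship between the spatial-domain product of equation 8 and the frequency-domain convolution of equation 9 can be checked numerically. The following sketch is an illustration with arbitrary toy data (not part of the claimed method); the 1/(N1·N2) factor reflects NumPy's unnormalized FFT convention:

```python
import numpy as np

rng = np.random.default_rng(0)
N1, N2 = 8, 8

c = rng.random((3, N1, N2))   # CFA planes c_r, c_g, c_b
x = rng.random((3, N1, N2))   # image planes x_r, x_g, x_b

# Equation 8: y(n) = c(n) . x(n), pointwise product summed over colors.
y = np.sum(c * x, axis=0)

def circ_conv2(A, B):
    """2D circular convolution via the convolution theorem."""
    return np.fft.ifft2(np.fft.fft2(A) * np.fft.fft2(B))

# Equation 9: Y = sum_i C_i (*) X_i, up to the DFT normalization factor.
C = np.fft.fft2(c, axes=(-2, -1))
X = np.fft.fft2(x, axes=(-2, -1))
Y = sum(circ_conv2(C[i], X[i]) for i in range(3)) / (N1 * N2)

assert np.allclose(Y, np.fft.fft2(y))
```

The assertion confirms that multiplication by the CFA in the spatial domain modulates each color's spectrum by the CFA's carriers in the Fourier domain.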
[0053] In order to express the result of equation 10 in the spatial domain, first consider equation 8. Let the rasterized column vector versions of $y$ and the color planes of $x$, $c$ be $\bar{y}, \bar{x}_i, \bar{c}_i, i \in \{r,g,b\}$. Let $\bar{x} \equiv \begin{bmatrix} \bar{x}_r \\ \bar{x}_g \\ \bar{x}_b \end{bmatrix}$ be a column vector formed by the concatenation of $\bar{x}_i, i \in \{r,g,b\}$ and $B$ be a matrix similarly formed by the concatenation of $\bar{c}_i, i \in \{r,g,b\}$. Equation 8 can now be re-written as

$$\bar{y} = B \cdot \bar{x} \qquad (14)$$

[0054] Analogously to the Fourier domain formulation, to solve equation 14 we require $\mathrm{rank}(B) = |\bar{x}|$. However, $\mathrm{rank}(B) \le \min(|\bar{x}|, |\bar{y}|)$, and $|\bar{x}|$ is $k$ times the number of sensor elements $|\bar{y}|$ in the case of $k$ basic colors. Hence $\bar{x}$ can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases. Thus, equation 14 has to be augmented with

$$\bar{\gamma} = B' \cdot \bar{x} \qquad (15)$$

where $\bar{\gamma}$ is a vector typically composed of constants, usually zeros. Combining equations 14, 15 we obtain

$$\bar{x} = S^{-1} \cdot \bar{y}' \qquad (16)$$

where $\bar{y}' = \begin{bmatrix} \bar{y} \\ \bar{\gamma} \end{bmatrix}$ is a concatenation of $\bar{y}, \bar{\gamma}$, $S = \begin{bmatrix} B \\ B' \end{bmatrix}$ is a concatenation of $B, B'$, and $S^{-1}$ is a generalized inverse of $S$.
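The spatial formulation can be exercised on a toy system. In the sketch below (illustrative only), B is built as the horizontal concatenation of diagonal matrices formed from the CFA planes, and a gray-world constraint (all three color planes equal) stands in for the bandwidth and sparsity models discussed later; equation 16 is then applied through a Moore-Penrose pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 16, 3                        # photosites, basic colors

# Random RGB CFA: exactly one basic color per photosite.
colors = rng.integers(0, k, size=N)
c = np.zeros((k, N))
c[colors, np.arange(N)] = 1.0

# y = B @ xbar, with B the horizontal concatenation of diag(c_i)
# and xbar the concatenation of the k rasterized color planes.
B = np.hstack([np.diag(c[i]) for i in range(k)])

x_true = np.tile(rng.random(N), k)  # a gray scene satisfies B' below
y = B @ x_true

# Equation 15: toy constraint rows B' pin the chrominance to zero
# (r - g = 0 and r - b = 0), with gamma a vector of zeros.
I = np.eye(N)
B_prime = np.vstack([np.hstack([I, -I, 0 * I]),
                     np.hstack([I, 0 * I, -I])])
S = np.vstack([B, B_prime])
y_prime = np.concatenate([y, np.zeros(2 * N)])

# Equation 16: xbar recovered through the generalized inverse of S.
x_hat = np.linalg.pinv(S) @ y_prime
```

For this consistent, full-rank augmented system the pseudoinverse recovers the scene exactly; with real image models the constraint rows would instead encode restricted bandwidth.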
[0055] We use the term "Generalized Inverse" to refer to transformations that have some or all properties of an inverse transformation. If a transform is invertible, its generalized inverse refers to this inverse or any transform that results in an inferred image that is sufficiently close to it in L2 norm. If the transform is not invertible, its generalized inverse is obtained by adding additional conditions such as image smoothness or sparsity. If the inverse transform is over determined, the generalized inverse may use statistical estimation to choose the most probable solution from the space of possible solutions.
[0056] From equation 16 we find that a basic color value at a pixel location can be computed as a weighted sum of elements of $\bar{y}'$. Since the only image dependent values of $\bar{y}'$ are in its sub-vector $\bar{y}$, this reduces to a space variant filter plus an optional constant. This constant is 0 if $\bar{\gamma}$ is a vector of zeros. The space variant filter obtained from equation 16 has a large kernel the size of $|\bar{y}|$. In practical systems, this can be greatly reduced by windowing the filter kernel. Alternately, equation 16 can be set up for a small block around each pixel and only the kernel for this pixel solved for. The latter method not only generates small kernels, but does so with greatly reduced offline computation.
[0057] Another practical consideration is the space required to store the space variant filter kernels. This can be addressed by using a periodic CFA formed by tiling the sensor with a pattern block, so that the number of filter kernels is reduced to the block size. Such tiling sufficiently preserves the random character of a random CFA as long as the block size is not too small. Rectangular blocks suffice for most applications but other, more complicated, shapes may also be employed.
[0058] Space variant filter kernels can be pre-computed and used for image reconstruction only if equation 12 or 15 can be set up independently of the image. This is the case for section "Image Model with Restricted Bandwidth" but not for sections "Image Model with Transform-coefficient Sparsity" and "Other Regularization Techniques for Image Reconstruction" . Hybrid approaches that combine space variant filters with adaptive algorithms are described in section "Image Model with Adaptive Color Space and Bandwidth."
Designing CFAs to convey Maximal Information
[0059] The rank of matrix $A$ is determined by the choice of CFA. A carefully tailored CFA with a small repeating pattern can have a rank close to or equal to $|\bar{Y}|$. However, such CFAs have a large number of filter colors, complicating their manufacture. Filters that allow the transmission of more than one basic color are known as panchromatic filters. Panchromatic filters vary in the amount of transmission of each basic color and hence come in a large number of colors.
[0060] We present, without proof, the result that CFAs comprised of a random arrangement of basic color filters such as Red, Green and Blue (known collectively as "RGB") have rank close to $|\bar{Y}|$. Furthermore, some random arrangements of the basic colors can have a rank equal to $|\bar{Y}|$. Intuitively, this is due to the fact that the DFT of a random pattern is overwhelmingly likely to contain $|\bar{Y}|$ carrier frequencies of non-zero amplitude. Furthermore, practically all of these amplitudes are unique. Such CFAs are easier to manufacture than panchromatic ones. For more on eigenvalues of random matrices, see E.J. Candes, T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory, 2006 and references cited therein.
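The claim that a random arrangement of basic colors yields nearly a full complement of non-zero carriers can be observed directly: the 2D DFT of each color's indicator plane in a random CFA has almost no vanishing coefficients. A small NumPy illustration (toy pattern size, not a manufacturing specification):

```python
import numpy as np

rng = np.random.default_rng(2)
N1 = N2 = 16

# Random RGB CFA: one of three basic colors per photosite.
cfa = rng.integers(0, 3, size=(N1, N2))

# Each nonzero DFT coefficient of a color's indicator plane acts
# as a carrier modulating that color's spectrum (cf. FIG. 7).
carriers = []
for i in range(3):
    C = np.fft.fft2((cfa == i).astype(float))
    carriers.append(int(np.count_nonzero(np.abs(C) > 1e-9)))
print(carriers, "of", N1 * N2, "possible carriers per color")
```

In practice nearly all of the N1·N2 possible carriers per color are non-zero, which is the intuition behind the near-full rank of A for random CFAs.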
[0061] Randomized panchromatic CFAs can also be built wherein colors are chosen randomly. Such CFAs also have rank concentrated close to $|\bar{Y}|$. However, these are harder to manufacture than random basic-color filter arrays.
[0062] FIG. 6 shows an exemplary Color Filter Array 610, in accordance with an embodiment of the present invention. In this example Red, Green and Blue filters in approximately equal numbers are distributed in a random pattern.
[0063] FIG. 7 illustrates amplitude modulation of an image effected by the CFA shown in FIG. 6 in the Fourier domain. Each circle represents the spectrum of a basic color signal modulated with one of the multitude of carriers. For better visibility, only a few of the carriers are shown.

[0064] In more general terms, we shall refer to the linear transform effected by a CFA or other optical transformation apparatus as a λ-Capture transform if it has the following property when applied to a band-limited color image signal with color space dimensionality $k$: if the signal has circular Fourier domain support with $\tfrac{1}{k}$ as many independent Fourier coefficients in each color as the number of Fourier coefficients in the transformed signal, then λ% of the latter are independent linear combinations of the former. The parameter λ of a λ-Capture transform controls the rank of the inverse transform and is a measure of the amount of color information captured in the transformed signal.
Random CFAs and graceful Error Handling
[0065] Information injected via equation 12 in the Fourier domain or 15 in the spatial domain may not be accurate, and thereby result in image reconstruction errors. This error is dependent on the CFA, and using a random CFA randomizes the error. Randomized error is more similar to noise than non-randomized error and is visually less objectionable. Furthermore, randomized error lacks transform coefficient sparsity and is more amenable to standard noise reduction techniques. This removes a significant part of the error signal while eliminating very little of the image information.
[0066] Image reconstruction algorithms for random CFAs make very few assumptions about arrangement of photosites of any given color, and hence can be easily adapted to work around defective photosites. The resulting error is randomized, made visually less objectionable and amenable to noise reduction.
Image Model with Restricted Bandwidth
[0067] To apply bandwidth restrictions in the spatial domain, first consider an $(N_1, N_2)$ point 2D DFT matrix $F$ whose row $F(\bar{\omega})$ determines the DFT coefficient of frequency $\bar{\omega}$ of color $i$ when multiplied with $\bar{x}_i, i \in \{r,g,b\}$. Also consider a color transform with $k$ components $a^{(j)}, 1 \le j \le k$, that are linear combinations of the primary colors $r, g, b$ thus defined:

$$a^{(j)} = a_r^{(j)} r + a_g^{(j)} g + a_b^{(j)} b \qquad (17)$$

where $k$ is usually 3. Next, define $G^{(j)} \equiv \left[ a_r^{(j)} F \;\; a_g^{(j)} F \;\; a_b^{(j)} F \right]$ by scaling and concatenating 3 copies of $F$ so that it computes the DFT of color component $a^{(j)}$ when multiplied by $\bar{x}$. Set up equation 15 by letting matrix $B'$ consist of rows of $G^{(j)}, 1 \le j \le k$, that represent the DFT coefficients of color components to be set to zero, and letting $\bar{\gamma}$ be a column vector of $p$ zeros, where $p$ is the number of rows of $B'$.
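As a concrete illustration of this construction (a 1D toy for readability; the 2D case rasterizes analogously, and the chrominance coefficients below are assumed for the example), the rows of B' can be built from a DFT matrix F scaled by the color-transform coefficients:

```python
import numpy as np

N = 8  # 1D toy sensor; a 2D sensor rasterizes identically

# N-point DFT matrix F: row w gives DFT coefficient w of a
# rasterized color plane when it multiplies that plane.
w = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(w, w) / N)

# An example chrominance component c1 = r - g (coefficients
# assumed for illustration, per equation 17).
a = np.array([1.0, -1.0, 0.0])

# G = [a_r F | a_g F | a_b F] computes the DFT of the color
# component when multiplied by xbar = [r; g; b].
G = np.hstack([a[0] * F, a[1] * F, a[2] * F])

# Equation 15: B' keeps the rows of G for the high frequencies of
# the chrominance component, and gamma is the matching zeros.
high = (w > N // 4) & (w < N - N // 4)
B_prime = G[high]
gamma = np.zeros(np.count_nonzero(high))
```

A gray scene (r = g = b) has zero chrominance at every frequency, so it satisfies these constraint rows exactly.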
[0068] The color signal bandwidth restriction of the previous paragraph can equally well be written in the Fourier domain formulation by appropriately setting up equation 12.
[0069] The color signal bandwidth restriction can also be applied in the Wavelet, Curvelet, Karhunen-Loeve and other bases, where it results in many low value coefficients that can be approximated to zero.
[0070] Well chosen color spaces contain most of their high frequency energy in their luminance signal and very little in their chrominance signals. This allows images to be approximately modeled with chrominance bandwidths that are a fraction of their luminance bandwidths with little error. If random CFAs are used, this error is randomized and rendered less visually objectionable and made amenable to noise reduction.
[0071] One such color space containing chrominance signals with low high frequency content is defined in terms of luminance (l), chrominance1 (c1) and chrominance2 (c2) as linear combinations of the basic colors r, g and b (equation 18). It has been shown by Y. Hel-Or, "The canonical correlations of color images and their use for demosaicing," HP Laboratories Israel, Tech. Rep. HPL-2003-164R1, Feb. 2004, that c1, c2 and l can be considered statistically independent.
[0072] While the above definition of luminance is the preferred one, other definitions of chrominance may be used. Chrominance basis vectors that are orthogonal to the luminance basis vector defined above have similarly low high frequency content.
[0073] All techniques described in this section allow one to set up equation 12 or 15 independently of the input image, so image reconstruction can be performed with a pre-computed space variant filter.
Image Model with Adaptive Color Space and Bandwidth
[0074] Color spaces can be adaptively chosen for different image neighborhoods so as to minimize high frequency content of chrominance signals. For example, this can be done by pre-multiplying each basic color by a factor inversely proportional to its strength in the neighborhood, reconstructing the thus normalized image followed by de-normalization to restore the original colors.
[0075] Signal bandwidth can be adaptively set for each image neighborhood. Consider the vicinity of an edge, for example. Signal bandwidth along the direction of the edge is lower than in the direction perpendicular to it. Edge sensing algorithms can be used to take advantage of this fact and set up equation 12 or 15 with a roughly oval spectrum whose major axis lies in the direction of the edge. This allows for the reconstruction of higher resolution images and makes the system of equations more over-determined and thus robust to noise.
[0076] If a limited number of color spaces and a limited number of spectral shapes are considered, their solutions can be precomputed as space variant filters. Image reconstruction then consists of selecting the most suitable color space and spectral shape for each neighborhood, and applying the corresponding space variant filter.
Image Model with Transform-coefficient Sparsity
[0077] Spectral sparsity of natural images may be used to reduce the number of degrees of freedom. In this framework, explicitly or implicitly, most of the large amplitude transform coefficients (as used herein the term "transform coefficient" is to be construed as including "Fourier transform coefficient" , "Wavelet transform coefficient" , "Curvelet transform coefficient" , "Karhunen-Loeve transform coefficient" and coefficients of other vector bases spanning the space of image signals) of color components are solved for, and many of the low amplitude coefficients are neglected. If many of the low amplitude transform coefficients are first identified and set to zero, the resulting simplified inverse problem can be solved using any of the standard linear inverse problem techniques.
[0078] A random CFA is well suited to our objective of extracting the large valued Transform coefficients as it provides a random set of projections of the image. See E.J. Candes, T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" , IEEE Transactions on Information Theory, 2006. Random CFAs will be assumed for the rest of this section.
[0079] The low value transform coefficients can be identified by first reconstructing an appropriately bandwidth restricted image. This works because the neglected high frequencies have low magnitudes compared to the low frequencies (see D.L. Ruderman and W. Bialek, "Statistics of natural images: Scaling in the woods" , Physics Review Letters 73 (1994), no. 6, 814-817). Furthermore, the resulting error is randomized and the resulting dense spectral support of the error results in small value transform coefficients remaining small. Once the low value transform coefficients are identified and removed, higher frequency coefficients are reintroduced and the resulting linear problem re-solved. This process is repeated until all "large" transform coefficients are identified and solved for.
[0080] $L_1$ minimization, also known as Basis Pursuit, is a standard Compressive Sensing technique for identifying many of the high amplitude transform coefficients, and thereby the low amplitude transform coefficients. See Chen, Donoho, and Saunders, "Atomic decomposition by basis pursuit", SIAM J. Scientific Computing, vol. 20, pp. 33-61, 1999.
[0081] Matching Pursuit is a faster but suboptimal greedy algorithm for identifying the large valued transform coefficients. See Mallat and Zhang, "Matching Pursuits with Time-Frequency Dictionaries", IEEE Transactions on Signal Processing, December 1993, pp. 3397-3415. Variants include, but are not limited to, Orthogonal Matching Pursuit, Regularized Orthogonal Matching Pursuit and Simultaneous Orthogonal Matching Pursuit.
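A minimal sketch of Orthogonal Matching Pursuit (a generic textbook version on a synthetic dictionary, not the estimator-guided variant proposed in the next paragraph) illustrates the greedy select-then-refit loop:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily add the dictionary
    column most correlated with the residual, then least-squares
    re-fit the coefficients on the chosen support."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary
x_true = np.zeros(100)
x_true[[5, 27, 63]] = [2.0, -1.5, 1.0]  # 3-sparse coefficients
x_hat = omp(A, A @ x_true, 3)
```

With 60 random measurements of a 3-sparse signal in 100 dimensions, the greedy selection recovers the true support with overwhelming probability.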
[0082] We propose a modification of greedy iterative sparsity promoting techniques such as Matching Pursuit and its variants wherein at each iteration, a statistical estimator such as a Maximum Likelihood Estimator (MLE) is used for determining which additional basis vector to add to the sought sparse representation. Various statistical estimators are discussed in a later section on noise sensitivity. The use of statistical estimators produces a more accurate sparse representation.

[0083] Correlations between the images in the three basic colors can also be leveraged to further increase sparsity allowing more frequencies to be recovered, thereby improving reconstruction quality. Instead of promoting sparsity of the image in each of the basic colors independently, sparsity is sought in a joint representation. See Nagesh and Li, "Compressive imaging of color images", IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Taipei, Taiwan, April 2009. Unlike our CFA based approach, the authors of this prior work use a color wheel and a Digital Mirror Device, thereby freeing them to choose any measurement basis but adding significantly to the cost of such a device and requiring multiple exposures to obtain a single image.
[0084] The band limitedness of the image can be imposed on each iteration of the solution by projecting the iterate onto the corresponding convex set.
[0085] Since natural images have approximate transform coefficient sparsity not exact transform coefficient sparsity, the above reconstruction techniques are not exact. However, since image compression techniques such as JPEG, JPEG2000, JPEG-XR, MPEG1, MPEG2, MPEG4 ASP, AVC, H.263, H.264, AVS, VC-1, WebP, WebM, Theora, Dirac etc. are also based on similar models of transform sparsity, this inaccuracy does not significantly contribute to any additional image degradation.
[0086] The transform sparsity image model can be used in conjunction with the limited bandwidth image model to reconstruct the input image. This allows larger bandwidth images to be reconstructed than is possible with only the limited bandwidth model or, for that matter, with state of the art image reconstruction algorithms. This, in turn, allows larger bandwidth OLPFs to be used, or OLPFs to be omitted altogether.
Noise sensitivity
[0087] Noise has two primary components: a) additive Gaussian noise generated at the sensor due to thermal and electrical effects and b) Poissonian photon-shot noise, which we approximate by a Gaussian of the same variance and zero mean. Naive CFA designs, when combined with simplistic matrix inversion for image reconstruction, can often lead to amplification of this noise in the reconstructed image. In the noisy case, the sensor output at frequency $\bar{\Omega}$ is

$$S(\bar{\Omega}) \cdot \bar{X} + \varepsilon(\bar{\Omega}) = \bar{Y}_\varepsilon(\bar{\Omega}) \qquad (19)$$

where $\bar{Y}_\varepsilon(\bar{\Omega})$ is the noisy signal output by the sensor for frequency $\bar{\Omega}$, $\varepsilon(\bar{\Omega})$ is a random variable representing noise, and $S(\bar{\Omega})$ is the corresponding row of the matrix $S$. For the noisy case, equation 13 can be rewritten as

$$\bar{X}_\varepsilon(\bar{\omega}) = \sum_{\bar{\Omega}} S^{-1}(\bar{\omega}, \bar{\Omega}) \cdot \bar{Y}_\varepsilon(\bar{\Omega}) \qquad (20)$$

where $\bar{X}_\varepsilon(\bar{\omega})$ is the noisy reconstruction of $\bar{X}(\bar{\omega})$. Then

$$\mathrm{var}\left(\bar{X}_\varepsilon(\bar{\omega}) - \bar{X}(\bar{\omega})\right) = \sum_{\bar{\Omega}} \left(S^{-1}(\bar{\omega}, \bar{\Omega})\right)^2 \cdot \mathrm{var}\left(\varepsilon(\bar{\Omega})\right) \qquad (21)$$

where $\varepsilon(\bar{\Omega})$ is assumed to be uncorrelated and $\mathrm{var}(\cdot)$ represents variance. Now, we can express the Signal to Noise Ratio as

$$\mathrm{SNR}(\bar{\omega}) = \frac{\left|\bar{X}(\bar{\omega})\right|^2}{\sum_{\bar{\Omega}} \left(S^{-1}(\bar{\omega}, \bar{\Omega})\right)^2 \cdot \mathrm{var}\left(\varepsilon(\bar{\Omega})\right)} \qquad (22)$$
[0088] This scheme becomes susceptible to noise amplification if the choice of CFA is such as to make some elements of $S^{-1}(\bar{\omega}, \bar{\Omega})$ large in magnitude or negative. Multiplication of signals with large values amplifies noise, while subtraction of strong signals to obtain a weak residual signal decreases SNR, as noise energy gets added even as the signal gets subtracted. These problems may be ameliorated by suitably designing CFAs and the image reconstruction algorithms.
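The noise gain of equation 21 can be computed directly from the generalized inverse. The toy comparison below (small arbitrary matrices standing in for S; for complex-valued S the squared magnitudes would be used) shows how nearly dependent rows, i.e. a poorly chosen CFA, inflate the row-wise sum of squared entries of the inverse and hence the reconstruction noise variance:

```python
import numpy as np

rng = np.random.default_rng(4)

def noise_gain(S):
    """Per-component reconstruction noise variance (equation 21)
    for unit-variance uncorrelated sensor noise: the row-wise sum
    of squared entries of the generalized inverse of S."""
    S_inv = np.linalg.pinv(S)
    return np.sum(S_inv ** 2, axis=1)

well = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
ill = well.copy()
ill[1] = ill[0] + 1e-3 * rng.standard_normal(6)  # nearly dependent rows

print(noise_gain(well).max(), noise_gain(ill).max())
```

The nearly singular system produces inverse entries that are orders of magnitude larger, and the noise variance grows accordingly.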
CFA designs with reduced noise amplification
[0089] Noise amplification can be reduced by employing a slightly larger photosite count, which results in a more over-determined system of linear equations. Many reconstruction algorithms perform better on such over-determined systems of linear equations.
[0090] Low amplitude carriers in the CFA result in low amplitude sidebands. These low amplitude sidebands are amplified by the inverse color transform of the reconstruction step, which amplifies noise as well. CFAs should be designed so that the sum of all carrier energies is high.
[0091] One effective design strategy is to ensure that photosites of different colors are well mixed. This prevents large numbers of photosites of the same color from clumping together, thus ensuring each pixel has photosites of all colors in its vicinity. This works well because most image reconstruction algorithms are better at interpolating missing colors over small distances than they are at interpolating them over large distances. Random CFAs with this property can be generated by setting the probability distribution of picking a color at a photosite as a function of the colors of other photosites in its neighborhood.
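One way to realize such a neighborhood-dependent probability distribution (a simple illustrative scheme; the 0.25 penalty factor is an assumption, not a value from the disclosure) is to assign colors in raster order and down-weight any color already present among the previously assigned neighbors:

```python
import numpy as np

rng = np.random.default_rng(5)
N1 = N2 = 16
cfa = -np.ones((N1, N2), dtype=int)

# Assign colors in raster order; a color already present among the
# previously assigned neighbors has its probability reduced, which
# discourages same-color clumps while keeping the pattern random.
for r in range(N1):
    for col in range(N2):
        w = np.ones(3)
        for dr, dc in [(-1, -1), (-1, 0), (-1, 1), (0, -1)]:
            rr, cc = r + dr, col + dc
            if 0 <= rr < N1 and 0 <= cc < N2 and cfa[rr, cc] >= 0:
                w[cfa[rr, cc]] *= 0.25   # penalty per matching neighbor
        cfa[r, col] = rng.choice(3, p=w / w.sum())
```

Stronger penalties give better mixing; a penalty of 1.0 degenerates to a uniform random CFA.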
[0092] One way of designing CFAs with minimal subtractions while maintaining high efficiency of frequency-packing is to use a small number of carriers, resulting in panchromatic filters, i.e., filters with a large gamut of colors.
Reducing noise amplification during image reconstruction
[0093] Enforcing spectral sparsity is an effective technique for suppressing noise. This is done using the techniques of section "Image Model with Transform-coefficient Sparsity" . This leads to the linear system of equations being over determined and hence a substantial improvement in Signal to Noise Ratio.
[0094] Statistical estimation techniques form another effective class of noise suppression techniques. Statistical Estimation is the term used to describe techniques for estimating the parameters of a probabilistic distribution from a set of samples from that distribution. In the case of an over-determined system of equations, statistical estimation can be used to approximate the means of distributions that generated the sensor readings. Least squares regression is one of the simplest forms of statistical estimation. Various other estimators may also be computed on over-determined systems such as the Maximum Likelihood Estimator (MLE), Maximum A Posteriori Probability (MAP) estimator, Best Linear Unbiased Estimator or a Minimum Variance Linear Estimator. For details, see J. Shao, "Mathematical Statistics" , Springer, ISBN 0-387-98674-X (1998).
[0095] A Matlab simulation was performed wherein a CFA with equal numbers of Red, Green and Blue filters arranged in a random pattern was generated. The original color image was filtered through this CFA. Various amounts of white noise were added, and reconstruction was performed on the resultant image sensor output by inverting the linear transformation effected by the CFA in the Fourier domain. Matlab is a product of The MathWorks, Inc., Natick, Massachusetts, U.S.A.
[0096] FIG. 8 shows a snippet of Matlab code used in the simulation. While we do not provide a complete code listing of the simulation, this snippet should be sufficient for anyone of ordinary skill in the art to reproduce our results. The matrix inversion step of the reconstruction is performed by the Moore-Penrose pseudoinverse algorithm.
[0097] FIG. 9 shows the results of this simulation. 910 is the original color image. 920 is the image after being filtered by an exemplary randomized RGB filter. 930 is the reconstructed color image. No noise was added leading to practically perfect reconstruction (PSNR> 300dB).
[0098] We introduce an efficient algorithm for computing an MLE that works in the spatial domain. It states transform constraints such as sparsity and bandwidth limits in the spatial domain, and then computes an MLE via multi-variate normal regression. This approximates the Poissonian signal with a Gaussian distribution of the same variance and zero mean. Additive Gaussian noise can be handled similarly since the sum of Gaussian random variables is also a Gaussian random variable. See D. Montgomery, E. Peck, "Introduction to Linear Regression Analysis" for more on multi-variate normal regression. Other techniques for computing an MLE such as the EM algorithm can alternately be used. For details, see A. P. Dempster, N.M. Laird, D. B. Rubin, "Maximum Likelihood from Incomplete Data via the EM Algorithm" , Journal of the Royal Statistical Society, Series B (Methodological) 39 (1): 1-38 (1977).
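Under the Gaussian approximation, the MLE reduces to weighted least squares with per-photosite weights equal to the reciprocal of the noise variance. A sketch of this multi-variate normal regression (the measurement matrix, intensities and noise parameters below are toy values chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 80, 20
B = rng.random((m, n))             # over-determined measurement matrix
x_true = rng.uniform(50.0, 100.0, n)
mean = B @ x_true

# Photon-shot noise: Poissonian with the scene intensity as mean,
# approximated by a zero-mean Gaussian of equal variance; adding the
# Gaussian sensor noise (variance sigma2) again yields a Gaussian.
sigma2 = 4.0
y = rng.poisson(mean) + rng.normal(0.0, np.sqrt(sigma2), m)

# MLE via multi-variate normal regression = weighted least squares,
# the measured intensity standing in for the unknown Poisson mean.
wts = 1.0 / (np.maximum(y, 1.0) + sigma2)
sw = np.sqrt(wts)
x_mle, *_ = np.linalg.lstsq(sw[:, None] * B, sw * y, rcond=None)
```

Scaling each equation by the square root of its weight turns the weighted problem into an ordinary least squares problem, whose solution is the Gaussian MLE.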
[0099] Regularization, which is discussed in detail in the later section "Other Regularization Techniques for Demosaicking" as a tool for solving under-determined inverse problems, is also a powerful technique for noise suppression. This includes the Tikhonov method with various penalty functionals, as well as the Landweber method and the Truncated Singular Value Decomposition.
[0100] As described in the later section "Other Regularization Techniques for Demosaicking" , the Landweber method as well as some versions of the Tikhonov method belong to a class of Regularization techniques that, instead of inverting the linear transform given in equation 19, effectively invert a slightly different one:
$$S^* \cdot \left(S \cdot \bar{X} + \varepsilon(\bar{\Omega})\right) = S^* \cdot \bar{Y}_\varepsilon(\bar{\Omega}) \qquad (23)$$

where $S^*$ is the conjugate transpose of $S$. The application of $S^*$ produces a filtering effect which reduces noise.
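A minimal sketch of the Landweber iteration (toy dimensions; the step size and iteration count are illustrative). Each update applies S*, here a real transpose, to the residual, which is the filtering effect noted above, and stopping early acts as the regularizer:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n = 30, 30
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = S @ x_true + 0.01 * rng.standard_normal(m)

# Landweber: x <- x + tau * S^T (y - S x), convergent to the
# least-squares solution for 0 < tau < 2 / ||S||^2.
tau = 1.0 / np.linalg.norm(S, 2) ** 2
x = np.zeros(n)
for _ in range(200):
    x = x + tau * (S.T @ (y - S @ x))
```

The residual norm decreases monotonically; truncating the iteration before the smallest singular directions converge suppresses the noise they would otherwise amplify.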
Dynamic Range and Sensitivity Expansion
[0101] Low intensity signals may be drowned by noise and high intensity signals may cause sensor saturation. The Dynamic Range of an imaging system is a measure of the range of intensities it can capture. Sensitivity, on the other hand, measures the system's responsiveness to low incident light. High dynamic range and increased low-light sensitivity are valuable characteristics.
[0102] Previous attempts at increasing sensitivity, such as by M. Parmar and B. Wandell, "Interleaved Imaging: An Imaging System Inspired by Rod-Cone Vision", in Proc. SPIE Int. Soc. Opt. Eng, 2009, have made a subset of filter elements fully transmissive but have done so in regular patterns and have relied on empirically designed reconstruction techniques. US Patent application 11/191,729 by Compton and Hamilton of Eastman Kodak is another attempt that suffers from similar shortcomings. Alternative methods of controlling the sensitivities of different subsets of photosites include changing the quantum efficiency of sensor elements or dynamically controlling the sensor elements' signal integration times instead of changing filter element transmittance. See, for example, http://www.fujifilmusa.com/products/digital_cameras/exr/about/index.html.
[0103] Photosites of different colors and sensitivities can be handled by equations 10 and 14 and hence supported by the mathematics of section "A General Image Capture Problem Formulation". Photosite sensitivities can be controlled by any suitable method including filter element transmittance, sensor element efficiency or electronic signal integration times. As before, we allow partial spectral overlaps to increase packing efficiency and use a regular pattern of panchromatic filters or a random arrangement of high sensitivity and regular sensitivity photosites of various colors.

[0104] We can deal with photosites lost to saturation or under-exposure by reducing the number of transform coefficients solved for. If a random CFA is used this can be done gracefully, as described in section "Random CFAs and graceful Error Handling".
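A sketch of the idea in [0104], with a hypothetical random transform standing in for the CFA system of equation 10 (Python/NumPy): readings from saturated photosites are simply dropped as equations, and the remaining over-determined system is still solvable. In a real system the number of coefficients solved for would also be reduced when too many photosites are lost, degrading resolution gracefully.

```python
import numpy as np

rng = np.random.default_rng(1)

n_photosites, n_coeffs = 100, 60
S = rng.normal(size=(n_photosites, n_coeffs))   # hypothetical CFA transform
x_true = rng.normal(size=n_coeffs)
y = S @ x_true                                  # noiseless sensor readings

# Readings from saturated (or under-exposed) photosites are unusable:
# drop the corresponding rows of the system.
saturated = rng.random(n_photosites) < 0.25
usable = ~saturated

# With a random CFA the surviving rows generically remain well
# conditioned, so the same coefficients are still recoverable.
x_hat, *_ = np.linalg.lstsq(S[usable], y[usable], rcond=None)

print(usable.sum() >= n_coeffs)
print(np.allclose(x_hat, x_true))
```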
[0105] The CFA mentioned above also serves to improve image capture at high ISO settings or of low dynamic range scenes. In these scenarios sensor element saturation and underexposure are not a problem, and reduced luminance noise is the benefit.
Chromatic Aberration Correction
[0106] Wavelength dependent refractive properties of camera optics can lead to geometric misalignment of the image in different colors. This is called Chromatic Aberration. The framework of the current invention allows this optical problem to be corrected computationally, in an elegant manner, during image reconstruction.
[0107] Existing solutions to this problem apply independent geometric corrections to the images in the different basic colors either before or after image reconstruction. Applying this correction before image reconstruction is problematic because each color by itself may be sub-sampled and hence aliased. The lack of accurate images in the basic colors makes it impossible to effect the geometric corrections.
[0108] The other case, wherein chromatic aberration is corrected after image reconstruction, is problematic because image reconstruction usually relies on correlations between the image in different basic colors. These correlations do not hold for images with chromatic aberration. See Sing Bing Kang, "Automatic Removal of Chromatic Aberration from a Single Image", 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007.
[0109] The framework of the present invention allows joint image reconstruction and chromatic aberration correction. This is particularly effective when color transforms that take advantage of correlations between basic colors such as red, green and blue are used, as is done in section "Image Model with Restricted Bandwidth" . This is achieved by expressing the discrete samples of the undistorted image in terms of those of the observed distorted image in equation 14. This is done by first expressing the continuous signal in terms of the discretized samples and then applying the continuous domain geometric distortion of chromatic aberration to it before re-discretizing it. This expresses the discrete components of the undistorted image as image independent linear combinations of those of the observed distorted image. Correlation relations between the image in different colors mentioned in the section "Image Model with Restricted Bandwidth" may now be used on these undistorted variables. A solution of this system of equations will reconstruct the image with corrected chromatic aberration without incurring the drawbacks of the existing solutions mentioned above.
[0110] Chromatic aberration can be described as a combination of warping in the image plane and defocusing. We derive an explicit expression below for the discrete domain linear transformation corresponding to the former. While there is no exact way of correcting for the defocusing problem, some correction may be achieved by a space varying sharpening filter such as described in Sing Bing Kang, "Automatic Removal of Chromatic Aberration from a Single Image", 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007), 18-23 June 2007.
[0111] The discrete domain image is obtained from the continuous domain through sampling with rectangular photosites. This sampling may be described as a convolution with a 2D rectangular function followed by 2D Dirac comb sampling:
X_i = (x^c_i * H_Box) · H_Sample    (24)

where x^c_i is the continuous domain image in color i ∈ {r, g, b} just before it gets filtered by the CFA and X_i is the same image discretized by the sensor.

H_Box(u_1, u_2, w_1, w_2) ≡ Rect_2(u_1/w_1, u_2/w_2)    (25)

where u_1 and u_2 are continuous variables spanning the sensor length and width and w_1 and w_2 are the photosite length and width.

H_Sample(u_1, u_2, d_1, d_2) ≡ (1/(d_1 · d_2)) DiracComb_2(u_1/d_1, u_2/d_2)    (26)

where d_1 and d_2 are the photosite spacings. The chromatic aberration free image x̄^c_i may be obtained from the distorted image x^c_i through the warping functions u_{1i} and u_{2i}. u_{1i} and u_{2i} are functions of u_1 and u_2 and are determined offline using existing methods for characterizing Chromatic Aberration:

x̄^c_i(u_1, u_2) = x^c_i(u_{1i}, u_{2i})    (27)

Equation 24 for x^c_i becomes

X_i = (x^c_i * H_Box) · H_Sample    (28)

which may be formally inverted:

x^c_i = (X_i * H_Lpf) * H_Box^{-1}    (29)

where H_Lpf is a low pass filter that recovers the continuous signal, an example implementation of which is

H_Lpf = sinc(u_1/d_1) · sinc(u_2/d_2)    (30)

Equation 24, when combined with equations 27 and 29, becomes

X̄_i = (((X_i * H_Lpf * H_Box^{-1})(u_{1i}, u_{2i})) * H_Box) · H_Sample    (31)

This expresses X̄_i as a linear transform of X_i, as may be verified by writing the convolutions explicitly. Note that H_Lpf, H_Box, H_Box^{-1} are functions of (u_1, u_2).

X̄_i(m) = Σ_n X_i(n) ∫ du_1 du_2 ∫ dv_1 dv_2 H_Lpf(v − n) H_Box^{-1}(u_i(u) − v) H_Box(m − u)    (32)

where u_i(u) denotes (u_{1i}(u), u_{2i}(u)) and the u and v integrals are over the sensor plane. This can be written as

X̄_i(m) = Σ_n CA(m, n) X_i(n)    (33)

where

CA(m, n) = ∫ du_1 du_2 ∫ dv_1 dv_2 H_Lpf(v − n) H_Box^{-1}(u_i(u) − v) H_Box(m − u)    (34)

m, n may be rasterized to make them 1-dimensional, making this, in matrix notation,

X̄_i = CA · X_i    (35)
This is an image independent linear transformation in the discrete domain. As previously mentioned, this may be combined with equation 14. The solution of the resulting inverse problem will correct Chromatic Aberration without the drawbacks of existing solutions mentioned earlier.
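A 1-dimensional sketch of the construction of such an image independent warping matrix (Python/NumPy; the warp is hypothetical, and trigonometric interpolation of a periodic signal is used in place of the sinc/box-filter chain of equations 29-31):

```python
import numpy as np

n = 64
m = np.arange(n)

# Hypothetical 1-D chromatic warp: sample positions drift outward with
# distance from the center (a wavelength dependent magnification error).
u = m + 0.02 * (m - n / 2)

# Image independent matrix CA (a 1-D analogue of equation 34): forward
# DFT of the unwarped samples, then evaluation of the trigonometric
# interpolant at the warped positions u.
f = np.fft.fftfreq(n, d=1.0 / n)                  # integer frequencies
F = np.exp(-2j * np.pi * np.outer(f, m) / n)      # forward DFT matrix
E = np.exp(2j * np.pi * np.outer(u, f) / n) / n   # inverse DFT, evaluated at u
CA = (E @ F).real

# One and the same matrix warps any band limited periodic signal exactly.
x = np.cos(2 * np.pi * 3 * m / n) + 0.5 * np.sin(2 * np.pi * 5 * m / n)
x_warped = np.cos(2 * np.pi * 3 * u / n) + 0.5 * np.sin(2 * np.pi * 5 * u / n)

print(np.max(np.abs(CA @ x - x_warped)) < 1e-9)
```

Because CA depends only on the warp and not on the image, it can be precomputed offline and combined with the forward model of equation 14, exactly as described above.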
[0112] When joint image reconstruction and chromatic aberration correction described above is applied to existing CFA designs, numerical stability problems crop up. One cause of the underlying numerical stability problem is the loss of information from misaligned photosites of different colors. This is true of both the Bayer CFA and the newer panchromatic CFAs (see K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity" in Proc. of IEEE ICIP, pages II: 81-84, 2007 and L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties"). The present method of CFA design can yield an over-determined system of equations which makes it resilient to numerical stability problems.
[0113] Random CFAs of basic colors are especially suited for Chromatic Aberration correction since they avoid large (electromagnetic) bandwidth panchromatic filters, resulting in sharper images in each basic color, and can gracefully handle loss of information in accordance with the section "Random CFAs and graceful Error Handling."
Sensor Lattice Jittering for Increased Bandwidth and graceful Aliasing
[0114] Aliasing occurs when the optical signal contains frequencies higher than can be captured by the sensor. Optical low pass filters are often used to remove these higher frequencies, but some energy still leaks through due to filter non-idealities.
[0115] Aliasing of color components can be handled by arranging photosite locations in a "jittered pattern". We use this term to refer to any irregular arrangement, including one where photosites on a regular lattice are perturbed a small distance. This perturbation is random and smaller than the distance between any two neighboring lattice points. The problem of adjacent photosites overlapping can be avoided by changing their size and geometry. An example pattern, shown in FIG. 11, makes these perturbations along the x and y directions identical along the same column and row respectively.
[0116] A jittered sensor requires changes to equation 14 to account for non-regular lattice locations and the non-regular photosite sizes affecting box filtering that varies from photosite to photosite. This is a straightforward modification of sampling theory along the lines of section "Chromatic Aberration Correction." This modified sampling equation allows the capture of image frequencies beyond the Nyquist limit of a regular sensor of the same size and number of photosites. Informally, a jittered sensor lattice is roughly equivalent to a sensor with a higher Nyquist frequency but with a random set of photosites removed.
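A 1-dimensional sketch of the informal claim above (Python/NumPy; the frequencies and jitter amplitude are hypothetical): on a regular lattice of 64 photosites, frequencies of 24 and 40 cycles alias onto each other and cannot be separated, while the same number of jittered samples yields an invertible system.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
j = np.arange(n)

def design(t, freqs):
    """Real Fourier design matrix at sample positions t."""
    cols = []
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t / n), np.sin(2 * np.pi * f * t / n)]
    return np.column_stack(cols)

freqs = [24, 40]     # 40 cycles aliases onto 24 on a regular integer lattice

A_reg = design(j, freqs)                 # regular photosite lattice
t_jit = j + rng.uniform(-0.3, 0.3, n)    # jittered photosite positions
A_jit = design(t_jit, freqs)

x = np.cos(2 * np.pi * 40 * t_jit / n)   # true signal lives at frequency 40

# On the regular lattice the two frequencies are indistinguishable ...
print(np.linalg.matrix_rank(A_reg) < A_reg.shape[1])

# ... but jittering breaks the aliasing and the system becomes invertible.
coef, *_ = np.linalg.lstsq(A_jit, x, rcond=None)
print(np.allclose(coef, [0.0, 0.0, 1.0, 0.0], atol=1e-6))
```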
[0117] Frequencies beyond the modeled image bandwidth are aliased to the reconstructed frequencies. Aliasing artifacts are randomized by jittering and thus spread over a large range of frequencies. This makes aliasing look similar to noise, which is less visually offensive. Dense spectral support of these artifacts also makes it amenable to noise reduction.
[0118] The techniques for reducing and hiding aliasing described above allow the use of a milder optical low pass filter, or its omission altogether. Practical Optical Low Pass Filters attenuate the higher sub-Nyquist frequencies, so omitting them improves the signal to noise ratio of these higher frequencies.
[0119] Jittering may be used not just in color image sensors but in monochrome ones as well with the same corresponding benefits.
Pixel Binning and Line Skipping for Increased Speed
[0120] Frame rates in Video and high speed imaging are limited by sensor read-out rate. Pixel Binning and Line Skipping are two strategies commonly used to reduce read-out time by reading out fewer photosite values thereby sacrificing resolution for speed.
[0121] Aliasing is a problem in such systems because of the reduced sensor Nyquist rate. If a random set of photosites is read out instead of line skipping, and random sets of photosites are combined together instead of photosite binning, the image reconstruction problem becomes identical to the jittered sensor lattice case. Image reconstruction techniques described in section "Sensor Lattice Jittering for Increased Bandwidth and graceful Aliasing" can be used to ameliorate the aliasing problem.
Other Regularization Techniques for Image Reconstruction
[0122] In the case where the frequency content of the image is too great for the linear transformation effected by the CFA to be invertible, we may choose to either gracefully degrade the image resolution as described in section "Random CFAs and graceful Error Handling" or attempt to partly extract it using approximation schemes. The problem of determining X from equation 10 belongs to a class of problems called Linear Inverse Problems. This class of problems has a large body of existing solution techniques. See, for example, C. R. Vogel, "Computational Methods for Inverse Problems" , Frontiers Appl. Math. 23, SIAM, Philadelphia, 2002. Many of these techniques were developed primarily in the medical and geophysical imaging fields and have to be adapted to our problem of color imaging using CFAs. There are a few salient features that CFA based color imaging adds to imaging in these other fields: firstly, the multiplexing of images in two or more colors onto one sensor image, secondly the linear relations corresponding to filtering by a CFA, thirdly correlations between the image in different colors and fourthly linear relations corresponding to approximately circular band limitedness of the optical signal.
[0123] Solution techniques to Linear Inverse Problems include, but are not limited to, Statistical Estimation and various forms of Regularization. Regularization refers to additional ad-hoc constraints such as image smoothness that allow partial reconstruction.
[0124] Regularization schemes already attempted in color imaging include Total Variation minimizing (see T. Saito, T. Komatsu, "Demosaicing approach based on extended color total-variation regularization" in Proc. of IEEE ICIP, Oct. 2008) and Edge preserving (see D. R. Cok, "Reconstruction of CCD images using template matching", Proc. IS&T's Annu. Conf. ICPS, 1994, pp. 380-385) techniques. We propose that the Regularization techniques used in these prior works may be improved by applying them to our full linear system, including not just the linear transform of the CFA, as they have done, but also the linear constraints corresponding to the image's frequency band limitedness.

[0125] We suggest an additional class of Regularization techniques that, instead of inverting the linear transform given in equation 13, effectively invert a slightly different one:
S* · S · X = S* · Ỹ(Ω)    (36)

where S* is the conjugate transpose of S. The application of S* produces a filtering effect which allows approximate reconstruction. The Landweber method is one iterative technique that approximately solves this equation.
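A minimal sketch of the Landweber iteration (Python/NumPy; the matrix S here is a hypothetical random operator): repeated application of S* drives the iterate toward a solution of equation 36.

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.normal(size=(80, 50))           # hypothetical forward transform S
x_true = rng.normal(size=50)
y = S @ x_true + 0.05 * rng.normal(size=80)

# Landweber iteration: x <- x + tau * S*(y - S x), i.e. gradient descent
# on ||S x - y||^2, whose fixed points satisfy S* S x = S* y.
tau = 1.0 / np.linalg.norm(S, 2) ** 2   # step size below 2 / ||S||^2
x = np.zeros(50)
for _ in range(20000):
    x = x + tau * (S.T @ (y - S @ x))

# The iterate approaches the least-squares solution of the normal equations.
x_ls, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.allclose(x, x_ls, atol=1e-6))
```

In practice the iteration is stopped early, which itself acts as a regularizer by leaving the poorly conditioned directions unconverged.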
[0126] The Tikhonov regularization framework casts the signal reconstruction problem as an optimization problem and admits a variety of penalty functionals as its objective. The standard formulation uses an energy minimizing term, which also leads to an approximate solution to equation 36. Total Variation minimization also corresponds to a particular penalty functional within the Tikhonov regularization framework. Truncated Singular Value Decomposition is another effective regularization scheme which may be used for color image reconstruction. These regularization techniques are applied to each of the multiplexed images in each basic color. For details see M. Bertero, P. Boccacci, "Introduction to Inverse Problems in Imaging", Taylor & Francis, ISBN 0750304359, hereby incorporated by reference in its entirety.
[0127] The Bayesian framework may be used for regularization if a suitable prior is used. See Y. Hel-Or, "The canonical correlations of color images and their use for demosaicing", which computes a Maximum A Posteriori (MAP) estimate. We propose that this scheme may be improved by applying it to the full linear system, including not just the linear transform of the CFA, as it has done, but also the linear constraints corresponding to the image's frequency band limitedness.
[0128] Conjugate Gradient and Steepest Descent are two iterative techniques that may be used on Bayesian or other non-linear Regularization schemes.
[0129] Band limitedness and spectral sparsity constraints may be combined with any of the above techniques through the method of Projection Onto Convex Sets. In this scheme, at each iteration, the iterate is projected onto the set of band limited and spectrally sparse solutions. Solution positivity may also be imposed using a similar scheme.
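A sketch of the Projection Onto Convex Sets scheme in one dimension (Python/NumPy; the sampling pattern and band limit are hypothetical): the iterate is cyclically projected onto the set consistent with the measured photosites, the set of band limited signals, and the set of non-negative signals.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
t = np.arange(n)

# A strictly positive, band limited test signal (frequencies 2 and 5).
x_true = 1.6 + np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)

# Under-determined data: only 48 of the 64 photosite values are known.
idx = rng.choice(n, size=48, replace=False)
y = x_true[idx]

bandlimit = 8   # modeled band |f| < 8: 15 real degrees of freedom

x = np.zeros(n)
for _ in range(2000):
    x[idx] = y                          # project onto data-consistent signals
    X = np.fft.fft(x)
    X[bandlimit:n - bandlimit + 1] = 0  # project onto band limited signals
    x = np.fft.ifft(X).real
    x = np.maximum(x, 0)                # project onto non-negative signals

# 48 samples over-determine the 15 band limited degrees of freedom,
# so the intersection of the sets is the original signal.
print(np.max(np.abs(x - x_true)) < 1e-2)
```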
[0130] FIG. 10 shows a listing of Matlab code used in a simulation based on the sparsity promotion approach to image reconstruction. This listing should be sufficient for anyone of ordinary skill in the art with a familiarity with the GPSR solver (http://www.lx.it.pt/~mtf/GPSR/) to reproduce our results. With no noise added, a PSNR figure of 42dB was obtained. With enough noise added so that the input image would have a PSNR of 39dB, the reconstructed image had a PSNR of 34dB.
Scalability of Image Reconstruction Algorithms
[0131] The scalability of the present image reconstruction scheme depends on the specific Inverse Problem solution technique used. When linear solution techniques are used, such as matrix inversion or MLE calculation, the result is a space variant FIR filter. The size of such filters can be reduced by windowing and associated techniques.
[0132] Scalability can also be achieved by the standard technique of blocking, wherein a pixel is reconstructed by only considering a small block of photosites around it. This works well because parts of images that are far away from each other are quite independent of each other, and hence have little influence on each other.
[0133] Faster though cruder reconstruction techniques may be used to generate image previews and high quality reconstruction may be performed offline, possibly on a PC.
Other Embodiments of the Invention
[0134] The present invention may be used not just for still imaging but for video as well. Besides a trivial extension to multiple frames, algorithms that perform joint reconstruction of multiple frames leveraging their correlation may also be used.
[0135] Instead of just color images in the visual spectrum, the present invention may also be used in other situations where multi-spectral image sensor systems are limited by geometric constraints. The present invention allows multi-spectral sampling to be folded into smaller sensors requiring smaller apertures without increased acquisition times.
[0136] The present invention may be used in conjunction with non-rectangular photosites and lattice arrangements including but not limited to hexagonal and octagonal schemes.

[0137] The present invention may be used in image scanners.
[0138] The present invention may be used in acquiring multi-spectral images in different numbers of dimensions, including 1D and 3D.
[0139] The following is Matlab code used to simulate an embodiment of the present invention.

function Evaluate_Stitcher_Pre_Spatial_Chroma_Dir(S)
% Tiling simulation of some bad pixels simultaneously solving the two sets
% of equations Bx=0 which sets Fourier components of the image in each of
% luma, c1, c2 outside an ellipse to zero and
% Elliptical spectral constraints in directions are setup to
% generate candidate images and merge them to generate the output image
% Cx=b where x is the set of image pixel values in luma, c1, c2 and b is the
% set of sensor pixel values in RGB.
% Elimination of some equations for bad pixel simulation is done.
% The first system of equations is generated once and saved to file.
% If there is tile overlap, to ensure that a physically realizable CFA is
% simulated, each distinct overlapping tile needs to be computed and saved.
cfaFile = 'C15-condat.png';
numDirs
aspect = 0.70;
tileSize = 15;
coreSize = 1;
spectralWeight = 0.001;
useSavedInverses = 0;
usePrecompGoodPixels = 1;
r_luma = 0.81;            % Determines the number of variables solved for
r_chroma = r_luma/2;
r_filt = r_luma+0.0;      % Determines the number of coeffs
sigma2 = 0;               % *sqrt(2);
poissonC = 10;
X = double(imread(S));
C = double(imread(cfaFile))/255;
noise = sigma2*randn([size(X,1) size(X,2)]);
overlap = tileSize-coreSize;
numRows = floor((size(X,1)-overlap)/coreSize);
numCols = floor((size(X,2)-overlap)/coreSize);
M = tileSize;
N = M;
c = size(X,3);
assert(c == 3);
stitchedImage_lpf = zeros(numRows+coreSize, numCols+coreSize, c);
% Compute LPF mask
lpfMask = GenerateEllipticalMask(M, r_filt*M, 1, 0);
numCoeffs = sum(sum(lpfMask))*3;
% Simulate bad pixels
[goodPixels, numGoodPixels] = GenerateGoodPixels(usePrecompGoodPixels, M, N);
imageStack = zeros(numDirs, numRows+coreSize, numCols+coreSize, c);
for d=1:numDirs
    % Generate spectral constraints
    tic;
    B = determineEquations_Spatial_Chroma(M, r_luma, r_chroma, ...
        (d-1)*pi/numDirs, aspect, spectralWeight);
    numVars = M*N*c;
    numNZVars = numVars-size(B,1);
    toc;
    if d==1
        numEqns = size(B,1)+numGoodPixels;
        % generate file names
        suffix = '_Dir';
        fprintf('%i coefficients. %i variables. %i NZs. %i equations \n', ...
            numCoeffs, numVars, numNZVars, numEqns);
        filename = sprintf('-%icoeffs-%ivars-%iNZs-%ieqns-%iof%i', numCoeffs, ...
            numVars, numNZVars, numEqns, coreSize, tileSize);
        filename = strcat(S(1:end-4), filename, suffix, '.png');
        lpfFilename = sprintf('-%icoeffs-%iof%i', numVars, coreSize, tileSize);
        lpfFilename = strcat(S(1:end-4), lpfFilename, suffix, '.png');
    end
    % Initialize inverse cache
    [inverses, haveInverse, numInverses] = initializeInverseCache(c, M, N, ...
        coreSize);
    % optionally populate inverse cache from file
    inverseFileName = sprintf('B-C-inverses-%iof%i-%i-%i_Chroma_Dir_%i.mat', ...
        coreSize, tileSize, uint8(r_luma*100), uint8(r_chroma*100), d);
    if useSavedInverses
        load(inverseFileName);
    end
    % estimate core values & stitch them into an image
    stitchedImage = zeros(numRows+coreSize, numCols+coreSize, c);
    tileNum = 0; numLowRankTiles = 0;
    fprintf('\n Row => ');
    for i=1:numRows
        rowStart = (i-1)*coreSize + 1;
        fprintf(' %i', i);
        for j=1:numCols
            colStart = (j-1)*coreSize + 1;
            tileNum = tileNum + 1;
            C_shift = circshift(C, [-(i-1)*coreSize, -(j-1)*coreSize]);
            tile = X(rowStart:rowStart+tileSize-1, colStart:colStart+tileSize-1, :);
            noiseTile = noise(rowStart:rowStart+tileSize-1, ...
                colStart:colStart+tileSize-1, :);
            % Filter image
            for k=1:size(X,3)
                tile(:,:,k) = ifft2(ifftshift(fftshift(fft2(tile(:,:,k))).*lpfMask));
            end
            inverseIndex_i = mod(i, numInverses)+1;
            inverseIndex_j = mod(j, numInverses)+1;
            Y = Sample(tile, C_shift);
            Y = max(zeros(size(Y)), Y + noiseTile);
            if haveInverse(inverseIndex_i, inverseIndex_j)
                Binv = squeeze(inverses(inverseIndex_i, inverseIndex_j, :, :));
                soln = Binv*Y(:);
            else
                [tileSoln, Binv, lowRank] = ...
                    EvaluateO_Tile_WithPre_Spatial_Chroma(Y, C_shift, B, ...
                    numGoodPixels, usePrecompGoodPixels, goodPixels, noiseTile);
                haveInverse(inverseIndex_i, inverseIndex_j) = 1;
                soln = zeros(coreSize*coreSize, 1);
                for k=1:coreSize
                    kc = k+overlap/2;
                    for l=1:coreSize
                        lc = l+overlap/2;
                        inverses(inverseIndex_i, inverseIndex_j, ...
                            (k-1)*coreSize+l, :) = Binv((kc-1)*tileSize+lc, :);
                        soln((k-1)*coreSize+l) = tileSoln((kc-1)*tileSize+lc);
                        inverses(inverseIndex_i, inverseIndex_j, ...
                            (k+coreSize-1)*coreSize+l, :) ...
                            = Binv((kc+tileSize-1)*tileSize+lc, :);
                        soln((k+coreSize-1)*coreSize+l) = ...
                            tileSoln((kc+tileSize-1)*tileSize+lc);
                        inverses(inverseIndex_i, inverseIndex_j, ...
                            (k+2*coreSize-1)*coreSize+l, :) ...
                            = Binv((kc+2*tileSize-1)*tileSize+lc, :);
                        soln((k+2*coreSize-1)*coreSize+l) = ...
                            tileSoln((kc+2*tileSize-1)*tileSize+lc);
                    end
                end
                if lowRank
                    numLowRankTiles = numLowRankTiles + 1;
                    fprintf('%i %i lowRank \n', numLowRankTiles, tileNum);
                end
            end
            img = HelOrInvTransform(reshape(soln, [coreSize, coreSize, c]));
            reconTile = real(img);
            if d==1
                reconTile_lpf = zeros(size(tile));
                for k=1:size(X,3)
                    reconTile_lpf(:,:,k) = LpfFFT(tile(:,:,k)+sigma2* ...
                        abs(randn(size(tile(:,:,k)))), r_luma);
                end
                stitchedImage_lpf(rowStart:rowStart+coreSize-1, ...
                    colStart:colStart+coreSize-1, :) = ...
                    reconTile_lpf(overlap/2+1:overlap/2+coreSize, ...
                    overlap/2+1:overlap/2+coreSize, :);
            end
            stitchedImage(rowStart:rowStart+coreSize-1, ...
                colStart:colStart+coreSize-1, :) = reconTile;
        end
        imwrite(uint8(round(stitchedImage)), 'unidirectionalTemp.png', 'png');
    end
    fprintf('\n');
    if ~useSavedInverses
        save(inverseFileName, 'B', 'C', 'inverses', 'haveInverse');
    end
    imageStack(d, :, :, :) = stitchedImage;
end
Z = msePickDirection(stitchedImage_lpf, imageStack);
imwrite(uint8(round(Z)), filename, 'png');
minimalSize = DetermineMinimalSize(numRows, numCols, coreSize, r_luma);
imwrite(uint8(round(stitchedImage_lpf)), strcat('lpf_stitched_', lpfFilename), 'png');
reducedImage = reduce(Z, minimalSize);
imwrite(uint8(round(reducedImage)), strcat('reduced_', filename), 'png');
reducedImage_lpf = reduce(stitchedImage_lpf, minimalSize);
imwrite(uint8(round(reducedImage_lpf)), strcat('reduced_lpf', lpfFilename), 'png');
[ME, MAE, MSE, PSNR] = Metrics(stitchedImage_lpf, Z, S);
disp([ME, MAE, MSE, PSNR]);

function [inverses, haveInverse, numInverses] = initializeInverseCache(c, ...
    M, N, coreSize)
numInverses = M/coreSize;
assert(floor(numInverses) == numInverses);   % must be an integer
inverses = zeros(numInverses, numInverses, c*coreSize*coreSize, M*N);
haveInverse = zeros(numInverses, numInverses);
function B = determineEquations_Spatial_Chroma(M, r_luma, r_chroma, ...
    theta, aspect, weight)
% Generates the set of equations: Cx=0 which sets Fourier components of
% the image in each of luma, c1, c2 outside an ellipse to zero where x is
% the set of image pixel values in luma, c1, c2.
% CFA equations are generated elsewhere.
c = 3;
majorLengths = [r_luma*M; r_chroma*M; r_chroma*M];
outsideEllipse = GenerateEllipses(M, majorLengths, aspect, theta);
outsideEllipse = 1-outsideEllipse;
for i=1:c
    outsideEllipse(i,:,:) = ifftshift(squeeze(outsideEllipse(i,:,:)));
end
B = zeros(sum(sum(sum(outsideEllipse))), c*M*M);
D = M*fftmtx(M, 0);
eqnIndex = 0;
for k=1:c
    for j=1:M
        for i=1:M
            if outsideEllipse(k, i, j)
                eqnIndex = eqnIndex+1;
                for j2=1:M
                    for i2=1:M
                        pixelIndex = (k-1)*M*M + (j2-1)*M + i2;
                        B(eqnIndex, pixelIndex) = weight*D(i, i2) * D(j2, j);
                    end
                end
            end
        end
    end
end
function [soln, Binv, lowRank] = EvaluateO_Tile_WithPre_Spatial_Chroma(Y, ...
    C, B, numGoodPixels, usePrecompGoodPixels, goodPixels, noiseTile)
% Sets up CFA constraints - i.e. equations that constrain the
% reconstructed image to closely follow the sensor values.
% Y is the sensor values
% C represents the CFA
% B represents transform constraints
%
% Binv returns the inverse of B concatenated with matrix formed by
% the CFA constraints.
% The list of bad pixels also may or may not be given.
M = size(Y,1); N = size(Y,2);
c = 3;
% Simulate bad pixels
if ~usePrecompGoodPixels
    randPixels = randperm(M*N);
    goodPixelNums = randPixels(1:numGoodPixels);
    goodPixels = zeros(M*N, 1);
    for p=1:numGoodPixels
        goodPixels(goodPixelNums(p)) = 1;
    end
    goodPixels = reshape(goodPixels, M, N);
end
offset = size(B,1);
numEqns = offset+numGoodPixels;
Y_good = zeros(numEqns, 1);
% Luma-Chroma color space
% R   r3   r2    r6     L
% G = r3  -r2    r6  *  C1
% B   r3   0   -2r6     C2
r3 = 1/sqrt(3); r2 = 1/sqrt(2); r6 = 1/sqrt(6);
% Append additional equations corresponding to CFA filtering
Y_lin = Y(:);
B_CFA = zeros(numGoodPixels, c*M*N);
goodPixelIndex = 0; linearPixelIndex = 0;
for j=1:N
    for i=1:M
        linearPixelIndex = linearPixelIndex + 1;
        if goodPixels(i,j)
            goodPixelIndex = goodPixelIndex + 1;
            Y_good(offset+goodPixelIndex) = Y_lin(linearPixelIndex);
            B_CFA(goodPixelIndex, linearPixelIndex) = r3;
            if C(i, j, 1) == 1
                B_CFA(goodPixelIndex, M*N+linearPixelIndex) = r2;
                B_CFA(goodPixelIndex, 2*M*N+linearPixelIndex) = r6;
            else if C(i, j, 2) == 1
                B_CFA(goodPixelIndex, M*N+linearPixelIndex) = -r2;
                B_CFA(goodPixelIndex, 2*M*N+linearPixelIndex) = r6;
            else
                B_CFA(goodPixelIndex, 2*M*N+linearPixelIndex) = -2*r6;
            end
            end
        end
    end
end
B = [B; B_CFA];
if numGoodPixels < M*N
    B = B(1:numEqns, :);
end
sys_rank = rank(B);
%fprintf('%i Rank \n', sys_rank);
%fprintf('%i variables. %i equations. %i Rank \n', c*v, numGoodPixels, sys_rank);
% interpolate
% Solve the system of equations
%soln = B\Y_good;
Binv = pinv(B);
soln = Binv*Y_good;
Binv = Binv(:, end-M*N+1:end);
if sys_rank < c*M*N
    lowRank = 1;
else
    lowRank = 0;
end
function mask = GenerateEllipses(tileSize, majorLength, aspect, theta)
% Generate elliptical masks for each color component
% majorLength, aspect and theta are arrays
% with one value for each color component
c = size(majorLength, 1);
mask = zeros(c, tileSize, tileSize);
for i=1:c
    mask(i, :, :) = GenerateEllipticalMask(tileSize, majorLength(i), ...
        aspect, theta);
end

function A = GenerateEllipticalMask(tileSize, majorLength, aspect, ...
    theta)
% Generate an ellipse of ones in a tile of zeros
%
% majorLength: Length of major axis of ellipse
% aspect: Aspect ratio of the ellipse
% theta: angle of the major axis of ellipse wrt to the coordinate axis
A = ones(tileSize, tileSize);
rotator = [cos(theta), -sin(theta); sin(theta), cos(theta)];
r2 = majorLength*majorLength/4;
center = floor(tileSize/2)+1;
for i=1:tileSize
    for j=1:tileSize
        point = [i-center, j-center];
        rPoint = rotator*point(:);
        rPoint(2) = rPoint(2)/aspect;
        if dot(rPoint, rPoint) > r2
            A(i, j) = 0;
        end
    end
end
function Z = msePickDirection(X, Y)
% A proof of concept tool for rest of the technology
% This yields the best possible result by picking the correct candidate
% with the help of the original image
% Needs to be replaced in a practical system
Z = squeeze(Y(1, :, :, :));
for d=2:size(Y,1)
    Ey = X-squeeze(Y(d, :, :, :));
    Ey = Ey.*Ey;
    Ez = X-Z;
    Ez = Ez.*Ez;
    for i=1:size(X,1)
        for j=1:size(X,2)
            if sum(Ey(i, j, :)) < sum(Ez(i, j, :))
                Z(i, j, :) = Y(d, i, j, :);
            end
        end
    end
end
function [goodPixels, numGoodPixels] = ...
    GenerateGoodPixels(usePrecompGoodPixels, M, N)
% Randomly generate good pixels
numGoodPixels = M*N;
goodPixels = 0;
if usePrecompGoodPixels
    randPixels = randperm(M*N);
    goodPixelIndices = randPixels(1:numGoodPixels);
    goodPixels = zeros(M*N, 1);
    for p=1:numGoodPixels
        goodPixels(goodPixelIndices(p)) = 1;
    end
    goodPixels = reshape(goodPixels, M, N);
end
function Y = HelOrInvTransform(X)
% Inverse color transform
% Takes image X and returns RGB image Y
r3 = 1/sqrt(3); r2 = 1/sqrt(2); r6 = 1/sqrt(6);
helOrMatrix = [r3, r2, r6; r3, -r2, r6; r3, 0, -2*r6];
%X = permute(X, [3, 1, 2])
Y = X;
for i=1:size(X,1)
    for j=1:size(X,2)
        P = X(i, j, :);
        Y(i, j, :) = helOrMatrix*P(:);
    end
end
function Y = HelOrTransform(X)
% Forward color transform
% Takes RGB image X and returns Y
r3 = 1/sqrt(3); r2 = 1/sqrt(2); r6 = 1/sqrt(6);
helOrMatrix = [r3, r2, r6; r3, -r2, r6; r3, 0, -2*r6];
invHelOrMatrix = inv(helOrMatrix);
%X = permute(X, [3, 1, 2])
Y = X;
for i=1:size(X,1)
    for j=1:size(X,2)
        P = X(i, j, :);
        Y(i, j, :) = invHelOrMatrix*P(:);
    end
end
function Y = LpfFFT(X, r)
% Low passes the signal X by retaining the bottom r fraction of DFT coefficients
% Works on non-square rectangular image patches
maxX = max(max(X));
A = fftshift(fft2(X));
r = r*size(A,1)/2;
r2 = r*r;
alpha = size(A,1)/size(A,2);
alpha2 = alpha*alpha;
centerRow = floor(size(A,1)/2)+1;
centerCol = floor(size(A,2)/2)+1;
for i=1:size(A,1)
    for j=1:size(A,2)
        if (i-centerRow)^2 + alpha2*(j-centerCol)^2 > r2
            A(i, j) = 0;
        end
    end
end
% Take care of the edge
A(1:size(A,1), 1) = 0;
A(1, 1:size(A,2)) = 0;
Y = ifft2(ifftshift(A));
function Y = Sample(X, C)
% make CFA as big as the image
C = repmat(C, [ceil(size(X,1)/size(C,1)) ceil(size(X,2)/size(C,2)) 1]);
C = C(1:size(X,1), 1:size(X,2), :);
% take the inner product
Y = X.*C;
if ndims(Y)==3
    Y = sum(Y,3);
end
function Y = reduce(X, targetSize)
% downsamples without aliasing
M = size(X,1); N = size(X,2); c = size(X,3);
M2 = targetSize(1); N2 = targetSize(2);
assert(floor((M-M2)/2)==(M-M2)/2);
assert(floor((N-N2)/2)==(N-N2)/2);
XFFT_orig = zeros(M2, N2, c);
X_orig = zeros(size(XFFT_orig));
for k=1:c
    temp = fftshift(fft2(X(:,:,k)));
    % I think this works only if both initial and final dimensions are
    % even or both odd
    XFFT_orig(:,:,k) = temp(M/2+1-M2/2:M/2+M2/2, N/2+1-N2/2:N/2+N2/2);
    X_orig(:,:,k) = real(ifft2(ifftshift(XFFT_orig(:,:,k))));
end
% output range limiting
X_orig = real(X_orig);
reductionFactor = size(X,1)*size(X,2)/(targetSize(1)*targetSize(2));
X_orig = X_orig/reductionFactor;
X_orig = X_orig*255/max(max(max(X_orig)));
Y = min(max(X_orig,0), 255);
function D = fftmtx(n, isInverse)
% Generate the 1D DFT matrix
D = zeros(n,n);
w = exp(-2i*pi/n);
if isInverse
    w = 1/w;
end
for k=1:n
    for j=1:n
        D(k, j) = w^((j-1)*(k-1));
    end
end
D = D/n;

function minimalSize = DetermineMinimalSize(numRows, numCols, coreSize, r_luma)
% Find the smallest size of reconstructed image that holds
% all the image information
minimalSize(1) = ceil(numRows*coreSize*r_luma);
if mod(numRows*coreSize-minimalSize(1), 2)
    minimalSize(1) = minimalSize(1)+1;
end
minimalSize(2) = ceil(numCols*coreSize*r_luma);
if mod(numCols*coreSize-minimalSize(2), 2)
    minimalSize(2) = minimalSize(2)+1;
end
function [ME, MAE, MSE, PSNR] = Metrics(A, B, S)
% Measures quality of reconstructed image B
% by comparing it with the original image A
ME = B-A;
E = abs(B-A);
ME = mean(ME(:));
imwrite(uint8(round(E)), strcat('error', S), 'png');
err_amp = E*255/max(max(max(E)));
imwrite(uint8(round(err_amp)), strcat('errorAmp', S), 'png');
E = [E(:)];
MAE = sum(E)/size(E,1);
E = E.^2;
MSE = sum(E)/size(E,1);
PSNR = 10*log10(255^2/MSE);
% Draw FFT errors
fftErr = zeros(size(A));
fftErr = fftshift(log(1+abs(fft2(B)-fft2(A))));
fftErr = fftErr*256/max([fftErr(:)]);
fftErr = uint8(fftErr + 0.5*ones(size(fftErr)));
imwrite(fftErr, strcat('FFTerror', S), 'png');
[0140] The terms and expressions that have been employed in the foregoing specification are used therein as terms of description and not of limitation, and there is no intention, in the uses of such terms and expressions, to exclude equivalents of the features shown and described or portions thereof, it being recognized that the scope of the invention is defined and limited only by the claims which follow.

Claims

What is claimed is:
1. A method for processing an image, comprising: receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and is 85-Capture or greater; and
applying a generalized inverse of said transform to the sample set of data to produce a set of inferred original image data,
wherein the generalized inverse does not use variational minimization and does not use color ratios.
2. A method for processing an image, comprising: receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and is 85-Capture or greater; and
applying a generalized inverse of said transform, that respects predetermined spectral constraints, to the sample set of data to produce a set of inferred original image data.
3. The method of claim 2, further comprising: providing an image sensor having a plurality of photosensitive sensor elements; providing an optical transformation device having a plurality of transformation elements responsive to the optical property wherein the transformation effected by the optical transformation device is substantially local in the spatial basis and is 85-Capture or greater;
projecting the image onto the optical transformation device; and sensing at the image sensor the optical responses of the transformation elements to the original image to produce the sample set of data.
4. The method of claim 3, wherein providing an optical transformation device comprises providing an optical filter array having a plurality of filter elements.
5. The method of claim 4, wherein providing an optical filter array includes providing at least two sets of the filter elements that exhibit different responses to color.
6. The method of claim 5, wherein providing the optical filter elements includes providing them arranged in a random color pattern satisfying at least one predetermined distribution condition.
7. The method of claim 4, wherein the optical filter array is provided with filter elements responsive to one of several basic colors, the filter elements are arranged in a random color pattern, and inferred original image data is produced in all of the basic colors.
8. The method of claim 7, wherein the optical filter array is provided with filter elements having a one-to-one relationship to substantially all the sensor array elements, each such filter element being responsive to one of the basic colors, the filter elements being arranged in a random color pattern satisfying at least one predetermined distribution condition, and inferred original image data being produced in all of the basic colors.
9. The method of claim 4, wherein the optical filter array is provided with filter elements responsive to randomly composed panchromatic colors, the filter elements are arranged in a random pattern, and inferred image data is produced along all axes of a suitable color space.
10. The method of claim 2, wherein chromatic aberration is additionally corrected by augmenting said transform with a transform corresponding to chromatic aberration prior to image inference.
11. The method of claim 2, wherein only unsaturated sample data elements are used.
12. The method of claim 11, wherein different spatial locations of said transform are grouped into at least two subsets whose respective sensitivities differ so as to increase the dynamic range of image acquisition.
13. The method of claim 11, wherein different spatial locations of said transform are grouped into at least two subsets whose respective sensitivities differ so as to increase the sensitivity of image acquisition.
14. The method of claim 3, wherein the sensor elements are grouped into at least two subsets and their respective sensitivities differ in at least one of the following respects: optical transformation device sensitivities, sensor element quantum efficiencies, or electronic signal integration times, so as to increase one or both of the dynamic range and sensitivity of the image acquisition system by using only unsaturated sensor elements, and said color filter array is random, whereby aliasing is rendered similar to noise.
15. The method of claim 2, wherein said sampling is performed at spatial locations which are arranged substantially in a jittered pattern.
16. The method of claim 15, wherein said generalized inverse sets a predetermined set of higher frequency components substantially to zero, whereby aliasing may be made visually less objectionable.
17. The method of claim 15, wherein sparsity promotion is used for image inference.
18. The method of claim 3, wherein said sensor elements are arranged in a substantially jittered pattern, and said generalized inverse sets a predetermined set of higher frequency components substantially to zero whereby aliasing may be made visually less objectionable as well as reducible.
19. The method of claim 3, wherein said sensor elements are arranged in a substantially jittered pattern, and said generalized inverse uses sparsity promotion, whereby the inferred image has higher resolution.
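Claims 15 through 19 recite sampling at spatial locations arranged substantially in a jittered pattern. A minimal sketch of what such a pattern might look like, a regular grid perturbed by bounded random offsets, is given below; the function name, the jitter bound, and the use of NumPy are illustrative assumptions, not from the specification.

```python
import numpy as np

def jittered_locations(rows, cols, jitter=0.4, seed=0):
    """Regular grid perturbed by a bounded random offset per site.
    Each sample stays within +/-jitter of its nominal grid point, so the
    pattern is 'substantially jittered' rather than fully random."""
    rng = np.random.default_rng(seed)
    r = np.arange(rows)[:, None] + rng.uniform(-jitter, jitter, (rows, cols))
    c = np.arange(cols)[None, :] + rng.uniform(-jitter, jitter, (rows, cols))
    return r, c

r, c = jittered_locations(8, 8)
```

Bounding the offsets keeps sample spacing nearly uniform (so no large gaps appear) while breaking the strict periodicity that causes structured aliasing.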
20. The method of claim 2, wherein said generalized inverse sets a predetermined set of higher frequency components of chrominance or other color difference signals substantially to zero, thereby giving more bandwidth to luminance than chrominance and said color filter array is random, whereby aliasing may be made visually less objectionable as well as reducible.
21. The method of claim 2, wherein a domain in frequency space is adaptively constructed, components of chrominance outside which are set substantially to zero by said generalized inverse.
22. The method of claim 21, wherein said domain is anisotropic.
23. The method of claim 2, wherein the color space is determined so as to lower chrominance energy for the image being reconstructed.
24. The method of claim 2, wherein said generalized inverse uses a matrix inverse.
25. The method of claim 2, wherein said generalized inverse respects predetermined regularization constraints.
26. The method of claim 25 wherein said regularization method promotes sparsity in at least one of the following bases: Fourier related bases, wavelet bases or curvelet bases.
27. The method of claim 26 wherein said sparsity promotion makes an under-determined problem fully determined.
28. The method of claim 25 wherein the Landweber method is used for said regularization.
29. The method of claim 25 wherein the Tikhonov method is used for said regularization.
30. The method of claim 25 wherein the truncated singular value decomposition method is used for said regularization.
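Claims 28 through 30 name standard regularization methods for the generalized inverse. As a rough illustration of the Tikhonov variant only, the sketch below solves the regularized normal equations for an under-determined sampling operator; the matrix sizes and regularization weight are arbitrary assumptions, not values from the specification.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov-regularized generalized inverse:
    x = argmin ||A x - y||^2 + lam ||x||^2
      = (A^T A + lam I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Under-determined sampling operator: 3 measurements of a 5-vector
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))
x_true = rng.standard_normal(5)
y = A @ x_true
x_hat = tikhonov(A, y, 1e-6)  # small lam -> near the minimum-norm solution
```

For small `lam` the estimate approaches the minimum-norm least-squares solution; larger values trade data fidelity for stability against noise.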
31. The method of claim 2 wherein projection on convex sets is used to project each iterate onto the space of solutions having one or both of circular frequency support and positive values.
32. The method of claim 2, wherein said generalized inverse uses statistical estimation techniques.
33. The method of claim 32 wherein said statistical estimation technique includes computing a maximum likelihood estimator, a best linear unbiased estimator, a minimum variance linear estimator, and a maximum a posteriori estimator.
34. The method of claim 33 wherein said maximum likelihood estimator is computed by writing the inverse problem in the spatial domain,
approximating the Poissonian photon shot noise at each photosite with a Gaussian of the same variance and zero mean,
approximating the mean of said Poissonian by computing an approximate solution to said inverse problem,
combining said Gaussian with the Gaussian of any additive noise that is present to get a combined Gaussian, and
using multi-variate normal regression on the combined Gaussian.
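The estimator of claim 34 can be sketched as a weighted least-squares (multivariate normal regression) step in which the Poisson variance at each photosite is approximated from a crude first-pass solution and combined with the additive read-noise variance. All names, noise levels, and matrix sizes below are illustrative assumptions, not from the specification.

```python
import numpy as np

def ml_estimate(A, y, read_var=1.0):
    """Approximate ML estimate under Poisson shot noise plus additive
    Gaussian read noise: the Poisson mean (= variance) at each photosite
    is taken from an ordinary least-squares first pass, and the combined
    Gaussian is handled by inverse-variance weighted regression."""
    # Crude first pass: ordinary least squares
    x0, *_ = np.linalg.lstsq(A, y, rcond=None)
    mean0 = np.clip(A @ x0, 1e-6, None)   # approximate Poisson means
    w = 1.0 / (mean0 + read_var)          # inverse of combined variance
    Aw = A * w[:, None]                   # rows scaled by weights
    # Weighted normal equations: (A^T W A) x = A^T W y
    return np.linalg.solve(A.T @ Aw, Aw.T @ y)

rng = np.random.default_rng(2)
A = rng.uniform(0, 1, (20, 4))            # hypothetical sampling operator
x_true = np.array([50.0, 30.0, 20.0, 10.0])
y = A @ x_true                            # noiseless data for the sketch
x_hat = ml_estimate(A, y)
```

With noiseless data the weighted and unweighted solutions coincide; under shot noise the weighting down-weights bright photosites, whose variance is larger.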
35. The method of claim 33 wherein said maximum likelihood estimator is computed by writing the inverse problem in the spatial domain, and
applying the EM algorithm on said inverse problem.
36. The method of claim 2, wherein the optical property is the intensity of one of several colors sampled in a random color pattern, and inferred original image data is produced in all of the basic colors.
37. The method of claim 2, wherein the optical property is the intensity of one of several panchromatic colors sampled in a random color pattern, and inferred image data is produced along all axes of a suitable color space for each inferred original image data element.
38. The method of claim 2 wherein sparsity promotion, regularization or statistical estimation methods are used to reduce noise.
39. The method of claim 3 wherein one or more of the sensor elements of the image sensor is defective.
40. The method of claim 39 wherein one or more high frequency components is set substantially to zero by said generalized inverse and said color filter array is random, whereby the resulting aliasing is rendered similar to noise.
41. The method of claim 3 wherein said color filter array pattern is comprised of a tiling of a smaller pattern, whereby said image inference is made faster.
42. The method of claim 3 wherein multiple, sequential sample sets of data are produced from which multiple, sequential sets of corresponding inferred original image data are produced.
43. The method of claim 3 wherein only a subset of the sensor elements are read so as to obtain higher frame rates and said subset is randomly chosen whereby aliasing is rendered similar to noise.
44. A method for sampling an image at high speed, comprising: creating an optical color filter array;
providing an image sensor comprising a plurality of photosensitive sensor elements; projecting an image through said color filter array;
detecting image intensity values transmitted through said color filter array after a single exposure at sensor elements of said image sensor;
reading out detected intensity values from only a subset of sensor elements to increase speed; and
inferring the input image from the detected image intensity values,
said subset of sensor elements being randomly chosen whereby aliasing is rendered similar to noise.
45. The method of claim 44 wherein multiple, sequential sample sets of data are produced from which multiple, sequential sets of corresponding inferred original image data are produced.
46. A method for sampling an image, comprising: creating an optical color filter array;
providing an image sensor comprising a plurality of photosensitive sensor elements; projecting an image through said color filter array;
detecting image intensity values transmitted through said color filter array after a single exposure at each sensor element of said image sensor; and
inferring the input image from the detected image intensity values, said color filter array being random whereby aliasing is rendered similar to noise, and said sensor elements being grouped into two or more subsets whose respective sensitivities differ in at least one of the following respects: filter element transmittance, sensor element quantum efficiencies, or electronic signal integration times, so as to increase one or both of the dynamic range and sensitivity of the image acquisition system by using only unsaturated sensor elements.
47. A method for sampling an image, comprising: creating an optical color filter array;
providing an image sensor comprising a plurality of photosensitive sensor elements; projecting an image through said color filter array;
detecting image intensity values transmitted through said color filter array after a single exposure at each sensor element of said image sensor; and
inferring the input image from the detected image intensity values,
said sensor elements being arranged substantially in a jittered pattern and at least some higher frequency components are set to zero prior to image inference whereby aliasing is rendered similar to noise.
48. A method for sampling an image, comprising: creating an optical color filter array;
providing an image sensor comprising a plurality of photosensitive sensor elements; projecting an image through said color filter array;
detecting image intensity values transmitted through said color filter array after a single exposure at each sensor element of said image sensor; and
inferring the input image from the detected image intensity values,
said sensor elements being arranged substantially in a jittered pattern and sparsity promotion may be used to reconstruct a higher resolution image.
49. A method for sampling an image, comprising: projecting an image onto an image sensor comprising a plurality of photosensitive sensor elements;
detecting image intensity values after a single exposure at each sensor element of said image sensor; and
inferring the input image from the detected image intensity values;
said sensor elements being arranged substantially in a jittered pattern and at least some higher frequency components are set to zero prior to image inference whereby aliasing is rendered similar to noise.
50. A method for sampling an image, comprising: projecting an image onto an image sensor comprising a plurality of photosensitive sensor elements;
detecting image intensity values after a single exposure at each sensor element of said image sensor; and
inferring the input image from the detected image intensity values;
said sensor elements being arranged substantially in a jittered pattern and sparsity promotion may be used to reconstruct a higher resolution image.
51. A method for reducing noise in an image, comprising: writing equations in the spatial domain expressing sparsity and/or band-limitedness of said image in some other domain, including the Fourier related or wavelet domains;
approximating the Poissonian photon shot noise at each photosite with a Gaussian of the same variance and zero mean;
using the image intensity at each photosite as an approximation for the mean of said Poissonian at said photosite;
combining said Gaussian with the Gaussian of white noise, if available, to get a combined Gaussian; and
computing the maximum likelihood estimator of said combined Gaussian.
52. The method of claim 51 wherein said maximum likelihood estimator is computed using multi-variate normal regression.
53. A method for reducing noise in an image comprising: writing equations in the spatial domain expressing sparsity and/or band-limitedness of said image in some other domain, including the Fourier related or wavelet domains;
applying the EM algorithm to the inverse problem of said equations to compute its maximum likelihood estimator.
54. A method for reducing noise in a color image using the Landweber, Tikhonov, or truncated singular value decomposition regularization methods.
55. A method for computing a sparse representation of a signal in a basis, comprising: inserting additional vectors from said basis into said sparse representation in each of multiple iterations; and
using a statistical estimator such as a maximum likelihood estimator to decide which additional vectors to include in each iteration.
56. A method for processing an image, comprising: receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and is 85-Capture or greater; and
applying a generalized inverse of said transformation to the sample set of data to produce a set of inferred original image data,
wherein said sampling is done at spatial locations which are arranged substantially in a jittered pattern.
57. The method of claim 56, wherein said generalized inverse sets a predetermined set of higher frequency components substantially to zero whereby aliasing may be rendered similar to noise.
58. The method of claim 56, wherein sparsity promotion is used for image inference.
59. A system for processing an image, comprising: a data processing apparatus adapted to:
receive a sample set of data generated by transforming and sampling an optical property of an original image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and is 85-Capture or greater; and
apply a generalized inverse of said transform, that respects predetermined spectral constraints, to the sample set of data to produce a set of inferred original image data.
60. The system of claim 59, further comprising: an image sensor having a plurality of photosensitive sensor elements;
an optical transformation device having a plurality of transformation elements responsive to the optical property wherein the transformation effected by the optical transformation device is substantially local in the spatial basis and is 85-Capture or greater; and
an optical imaging device adapted to project the image onto the filter array, the image sensor being disposed with respect to the optical transformation device so as to receive the optical responses of the transformation elements to the image at the photosensitive elements corresponding thereto to produce the sample set of data.
PCT/US2011/029878 2010-03-24 2011-03-24 Method and system for robust and flexible extraction of image information using color filter arrays WO2011119893A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP11760265A EP2550808A2 (en) 2010-03-24 2011-03-24 Method and system for robust and flexible extraction of image information using color filter arrays

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US31684010P 2010-03-24 2010-03-24
US61/316,840 2010-03-24
US201161435291P 2011-01-22 2011-01-22
US61/435,291 2011-01-22

Publications (2)

Publication Number Publication Date
WO2011119893A2 true WO2011119893A2 (en) 2011-09-29
WO2011119893A3 WO2011119893A3 (en) 2011-12-22

Family

ID=44673872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/029878 WO2011119893A2 (en) 2010-03-24 2011-03-24 Method and system for robust and flexible extraction of image information using color filter arrays

Country Status (2)

Country Link
EP (1) EP2550808A2 (en)
WO (1) WO2011119893A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003255035A (en) * 2002-03-06 2003-09-10 Sony Corp Array and method for signal estimation using the same
US20030218679A1 (en) * 2001-12-24 2003-11-27 Alfio Castorina Method for improving the quality of a digital image
JP2006211610A (en) * 2005-01-31 2006-08-10 Olympus Corp Imaging system
US20080123097A1 (en) * 2004-10-25 2008-05-29 Hamed Hamid Muhammed System for Multi- and Hyperspectral Imaging

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567968A (en) * 2011-12-22 2012-07-11 孔令华 Image algorithm based on micro-multispectral optical filter
CN104094323A (en) * 2012-02-03 2014-10-08 梅伊有限公司 Apparatus and method for characterizing items of currency
US9681109B2 (en) 2015-08-20 2017-06-13 Qualcomm Incorporated Systems and methods for configurable demodulation
US10313640B2 (en) 2015-08-20 2019-06-04 Qualcomm Incorporated Systems and methods for configurable demodulation
US10735698B2 (en) 2015-08-20 2020-08-04 Qualcomm Incorporated Systems and methods for converting non-Bayer pattern color filter array image data
CN106126879A (en) * 2016-06-07 2016-11-16 中国科学院合肥物质科学研究院 A kind of soil near-infrared spectrum analysis Forecasting Methodology based on rarefaction representation technology
CN112750092A (en) * 2021-01-18 2021-05-04 广州虎牙科技有限公司 Training data acquisition method, image quality enhancement model and method and electronic equipment
WO2023182938A3 (en) * 2022-03-22 2023-11-02 Agency For Science, Technology And Research Method for brillouin optical time domain analysis (botda) based dynamic distributed strain sensing
CN117541495A (en) * 2023-09-04 2024-02-09 长春理工大学 Image stripe removing method, device and medium for automatically optimizing model weight

Also Published As

Publication number Publication date
EP2550808A2 (en) 2013-01-30
WO2011119893A3 (en) 2011-12-22

Similar Documents

Publication Publication Date Title
US9118797B2 (en) Method and system for robust and flexible extraction of image information using color filter arrays
CN112805744B (en) System and method for demosaicing multispectral images
Menon et al. Color image demosaicking: An overview
WO2011119893A2 (en) Method and system for robust and flexible extraction of image information using color filter arrays
US8761504B2 (en) Spatio-spectral sampling paradigm for imaging and a novel color filter array design
US9025871B2 (en) Image processing apparatus and method of providing high sensitive color images
Zhang et al. Universal demosaicking of color filter arrays
US20100092082A1 (en) framework for wavelet-based analysis and processing of color filter array images with applications to denoising and demosaicing
JP2012060641A (en) Digital raw image demosaic method, computer program thereof, and image sensor circuit or graphic circuit thereof
CN102812709A (en) Method And System For Compressive Color Image Sampling And Reconstruction
Chakrabarti et al. Rethinking color cameras
KR102083721B1 (en) Stereo Super-ResolutionImaging Method using Deep Convolutional Networks and Apparatus Therefor
US10237519B2 (en) Imaging apparatus, imaging system, image generation apparatus, and color filter
US20180122046A1 (en) Method and system for robust and flexible extraction of image information using color filter arrays
WO2009047643A2 (en) Mehtod and apparatus for image processing
WO2020139493A1 (en) Systems and methods for converting non-bayer pattern color filter array image data
Hirakawa et al. Spatio-spectral sampling and color filter array design
De Lavarène et al. Practical implementation of LMMSE demosaicing using luminance and chrominance spaces
Paul et al. Maximum accurate medical image demosaicing using WRGB based Newton Gregory interpolation method
US20220292647A1 (en) Apparatus and method for processing image
JP7415464B2 (en) Video processing device, video processing method and program
Aghagolzadeh et al. Bayer and panchromatic color filter array demosaicing by sparse recovery
Singh et al. Disregarding spectral overlap—A unified approach for demosaicking, compressive sensing and color filter array design
Alleysson et al. Frequency selection demosaicking: A review and a look ahead
WO2009094618A1 (en) System and method for cross-talk correction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11760265

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 9081/CHENP/2012

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2011760265

Country of ref document: EP