EP2550808A2 - Method and system for robust and flexible extraction of image data using color filter arrays - Google Patents

Method and system for robust and flexible extraction of image data using color filter arrays

Info

Publication number
EP2550808A2
Authority
EP
European Patent Office
Prior art keywords
image
color
sensor
optical
filter array
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP11760265A
Other languages
English (en)
French (fr)
Inventor
Mritunjay Singh
Tripurari Singh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of EP2550808A2

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611 Correction of chromatic aberration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/618 Noise processing, e.g. detecting, correcting, reducing or removing noise for random or high-frequency noise

Definitions

  • Embodiments of the present invention relate to multi-spectral imaging systems such as still cameras, video cameras, scanners and microscopes and more specifically to imaging systems that use fewer sensor elements than previous techniques for comparable image quality.
  • Images herein can be considered signals whose amplitude may represent some optical property such as intensity, color and polarization which may vary spatially but not significantly temporally during the relevant measurement period.
  • Light intensity is typically detected by photosensitive sensor elements, or photosites.
  • An image sensor is composed of a two-dimensional regular tiling of these individual sensor elements.
  • Color imaging systems need to sample the image in at least three basic colors to synthesize a color image.
  • The term "basic colors" is used herein to refer to primary colors, secondary colors or any suitably selected set of colors.
  • Accordingly, all references to red, green and blue should be construed to apply to any set of basic colors.
  • Color sensing may be achieved by a variety of means such as, for example, (a) splitting the image into three identical copies, separately filtering each into the basic colors, and sensing each of them using separate image sensors, or (b) using a rotating filter disk to transmit images filtered in each of the basic colors onto the same image sensor.
  • A very popular design for capturing color images is to use a single sensor overlaid with a color filter array ("CFA").
  • This design yields red, green and blue images of equal resolution, or equivalently luminance and chrominance signals of equal bandwidth.
  • "Luminance" is defined as a weighted sum of basic color signals where all the weights are positive, while "chrominance" is defined as a weighted sum of basic color signals where at least one weight is negative.
  • The color stripe design is still used in high end cameras such as the Panavision Genesis Digital Camera (see http://www.panavision.com/publish/2007/11/18Genesis.pdf, page 2, 2007).
  • Newer CFA designs by Bayer (see FIG. 4 and B.E. Bayer, "Color imaging array", US Patent 3,971,065, July 20, 1976) and others (see K. Hirakawa and P.J. Wolfe, "Spatio-spectral color filter array design for enhanced image fidelity", in Proc. of IEEE ICIP, pages II: 81-84, 2007, and L. Condat, "A New Class of Color Filter Arrays with Optimal Sensing Properties") make different trade-offs between luminance and chrominance bandwidths as well as the crosstalk between them.
  • FIG. 3 shows an exemplary monochrome image 310 and its spectral image 320.
  • the spectral image is obtained by taking the logarithm of the absolute value of the Fourier transform of the image.
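  • As a minimal illustration, the spectral image can be computed as the logarithm of the absolute value of the 2D Fourier transform. The following is a numpy sketch (the document's own simulations are in Matlab; the test image here is an illustrative assumption):

```python
import numpy as np

def spectral_image(img, eps=1e-12):
    # Log-magnitude Fourier spectrum, shifted so the DC term sits at the center.
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.log(np.abs(F) + eps)  # eps guards against log(0)

# A single horizontal frequency produces two symmetric spectral peaks.
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * 3 * x / 64)
spec = spectral_image(img)
print(spec.shape)  # (64, 64)
```

For the cosine test image, the two brightest spectrum samples appear symmetrically about the center, three frequency bins to either side.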
  • An aggressive class of techniques for close packing of color component spectra employs adaptive directional techniques during image reconstruction. These techniques assume the color component spectra of small image patches to be sparse in at least one direction. They design their CFA to generate more than one copy of chrominance spectra (see B.E. Bayer, "Color imaging array", US Patent 3,971,065, July 20, 1976), implicitly or explicitly identify the cleanest copy during the image reconstruction step, and use directional filtering to demultiplex them (see E. Dubois, "Frequency-domain methods for demosaicking of Bayer-sampled color images", IEEE Signal Processing Letters, 12(12):847-850, 2005).
  • Each color of the CFA, c_i(n), i ∈ {r, g, b}, is the superposition of these carriers scaled by an appropriate real amplitude.
  • Each x^(k)(n), 0 ≤ k < m, can be viewed as a color component.
  • x^(0)(n) can be viewed as the luminance signal and x^(k)(n), k ≥ 1, as the chrominance signals.
  • A can be interpreted as the color transform matrix.
  • A⁺, the generalized inverse of A, can be interpreted as the inverse color transform.
  • FIG. 4 shows the Bayer CFA 410.
  • FIG. 5 illustrates how color information with its circular support is packed into the sensor's rectangular support. This can be most easily understood in terms of an alternative color space:
  • the central circle represents Luminance (L).
  • the four quarter circles at the vertices make up Chrominancel (CI).
  • the two semi-circles at the left and right edges make up the first copy of Chrominance2 (C2a).
  • the two semi-circles at the top and bottom edges make up the second copy of Chrominance2 (C2b).
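  • The carrier structure described above can be verified numerically: the DFT of each Bayer color mask contains only a handful of non-zero carriers. A small numpy sketch (the G R / B G mask orientation is an assumption; only the carrier counts matter):

```python
import numpy as np

N = 8
y, x = np.mgrid[0:N, 0:N]
# Bayer mosaic masks: each mask selects the photosites of one color.
red   = ((y % 2 == 0) & (x % 2 == 1)).astype(float)
green = ((y + x) % 2 == 0).astype(float)

# The DFT of each mask is a sparse set of carriers modulating that color.
carriers_red   = int(np.sum(np.abs(np.fft.fft2(red))   > 1e-9))
carriers_green = int(np.sum(np.abs(np.fft.fft2(green)) > 1e-9))
print(carriers_red, carriers_green)  # 4 2
```

The green mask's two carriers sit at the spectrum center and the corner, while the red mask adds carriers at the horizontal and vertical edge frequencies, consistent with the L, C1, C2a and C2b regions described above.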
  • the present invention overcomes problems and limitations of prior imaging methods and systems by providing novel methods and systems for, among other things, sampling an image to obtain image data and processing image data.
  • One such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transform is applied to the sample set of data to produce a set of inferred original image data, wherein the generalized inverse does not use variational minimization and does not assume constant color ratios.
  • Another such method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially local in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transform that respects predetermined spectral constraints is applied to the sample set of data to produce a set of inferred original image data.
  • Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at sensor elements of the image sensor. Detected intensity values are read out from only a subset of sensor elements to increase speed, the subset of sensor elements being randomly chosen whereby aliasing is rendered similar to noise. The input image is inferred from the detected image intensity values.
  • a further such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the color filter array is random, whereby aliasing is rendered similar to noise.
  • the sensor elements are grouped into two or more subsets whose respective characteristics differ in at least one of the following respects: sensitivities, sensor element quantum efficiencies and electronic signal integration times.
  • the input image is inferred from the detected image intensity values.
  • Yet another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the sensor elements are arranged substantially in a jittered pattern.
  • the input image is inferred from the detected image intensity values, while at least some higher frequency components of the detected image intensity values are set to zero prior to image inference.
  • Another such method comprises creating an optical color filter array, providing an image sensor comprising a plurality of photosensitive sensor elements, projecting an image through the color filter array, and detecting image intensity values transmitted through the color filter array after a single exposure at each sensor element of the image sensor.
  • the sensor elements are arranged substantially in a jittered pattern.
  • the input image is inferred from the detected image intensity values, where sparsity promotion is used to reconstruct a higher resolution image.
  • A method for sampling an image comprises projecting an image onto an image sensor comprising a plurality of photosensitive sensor elements arranged in a pattern obtained by jittering the pixel locations a small distance off of a regular lattice in a random way, and detecting image intensity values after a single exposure at each sensor element of the image sensor.
  • the input image is inferred from the detected image intensity values.
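  • A jittered sampling pattern of the kind described above can be sketched as follows (numpy; the lattice pitch and jitter amplitude are illustrative assumptions):

```python
import numpy as np

def jittered_lattice(n, pitch=1.0, jitter=0.25, seed=0):
    # Regular n x n lattice with each sample displaced a small random
    # distance off-grid; jitter < 0.5 keeps every sample nearest its own site.
    rng = np.random.default_rng(seed)
    gy, gx = np.mgrid[0:n, 0:n] * pitch
    dy = rng.uniform(-jitter, jitter, (n, n)) * pitch
    dx = rng.uniform(-jitter, jitter, (n, n)) * pitch
    return np.stack([gy + dy, gx + dx], axis=-1)

pts = jittered_lattice(4)
print(pts.shape)  # (4, 4, 2)
```

Like a random CFA, such jitter randomizes aliasing: residual aliasing energy spreads out and behaves like noise rather than structured artifacts.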
  • Another such method is provided for reducing noise in an image.
  • This method comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra and approximating the Poissonian photon shot noise at each photosite with a Gaussian of the same variance and zero mean.
  • the image intensity at each photosite is used as an approximation for the mean of the Poissonian at the photosite.
  • the Gaussian of the same variance and zero mean is combined with the Gaussian noise from other sources to get a combined Gaussian.
  • the maximum likelihood estimator of the combined Gaussian is computed.
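  • The shot-noise approximation in the preceding steps can be sketched numerically (numpy; the photon count, read-noise level and sample count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean, read_sigma, n = 100.0, 5.0, 10_000

# Simulated readings of one photosite: Poisson shot noise plus additive
# Gaussian sensor noise.
samples = rng.poisson(true_mean, n) + rng.normal(0.0, read_sigma, n)

# Approximate the Poissonian by a zero-mean Gaussian whose variance equals
# the estimated intensity, then combine it with the read-noise Gaussian:
# variances of independent Gaussians add.
shot_var = samples.mean()             # intensity estimate ~ Poisson variance
combined_var = shot_var + read_sigma**2

# With equal variances across samples, the Gaussian ML estimator of the mean
# is the sample average; with per-photosite variances it becomes an
# inverse-variance weighted average.
mle = samples.mean()
print(combined_var)
```

Here the combined variance comes out near 125 (≈ 100 from shot noise plus 25 from read noise), and the ML estimate lands close to the true intensity.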
  • A further such method is provided for reducing noise in an image.
  • This method comprises capturing image data representative of intensities of the image in the spatial domain, transforming the image data into transformed data in another domain, and applying the Expectation Maximization algorithm to the inverse transformation of the data to compute its maximum likelihood estimator.
  • An additional such method for computing a sparse representation of a signal in a basis. This method comprises inserting additional vectors from the basis into the sparse representation in multiple iterations and using a statistical estimator such as a maximum likelihood estimator to decide which additional vectors to include in each iteration.
  • Yet another such method is provided for processing an image. It comprises receiving a sample set of data generated by transforming and sampling an optical property of an original color image in a spatial basis, wherein the transformation effected is substantially diagonal in the spatial basis and has partially overlapping spectra.
  • a generalized inverse of the transformation is applied to the sample set of data to produce a set of inferred original image data, wherein the sampling is done at spatial locations which are arranged substantially in a jittered pattern.
  • FIG. 1 is a flowchart showing an exemplary method for sampling a color image with a CFA in accordance with an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an exemplary color imaging system in accordance with an embodiment of the present invention.
  • FIG. 3 shows a monochrome image and its spectral image.
  • FIG. 4 shows the popular Bayer Color Filter Array.
  • FIG. 5 illustrates the amplitude modulation of an image effected by the Bayer CFA.
  • FIG. 6 is a diagram of an exemplary randomized color filter array in accordance with an embodiment of the present invention.
  • FIG. 7 illustrates the amplitude modulation of an image effected by the CFA shown in FIG. 6 in the Fourier domain.
  • FIG. 8 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction by matrix inversion.
  • FIG. 9 shows the results of a simulation of a simple exemplary embodiment of the present invention in Matlab.
  • FIG. 10 is a code listing of a Matlab simulation of an exemplary embodiment of the invention showing image reconstruction with sparsity promotion.
  • the present invention works within the broad framework of Color Filter Array based color imaging systems.
  • FIG. 1 is a flowchart showing an exemplary method of color imaging, in accordance with an embodiment of the present invention.
  • a Color Filter Array is created.
  • the incident image is filtered through this CFA.
  • the filtered image is detected by an image sensor exposed to it.
  • the image is reconstructed from the image sensor output and the CFA pattern.
  • FIG. 2 is a schematic diagram of an imaging system, in accordance with an embodiment of the present invention.
  • Image 210 is focused by lens 220 onto Color Filter Array 230.
  • the filtered image is detected by image sensor 240.
  • the resulting plurality of sensed filtered image intensity values is sent to processor 250 where image reconstruction is performed.
  • In equations 10 and 11, D_i(Ω), i ∈ {r, g, b}, is a row vector obtained by appropriately rearranging the elements of C_i, i ∈ {r, g, b}, so as to effect the convolution of equation 9.
  • rank(A) ≤ min(|X|, |Y|), and |X| is k times |Y| in the case of k basic colors, so the system of equations is underdetermined.
  • Hence X can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases.
  • Accordingly, equation 10 has to be augmented with additional constraint equations.
  • Let x be a column vector formed by the concatenation of x_i, i ∈ {r, g, b}, and
  • let B be a matrix similarly formed by the concatenation of B_i, i ∈ {r, g, b}.
  • As before, the number of unknowns |x| is k times the number of measurements |y| in the case of k basic colors. Hence x can only be recovered if an image model exists that provides additional a priori information, such as restricted signal bandwidth, or sparse representation in some bases. Thus, equation 14 has to be augmented with B′x = β (15), where β is a vector typically composed of constants, usually zeros. Combining equations 14 and 15 we obtain equation 16.
  • From equation 16 we find that a basic color value at a pixel location can be computed as a weighted sum of elements of y′. Since the only image dependent values of y′ are in its sub-vector y, this reduces to a space variant filter plus an optional constant. This constant is 0 if β is a vector of zeros.
  • The space variant filter obtained from equation 16 has a large kernel, potentially the size of the entire image.
  • Another practical consideration is the space required to store the space variant filter kernels. This can be addressed by using a periodic CFA formed by tiling the sensor with a pattern block, so that the number of filter kernels is reduced to the block size. Such tiling sufficiently preserves the random character of a random CFA as long as the block size is not too small. Rectangular blocks suffice for most applications but other, more complicated, shapes may also be employed.
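  • The tiling idea can be sketched as follows (numpy; block and sensor sizes are illustrative assumptions, with basic colors coded 0 = R, 1 = G, 2 = B):

```python
import numpy as np

def tiled_random_cfa(block, sensor, seed=0):
    # One random block of basic colors tiled periodically across the sensor;
    # filter kernels then need to be stored only per block position, not per
    # photosite.
    rng = np.random.default_rng(seed)
    b = rng.integers(0, 3, block)
    reps = (-(-sensor[0] // block[0]), -(-sensor[1] // block[1]))
    return np.tile(b, reps)[:sensor[0], :sensor[1]]

cfa = tiled_random_cfa((8, 8), (32, 32))
print(np.array_equal(cfa[:8, :8], cfa[8:16, 8:16]))  # True: pattern repeats per block
```

Within a block the arrangement stays random, which is what preserves the favorable rank and aliasing properties discussed below as long as the block is not too small.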
  • Space variant filter kernels can be pre-computed and used for image reconstruction only if equation 12 or 15 can be set up independently of the image. This is the case for section “Image Model with Restricted Bandwidth” but not for sections “Image Model with Transform-coefficient Sparsity” and “Other Regularization Techniques for Image Reconstruction” . Hybrid approaches that combine space variant filters with adaptive algorithms are described in section "Image Model with Adaptive Color Space and Bandwidth.”
  • the rank of matrix A is determined by the choice of CFA.
  • A carefully tailored CFA with a small repeating pattern can have a rank close to or equal to |Y|.
  • Such CFAs have a large number of filter colors, complicating their manufacture. Filters that allow the transmission of more than one basic color are known as panchromatic filters. Panchromatic filters vary in the amount of transmission of each basic color and hence come in a large number of colors.
  • CFAs comprised of a random arrangement of basic color filters such as Red, Green and Blue (known collectively as "RGB") have rank close to |Y|. Furthermore, some random arrangements of the basic colors can have a rank equal to |Y|. Intuitively, this is due to the fact that the DFT of a random pattern is overwhelmingly likely to contain |Y| carrier frequencies of non-zero amplitude. Furthermore, practically all of these amplitudes are unique. Such CFAs are easier to manufacture than panchromatic ones. For more on eigenvalues of random matrices, see E.J. Candes and T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?", IEEE Transactions on Information Theory, 2006, and references cited therein.
  • Randomized panchromatic CFAs can also be built wherein colors are chosen randomly. Such CFAs also have rank concentrated close to |Y|. However, these are harder to manufacture than random basic-color filter arrays.
  • FIG. 6 shows an exemplary Color Filter Array 610, in accordance with an embodiment of the present invention.
  • Red, Green and Blue filters in approximately equal numbers are distributed in a random pattern.
  • FIG. 7 illustrates the amplitude modulation of an image effected by the CFA shown in FIG. 6 in the Fourier domain.
  • Each circle represents the spectrum of a basic color signal modulated with one of the multitude of carriers. For better visibility, only a few of the carriers are shown.
  • A transform is termed a λ-Capture transform if it has the following property when applied to a band-limited color image signal with color space dimensionality k: if the signal has circular Fourier domain support with as many independent Fourier coefficients in each color as the number of Fourier coefficients in the transformed signal, then λ% of the latter are independent linear combinations of the former.
  • The parameter λ of a λ-Capture transform controls the rank of the inverse transform and is a measure of the amount of color information captured in the transformed signal.
  • the color signal bandwidth restriction can also be applied in the Wavelet, Curvelet, Karhunen-Loeve and other bases where it results in many low value coefficients that can be approximated to zero.
  • Chrominance basis vectors, i.e., those orthogonal to the luminance basis vector defined above, have similarly low high frequency content.
  • Color spaces can be adaptively chosen for different image neighborhoods so as to minimize high frequency content of chrominance signals. For example, this can be done by pre-multiplying each basic color by a factor inversely proportional to its strength in the neighborhood, reconstructing the thus normalized image followed by de-normalization to restore the original colors.
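  • The normalization step can be sketched as follows (numpy; using the neighborhood mean as the measure of per-channel "strength" is an illustrative assumption):

```python
import numpy as np

def normalize_colors(rgb, eps=1e-6):
    # Scale each basic color by the inverse of its local strength so that the
    # normalized neighborhood has balanced channels; reconstruction runs on
    # the normalized image, and the scaling is inverted afterwards.
    strength = rgb.mean(axis=(0, 1)) + eps
    return rgb / strength, strength

rng = np.random.default_rng(3)
patch = rng.uniform(0.1, 1.0, (8, 8, 3))   # one image neighborhood
norm, s = normalize_colors(patch)
print(np.allclose(norm * s, patch))        # de-normalization restores colors: True
```

Balancing the channels this way tends to reduce the high frequency content of the chrominance signals within the neighborhood, which is the stated goal of the adaptive color space choice.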
  • Signal bandwidth can be adaptively set for each image neighborhood.
  • Signal bandwidth along the direction of the edge is lower than in the direction perpendicular to it.
  • Edge sensing algorithms can be used to take advantage of this fact, for example, by setting up equation 12 or 15 with a roughly oval spectrum whose major axis lies in the direction of the edge. This allows for the reconstruction of higher resolution images and makes the system of equations more overdetermined and thus robust to noise.
  • Image reconstruction then consists of selecting the most suitable color space and spectral shape for each neighborhood, and applying the corresponding space variant filter.
  • Spectral sparsity of natural images may be used to reduce the number of degrees of freedom.
  • Most of the large amplitude transform coefficients (as used herein the term "transform coefficient" is to be construed as including "Fourier transform coefficient", "Wavelet transform coefficient", "Curvelet transform coefficient", "Karhunen-Loeve transform coefficient" and coefficients of other vector bases spanning the space of image signals) of color components are solved for, and many of the low amplitude coefficients are neglected. If many of the low amplitude transform coefficients are first identified and set to zero, the resulting simplified inverse problem can be solved using any of the standard linear inverse problem techniques.
  • a random CFA is well suited to our objective of extracting the large valued Transform coefficients as it provides a random set of projections of the image. See E.J. Candes, T. Tao, "Near Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?" , IEEE Transactions on Information Theory, 2006. Random CFAs will be assumed for the rest of this section.
  • the low value transform coefficients can be identified by first reconstructing an appropriately bandwidth restricted image. This works because the neglected high frequencies have low magnitudes compared to the low frequencies (see D.L. Ruderman and W. Bialek, "Statistics of natural images: Scaling in the woods” , Physics Review Letters 73 (1994), no. 6, 814-817). Furthermore, the resulting error is randomized and the resulting dense spectral support of the error results in small value transform coefficients remaining small. Once the low value transform coefficients are identified and removed, higher frequency coefficients are reintroduced and the resulting linear problem re-solved. This process is repeated until all "large" transform coefficients are identified and solved for.
  • L1 minimization, also known as Basis Pursuit, is a standard Compressive Sensing technique to identify many of the high amplitude transform coefficients, and thereby the low amplitude transform coefficients. See Chen, Donoho, and Saunders, "Atomic decomposition by basis pursuit", SIAM J. Scientific Computing, vol. 20, pp. 33-61, 1999.
  • Matching Pursuit is a faster but suboptimal greedy algorithm for identifying the large valued transform coefficients. See Mallat and Zhang, "Matching Pursuits with Time- Frequency Dictionaries” , IEEE Transactions on Signal Processing, December 1993, pp. 3397- 3415. Variants include, but are not limited to, Orthogonal Matching Pursuit, Regularized Orthogonal Matching Pursuit and Simultaneous Orthogonal Matching Pursuit.
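  • A minimal sketch of Orthogonal Matching Pursuit follows (numpy; the orthonormal dictionary is an illustrative assumption chosen so greedy selection provably succeeds, not the compressive-sensing setting of a random CFA):

```python
import numpy as np

def omp(A, y, n_coeffs):
    # Greedy Matching Pursuit variant: pick the column most correlated with
    # the residual, then re-fit all selected coefficients jointly by least
    # squares before computing the next residual.
    residual, support = y.astype(float), []
    for _ in range(n_coeffs):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))  # orthonormal dictionary
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 1.5]            # 3-sparse signal
x_hat = omp(Q, Q @ x_true, 3)
print(np.allclose(x_hat, x_true))  # True
```

The joint re-fit of all selected coefficients at each iteration is what distinguishes Orthogonal Matching Pursuit from plain Matching Pursuit.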
  • the band limitedness of the image can be imposed on each iteration of the solution by projecting the iterate onto the corresponding convex set.
  • Transform sparsity image model can be used in conjunction with limited bandwidth image model to reconstruct the input image. This allows larger bandwidth images to be reconstructed than is possible with only the limited bandwidth model or, for that matter, with state of the art image reconstruction algorithms. This, in turn, allows for larger bandwidth OLPFs to be used or for OLPFs to be omitted altogether.
  • Noise has two primary components: a) additive Gaussian noise generated at the sensor due to thermal and electrical effects and b) Poissonian photon-shot noise which we approximate by a Gaussian of the same variance and zero mean.
  • Naive CFA designs when combined with simplistic matrix inversion for image reconstruction can often lead to amplification of this noise in the reconstructed image.
  • y_N(Ω) = S(Ω)·x + N(Ω)    (19)
  • where y_N(Ω) is the noisy signal output by the sensor for frequency Ω,
  • N(Ω) is a random variable representing the noise, and
  • S(Ω) is the corresponding row of the matrix S.
  • In the presence of noise, equation 13 can be rewritten as x_N(Ω) = S⁻¹(Ω)·y_N(Ω), where x_N(Ω) is the noisy reconstruction of x(Ω).
  • This scheme becomes susceptible to noise amplification if the choice of CFA makes some elements of S⁻¹(Ω) large in magnitude or negative. Multiplication of signals by large values amplifies noise, while subtraction of strong signals to obtain a weak residual signal decreases SNR, as noise energy gets added even as the signal gets subtracted.
  • Noise amplification can be reduced by employing a slightly larger photosite count that results in a more overdetermined system of linear equations. Many reconstruction algorithms perform better on such over-determined systems of linear equations.
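  • The benefit of an overdetermined system can be illustrated numerically (numpy; the matrix sizes, noise level and trial count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
x_true = rng.normal(size=20)
sigma = 0.5  # sensor noise level

def recon_error(n_photosites):
    # Random measurement matrix standing in for the CFA sampling operator.
    A = rng.normal(size=(n_photosites, 20))
    y = A @ x_true + rng.normal(0.0, sigma, n_photosites)
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(x_hat - x_true)

# Averaged over trials, three times as many equations as unknowns reconstructs
# far more accurately than a square (exactly determined) system, whose inverse
# can amplify the noise badly.
err_square = np.mean([recon_error(20) for _ in range(50)])
err_over   = np.mean([recon_error(60) for _ in range(50)])
print(err_over < err_square)  # True
```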
  • Low amplitude carriers in the CFA result in low amplitude sidebands. These low amplitude sidebands are amplified by the inverse color transform of the reconstruction step, which amplifies noise as well. CFAs should be designed so that the sum of all carrier energies is high.
  • Enforcing spectral sparsity is an effective technique for suppressing noise. This is done using the techniques of section "Image Model with Transform-coefficient Sparsity". It makes the linear system of equations overdetermined, yielding a substantial improvement in Signal to Noise Ratio.
  • Statistical estimation techniques form another effective class of noise suppression techniques.
  • Statistical Estimation is the term used to describe techniques for estimating the parameters of a probabilistic distribution from a set of samples from that distribution. In the case of an over-determined system of equations, statistical estimation can be used to approximate the means of distributions that generated the sensor readings. Least squares regression is one of the simplest forms of statistical estimation.
  • Various other estimators may also be computed on over-determined systems such as the Maximum Likelihood Estimator (MLE), Maximum A Posteriori Probability (MAP) estimator, Best Linear Unbiased Estimator or a Minimum Variance Linear Estimator.
  • A Matlab simulation was performed wherein a CFA with equal numbers of Red, Green and Blue filters arranged in a random pattern was generated. The original color image was filtered through this CFA. Various amounts of white noise were added, and reconstruction was performed on the resulting image sensor output by inverting the linear transformation effected by the CFA in the Fourier domain.
  • Matlab is a product of The MathWorks, Inc., Natick, Massachusetts, U.S.A.
  • FIG. 8 shows a snippet of Matlab code used in the simulation. While we do not provide a complete code listing of the simulation, this snippet should be sufficient for anyone of ordinary skill in the art to reproduce our results.
  • the matrix inversion step of the reconstruction is performed by the Moore-Penrose pseudoinverse algorithm.
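  • The reconstruction-by-pseudoinverse simulation can be sketched in outline as follows (a numpy analogue of the Matlab code of FIG. 8, not the original listing; the sensor size, bandwidth limit and cosine basis are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
N, K = 16, 4   # N x N sensor; each color limited to K x K cosine coefficients

# Low-frequency 2D cosine (DCT-II-like) atoms spanning one band-limited
# color plane.
n = np.arange(N)
C = np.array([np.cos(np.pi * (n + 0.5) * k / N) for k in range(K)])   # K x N
basis = np.einsum('ip,jq->ijpq', C, C).reshape(K * K, N * N)

# Random RGB CFA: each photosite transmits exactly one basic color.
cfa = rng.integers(0, 3, N * N)

# Linear system: each sensor reading equals the selected color plane's value
# at that photosite, expressed in the cosine coefficients.
A = np.zeros((N * N, 3 * K * K))
for c in range(3):
    rows = np.where(cfa == c)[0]
    A[rows, c * K * K:(c + 1) * K * K] = basis[:, rows].T

# Synthesize a band-limited color image, sample it through the CFA, and
# reconstruct by Moore-Penrose pseudoinverse.
coef_true = rng.normal(size=3 * K * K)
y = A @ coef_true
coef_hat = np.linalg.pinv(A) @ y
print(np.allclose(coef_hat, coef_true, atol=1e-6))
```

With no noise added and the bandwidth restriction making the system overdetermined (256 readings, 48 unknowns), reconstruction is practically exact, mirroring the noiseless result reported for FIG. 9.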
  • FIG. 9 shows the results of this simulation.
  • 910 is the original color image.
  • 920 is the image after being filtered by an exemplary randomized RGB filter.
  • 930 is the reconstructed color image. No noise was added, leading to practically perfect reconstruction (PSNR > 300 dB).
  • Low intensity signals may be drowned by noise and high intensity signals may cause sensor saturation.
  • the Dynamic Range of an imaging system is a measure of the range of intensities it can capture. Sensitivity, on the other hand, measures the system's responsiveness to low incident light. High dynamic range and increased low-light sensitivity are valuable characteristics.
  • Photosite sensitivities can be controlled by any suitable method including filter element transmittance, sensor element efficiency or electronic signal integration times. As before, we allow partial spectral overlaps to increase packing efficiency and use a regular pattern of panchromatic filters or a random arrangement of high sensitivity and regular sensitivity photosites of various colors.
  • We can deal with photosites lost to saturation or under-exposure by reducing the number of transform coefficients solved for. If a random CFA is used this can be done gracefully, as described in section "Random CFAs and graceful Error Handling".
  • The CFA mentioned above also serves to improve image capture at high ISO settings or of low dynamic range scenes. In these scenarios sensor element saturation and underexposure are not a problem, and reduced luminance noise is the benefit.
  • Wavelength-dependent refractive properties of camera optics can lead to geometric misalignment of the image in different colors. This is called Chromatic Aberration.
  • The framework of the current invention allows the incorporation of an elegant means of computationally correcting for this optical problem during image reconstruction.
  • The framework of the present invention allows joint image reconstruction and chromatic aberration correction. This is particularly effective when color transforms that take advantage of correlations between basic colors such as red, green and blue are used, as is done in section "Image Model with Restricted Bandwidth".
  • This is achieved by expressing the discrete samples of the undistorted image in terms of those of the observed distorted image in equation 14: the continuous signal is first expressed in terms of the discretized samples, the continuous-domain geometric distortion of chromatic aberration is then applied to it, and the result is re-discretized. This expresses the discrete components of the undistorted image as image-independent linear combinations of those of the observed distorted image.
  • Chromatic aberration can be described as a combination of warping in the image plane and defocusing. We derive an explicit expression below for the discrete domain linear transformation corresponding to the former. While there is no exact way of correcting for the defocusing problem, some correction may be achieved by a space varying sharpening filter such as described in Sing Bing Kang, "Automatic Removal of Chromatic Aberration from a Single Image” , 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2007) , 18-23 June 2007.
  • The discrete domain image is obtained from the continuous domain through sampling with rectangular photosites. This sampling may be described as a convolution with a 2D rectangular function followed by 2D Dirac comb sampling:
  • X_i = (x^c_i ∗ H_Box) · H_Sample   (24), where x^c_i is the continuous domain image in color i ∈ {r, g, b} just before it gets filtered by the CFA and X_i is the same image discretized by the sensor.
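The two-step structure of equation 24 — convolution with a rectangular (box) aperture followed by Dirac-comb sampling — can be illustrated in one dimension with NumPy. The fine grid standing in for the continuous domain, and the sinusoidal test signal, are assumptions of this sketch:

```python
import numpy as np

fine = 8                                   # sub-samples per photosite pitch
x_cont = np.sin(np.linspace(0, 4 * np.pi, 32 * fine))   # stand-in "continuous" signal

# Convolution with the rectangular photosite aperture (H_Box)...
box = np.ones(fine) / fine
averaged = np.convolve(x_cont, box, mode='valid')

# ...followed by Dirac-comb sampling at the photosite pitch (H_Sample).
x_disc = averaged[::fine]
print(x_disc.shape)                        # one sample per photosite
```

Each discrete sample is exactly the mean of the fine-grid values under its photosite, mirroring the box-then-comb factorization.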
  • x^c_i may be obtained from the ideal chromatic aberration free image x^{ca}_i through the warping functions u_{1i} and u_{2i}. u_{1i} and u_{2i} are functions of u_1 and u_2 and are determined offline using existing methods for characterizing Chromatic Aberration.
  • x^c_i(u_1, u_2) = x^{ca}_i(u_{1i}, u_{2i})   (27)
  • Equation 24 for x^c then becomes X_i = (x^{ca}_i(u_{1i}, u_{2i}) ∗ H_Box) · H_Sample   (28), which may be formally inverted, where H_LPF is a low pass filter that recovers the continuous signal.
  • Equation 24, when combined with equations 27 and 29, becomes CA(m, n) = ∫ du_1 du_2 ∫ dv_1 dv_2 H_LPF(v − n) H_Box^{-1}(u(u) − v) H_Box(m − u)   (34). m and n may be rasterized to make them 1-dimensional, making this expressible in matrix notation.
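Once CA(m, n) has been discretized, chromatic aberration correction reduces to inverting a linear operator. The sketch below builds a 1D warp operator with linear interpolation standing in for the H_LPF/H_Box chain (a deliberate simplification) and undoes a mild magnification-type warp by solving the resulting linear system:

```python
import numpy as np

def warp_matrix(n, u):
    """Discrete operator resampling a length-n signal at warped positions u(m).
    Linear interpolation stands in for the low-pass reconstruction of eq. 34."""
    W = np.zeros((n, n))
    for m in range(n):
        pos = float(np.clip(u(m), 0, n - 1))
        lo = int(np.floor(pos))
        hi = min(lo + 1, n - 1)
        w = pos - lo
        W[m, lo] += 1 - w                  # weight on the left neighbor
        W[m, hi] += w                      # weight on the right neighbor
    return W

n = 16
x = np.sin(2 * np.pi * np.arange(n) / n)   # stand-in single-color image row
W = warp_matrix(n, lambda m: 1.02 * m)     # mild magnification-type aberration
x_warped = W @ x                           # what the sensor observes

# Correction: invert the (here well-conditioned) warp operator.
x_rec = np.linalg.solve(W, x_warped)
print(np.allclose(x_rec, x))
```

In the patent's setting this inversion is not done in isolation but folded into the joint reconstruction system alongside the CFA operator.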
  • Random CFAs of basic colors are especially suited for Chromatic Aberration correction since they avoid large (electromagnetic) bandwidth panchromatic filters, resulting in sharper images in each basic color, and can gracefully handle loss of information in accordance with the section "Random CFAs and graceful Error Handling."
  • Aliasing of color components can be handled by arranging photosite locations in a "jittered pattern".
  • We use this term to refer to any irregular arrangement, including one where photosites on a regular lattice are perturbed a small distance. This perturbation is random and smaller than the distance between any two neighboring lattice points.
  • the problem of adjacent photosites overlapping can be avoided by changing their size and geometry.
  • An example pattern, shown in FIG. 11, makes these perturbations along the x and y directions identical along the same column and row respectively.
  • A jittered sensor requires changes to equation 14 to account for the non-regular lattice locations and for the non-regular photosite sizes, which make the box filtering vary from photosite to photosite. This is a straightforward modification of sampling theory along the lines of section "Chromatic Aberration Correction." This modified sampling equation allows the capture of image frequencies beyond the Nyquist limit of a regular sensor of the same size and number of photosites. Informally, a jittered sensor lattice is roughly equivalent to a sensor with a higher Nyquist frequency but with a random set of photosites removed.
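That jittered samples of a band-limited signal remain invertible can be checked directly: build the nonuniform Fourier matrix for the perturbed sample positions and solve the resulting linear system. A 1D NumPy sketch (the complex-valued signal and the jitter bound of 0.4 of a pitch are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n, kmax = 32, 10                  # photosites; highest retained frequency

# Jittered lattice: regular positions perturbed by less than half the pitch,
# so all sample positions stay distinct.
t = (np.arange(n) + rng.uniform(-0.4, 0.4, n)) / n

# Band-limited ground truth with 2*kmax+1 Fourier coefficients (21 < 32 samples).
k = np.arange(-kmax, kmax + 1)
c_true = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
E = np.exp(2j * np.pi * t[:, None] * k[None, :])    # nonuniform Fourier matrix
y = E @ c_true                    # the jittered samples

# Distinct nodes make E full column rank, so least squares recovers the signal.
c_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print(bool(np.max(np.abs(c_hat - c_true)) < 1e-8))
```

The same construction with rows deleted models the "higher Nyquist frequency with random photosites removed" picture in the paragraph above.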
  • Jittering may be used not just in color image sensors but in monochrome ones as well with the same corresponding benefits.
  • Aliasing is a problem in such systems because of the reduced sensor Nyquist rate. If a random set of photosites is read out instead of line skipping, and random sets of photosites are combined together instead of photosite binning, the image reconstruction problem becomes identical to the jittered sensor lattice case. Image reconstruction techniques described in section "Sensor Lattice Jittering for Increased Bandwidth and graceful Aliasing" can be used to ameliorate the aliasing problem.
  • CFA-based color imaging adds to imaging in these other fields: firstly, the multiplexing of images in two or more colors onto one sensor image; secondly, the linear relations corresponding to filtering by a CFA; thirdly, correlations between the image in different colors; and fourthly, linear relations corresponding to the approximately circular band limitedness of the optical signal.
  • The Tikhonov regularization framework casts the signal reconstruction problem as an optimization problem and admits a variety of penalty functionals as its objective.
  • The standard formulation uses an energy-minimizing term, which also leads to an approximate solution to equation 36.
  • Total Variation minimization also corresponds to a particular penalty functional within the Tikhonov regularization framework.
  • Truncated Singular Value Decomposition is another effective regularization scheme which may be used for color image reconstruction. These regularization techniques are applied to each of the multiplexed images in each basic color. For details see M. Bertero, P. Boccacci, "Introduction to Inverse Problems in Imaging", Taylor & Francis, ISBN 0750304359, hereby incorporated by reference in its entirety.
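Truncated SVD regularization keeps only the singular directions that are well determined by the data. A small self-contained sketch — the synthetic near-rank-deficient system below is an assumption for illustration, not the patent's actual CFA matrix:

```python
import numpy as np

def tsvd_solve(A, y, k):
    """Regularized solve: keep only the k largest singular values of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))
A[:, -1] = A[:, -2] + 1e-8 * rng.standard_normal(40)   # one nearly dependent column
x = rng.standard_normal(20)
y = A @ x + 0.01 * rng.standard_normal(40)             # noisy observations

# Dropping the tiny singular value keeps the noise from being amplified
# by roughly 1/s_min; the truncated solution stays small and fits the data.
x_tsvd = tsvd_solve(A, y, k=19)
print(np.linalg.norm(A @ x_tsvd - y), np.linalg.norm(x_tsvd))
```

The truncation level k plays the same role as the penalty weight in Tikhonov regularization: it trades fidelity for stability.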
  • The Bayesian framework may be used for regularization if a suitable prior is used. See Y. Hel-Or, "The canonical correlations of color images and their use for demosaicing", which computes a Maximum A Posteriori (MAP) estimate. We propose that this scheme may be improved by applying it to the full linear system, including not just the linear transform of the CFA, as they have done, but the linear constraints corresponding to the image's frequency band limitedness as well.
  • Conjugate Gradient and Steepest Descent are two iterative techniques that may be used on Bayesian or other non-linear Regularization schemes.
  • Band limitedness and spectral sparsity constraints may be combined with any of the above techniques through the method of Projection Onto Convex Sets. In this scheme, at each iteration, the iterate is projected onto the set of band limited and spectrally sparse solutions. Solution positivity may also be imposed using a similar scheme.
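Projection Onto Convex Sets alternates between enforcing the observed data and projecting onto the band-limited set. A 1D NumPy sketch — the 60% random sampling rate and the iteration count are assumptions of the sketch; since the true signal lies in both sets, Fejér monotonicity guarantees the error cannot increase:

```python
import numpy as np

rng = np.random.default_rng(4)
n, kmax = 64, 8

# Real band-limited ground truth: spectrum confined to |frequency| <= kmax.
idx = np.r_[0:kmax + 1, n - kmax:n]          # retained, conjugate-symmetric bins
spec = np.zeros(n, complex)
spec[idx] = rng.standard_normal(idx.size) + 1j * rng.standard_normal(idx.size)
x_true = np.fft.ifft(spec).real              # real part keeps it band-limited

known = rng.random(n) < 0.6                  # photosites whose values survived
y = x_true[known]
mask = np.isin(np.arange(n), idx)

x = np.zeros(n)
err0 = np.linalg.norm(x - x_true)
for _ in range(1000):
    x[known] = y                             # project onto the data-consistent set
    X = np.fft.fft(x)
    X[~mask] = 0                             # project onto the band-limited set
    x = np.fft.ifft(X).real
print(np.linalg.norm(x - x_true) / err0)     # relative error after POCS
```

A positivity projection (clipping negatives to zero) can be appended to each iteration in exactly the same way.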
  • FIG. 10 shows a listing of Matlab code used in a simulation based on the sparsity promotion approach to image reconstruction. This listing should be sufficient for anyone of ordinary skill in the art with a familiarity with the GPSR solver (http://www.lx.it.pt/~mtf/GPSR/) to reproduce our results. With no noise added, a PSNR figure of 42 dB was obtained. With enough noise added that the input image would have a PSNR of 39 dB, the reconstructed image had a PSNR of 34 dB.
  • The scalability of the present image reconstruction scheme depends on the specific Inverse Problem solution technique used.
  • For linear solution techniques such as matrix inversion or MLE calculation, the result is a space-variant FIR filter.
  • The size of such filters can be reduced by windowing and associated techniques.
  • Scalability can also be achieved by the standard technique of blocking, wherein a pixel is reconstructed by considering only a small block of photosites around it. This works well because parts of images that are far away from each other are largely independent, and hence have little influence on each other.
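Blocking can be sketched as a tiling loop in which each block is reconstructed independently. The doubling operator below is a stand-in for the per-block inverse — a deliberate simplification; real use would substitute the block's reconstruction solve, and practical implementations overlap blocks to hide seams:

```python
import numpy as np

def blockwise(img, block, solve_tile):
    """Apply a per-tile reconstruction operator tile by tile (no overlap, for brevity)."""
    out = np.empty_like(img)
    H, W = img.shape
    for r in range(0, H, block):
        for c in range(0, W, block):
            out[r:r + block, c:c + block] = solve_tile(img[r:r + block, c:c + block])
    return out

x = np.arange(64.0).reshape(8, 8)
y = blockwise(x, 4, lambda tile: 2 * tile)   # stand-in per-block linear operator
print(np.allclose(y, 2 * x))
```

The per-block system matrices repeat across tiles, so their inverses can be precomputed once and reused, which is exactly what the tiled Matlab listing below does with its saved `inverses` array.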
  • The present invention may be used not just for still imaging but for video as well. Besides a trivial extension to multiple frames, algorithms that perform joint reconstruction of multiple frames leveraging their correlation may also be used.
  • The present invention may also be used in other situations where multi-spectral image sensor systems are limited by geometric constraints.
  • The present invention allows multi-spectral sampling to be folded into smaller sensors requiring smaller apertures without increased acquisition times.
  • The present invention may be used in conjunction with non-rectangular photosites and lattice arrangements, including but not limited to hexagonal and octagonal schemes.
  • [0137] The present invention may be used in image scanners.
  • The present invention may be used in acquiring multi-spectral images in different numbers of dimensions, including 1D and 3D.
  • The first system of equations is generated once and saved to file.
  • Each distinct overlapping tile needs to be computed and saved.
  • r_chroma = r_luma/2;
  • r_filt = r_luma+0.0; % Determines the number of coeffs
  • noise = sigma2*randn([size(X,1) size(X,2)]);
  • numRows = floor((size(X,1)-overlap)/coreSize);
  • numCols = floor((size(X,2)-overlap)/coreSize);
  • stitchedImage_lpf = zeros(numRows+coreSize, numCols+coreSize, c); % Compute LPF mask
  • lpfMask = GenerateEllipticalMask(M, r_filt*M, 1, 0);
  • numCoeffs = sum(sum(lpfMask))*3;
  • [goodPixels, numGoodPixels] = GenerateGoodPixels(usePrecompGoodPixels, M, N);
  • imageStack = zeros(numDirs, numRows+coreSize, numCols+coreSize, c);
  • numNZVars = numVars-size(B,1);
  • numEqns = size(B,1)+numGoodPixels;
  • filename = sprintf('-%icoeffs-%ivars-%iNZs-%ieqns-%iof%i', numCoeffs, ... numVars, numNZVars, numEqns, coreSize, tileSize);
  • filename = strcat(S(1:end-4), filename, suffix, '.png');
  • inverseFileName = sprintf('B-C-inverses-%iof%i_Chroma_Dir_%i.mat', ... coreSize, tileSize, uint8(r_luma*100), uint8(r_chroma*100), d);
  • stitchedImage = zeros(numRows+coreSize, numCols+coreSize, c);
  • tileNum = tileNum + 1;
  • C_shift = circshift(C, [-(i-1)*coreSize, -(j-1)*coreSize]);
  • tile(:,:,k) = ifft2(ifftshift(fftshift(fft2(tile(:,:,k))).*lpfMask)); end
  • Binv = squeeze(inverses(inverseIndex_i, inverseIndex_j, :, :));
  • soln((k-1)*coreSize+l) = tileSoln((kc-1)*tileSize+lc); inverses(inverseIndex_i, inverseIndex_j, ...
  • numLowRankTiles = numLowRankTiles + 1;
  • reconTile_lpf = zeros(size(tile));
  • reconTile_lpf(:,:,k) = LpfFFT(tile(:,:,k)+sigma2*abs ...
  • stitchedImage_lpf(rowStart:rowStart+coreSize-1, ...
  • overlap/2+1:overlap/2+coreSize, :);
  • imageStack(d,:,:,:) = stitchedImage;
  • minimalSize = DetermineMinimalSize(numRows, numCols, coreSize, r_luma);
  • reducedImage_lpf = reduce(stitchedImage_lpf, minimalSize);
  • [ME, MAE, MSE, PSNR] = Metrics(stitchedImage_lpf, Z, S);
  • Cx = 0, which sets Fourier components of the image in each of luma, c1, c2 outside an ellipse to zero, where x is the set of image pixel values in luma, c1, c2.
  • outsideEllipse = GenerateEllipses(M, majorLengths, aspect, theta);
  • outsideEllipse = 1-outsideEllipse;
  • outsideEllipse(i,:,:) = ifftshift(squeeze(outsideEllipse(i,:,:)));
  • % Binv returns the inverse of B concatenated with the matrix formed by
  • % the list of bad pixels, which also may or may not be given.
  • randPixels = randperm(M*N);
  • goodPixelNums = randPixels(1:numGoodPixels);
  • goodPixels = reshape(goodPixels, M, N);
  • B_CFA = zeros(numGoodPixels, c*M*N);
  • Y_good(offset+goodPixelIndex) = Y_lin(linearPixelIndex);
  • B_CFA(goodPixelIndex, linearPixelIndex) = r3;
  • B_CFA(goodPixelIndex, M*N+linearPixelIndex) = r2;
  • B_CFA(goodPixelIndex, 2*M*N+linearPixelIndex) = r6;
  • B = [B; B_CFA];
  • sys_rank = rank(B);
  • Binv = pinv(B);
  • Binv = Binv(:, end-M*N+1:end);
  • function mask = GenerateEllipses(tileSize, majorLength, aspect, theta)
  • mask(i,:,:) = GenerateEllipticalMask(tileSize, majorLength(i), ...
  • end
  • function A = GenerateEllipticalMask(tileSize, majorLength, aspect, ... theta)
  • rPoint = rotator*point(:);
  • rPoint(2) = rPoint(2)/aspect;
  • Ey = X-squeeze(Y(d,:,:,:));
  • randPixels = randperm(M*N);
  • goodPixelIndices = randPixels(1:numGoodPixels);
  • goodPixels(goodPixelIndices(p)) = 1;
  • goodPixels = reshape(goodPixels, M, N);
  • helOrMatrix = [r3, r2, r6; r3, -r2, r6; r3, 0, -2*r6];
  • Y(i, j, :) = helOrMatrix*P(:);
  • A = fftshift(fft2(X));
  • alpha = size(A,1)/size(A,2);
  • alpha2 = alpha*alpha;
  • centerRow = floor(size(A,1)/2)+1;
  • centerCol = floor(size(A,2)/2)+1;
  • C = repmat(C, [ceil(size(X,1)/size(C,1)) ceil(size(X,2)/size(C,2)) 1]);
  • C = C(1:size(X,1), 1:size(X,2), :);
  • XFFT_orig = zeros(M2, N2, c);
  • X_orig = zeros(size(XFFT_orig));
  • XFFT_orig(:,:,k) = temp(M/2+1-M2/2:M/2+M2/2, N/2+1-N2/2:N/2+N2/2);
  • X_orig(:,:,k) = real(ifft2(ifftshift(XFFT_orig(:,:,k))));
  • X_orig = real(X_orig);
  • reductionFactor = size(X,1)*size(X,2)/(targetSize(1)*targetSize(2));
  • X_orig = X_orig/reductionFactor;
  • X_orig = X_orig*255/max(max(max(X_orig)));
  • Y = min(max(X_orig,0), 255);
  • minimalSize(2) = ceil(numCols*coreSize*r_luma);
  • minimalSize(2) = minimalSize(2)+1;
  • err_amp = E*255/max(max(max(E)));
  • MAE = sum(E)/size(E,1);
  • MSE = sum(E)/size(E,1);
  • PSNR = 10*log10(255^2/MSE);
  • fftErr = zeros(size(A));
  • fftErr = fftshift(log(1+abs(fft2(B)-fft2(A))));
  • fftErr = fftErr*256/max([fftErr(:)]);
  • fftErr = uint8(fftErr + 0.5*ones(size(fftErr)));

EP11760265A 2010-03-24 2011-03-24 Method and system for robust and flexible extraction of image information using color filter arrays Withdrawn EP2550808A2 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US31684010P 2010-03-24 2010-03-24
US201161435291P 2011-01-22 2011-01-22
PCT/US2011/029878 WO2011119893A2 (en) 2010-03-24 2011-03-24 Method and system for robust and flexible extraction of image information using color filter arrays

Publications (1)

Publication Number Publication Date
EP2550808A2 true EP2550808A2 (de) 2013-01-30

Family

ID=44673872

Family Applications (1)

Application Number Title Priority Date Filing Date
EP11760265A Withdrawn EP2550808A2 (de) 2010-03-24 2011-03-24 Verfahren und system für robuste und flexible extraktion von bilddaten mithilfe von farbfilterarrays

Country Status (2)

Country Link
EP (1) EP2550808A2 (de)
WO (1) WO2011119893A2 (de)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567968A (zh) * 2011-12-22 2012-07-11 孔令华 Image algorithm based on micro multi-spectral filter
CN104094323B (zh) * 2012-02-03 2017-11-21 梅伊有限公司 Apparatus and method for characterizing items of currency
US9681109B2 (en) 2015-08-20 2017-06-13 Qualcomm Incorporated Systems and methods for configurable demodulation
CN106126879B (zh) * 2016-06-07 2018-09-28 中国科学院合肥物质科学研究院 Soil near-infrared spectroscopy analysis and prediction method based on sparse representation
CN112750092A (zh) * 2021-01-18 2021-05-04 广州虎牙科技有限公司 Training data acquisition method, image quality enhancement model and method, and electronic device
WO2023182938A2 (en) * 2022-03-22 2023-09-28 Agency For Science, Technology And Research Method for brillouin optical time domain analysis (botda) based dynamic distributed strain sensing
CN117541495A (zh) * 2023-09-04 2024-02-09 长春理工大学 Image stripe removal method, apparatus and medium with automatic optimization of model weights

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60130671D1 (de) * 2001-12-24 2007-11-08 St Microelectronics Srl Method for contrast enhancement in digital color images
JP2003255035A (ja) * 2002-03-06 2003-09-10 Sony Corp Array and signal estimation method using the same
SE0402576D0 (sv) * 2004-10-25 2004-10-25 Forskarpatent I Uppsala Ab Multispectral and hyperspectral imaging
JP2006211610A (ja) * 2005-01-31 2006-08-10 Olympus Corp Imaging system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2011119893A2 *

Also Published As

Publication number Publication date
WO2011119893A2 (en) 2011-09-29
WO2011119893A3 (en) 2011-12-22


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20121024

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20151001