WO2024147826A1 - Color image sensors, methods and systems

Info

Publication number
WO2024147826A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
pixels
color
cell
filter
Application number
PCT/US2023/073352
Other languages
English (en)
Inventor
Geoffrey B. Rhoads
Ulrich C. Boettiger
Christopher J. CHAPUT
Robert G. Lyons
Hugh L. Brunk
Arlie R. Conner
Original Assignee
Transformative Optics Corporation
Application filed by Transformative Optics Corporation
Publication of WO2024147826A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843: Demosaicing, e.g. interpolating colour pixel values
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10: Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/135: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
    • H04N25/136: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours

Definitions

  • a digital image sensor that provides superior low light performance.
  • One particular embodiment is an image sensor comprising a semiconductor substrate fabricated to define a plurality of pixels, including a pixel cell of N pixels.
  • This cell includes a first pixel at a first location in a spatial group, a second pixel at a second location, a third pixel at a third location, and so forth through an Nth pixel at an Nth location.
  • Each of the pixels has a respective spectral response, and at least two pixels in the cell have different spectral responses.
  • the semiconductor substrate is further fabricated to define hardware circuitry configured to: (a) compare scene values associated with a first pair of pixels in the cell to obtain a first pixel pair datum; (b) compare scene values associated with a second pair of pixels in the cell, different than said first pair of pixels, to obtain a second pixel pair datum; and (c) form query data based on the first and second pixel pair data.
  • comparisons are performed between all pairings of pixels in a cell.
  • such comparisons extend further, to pairings between the subject cell and surrounding cells.
  • the resulting query data (which in some embodiments may be based on dozens or hundreds of pixel pair values) is provided as input data to a color reconstruction module that discerns color information for a pixel in the cell based on the query data.
  • Figs. 1 and 1A illustrate two embodiments.
  • Fig. 4 illustrates another embodiment.
  • Figs. 6A-6I detail spectral transmission curves for illustrative filters.
  • Fig. 8 compares curves of Figs. 6A and 6B.
  • Fig. 16 shows a sparse array of transparent pedestals fabricated (e.g., of clear photoresist) on a photosensor array.
  • Fig. 18 depicts a green filter atop a transparent pedestal.
  • Figs. 19A-19E illustrate different arrays of sparse pedestals on an array of photosensors.
  • Figs. 20 and 21 illustrate filter cells employing both relatively-thick and relatively-thin filters.
  • Fig. 22 shows spectral transmission curves for the filters of Fig. 21.
  • Fig. 23 illustrates an additional filter cell employing both relatively-thick and relatively-thin filters.
  • Fig. 39 illustrates how color correction matrices can vary depending on local position of filter cells (or filters).
  • Fig. 40 shows how a base pixel (A) is compared against two ordinate pixels (B and C), yielding two pixel pair data.
  • the layer-2 pixel material covers layer-1 material.
  • these contacts between layer 1 cells and layer 2 cells can be quantified, e.g., in effective nanometers of overlap.
  • layer 2’s tolerances will be relaxed, as compared to contemporary norms. This relaxation is for the same reason stated for layer 1.
  • current norms might posit a tolerance for 15% standard deviations in color resist thicknesses and only 3 percent cross-material residuals. Embodiments of the present technology increase one or both of these figures by 50%, 100%, 200% or more.
  • the sixth layer is the yellow (Y) mask and color resist layer, at pixel location S8.
  • the thickness specification for this sixth layer is 1 micron, with relaxed tolerances as before.
  • Figs. 2 and 3 show spectral curves associated with these color resists. These curves are based on published information from Fujifilm depicting characteristics of its Color Mosaic line of pigments.
  • Fig. 2 shows cyan, magenta and yellow pigment transmission at different layer thicknesses, with the solid line indicating 0.9 microns (nominal), the dotted line being 0.7 microns, and the dashed line being 1.1 microns. Note, in connection with the differing thicknesses, that the curves don’t simply shift up and down with the same profile. Rather, their shapes change. For example, the widths of the notches change, as do the slopes of the curves and the shapes of their stop-bands.
  • Fig. 3 shows the red, green and blue pigment filter transmissions at nominal layer thicknesses.
  • Figs. 2 and 3 exhibit the spectral-axis visible range from 400 to 700 nanometers. Extensions into the near infrared (NIR) and near ultraviolet (NUV) are encouraged within all designs and applications where more than just ‘excellent human-viewed color pictures’ are desired. As taught in previous disclosures, a balance is encouraged that optimizes the quality of color images while maintaining a specified quality of multichannel information useful to machine vision applications (or vice-versa).
  • NIR near infrared
  • NUV near ultraviolet
  • the underlying quantum efficiency (QE) of the silicon detector fades toward lower levels as blue light moves into the NUV, and as far red light moves into the NIR. So, in both cases, the underlying behavior of the sensor is moving the photoelectron signal levels downwards.
  • QE quantum efficiency
  • the quantum efficiency of silicon falls off with increasing wavelengths, and either through pigmentation supplements, or explicit glass surfaces, or other means, one can fashion an all-pixel NIR cut-off.
  • the first embodiment employs an all-pixel NIR cut-off somewhere between about 750 nm and about 800 nm.
  • This 3 x 3 color filter array includes 3 customary red, green and blue filters, plus two each of cyan, yellow and magenta.
  • Each of these latter filters is fabricated with two different thicknesses - thin and thick (denoted “N” for narrow and “T” for thick in the figure), to yield two different spectral transmission filter curves for each of these three colors.
  • the thin can be, e.g., less than 1.0 microns, such as 0.9, 0.8, or 0.7 microns, while the thick can be, e.g., greater than 0.8 microns, such as 0.9, 1, 1.2 or 1.5 microns (paired appropriately with the thin filter of that color to maintain the thin/thick relationship).
  • a color filter array can include elements formed of the same color resist, but with different thicknesses, to achieve diversity in filtering action.
  • one magenta filter layer is 0.5 microns while another magenta filter layer is 1 micron.
  • one layer may be just 10% or 20% or 50% thicker than another layer of the same color.
  • one layer may be 100%, 200% or more greater in thickness than another layer of the same color.
  • One embodiment employs different thickness for only one color, whereas other embodiments employ different thicknesses for multiple colors. As indicated, some embodiments deposit a single photoresist at more than two different thicknesses.
  • every photosite will contain some finite, measurable amount of each of the four pigments, namely its assigned color and trace amounts of the other three.
  • Six different nominal surface thicknesses for these four color resists have been specified. Each photosite has a nominal surface thickness value ranging from a few tenths of a micron to over 1 micron; we call this the nominal thickness of the color resist layer.
  • the pigment concentration level of a photosite-assigned pigment is defined as the sensor-global mean value of said pigment, after a sensor has been manufactured and packaged.
  • This global-sensor mean value is normalized, or set to 1.0 (i.e., we are not here talking microns).
  • at each photosite in the first embodiment there are three (contaminating) pigments that are different from the photosite-assigned pigment.
  • Each of these three pigments will have some sensor-global mean as measured across the entire manufactured and packaged sensor, in the cells where it is not the assigned pigment. This global mean for each of the three different pigments can be called the ‘mixing mean’. All three pigments will have a unique mixing mean, with values in normalized units of a few hundredths (e.g., 0.015, 0.03, or 0.05) for higher tolerance manufacturing practices, to still higher values, such as 0.06, 0.1, 0.15 or higher normalized units for pushing-the-envelope experimentation.
  • these non-assigned pigment mixing means will have their own standard deviations; call these ‘mixing slop’. (Empirical practice is expected to show that for many sensor designs, the mixing means and the mixing slop values will be correlated via a simple square root relationship; be this as it may, this disclosure keeps these numbers as independent values.)
  • One embodiment of the technology is thus a color imaging sensor with plural pixels, where each pixel is associated with a plural byte memory, and this memory stores data relating one or more parameter(s) of the pixel to such parameter(s) for like pixels across the sensor.
  • Every photosite has its own unique signature relative to these forty-two calibration parameters, which poses the matters of measuring and using these parameters.
  • The first matter is addressed by what is termed Chromabath and FuGal illumination.
  • the second matter is addressed by Pixel-Wise Correction.
  • This disclosure builds on the Chromabath technology previously taught in applications 63/267,892 and 18/056,704, replacing the monochromator used therein with a multi-LED (e.g., ten- or twelve-LED) illumination system termed the Full Gamut Illuminator (FuGal).
  • the monochromator arrangement retains a role, however, in that it is used in order to train and calibrate the usage of the FuGal in a scaled manufacturing line.
  • each photosite may be characterized by parameters (some of which depend on the type of photosite layer) including: 1) its dark-median value in digital numbers; 2) its nominal equal-white-light gain in digital numbers, which is then related to irradiance (light levels falling upon a photosite); and then 3) through 5) are the CYMG mixing ratios, with the sum of the ratios being 1.0, where only three parameters are required to fully specify those ratios.
  • Measurement of the dark medians draws from ‘dark frame’ characterization in astronomy. No light is allowed to fall onto a sensor; many frames of data are captured; and the long-term global average of the frames is stored, sometimes associated with metadata indicating the temperature of the sensor. Such data is then used to correct later measurements, e.g., by subtracting the dark frame data on a pixel-by-pixel basis. Many existing CMOS image sensors have some form of dark level adjustment and/or correction. In some embodiments, applicant uses the median, rather than the mean, of a series of dark frames for correction. This is believed to aid in certain operations that employ neighboring photosite comparisons.
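A minimal sketch of this dark-median correction, assuming the dark frames are captured as a numpy stack (the function name and array shapes are illustrative, not from the patent):

```python
import numpy as np

def dark_median_correct(dark_frames, scene_frame):
    """Subtract a per-photosite dark median from a scene frame.

    dark_frames: (num_frames, rows, cols) stack captured with no light
    falling on the sensor. scene_frame: (rows, cols) raw frame to correct.
    The median, rather than the mean, is used per the text, to aid
    operations that employ neighboring photosite comparisons.
    """
    dark_median = np.median(dark_frames, axis=0)  # per-photosite dark level
    return np.clip(scene_frame - dark_median, 0, None)
```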
  • the equal-white-light gain values for a sensor’s photosites are typically measured after correction for each photosite’s dark median value has been applied. ‘Flat field’ imaging procedures can be used to measure these gain values.
  • the lower case bl indicates that these are either the modelled (lower cost scenarios) or the empirically measured pseudo-Beer-Lambert curves for the respective pigments. “Pseudo” simply acknowledges that empiricism trumps theory.
  • Some embodiments comprise color filter cells characterized in that plural pairs of filter transmission curves, defined by samples at 10 nm intervals from 400-700 nm, cross each other exactly once, or exactly zero times.
  • Some embodiments comprise color filter cells including three or more different filters, each with an associated transmission curve, said filters being characterized in that a count of crossings between all pairs of said curves, in each of thirty 10 nm bands between 400-700 nm, yields a vector of 30 count values, the average value of which is at least 2.
  • Some embodiments comprise color filter cells including three or more different filters, each with an associated transmission curve, said filters being characterized in that a count of crossings between all pairs of said curves, in each of thirty 10 nm bands between 400-700 nm, yields a vector of 30 count values, and no two consecutive count values in said vector are both equal to zero.
  • each of the just-detailed embodiments can be comprised partly or wholly of non-normative filters.
  • Hamming distance between their bit strings.
  • a Hamming distance between two strings of equal length is the number of positions at which the corresponding bits are different.
  • the Hamming distance between crossing-code (A,B) and crossing code (A,C) is determined by comparing their strings and counting the number of bit positions where they are different. From Table VII:
  • Crossing-code (A,C) 000100000000000000100000000000
  • the Hamming distance of 8 is between crossing-code (A,G) and crossing-code (H,I). There are 2 Hamming distances of 7 among the 630 values.
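A minimal sketch of the crossing-code comparison; the (A,C) string is taken from the text, while the second string is an illustrative stand-in, since crossing-code (A,B) from Table VII is not reproduced above:

```python
def hamming_distance(code_a: str, code_b: str) -> int:
    """Count the positions at which two equal-length bit strings differ."""
    assert len(code_a) == len(code_b)
    return sum(a != b for a, b in zip(code_a, code_b))

crossing_ac = "000100000000000000100000000000"  # crossing-code (A,C), from the text
crossing_xy = "000100000000000001000000000000"  # illustrative stand-in code
print(hamming_distance(crossing_ac, crossing_xy))  # -> 2
```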
  • Some embodiments comprise color filter cells characterized in that multiple Hamming distances between all possible crossing-codes defined between different filters in the cell have values of zero. One or more of these Hamming distances of zero can involve crossing-codes that are not all zero. At least one of these Hamming distances of zero can involve crossing-codes including at least three “1”s.
  • Some embodiments comprise color filter cells characterized in that multiple Hamming distances between all possible crossing-codes defined between different filters in the cell have values of 5 or more, or 7 or more.
  • Some embodiments comprise color filter cells characterized in that an average Hamming distance between all possible crossing-codes defined between different filters in the cell is at least 3.
  • Some embodiments comprise color filter cells characterized in that a standard deviation of all Hamming distances between all possible crossing-codes defined between different filters in the cell is less than 1.25.
  • each of the just-detailed embodiments can be comprised partly or wholly of non-normative filters.
  • Some embodiments comprise color filter cells characterized in that the average efficiency across all non-normative filters in the cell is at least 40%. In some such embodiments the average efficiency of all non-normative filters is at least 50%, or at least 60%, or at least 70%.
  • the efficiencies of individual filters within a cell can vary substantially. In Table IX the efficiencies vary from less than 25% to more than 65%. That is, one filter has an efficiency that is more than 2.65 times the efficiency of a second filter in the cell.
  • Some embodiments comprise color filter cells characterized by including a first non-normative filter that has an efficiency at least 2.0 times, or at least 2.5 times, the efficiency of a second non-normative filter in the cell.
  • Some embodiments comprise color filter cells characterized as including one or more non-normative filters having group-normalized transmission functions that stay above 0.2 in the 400-700 nm wavelength range.
  • Some embodiments comprise color filter cells characterized as including at least one filter having a group-normalized transmission function that stays below 0.7 from 400-700 nm.
  • Some embodiments comprise color filter cells characterized as including plural filters having group-normalized transmission functions that stay below 0.75 from 400-700 nm.
  • Some embodiments comprise color filter cells characterized as including three filters having group-normalized transmission functions that stay below 0.8 from 400-700 nm.
  • each of the just-detailed embodiments can be comprised partly or wholly of non-normative filters.
  • Another metric that is useful in characterizing filter diversity is the sample correlation coefficient. Given two arrays of n filter transmission function sample values, x and y (e.g., the 31 values for filters A and B detailed in Table I), the sample correlation coefficient r (hereafter simply “correlation”) is computed as:
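The formula referenced above is the standard sample correlation coefficient:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$

where $\bar{x}$ and $\bar{y}$ are the means of the two sample arrays.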
  • Some embodiments comprise color filter cells characterized in that a correlation computed between transmission functions of two different filters in the cell, at least one of which is non-normative, at 10 nm intervals from 400-700 nm, is negative.
  • Some embodiments comprise color filter cells characterized in that a correlation computed between transmission functions of two different filters in the cell, at 10 nm intervals from 400-700 nm, is at least 0.8, at least 0.9 or at least 0.95.
  • the qualifier “local” indicates a spectral transmission function extremum within a threshold-sized neighborhood of wavelengths.
  • An exemplary neighborhood spans 60 nm, i.e., 30 nm plus and minus from a central wavelength.
  • a local maximum or minimum e.g., a 60 nm-span local maximum or minimum.
  • we regard a value on a transmission function curve to be a local maximum only if its group-normalized value is 0.05 higher than another transmission function value within a 60 nm neighborhood centered on the feature. Similarly for a minimum - it must have a value that is 0.05 lower than another transmission function value within a 60 nm neighborhood. If the transmission function is at a high or low value at either end of the curve (as is the case, e.g., at the left edge of Fig. 6A), we don’t know what lies beyond, so we don’t term it a local maximum or minimum for purposes of the present discussion.
  • a local maximum as “broad” if its transmission function drops less than 25%, from its maximum value, within a 40 nm spectrum centered on the maximum wavelength (sampled at 10 nm intervals). That is, the maximum is broad-topped.
  • a notch as broad if its transmission function value at the bottom of the notch is less than 25% beneath the largest transmission function value within a 40 nm spectrum centered on the notch wavelength.
  • a broad local minimum is found at 590 nm in Filter D (Fig. 6D). Its notch is just 19% below the largest value found within 20 nm (i.e., the transmission function at 590 nm is 0.400, and the largest value in the 40 nm window is 0.493 at 610 nm). This is the only broad local minimum in the detailed set of nine filters.
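A minimal sketch of these extremum tests, assuming group-normalized transmission samples at 10 nm intervals from 400-700 nm (function names and list handling are illustrative):

```python
def local_maxima_60nm(trans, margin=0.05):
    """Indices of 60 nm-span local maxima in a curve sampled every 10 nm.

    A sample qualifies only if it tops its +/- 30 nm neighborhood and sits
    at least `margin` above some other value in that neighborhood. The
    outermost 30 nm at each end are skipped, since the curve beyond the
    sampled range is unknown (per the text, extrema run 430-670 nm).
    """
    hits = []
    for i in range(3, len(trans) - 3):
        window = trans[i - 3:i + 4]
        if trans[i] == max(window) and trans[i] - min(window) >= margin:
            hits.append(i)
    return hits

def is_broad_maximum(trans, i):
    """Broad if the curve drops less than 25% from the peak within +/- 20 nm."""
    window = trans[max(i - 2, 0):i + 3]
    return min(window) > 0.75 * trans[i]
```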
  • Some embodiments comprise color filter cells characterized in that a count of broad 60 nm-span local maxima exceeds a count of broad 60 nm-span local minima. Some such embodiments are characterized in that the count of broad 60 nm-span local maxima is at least 150%, at least 200%, at least 300% or at least 400% of the count of broad 60 nm-span local minima. Some such embodiments are characterized in that the count of broad 60 nm-span local maxima is at least seven times the count of broad 60 nm-span local minima.
  • Some embodiments comprise color filter cells characterized in that one or more non-normative filters in the cell has a transmission function that includes exactly one 60 nm-span local maximum and no 60 nm-span local minimum.
  • Some embodiments comprise color filter cells with plural different non-normative filters, characterized in that a count of broad maxima among said non-normative filters is greater than a count of broad minima among said non-normative filters.
  • the slopes of the filter curves that connect to extrema can vary. Diversity can be aided by diversity in the slopes of the transmission curves.
  • the slope of a curve is defined as the change in group-normalized transmission over a span of 10 nm (i.e., from 400 to 410 nm, 410 to 420 nm, etc.). Although determined over a 10 nm interval, the slope is expressed in units of per-nanometer. For example, between 690 and 700 nm, the group-normalized transmission value of Filter A changes from 0.0403 to 0.0490, a difference of 0.0087 over a span of 10 nm. It thus has a slope of 0.00087/nm. Slopes can be positive or negative, depending on whether a curve ascends or descends with increasing wavelength.
  • the filter curves can also be characterized, in part, by absolute values of the slopes.
  • Some embodiments comprise color filter cells including at least one non-normative filter, characterized in that the absolute value slopes of all group-normalized filter transmission functions of non-normative filters, when computed over 10 nm intervals between 400-700 nm, yield a set of values, and at least 50% of the values are less than 0.01/nm or less than 0.005/nm.
  • Some embodiments comprise color filter cells including at least one non-normative filter, characterized in that the absolute value slopes of all group-normalized filter transmission functions of non-normative filters, when computed over 10 nm intervals between 400-700 nm, yield a set of values, and at least 20% of the values are less than 0.001/nm.
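A minimal sketch of the slope computations behind these characterizations, reproducing the Filter A example (0.0403 to 0.0490 between 690 and 700 nm gives 0.00087/nm):

```python
def slopes_per_nm(trans):
    """Slopes over successive 10 nm intervals, expressed per nanometer."""
    return [(b - a) / 10.0 for a, b in zip(trans, trans[1:])]

def fraction_below(trans, threshold):
    """Fraction of absolute-value slopes under a threshold (e.g., 0.01/nm)."""
    abs_slopes = [abs(s) for s in slopes_per_nm(trans)]
    return sum(s < threshold for s in abs_slopes) / len(abs_slopes)
```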
  • a 60 nm-span local extremum is defined by reference to a 60 nm-wide neighborhood, i.e., plus and minus 30 nm from a center wavelength. Since transmission function data beyond the range 400-700 nm is sometimes unavailable, local extrema are typically defined starting at 430 nm and ending at 670 nm.
  • To the right side of Fig. 9 there is a second valley and a second valley neighborhood.
  • the filter curve is shown to have a transmission function local minimum value of 0.0403 at 690 nm. It is unknown whether this value fits the definition of a 60 nm-span local minimum (i.e., a valley) because it is unknown if the curve goes still lower, e.g., at 710 nm or 720 nm. Nonetheless, 620, 630, 640, 650, 660, 670, 680, 690 and 700 nm can all be identified as falling within a valley neighborhood, because all have group-normalized values below 0.15.
  • each “third class” 10 nm wavelength span is characterized by a slope value which, as detailed in Table XI, can be positive or negative.
  • a first group of 1-5 filters are all at or near local extrema
  • a second group of 1-5 filters all have positive slopes
  • a third group of 1-5 filters all have negative slopes.
  • the magnitudes of the slopes desirably include a variety of values, e.g., commonly in the range of 0.001/nm to 0.1/nm.
  • filters in the second group have slopes of 0.019/nm and 0.033/nm (i.e., Filters B and I)
  • filters in the third group have slopes of -0.0022/nm, -0.0064/nm and -0.0089/nm (i.e., Filters E, F and C).
  • different of the nine filters fall into different of the groups in different wavelength bands.
  • the average value in this histogram is 1.97 and the standard deviation is 1.60.
  • the difference between two crossing-codes can be indicated by a Hamming distance.
  • the 36 crossing-codes associated with the second filter set can be paired in 630 ways.
  • the Hamming distances range from 0 to 7, with an average value of 3.065 and a standard deviation of 1.24. There are 11 Hamming distances of 0 in the set of 630 values. There are 4 with a Hamming distance of 7.
  • Some embodiments comprise a color filter cell characterized in that at least one non-normative filter in the cell has an efficiency exceeding 66%.
  • this third filter set has transmission functions as detailed in Table XIX:
  • Some embodiments comprise a color filter cell comprised partly or wholly of non-normative filters, characterized in that a set of dot products between group-normalized transmission functions of all different filters in the cell, at 10 nm intervals from 400-700 nm, has a standard deviation less than 4, or less than 3.7.
  • Some embodiments comprise a color filter cell comprised partly or wholly of non-normative filters, characterized in that dot products computed between group-normalized transmission functions of all possible pairings of different filters, at 10 nm intervals from 400-700 nm, yield a set of values, and at least 25% of said values are less than 11.
  • Various known string-matching algorithms can be used. One is based on a Hamming distance. First, determine the ordering of outputs from nine differently-filtered pixels in a low-light scene. Call this nine-element sequence a query string. Then compare this query string with each of the 36 spectral reference strings in Table XXI. Count the number of positions at which the query vector has a different letter, in a given string position, within each spectral reference string. The smaller this count, the better the query string matches a spectral reference string. The spectral reference string that most closely matches the query string (i.e., the string having the smallest count of letter differences with the query string) indicates the hue at that region in the photosensor.
  • the query string can be deemed to match a wavelength between the two wavelengths indicated by the two spectral reference strings. For example, if the query string most-closely matches spectral reference string E-D-C-F-A-B-G-H-I, and this reference string is found in Table XXI for both 400 nm and 410 nm, then the query string can be associated with a hue of 405 nm.
  • a 36-bit query code is thereby produced. Comparing this query code with each of the reference hue-codes in Table XXII may find that the query code is closest to the reference hue-code for 430 nm, i.e., 110000110000011000011111111111111111. So pixel values for this region in the noisy image frame are replaced with RGB (or CMY) pixel values corresponding to the 430 nm hue.
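A minimal sketch of the Table XXI string match, with a wavelength-keyed dictionary standing in for the table (names and string format are hypothetical); the 36-bit query-code match against Table XXII proceeds analogously, using a Hamming distance as sketched earlier:

```python
def letter_differences(query: str, reference: str) -> int:
    """Count the string positions at which query and reference letters differ."""
    return sum(q != r for q, r in zip(query, reference))

def match_hue(query: str, reference_strings: dict) -> float:
    """Return the hue (nm) of the best-matching spectral reference string.

    reference_strings maps wavelength -> nine-letter ordering string, e.g.
    {400: "EDCFABGHI", 410: "EDCFABGHI", ...}. When the same best count
    occurs at adjacent wavelengths, their midpoint is returned, matching
    the 405 nm example in the text.
    """
    counts = {wl: letter_differences(query, s) for wl, s in reference_strings.items()}
    best = min(counts.values())
    winners = [wl for wl, c in counts.items() if c == best]
    return sum(winners) / len(winners)
```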
  • each pixel is regarded to be at the center of a cell, and its hue is determined based on comparisons of its value with values of other pixels in that cell. (If the cell doesn’t have a center pixel, then a pixel near the center can be used.)
  • a simple technique is linear interpolation.
  • To project a B-filtered value onto the location of the subject pixel we look to the values of the two nearest B-filtered pixels (i.e., in the same row). These two pixels, identified as B1 and B2, are shown in Fig. 11A. Their values are weighted in accordance with the reciprocal of their distances to the subject pixel, with the weights summing to unity.
  • the interpolated “B” value at the subject pixel location is:
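From the description above (reciprocal-distance weights that sum to unity), the interpolated value takes the form:

$$B = \frac{(1/d_1)\,B_1 + (1/d_2)\,B_2}{1/d_1 + 1/d_2} = \frac{d_2\,B_1 + d_1\,B_2}{d_1 + d_2}$$

where $d_1$ and $d_2$ are the distances from the subject pixel to pixels B1 and B2, so the nearer pixel receives the larger weight.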
  • local luminance in a region of imagery is estimated based on the statistical distribution of (normalized) values, since low light images exhibit larger deviations.
  • Different RGB values can be stored in the lookup table memory for different combinations of hue and luminance. Or a single set of RGB values can be stored for each hue, and values can then be scaled up or down based on estimated luminance.
  • Fig. 16 illustrates the concept.
  • a transparent resist is applied, exposed through a mask, developed and washed (sometimes collectively termed “masked”) on a photosensor substrate 171 to form transparent pedestals 172 at five locations in a nine-filter cell.
  • the resist may have a thickness of 500 nm.
  • Fig. 17 shows this excerpt of the sensor after five subsequent masking layers have defined five colored filters - such as red, green, blue, cyan and magenta.
  • this process is repeated two more times, with resists “C” and “D”. For each color, a thick filter layer and a thin filter layer are formed - the latter being at locations having transparent pedestals.
  • a fifth resist “E” is applied to the wafer, and masked to create a filter at the center of the color filter cell. Referring back to Fig. 16, it can be seen that there is a transparent pedestal at this location. Thus, resist layer “E” nowhere extends down to the photosensor substrate, but rather rests atop the transparent pedestal, only in a 500 nm layer.
  • the thicker and thinner filter layers of the same color resist have thicknesses in the ratio of 2:1 (i.e., 1000 nm and 500 nm). But this need not be the case.
  • Such ratios can range from 1.1:1 to 3:1 or 4:1, or larger. Commonly the ratio is between 1.4:1 and 2.5:1, with a ratio between 1.5:1 and 2:1 being more common.
  • Fig. 18 shows an excerpt from a color filter cell in which a green resist layer of 400 nm thickness is formed atop a transparent pedestal of 300 nm thickness.
  • Elsewhere in this color filter cell may be a green pigment layer that extends down to the level on which the transparent pedestal is formed, with a thickness of 700 nm.
  • the thick-to-thin ratio in the case of these green-pigmented filter layers is thus 1.75: 1 (i.e., 700:400).
  • the pedestals have heights between 200 and 500 nm, and resist is applied to depths to achieve thick filters (where no pedestal is located) of 600 to 1100 nm. In one particular embodiment, the pedestals all have heights of 200-300 nm. In the same or other embodiments, resist is applied to form thick filters of 700 - 1000 nm thickness (with thinner filters where pedestals are located).
  • thin and thick filters of a given resist color edge-adjoin each other. This is not necessary.
  • Some CFAs (or cells) have combinations of such relationships, with thin and thick filters of a first color resist corner-adjoining each other, and thin and thick filters of a second color edge-adjoining each other, or not adjoining at all.
  • CFAs (or cells) are characterized by all three relationships: corner-adjoining for thin and thick filters of a first color, edge-adjoining for thin and thick filters of a second color, and not adjoining for thin and thick filters of a third color.
  • the checkerboard pattern of transparent pedestals in Fig. 16 can be inverted, with the four corner locations and the center location lacking pedestals, and pedestals instead being formed at the other four locations.
  • a cell can include a greater or lesser number, ranging from 1 up to one-less than the total number of filters in the cell.
  • the array of pedestals may be termed “sparse.” That is, not every photosensor (or microlens) is associated with a pedestal.
  • One embodiment is thus an image sensor including a sparse array of transmissive pedestals, with an array of photosensors disposed below the pedestals and colored filter media (e.g., pigment) disposed above the pedestals.
  • the sparse array may be a checkerboard array, but need not be so.
  • Such arrangement commonly includes filter elements of thicker and thinner dimensions, the filter elements of thinner dimensions each being disposed above one of the transmissive pedestals.
  • a gapped checkerboard pattern can comprise such an array of pedestals without meeting at the corners, e.g., by reducing the sizes of each of the Fig. 16 pedestals in horizontal dimensions by 1% or more (e.g., 2%, 5%, 10%, 25% or 50%).
  • Figs. 19A-19E show a few such sparse patterns, with “T” denoting filter locations with transparent pedestals. Each of these, in turn, can be inverted, with transparent pedestals formed in the unmarked locations rather than the “T”-marked locations. As can be seen, the transparent pedestal locations can be edge-adjoining, corner-adjoining, or not adjoining, or any combination of these three within a given cell. (Here, as in the earlier discussion of thick and thin filters of the same color, the adjacency relationships are stated in the context of a single cell. Once a cell is tiled with other cells, different adjacencies can arise.)
  • Fig. 20 shows a cell of this sort that includes three transparent pedestals, using the pedestal pattern of Fig. 19E.
  • the three locations with transparent pedestals yield less-dense color filters, since such filters are physically thinner. These are shown by lighter lines and lettering.
  • the locations lacking transparent pedestals yield more dense color filters, since such filters are physically thicker. These are shown by darker lines and lettering.
  • the filters that appear twice in the cell can be secondary colors. In still other embodiments, the filters that appear twice in the cell can include one or more primary colors, and one or more secondary colors.
  • Filters of other functions can be included - including filters with desired ultraviolet (e.g., below 400 nm) and infrared (e.g., above 750 nm) characteristics, and filters of the diverse, non-conventional sorts detailed earlier. Each such filter can be included once in the cell, or can be included twice - once thin and once thick.
  • filters can be included once in the cell, or can be included twice - once thin and once thick.
  • certain pixels may be un-filtered (panchromatic), e.g., by a color resist that is transparent at all wavelengths of concern.
  • transparent pedestals to achieve thinner filter layers can be employed in cells of sizes different than 3 x 3, such as in cells of size 4 x 4, 5 x 5, and non-square cells.
  • a first masking operation defines a transparent pedestal at one of the four pixel locations (in the upper left, indicated by the lighter lines and lettering).
  • Three other masking operations follow, defining four color filters: one of red, one of blue, and two of green.
  • the green filter in the upper left, formed atop the transparent pedestal, is thinner than the green filter formed in the lower right.
  • the green filter in the upper left is also thinner than the red and blue filters.
  • This thin green filter passes more light than the thicker green filter (which, like the red and blue filters, is of conventional thickness). This increases the sensor’s efficiency. Being thinner also broadens-out the spectral curve, in accordance with the Beer-Lambert law. This changes the slopes and positions of the filter skirts, enabling an improvement in color accuracy.
  • the Bayer cell employs two green filters in its 2 x 2 pattern in deference to the sensitivity of the human visual system (HVS) to green. If a sensor is to serve machine vision purposes, then the HVS-based rationale for double-green is moot, and another color may be doubled, i.e., red or blue.
  • Fig. 23 shows a variant Bayer cell employing two diagonally- adjoining blue filters, one thick and one thin.
  • Fig. 24 shows transmission curves for such an arrangement. The thin blue filter curve is shown by the bold solid line. Here again, the thin filter is one-third the thickness of the other filters. As with the Fig. 22 arrangement, this modification increases the efficiency of the sensor, and diversifies the spectral curves - enabling better color accuracy.
  • the cell needn’t be square. Since there are six readily available pigmented resists (namely the three primary colors red, green and blue, and the three secondary colors cyan, magenta and yellow), such resists can be used to form six filters in a 2 x 3 pixel cell. Again, transparent pedestals can first be formed on certain of these pixels, so that resist that is later masked at such locations is thin relative to pixels lacking the pedestals.
  • the cell of Fig. 25 can be paired with a related cell in which the filter colors are each moved one pixel to the left, while the former pedestal pattern is maintained. This is shown in Fig. 26.
  • the top two rows comprise the cell of Fig. 25.
  • the lower 2 x 3 pixel cell is identical except the filters are each shifted one position to the left.
  • the result is a 4 x 3 pixel cell of 12 filters, containing thin and thick filters of four of the six colors, together with two thin filters of the fifth color (here cyan) and two thick filters of the sixth color (here red).
  • the thin and thick filters of a common color are formed in a single masking step - the difference being a transparent pedestal underneath the thin filter.
  • Fig. 29 shows group-normalized transmission functions for a six-element cell employing five resists.
  • one of the thin filters is a red-, green- or blue-passing filter
  • another of the thin filters is, respectively, a red-, green, or blue-attenuating filter (i.e., a cyan, magenta or yellow filter).
  • two masking operations can be utilized to form two layers of transparent pedestals, some atop others.
  • a first masking operation can create six 500 nm-thick pedestals at locations in a 3 x 3 cell.
  • a second masking operation can form three more pedestals, e.g., 300 nm thick - each atop one of the 500 nm pedestals created with the first masking step. This results in a first set of three pedestals of 500 nm thickness, and a second set of three pedestals of a total 800 nm thickness. Three other locations in the cell have no pedestal.
  • One particular such resist has an IR-tapered panchromatic response.
  • An IR-tapered panchromatic response is one that is essentially panchromatic through the visible light wavelengths, having a spectral transmission function greater than 80%, 90% or even 95+% over the 400-700 nm range, but then tapering down to be less responsive into IR.
  • the spectral transmission function of such a resist is below 50%, 20% or 10% at some point in the 700-900 nm range, and preferably at some point in the 700-780 nm range, such as at 720, 740 or 760 nm.
  • An embodiment according to one aspect of the technology is a color filter cell including a first filter comprised of a first colored resist formed on a transparent pedestal, and a second filter comprised of said same first colored resist not formed on a transparent pedestal, wherein the second filter has a thickness greater than the first filter.
  • An embodiment according to another aspect of the technology is a photosensor that includes a checkerboard pattern of transparent pedestals spanning the photosensor.
  • the filters are drawn only from CMYRGB color resists.
  • Fig. 30 is taken from a Canon datasheet for the 120MXS sensor and shows responses of its red, green and blue filtered pixels into the near infrared range. Also shown in Fig. 30, in solid line, is the response of pixels in the monochrome version of the Canon sensor. This is the sensor’s panchromatic response, i.e., without an overlaid color filter array. The shape of this panchromatic response curve is primarily due to the quantum efficiency of the silicon photosensors but also is influenced by the sensor’s microlens array and other factors.
  • At wavelengths between about 500 and 780 nm, yellow-filtered pixels have strong responses, above 70% and commonly over 80% of panchromatic responses at such wavelengths (yellow filters being panchromatic except for blocking blue wavelengths below 500 nm). Between 640 and 780 nm, yellow-filtered pixels have responses that are very close to those of red-filtered pixels detailed in the above table. The yellow pixels, however, have greater efficiencies (e.g., over a spectrum extending between 400 and 750 nm) than the red pixels.
  • the red and yellow pixels in this exemplar of the second class of embodiments respond strongly at wavelengths in the near-infrared.
  • the four channels of image data in this arrangement do not include a channel sensed by a blue pixel, but the IR-tapered panchromatic pixels are sensitive to blue.
  • the IR-tapered panchromatic pixels are sensitive to blue.
  • three of these channels can be red, green and blue - representing image scene content as perceived by receptors of the human eye, while the fourth channel can be made to vary in accordance with near infrared scene content.
  • the red and yellow pixels in the embodiments in this discussion lack the infrared blocking filter (sometimes termed a hot mirror filter) that is commonly used with image sensors.
  • IR-attenuating filters may be used, but may allow significant pixel responses in the near infrared, such as a response at 750 nm of 5% or more of peak response within the visible light range.
  • Embodiments described in this section can also be implemented by forming certain filters on IR-filtering pedestals, as described earlier.
  • the described embodiments can naturally make use of filters, and filter cells, having attributes detailed earlier. Except as expressly stated, red, green, blue, cyan, magenta and/or yellow filters are not required.
  • a color filter array may casually, or deliberately, be positioned over a photosensor array so that a single filter overlies a non-integral number of photosensors. Some photosensors may be overlaid by plural filters. In some embodiments, the photosensors and filters have different dimensions to contribute to this effect.
  • the spectral filtering function for each photosensor in the device is characterized. Applicant’s Chromabath procedure can be used. Associated data memorializing the filtering function for each photosensor is stored in memory on the device. Similarly, data for kernels by which scalar outputs from individual photosensors in a neighborhood can be transformed into color values for desired color channels, for a pixel at the center of the neighborhood, are also stored in the device memory. (Such neighborhoods may be, e.g., of size 5 x 5 or 7 x 7.)
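A minimal sketch of how such stored kernels might be applied, assuming a 5 x 5 neighborhood and three output channels; the kernel-lookup scheme is hypothetical, as the patent specifies only that the kernel data resides in device memory:

```python
import numpy as np

def reconstruct_color(raw, kernels, row, col, size=5):
    """Transform a neighborhood of scalar photosensor outputs into color values.

    raw:     2-D array of raw photosensor outputs.
    kernels: (size, size, 3) weights retrieved from on-device memory for the
             spectral neighborhood of the photosensor at (row, col).
    Returns a 3-vector of color channel values for the center pixel.
    """
    half = size // 2
    patch = raw[row - half:row + half + 1, col - half:col + half + 1]
    # Weighted sum of the neighborhood, one weight plane per color channel.
    return np.einsum('ij,ijc->c', patch, kernels)
```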
  • filters of two different spectral functions can be achieved with the same media (e.g., pigmented resist) by making the two filters of different thicknesses, as detailed earlier.
  • the spatially - varying filter arrangements can also employ filters, and filter cells, having attributes detailed earlier.
  • Fig. 36 exaggerates the idea for the sake of illustration.
  • In this figure we find four specific regions of the spectrum where a given red pixel - sitting somewhere in a sea of pixels - happens to deviate measurably from the global mean red spectral function. It is this kind of spectral function deviation that the Chromabath procedures measure and correct.
  • execution of this process can involve a calibration of a sensor-under-test stage using a multi-LED lighting system, and a calibration of the calibrator stage using a monochromator.
  • CMOS imager manufacturers commonly ‘bin’ individual sensors and thus categorize them into a commercial grading system. This grading system brings with it disparities in the price that can be charged for any given sensor. Vast amounts of R&D, engineering and quality-assurance budgets are allocated to increase the yield of the higher quality level bins.
  • Sensitivity differences discerned in this manner are encoded into an N-byte compressed ‘signature’ that may be stored directly on memory fabricated on a CMOS sensor substrate, or stored in some other (e.g., off-chip) manner by which processes utilizing the image sensor output signals can have access to this N-byte signature data.
  • the processing of pixels into output images utilizes information indicated by these N-byte signatures to increase the quality of image output. For example, the output of a “hot” pixel can be decreased, and the pixel’s unique spectral response can be taken into account, in rendering or analyzing data from the sensor.
  • Machine-learning and AI applications can also use these N-byte signatures as further dimensions of ‘feature vectors’ that are employed during training, testing and use of neural networks and related applications.
  • a series of medium-narrow-bandwidth LEDs, individually illuminated, can also be used in place of a monochromator for this measurement of pixels’ N-byte signatures.
  • the practical advantage of using a series of LEDs is that it is generally less expensive than a monochromator, and the so-called ‘form factor’ of placing LEDs in proximity to wafer-scale sensors is superior: the LEDs as a bank can sit just above a wafer, as in the commercial Gamma Scientific RS-7-4 Wafer Probe Illuminator.
  • a sensor manufacturer identifies the quality assurance criteria that are most frequently failed.
  • a few of these may not be susceptible to mitigation by N-byte signature information, such as a simply-dead sensor, or a sensor with an internal short or open circuit that disables some function.
  • the N-byte signature data is then used to convey data (or indices to data stored elsewhere) by which these idiosyncrasies can be ameliorated.
  • a connected pair, a connected trio, etc., of pixels can either be ‘dead’ or otherwise out of specific performance parameters.
  • N-byte pixel characterization can become a useful mitigation factor, transforming a lower-binned sensor fetching a lower market-price into a higher-binned sensor fetching a higher market-price.
  • neighborhoods of pixel-spectral-functions may not employ a fixed, repetitive cell-pattern, such as the 2x2 Bayer (RGGB) cell.
  • RGGB 2x2 Bayer
  • An example is the spatially-varying color filter arrays detailed above. The following discussion addresses how data generated by these non-repetitive pixel neighborhoods can be turned into image solutions.
  • a luminance (luma) image that corresponds to the pixel data.
  • a different kernel is defined for each differently-colored pixel, with a given neighborhood of surrounding pixel colors.
  • for the red, green, blue, and yellow filter cell discussed above, there would be four different 6 x 6 kernels - each kernel centered on a respective one of the differently-colored pixels.
  • similarly, for a cell of nine differently-colored pixels there would be nine different 7 x 7 kernels.
  • a first step can be to parameterize each differently-colored pixel’s spectral response profile. This parameterization is desirably accurate enough to “fit” the empirically measured pixel profiles to within a few percentage points of the pixel’s peak response level in the spectrum.
  • Fig. 38 depicts an example of a windowed Fourier series of functions, defined here over the interval 350 - 800 nm, which can be fit to both (a) to pixel spectral response profiles, and (b) to pixel spectral function solutions.
  • the functions can also be weighted by the photosensor’s quantum efficiency, as a function of wavelength. Each term of the Fourier series is associated with a corresponding weighting coefficient, and the weighted functions are then summed, as is familiar in other applications employing the parametric fitting of functions.
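A minimal sketch of such a parametric fit, reading “windowed” as a Fourier basis confined to the 350-800 nm interval; the term count and the quantum-efficiency weighting scheme are illustrative assumptions:

```python
import numpy as np

def fourier_basis(wavelengths, n_terms, lo=350.0, hi=800.0):
    """Constant, cosine and sine terms over the 350-800 nm window."""
    t = (np.asarray(wavelengths) - lo) / (hi - lo)  # map interval to [0, 1]
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols += [np.cos(2 * np.pi * k * t), np.sin(2 * np.pi * k * t)]
    return np.column_stack(cols)

def fit_response(wavelengths, response, qe=None, n_terms=6):
    """Least-squares Fourier coefficients for a measured spectral response.

    If supplied, qe weights the basis by the photosensor's quantum
    efficiency curve, as the text suggests; the fit should land within a
    few percent of the pixel's peak response across the spectrum.
    """
    basis = fourier_basis(wavelengths, n_terms)
    if qe is not None:
        basis = basis * np.asarray(qe)[:, None]
    coeffs, *_ = np.linalg.lstsq(basis, response, rcond=None)
    return coeffs
```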
  • an image sensor is used to capture an image of a color test chart having multiple printed patches of known color, in known lighting.
  • the captured image is stored in an m x n x 3 array, where m is the number of rows in the sensor, n is the number of columns, and 3 indicates the number of different output colors.
  • ideally, the captured image would be identical to another m x n x 3 array containing reference data corresponding to the correct colors, but it is not.
  • the captured image array is multiplied by a 3 x 3 x 3 color correction matrix, whose coefficients have been tailored so that the product of such multiplication yields a best least-squares fit to the reference array.
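A minimal sketch of the static correction step, using the common 3 x 3 matrix fit by least squares over all pixels of the chart image (the locally adaptive extension discussed next would index such matrices per cell or region):

```python
import numpy as np

def fit_ccm(captured, reference):
    """Fit a 3 x 3 color correction matrix by least squares.

    captured, reference: (m, n, 3) arrays of sensed and known patch colors.
    The returned matrix C minimizes ||captured @ C - reference||^2.
    """
    x = captured.reshape(-1, 3)
    y = reference.reshape(-1, 3)
    ccm, *_ = np.linalg.lstsq(x, y, rcond=None)
    return ccm

def apply_ccm(image, ccm):
    """Apply the correction to an (m, n, 3) image."""
    return image @ ccm
```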
  • the static 3 x 3 color correction matrix [A11 A12 A13; A21 A22 A23; A31 A32 A33] (where A is simply a generic letter representing various formulations, some involving X, Y and Z, others involving R, G and B, and yet others hybrids of these) is converted into a locally adaptive form:
  • a locally adapted color correction matrix value is a function of many parameters, including its ‘index’ of where it sits in the 3x3 color correction matrix itself. (As with the previously-detailed four-byte pixel signature data, the local color correction data is stored on-device in memory adequate for this purpose.)
  • One form of the 4-byte pixel signature posits the encoding of some “mixing” (cross-contamination) of the masked pigments, e.g., red and green pigments having trace amounts in a nominal blue pixel, and the same situation for nominal red and nominal green.
  • if we use this ‘encoding scheme’ as an example for how to build these f functions for the locally adaptive color correction matrices, then we can build in this additional translation layer. Again empirically (and via simulation), mappings can be solved (the f functions themselves) via putting together ‘truth’ datasets matched to millions of instances of 4-byte neighborhood values imaging the full gamut of colors, and learning the answer.
  • one method trains these locally adaptive color correction functions through machine learning and any number of choices which match millions or billions of truth-examples to corresponding millions or billions of 4-byte neighborhood pixel-signature values, viewing millions of color patches and color patterns across the entire gamut of colors.
  • the example here has one CCM per 2 x 2 pixel Bayer cell; other arrangements are certainly possible (including one CCM per each N x N region, with overlaps; and applying CCMs to pixels rather than cells).
  • CCMs become locally tuned to minor imperfections in the pixel-to-pixel spectral functions of the underlying pixel types.
  • Regional and global scale corrections can be built into these local CCMs, where, for example, if spin coats over a chip are at a unit thickness at one corner of a sensor, and 0.97 unit thickness at another corner, this global scale non-uniformity can still be corrected by having the local values slowly change accordingly.
  • CMOS sensor photosites pixels
  • the accepted theory of light measurement by CMOS sensor photosites is that incident light generates so-called photo-electrons at a discrete pixel.
  • the number of collected photo-electrons is discrete and whole, having no other numbers than 0, 1, 2, 3 and upwards.
  • read-noise of an amplifier plus analog to digital conversion arrangement.
  • shot-noise is also present, an industry term used to describe the Poissonian statistics of discrete (whole number) measurement arrangements.
  • Yet an additional factor that often should be considered is pixel to pixel variations in individual measurement behavior, a phenomenon often referred to as fixed pattern noise.
  • While ShadowChrome works well for normal brightness scenery imaging, where pixels are enjoying 100’s if not 1000’s of generated photo-electrons in every image snap, it has really been designed for very low light levels where the so-called signal to noise ratio (SNR) is 10, or even down to 1 and below.
  • SNR signal to noise ratio
  • ShadowChrome can make use of the Chromabath data results.
  • a second possible preliminary stage to ShadowChrome involves the use of either calibrated white patches, or, in situations where such patches are unavailable, some equivalent ‘scene' where there is access to ‘no color’ objects. As an ultimate backup where no scenes are available at all, one still can use ‘theoretical’ sensor specification data such as the sensor spectral sensitivity curves of each pixel type.
  • the aim of this second preliminary stage is to track the so-named ‘grey-gain’ of either A) all pixels of some given spectral type (e.g., one of nine types); or B) each pixel’s grey-gain, in a Chromabath fashion.
  • the latter is preferred for reaching the utmost in color measurement performance, but the former is acceptable as well, since CMOS sensors typically have well within 1% uniform behavior in ‘generic gain.’ Since we are dealing with very low light level imaging, often involving single-digit photo-electron counts, this 1% uniformity is a classic diminishing-returns situation.
  • pixel spectral types as a class have very different grey-gains, one type compared to another, but within a given spectral type, the grey-gains are effectively the same.
  • Grey-gain values themselves can be arbitrarily defined and then normalized to each other, but in this disclosure we use the convention that the highest grey-gain value belonging to only one of the spectral-pixel-types will be assigned the value of 1.0, and all others will be slightly lower than 1.0 but in the proper ratio. So-called ‘white patch equalization’ between the pixel-spectral-types would posit that grey-gain values below 0.8 are preferably avoided, if possible. (It will be recognized that these white patch and grey-gain data are, in a sense, metrics of pixel efficiency.)
  • an image sensor comprising a 3 x 3 cell of nine pixels - some or all of which have differing spectral responses (i.e., they are of differing types). Filters and filter cells having the attributes detailed earlier are exemplary. These nine pixels may be labeled as the first through ninth pixels (or, interchangeably, as pixels A-I), in accordance with some arbitrary mapping of such labels to the nine pixel positions.
  • a scene value associated with one pixel in the cell termed a base pixel
  • a scene value associated with a different pixel in the cell termed an ordinate pixel.
  • the term “scene value” is sometimes used to refer to a value associated with a pixel when the image sensor is illuminated with light from a scene.
  • the scene value of a pixel can be its raw analog or digital output value, but other values can be used as well.
  • the term digital number, or DN, is also commonly used to represent a sensed datum of scene brightness.
  • pixel A is the base pixel
  • pixel B is the ordinate pixel; the comparison is indicated by the arrow 401 between these pixels.
  • Such pixel pair comparison data is desirably produced by a hardware circuitry module fabricated on the same semiconductor substrate as the image sensor, and a representation of such data (e.g., as a vector data structure) is output by such circuitry as query data.
  • This query data is applied to a subsequent process (typically implemented as an additional hardware circuitry module, either on the same substrate or on a companion chip), which assigns output color information for the central pixel based in part on such data.
  • This module may be termed a demosaicing module or a color reconstruction module.
  • Such hardware arrangement is shown in Fig. 41, with the dashed line indicating a common semiconductor substrate including the stated modules.
  • the quality of output color information will ultimately depend on the richness of the query information. Accordingly, query information based on just two inter-pixel comparisons (base and first ordinate; base and second ordinate) is rarely used. In many embodiments, further comparison operations are undertaken between the scene value associated with the base pixel, and scene values associated with still other pixels in the cell, yielding other pixel pair data. If the base pixel is termed the first pixel, then eight pixel pair comparison data can be produced, involving comparisons with the second through ninth (ordinate) pixels.
  • the first two inter-pixel comparison data are produced as described above, i.e., by comparing the scene value associated with the first pixel with scene data associated with the second pixel (i.e., a [1,2] comparison, where the former number indicates the base pixel and the latter number indicates the ordinate pixel), and by comparing the scene value associated with the first pixel with scene data associated with the third pixel (i.e., a [1,3] comparison). Similar such comparisons likewise compare the scene value associated with the first pixel with scene data respectively associated with the fourth through ninth pixels in the cell, yielding [1,4] through [1,9] pixel pair data.
  • Fig. 42 illustrates these further comparisons - each involving pixel A as the base pixel. Again, a representation of all such pixel pair data is output by the hardware circuitry as query data.
  • the term 'compare', and its various forms such as 'comparing' and 'comparison', is used for a variety of mathematical choices of precisely how such comparison is made.
  • One form of comparison is a sigmoid function comparison (see Wikipedia for details).
  • the limiting case of the sigmoid function becomes a simple greater-than/less-than comparison of two separate values. In the case of whole-number DNs, the case of equal-to also becomes a realized case, often leading to a null result or the assignment of the value 0.
  • the limiting values of the sigmoid, both in this disclosure and more generally, are the numbers 1 and -1.
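  • A sketch of one such soft comparison follows, using tanh (a sigmoid rescaled to the interval (-1, 1)); the steepness parameter k is an assumption, which per the discussion below could be machine-learning tuned:

```python
import numpy as np

def sigmoid_compare(base, ordinate, k=0.1):
    """Soft comparison of two scene values, mapped to the open interval (-1, 1).

    As k grows, this approaches the hard greater-than/less-than comparison
    whose limiting values are +1 and -1 (equal inputs yield 0).
    """
    return np.tanh(k * (base - ordinate))  # tanh is a rescaled sigmoid

print(sigmoid_compare(150, 50))   # close to +1 for a clearly larger base value
print(sigmoid_compare(50, 150))   # close to -1
print(sigmoid_compare(70, 70))    # exactly 0 for equal DNs
```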
  • the query data involves only a single base pixel.
  • the first pixel can be compared with eight other pixels, namely the second through ninth pixels (or more accurately, scene values associated with such pixels are compared)
  • the second pixel can be compared with seven other pixels, namely the third through ninth pixels.
  • the third pixel can be compared with six other pixels, namely the fourth through ninth pixels.
  • the fourth pixel can be compared with five other pixels, namely the fifth (central) through ninth pixels. And so on until the eighth pixel is compared with just one pixel: the ninth pixel.
  • the detailed process compares a scene value associated with a Qth pixel in the cell, with a scene value associated with an Rth pixel in the cell, to update a Qth-Rth ([Q,R]) pixel pair data, for each Q between 1 and N-1, and for each R between Q+1 and N.
  • the comparison result comprising the pixel pair data can take different forms in different embodiments.
  • the comparison result is a count that is incremented when the base pixel scene value is greater than the ordinate pixel scene value, and is decremented when the base pixel scene value is less than the ordinate pixel scene value. (If the base and ordinate values are equal, then the comparison yields a result of zero.)
  • the 36 comparisons thus yield a 36-element vector, each element of which is -1, 0 or 1. This may be termed a high/low comparison.
  • the comparison result is an arithmetic difference between the two scene values being compared. For instance, if the scene value of the base pixel is 25 and the scene value of an ordinate pixel is 70, the comparison result (the pixel pair datum) is -45.
  • the 36-element vector thus comprises 36 integer or real numbers (depending on whether the scene values are integer or real values). This may be termed an analog or difference-preserving comparison.
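  • For concreteness, a sketch of forming such a 36-element vector under both of the comparison conventions just described (the cell values and function name are illustrative assumptions):

```python
import numpy as np
from itertools import combinations

def pixel_pair_vector(cell, mode="highlow"):
    """Compare all unordered pixel pairs of a 3 x 3 cell.

    cell: 9 scene values, ordered as pixels 1..9 (A..I).
    Returns a 36-element vector ordered [1,2], [1,3], ... [8,9].
    """
    v = []
    for q, r in combinations(range(9), 2):
        if mode == "highlow":                   # -1, 0 or +1
            v.append(int(np.sign(cell[q] - cell[r])))
        else:                                   # difference-preserving
            v.append(cell[q] - cell[r])
    return np.array(v)

cell = np.array([25, 70, 40, 55, 60, 33, 90, 12, 47])  # hypothetical DNs
print(pixel_pair_vector(cell))            # 36 elements in {-1, 0, +1}
print(pixel_pair_vector(cell, "diff"))    # e.g., first element is 25-70 = -45
```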
  • non-linear 'weighting' can be applied to these comparisons as well, as is often the case in machine-learning implementations where one is not equipped with full knowledge of the final correct choices: training against large 'truth'-based image sets is left to make the choices.
  • the parameters of the sigmoid function itself can be machine-learning tuned.
  • the quality of output color information will ultimately depend on the richness of the query information. While the just-described arrangement generates query data by comparisons within a single color filter array cell, richer query information can be obtained by extending such comparisons into the field of pixels beyond that single cell.
  • the color filter array comprises a tiling of cells. That is, referring to the just-discussed single cell as a first cell, there are multiple further cells tiled in a neighborhood around the first cell. Such further cells adjoin the first cell, or adjoin other further cells that adjoin the first cell, etc. These further cells may each replicate the first cell in its pattern of pixel types and its orientation. In such case, we sometimes refer to pixels found at the same spatial position in each of two cells as spatial counterparts, or as spatially-corresponding (e.g., a first pixel found in the upper left of the first cell is a spatial counterpart to a first pixel found in the upper left of the further cell).
  • some or all of these further cells may have the same pattern of pixel types as the first cell but be oriented differently, e.g., rotated 90 degrees to the right. Or some or all of these further cells may have a different pattern of pixel types but include pixels of one or more types found in the first cell. In such cases, we sometimes refer to pixels of the same type found in each of two cells as color- (or type-) counterparts, or as color- (or type-) corresponding (e.g., a blue pixel found in the first cell is a color-counterpart of a blue pixel found in a further cell).
  • scene values of pixels within the first cell are compared with scene values of spatial- or color-counterpart pixels in the further cells.
  • the scene value associated with the first pixel in the first cell is compared not only against the scene value of the second pixel in the first cell (as described above), but also with a scene value associated with a second pixel in one of the further cells.
  • the first-second ([1,2]) pixel pair datum referenced earlier reflects a result of this comparison. This operation is repeated one or more additional times, with second pixels in one or more other of the further cells.
  • Fig. 44 shows the first cell (i.e., the nine pixels outlined in bold in the center), within a local neighborhood of replicated cells.
  • pixel A of the first cell is the base pixel.
  • B the second pixel
  • This base pixel is also compared against second pixels in the cells to the left, and to the right, of the first cell, as indicated by the longer arrows.
  • the scene value associated with the first pixel in the first cell is compared with a scene value associated with a third pixel not only within the first cell, but also within one of the further cells.
  • the first-third ([1,3]) pixel pair datum referenced earlier is updated to reflect a result of this comparison. This operation is repeated one or more additional times, with third pixels in one or more of the other further cells.
  • Such operation is shown in Fig. 45, which parallels Fig. 44 but for the [1,3] pixel pair case.
  • the first pixel (A) in the first cell can be compared with two or more fourth pixels in the further cells, to yield richer [1,4] pixel pair data.
  • the second pixel (B) in the first cell can be compared against third through ninth pixels in multiple further cells, to enrich the comparison data employed in the query data.
  • the third pixel (C) in the first cell can be compared against fourth through ninth pixels in the further cells.
  • a scene value associated with the first pixel in the first cell is compared against scene values associated with second pixels in two further cells - one to the left and one to the right.
  • a larger set of further cells can be employed.
  • eight further cells can be employed in this manner, i.e., the left-, right-, top- and bottom-adjoining cells, and also the four corner-adjoining cells.
  • the [1,2] pixel data is thus based on a total of nine comparisons, i.e., compared against the second pixel in the first cell, and second pixels in the eight adjoining cells. That is, the first (base) pixel in the first cell is compared against second (ordinate) pixels in each of a 3 x 3 tiling of cells having the first cell at its center.
  • in other embodiments a 5 x 5 tiling of cells centered on the first cell is employed; each pixel pair datum such as [1,2] is thus based on 25 comparisons. If high/low comparisons are employed, then each pixel pair datum can have a value ranging from -25 to +25. In many embodiments, each such datum is shifted by 25, to make the value non-negative.
  • if the base pixel for pixel pair [1,2] is associated with a scene value of 150, and the 25 ordinate pixels with which it is compared are associated with scene values between 40 and 60, then the [1,2] pair datum will accumulate to 50 (the 25 instances in which the base value exceeds the ordinate value sum to +25, which the shift of 25 brings to 50).
  • with the analog (difference-preserving) comparison, each pixel pair datum can have a large value dependent on the accumulated sum of scene value differences. For instance, in the just-given example, the [1,2] pixel pair datum will accumulate to about 2500 (since, in 25 instances, the base value exceeds the ordinate value by about 100).
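  • A sketch of accumulating a single pixel pair datum over such a tiling of replicated cells follows (the array layout and values are illustrative assumptions; edge handling is omitted):

```python
import numpy as np

def accumulate_pair_datum(img, base_rc, ord_rc, cell=3, reach=2, shift=True):
    """Accumulate the high/low [base, ordinate] pixel pair datum over a
    (2*reach+1) x (2*reach+1) tiling of replicated cells; reach=2 gives the
    5 x 5 tiling (25 comparisons) discussed above.

    img:      2-D array of scene values for a sensor with 'cell'-pitch tiling.
    base_rc:  (row, col) of the base pixel in the central cell.
    ord_rc:   (row, col) of the ordinate pixel in the central cell.
    """
    datum = 0
    br, bc = base_rc
    for dy in range(-reach, reach + 1):
        for dx in range(-reach, reach + 1):
            orow = ord_rc[0] + dy * cell     # ordinate's counterpart in the
            ocol = ord_rc[1] + dx * cell     # (dy, dx)-offset replicated cell
            datum += int(np.sign(img[br, bc] - img[orow, ocol]))
    return datum + ((2 * reach + 1) ** 2 if shift else 0)  # shift by 25

img = np.random.randint(0, 256, (15, 15))          # toy scene values
print(accumulate_pair_datum(img, (7, 7), (7, 8)))  # [1,2] datum in 0..50
```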
  • the two detailed comparisons, high/low and analog, are exemplary only. Many other comparison operations can be used. For example, the arithmetic differences between the base value and each of the ordinate values can be weighted in accordance with the spatial distance between the pixels being compared, with larger distances being weighted less. Many other arrangements will occur to the artisan given the present disclosure. Likewise, as previously stated, machine learning applied to large training sets of imagery can guide neural net implementations/weightings of these comparisons.
  • the scene values associated with the base and ordinate pixels of each pair can each be raw pixel values - either analog or digital. Or they can be processed values, such as data output by an image signal processor module that performs hot pixel correction or other adjustment on raw pixel data. Furthermore, superior color measurement output will be produced if each pixel has been ‘corrected’ by its own unique dark-median, as described above. Thus, any comparison of one pixel raw datum to another pixel’s raw datum will also involve each pixel’s dark-median correction values. Also, the individual gray-gains of individual pixels, or the type-class gray-gains (described above), can be used to ‘luminance level adjust’ the compared values prior to the comparison operation itself.
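  • For illustration, a minimal sketch of applying such per-pixel corrections prior to comparison follows; the leveling convention of dividing by the grey-gain, and all parameter values, are assumptions for illustration:

```python
import numpy as np

def corrected_scene_value(raw_dn, dark_median, grey_gain):
    """Apply the corrections described above before any comparison:
    subtract the pixel's own dark-median, then luminance-level adjust by
    its (or its type-class's) grey-gain. Dividing by the grey-gain, so
    that less-efficient pixel types are boosted to a common level, is one
    plausible convention; all values here are hypothetical placeholders.
    """
    return (raw_dn - dark_median) / grey_gain

base     = corrected_scene_value(150, dark_median=12, grey_gain=1.00)
ordinate = corrected_scene_value( 55, dark_median= 9, grey_gain=0.93)
pair_datum = np.sign(base - ordinate)   # high/low comparison on corrected values
```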
  • the scene value associated with a subject pixel can also be a mean or median value computed using all of the pixels of that same type within a neighborhood of 3 x 3 or 5 x 5 centered on the subject pixel. (In forming a mean or median, pixels that are remote from the subject pixel may be weighted less than pixels that are close.)
  • base pixels are associated with scene values of one type (e.g., mean) while ordinate pixels are associated with scene values of another type (e.g., raw).
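  • A sketch of one such neighborhood scene value follows, using a distance-based weighting that is an illustrative assumption (the text above specifies only that remote pixels may be weighted less than close ones):

```python
import numpy as np

def neighborhood_scene_value(img, type_mask, r, c, radius=2):
    """Weighted mean scene value over same-type pixels in a
    (2*radius+1) x (2*radius+1) neighborhood centered on (r, c),
    with a hypothetical 1/(1+distance) falloff for remote pixels.
    """
    acc, wsum = 0.0, 0.0
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] \
                    and type_mask[rr, cc] == type_mask[r, c]:
                w = 1.0 / (1.0 + np.hypot(dr, dc))
                acc += w * img[rr, cc]
                wsum += w
    return acc / wsum
```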
  • the foregoing discussion details a procedure for generating query data to determine color information for a single pixel within a cell of N pixels - namely a (the) central pixel in the cell.
  • the process is repeated.
  • the cell boundaries are shifted, re-framing the cell, to make this different pixel the central pixel.
  • the boundary of a repeatedly-tiled cell is arbitrary.
  • a Bayer cell can be regarded, scanning from top left and then down, as a grouping of Red/Green/Green/Blue. Or as Green/Red/Blue/Green. Or as Green/Blue/Red/Green. Or as Blue/Green/Green/Red.
  • the nine pixels of the illustrative Fig. 40 cell can be re-framed in nine ways, as shown in Fig. 47.
  • a different set of query data based on a differing set of comparison data, is produced for each of these framings, and is used to determine color information for pixels E, F, D, H, I, G, B, C and A.
  • the set of pixel pair data is ordered as follows: ⁇ [1,2], [1,3], [1,4], [1,5], [1,6], [1,7], [1,8], [1,9], [2,3], [2,4], [2,5], [2,6], [2,7], [2,8], [2,9], [3,4], [3,5], [3,6], [3,7], [3,8], [3,9], [4,5], [4,6], [4,7], [4,8], [4,9], [5,6], [5,7], [5,8], [5,9], [6,7], [6,8], [6,9], [7,8], [7,9], [8,9] ⁇
  • pixel 2 compared with pixel 1 gives no new information; it is simply the negative of pixel 1 compared with pixel 2.
  • however, if base and ordinate scene values are determined in different manners, the comparison between pixels 2 and 1 can yield results different than the comparison between pixels 1 and 2.
  • a vector of 72 elements may be used, based on comparisons between all possible ordered pixel pairs. However, such difference is not normally significant, so the smaller number of elements is typically used (i.e., 36) even if the base and ordinate scene values are not determined in the same manner.
  • the query data for a single pixel at the center of the framed cell may take the form of the 36-element vector of pixel pair data ordered as above.
  • Such a data structure will be recognized to comprise a multi-symbol code that expresses results of determining, between pairs of pixels, which are associated with larger scene values.
  • One way to generate the reference data is to employ the sensor to image color charts comprising patches of known colors (e.g., Gretag color charts) under known illumination (e.g., D65), and to perform the above-detailed comparisons on resulting pixel data to yield 36-D reference data vectors. That is, the reference comparison data is generated in the same manner as the query data, but the scene colors are known rather than unknown.
  • a given patch of reference scene color will produce various data vectors depending on the various random factors involved, including random variations in the patch color, random variations in illumination intensity, sensor shot noise, sensor read noise, photosensor sensitivity variations among the pixels, etc. Such perturbations serve to splay the vector representation of the known color into a distribution of data vectors.
  • the 36-D volume containing such vectors defines the space associated with the known color.
  • reference data vectors associated with known colors can be stored and used as a basis for comparison to 36-D query data associated with a subject pixel E capturing light of an unknown color.
  • the task becomes finding, in the reference data, the 36-D vector data that best-matches the query vector.
  • the known color associated with the best-matching reference vector is then assigned as the output color information for that pixel.
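  • A minimal sketch of this best-match lookup follows, assuming Euclidean distance as the (unspecified) match metric and using random stand-ins for real calibration data:

```python
import numpy as np

def assign_color(query_vec, ref_vecs, ref_colors):
    """Find the best-matching 36-D reference vector and return its known
    color label. ref_vecs is an (M, 36) array of reference vectors from
    known color patches; ref_colors is a length-M sequence of labels
    (e.g., CIE x,y pairs).
    """
    d = np.linalg.norm(ref_vecs - query_vec, axis=1)
    return ref_colors[int(np.argmin(d))]

# Toy usage with random stand-ins for real reference data:
refs = np.random.choice([-1, 0, 1], size=(1000, 36))
colors = [(np.random.rand(), np.random.rand()) for _ in range(1000)]
query = np.random.choice([-1, 0, 1], size=36)
print(assign_color(query, refs, colors))
```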
  • the reference vectors - labeled with the colors to which they correspond - can be used to train a convolutional neural network.
  • the parameters and weights of the network are iteratively adjusted during training, e.g., by gradient descent with back-propagation of error, to configure the network so as to respond to an input query vector corresponding to pixel E by providing output data indicating the color for that pixel. (Such parameters/weights can then be stored as reference data.)
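  • A sketch of such training follows, using PyTorch with a small fully-connected network as a stand-in for the convolutional network contemplated; layer sizes, epoch count and the placeholder data are illustrative assumptions:

```python
import torch
from torch import nn

# Small fully-connected stand-in for the network described above; all
# sizes here are illustrative assumptions.
net = nn.Sequential(nn.Linear(36, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(),
                    nn.Linear(64, 2))              # outputs, e.g., CIE x,y

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder training data: labeled reference vectors and known colors.
queries = torch.randint(-1, 2, (1000, 36)).float()
colors = torch.rand(1000, 2)

for epoch in range(10):                            # gradient-descent loop
    opt.zero_grad()
    loss = loss_fn(net(queries), colors)
    loss.backward()                                # back-propagation of error
    opt.step()
```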
  • the colors can be defined in a desired color space. Most commonly CIE x,y chromaticity coordinates are employed, but other color spaces - including sRGB, L*a*b*, hue angle (L*C*h), etc. - can be used.
  • Color charts provide only a limited number of known colors.
  • Another method of generating reference data is to employ trusted multi-spectral images.
  • One suitable set of multi-spectral images is the so-called CAVE data set, published by Columbia University. The set comprises 32 scenes, each represented by full spectral resolution 16-bit reflectance data from 400 nm to 700 nm at 10 nm steps (31 bands total). This set of data is available at www ⁇ dot>cs ⁇ dot>columbia ⁇ dot>edu/CAVE/databases/multispectral/ and also at corresponding web ⁇ dot>archive ⁇ dot>org pages.
  • This approach does not utilize the physical image sensor itself to sense a scene.
  • behavior of the image sensor can be modeled, e.g., by measuring the spectral transmittance function of its differently-filtered pixels, its spectral transmittance variation among filters of the same type, its shot noise, its read noise, its pixel amplitude variations, etc.
  • Such parameters characterizing the sensor behavior can be applied to the published imagery to produce a thousand or more sets of simulated pixel data as might be produced (and perturbed) by the image sensor from a given scene, in Monte Carlo fashion. Each such different frame of pixel data is analyzed to determine a 36-D vector associated with each “E” pixel in the frame.
  • each such pixel is known (in terms of the published amplitude at each of 31 spectral bands), and can be converted to the desired color space.
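  • A sketch of such Monte Carlo simulation for a single pixel follows; the spectral response and noise parameters are placeholders rather than measured sensor characteristics, and the published multi-spectral data itself is not included:

```python
import numpy as np

rng = np.random.default_rng()

def simulate_pixel_dn(reflectance31, response31, gain=1.0, read_sigma=2.0):
    """Simulate one pixel's DN from 31-band (400-700 nm, 10 nm steps) data.

    reflectance31: published spectral data for this scene point.
    response31:    spectral response of this pixel's filter type.
    Shot noise is modeled as Poisson on the accumulated signal, read noise
    as additive Gaussian; all parameter values are placeholders.
    """
    signal = gain * float(np.dot(reflectance31, response31))
    return rng.poisson(signal) + rng.normal(0.0, read_sigma)

# Monte Carlo: perturb the same scene point many times, splaying its
# vector representation into a distribution as described above.
reflectance = rng.random(31)
response = rng.random(31)
dns = [simulate_pixel_dn(reflectance, response) for _ in range(1000)]
```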
  • This reference data associating 36-D reference vectors with known colors, is then utilized in one of the manners detailed above, to output color information in response to input query data. It will be understood that the foregoing discussion has concerned assigning color information to a single pixel E in the cell. The reference data just-discussed is specific to that pixel E.
  • Luminance can be determined on a local neighborhood basis, such as average raw pixel value across a field of 5 x 5, 9 x 9, or 15 x 15 pixels, or a field of 3 x 3, or 5 x 5, or 10 x 10 pixel cells.
  • the first step is often to determine brightness of a region around the pixel, and then to select a set of reference data, or parameters/weights of a neural network, tailored to that brightness.
  • the training data can comprise triplets of information: the vector of pixel-pair data, the local brightness, and the known color.
  • the network is provided with the vector of query data and the measured local brightness as inputs, and outputs its estimation of the corresponding color information.
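  • A sketch of such brightness-conditioned selection follows; the 15 x 15 field matches one of the sizes mentioned above, while the bin edges and reference-set labels are hypothetical stand-ins:

```python
import numpy as np

def local_brightness(img, r, c, half=7):
    """Average raw value over a 15 x 15 field centered on (r, c)."""
    return float(np.mean(img[r-half:r+half+1, c-half:c+half+1]))

def select_reference_set(brightness, ref_sets, edges):
    """Pick the reference data (or network weights) trained for the
    brightness bin containing this measurement; the bin edges are
    hypothetical.
    """
    return ref_sets[int(np.digitize(brightness, edges))]

edges = [32, 96, 192]                          # hypothetical DN bin edges
ref_sets = ["very-low", "low", "mid", "high"]  # stand-ins for real data sets
img = np.random.randint(0, 256, (64, 64))
print(select_reference_set(local_brightness(img, 32, 32), ref_sets, edges))
```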
  • One approach to measuring these direct hue angles is to utilize cosine and sine functions operating on the hue angle, finding a hyperplane in 36-dimensional space that optimizes the fit between angles in that space and the x,y chromaticity hue angles of the CIE chromaticity space (or the a,b vectors of the L*a*b* color space, or other color spaces in which color is separated from luminance).
  • such a cell can be re-framed as a larger cell - one having a center pixel.
  • An example is the classic Bayer cell.
  • This cell can be re-framed into, e.g., 3 x 3 cells, as shown by the bold outlines in Fig. 48.
  • This pattern can thus be seen to be a tiling of four different 3 x 3 cells.
  • In one cell (the bolded cell at the upper left) there are five greens, two reds and two blues.
  • In another cell (to the right) there are four greens, four blues and one red.
  • in the third cell (the bolded cell at the lower left), there are four greens, four reds and one blue.
  • the fourth cell there are again five greens, two reds and two blues.
  • a cell can include two or more (and sometimes four or more) pixels of the same type. It will further be recognized that, although the cells are different, the component colors are the same in each.
  • a vector of 36 ⁇ -1, 0, +1 ⁇ elements can be formed, and used to assign a color to the center pixel of the cell.
  • the first pixel position is an R pixel, and serves as the base pixel against which the eight other pixels in the cell are compared as ordinates.
  • the second pixel position (i.e., the first ordinate) is a G pixel.
  • the first pixel is also compared to the G pixel nearest to the base pixel but in the adjoining cell to the left.
  • a comparison is also made to the G pixel nearest to the base pixel but in the adjoining cell to the right. (These G pixels are underlined.) This triples the richness of the [1,2] pixel pair data - extending it from a single comparison to three comparisons.
  • this first base pixel (R) can also be compared with the G pixel nearest to the base pixel but in the adjoining cell above the subject cell, and to the nearest G pixel in the adjoining cell below the subject cell. Both of these pixels are denoted by asterisks. This enriches the [1,2] pixel pair datum to reflect five comparisons rather than one.
  • the “nearest” pixel in the adjoining cell to the left/right/above/below is ambiguous, because two such pixels of the specified type are equidistant in the adjoining cell.
  • the upper of two equidistant pixels in the cell to the left, and the lower of two equidistant pixels in the cell to the right can be selected for comparison.
  • the left of two equidistant pixels in the cell above, and the right-most of two equidistant pixels in the cell below can be selected for comparison.
  • the first, second and third pixels are of first, second and third types, respectively (shown in enlarged letters R, G, R, respectively in Fig. 48).
  • the image sensor includes plural further cells around the first cell, each of which comprises pixels of types included in the first cell.
  • Such embodiment includes comparing the scene value associated with the first pixel in the first cell with a scene value associated with a pixel of the second type (G) in one of the further cells, and updating the first-second ([1,2]) pixel pair datum based on a result of this comparison. This act is repeated one or more additional times with pixels of the second type in other of the further cells.
  • This embodiment can further include comparing the scene value associated with the first pixel in the first cell with a scene value associated with a pixel of the third type (R) in one of the further cells, and updating the first-third ([1,3]) pixel pair datum based on a result of this comparison. Again, this act can be repeated one or more additional times with pixels of the third type in other of the further cells.
  • Query data is then formed that represents, in part, each of the [1,2] and [1,3] pixel pair data.
  • each of the 36 pixel pair data can be enriched by performing other comparisons outside the subject cell.
  • the query data is not resolved into color data by reference to one of nine sets of reference data, as in the earlier case (the nine sets corresponding to the nine re-framings of the cell to place each of the pixels in the center, per Fig. 47).
  • instead, one of 36 sets of reference data is used (disregarding further sets of reference data to account for brightness variations). That is, there are four different cell arrangements, and there are nine re-framings unique to each.
  • the processing detailed herein can be performed by a general purpose microprocessor, GPU, or other computational unit of a computer. More commonly, however, some or all of the processing is performed by a specialized image signal processor (ISP).
  • the ISP circuitry (comprising an array of transistor logic gates) can be integrated on the same substrate - usually silicon - as the photosensors of the image sensor, or the ISP circuitry can be provided on a companion chip. In some embodiments, the ISP processing is distributed: some on the image sensor chip, and other on a companion chip.
  • the 36 pixel pair data represented by the query data in certain of the detailed embodiments is exemplary only; more or fewer pixel pair data can naturally be used.
  • two pixel pair data are used. For instance, scene values associated with one pair of pixels in the first cell are compared, and a result is employed in one pixel pair datum. Scene values associated with a second pair of pixels in the first cell are compared, and a result is employed in a second pixel pair datum.
  • each of the pixel pair data can be initialized to a value such as zero, or 25.
  • the comparison data then serves to update such values.
  • transparent denotes a spectral transmission function of greater than 90%, and preferably greater than 95% or 98%, over a spectrum of interest. If an image sensor produces RGB- or XYZ- based output, the spectrum of interest is the spectrum of human vision, taken here to be 400-700 nm.
  • Implementation can additionally, or alternatively, employ dedicated electronic circuitry that has been custom-designed and manufactured to perform some or all of the component acts, as an application specific integrated circuit (ASIC).

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The noise performance of image sensors is improved through the use of inter-pixel comparisons, which can number in the several hundreds, all contributing information concerning a given pixel. This enables accurate determination of image chromaticity, even at extremely low signal-to-noise ratios. In other embodiments, filters with non-classical spectral transmission functions are employed to reduce metamerism, enabling scene information not visible to the human eye to be discerned. Other embodiments involve fabricating a sparse array of transparent pedestals on an image photosensor array. When a color photoresist is thereafter applied, these pedestals cause thickness variations in the resulting photoresist layer, giving rise to pixels having different spectral responses despite use of the same color photoresist. Still other embodiments concern image sensor arrangements enabling generation of red, green, blue and NIR output data using only four classical color photoresists (red, green, blue, cyan, magenta, yellow). A great number of other features and arrangements are also detailed.
PCT/US2023/073352 2023-01-05 2023-09-01 Capteurs d'image en couleur, procédés et systèmes WO2024147826A1 (fr)

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US202363478527P 2023-01-05 2023-01-05
US63/478,527 2023-01-05
US202363478728P 2023-01-06 2023-01-06
US63/478,728 2023-01-06
US202363479572P 2023-01-12 2023-01-12
US63/479,572 2023-01-12
US202363481390P 2023-01-25 2023-01-25
US63/481,390 2023-01-25
US202363487941P 2023-03-02 2023-03-02
US63/487,941 2023-03-02
US202363500089P 2023-05-04 2023-05-04
US63/500,089 2023-05-04
US202363515577P 2023-07-25 2023-07-25
US63/515,577 2023-07-25

Publications (1)

Publication Number Publication Date
WO2024147826A1 true WO2024147826A1 (fr) 2024-07-11

Family

ID=88204083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/073352 WO2024147826A1 (fr) 2023-01-05 2023-09-01 Capteurs d'image en couleur, procédés et systèmes

Country Status (1)

Country Link
WO (1) WO2024147826A1 (fr)

Patent Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3971065A (en) 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
US6638668B2 (en) 2000-05-12 2003-10-28 Ocean Optics, Inc. Method for making monolithic patterned dichroic filter detector arrays for spectroscopic imaging
US20050153219A1 (en) 2004-01-12 2005-07-14 Ocean Optics, Inc. Patterned coated dichroic filter
US20060045333A1 (en) * 2004-08-30 2006-03-02 Via Technologies, Inc. Method and apparatus for dynamically detecting pixel values
JP2006098684A (ja) 2004-09-29 2006-04-13 Fujifilm Electronic Materials Co Ltd カラーフィルタ及び固体撮像素子
US7763401B2 (en) 2005-05-11 2010-07-27 Fujifilm Corporation Colorant-containing curable negative-type composition, color filter using the composition, and method of manufacturing the same
US20070230774A1 (en) 2006-03-31 2007-10-04 Sony Corporation Identifying optimal colors for calibration and color filter array design
US7914957B2 (en) 2006-08-23 2011-03-29 Fujifilm Corporation Production method for color filter
US8603708B2 (en) 2008-09-30 2013-12-10 Fujifilm Corporation Dye-containing negative curable composition, color filter using same, method of producing color filter, and solid-state imaging device
US20100118172A1 (en) * 2008-11-13 2010-05-13 Mccarten John P Image sensors having gratings for color separation
US20110217636A1 (en) 2010-02-26 2011-09-08 Fujifilm Corporation Colored curable composition, color filter and method of producing color filter, solid-state image sensor and liquid crystal display device
US8314866B2 (en) 2010-04-06 2012-11-20 Omnivision Technologies, Inc. Imager with variable area color filter array and pixel elements
US8853717B2 (en) 2011-06-30 2014-10-07 Dai Nippon Printing Co., Ltd. Dye dispersion liquid, photosensitive resin composition for color filters, color filter, liquid crystal display device and organic light emitting display device
US9632222B2 (en) 2011-08-31 2017-04-25 Fujifilm Corporation Method for manufacturing a color filter, color filter and solid-state imaging device
US20140349101A1 (en) 2012-03-21 2014-11-27 Fujifilm Corporation Colored radiation-sensitive composition, colored cured film, color filter, pattern forming method, color filter production method, solid-state image sensor, and image display device
US20150116554A1 (en) 2012-07-06 2015-04-30 Fujifilm Corporation Color imaging element and imaging device
US20150346404A1 (en) 2013-02-14 2015-12-03 Fujifilm Corporation Infrared ray absorbing composition or infrared ray absorbing composition kit, infrared ray cut filter using the same, method for producing the infrared ray cut filter, camera module, and method for producing the camera module
US20150185380A1 (en) 2013-12-27 2015-07-02 Samsung Electronics Co., Ltd. Color Filter Arrays, And Image Sensors And Display Devices Including Color Filter Arrays
US9715642B2 (en) 2014-08-29 2017-07-25 Google Inc. Processing images using deep neural networks
WO2016183743A1 (fr) * 2015-05-15 2016-11-24 SZ DJI Technology Co., Ltd. Système et procédé de prise en charge d'un débruitage d'image basé sur le degré de différentiation de blocs voisins
US20170195586A1 (en) 2015-12-23 2017-07-06 Imec Vzw User device
WO2018056704A1 (fr) 2016-09-20 2018-03-29 황현준 Système de commande de distributeur automatique de poupées permettant l'impression d'objet photo en utilisant une interface
US20190332008A1 (en) 2017-03-24 2019-10-31 Fujifilm Corporation Photosensitive coloring composition, cured film, color filter, solid-state imaging element, and image display device
US20210079210A1 (en) 2018-08-15 2021-03-18 Fujifilm Corporation Composition, film, optical filter, laminate, solid-state imaging element, image display device, and infrared sensor
US20200344430A1 (en) * 2019-04-23 2020-10-29 Coherent AI LLC High dynamic range optical sensing device employing broadband optical filters integrated with light intensity detectors
US20220043344A1 (en) 2019-05-24 2022-02-10 Fujifilm Corporation Photosensitive resin composition, cured film, color filter, solid-state imaging element and image display device
US20220244104A1 (en) 2021-01-29 2022-08-04 Spectricity Spectral sensor module
CN113676628A (zh) * 2021-08-09 2021-11-19 Oppo广东移动通信有限公司 多光谱传感器、成像装置和图像处理方法

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
ARAD, B. ET AL.: "Filter selection for hyperspectral estimation.", IN PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2017, pages 3153 - 3161
DATTA GOURAV ET AL: "Enabling ISPless Low-Power Computer Vision", 2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), IEEE, 2 January 2023 (2023-01-02), pages 2429 - 2438, XP034290856, DOI: 10.1109/WACV56688.2023.00246 *
GEELEN BERT ET AL: "A compact snapshot multispectral imager with a monolithically integrated per-pixel filter mosaic", PROCEEDINGS OF SPIE, IEEE, US, vol. 8974, 7 March 2014 (2014-03-07), pages 89740L - 89740L, XP060034605, ISBN: 978-1-62841-730-2, DOI: 10.1117/12.2037607 *
HARDEBERG, J.Y.: "Filter selection for multispectral color image acquisition.", JOURNAL OF IMAGING SCIENCE AND TECHNOLOGY, vol. 48, no. 2, 2004, pages 105 - 110
IMAI, F.H. ET AL.: "Digital camera filter design for colorimetric and spectral accuracy.", IN PROC. OF THIRD INTERNATIONAL CONFERENCE ON MULTISPECTRAL COLOR SCIENCE, July 2001 (2001-07-01), pages 13 - 16
JANG HOUK ET AL: "In-sensor optoelectronic computing using electrostatically doped silicon", NATURE ELECTRONICS, vol. 5, no. 8, 1 August 2022 (2022-08-01), pages 519 - 525, XP093102714, DOI: 10.1038/s41928-022-00819-6 *
LI, S.X.: "Filter selection for optimizing the spectral sensitivity of broadband multispectral cameras based on maximum linear independence.", SENSORS, vol. 18, no. 5, 2018, pages 1455
PARK ET AL.: "Visible and near-infrared image separation from CMYG color filter array based sensor", IEEE INTERNATIONAL ELMAR SYMPOSIUM, 2016, pages 209 - 212, XP032993760, DOI: 10.1109/ELMAR.2016.7731788
SADEGHIPOOR ET AL.: "A novel compressive sensing approach to simultaneously acquire color and near-infrared images on a single sensor", IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, 2013, pages 1646 - 1650, XP032507936, DOI: 10.1109/ICASSP.2013.6637931
SIPPEL, F. ET AL.: "Optimal Filter Selection for Multispectral Object Classification Using Fast Binary Search", 2022 IEEE 24TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), pages 1 - 5
TERANAKA ET AL.: "Single-sensor RGB and NIR image acquisition: toward optimal performance by taking account of CFA pattern, demosaicing, and color correction", ELECTRONIC IMAGING, vol. 18, 2016, pages 1 - 6, XP055712031, DOI: 10.2352/ISSN.2470-1173.2016.18.DPMI-256
YAKO, M. ET AL.: "Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry-Perot filters.", NATURE PHOTONICS, 23 January 2023 (2023-01-23), pages 1 - 6

Similar Documents

Publication Publication Date Title
CN108419061B (zh) 基于多光谱的图像融合设备、方法及图像传感器
Jiang et al. What is the space of spectral sensitivity functions for digital color cameras?
Lukac et al. Color filter arrays: Design and performance analysis
KR101442313B1 (ko) 카메라 센서 교정
CN101371591B (zh) 具有改进感光度的图像传感器
CN106575035B (zh) 用于光场成像的系统和方法
CN102742279B (zh) 滤色器阵列图像的迭代去噪
CN101933321A (zh) 用于估测场景光源的图像传感器装置及方法
CN102450020A (zh) 四通道滤色器阵列内插
CN101233763A (zh) 处理彩色和全色像素
JP4328424B2 (ja) 画像変換方法
CN102415099A (zh) 空间变化的光谱响应校准数据
Parmar et al. Selection of optimal spectral sensitivity functions for color filter arrays
Sajadi et al. Switchable primaries using shiftable layers of color filter arrays
CN112005545B (zh) 用于重建由覆盖有滤色器马赛克的传感器获取的彩色图像的方法
JP4617870B2 (ja) 撮像装置および方法、並びにプログラム
Glatt et al. Beyond RGB: A Real World Dataset for Multispectral Imaging in Mobile Devices
Jahanirad et al. An evolution of image source camera attribution approaches
Mihoubi Snapshot multispectral image demosaicing and classification
WO2024147826A1 (fr) Capteurs d'image en couleur, procédés et systèmes
Prasad Strategies for resolving camera metamers using 3+ 1 channel
US6970608B1 (en) Method for obtaining high-resolution performance from a single-chip color image sensor
Ramanath Interpolation methods for the bayer color array
Gong et al. Optimal noise-aware imaging with switchable prefilters
Berns et al. Modifications of a sinarback 54 digital camera for spectral and high-accuracy colorimetric imaging: simulations and experiments

Legal Events

Date Code Title Description
DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777464

Country of ref document: EP

Kind code of ref document: A1