WO2023015425A1 - Pixel array, image sensor, and electronic device without demosaicing and methods of operation thereof - Google Patents


Info

Publication number
WO2023015425A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
image unit
pixel array
color
image
Application number
PCT/CN2021/111653
Other languages
French (fr)
Inventor
Makoto Monoi
Original Assignee
Huawei Technologies Co.,Ltd.
Application filed by Huawei Technologies Co.,Ltd. filed Critical Huawei Technologies Co.,Ltd.
Priority to PCT/CN2021/111653 priority Critical patent/WO2023015425A1/en
Priority to CN202180101102.0A priority patent/CN117751576A/en
Publication of WO2023015425A1 publication Critical patent/WO2023015425A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70 SSIS architectures; Circuits associated therewith
    • H04N25/703 SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/704 Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843 Demosaicing, e.g. interpolating colour pixel values
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels

Definitions

  • Embodiments of this application relate to image sensors, and in particular to CMOS image sensors.
  • Image sensors are commonly used in electronic devices such as digital cameras, video cameras, webcams, mobile phones, and computers in applications that involve capturing images.
  • an image sensor has an array of cells (pixels) arranged in rows and columns.
  • Each cell contains a photosensitive element (also referred to as a sensor element; e.g., a photodiode) that generates an electric charge in response to incident light.
  • An on-chip lens (OCL; also referred to as an on-chip microlens) may be provided for each cell.
  • the generated electric charge is accumulated in a charge accumulation node (a capacitor-like structure often called a floating diffusion node, sometimes abbreviated FD herein) associated with the cell.
  • An output electric signal corresponding to the light incident on the cell is generated from the electric charge accumulated in the floating diffusion node.
  • Two common types of image sensors are CCD (charge-coupled device) image sensors and CMOS (complementary metal-oxide-semiconductor) image sensors.
  • In a CCD image sensor, an electric charge generated at a pixel in response to light is stored in a capacitor.
  • the capacitors in one line are controlled to transfer their charge to their neighbors at once in a "bucket brigade" manner, and the capacitor at the end of the line outputs its charge to an amplifier.
  • in a CMOS image sensor, each pixel in the array has a photodiode and a switch (e.g., a transistor).
  • CMOS image sensors can be made inexpensive as compared with CCD image sensors, because CMOS image sensors, complete with control circuitry, can be manufactured in an ordinary semiconductor manufacturing process.
  • a CMOS image sensor may include a pixel array and a readout circuit for taking out image signals from pixels.
  • the readout circuit includes a row control circuit, a column control circuit, and a control circuit. As noted above, in a CMOS image sensor, by controlling the switches in the array, a signal from each pixel can be accessed directly.
  • An image signal corresponding to a pixel (cell) is read out by the readout circuit by rows and columns.
  • a particular pixel row in the array may be selected by the row control circuit, and image signals generated by the pixels in that row are read out column by column along column lines by the column control circuit.
  • An analog-to-digital conversion (ADC) circuit may be provided to convert the signals from the pixels to digital values.
  • a color filter array may be provided.
  • the color filter array includes color filter elements over the pixels of the pixel array.
  • the color filter elements may include red, green, and blue color filter elements arranged in a so-called Bayer pattern, but other colors and/or other arrangement patterns may also be used.
  • providing the missing color values by interpolation is referred to as demosaicing (sometimes abbreviated DM herein).
  • demosaicing through interpolation for the Bayer pattern is subject to several disadvantages including low resolution, high power consumption, and color artifacts.
  • the present application proposes a pixel array configuration and pixel circuitry that do not require interpolation between image units.
  • a pixel array for an image sensor comprising a plurality of image units.
  • Each image unit comprises a plurality of pixels for color components of a predetermined color space; and each pixel is configured to detect at least one color component of the color components of the color space.
  • an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit is provided as an output from the image unit.
  • a center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit.
  • interpolation between image units is not required. This results in lower computational complexity as compared with techniques that involve interpolation between image units.
  • the pixel array according to this aspect provides a resolution higher than techniques in which an image unit is configured to detect only one color component. Moreover, color artifacts due to the arrangement of color filters of some known pixel array may be avoided.
  • electric charges generated by photosensitive elements (e.g., photodiodes) of pixels in each image unit are aggregated in a physical process.
  • electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
  • electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
  • electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
  • arithmetic operations for aggregating contributions from pixels for a color component among the plurality of pixels included in each image unit may be reduced or eliminated.
  • photosensitive elements (e.g., photodiodes) of pixels for different colors in each image unit are controlled to output electric charges at different times.
  • This implementation allows taking out charges for each color component separately even if a charge storage structure is shared by pixels for different colors.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and the four floating diffusion nodes included in an image unit being electrically coupled.
  • This implementation allows a relatively simple scheme of sharing floating diffusion nodes, while allowing aggregation of charges contributed from the pixels in an image unit in a physical process in which arithmetic operations are not required.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and two of the four floating diffusion nodes included in an image unit being electrically coupled, whereby the 16 pixels included in each image unit form two groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in respective groups of eight pixels are aggregated and are provided as an output from the image unit.
  • This implementation provides an alternative scheme of sharing floating diffusion nodes.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and the four floating diffusion nodes being electrically coupled in groups of two floating diffusion nodes, thereby forming groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are summed and are provided as an output from the image unit.
  • This implementation provides another alternative scheme of sharing floating diffusion nodes.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are summed/aggregated and are provided as an output from the image unit.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in two adjacent groups are summed/aggregated and an output from the image unit is derived by interpolation of such summed/aggregated values.
  • the ninth and tenth implementations provide two modes of yet another alternative scheme of sharing floating diffusion nodes.
  • interpolation is employed to derive a final output from the image unit.
  • the color space is a color space comprising three colors.
  • the color space is a color space comprising three colors A, B, and C, each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
  • the color space is an RGB color space comprising red (R), green (G), and blue (B), each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
  • the color space is a YRB color space comprising yellow (Y), red (R), and blue (B), each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
  • the color space is a color space comprising four colors.
  • At least one on-chip microlens covers more than one pixel of the same color.
  • At least one on-chip microlens covers an area comprising two-by-two pixels of the same color; and/or at least one on-chip microlens covers an area comprising two adjacent pixels of the same color.
  • the sixteenth and seventeenth implementations are advantageous in providing phase detection (for auto focusing) with pairs of pixels in the image sensor.
  • an image sensor comprising: the pixel array according to the first aspect per se or any of the implementations of the first aspect of the present disclosure, and a readout circuit configured to read out signals from image units of the pixel array.
  • the readout circuit comprises: a row control circuit configured to select rows of image units of the pixel array; a column control circuit configured to read out, by column-by-column control, signals from each image unit in a row selected by the row control circuit; an analog-to-digital converter for converting signals from each image unit to a digital signal; and a control circuit for controlling the readout operation of the readout circuit.
  • an electronic device comprising the image sensor according to the second aspect per se or the first implementation of the second aspect of the present disclosure; a lens mechanism configured to direct incident light to the image sensor; and an autofocusing mechanism for the lens mechanism.
  • the autofocusing mechanism is configured to perform phase difference detection autofocusing based on a pair of profiles obtained from a plurality of pairs of pixels.
  • the two pixels of each pair of the plurality of pairs of pixels are covered by the same on-chip microlens.
  • a method of operation of an image sensor comprising a pixel array comprising a plurality of image units, wherein each image unit comprises a plurality of pixels for color components of a predetermined color space; wherein each pixel is configured to detect at least one color component of the color components of the color space; wherein a center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit, wherein the method comprises: for each color component of the color space, providing an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit as an output from the image unit.
  • electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a physical process.
  • electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
  • electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
  • electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
  • photosensitive elements of pixels for different colors in each image unit are controlled to output electric charges at different times.
  • FIG. 1 illustrates an electronic device having an image sensor including a pixel array
  • FIG. 2 illustrates a color filter arrangement according to the Bayer pattern
  • FIG. 3 illustrates interpolation for providing missing color values
  • FIG. 4 illustrates color artifacts for a color zone plate (CZP) that arise due to demosaicing with the Bayer pattern;
  • FIG. 5 illustrates a color filter arrangement according to an embodiment of the present application
  • FIG. 6 illustrates how color values for the pixels can be determined through averaging within each image unit without interpolation between image units, according to an embodiment of the present application
  • FIG. 7 illustrates reduced color artifacts for a color zone plate (CZP) according to an embodiment of the present application
  • FIG. 8 illustrates sharing of floating diffusion nodes (FDs) for the color filter arrangement illustrated in FIG. 5 according to an embodiment of the present application
  • FIG. 9 illustrates how color values for image units are determined without interpolation between image units for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application
  • FIG. 10 illustrates color artifacts for a color zone plate (CZP) for various color filter arrangements according to some embodiments of the present application
  • FIG. 11 illustrates exemplary color filter arrangements with four colors according to some embodiments of the present application.
  • FIG. 12 illustrates on-chip lens (OCL) patterns for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application
  • FIG. 13 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application
  • FIG. 14 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application
  • FIG. 15 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application.
  • FIG. 16 illustrates how color values for image units are determined in two-step binning for a color filter arrangement with four colors according to an embodiment of the present application.
  • Fig. 1 is a schematic diagram of an electronic device 100 that includes an image sensor 105.
  • the image sensor 105 may include a pixel array 110 and a readout circuit 115 for taking out image signals from pixels.
  • the readout circuit 115 includes a row control circuit 120, a column control circuit 130, and a control circuit 150.
  • An image signal corresponding to a pixel (cell) is generated by a photosensitive element (also referred to as a sensor element; e.g., a photodiode) of a pixel and is read out by the readout circuit by rows and columns.
  • a photosensitive element is not limited to a photodiode.
  • a photoconductor film made of organic material can be used as a photosensitive element.
  • a particular pixel row in the array may be selected by the row control circuit 120, and image signals generated by the pixels in that row are read out column by column along column lines by the column control circuit 130.
  • An analog-to-digital conversion (ADC) circuit may be provided to convert image signals from pixels to digital values.
  • a color filter array may be provided.
  • the color filter array includes color filter elements over at least one of the pixels of the pixel array.
  • the color filter elements typically include red, green, and blue color filter elements.
  • a red color filter element passes red light, and thus, the pixel behind the red filter element (sometimes referred to as a red pixel) responds to (and thus detects) red light.
  • the pixel behind a green filter element (sometimes referred to as a green pixel) detects green light; and
  • the pixel behind a blue filter element (sometimes referred to as a blue pixel) detects blue light.
  • a broadband filter element that passes two or more colors may be used.
  • a Bayer color filter pattern (also called a Bayer mosaic pattern, a Bayer pattern, or the like) is one typical arrangement of color filter elements in a color filter array, and includes repeating units of 2-by-2 pixels, in which two pixels in the diagonal positions are green, and one of the remaining two pixels is red and the remaining one is blue.
  • the term Bayer pattern may also refer to the pattern of a unit including 2-by-2 pixels.
  • Fig. 2 illustrates such a Bayer color filter pattern.
  • the Bayer pattern employs twice as many green pixels as the red and blue pixels. This is to provide higher precision for green, to which human eyes are more sensitive than to red and blue.
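As an illustration of the repeating unit just described, the Bayer mosaic can be tiled programmatically. The following minimal Python sketch is ours, not part of the patent; the function name is an assumption used only for illustration.

```python
def bayer_mask(height, width):
    """Tile the 2-by-2 Bayer unit (greens on one diagonal, red and
    blue on the other) over a height x width pixel grid."""
    unit = [['G', 'R'],
            ['B', 'G']]
    return [[unit[r % 2][c % 2] for c in range(width)] for r in range(height)]

# Any 2-by-2 unit contains two green pixels, one red, and one blue,
# so greens are twice as numerous as reds or blues overall.
mask = bayer_mask(4, 4)
```

Counting the entries of `mask` confirms the 2:1:1 ratio of green to red to blue that the text attributes to the Bayer pattern.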
  • a pixel with a red color filter element provides only a red sensor signal and cannot provide a green or a blue sensor signal.
  • a pixel with a green color filter element provides only a green sensor signal
  • a pixel with a blue color filter element provides only a blue sensor signal.
  • a demosaicing algorithm is employed to provide green and blue sensor signals for a red pixel (a pixel with a red color filter element) ; provide red and blue sensor signals for a green pixel; and provide red and green sensor signals for a blue pixel.
  • Fig. 3 is a schematic diagram illustrating how pixel values of each color are interpolated to provide pixel values for which sensor signals of that color are missing.
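To make the interpolation of Fig. 3 concrete, the following Python sketch (ours, not the patent's; a real demosaicer handles image borders and uses more elaborate kernels) estimates a missing color value as the average of the same-color pixels in the surrounding 3-by-3 window:

```python
def interpolate_missing(mask, values, r, c, color):
    """Estimate the `color` value at pixel (r, c) by averaging the
    pixels of that color within the 3x3 neighborhood (a crude
    bilinear-style demosaicing step)."""
    h, w = len(mask), len(mask[0])
    samples = [values[r + dr][c + dc]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if 0 <= r + dr < h and 0 <= c + dc < w
               and mask[r + dr][c + dc] == color]
    return sum(samples) / len(samples)

# Tiny Bayer-patterned example: estimate red and blue at a green pixel.
mask = [['G', 'R', 'G', 'R'],
        ['B', 'G', 'B', 'G'],
        ['G', 'R', 'G', 'R'],
        ['B', 'G', 'B', 'G']]
values = [[4 * r + c for c in range(4)] for r in range(4)]
red_at_11 = interpolate_missing(mask, values, 1, 1, 'R')   # averages (0,1) and (2,1)
```

Such a window-based average is exactly the kind of per-pixel arithmetic that the pixel array proposed in this application avoids.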
  • demosaicing through interpolation in a Bayer pattern is subject to several disadvantages.
  • color artifacts may occur at an edge region in which color changes abruptly.
  • Fig. 4 illustrates a simulated result of demosaicing of an image of a well-known color zone plate (CZP) obtained with an image sensor with the Bayer pattern.
  • a CZP is a concentric circular pattern, in which the gray level varies according to the sine function with frequencies increasing radially from the center. Components of high spatial frequencies (small pitches) with respect to the resolution of the screen yield circular patterns off the center by aliasing.
  • Since the CZP is a gray-scale pattern, color artifacts give rise to false colors in those circles resulting from aliasing. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination.) This is because there are twice as many green pixels as red or blue pixels, and thus the green pixels have a different pitch (spatial frequency) from that of the red or blue pixels.
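A CZP of the kind described can be synthesized in a few lines. This Python sketch (ours) produces a gray-level zone plate whose intensity follows sin(k·r²), so the local spatial frequency grows with the radius r, exactly the property that provokes aliasing at the sensor's resolution limit:

```python
import math

def zone_plate(size, k=0.25):
    """Gray-level circular zone plate in [0, 1]: concentric rings whose
    pitch shrinks (spatial frequency rises) away from the center."""
    center = (size - 1) / 2.0
    return [[0.5 + 0.5 * math.sin(k * ((x - center) ** 2 + (y - center) ** 2))
             for x in range(size)]
            for y in range(size)]

img = zone_plate(9)   # 9x9 sample; real test charts are much larger
```

The constant k here is arbitrary; a real test chart is scaled so that the highest ring frequencies exceed the pixel pitch of the sensor under test.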
  • In the Bayer pattern, an image unit corresponds to a pixel.
  • In embodiments of the present application, an image unit corresponds to multiple pixels, as described in more detail below.
  • Fig. 5 illustrates a color filter array according to an embodiment of the present application. It can be seen that each image unit of the Bayer pattern as illustrated in Fig. 2 is subdivided into 16 (or, in general, multiple) pixels, where a photosensitive element such as a photodiode may be provided for each pixel. It should be noted that while the term pixel originally refers to a unit in an image ("picture element"), it may also refer to a corresponding sensor area. While an image unit corresponds to a pixel one-to-one in the Bayer pattern, an area corresponding to each photodiode is called a pixel in the context of the present application. In the example of Fig. 5, each image unit corresponds to 16 pixels. As described below, the values of the pixels in an image unit are aggregated to provide one value for each color component. (Hence, an "image unit" remains a unit in the image, though it may also refer to a corresponding sensor area covering multiple pixels.)
  • Alternatively, an area corresponding to four pixels may be used as a unit of the Bayer pattern.
  • Such a smaller Bayer pattern unit may enhance resolution and thus reduce artifacts. But it will also increase the burden of interpolation processing.
  • the present application proposes a different approach that does not rely on interpolation between image units.
  • Fig. 6 schematically illustrates how the color filter array of Fig. 5 can be used to provide color values for the whole set of pixels of each color of RGB without interpolation.
  • the value for red for each image unit may be obtained by averaging the values from red pixels in the image unit.
  • the color values may be obtained by averaging the values from the pixels in each image unit. (It should be noted that since the number of pixels in an image unit is constant (16 in the present example) , the average of the values from pixels in an image unit is equivalent to the sum of the values from pixels in the image unit up to a scale factor. )
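The averaging within one image unit can be sketched as follows (Python, ours). The 4-by-4 layout below is a hypothetical stand-in with the right color counts (four red, eight green, four blue), not the actual arrangement of Fig. 5, which is not reproduced in this text:

```python
def unit_color_values(mask, values):
    """Average the pixel values of each color inside ONE image unit.
    No pixels from neighboring image units are used, i.e. there is
    no interpolation between image units."""
    sums, counts = {}, {}
    for mask_row, value_row in zip(mask, values):
        for color, v in zip(mask_row, value_row):
            sums[color] = sums.get(color, 0.0) + v
            counts[color] = counts.get(color, 0) + 1
    return {color: sums[color] / counts[color] for color in sums}

# Hypothetical 4x4 unit layout (4 R, 8 G, 4 B), not the patent's Fig. 5:
layout = [['G', 'R', 'B', 'G'],
          ['B', 'G', 'G', 'R'],
          ['R', 'G', 'G', 'B'],
          ['G', 'B', 'R', 'G']]
```

Because the divisor per color is constant across image units, the average and the sum differ only by a fixed scale factor, as the note above observes.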
  • Since every color is detected in each image unit, the resolution is higher than that for the Bayer pattern of Fig. 2, in which only one color is detected in each image unit.
  • with the Bayer pattern, the intensity of red light is detected in only four of every 16 image units.
  • with this embodiment, red light intensity is detected in every one of the 16 image units. Further, processing required for interpolation between image units is eliminated. Moreover, there are no color artifacts due to interpolation between image units.
  • Fig. 7 illustrates a simulated image of a well-known color zone plate (CZP) obtained with an image sensor with the color filter arrangement of this embodiment. Compared with Fig. 4, there are few or no color artifacts. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination.)
  • a pixel array according to this embodiment may be summarized with generic terms as follows:
  • a pixel array for an image sensor comprising a plurality of image units
  • each image unit comprises a plurality of pixels for color components of a predetermined color space
  • each pixel is configured to detect at least one color component of the color components of the color space
  • (i) a center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or
  • (ii) the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit.
  • a floating diffusion node is a node (to be specific, a semiconductor region with doped impurities) for storing electric charges generated by a photodiode, and is also referred to as a charge storage node or a charge accumulation node.
  • charges from pixels may be added together without arithmetic operations. (As noted above, addition is equivalent to averaging up to scaling. )
  • Fig. 8 illustrates an exemplary scheme for sharing floating diffusion nodes.
  • each group of four pixels among the 16 pixels in an image unit shares a floating diffusion node (illustrated with a black circle), and the four floating diffusion nodes in an image unit are electrically connected. This results in summing (averaging) of pixel values in an image unit.
  • Fig. 9 illustrates how signals (charges) from pixels for each color (i.e., four pixels for each of red and blue or eight pixels for green) in an image unit are aggregated (this process is referred to as binning) to obtain a color value for the image unit.
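The binning of Fig. 9 can be modeled as follows (Python sketch, ours). With all four floating diffusion nodes of a unit coupled, only the transfer gates of one color's pixels are opened at a time, so the shared node physically accumulates just that color's charges; this corresponds to the implementation in which pixels for different colors output electric charges at different times. The 4-by-4 layout is again a hypothetical stand-in, not the actual Fig. 5 arrangement:

```python
def read_color(charges, mask, color):
    """Charge accumulated on the coupled floating diffusion nodes when
    only the transfer gates of `color` pixels are opened: a physical
    sum, requiring no arithmetic in the readout pipeline."""
    return sum(q for charge_row, mask_row in zip(charges, mask)
                 for q, m in zip(charge_row, mask_row) if m == color)

# Hypothetical 4x4 unit layout (4 R, 8 G, 4 B), not the patent's Fig. 5:
layout = [['G', 'R', 'B', 'G'],
          ['B', 'G', 'G', 'R'],
          ['R', 'G', 'G', 'B'],
          ['G', 'B', 'R', 'G']]
charges = [[1.0] * 4 for _ in range(4)]   # uniform illumination

# The three colors are read out at three different times, one call each.
```

Under uniform illumination the green readout is twice the red or blue readout, reflecting the 8:4:4 pixel counts in the unit.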
  • the pixels of the same color in an image unit lie (in a symmetric manner) on lines extending in the four directions of two perpendicular straight lines intersecting at the center of the image unit.
  • the color filter array as illustrated in Fig. 5 or Fig. 6 is based on principle (ii) above; this can be seen from the cross mark (i.e., crossing line segments) shown in Fig. 6.
  • Fig. 10 illustrates various possible arrangements for the color filters in an image unit.
  • YRB is used, which includes yellow instead of green as in RGB.
  • the drawings in (c)-(f) show grouping of two or four color filters (by the shape of the color filters forming a half- or quarter-segment of a circle or an ellipse instead of a square). Such grouping is described in more detail below.
  • the image unit (shown with a thick line) has an irregular shape, which is adopted to use pixels in pairs.
  • Fig. 10 also illustrates a simulated image of a circular zone plate (CZP) (left) and its emphasized color component (right).
  • the degree of reduction of color artifacts varies.
  • the inventor has found that the case of (a) , which is similar to the arrangement in Fig. 5 and Fig. 6, is the most advantageous. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination. )
  • Fig. 11 illustrates exemplary color filter arrangements when four colors are used instead of three colors as in RGB or YRB.
  • an on-chip lens (also referred to as an on-chip microlens) may be provided for each pixel area in order to effectively direct light incident on a pixel to a photosensitive area of the pixel.
  • when an imaged object is out of focus, a location where a ray hits a photosensitive element corresponds to a location where the ray passed through the main lens of a camera.
  • when the object is in focus, a location where a ray hits a photosensitive element corresponds to a location in an object space regardless of where the ray passed through the main lens.
  • This correspondence can be utilized for phase detection autofocusing (also referred to as phase difference detection autofocusing, PDAF).
  • rays passing through the main lens of a camera at its extremes hit the same position on the photosensitive element if the imaged object is in focus, but they hit different positions if the object is out of focus.
  • the sense and magnitude of the shift in the positions allow determination of in which direction and how much the focus should be moved (whether the focus should be brought closer or farther and how much) so that the object is in focus.
  • a pair of sensor elements is used to detect rays that have passed through respective sides (e.g., left and right) of the main lens.
  • a row of such pairs of sensors substantially forms a pair of one-dimensional image sensors, which provide two profiles corresponding to a linear portion of the imaged object (each profile corresponds to one side of the main lens through which the ray has passed) . Comparison of the two profiles allows determination of the shift in the positions of the image of the object.
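The comparison of the two profiles can be done, for example, by minimizing the sum of absolute differences over candidate shifts. The following Python sketch (ours, a simplification of real PDAF correlation) returns the signed shift, whose sense and magnitude determine the direction and amount of the focus adjustment:

```python
def best_shift(left, right, max_shift=4):
    """Signed displacement s that best aligns right[i + s] with left[i],
    found by exhaustive search over |s| <= max_shift using the mean
    absolute difference of the overlapping samples."""
    n = len(left)
    best_s, best_err = 0, float('inf')
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if err < best_err:
            best_s, best_err = s, err
    return best_s

# A small peak seen through the left pupil appears two samples later
# through the right pupil, i.e. the object is out of focus:
left  = [0, 0, 1, 3, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 3, 1, 0]
```

A zero shift indicates the object is in focus; a nonzero shift tells the autofocusing mechanism which way, and by how much, to move the lens.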
  • Such a row of pairs of sensors may be provided in both vertical and horizontal directions (which substantially provides both vertical and horizontal one-dimensional image sensors) .
  • for phase detection, it is advantageous to provide an on-chip lens covering two or four pixels of the same color.
  • an on-chip lens covering a pair of adjacent pixels causes rays passing through respective sides (e.g., left and right sides) of the main lens to be incident on the respective pixels of the pair (pupil division) .
  • the color arrangement in some embodiments of the present application is suitable for such an on-chip lens covering more than one pixel of the same color.
  • Fig. 12 illustrates some embodiments in which two or four pixels of the same color in the color filter arrangement of Fig. 5 (left) or a variant thereof (middle, right) are covered with one on-chip lens.
  • Exemplary groupings of pixels are also illustrated in Fig. 10 as described above, in which (c)-(f) show groupings of two or four color filters whose shapes form a half- or quarter-segment of a circle or an ellipse instead of a square.
  • Half-shielded pixels can also be phase detection pixels.
  • Phase detection pixels can be laid out more sparsely (e.g., one out of sixteen pixels) .
  • a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for green) within a 4-by-2 pixel area (Step 1).
  • This is a physical process, and does not require arithmetic operations.
  • contributions from two adjacent areas are digitally summed to derive a value for the 4-by-4 pixel image unit.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and two of the four floating diffusion nodes included in an image unit being electrically coupled, whereby the 16 pixels included in each image unit form two groups each comprising eight pixels among which electric charges are aggregated,
  • a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for green) within a 4-by-2 pixel area (Step 1).
  • Step 1 is similar to that of Fig. 13, but differs in the location of the pixels subject to the binning. The difference in location results in a difference in the pixels to be aggregated in the next step.
  • in Step 2, contributions from the areas that overlap the image unit, specifically, four areas (for each of red and blue) or five areas (for green), are digitally summed. Overall, this amounts to aggregating contributions from an area of 6-by-6 pixels, enlarged from the 4-by-4 image unit by one pixel on each side.
  • photosensitive elements such as photodiodes do not contribute only to the image units they belong to; signals from some pixels contribute to more than one image unit.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and the four floating diffusion nodes being electrically coupled in groups of two floating diffusion nodes, thereby forming groups each comprising eight pixels among which electric charges are aggregated,
  • two floating diffusion nodes in the diagonal direction are electrically connected.
  • This embodiment provides for a low power mode in addition to the no-interpolation (no-demosaicing) mode similar to the foregoing embodiments.
  • in Step 1, which is common to both modes, contributions from the pixels in a group, i.e., four pixels (for each of red and blue) or eight pixels (for green), are aggregated.
  • This is a physical process (charge binning) achieved by sharing of floating diffusion nodes and diagonal coupling thereof.
  • in Step 2, the process for the no-interpolation mode is similar to that in Fig. 14.
  • contributions from the groups that overlap the image unit, specifically, four areas (for each of red and blue) or five areas (for green), are digitally summed. Again, contributions from an area somewhat larger than an image unit are aggregated, and some pixels contribute to more than one image unit.
  • in Step 2 for the low power mode, as shown in Fig. 15, two adjacent diagonal areas that have undergone the binning in Step 1 are aggregated. While this step may be performed by digital operations, it may also be achieved in the analog domain. That is, the charges aggregated in Step 1 for the two areas to be aggregated in Step 2 are taken out to a column line at once (which can be achieved by control via switching of the readout circuitry). This allows summing (averaging) of the charges from the two areas without performing arithmetic operations. It should be noted that such analog aggregation of charges via switching of the readout circuitry is not limited to this embodiment, but is also applicable to other embodiments. However, the embodiment of Fig. 15 allows it without requiring additional switches, because the areas to be aggregated are aligned vertically.
  • a color value for each image unit is then determined by interpolation of the charges aggregated in Step 2.
  • each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated,
  • Fig. 16 illustrates an embodiment in which the sharing of floating diffusion nodes in Fig. 14 is applied to a pixel array with four colors.
  • a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for each of green and yellow) within a 4-by-2 pixel area (Step 1).
  • in Step 2, contributions from four areas are digitally summed (this operation is not necessary for green in this arrangement).
  • Such a pixel array and/or an image sensor may be employed in an electronic device such as a digital camera, which further includes a lens mechanism configured to direct incident light to the image sensor.
  • the electronic device may also include an autofocusing mechanism for the lens mechanism.
  • the autofocusing mechanism is configured to perform phase difference detection autofocusing based on a pair of profiles obtained from a plurality of pairs of pixels.
  • the two pixels of each pair of the plurality of pairs of pixels may be covered by the same on-chip microlens.
  • Such an electronic device is not limited to a digital camera, but may also be a video camera, a webcam, a mobile phone, a computer, or any other device configured to capture images.
  • some functions may be implemented in a form of a computer program for causing a processor or a computing device to perform one or more functions.
  • various arithmetic operations and/or various control functions of an electronic device may be implemented as a computer program.
  • the computer program may be embodied on a non-transitory computer-readable storage medium.
  • the storage medium may be any medium that can store a computer program, and may be a solid-state memory such as a USB drive, a flash drive, a read-only memory (ROM), or a random-access memory (RAM); a magnetic storage medium such as a removable or non-removable hard disk; or an optical storage medium such as an optical disc.
  • Control functions may also be implemented with discrete or integrated circuit elements.
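The two-step binning described above for Fig. 13 can be sketched in code. This is a minimal illustration rather than sensor firmware: the names `PATTERN`, `step1_charge_binning`, and `step2_digital_sum` are hypothetical, the charge model is idealized, and the 4-by-4 arrangement used is the GBRG/RGGB/BGGR/GRBG variant disclosed in this application.

```python
# Hypothetical sketch of the two-step binning of Fig. 13 (idealized).
PATTERN = ["GBRG",
           "RGGB",
           "BGGR",
           "GRBG"]  # one 4-by-4 image unit

def step1_charge_binning(charges):
    """Step 1 (analog): within each 4-by-2 half of the image unit, a
    shared floating diffusion node sums same-color pixel charges, so
    2 pixels (R, B) or 4 pixels (G) are binned without arithmetic."""
    areas = []
    for top in (0, 2):  # the two 4-by-2 areas of the unit
        acc = {"R": 0.0, "G": 0.0, "B": 0.0}
        for r in range(top, top + 2):
            for c in range(4):
                acc[PATTERN[r][c]] += charges[r][c]
        areas.append(acc)
    return areas

def step2_digital_sum(areas):
    """Step 2 (digital): sum the two adjacent areas to obtain one value
    per color component for the whole 4-by-4 image unit."""
    return {color: areas[0][color] + areas[1][color] for color in "RGB"}

# Uniform light, one unit of charge per pixel: the unit output reflects
# the pixel counts (4 red, 8 green, 4 blue in a 4-by-4 unit).
unit = [[1.0] * 4 for _ in range(4)]
output = step2_digital_sum(step1_charge_binning(unit))
# output == {'R': 4.0, 'G': 8.0, 'B': 4.0}
```

Only Step 2 involves arithmetic; Step 1 models the physical charge aggregation in shared floating diffusion nodes.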

Abstract

Embodiments of this application relate to image sensors. A pixel array configuration and pixel circuitry that do not require interpolation between image units are provided. Specifically, an embodiment provides a pixel array for an image sensor, the pixel array comprising a plurality of image units, wherein each image unit comprises a plurality of pixels for color components of a predetermined color space; wherein each pixel is configured to detect at least one color component of the color components of the color space. For each color component of the color space, an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit is provided as an output from the image unit. A center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit. An image sensor including a pixel array, an electronic device including an image sensor, and methods of operation are also provided.

Description

PIXEL ARRAY, IMAGE SENSOR, AND ELECTRONIC DEVICE WITHOUT DEMOSAICING AND METHODS OF OPERATION THEREOF

TECHNICAL FIELD
Embodiments of this application relate to image sensors, and in particular to CMOS image sensors.
BACKGROUND
Image sensors are commonly used in electronic devices such as digital cameras, video cameras, webcams, mobile phones, and computers in applications that involve capturing images.
Typically, an image sensor has an array of cells (pixels) arranged in rows and columns. Each cell contains a photosensitive element (also referred to as a sensor element; e.g., a photodiode) that generates an electric charge in response to incident light. An on-chip lens (OCL; also referred to as an on-chip microlens) may be provided for each pixel to effectively direct incoming light onto a photosensitive area of that pixel.
The generated electric charge is accumulated in a charge accumulation node (a capacitor-like structure often called a floating diffusion node, sometimes abbreviated FD herein) associated with the cell. An output electric signal corresponding to the light incident on the cell is generated from the electric charge accumulated in the floating diffusion node.
Most image sensors are either charge-coupled device (CCD) image sensors or complementary metal-oxide-semiconductor (CMOS) image sensors. CCD and CMOS image sensors differ in a signal readout method as well as in a manufacturing process.
In a CCD image sensor, an electric charge generated at a pixel in response to light is stored in a capacitor. The capacitors in one line are controlled to transfer their charge to their neighbors at once in a "bucket brigade" manner, and the capacitor at the end of the line outputs its charge to an amplifier. In contrast, in a CMOS image sensor, each pixel in the array has a photodiode and a switch (e.g., a transistor). Thus, control of the switches in the array allows directly accessing a signal from each pixel. CMOS image sensors can be manufactured at lower cost than CCD image sensors, because CMOS image sensors, complete with control circuitry, can be manufactured in an ordinary semiconductor manufacturing process.
A CMOS image sensor may include a pixel array and a readout circuit for taking out image signals from pixels. The readout circuit includes a row control circuit, a column control circuit, and a control circuit. As noted above, in a CMOS image sensor, by controlling the switches in the array, a signal from each pixel can be accessed directly.
An image signal corresponding to a pixel (cell) is read out by the readout circuit by rows and columns. Typically, in readout operations, a particular pixel row in the array may be selected by the row control circuit, and image signals generated by the pixels in that row are read out column by column along column lines by the column control circuit. An analog-to-digital conversion (ADC) circuit may be provided to convert the signals from the pixels to digital values.
An output of a photosensitive element is only responsive to the intensity of light, and does not provide color information. Thus, when it is desired to capture color images, a color filter array (CFA) may be provided. The color filter array includes color filter elements over the pixels of the pixel array. The color filter elements may include red, green, and blue color filter elements arranged in a so-called Bayer pattern, but other colors and/or other arrangement patterns may also be used.
Since a pixel covered by a filter element of one color cannot respond to the other colors, those missing color values have to be determined by interpolation. The process of obtaining color values for each color for all the pixels in the pixel array by interpolation is referred to as demosaicing (sometimes abbreviated DM herein) .
However, demosaicing through interpolation for the Bayer pattern is subject to several disadvantages including low resolution, high power consumption, and color artifacts.
SUMMARY
In view of the disadvantages involved with a pixel array that employs interpolation for the Bayer pattern, the present application proposes a pixel array configuration and pixel circuitry that do not require interpolation between image units.
According to a first aspect of the present disclosure, there is provided a pixel array for an image sensor, the pixel array comprising a plurality of image units. Each image unit comprises a plurality of pixels for color components of a predetermined color space; and each pixel is configured to detect at least one color component of the color components of the color space. For each color component of the color space, an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit is provided as an  output from the image unit. A center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit.
In this aspect of the present disclosure, interpolation between image units is not required. This results in lower computational complexity as compared with techniques that involve interpolation between image units. Moreover, the pixel array according to this aspect provides a resolution higher than that of techniques in which an image unit is configured to detect only one color component. Moreover, color artifacts due to the arrangement of color filters of some known pixel arrays may be avoided.
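As an illustration of the first aspect, the following sketch derives one output per color component for each image unit by aggregating same-color pixels inside that unit only; no values are borrowed from neighboring image units. The function name `unit_outputs`, the averaging choice, and the uniform test scene are all hypothetical.

```python
def unit_outputs(mosaic, pattern, n=4):
    """For each n-by-n image unit, aggregate (here: average) the
    responses of same-color pixels within that unit. No values are
    borrowed from neighboring image units (no demosaicing)."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for r0 in range(0, h, n):
        row = []
        for c0 in range(0, w, n):
            acc, cnt = {}, {}
            for dr in range(n):
                for dc in range(n):
                    color = pattern[dr][dc]
                    acc[color] = acc.get(color, 0.0) + mosaic[r0 + dr][c0 + dc]
                    cnt[color] = cnt.get(color, 0) + 1
            row.append({color: acc[color] / cnt[color] for color in acc})
        out.append(row)
    return out

# An 8-by-8 sensor tiled with the 4-by-4 arrangement disclosed below:
PATTERN = ["GBRG", "RGGB", "BGGR", "GRBG"]
flat = [[0.5] * 8 for _ in range(8)]   # uniform gray scene
units = unit_outputs(flat, PATTERN)    # 2-by-2 grid of full-color outputs
```

In effect, each 4-by-4 image unit behaves as one full-color output pixel, which is why no interpolation between image units is needed.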
According to a first implementation of the first aspect of the present disclosure, electric charges generated by photosensitive elements (e.g., photodiodes) of pixels in each image unit are aggregated in a physical process.
According to a second implementation of the first aspect of the present disclosure based on the first aspect per se or the first implementation of the first aspect of the present disclosure, electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
According to a third implementation of the first aspect of the present disclosure based on the first aspect per se or the first or second implementation of the first aspect of the present disclosure, electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
According to a fourth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to third implementations of the first aspect of the present disclosure, electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
In these implementations of the present disclosure, arithmetic operations for aggregating contributions from pixels for a color component among the plurality of pixels included in each image unit may be reduced or eliminated.
According to a fifth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fourth implementation of the first aspect of the present disclosure, photosensitive elements (e.g., photodiodes) of pixels for different colors in each image unit are controlled to output electric charges at different times.
This implementation allows taking out charges for each color component separately even  if a charge storage structure is shared by pixels for different colors.
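The fifth implementation can be illustrated with a toy model. The `SharedFD` class below is hypothetical and the charge values are arbitrary: several pixels of different colors share one floating diffusion node, and the transfer gates of same-color pixels are pulsed at different times so that each color is read out separately.

```python
class SharedFD:
    """Toy model of photodiodes sharing one floating diffusion node.
    Pulsing the transfer gates of one color at a time moves only that
    color's charges into the node, so colors are read out separately."""
    def __init__(self, colors, charges):
        self.pixels = [[c, q] for c, q in zip(colors, charges)]

    def read_color(self, color):
        total = sum(q for c, q in self.pixels if c == color)
        for p in self.pixels:          # transfer empties the photodiodes
            if p[0] == color:
                p[1] = 0.0
        return total

# A 2-by-2 pixel group from the top-left of the GBRG/RGGB arrangement,
# flattened to G, B, R, G, with example charges:
fd = SharedFD("GBRG", [1.0, 2.0, 3.0, 4.0])
g = fd.read_color("G")   # both greens together: 1.0 + 4.0 = 5.0
b = fd.read_color("B")   # 2.0
r = fd.read_color("R")   # 3.0
```

Reading each color at a different time is what keeps the color components separable even though the storage structure is shared.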
According to a sixth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifth implementation of the first aspect of the present disclosure, each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and the four floating diffusion nodes included in an image unit being electrically coupled.
This implementation allows a relatively simple scheme of sharing floating diffusion nodes, while allowing aggregation of charges contributed from the pixels in an image unit in a physical process in which arithmetic operations are not required.
According to a seventh implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifth implementation of the first aspect of the present disclosure, each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and two of the four floating diffusion nodes included in an image unit being electrically coupled, whereby the 16 pixels included in each image unit form two groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in respective groups of eight pixels are aggregated and are provided as an output from the image unit.
This implementation provides an alternative scheme of sharing floating diffusion nodes.
According to an eighth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifth implementation of the first aspect of the present disclosure, each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and the four floating diffusion nodes being electrically coupled in groups of two floating diffusion nodes, thereby forming groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are summed and are provided as an output from the image unit.
This implementation provides another alternative scheme of sharing floating diffusion nodes.
According to a ninth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifth implementation of the first aspect of the present disclosure, each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are  aggregated, wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are summed/aggregated and are provided as an output from the image unit.
According to a tenth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifth implementation of the first aspect of the present disclosure, each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated, wherein electric signals derived from electric charges aggregated in two adjacent groups are summed/aggregated and an output from the image unit is derived by interpolation of such summed/aggregated values.
The ninth and tenth implementations provide two modes of yet another alternative scheme of sharing floating diffusion nodes. In the tenth implementation, interpolation is employed to derive a final output from the image unit.
According to an eleventh implementation of the first aspect of the present disclosure based on the first aspect per se or the first to tenth implementation of the first aspect of the present disclosure, the color space is a color space comprising three colors.
According to a twelfth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to eleventh implementation of the first aspect of the present disclosure, the color space is a color space comprising three colors A, B, and C, each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
ABCA
CAAB
BAAC
ACBA
or
ACBA
BAAC
CAAB
ABCA.
According to a thirteenth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to twelfth implementation of the first aspect of the present  disclosure, the color space is an RGB color space comprising red (R) , green (G) , and blue (B) , each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
GBRG
RGGB
BGGR
GRBG
or
GRBG
BGGR
RGGB
GBRG.
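The center-of-gravity property recited in the first aspect can be checked numerically for this arrangement. The helper name `center_of_gravity` is hypothetical; the sketch verifies that each color's center of gravity coincides with the unit center at (1.5, 1.5).

```python
PATTERN = ["GBRG",
           "RGGB",
           "BGGR",
           "GRBG"]  # first variant of the RGB arrangement

def center_of_gravity(color):
    """(row, col) centroid of all pixels of one color in the 4x4 unit."""
    pts = [(r, c) for r in range(4) for c in range(4)
           if PATTERN[r][c] == color]
    return (sum(r for r, _ in pts) / len(pts),
            sum(c for _, c in pts) / len(pts))

# All three centroids coincide with the image-unit center (1.5, 1.5),
# so no per-color spatial offset needs to be corrected by interpolation.
centroids = {color: center_of_gravity(color) for color in "RGB"}
# centroids == {'R': (1.5, 1.5), 'G': (1.5, 1.5), 'B': (1.5, 1.5)}
```

This coincidence of centroids is what allows each image unit to emit one spatially consistent value per color component.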
According to a fourteenth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to twelfth implementations of the first aspect of the present disclosure, the color space is a YRB color space comprising yellow (Y), red (R), and blue (B), each image unit being composed of four-by-four pixels, wherein the colors of pixels in each image unit are arranged as
YBRY
RYYB
BYYR
YRBY
or
YRBY
BYYR
RYYB
YBRY.
According to a fifteenth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to tenth implementation of the first aspect of the present disclosure, the color space is a color space comprising four colors.
According to a sixteenth implementation of the first aspect of the present disclosure based on the first aspect per se or the first to fifteenth implementation of the first aspect of the present disclosure, at least one on-chip microlens covers more than one pixel of the same color.
According to a seventeenth implementation of the first aspect of the present disclosure based on the first aspect per se or the sixteenth implementation of the first aspect of the present  disclosure, at least one on-chip microlens covers an area comprising two-by-two pixels of the same color; and/or at least one on-chip microlens covers an area comprising two adjacent pixels of the same color.
The sixteenth and seventeenth implementations are advantageous in providing phase detection (for auto focusing) with pairs of pixels in the image sensor.
According to a second aspect, there is provided an image sensor comprising: the pixel array according to the first aspect per se or any of the implementations of the first aspect of the present disclosure, and a readout circuit configured to read out signals from image units of the pixel array.
Advantages provided by the second aspect of the present disclosure are similar to those provided by the first aspect, and will not be repeated here.
According to a first implementation of the second aspect of the present disclosure, the readout circuit comprises: a row control circuit configured to select rows of image units of the pixel array; a column control circuit configured to read out, by column-by-column control, signals from each image unit in a row selected by the row control circuit; an analog-to-digital converter for converting signals from each image unit to a digital signal; and a control circuit for controlling the readout operation of the readout circuit.
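The readout flow of this implementation can be sketched as follows. The function `read_out` and its `bits`/`full_scale` parameters are illustrative assumptions with an ideal ADC model: the row control selects one row of image units at a time, the column control walks across the columns, and each analog value is quantized.

```python
def read_out(unit_values, bits=10, full_scale=1.0):
    """Row-by-row, column-by-column readout with an ideal ADC.
    `bits` and `full_scale` are illustrative parameters only."""
    levels = (1 << bits) - 1          # e.g., 1023 codes for 10 bits
    frame = []
    for row in unit_values:           # row control: select one row
        codes = []
        for v in row:                 # column control: one column at a time
            code = round(v / full_scale * levels)
            codes.append(min(levels, max(0, code)))  # clip to ADC range
        frame.append(codes)
    return frame

frame = read_out([[0.0, 0.5],
                  [1.0, 2.0]])
# frame == [[0, 512], [1023, 1023]]: 0.5 maps near mid-scale, and the
# over-range 2.0 clips to the maximum code.
```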
According to a third aspect, there is provided an electronic device comprising the image sensor according to the second aspect per se or the first implementation of the second aspect of the present disclosure; a lens mechanism configured to direct incident light to the image sensor; and an autofocusing mechanism for the lens mechanism.
Advantages provided by the third aspect of the present disclosure are similar to those provided by the first aspect, and will not be repeated here.
According to a first implementation of the third aspect of the present disclosure, the autofocusing mechanism is configured to perform phase difference detection autofocusing based on a pair of profiles obtained from a plurality of pairs of pixels.
According to a second implementation of the third aspect of the present disclosure based on the first implementation of the third aspect, the two pixels of each pair of the plurality of pairs of pixels are covered by the same on-chip microlens.
According to a fourth aspect, there is provided a method of operation of an image sensor comprising a pixel array comprising a plurality of image units, wherein each image unit comprises a plurality of pixels for color components of a predetermined color space; wherein each pixel is configured to detect at least one color component of the color components of the color space; wherein a center of gravity of the pixels of the same color in an image unit substantially coincides  with a center of gravity of the image unit; and/or the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit, wherein the method comprises: for each color component of the color space, providing an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit as an output from the image unit.
Advantages provided by the fourth aspect of the present disclosure are similar to those provided by the first aspect, and will not be repeated here.
According to a first implementation of the fourth aspect, electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a physical process.
According to a second implementation of the fourth aspect based on the fourth aspect per se or the first implementation of the fourth aspect of the present disclosure, electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
According to a third implementation of the fourth aspect based on the fourth aspect per se or the first or second implementation of the fourth aspect of the present disclosure, electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
According to a fourth implementation of the fourth aspect based on the fourth aspect per se or the first to third implementations of the fourth aspect of the present disclosure, electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
According to a fifth implementation of the fourth aspect based on the fourth aspect per se or the first to fourth implementations of the fourth aspect of the present disclosure, photosensitive elements of pixels for different colors in each image unit are controlled to output electric charges at different times.
It is to be understood that any feature described in connection with one aspect of the present disclosure may also be applicable to the other aspects as appropriate.
BRIEF DESCRIPTION OF DRAWINGS
To describe technical solutions in embodiments of the present application, references are made to the accompanying drawings, in which
FIG. 1 illustrates an electronic device having an image sensor including a pixel array;
FIG. 2 illustrates a color filter arrangement according to the Bayer pattern;
FIG. 3 illustrates interpolation for providing missing color values;
FIG. 4 illustrates color artifacts for a color zone plate (CZP) that arise due to demosaicing with the Bayer pattern;
FIG. 5 illustrates a color filter arrangement according to an embodiment of the present application;
FIG. 6 illustrates how color values for the pixels can be determined through averaging within each image unit without interpolation between image units, according to an embodiment of the present application;
FIG. 7 illustrates reduced color artifacts for a color zone plate (CZP) according to an embodiment of the present application;
FIG. 8 illustrates sharing of floating diffusion nodes (FDs) for the color filter arrangement illustrated in FIG. 5 according to an embodiment of the present application;
FIG. 9 illustrates how color values for image units are determined without interpolation between image units for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application;
FIG. 10 illustrates color artifacts for a color zone plate (CZP) for various color filter arrangements according to some embodiments of the present application;
FIG. 11 illustrates exemplary color filter arrangements with four colors according to some embodiments of the present application;
FIG. 12 illustrates on-chip lens (OCL) patterns for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application;
FIG. 13 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application;
FIG. 14 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application;
FIG. 15 illustrates how color values for image units are determined in two-step binning for the color filter arrangement illustrated in FIG. 5, according to an embodiment of the present application; and
FIG. 16 illustrates how color values for image units are determined in two-step binning for a color filter arrangement with four colors according to an embodiment of the present application.
DESCRIPTION OF EMBODIMENTS
The following describes embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an electronic device 100 that includes an image sensor 105. The image sensor 105 may include a pixel array 110 and a readout circuit 115 for taking out image signals from pixels. The readout circuit 115 includes a row control circuit 120, a column control circuit 130, and a control circuit 150.
An image signal corresponding to a pixel (cell) is generated by a photosensitive element (also referred to as a sensor element; e.g., a photodiode) of a pixel and is read out by the readout circuit by rows and columns. A photosensitive element is not limited to a photodiode. For example, a photoconductor film made of organic material can be used as a photosensitive element. Typically, in readout operations, a particular pixel row in the array may be selected by the row control circuit 120, and image signals generated by the pixels in that row are read out column by column along column lines by the column control circuit 130. An analog-to-digital conversion (ADC) circuit may be provided to convert image signals from pixels to digital values.
An output of a photosensitive element is only responsive to the intensity of light, and does not provide color information. Thus, when it is desired to capture color images, a color filter array (CFA) may be provided. The color filter array includes color filter elements over at least one of the pixels of the pixel array. The color filter elements typically include red, green, and blue color filter elements. A red color filter element passes red light, and thus, the pixel behind the red filter element (sometimes referred to as a red pixel) responds to (and thus detects) red light. Similarly, the pixel behind a green filter element (sometimes referred to as a green pixel) detects green light, and the pixel behind a blue filter element (sometimes referred to as a blue pixel) detects blue light. Sometimes, a broadband filter element that passes two or more colors may be used.
A Bayer color filter pattern (also called a Bayer mosaic pattern, a Bayer pattern, or the like) is one typical arrangement of color filter elements in a color filter array, and includes repeating units of 2-by-2 pixels, in which two pixels in the diagonal positions are green, and one of the remaining two pixels is red and the remaining one is blue. (The term Bayer pattern may also refer to the pattern of a unit including 2-by-2 pixels. ) Fig. 2 illustrates such a Bayer color filter pattern. The Bayer pattern employs twice as many green pixels as the red and blue pixels. This is to provide higher precision for green, to which human eyes are more sensitive than to red and blue.
In the Bayer pattern (or typical variants thereof) , a pixel with a red color filter element provides only a red color signal and cannot provide a green sensor signal or a blue sensor signal.  Similarly, a pixel with a green color filter element provides only a green sensor signal, and a pixel with a blue color filter element provides only a blue sensor signal. Thus, missing color values have to be determined by interpolation. The process of obtaining color values for each color for all the pixels in the pixel array by interpolation is referred to as demosaicing. In other words, a demosaicing algorithm is employed to provide green and blue sensor signals for a red pixel (a pixel with a red color filter element) ; provide red and blue sensor signals for a green pixel; and provide red and green sensor signals for a blue pixel.
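By way of illustration only, the bilinear variant of such interpolation (filling in the green value at non-green sites from the four nearest green neighbors) can be sketched as follows. This is an illustrative sketch, not part of the claimed subject matter; the function name and the array convention (a 2-D raw mosaic plus a boolean mask of green sites) are assumptions.

```python
import numpy as np

def interp_green(raw, green_mask):
    """Bilinear fill-in of green at non-green sites: average of the
    available 4-connected green neighbours (a minimal demosaicing sketch)."""
    padded = np.pad(raw, 1)                      # zero border for edge pixels
    m = np.pad(green_mask.astype(float), 1)      # 1.0 where a green sample exists
    # Sum of green neighbour values (up, down, left, right) ...
    num = (padded[:-2, 1:-1] * m[:-2, 1:-1] + padded[2:, 1:-1] * m[2:, 1:-1]
           + padded[1:-1, :-2] * m[1:-1, :-2] + padded[1:-1, 2:] * m[1:-1, 2:])
    # ... divided by the number of green neighbours actually present.
    den = m[:-2, 1:-1] + m[2:, 1:-1] + m[1:-1, :-2] + m[1:-1, 2:]
    return np.where(green_mask, raw, num / np.maximum(den, 1.0))
```

For a Bayer mosaic the green sites form a checkerboard, so every non-green pixel has at least two green neighbors and the division is always well defined.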
As one example, Fig. 3 is a schematic diagram illustrating how pixel values of each color are interpolated to provide pixel values for which sensor signals of that color are missing.
However, demosaicing through interpolation in a Bayer pattern is subject to several disadvantages. First, since the Bayer pattern allows only one pixel (for each of red and blue) or two pixels (for green) out of a 4-pixel unit to be used for detecting a given color, the resolution is low. Second, extra signal processing power is required for interpolation. Moreover, color artifacts may occur at edge regions where color changes abruptly.
Fig. 4 illustrates a simulated result of demosaicing of an image of a well-known circular zone plate (CZP) obtained with an image sensor with the Bayer pattern. (Simple bilinear interpolation based on averaging the two or four nearest pixels of the same color is employed in the illustrated example. )
A CZP is a concentric circular pattern in which the gray level varies according to a sine function whose frequency increases radially from the center. Components whose spatial frequencies are high (small pitches) relative to the resolution of the screen yield off-center circular patterns through aliasing. Although the CZP is a gray-scale pattern, color artifacts give rise to false colors in those aliasing-induced circles. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination. ) This is because there are twice as many green pixels as red or blue pixels, and thus the green pixels have a different pitch (spatial frequency) from that of the red or blue pixels.
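The CZP test pattern itself is straightforward to generate. The following sketch (the function name and the frequency parameter are illustrative assumptions) produces a gray-scale zone plate whose local spatial frequency grows linearly with distance from the center:

```python
import numpy as np

def zone_plate(n=512, k=200.0):
    """Gray-scale circular zone plate in [0, 1]: the gray level is a sine of
    the squared radius, so the radial frequency increases away from the center."""
    y, x = np.mgrid[-1.0:1.0:n * 1j, -1.0:1.0:n * 1j]
    r2 = x**2 + y**2
    return 0.5 + 0.5 * np.sin(k * r2)
```

Feeding such a pattern through a simulated color filter array and demosaicing pipeline reveals aliasing and false-color behavior, as in Fig. 4 and Fig. 7.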
In view of the above disadvantages caused by the interpolation for the Bayer pattern, the present application proposes a pixel array configuration and pixel circuitry that do not require interpolation between image units. (With the Bayer pattern, an image unit corresponds to a pixel. According to embodiments of the present application, an image unit corresponds to multiple pixels, as described in more detail below. ) 
Fig. 5 illustrates a color filter array according to an embodiment of the present application. It can be seen that each image unit of the Bayer pattern as illustrated in Fig. 2 is subdivided into 16 (or multiple in general) pixels, where a photosensitive element such as a  photodiode may be provided for each pixel. It should be noted that while the term pixel originally refers to a unit in an image ( "picture element" ) , it may also refer to a corresponding sensor area. While an image unit corresponds to a pixel one-to-one in the Bayer pattern, an area corresponding to each photodiode is called a pixel in the context of the present application. In the example of Fig. 5, each image unit corresponds to 16 pixels. As described below, the values of the pixels in an image unit are aggregated to provide one value for each color component. (Hence, an "image unit" remains a unit in the image, though it may also refer to a corresponding sensor area covering multiple pixels. ) 
Given such a subdivision of an image unit, an area corresponding to four pixels (rather than four image units as in the conventional Bayer pattern) may be used for a unit for the Bayer pattern. Such a smaller Bayer pattern unit may enhance resolution and thus reduce artifacts. But it will also increase the burden of interpolation processing. The present application proposes a different approach that does not rely on interpolation between image units.
Fig. 6 schematically illustrates how the color filter array of Fig. 5 can be used to provide color values for the whole set of pixels of each color of RGB without interpolation. According to this embodiment, since every image unit has (four) red pixels, interpolation between image units is not required. The value for red for each image unit may be obtained by averaging the values from red pixels in the image unit. Similarly, for green and blue, the color values may be obtained by averaging the values from the pixels in each image unit. (It should be noted that since the number of pixels in an image unit is constant (16 in the present example) , the average of the values from pixels in an image unit is equivalent to the sum of the values from pixels in the image unit up to a scale factor. )
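Numerically, this per-image-unit aggregation can be sketched as follows. The 4-by-4 layout literal follows the RGB arrangement recited later in this application (it is an assumption about the exact pattern of Fig. 5); the function name and the choice to normalize by the per-color pixel count (averaging rather than summing) are also assumptions.

```python
import numpy as np

# 4x4 color layout of one image unit (assumed RGB arrangement of Fig. 5)
UNIT = np.array([list("GBRG"), list("RGGB"), list("BGGR"), list("GRBG")])

def bin_image_units(raw):
    """Average the same-color pixel values within each 4x4 image unit,
    yielding one R, G, and B value per image unit (no interpolation)."""
    h, w = raw.shape  # both assumed to be multiples of 4
    out = {}
    for c in "RGB":
        mask = np.tile(UNIT == c, (h // 4, w // 4))
        masked = np.where(mask, raw, 0.0)
        # Sum within each 4x4 unit, then divide by the same-color pixel count.
        sums = masked.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))
        out[c] = sums / (UNIT == c).sum()
    return out
```

Every image unit thus produces a full RGB triple directly, with no values borrowed from neighboring image units.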
According to this embodiment, since every color is detected in each image unit, the resolution is higher than that for the Bayer pattern of Fig. 2, in which only one color is detected in each image unit. For example, among the 16 image units illustrated in Fig. 3, the intensity of red light is detected for only four image units with the Bayer pattern. In contrast, according to this embodiment as illustrated in Fig. 6, red light intensity is detected for every one of the 16 image units. Further, processing required for interpolation between image units is eliminated. Moreover, there are no color artifacts due to interpolation between image units.
Fig. 7 illustrates a simulated image of a well-known circular zone plate (CZP) obtained with an image sensor with the color filter arrangement of this embodiment. Compared with Fig. 4, there are few or no color artifacts. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination. )
A pixel array according to this embodiment may be summarized with generic terms as  follows:
A pixel array for an image sensor, the pixel array comprising a plurality of image units,
wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
wherein each pixel is configured to detect at least one color component of the color components of the color space;
wherein, for each color component of the color space, an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit is provided as an output from the image unit; and
wherein
a center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or
the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit.
According to an optional embodiment, at least some of the 16 (or multiple in general) pixels in each image unit share a floating diffusion node (FD) . A floating diffusion node is a node (to be specific, a semiconductor region with doped impurities) for storing electric charges generated by a photodiode, and is also referred to as a charge storage node or a charge accumulation node. By sharing a floating diffusion node, charges from pixels may be added together without arithmetic operations. (As noted above, addition is equivalent to averaging up to scaling. )
Fig. 8 illustrates an exemplary scheme for sharing floating diffusion nodes. As can be seen in the enlarged drawing to the right, each group of four pixels among the 16 pixels in an image unit share a floating diffusion node (illustrated with a black circle) , and the four floating diffusion nodes in an image unit are electrically connected. This results in summing (averaging) of pixel values in an image unit.
Fig. 9 illustrates how signals (charges) from pixels for each color (i.e., four pixels for each of red and blue or eight pixels for green) in an image unit are aggregated (this process is referred to as binning) to obtain a color value for the image unit.
Since electric charges from photodiodes corresponding to different colors are to be stored in the same floating diffusion node, collection of charges for different colors may be performed at different times. That is, at one time, charges from photodiodes for red pixels are transferred to the floating diffusion node; after the charges are read out, charges from photodiodes for green pixels are transferred to the floating diffusion node; and, after they are read out, charges from photodiodes for blue pixels are transferred to the floating diffusion node. In a way, binning for  red, green, and blue as illustrated in Fig. 9 may be viewed as three steps of collecting charges for these colors. (As a matter of course, the order of collecting charges for different colors is not limited to this example. )
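To make this time-multiplexed collection concrete, the shared floating diffusion node can be modeled as a single accumulator that is filled, read out, and reset once per color. This is a behavioral sketch only; the function and variable names, and the modeling of charge transfer as a software sum, are illustrative assumptions.

```python
import numpy as np

# 4x4 color layout of one image unit (assumed RGB arrangement of Fig. 5)
UNIT = np.array([list("GBRG"), list("RGGB"), list("BGGR"), list("GRBG")])

def sequential_readout(unit_pixels):
    """Model time-multiplexed charge collection on one shared FD node:
    each color's charges are transferred, read out, then the FD is reset."""
    readings = {}
    fd = 0.0                                     # shared floating diffusion
    for color in ("R", "G", "B"):
        fd += unit_pixels[UNIT == color].sum()   # transfer this color's charges
        readings[color] = fd                     # read out aggregated charge
        fd = 0.0                                 # reset before the next color
    return readings
```

The three loop iterations correspond to the three collection steps described above; the order of the colors is, of course, interchangeable.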
Color Filter Arrangement
The following design guidelines may be advantageous for the color filter arrangement in an image unit subdivided into pixels:
(i) the center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or
(ii) the pixels of the same color in an image unit lie (in a symmetric manner) on lines extending in the four directions of two perpendicular straight lines intersecting at the center of the image unit.
The color filter array illustrated in Fig. 5 or Fig. 6 is based on principle (ii) above, as can be seen from the cross marks (i.e., crossing line segments) shown in Fig. 6.
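Guideline (i) can be checked mechanically for a given layout. The following sketch uses the 4-by-4 RGB layout literal assumed above (the helper name is an illustrative assumption):

```python
import numpy as np

# 4x4 color layout of one image unit (assumed RGB arrangement of Fig. 5)
UNIT = np.array([list("GBRG"), list("RGGB"), list("BGGR"), list("GRBG")])

def color_centroid(color):
    """Center of gravity (row, column) of the pixels of one color."""
    ys, xs = np.nonzero(UNIT == color)
    return ys.mean(), xs.mean()

# The geometric center of a 4x4 unit is (1.5, 1.5); per guideline (i),
# the centroid of each color's pixels should coincide with it.
```

For this layout, the centroids of R, G, and B all fall on the unit center, satisfying guideline (i).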
The color filter array of the present application is not limited to the arrangement as illustrated in Fig. 5 and Fig. 6. Fig. 10 illustrates various possible arrangements for the color filters in an image unit. (Here, YRB is used, which includes yellow instead of green as in RGB. ) The drawings in (c) - (f) show grouping of two or four color filters (by the shape of the color filters forming a half- or quarter-segment of a circle or an ellipse instead of a square) . Such grouping is described in more detail below. For the case of (f) , the image unit (shown with a thick line) has an irregular shape, which is adopted to use pixels in pairs.
Each of (a) - (f) in Fig. 10 illustrates a simulated image of a circular zone plate (CZP) (left) and its emphasized color component (right) . The degree of reduction of color artifacts varies. The inventor has found that the case of (a) , which is similar to the arrangement in Fig. 5 and Fig. 6, is the most advantageous. (Although this is not apparent in the black-and-white drawing, Applicant is ready to submit a color image if required for examination. )
Fig. 11 illustrates exemplary color filter arrangements when four colors are used instead of three colors as in RGB or YRB.
Lens Pattern for Phase Detection (PD)
In a pixel array for an image sensor, an on-chip lens (OCL) (also referred to as an on-chip microlens) may be provided for each pixel area in order to effectively direct light incident on a pixel to a photosensitive area of the pixel. By the action of an on-chip lens, a location where a ray hits a photosensitive element corresponds to a location where the ray passed through the main lens of a camera. (Without an on-chip lens, a location where a ray hits a photosensitive element corresponds to a location in an object space regardless of where the ray passed through the main lens. ) This correspondence can be utilized for phase detection autofocusing.
The basis for autofocusing by phase detection (referred to as phase detection autofocusing or phase difference detection autofocusing, PD AF) is that rays passing through the main lens of a camera at its extremes hit the same position on the photosensitive element if the imaged object is in focus, but hit different positions if the object is out of focus. The sign and magnitude of the shift in the positions indicate in which direction and by how much the focus should be moved (whether the focus should be brought closer or farther, and by how much) so that the object comes into focus. Specifically, a pair of sensor elements is used to detect rays that have passed through opposite sides (e.g., left and right) of the main lens. A row of such pairs of sensors substantially forms a pair of one-dimensional image sensors, which provide two profiles corresponding to a linear portion of the imaged object (each profile corresponds to one side of the main lens through which the rays have passed) . Comparison of the two profiles allows determination of the shift in the positions of the image of the object. Such rows of pairs of sensors may be provided in both vertical and horizontal directions (which substantially provides both vertical and horizontal one-dimensional image sensors) .
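The comparison of the two one-dimensional profiles can be sketched as a simple search for the displacement that best aligns them. The sum-of-absolute-differences criterion and the function name below are illustrative assumptions; practical implementations may use subpixel refinement.

```python
import numpy as np

def pd_shift(left, right, max_shift=8):
    """Estimate the displacement between the two pupil profiles by a
    sum-of-absolute-differences search over candidate integer shifts."""
    n = len(left)
    best_s, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Overlapping portions of the two profiles under candidate shift s.
        if s >= 0:
            a, b = left[s:], right[:n - s]
        else:
            a, b = left[:n + s], right[-s:]
        err = float(np.abs(a - b).mean())
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```

The sign of the returned shift indicates the direction of defocus, and its magnitude how far the focus should be moved.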
For such phase detection autofocusing, it is advantageous to provide an on-chip lens covering two or four pixels of the same color. For example, an on-chip lens covering a pair of adjacent pixels causes rays passing through respective sides (e.g., left and right sides) of the main lens to be incident on the respective pixels of the pair (pupil division) .
The color arrangement in some embodiments of the present application is suitable for such an on-chip lens covering more than one pixel of the same color.
Fig. 12 illustrates some embodiments in which two or four pixels of the same color in the color filter arrangement of Fig. 5 (left) or a variant thereof (middle, right) are covered with one on-chip lens.
Exemplary groupings of pixels are also illustrated in Fig. 10 as described above, in which (c) - (f) show grouping of two or four color filters by the shape of the color filters forming a half- or quarter-segment of a circle or an ellipse instead of a square.
Half-shielded pixels can also serve as phase detection pixels. Such phase detection pixels can be laid out more sparsely (e.g., one out of sixteen pixels) .
Pixel Circuitry
In the embodiment described with respect to Fig. 8 above, four pixels in the color filter arrangement of Fig. 5 and Fig. 6 share a floating diffusion node, and four of such floating diffusion  nodes are electrically connected. The present application is not limited to such circuitry, and other configurations for pixel circuitry are also possible.
According to an embodiment as illustrated in Fig. 13, four pixels in the color filter arrangement of Fig. 5 and Fig. 6 share a floating diffusion node, and two of such floating diffusion nodes are electrically connected. As shown, aggregation of contributions from the pixels in an image unit is a two-step process in this embodiment. First, a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for green) within a 4-by-2 pixel area (Step 1) . This is a physical process, and does not require arithmetic operations. Then, contributions from two adjacent areas are digitally summed to derive a value for the 4-by-4 pixel image unit. (It should be noted that while 3-by-2, rather than 4-by-2, pixel areas are surrounded with borders for red and blue in Fig. 13, this is merely intended to focus on those pixels that contribute to the binning. )
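The two steps can be sketched for a single 4-by-4 image unit as follows. The physical charge binning of Step 1 is modeled here as an in-software sum; the layout literal and the names are illustrative assumptions.

```python
import numpy as np

# 4x4 color layout of one image unit (assumed RGB arrangement of Fig. 5)
UNIT = np.array([list("GBRG"), list("RGGB"), list("BGGR"), list("GRBG")])

def two_step_bin(unit_pixels):
    """Two-step aggregation for one 4x4 image unit (Fig. 13 style).
    Step 1: charge binning within each 4-by-2 half (a physical process,
    modeled as a sum).  Step 2: digital sum of the two half-results."""
    out = {}
    for c in "RGB":
        halves = [unit_pixels[rows][UNIT[rows] == c].sum()
                  for rows in (slice(0, 2), slice(2, 4))]   # Step 1 (physical)
        out[c] = halves[0] + halves[1]                      # Step 2 (digital)
    return out
```

In each 4-by-2 half, two red, two blue, and four green pixels are binned, matching the pixel counts described above.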
The features specific to a pixel array according to this embodiment may be summarized with generic terms as follows:
each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and two of the four floating diffusion nodes included in an image unit being electrically coupled, whereby the 16 pixels included in each image unit form two groups each comprising eight pixels among which electric charges are aggregated,
wherein electric signals derived from electric charges aggregated in respective groups of eight pixels are aggregated and are provided as an output from the image unit.
According to an embodiment as illustrated in Fig. 14, again, four pixels in the color filter arrangement of Fig. 5 and Fig. 6 share a floating diffusion node, and two of such floating diffusion nodes are electrically connected. This embodiment differs from that of Fig. 13 in the location of the floating diffusion nodes. First, a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for green) within a 4-by-2 pixel area (Step 1) . This is similar to Fig. 13, but different in the location of pixels subject to the binning. The difference in the location results in a difference in pixels to be aggregated in the next step. In Step 2, contributions from areas that overlap the image unit, specifically, four areas (for each of red and blue) or five areas (for green) , are digitally summed. Overall, this amounts to aggregation of contributions from an area of 6-by-6 pixels, enlarged from the 4-by-4 image unit by one pixel on each side. In this embodiment, photosensitive elements such as photodiodes do not contribute only to the image units they belong to; signals from some pixels contribute to more than one image unit.
The features specific to a pixel array according to this embodiment may be summarized with generic terms as follows:
wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and the four floating diffusion nodes being electrically coupled in groups of two floating diffusion nodes, thereby forming groups each comprising eight pixels among which electric charges are aggregated,
wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are summed and are provided as an output from the image unit.
According to an embodiment as illustrated in Fig. 15, two floating diffusion nodes in the diagonal direction are electrically connected. This embodiment provides for a low power mode in addition to the no-interpolation (no-demosaicing) mode similar to the foregoing embodiments.
In Step 1, which is common to both modes, contributions from the pixels in a group, i.e., four pixels (for each of red and blue) or eight pixels (for green) , are aggregated. This is a physical process (charge binning) achieved by sharing of floating diffusion nodes and diagonal coupling thereof.
In Step 2, a process for no-interpolation mode is similar to that in Fig. 14. In order to obtain a color value for a given image unit, contributions from groups that overlap the image unit, specifically, four areas (for each of red and blue) or five areas (for green) , are digitally summed. Again, contributions from an area somewhat larger than an image unit are aggregated, and some pixels contribute to more than one image unit.
In Step 2 for the case of low power mode, as shown in Fig. 15, two adjacent diagonal areas that have undergone the binning in Step 1 are aggregated. While this step may be performed by digital operations, it may also be achieved in the analog domain. That is, charges aggregated in Step 1 for two areas to be aggregated in Step 2 are taken out to a column line at once (which can be achieved by control via switching of the readout circuitry) . This allows summing (averaging) of charges from the two areas without performing arithmetic operations. It should be noted that such analog aggregation of charges via switching of the readout circuitry is not limited to this embodiment, but is also applicable to other embodiments. However, the embodiment of Fig. 15 allows it without requiring additional switches, because the areas to be aggregated are aligned vertically.
A color value for each image unit is then determined by interpolation of the charges aggregated in Step 2.
The features specific to a pixel array according to no-interpolation mode of this  embodiment may be summarized with generic terms as follows:
each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated,
wherein electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to each image unit are aggregated and are provided as an output from the image unit.
The features specific to a pixel array according to low power mode of this embodiment may be summarized with generic terms as follows:
each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated,
wherein electric signals derived from electric charges aggregated in two adjacent groups are aggregated and an output from the image unit is derived by interpolation of such aggregated values.
Fig. 16 illustrates an embodiment in which the sharing of floating diffusion nodes in Fig. 14 is applied to a pixel array with four colors. First, a shared floating diffusion node aggregates contributions from two pixels (for each of red and blue) or four pixels (for each of green and yellow) within a 4-by-2 pixel area (Step 1) . In Step 2, contributions from four areas are digitally summed (this operation is not necessary for green according to this arrangement) .
Various embodiments of a pixel array and/or an image sensor have been described in the above. Such a pixel array and/or an image sensor may be employed in an electronic device such as a digital camera, which further includes a lens mechanism configured to direct incident light to the image sensor.
The electronic device may also include an autofocusing mechanism for the lens mechanism. The autofocusing mechanism is configured to perform phase difference detection autofocusing based on a pair of profiles obtained from a plurality of pairs of pixels. In some embodiments, the two pixels of each pair of the plurality of pairs of pixels may be covered by the same on-chip microlens.
Such an electronic device is not limited to a digital camera, but may also be a video camera, a webcam, a mobile phone, a computer, or any other device configured to capture images.
While various embodiments are described above and illustrated in the drawings, the present invention is not limited to the specific embodiment described or illustrated.
The unit division disclosed in embodiments of the present application is not limiting, and embodiments may be configured with other divisions of components.
Where appropriate, some functions may be implemented in a form of a computer program for causing a processor or a computing device to perform one or more functions. For example, various arithmetic operations and/or various control functions of an electronic device, e.g., a camera, may be implemented as a computer program. The computer program may be embodied on a non-transitory computer-readable storage medium. The storage medium may be any medium that can store a computer program and may be a solid-state memory such as a USB drive, a flash drive, a read-only memory (ROM) , or a random-access memory (RAM) ; a magnetic storage medium such as a removable or non-removable hard disk; or an optical storage medium such as an optical disc. Control functions may also be implemented with discrete or integrated circuit elements.
The foregoing descriptions are merely to illustrate various embodiments of the present application, and are not intended to limit the scope of the invention. Any variation that would readily occur to a person skilled in the art in view of the present disclosure shall fall within the scope of this application. For example, measures separately disclosed may be combined in a single embodiment as appropriate, as long as such measures are not mutually exclusive.

Claims (30)

  1. A pixel array for an image sensor, the pixel array comprising a plurality of image units,
    wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
    wherein each pixel is configured to detect at least one color component of the color components of the color space;
    wherein, for each color component of the color space, an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit is provided as an output from the image unit; and
    wherein
    a center of gravity of the pixels of the same color in a given image unit substantially coincides with a center of gravity of the given image unit; and/or
    the pixels of the same color in a given image unit lie on lines extending in four directions of two perpendicular straight lines intersecting at a center of the given image unit.
  2. The pixel array according to Claim 1, wherein electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a physical process.
  3. The pixel array according to Claim 1 or 2, wherein electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
  4. The pixel array according to any one of Claims 1 to 3, wherein electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
  5. The pixel array according to any one of Claims 1 to 4, wherein electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
  6. The pixel array according to any one of Claims 1 to 5, wherein photosensitive elements of pixels for different colors in each image unit are controlled to output electric charges at different times.
  7. The pixel array according to any one of Claims 1 to 6, wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and the four floating diffusion nodes included in an image unit being electrically coupled.
  8. The pixel array according to any one of Claims 1 to 6, wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the four-by-four pixels, and two of the four floating diffusion nodes included in an image unit being electrically coupled, whereby the 16 pixels included in each image unit form two groups each comprising eight pixels among which electric charges are aggregated, and
    wherein electric signals derived from electric charges aggregated in respective groups of eight pixels are aggregated and are provided as an output from the image unit.
  9. The pixel array according to any one of Claims 1 to 6, wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and the four floating diffusion nodes being electrically coupled in groups of two floating diffusion nodes, thereby forming groups each comprising eight pixels among which electric charges are aggregated, and
    wherein an output from a given image unit is provided by summing electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to the given image unit.
  10. The pixel array according to any one of Claims 1 to 6, wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels among which electric charges are aggregated, and
    wherein an output from a given image unit is provided by aggregating electric signals derived from electric charges aggregated in respective groups that include a pixel belonging to the given image unit.
  11. The pixel array according to any one of Claims 1 to 6, wherein each image unit is composed of four-by-four pixels, a floating diffusion node being shared for each of groups of two-by-two pixels in the pixel array, and a floating diffusion node for a group of two-by-two pixels being electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising  eight pixels among which electric charges are aggregated, and
    wherein electric signals derived from electric charges aggregated in two adjacent groups are aggregated and an output from the image unit is derived by interpolation of such aggregated values.
  12. The pixel array according to any one of Claims 1 to 11, wherein the color space is a color space comprising three colors.
  13. The pixel array according to any one of Claims 1 to 12, wherein the color space is a color space comprising three colors A, B, and C, each image unit being composed of four-by-four pixels,
    wherein the colors of pixels in each image unit are arranged as
    ABCA
    CAAB
    BAAC
    ACBA
    or
    ACBA
    BAAC
    CAAB
    ABCA.
  14. The pixel array according to any one of Claims 1 to 13, wherein the color space is an RGB color space comprising red (R) , green (G) , and blue (B) , each image unit being composed of four-by-four pixels,
    wherein the colors of pixels in each image unit are arranged as
    GBRG
    RGGB
    BGGR
    GRBG
    or
    GRBG
    BGGR
    RGGB
    GBRG.
  15. The pixel array according to any one of Claims 1 to 13, wherein the color space is a YRB color space comprising yellow (Y) , red (R) , and blue (B) , each image unit being composed of four-by-four pixels,
    wherein the colors of pixels in each image unit are arranged as
    YBRY
    RYYB
    BYYR
    YRBY
    or
    YRBY
    BYYR
    RYYB
    YBRY.
  16. The pixel array according to any one of Claims 1 to 11, wherein the color space is a color space comprising four colors.
  17. The pixel array according to any one of Claims 1 to 16, comprising at least one phase detection pixel.
  18. The pixel array according to any one of Claims 1 to 17, wherein at least one on-chip microlens covers more than one pixel of the same color.
  19. The pixel array according to Claim 18, wherein
    at least one on-chip microlens covers an area comprising two-by-two pixels of the same color; and/or
    at least one on-chip microlens covers an area comprising two adjacent pixels of the same color.
  20. An image sensor comprising:
    the pixel array according to any one of Claims 1 to 19; and
    a readout circuit configured to read out signals from image units of the pixel array.
  21. The image sensor according to Claim 20, wherein the readout circuit comprises:
    a row control circuit configured to select rows of image units of the pixel array;
    a column control circuit configured to read out, by column-by-column control, signals from each image unit in a row selected by the row control circuit;
    an analog-to-digital converter for converting signals from each image unit to a digital signal; and
    a control circuit for controlling the readout operation of the readout circuit.
  22. An electronic device, comprising:
    the image sensor according to Claim 20 or 21;
    a lens mechanism configured to direct incident light to the image sensor; and
    an autofocusing mechanism for the lens mechanism.
  23. The electronic device according to Claim 22, wherein the autofocusing mechanism is configured to perform phase difference detection autofocusing based on a pair of profiles obtained from a plurality of pairs of pixels.
  24. The electronic device according to Claim 23, wherein for two pixels of each pair of the plurality of pairs of pixels, the two pixels are covered by the same on-chip microlens.
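Phase difference detection autofocusing of the kind recited in Claims 23 and 24 is conventionally implemented by sliding one profile against the other and minimizing a mismatch metric. The sketch below is a hypothetical illustration using a sum-of-absolute-differences (SAD) search; the function and its conventions are the editor's, not taken from the application:

```python
# Illustrative phase-difference estimate between a pair of pixel profiles
# using a sum-of-absolute-differences (SAD) search over integer shifts.
# The best shift approximates the defocus-dependent phase difference.
def phase_difference(left, right, max_shift=4):
    best_shift, best_sad = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare the overlapping portions of the two profiles at shift s.
        if s >= 0:
            a, b = left[s:], right[:len(right) - s]
        else:
            a, b = left[:len(left) + s], right[-s:]
        sad = sum(abs(x - y) for x, y in zip(a, b))
        if sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

# right is left delayed by two samples; under the convention used here a
# negative shift means the right profile is displaced toward larger indices.
left = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
right = [0, 0, 0, 0, 1, 4, 9, 4, 1, 0]
# phase_difference(left, right) -> -2
```

In practice the profiles would be accumulated from the plurality of pairs of pixels under shared microlenses, and sub-pixel refinement would follow the integer search.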
  25. A method of operation of an image sensor comprising a pixel array comprising a plurality of image units,
    wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
    wherein each pixel is configured to detect at least one color component of the color components of the color space;
    wherein
    a center of gravity of the pixels of the same color in an image unit substantially coincides with a center of gravity of the image unit; and/or
    the pixels of the same color in an image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at a center of the image unit,
    wherein the method comprises:
    for each color component of the color space, providing an electric signal aggregating contributions from pixels for that color component among the plurality of pixels included in each image unit as an output from the image unit.
  26. The method according to Claim 25, wherein electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a physical process.
  27. The method according to Claim 25 or 26, wherein electric charges generated by photosensitive elements of pixels in each image unit are aggregated in a common electric charge storage structure.
  28. The method according to any one of Claims 25 to 27, wherein electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node.
  29. The method according to any one of Claims 25 to 28, wherein electric charges generated by photosensitive elements of at least two pixels in each image unit are stored in a common floating diffusion node, and at least two floating diffusion nodes in each image unit are electrically coupled.
  30. The method according to any one of Claims 25 to 29, wherein photosensitive elements of pixels for different colors in each image unit are controlled to output electric charges at different times.
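The aggregation recited in Claim 25 amounts to producing one signal per color component per image unit, so each four-by-four unit directly yields a full color sample without demosaicing. The Python model below is a digital-domain approximation offered for illustration only; the application itself describes charge-domain aggregation (Claims 26 to 29), and the pattern used here is the Claim 14 arrangement:

```python
# Behavioral model of the per-color aggregation in claim 25: each 4x4
# image unit outputs one aggregated signal per color component. In the
# actual sensor this happens in the charge domain (claims 26 to 29);
# summing digital values here is only an approximation.
PATTERN = [
    "GBRG",
    "RGGB",
    "BGGR",
    "GRBG",
]

def aggregate_unit(samples):
    """samples: 4x4 list of pixel values; returns one value per color."""
    out = {"R": 0, "G": 0, "B": 0}
    for r, row in enumerate(PATTERN):
        for c, color in enumerate(row):
            out[color] += samples[r][c]
    return out

# Uniform illumination: the unit has 8 G pixels, 4 R pixels, 4 B pixels.
flat = [[1] * 4 for _ in range(4)]
# aggregate_unit(flat) -> {"R": 4, "G": 8, "B": 4}
```

The output of each unit is thus already a complete RGB triple, which is why no demosaicing stage is needed downstream.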

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/111653 WO2023015425A1 (en) 2021-08-10 2021-08-10 Pixel array, image sensor, and electronic device without demosaicing and methods of operation thereof
CN202180101102.0A CN117751576A (en) 2021-08-10 2021-08-10 Demosaicing-free pixel array, image sensor, electronic device and operation method thereof


Publications (1)

Publication Number Publication Date
WO2023015425A1

Family

ID=85200385


Country Status (2)

Country Link
CN (1) CN117751576A (en)
WO (1) WO2023015425A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080074518A1 (en) * 2006-09-11 2008-03-27 Jeffery Steven Beck Color filter device and method for eliminating or reducing non-uniform color error caused by asymmetric color cross-talk in image sensor devices
US20090200451A1 (en) * 2008-02-08 2009-08-13 Micron Technology, Inc. Color pixel arrays having common color filters for multiple adjacent pixels for use in cmos imagers
CN102036020A (en) * 2009-10-06 2011-04-27 佳能株式会社 Solid-state image sensor and image sensing apparatus
US20160353034A1 (en) * 2015-05-27 2016-12-01 Semiconductor Components Industries, Llc Multi-resolution pixel architecture with shared floating diffusion nodes
US20180152677A1 (en) * 2016-11-29 2018-05-31 Cista System Corp. System and method for high dynamic range image sensing
US20190252453A1 (en) * 2016-10-20 2019-08-15 Invisage Technologies, Inc. Image sensors with enhanced wide-angle performance


Also Published As

Publication number Publication date
CN117751576A (en) 2024-03-22


Legal Events

Date Code Title Description
NENP Non-entry into the national phase (Ref country code: DE)