CN117751576A - Demosaicing-free pixel array, image sensor, electronic device and operation method thereof - Google Patents


Info

Publication number
CN117751576A
Authority
CN
China
Prior art keywords
pixels
image
color
pixel array
floating diffusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202180101102.0A
Other languages
Chinese (zh)
Inventor
物井诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN117751576A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70: SSIS architectures; Circuits associated therewith
    • H04N25/703: SSIS architectures incorporating pixels for producing signals other than image signals
    • H04N25/704: Pixels specially adapted for focusing, e.g. phase difference pixel sets
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843: Demosaicing, e.g. interpolating colour pixel values
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10: Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40: Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46: Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Color Television Image Signal Generators (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

Embodiments of the present application relate to image sensors. A pixel array configuration and pixel circuits are provided that do not require interpolation between image units. In particular, embodiments provide a pixel array for an image sensor, the pixel array comprising a plurality of image units, wherein each image unit comprises a plurality of pixels for the color components of a predetermined color space, and each pixel detects at least one of those color components. For each color component of the color space, an electrical signal that aggregates the contributions of the pixels for that component among the plurality of pixels included in each image unit is provided as the output from the image unit. The center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or pixels of the same color in the image unit lie on lines extending in the four directions of two perpendicular straight lines intersecting at the center of the image unit. Image sensors including such pixel arrays, electronic devices including such image sensors, and methods of operation are also provided.

Description

Demosaicing-free pixel array, image sensor, electronic device and operation method thereof
Technical Field
Embodiments of the present application relate to image sensors, and more particularly, to CMOS image sensors.
Background
Image sensors are commonly used in electronic devices, such as digital cameras, video cameras, webcams, cell phones, and computers, for capturing images.
Typically, an image sensor has an array of cells (pixels) arranged in rows and columns. Each cell includes a photosensitive element (also referred to as a sensor element; e.g., a photodiode) that generates an electrical charge in response to incident light. An on-chip lens (OCL; also referred to as an on-chip microlens) may be provided for each pixel to efficiently direct incident light onto the photosensitive area of the pixel.
The generated charge is accumulated in a charge accumulation node (commonly a capacitor-like structure referred to as a floating diffusion node, sometimes abbreviated herein as FD) associated with the cell. An output electrical signal corresponding to the light incident on the cell is generated from the charge accumulated in the floating diffusion node.
Most image sensors are charge-coupled device (CCD) image sensors or complementary metal-oxide-semiconductor (CMOS) image sensors. CCD and CMOS image sensors differ in their signal readout methods and manufacturing processes.
In a CCD image sensor, charges generated at pixels in response to light are stored in capacitors. The capacitors in one line are controlled to transfer their charge to their neighbors in a "bucket-brigade" fashion, with the capacitor at the end of the line outputting its charge to an amplifier. In contrast, in a CMOS image sensor, each pixel in the array has a photodiode and a switch (e.g., a transistor), so the switches in the array can be controlled to directly access the signal from each pixel. A CMOS image sensor can be manufactured at lower cost than a CCD image sensor because its control circuitry can be integrated on-chip and it can be fabricated using a standard semiconductor manufacturing process.
A CMOS image sensor may include a pixel array and a readout circuit for taking image signals out of the pixels. The readout circuit includes a row control circuit, a column control circuit, and a control circuit. As described above, in a CMOS image sensor, the switches in the array can be controlled to directly access the signal from each pixel.
Image signals corresponding to the pixels (cells) are read out by the readout circuit in rows and columns. In general, in a readout operation, a particular row of pixels in the array is selected by the row control circuit, and the image signals generated by the pixels in that row are read out column by column along the column lines by the column control circuit. Analog-to-digital conversion (ADC) circuits may be used to convert the signals from the pixels to digital values.
The output of the photosensitive element is sensitive only to the intensity of the light and does not provide color information. Thus, when a color image needs to be captured, a color filter array (CFA) may be used. The color filter array includes color filter units on the pixels of the pixel array. The color filter units may include red, green, and blue color filter units arranged in a bayer pattern, although other colors and/or other arrangement patterns may be used.
Since pixels covered by the color filter elements of one color are not sensitive to other colors, those missing color values must be determined by interpolation. The process of obtaining the color values for each color of all pixels in a pixel array by interpolation is referred to as demosaicing (sometimes abbreviated herein as DM).
However, demosaicing the bayer pattern by interpolation has several disadvantages: low resolution, high power consumption, and color artifacts.
Disclosure of Invention
In view of the drawbacks associated with demosaicing the bayer pattern by interpolation, the present application provides a pixel array configuration and pixel circuit that do not require interpolation between image units.
According to a first aspect of the present invention, there is provided a pixel array for an image sensor, the pixel array comprising a plurality of image cells. Each image unit comprises a plurality of pixels for color components of a predetermined color space; each pixel is for detecting at least one of the color components of the color space. For each color component of the color space, providing as output from the image units an electrical signal that aggregates contributions of pixels for the color component among the plurality of pixels included in each image unit. The center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or pixels of the same color in the image unit are located on lines extending in four directions of two perpendicular straight lines intersecting at the center of the image unit.
In this aspect of the invention, interpolation between image units is not required. This results in lower computational complexity compared to techniques involving interpolation between image units. Furthermore, the pixel array according to this aspect provides a resolution higher than that of the technique in which the image unit is used to detect only one color component. Furthermore, color artifacts due to the color filtering arrangement of some known pixel arrays may be avoided.
According to a first implementation of the first aspect of the present invention, the charge generated by the photosensitive elements (e.g. photodiodes) of the pixels in each image unit is accumulated in a physical process.
According to a second implementation form of the first aspect as such or the first implementation form of the first aspect, the charge generated by the photosensitive elements of the pixels in each image unit is accumulated in a common charge storage structure.
According to a third implementation form of the first aspect as such or the first or second implementation form of the first aspect, the charge generated by the photosensitive elements of at least two pixels in each image unit is stored in a common floating diffusion node.
According to a fourth implementation form of the first aspect of the invention, based on the first aspect itself or the first to third implementation forms of the first aspect, the charges generated by the photosensitive elements of the at least two pixels in each image unit are stored in a common floating diffusion node, and the at least two floating diffusion nodes in each image unit are electrically coupled.
In these implementations of the invention, arithmetic operations for aggregating contributions from pixels for color components among a plurality of pixels included in each image unit may be reduced or eliminated.
According to a fifth implementation form of the first aspect of the invention, based on the first aspect itself or the first to fourth implementation forms of the first aspect, the photosensitive elements (e.g. photodiodes) of the pixels of different colors in each image unit are controlled to output charges at different times.
This implementation supports taking out charge for each color component separately, even if the charge storage structure is shared by pixels of different colors.
According to a sixth implementation form of the first aspect of the invention, each image unit consists of 4 x 4 pixels, each of the groups of 2 x 2 pixels in the 4 x 4 pixels sharing a floating diffusion node, the four floating diffusion nodes comprised in the image unit being electrically coupled, based on the first aspect of the invention itself or the first to fifth implementation forms of the first aspect.
This implementation supports a relatively simple scheme of sharing floating diffusion nodes while supporting the accumulation of charge contributed from pixels in the image cell in a physical process that does not require arithmetic operations.
With reference to the first aspect of the present invention itself or the first to fifth implementation forms of the first aspect, according to a seventh implementation form of the first aspect of the present invention, each image unit is composed of 4×4 pixels, and each group of 2×2 pixels in the 4×4 pixels shares a floating diffusion node. Two of the four floating diffusion nodes included in the image unit are electrically coupled, so that the 16 pixels included in each image unit form two groups of 8 pixels each, in which charges are accumulated. The electrical signals derived from the charges accumulated in the respective groups of 8 pixels are summed and provided as the output from the image unit.
This implementation provides an alternative to sharing the floating diffusion node.
With reference to the first aspect of the invention itself or the first to fifth implementation forms of the first aspect, according to an eighth implementation form of the first aspect of the invention, each image unit is composed of 4×4 pixels, and each group of 2×2 pixels shares a floating diffusion node. The four floating diffusion nodes are electrically coupled in groups of two, thereby forming groups of eight pixels each, in which charges are accumulated. The electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to each image unit are summed and provided as the output from the image unit.
This implementation provides another alternative to sharing a floating diffusion node.
With reference to the first aspect of the invention itself or the first to fifth implementation forms of the first aspect, according to a ninth implementation form of the first aspect of the invention, each image unit is composed of 4×4 pixels, and each group of 2×2 pixels shares a floating diffusion node. Each floating diffusion node for a group of 2×2 pixels is electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups of eight pixels each, in which charges are accumulated. The electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to each image unit are summed and provided as the output from the image unit.
With reference to the first aspect of the invention itself or the first to fifth implementation forms of the first aspect, according to a tenth implementation form of the first aspect of the invention, each image unit is composed of 4×4 pixels, and each group of 2×2 pixels shares a floating diffusion node. Each floating diffusion node for a group of 2×2 pixels is electrically coupled to diagonally adjacent floating diffusion nodes, thereby forming groups of eight pixels each, in which charges are accumulated. The electrical signals derived from the charges accumulated in two adjacent groups are summed, and the output from the image unit is derived by interpolating these summed values.
The ninth implementation and the tenth implementation provide two modes of yet another alternative to sharing the floating diffusion node. In a tenth implementation, interpolation is used to obtain the final output from the image unit.
With reference to the first aspect of the invention itself or the first to tenth implementation forms of the first aspect, according to an eleventh implementation form of the first aspect of the invention, the color space is a color space comprising three colors.
According to a twelfth implementation form of the first aspect as such or according to the first to eleventh implementation forms of the first aspect as such, the color space is a color space comprising three colors A, B and C, each image unit consisting of 4 x 4 pixels, wherein the colors of the pixels in each image unit are arranged as:
ABCA
CAAB
BAAC
ACBA
or
ACBA
BAAC
CAAB
ABCA
According to a thirteenth implementation form of the first aspect of the invention, based on the first aspect itself or the first to twelfth implementation forms of the first aspect, the color space is an RGB color space comprising red (R), green (G) and blue (B), and each image unit is composed of 4×4 pixels, wherein the colors of the pixels in each image unit are arranged as:
GBRG
RGGB
BGGR
GRBG
or
GRBG
BGGR
RGGB
GBRG
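The arrangements above have the property, stated in the first aspect, that the center of gravity of the pixels of each color coincides with the center of the image unit. The following sketch (illustrative only, not part of the original application) checks this property for the first RGB arrangement:

```python
# Verify that, in the 4x4 arrangement above, the center of gravity of the
# pixels of each color coincides with the center of the image unit.
PATTERN = [
    "GBRG",
    "RGGB",
    "BGGR",
    "GRBG",
]

def centroid(pattern, color):
    """Mean (row, col) position of all pixels of the given color."""
    coords = [(r, c)
              for r, row in enumerate(pattern)
              for c, ch in enumerate(row)
              if ch == color]
    n = len(coords)
    return (sum(r for r, _ in coords) / n,
            sum(c for _, c in coords) / n)

# The center of a 4x4 unit, in 0-indexed pixel coordinates, is (1.5, 1.5).
for color in "RGB":
    assert centroid(PATTERN, color) == (1.5, 1.5)
```

The same check passes for the mirrored arrangement and for the YRB patterns of the fourteenth implementation, since mirroring preserves the centroid.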
According to a fourteenth implementation form of the first aspect of the invention as such or the first to twelfth implementation forms of the first aspect of the invention, the color space is a YRB color space comprising yellow (Y), red (R) and blue (B), and each image unit is composed of 4×4 pixels, wherein the colors of the pixels in each image unit are arranged as:
YBRY
RYYB
BYYR
YRBY
or
YRBY
BYYR
RYYB
YBRY
According to a fifteenth implementation form of the first aspect of the invention, the color space is a color space comprising four colors.
According to a sixteenth implementation form of the first aspect of the invention, based on the first aspect of the invention itself or the first implementation form to the fifteenth implementation form of the first aspect, the at least one on-chip microlens covers more than one pixel of the same color.
According to a seventeenth implementation form of the first aspect of the invention, based on the first aspect itself or the sixteenth implementation form of the first aspect, the coverage area of at least one on-chip microlens comprises 2×2 pixels of the same color; and/or the coverage area of at least one on-chip microlens comprises two adjacent pixels of the same color.
The sixteenth and seventeenth implementations are advantageous in providing phase detection (for autofocus) for pixel pairs in an image sensor.
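As an illustration of how such pixel pairs can support phase-difference detection, the following sketch (not part of the original application; matching by sum of absolute differences is one common choice and is not necessarily what any embodiment uses) estimates the relative shift between the two intensity profiles obtained from the left and right pixels of the pairs; a shift of zero indicates the lens is in focus:

```python
# Estimate the phase difference between the "left" and "right" profiles
# read from pixel pairs under shared microlenses. The shift minimising
# the mean absolute difference indicates the amount/direction of defocus.
def estimate_shift(left, right, max_shift=8):
    """Shift (in pixels) that best aligns `right` onto `left`."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s])
                 for i in range(n) if 0 <= i + s < n]
        if not pairs:          # no overlap at this shift
            continue
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift
```

In an actual sensor the two profiles would be read along a focus line from many pixel pairs; the sign and magnitude of the estimated shift drive the autofocus component described in the third aspect.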
According to a second aspect, there is provided an image sensor comprising: a pixel array according to the first aspect of the invention itself or any implementation of the first aspect, and a readout circuit for reading out signals from image cells of the pixel array.
The advantages provided by the second aspect of the present invention are similar to those provided by the first aspect and will not be described in detail here.
According to a first implementation form of the second aspect of the invention, the readout circuit comprises: a row control circuit for selecting rows of image units of the pixel array; a column control circuit for reading out, column by column, a signal from each image unit in the row selected by the row control circuit; an analog-to-digital converter for converting the signal from each image unit into a digital signal; and a control circuit for controlling the readout operation of the readout circuit.
According to a third aspect, there is provided an electronic device comprising: an image sensor according to the second aspect of the invention itself or the first implementation of the second aspect; a lens component for directing incident light onto the image sensor; and an autofocus component for the lens component.
The advantages provided by the third aspect of the present invention are similar to those provided by the first aspect and will not be described in detail here.
According to a first implementation of the third aspect of the present invention, the autofocus component is configured to perform phase-difference-detection autofocus based on a pair of profiles obtained from a plurality of pairs of pixels.
Based on the first implementation of the third aspect, according to a second implementation of the third aspect of the present invention, two pixels of each pair of the plurality of pairs of pixels are covered by the same on-chip microlens.
According to a fourth aspect, there is provided a method of operating an image sensor, the image sensor comprising: a pixel array including a plurality of image units, wherein each image unit includes a plurality of pixels for color components of a predetermined color space; wherein each pixel is for detecting at least one of the color components of the color space; wherein the center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or pixels of the same color in an image unit are located on lines extending in four directions of two perpendicular straight lines intersecting at the center of the image unit, wherein the method comprises: for each color component of the color space, an electrical signal is provided as an output from the image unit, the electrical signal aggregating contributions of pixels for the color component of the plurality of pixels included in each image unit.
The advantages provided by the fourth aspect of the present invention are similar to those provided by the first aspect and will not be described in detail here.
According to a first implementation form of the fourth aspect, the charge generated by the photosensitive elements of the pixels in each image unit is accumulated in a physical process.
With reference to the fourth aspect of the invention itself or the first implementation of the fourth aspect, according to a second implementation of the fourth aspect, the charge generated by the photosensitive elements of the pixels in each image unit is accumulated in a common charge storage structure.
According to a third implementation form of the fourth aspect as such or the first or second implementation form of the fourth aspect, the charge generated by the photosensitive elements of at least two pixels in each image unit is stored in a common floating diffusion node.
With reference to the fourth aspect of the invention itself or the first to third implementation forms of the fourth aspect, according to a fourth implementation form of the fourth aspect, the charges generated by the photosensitive elements of the at least two pixels in each image unit are stored in a common floating diffusion node, and the at least two floating diffusion nodes in each image unit are electrically coupled.
With reference to the fourth aspect of the invention itself or the first to fourth implementations of the fourth aspect, according to a fifth implementation of the fourth aspect, the photosensitive elements of the pixels of different colors in each image unit are controlled to output charges at different times.
It will be appreciated that any feature described in connection with one aspect of the invention may be used in other aspects as desired.
Drawings
For a description of the technical solutions of the embodiments of the present application, reference is made to the accompanying drawings.
Fig. 1 shows an electronic device with an image sensor comprising an array of pixels.
Fig. 2 shows a color filtering arrangement according to a bayer pattern.
Fig. 3 shows interpolation for providing missing color values.
Fig. 4 shows color artifacts of Color Zone Plate (CZP) due to bayer pattern demosaicing.
Fig. 5 shows a color filtering arrangement according to an embodiment of the present application.
Fig. 6 illustrates how the color values of pixels are determined by averaging within each image unit without interpolating between the image units, according to an embodiment of the present application.
FIG. 7 illustrates reduced color artifacts of Color Zone Plate (CZP) according to embodiments of the present application.
Fig. 8 illustrates sharing of floating diffusion nodes (FD) of the color filtering arrangement shown in fig. 5 according to an embodiment of the present application.
Fig. 9 illustrates how color values of image elements are determined without interpolation between the image elements of the color filtering arrangement shown in fig. 5, according to an embodiment of the present application.
Fig. 10 illustrates color artifacts for Color Zone Plates (CZPs) for various color filtering arrangements according to some embodiments of the present application.
Fig. 11 illustrates an exemplary color filtering arrangement having four colors according to some embodiments of the present application.
Fig. 12 illustrates an on-chip lens (OCL) pattern of the color filtering arrangement shown in fig. 5 according to an embodiment of the present application.
Fig. 13 shows how the color values of the image units are determined in a two-step binning of the color filtering arrangement shown in fig. 5 according to an embodiment of the present application.
Fig. 14 shows how the color values of the image units are determined in a two-step binning of the color filtering arrangement shown in fig. 5, according to an embodiment of the present application.
Fig. 15 shows how the color values of the image units are determined in a two-step binning of the color filtering arrangement shown in fig. 5, according to an embodiment of the present application.
Fig. 16 shows how the color values of an image unit are determined in a two-step binning of a color filter arrangement with four colors, according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an electronic device 100 including an image sensor 105. The image sensor 105 may include a pixel array 110 and readout circuitry 115 for retrieving image signals from pixels. The readout circuit 115 includes a row control circuit 120, a column control circuit 130, and a control circuit 150.
The image signals corresponding to the pixels (cells) are generated by photosensitive elements (also referred to as sensor elements; e.g., photodiodes) of the pixels and read out by readout circuits in rows and columns. The photosensitive element is not limited to a photodiode. For example, a photoconductor film made of an organic material may be used as the photosensitive element. In general, in a readout operation, a particular row of pixels in the array may be selected by the row control circuit 120, and the image signals generated by the pixels in that row are read out column by column along the column lines by the column control circuit 130. Analog-to-digital conversion (ADC) circuitry may be provided to convert the image signal from a pixel to a digital value.
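The row-by-row, column-by-column readout described above can be sketched as follows (illustrative only, not part of the original application; the function names and the 10-bit ADC range are assumptions):

```python
# Sketch of rolling readout: the row control circuit selects one row at a
# time, the column control circuit reads the selected row column by
# column, and an ADC quantises each analogue sample.
def adc(voltage, vmax=1.0, bits=10):
    """Quantise an analogue sample in [0, vmax] to a digital code."""
    code = int(voltage / vmax * ((1 << bits) - 1))
    return max(0, min(code, (1 << bits) - 1))

def read_frame(pixel_array):
    frame = []
    for row in pixel_array:                  # row control: select each row in turn
        frame.append([adc(v) for v in row])  # column control + ADC, column by column
    return frame
```

For example, `read_frame([[0.0, 1.0], [0.5, 0.25]])` yields the digital frame `[[0, 1023], [511, 255]]`.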
The output of the photosensitive element is only sensitive to the intensity of the light and does not provide color information. Thus, when a color image needs to be captured, a color filter array (color filter array, CFA) may be provided. The color filter array includes color filter cells on at least one of the pixels of the pixel array. The color filter units generally include a red color filter unit, a green color filter unit, and a blue color filter unit. The red filter element passes red light, and therefore, pixels (sometimes referred to as red pixels) behind the red filter element sense (and thus detect) the red light. Similarly, pixels behind a green color filter element (sometimes referred to as a green pixel) detect green light, and pixels behind a blue color filter element (sometimes referred to as a blue pixel) detect blue light. Sometimes, a broadband color filter unit passing two or more colors may be used.
The bayer color filter pattern (also referred to as bayer mosaic pattern, bayer pattern, etc.) is one typical arrangement of color filter cells in a color filter array, and includes a repeating unit of 2×2 pixels in which the two pixels in diagonal positions are green, one of the remaining two pixels is red, and the other is blue. (The term bayer pattern may also refer to a pattern of cells comprising 2×2 pixels.) Fig. 2 illustrates such a bayer color filter pattern. The bayer pattern uses twice as many green pixels as red and blue pixels. This is to provide greater precision for green, to which the human eye is more sensitive than to red and blue.
In the bayer pattern (or a typical variation thereof), pixels having red color filter elements provide only red signals, but not green or blue sensor signals. Similarly, pixels with green color filter elements provide only green sensor signals, while pixels with blue color filter elements provide only blue sensor signals. Therefore, the missing color values must be determined by interpolation. The process of obtaining the color value of each color of all pixels in the pixel array by interpolation is called demosaicing. In other words, a demosaicing algorithm is used to provide green and blue sensor signals for red pixels (pixels with red color filter elements); providing red and blue sensor signals for the green pixel; and provides red and green sensor signals for the blue pixels.
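As a concrete illustration of such a demosaicing algorithm, the following sketch (not part of the original application) implements the simple bilinear interpolation mentioned in connection with fig. 4: each missing color value is taken as the average of the same-color pixels in the surrounding 3×3 window:

```python
# Simple bilinear demosaicing of a bayer mosaic. For each pixel, the
# measured color is kept; each missing color is the average of the
# same-color pixels in the surrounding 3x3 window.
BAYER = [["G", "R"],      # 2x2 repeating bayer unit
         ["B", "G"]]

def demosaic_bilinear(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    cfa = lambda r, c: BAYER[r % 2][c % 2]   # color filter at (r, c)
    out = [[{} for _ in range(w)] for _ in range(h)]
    for r in range(h):
        for c in range(w):
            for color in "RGB":
                if cfa(r, c) == color:
                    out[r][c][color] = mosaic[r][c]  # measured value
                else:
                    # average same-color neighbours in the 3x3 window
                    vals = [mosaic[rr][cc]
                            for rr in range(max(r - 1, 0), min(r + 2, h))
                            for cc in range(max(c - 1, 0), min(c + 2, w))
                            if cfa(rr, cc) == color]
                    out[r][c][color] = sum(vals) / len(vals)
    return out
```

This is the kind of per-pixel interpolation that the pixel array of the present application is designed to avoid.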
In one example, fig. 3 is a schematic diagram showing how the pixel values of each color are interpolated to provide color values for pixels whose sensor signals lack that color.
However, demosaicing the bayer pattern by interpolation has several drawbacks. First, resolution is low, because in each 4-pixel bayer unit only one pixel detects red, one detects blue, and two detect green. Second, interpolation requires additional signal processing power. Furthermore, color artifacts may occur in edge regions where the color changes abruptly.
Fig. 4 shows the simulation results of demosaicing of an image of a well-known Color Zone Plate (CZP) obtained with an image sensor having a bayer pattern. (in the example shown, a simple bilinear interpolation based on averaging two or four nearest pixels of the same color is used.)
The CZP is a concentric pattern of circles in which the gray level varies according to a sinusoidal function whose frequency increases radially from the center. Components of high spatial frequency (small pitch) relative to the screen resolution produce off-center circular patterns by aliasing. Although the CZP is a gray-level pattern, demosaicing produces false colors for those circles that result from aliasing. (Although this is not apparent in black-and-white drawings, the applicant is prepared to submit a color image if inspection is required.) This is because there are twice as many green pixels as red or blue pixels, so the green pixels have a different pitch (spatial frequency) than the red or blue pixels.
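For reference, a gray-level zone plate of this kind can be generated as follows (illustrative sketch, not part of the original application; the constant `k` controlling the frequency sweep is an arbitrary choice):

```python
# Generate a gray-level zone plate: g(x, y) follows sin(k * r^2), so the
# spatial frequency grows radially from the center of the image.
import math

def zone_plate(size=256, k=0.05):
    cx = cy = (size - 1) / 2.0
    img = [[0.5 + 0.5 * math.sin(k * ((x - cx) ** 2 + (y - cy) ** 2))
            for x in range(size)]
           for y in range(size)]
    return img  # gray levels in [0, 1]
```

Rendering this image through a simulated sensor and demosaicing pipeline reproduces the aliasing-induced off-center circles discussed above.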
In view of the above drawbacks caused by interpolation of the Bayer pattern, the present application provides a pixel array configuration and a pixel circuit that do not require interpolation between image units. (For the Bayer pattern, image units correspond to pixels. According to embodiments of the present application, an image unit corresponds to a plurality of pixels, as described in more detail below.)
Fig. 5 illustrates a color filter array according to an embodiment of the present application. Each image unit of the Bayer pattern shown in fig. 2 is subdivided into 16 (or, in general, a plurality of) pixels, wherein each pixel may be provided with a photosensitive element such as a photodiode. It should be noted that while the term pixel originally refers to a unit in an image ("picture element"), it may also refer to the corresponding sensor region. Whereas image units correspond one-to-one to pixels in the Bayer pattern, in the context of the present application the region corresponding to each photodiode is referred to as a pixel. In the example of fig. 5, each image unit corresponds to 16 pixels. As described below, the values of the pixels in an image unit are aggregated to provide one value for each color component. (Thus, an "image unit" is still a unit in an image, but it may also refer to the corresponding sensor area covering multiple pixels.)
Given this subdivision of image units, regions corresponding to four pixels (instead of four image units as in a conventional Bayer pattern) could be used as the elements of a Bayer pattern. Such smaller Bayer pattern units would increase resolution and thereby reduce artifacts, but would also increase the burden of the interpolation process. The present application provides a different approach that does not rely on interpolation between image units.
Fig. 6 schematically shows how the color filter array of fig. 5 is used, without interpolation, to provide color values over the entire set of pixels for each of the RGB colors. According to the present embodiment, since each image unit has (four) red pixels, no interpolation between image units is required: the red value of each image unit may be obtained by averaging the values of the red pixels in that image unit. Similarly, for green and blue, the color values may be obtained by averaging the values of the corresponding pixels in each image unit. (It should be noted that since the number of pixels of each color in an image unit is constant (16 pixels in total in this example), the average of the pixel values equals their sum divided by a constant scale factor.)
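The per-unit averaging described above can be sketched as follows, using the 4×4 color arrangement of fig. 5 as written out explicitly in claim 14; this is an illustrative model, not the applicant's hardware implementation:

```python
import numpy as np

# 4x4 color layout of one image unit (from fig. 5 / claim 14).
PATTERN = np.array([list("GBRG"),
                    list("RGGB"),
                    list("BGGR"),
                    list("GRBG")])

def unit_color_values(pixels):
    """Average the same-color pixels inside every 4x4 image unit.

    pixels: (4M, 4N) array of raw sensor values. Returns one value per
    color per image unit - no interpolation between units is needed,
    because every unit contains pixels of every color.
    """
    h, w = pixels.shape

    def block_sum(a):
        # Sum over each 4x4 block of the array.
        return a.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

    values = {}
    for color in "RGB":
        mask = np.tile(PATTERN == color, (h // 4, w // 4))
        # Average = (sum of same-color pixels) / (count of same-color pixels).
        values[color] = (block_sum(np.where(mask, pixels, 0.0))
                         / block_sum(mask.astype(float)))
    return values
```

Each unit averages 4 red, 8 green and 4 blue pixels; since these counts are fixed, summing charges in hardware and dividing by a constant scale factor gives the same result.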
According to the present embodiment, since every color is detected in every image unit, the resolution is higher than that of the Bayer pattern of fig. 2, in which only one color is detected in each image unit. For example, of the 16 image units shown in fig. 3, red light intensity is detected in only 4 image units of the Bayer pattern, whereas according to this embodiment, as shown in fig. 6, red light intensity is detected in each of the 16 image units. Furthermore, the processing required for interpolation between image units is eliminated, and there are no color artifacts due to such interpolation.
Fig. 7 shows a simulated image of the well-known Circular Zone Plate (CZP) obtained with an image sensor having the color filter arrangement of the present embodiment. In contrast to fig. 4, there are no or few color artifacts. (Although this is not apparent in black-and-white drawings, the applicant is ready to submit color images if inspection is required.)
The pixel array according to the present embodiment can be summarized in general terms as follows:
a pixel array for an image sensor, the pixel array comprising a plurality of image units,
wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
Wherein each pixel is for detecting at least one of the color components of the color space;
wherein, for each color component of the color space, an electrical signal aggregating the contributions of the pixels for that color component among the plurality of pixels included in each image unit is provided as output from the image units;
wherein
The center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or
The pixels of the same color in an image unit are located on lines extending in four directions of two perpendicular straight lines intersecting at the center of the image unit.
According to an alternative embodiment, at least some of the 16 (or, in general, multiple) pixels in each image unit share a floating diffusion node (FD). The floating diffusion node (i.e., a semiconductor region with doped impurities) is a node for storing the charge generated by the photodiodes, and is also referred to as a charge storage node or charge accumulation node. By sharing a floating diffusion node, the charges of the pixels can be added together without arithmetic operations. (As described above, such addition corresponds to averaging up to a scale factor.)
Fig. 8 illustrates an exemplary scheme for sharing a floating diffusion node. As can be seen from the enlarged view on the right, each group of 4 out of the 16 pixels in an image unit shares a floating diffusion node (shown as a black circle), and the 4 floating diffusion nodes in the image unit are electrically connected. This enables the summation (averaging) of the pixel values in the image unit.
Fig. 9 shows how the signals (charges) from the pixels of each color in an image unit (i.e., four pixels each for red and blue, or eight pixels for green) are aggregated to obtain the color values of the image unit (a process referred to as binning).
Since charges from photodiodes corresponding to different colors would otherwise be stored in the same floating diffusion node, the charges of the different colors may be collected at different times. That is, the charges from the photodiodes of the red pixels are transferred to the floating diffusion nodes simultaneously; after this charge is read out, the charges from the photodiodes of the green pixels are transferred to the floating diffusion nodes; and after that charge is read out, the charges from the photodiodes of the blue pixels are transferred to the floating diffusion nodes. The red, green and blue binning shown in fig. 9 can thus be regarded as three steps of collecting these color charges. (Of course, the order in which the charges of the different colors are collected is not limited to this example.)
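The time-multiplexed readout described above can be modeled as a per-color transfer/read/reset cycle on shared floating diffusion; this is a simplified abstraction of the pixel circuit, again assuming the fig. 5 arrangement:

```python
import numpy as np

# 4x4 color layout of one image unit (from fig. 5 / claim 14).
PATTERN = np.array([list("GBRG"), list("RGGB"),
                    list("BGGR"), list("GRBG")])

def sequential_readout(charges):
    """Read one 4x4 image unit color by color over shared FD nodes.

    charges: 4x4 array of photodiode charges. Because the pixels of a
    unit share (electrically coupled) floating diffusion nodes, each
    color must be transferred, read and reset in its own phase.
    """
    results = {}
    for color in ("R", "G", "B"):             # one phase per color
        fd = charges[PATTERN == color].sum()  # transfer same-color charge to FD
        results[color] = fd                   # read out the summed charge
        # (FD is reset here before the next color's transfer.)
    return results
```

Note that the charge addition happens by selecting which transfer gates open in each phase; no arithmetic is performed on the sensor.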
Color filter arrangement
The following design criteria may be advantageous for a color filter arrangement in an image unit subdivided into pixels:
(i) The center of gravity of the same color pixels in the image unit substantially coincides with the center of gravity of the image unit; and/or
(ii) Pixels of the same color in an image unit are located (in a symmetrical manner) on lines extending in four directions of two perpendicular straight lines intersecting at the center of the image unit.
The color filter array shown in figs. 5 and 6 is based on principle (ii) described above, as can be seen from the cross markings (i.e., crossed line segments) shown in fig. 6.
The color filter array of the present application is not limited to the arrangement shown in figs. 5 and 6. Fig. 10 shows various possible color filter arrangements in an image unit. The diagrams in (c)-(f) (which use YRB, i.e., yellow instead of the green of RGB) show groupings of two or four color filters whose shapes form halves or quarters of a circle or ellipse instead of squares. Such groupings are described in more detail below. In the case of (f), the image units (shown in bold lines) have an irregular shape so that the pixels can be used in pairs.
Each of (a)-(f) in fig. 10 shows a simulated image of a circular zone plate (CZP) (left) and its emphasized color components (right). The degree to which color artifacts are reduced varies. The inventors have found that the arrangement of (a), similar to the arrangement in figs. 5 and 6, is the most advantageous. (Although this is not apparent in black-and-white drawings, the applicant is ready to submit color images if inspection is required.)
Fig. 11 shows an exemplary color filter arrangement when four colors are used instead of the three colors of RGB or YRB.
Lens pattern for Phase Detection (PD)
In a pixel array for an image sensor, an on-chip lens (OCL) (also referred to as an on-chip microlens) may be provided for each pixel region in order to efficiently guide light incident on the pixel to the photosensitive region of the pixel. By the action of the on-chip lens, the position at which light impinges on the photosensitive element corresponds to the position at which that light passed through the main lens of the camera. (Without an on-chip lens, the position at which light impinges on the photosensitive element corresponds to a position in object space, regardless of where the light passed through the main lens.) This correspondence may be used for phase detection autofocus.
The principle of autofocus by phase detection (known as phase detection autofocus or phase difference detection autofocus, PD AF) is that light rays passing through opposite extremes of the camera's main lens strike the same location on the photosensitive element if the imaged object is in focus, but different locations if the object is out of focus. The direction and magnitude of this positional offset make it possible to determine in which direction and by how much the focus should be moved (whether the focus should be brought closer or farther, and by how much) to bring the object into focus. Specifically, pairs of sensor elements are used to detect light that has passed through the respective sides (e.g., left and right) of the main lens. A row of such sensor pairs essentially forms a pair of one-dimensional image sensors that provide two profiles corresponding to a linear portion of the imaged object (each profile corresponding to one side of the main lens through which the light has passed). Comparing the two profiles makes it possible to determine the offset in the position of the image of the object. Such rows of sensor pairs may be provided in the vertical and in the horizontal direction (which essentially provides vertical and horizontal one-dimensional image sensors).
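The profile comparison underlying PD AF can be sketched as a one-dimensional block match; the SAD criterion and the integer search window used here are illustrative assumptions, not the application's method:

```python
import numpy as np

def estimate_phase_shift(left, right, max_shift=8):
    """Return the integer shift (in pixels) that best aligns the two
    1-D profiles. Its sign tells the AF in which direction to move the
    focus; its magnitude, how far."""
    n = len(left)
    ref = left[max_shift:n - max_shift]
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        # Sum of absolute differences against the shifted second profile.
        err = np.abs(ref - right[max_shift + s:n - max_shift + s]).sum()
        if err < best_err:
            best, best_err = s, err
    return best
```

In a real AF system the two profiles come from the left-pupil and right-pupil pixel rows, and sub-pixel refinement would typically follow this integer search.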
For such phase detection autofocus it is advantageous to provide an on-chip lens covering two or four pixels of the same color. For example, an on-chip lens covering a pair of adjacent pixels causes light rays passing through the respective sides (e.g., left and right sides) of the main lens to be incident on the respective pixels of the pair (pupil division).
The color arrangement in some embodiments of the present application is suitable for an on-chip lens covering more than one pixel of the same color.
Fig. 12 shows some embodiments, wherein two or four pixels of the same color in the color filtering arrangement of fig. 5 (left) or its variants (middle, right) are covered by one on-chip lens.
As described above, exemplary groupings of pixels are also shown in fig. 10, where (c)-(f) show groupings of two or four color filters whose shapes form halves or quarters of a circle or ellipse instead of squares.
Half-masked pixels may also be used as phase detection pixels. Such phase detection pixels may be laid out more sparsely (e.g., one out of every 16 pixels).
Pixel circuit
In the embodiment described above with respect to fig. 8, groups of four pixels in the color filter arrangement of figs. 5 and 6 share a floating diffusion node, and four of these floating diffusion nodes are electrically connected. The present application is not limited to such a circuit; other configurations of the pixel circuit are also possible.
According to the embodiment shown in fig. 13, groups of four pixels in the color filter arrangement of figs. 5 and 6 share a floating diffusion node, and two of these floating diffusion nodes are electrically connected. As shown, in this embodiment the aggregation of contributions from the pixels in an image unit is a two-step process. First, the shared floating diffusion nodes gather the contributions of two pixels (each for red and blue) or four pixels (for green) within a 4×2 pixel region (step 1). This is a physical process that requires no arithmetic operations. The contributions from two adjacent regions are then digitally summed to obtain the value of the 4×4-pixel image unit. (It should be noted that although in fig. 13 regions of 3×2 rather than 4×2 pixels are outlined in red and blue, this is merely to highlight those pixels that contribute to the binning.)
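A sketch of this two-step aggregation, assuming each 4×4 unit of the fig. 5 arrangement is split into a left and a right 4×2 half (the exact grouping is defined by fig. 13, which is not reproduced here):

```python
import numpy as np

# 4x4 color layout of one image unit (from fig. 5 / claim 14).
PATTERN = np.array([list("GBRG"), list("RGGB"),
                    list("BGGR"), list("GRBG")])

def aggregate_two_step(unit):
    """Two-step binning of one 4x4 image unit (4x2 halves assumed).

    Step 1 (physical): each 4x2 half sums its same-color charges on a
    shared floating diffusion - 2 pixels each for R and B, 4 for G.
    Step 2 (digital): the two half-sums are added per color.
    """
    step1 = []
    for cols in (slice(0, 2), slice(2, 4)):
        half, pat = unit[:, cols], PATTERN[:, cols]
        step1.append({c: half[pat == c].sum() for c in "RGB"})
    return {c: step1[0][c] + step1[1][c] for c in "RGB"}
```

Because the two halves partition the unit, the result equals the one-step sum over the whole image unit; only the division of labor between the analog and digital domains differs.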
The specific features of the pixel array according to this embodiment can be summarized in general terms as follows:
each image unit is composed of 4×4 pixels; each 2×2 group of pixels within the 4×4 pixels shares a floating diffusion node, and two of the four floating diffusion nodes included in the image unit are electrically coupled, whereby the 16 pixels included in each image unit form two groups, each group including 8 pixels in which charge is accumulated,
wherein the electrical signals derived from the charges accumulated in each group of 8 pixels are summed and provided as output from the image unit.
Likewise, according to the embodiment shown in fig. 14, groups of four pixels in the color filter arrangement of figs. 5 and 6 share a floating diffusion node, and two of these floating diffusion nodes are electrically connected. This embodiment differs from the embodiment of fig. 13 in the locations of the floating diffusion nodes. First, the shared floating diffusion nodes gather the contributions of two pixels (each for red and blue) or four pixels (for green) within a 4×2 pixel region (step 1). This is similar to fig. 13, except that the pixels affected by the binning differ in position, which in turn changes which pixels are gathered in the next step. In step 2, contributions from the regions overlapping an image unit, specifically four regions (each for red and blue) or five regions (for green), are digitally summed. Overall, this amounts to aggregating contributions from a 6×6 pixel region (extending one pixel beyond each side of the 4×4 image unit). In this embodiment, the photosensitive elements, such as photodiodes, do not contribute only to the image unit to which they belong; signals from some pixels contribute to more than one image unit.
The specific features of the pixel array according to this embodiment can be summarized in general terms as follows:
wherein each image unit is composed of 4×4 pixels; each 2×2 group of pixels in the pixel array shares a floating diffusion node, and the floating diffusion nodes are electrically coupled in groups of two, thereby forming groups each comprising eight pixels in which charge is accumulated,
wherein the electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to each image unit are summed and provided as output from that image unit.
According to the embodiment shown in fig. 15, two floating diffusion nodes adjacent in the diagonal direction are electrically connected. In addition to a no-interpolation (demosaicing-free) mode similar to the above-described embodiments, the present embodiment provides a low-power-consumption mode.
In step 1, which is common to both modes, the contributions from the pixels in a group, i.e., four pixels (each for red and blue) or eight pixels (for green), are aggregated. This is a physical process (charge binning) achieved by the shared floating diffusion nodes and their diagonal coupling.
In step 2 of the no-interpolation mode, the processing is similar to that of fig. 14: to obtain the color values of a given image unit, the contributions from the groups overlapping that image unit, specifically four regions (each for red and blue) or five regions (for green), are digitally summed. Again, contributions from a region slightly larger than the image unit are aggregated, and some pixels contribute to more than one image unit.
In step 2 of the low-power-consumption mode, as shown in fig. 15, two adjacent regions that underwent diagonal binning in step 1 are aggregated. Although this step may be performed digitally, it may also be implemented in the analog domain: the charges accumulated in step 1 for the two regions to be aggregated in step 2 are read out to the column line at the same time (this can be achieved by controlling the switches of the readout circuit). This makes it possible to sum (average) the charges from the two regions without arithmetic operations. It should be noted that such analog charge accumulation by switching of the readout circuit is not limited to this embodiment, but is also applicable to other embodiments. The embodiment of fig. 15, however, supports it without additional switches, because the regions to be aggregated are vertically aligned.
The color value of each image unit is then determined by interpolation of the charges accumulated in step 2.
The specific features of the pixel array without interpolation mode according to the present embodiment can be summarized in general terms as follows:
each image unit is composed of 4×4 pixels; each 2×2 group of pixels in the pixel array shares a floating diffusion node, and the floating diffusion node of each 2×2 pixel group is electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels in which charge is accumulated,
wherein the electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to each image unit are summed and provided as output from that image unit.
The specific features of the pixel array in the low power consumption mode according to the present embodiment can be summarized in general terms as follows:
each image unit is composed of 4×4 pixels; each 2×2 group of pixels in the pixel array shares a floating diffusion node, and the floating diffusion node of each 2×2 pixel group is electrically coupled to a diagonally adjacent floating diffusion node, thereby forming groups each comprising eight pixels in which charge is accumulated,
wherein the electrical signals derived from the charges accumulated in two adjacent groups are summed, and the output from the image unit is derived by interpolation of these summed values.
Fig. 16 shows an embodiment in which the floating diffusion sharing of fig. 14 is applied to a pixel array having four colors. First, the shared floating diffusion nodes gather the contributions of two pixels (each for red and blue) or four pixels (each for green and yellow) within a 4×2 pixel region (step 1). In step 2, the contributions of four regions are digitally summed (for green, this operation is not required given the arrangement).
Various embodiments of pixel arrays and/or image sensors have been described above. Such a pixel array and/or image sensor may be used in an electronic device, such as a digital camera, which further comprises a lens member for directing incident light to the image sensor.
The electronic device may further comprise an autofocus member for the lens member. The autofocus member is configured to perform phase difference detection autofocus based on a pair of profiles obtained from a plurality of pairs of pixels. In some embodiments, the two pixels of each of the plurality of pairs may be covered by the same on-chip microlens.
Such an electronic device is not limited to a digital camera; it may be a video camera, a webcam, a mobile phone, a computer, or any other device for capturing images.
While various embodiments have been described above and shown in the drawings, the invention is not limited to the specific embodiments described or shown.
The unit divisions disclosed in the embodiments of the present application are not limiting, and embodiments may be configured with other component divisions.
Where appropriate, some of the functions may be implemented in the form of a computer program for causing a processor or computing device to perform one or more functions. For example, various arithmetic operations and/or various control functions of an electronic device (e.g., a camera) may be implemented as a computer program. The computer program may be embodied on a non-transitory computer-readable storage medium. The storage medium may be any medium that can store a computer program: a solid-state memory such as a USB drive, a flash drive, a read-only memory (ROM), or a random-access memory (RAM); a magnetic storage medium such as a removable or non-removable hard disk; or an optical storage medium such as an optical disc. The control functions may also be implemented in discrete or integrated circuit elements.
The above description is only intended to illustrate various embodiments of the present application and is not intended to limit the scope of the invention. Any modification that can readily be made by a person skilled in the art in view of the present disclosure shall fall within the scope of the present application. For example, individually disclosed measures may be combined in a single embodiment as required, provided that the measures are not mutually exclusive.

Claims (30)

1. A pixel array for an image sensor, wherein the pixel array comprises a plurality of image cells,
wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
wherein each pixel is for detecting at least one of the color components of the color space;
wherein, for each color component of the color space, an electrical signal aggregating the contributions of the pixels for that color component among the plurality of pixels included in each image unit is provided as output from the image units; wherein
The center of gravity of the pixels of the same color in a given image unit substantially coincides with the center of gravity of the given image unit; and/or
The pixels of the same color in a given image unit are located on lines extending in four directions of two perpendicular straight lines intersecting at the center of the given image unit.
2. The pixel array of claim 1, wherein the charge generated by the photosensitive elements of the pixels in each image cell is accumulated in a physical process.
3. A pixel array according to claim 1 or 2, wherein the charges generated by the photosensitive elements of the pixels in each image cell are accumulated in a common charge storage structure.
4. A pixel array according to any one of claims 1 to 3, wherein the charges generated by the photosensitive elements of at least two pixels in each image cell are stored in a common floating diffusion node.
5. A pixel array according to any one of claims 1 to 4, wherein the charges generated by the photosensitive elements of at least two pixels in each image cell are stored in common floating diffusion nodes, at least two floating diffusion nodes in each image cell being electrically coupled.
6. A pixel array according to any one of claims 1 to 5, wherein the photosensitive elements of the pixels of different colors in each image cell are controlled to output charge at different times.
7. The pixel array according to any one of claims 1 to 6, wherein each image cell is comprised of 4 x 4 pixels, each of the 2 x 2 groups of pixels in the 4 x 4 pixels sharing a floating diffusion node, the four floating diffusion nodes included in the image cell being electrically coupled.
8. The pixel array according to any one of claims 1 to 6, wherein each image cell is composed of 4 x 4 pixels, each 2 x 2 group of pixels within the 4 x 4 pixels shares a floating diffusion node, and two of the four floating diffusion nodes included in the image cell are electrically coupled, whereby the 16 pixels included in each image cell form two groups, each group including 8 pixels in which charge is accumulated,
wherein the electrical signals derived from the charges accumulated in each group of 8 pixels are summed and provided as output from the image cell.
9. The pixel array according to any one of claims 1 to 6, wherein each image cell is composed of 4 x 4 pixels, each 2 x 2 group of pixels in the pixel array shares a floating diffusion node, and the floating diffusion nodes are electrically coupled in groups of two, thereby forming groups each comprising eight pixels in which charge is accumulated,
wherein the output from a given image cell is provided by summing the electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to said given image cell.
10. The pixel array according to any one of claims 1 to 6, wherein each image cell is composed of 4 x 4 pixels, each of the groups of 2 x 2 pixels in the pixel array sharing a floating diffusion node, one floating diffusion node for a group of 2 x 2 pixels being electrically coupled to diagonally adjacent floating diffusion nodes, thereby forming groups, each group comprising eight pixels, charge being accumulated in the pixels,
wherein the output from a given image cell is provided by aggregating the electrical signals derived from the charges accumulated in the respective groups comprising pixels belonging to said given image cell.
11. The pixel array according to any one of claims 1 to 6, wherein each image cell is composed of 4 x 4 pixels, each of the groups of 2 x 2 pixels in the pixel array sharing a floating diffusion node, one floating diffusion node for a group of 2 x 2 pixels being electrically coupled to diagonally adjacent floating diffusion nodes, thereby forming groups, each group comprising eight pixels, charge being accumulated in the pixels,
wherein the electrical signals derived from the charges accumulated in two adjacent groups are summed, and the output from the image cell is derived by interpolation of these summed values.
12. The pixel array according to any one of claims 1 to 11, wherein the color space is a color space comprising three colors.
13. The pixel array according to any one of claims 1 to 12, wherein the color space is a color space comprising three colors A, B and C, each image unit consisting of 4 x 4 pixels,
wherein the color of the pixels in each image unit is arranged to:
ABCA
CAAB
BAAC
ACBA
or (b)
ACBA
BAAC
CAAB
ABCA。
14. The pixel array according to any one of claims 1 to 13, wherein the color space is an RGB color space comprising red (R), green (G) and blue (B), each image unit is composed of 4 x 4 pixels,
wherein the color of the pixels in each image unit is arranged to:
GBRG
RGGB
BGGR
GRBG
or (b)
GRBG
BGGR
RGGB
GBRG。
15. The pixel array according to any one of claims 1 to 13, wherein the color space is a YRB color space comprising yellow (Y), red (R) and blue (B), each image unit consisting of 4 x 4 pixels,
wherein the color of the pixels in each image unit is arranged to:
YBRY
RYYB
BYYR
YRBY
or (b)
YRBY
BYYR
RYYB
YBRY。
16. The pixel array according to any one of claims 1 to 11, wherein the color space is a color space comprising four colors.
17. The pixel array according to any one of claims 1 to 16, comprising at least one phase detection pixel.
18. The pixel array according to any one of claims 1 to 17, wherein at least one on-chip microlens covers more than one pixel of the same color.
19. The pixel array of claim 18, wherein,
the region covered by at least one on-chip microlens includes 2 x 2 pixels of the same color; and/or
the region covered by at least one on-chip microlens includes two adjacent pixels of the same color.
20. An image sensor, comprising:
the pixel array according to any one of claims 1 to 19;
and a readout circuit for reading out signals from the image cells of the pixel array.
21. The image sensor of claim 20, wherein the readout circuitry comprises:
a row control circuit for selecting rows of image cells of the pixel array;
a column control circuit for reading out, through column-by-column control, a signal from each image cell in the row selected by the row control circuit;
an analog-to-digital converter for converting a signal from each image unit into a digital signal;
And a control circuit for controlling the read operation of the read-out circuit.
22. An electronic device, comprising:
the image sensor of claim 20 or 21;
a lens member for guiding incident light to the image sensor;
and an autofocus member for the lens member.
23. The electronic device of claim 22, wherein the autofocus component is configured to perform phase difference detection autofocus based on a pair of profiles obtained from a plurality of pairs of pixels.
24. The electronic device of claim 23, wherein the two pixels of each pair of the plurality of pairs of pixels are covered by the same on-chip microlens.
25. A method of operating an image sensor comprising a pixel array, the pixel array comprising a plurality of image cells,
wherein each image unit comprises a plurality of pixels for color components of a predetermined color space;
wherein each pixel is for detecting at least one of the color components of the color space;
wherein
The center of gravity of the pixels of the same color in an image unit substantially coincides with the center of gravity of the image unit; and/or
The same color pixels in an image unit are located on lines extending in four directions of two perpendicular straight lines intersecting at the center of the image unit,
wherein the method comprises the following steps:
for each color component of the color space, an electrical signal is provided as output from the image unit, the electrical signal aggregating the contributions of the pixels for that color component among the plurality of pixels included in each image unit.
26. A method according to claim 25, wherein the charge generated by the photosensitive elements of the pixels in each image cell is accumulated in a physical process.
27. A method according to claim 25 or 26, wherein the charges generated by the photosensitive elements of the pixels in each image cell are accumulated in a common charge storage structure.
28. A method according to any one of claims 25 to 27, wherein the charges generated by the photosensitive elements of at least two pixels in each image cell are stored in a common floating diffusion node.
29. A method according to any one of claims 25 to 28, wherein the charges generated by the photosensitive elements of at least two pixels in each image cell are stored in common floating diffusion nodes, at least two floating diffusion nodes in each image cell being electrically coupled.
30. A method according to any one of claims 25 to 29, wherein the photosensitive elements of the pixels of different colors in each image cell are controlled to output charge at different times.
CN202180101102.0A 2021-08-10 2021-08-10 Demosaicing-free pixel array, image sensor, electronic device and operation method thereof Pending CN117751576A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/111653 WO2023015425A1 (en) 2021-08-10 2021-08-10 Pixel array, image sensor, and electronic device without demosaicing and methods of operation thereof

Publications (1)

Publication Number Publication Date
CN117751576A true CN117751576A (en) 2024-03-22

Family

ID=85200385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180101102.0A Pending CN117751576A (en) 2021-08-10 2021-08-10 Demosaicing-free pixel array, image sensor, electronic device and operation method thereof

Country Status (2)

Country Link
CN (1) CN117751576A (en)
WO (1) WO2023015425A1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7633538B2 (en) * 2006-09-11 2009-12-15 Aptina Imaging Corporation Color filter device and method for eliminating or reducing non-uniform color error caused by asymmetric color cross-talk in image sensor devices
US7745779B2 (en) * 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
JP4881987B2 (en) * 2009-10-06 2012-02-22 キヤノン株式会社 Solid-state imaging device and imaging device
US9686486B2 (en) * 2015-05-27 2017-06-20 Semiconductor Components Industries, Llc Multi-resolution pixel architecture with shared floating diffusion nodes
WO2018075703A1 (en) * 2016-10-20 2018-04-26 Invisage Technologies, Inc. Image sensors with enhanced wide-angle performance
US10306191B2 (en) * 2016-11-29 2019-05-28 Cista System Corp. System and method for high dynamic range image sensing

Also Published As

Publication number Publication date
WO2023015425A1 (en) 2023-02-16

Similar Documents

Publication Publication Date Title
CN212785522U (en) Image sensor and electronic device
US10015416B2 (en) Imaging systems with high dynamic range and phase detection pixels
CN110113546B (en) Imaging system and method for combining and reading out adjacent pixel units in pixel array
TWI387334B (en) Image sensor and image capture system with extended dynamic range
EP2179591B1 (en) Multiple component readout of image sensor
CN112736101B (en) Image sensor having shared microlenses between multiple sub-pixels
TWI388877B (en) Imaging device having first and second lens arrays
TWI499293B (en) Sampling and readout of an image sensor
JP5589146B2 (en) Imaging device and imaging apparatus
JP5572765B2 (en) Solid-state imaging device, imaging apparatus, and focusing control method
CN110177226B (en) Image sensing device
US10986316B2 (en) System and method for high dynamic range image sensing
CN109981939A (en) Imaging system
US20110115954A1 (en) Sparse color pixel array with pixel substitutes
CN210143059U (en) Image sensor integrated circuit, image sensor, and imaging system
EP2680591B1 (en) Color imaging device
JP2014032214A (en) Imaging apparatus and focus position detection method
EP2800355B1 (en) Imaging device, method for controlling imaging device, and control program
CN117751576A (en) Demosaicing-free pixel array, image sensor, electronic device and operation method thereof
JP2016139664A (en) Solid state image pickup device
CN111131729A (en) Image sensor having a two-color filter sharing a photodiode
EP2800374A1 (en) Imaging device, control method for imaging device, and control program
CN113141475A (en) Imaging system and pixel merging method
US20240205560A1 (en) Sensor including micro lenses of different sizes
CN216873256U (en) Image sensor for partially shielding phase focusing by shared micro-lens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination