CN210093338U - Image sensor with a plurality of pixels - Google Patents


Info

Publication number
CN210093338U
Authority
CN
China
Prior art keywords
floating diffusion
image sensor
transistor
pixel
imaging pixel
Prior art date
Legal status
Active
Application number
CN201921132625.8U
Other languages
Chinese (zh)
Inventor
鲁斯蒂·卫哲瑞德
Current Assignee
Semiconductor Components Industries LLC
Original Assignee
Semiconductor Components Industries LLC
Priority date
Filing date
Publication date
Priority claimed from U.S. patent application 16/225,704 (US 10785425 B2)
Application filed by Semiconductor Components Industries LLC
Application granted
Publication of CN210093338U

Landscapes

  • Solid State Image Pick-Up Elements (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An image sensor is disclosed that may include an array of imaging pixels. Each imaging pixel may have a photodiode that generates charge in response to incident light, a floating diffusion region, and a transfer transistor that transfers charge from the photodiode to the floating diffusion region. Each floating diffusion region may have an associated capacitance formed by a depletion region located between an n-type region and a p-type region in the semiconductor substrate. To enable selective binning in the voltage domain, multiple transistors may be coupled to the floating diffusion capacitors. A first plurality of transistors may selectively couple the floating diffusion capacitors to ground. A second plurality of transistors may selectively couple the floating diffusion capacitors to the floating diffusion capacitors of neighboring pixels. During readout, the voltages of multiple floating diffusion capacitors can be non-destructively combined on a single floating diffusion capacitor.

Description

Image sensor with a plurality of pixels
This application claims benefit of and priority to U.S. provisional patent application No. 62/738,072, filed September 28, 2018, which is hereby incorporated by reference in its entirety.
Technical Field
The present invention relates generally to image sensors, and more particularly to image sensors having pixel binning capability.
Background
Image sensors are often used in electronic devices such as cellular phones, cameras, and computers to capture images. In a typical arrangement, an electronic device is provided with an array of image pixels arranged in rows and columns of pixels. Each image pixel in the array includes a photodiode coupled to a floating diffusion region via a transfer gate. Each pixel receives incident photons (light) and converts these photons into an electrical signal. A column circuit is coupled to each pixel column for reading out pixel signals from the image pixels. Sometimes, image sensors are designed to provide images to electronic devices using the Joint Photographic Experts Group (JPEG) format.
Several image sensor applications require binning. In some conventional image sensors, binning is achieved by combining electrons from multiple pixels on a single node prior to readout. In other conventional image sensors, the digital signals from the pixels may be combined after readout. However, such conventional image sensors may have limited flexibility and/or a lower than desired frame rate.
Accordingly, it is desirable to provide an improved image sensor with variable pixel binning.
SUMMARY OF THE UTILITY MODEL
The utility model provides an improved image sensor with variable pixel binning.
An image sensor comprising an array of imaging pixels, the image sensor comprising: a photodiode for a first imaging pixel of the array of imaging pixels, wherein the photodiode is configured to generate charge in response to incident light; a first floating diffusion capacitor for the first imaging pixel; a transfer transistor configured to transfer charge from the photodiode to the first floating diffusion capacitor; a first transistor configured to selectively couple the first floating diffusion capacitor to ground; and a second transistor configured to selectively couple the first floating diffusion capacitor to a second floating diffusion capacitor of a second imaging pixel in the imaging pixel array.
Drawings
FIG. 1 is a schematic diagram of an exemplary electronic device having an image sensor, according to one embodiment.
Fig. 2 is a schematic diagram of an exemplary pixel array and associated readout circuitry for reading out image signals in an image sensor, according to one embodiment.
Fig. 3 is a circuit diagram of an exemplary imaging pixel according to one embodiment.
FIG. 4 is a schematic diagram illustrating the concept of pixel binning in the voltage domain, according to one embodiment.
Fig. 5 is a circuit diagram of an exemplary image sensor having transistors capable of selective binning in the voltage domain, according to one embodiment.
Fig. 6 is a circuit diagram of the image sensor of fig. 5 in an exemplary 1 x 1 binning mode in which each pixel is read out individually, according to one embodiment.
Fig. 7 is a circuit diagram of the image sensor of fig. 5 in an exemplary 2 x 2 binning mode in which pixel signals from each 2 x 2 imaging pixel group are binned in the voltage domain prior to readout according to one embodiment.
Fig. 8 is a circuit diagram of the image sensor of fig. 5 in an exemplary 4 x 4 binning mode in which pixel signals from each 4 x 4 imaging pixel group are binned in the voltage domain prior to readout according to one embodiment.
FIG. 9 is a flowchart of exemplary method steps for operating the image sensor of FIG. 5, according to one embodiment.
Fig. 10 is a cross-sectional side view of an exemplary image sensor showing how isolated p-well regions can be used to isolate floating diffusion regions, according to one embodiment.
FIG. 11 is a cross-sectional side view of an exemplary image sensor showing how deep trench isolation can be used to isolate floating diffusion regions, according to one embodiment.
Fig. 12 is a cross-sectional side view of an exemplary image sensor showing how the floating diffusion region may be isolated by a p-well and selectively coupled to ground by an Indium Gallium Zinc Oxide (IGZO) transistor, according to one embodiment.
Fig. 13 is a cross-sectional side view of an exemplary image sensor showing how the floating diffusion region may be isolated by a p-well and selectively coupled to ground by a Complementary Metal Oxide Semiconductor (CMOS) transistor, in accordance with one embodiment.
Fig. 14 is a state diagram illustrating an exemplary binning mode of an image sensor having circuits of the type shown in fig. 5-8, according to one embodiment.
Fig. 15 is a circuit diagram showing how an exemplary readout integrated circuit may include transistors capable of pixel binning in the voltage domain, according to one embodiment.
Detailed Description
Embodiments of the present invention relate to an image sensor. It will be understood by those skilled in the art that the exemplary embodiments of the present invention may be practiced without some or all of the specific details set forth herein. In other instances, well-known operations have not been described in detail to avoid unnecessarily obscuring embodiments of the invention.
Electronic devices such as digital cameras, computers, cellular telephones, and other electronic devices may include an image sensor that captures incident light to capture an image. The image sensor may include an array of pixels. Pixels in an image sensor may include a photosensitive element, such as a photodiode that converts incident light into an image signal. The image sensor may have any number (e.g., hundreds or thousands or more) of pixels. A typical image sensor may, for example, have hundreds of thousands or millions of pixels (e.g., mega pixels). The image sensor may include a control circuit (such as a circuit for operating the pixels) and a readout circuit for reading out an image signal corresponding to the charge generated by the photosensitive element.
FIG. 1 is a schematic diagram of an exemplary imaging and response system including an imaging system that captures images using an image sensor. The system 100 of fig. 1 may be an electronic device, such as a camera, cell phone, video camera, or other electronic device that captures digital image data, may be a vehicle security system (e.g., an active braking system or other vehicle security system), or may be a surveillance system.
As shown in fig. 1, system 100 may include an imaging system (such as imaging system 10) and a host subsystem (such as host subsystem 20). The imaging system 10 may include a camera module 12. The camera module 12 may include one or more image sensors 14 and one or more lenses.
Each image sensor in the camera module 12 may be the same, or there may be different types of image sensors in a given image sensor array integrated circuit. During image capture operations, each lens may focus light onto an associated image sensor 14 (such as the image sensor of fig. 2). Image sensor 14 may include light sensitive elements (i.e., pixels) that convert light into digital data. An image sensor may have any number (e.g., hundreds, thousands, millions, or more) of pixels. A typical image sensor may, for example, have millions of pixels (e.g., several mega pixels). For example, the image sensor 14 may include a bias circuit (e.g., a source follower load circuit), a sample and hold circuit, a Correlated Double Sampling (CDS) circuit, an amplifier circuit, an analog-to-digital converter circuit, a data output circuit, a memory (e.g., a buffer circuit), an addressing circuit, and the like.
Still image data and video image data from the camera sensor 14 may be provided to the image processing and data formatting circuit 16 via path 28. The image processing and data formatting circuit 16 may be used to perform image processing functions such as data formatting, adjusting white balance and exposure, video image stabilization, face detection, and the like. The image processing and data formatting circuitry 16 may also be used to compress raw camera image files as needed (e.g., into a joint photographic experts group or JPEG format). In a typical arrangement, sometimes referred to as a system-on-a-chip (SOC) arrangement, the camera sensor 14 and the image processing and data formatting circuit 16 are implemented on a common semiconductor substrate (e.g., a common silicon image sensor integrated circuit die). The camera sensor 14 and the image processing circuit 16 may be formed on separate semiconductor substrates, if desired. For example, the camera sensor 14 and the image processing circuit 16 may be formed on separate substrates that have been stacked.
Imaging system 10 (e.g., image processing and data formatting circuitry 16) may communicate the acquired image data to host subsystem 20 via path 18. Host subsystem 20 may include processing software for detecting objects in images, detecting movement of objects between image frames, determining distances of objects in images, filtering, or otherwise processing images provided by imaging system 10.
The system 100 may provide a number of advanced functions for the user, if desired. For example, in a computer or advanced cellular telephone, the user may be provided with the ability to run user applications. To implement these functions, the host subsystem 20 of the system 100 may have input-output devices 22 (such as a keypad, input-output ports, joystick, and display) and storage and processing circuitry 24. The storage and processing circuitry 24 may include volatile memory and non-volatile memory (e.g., random access memory, flash memory, hard disk drives, solid state drives, etc.). The storage and processing circuitry 24 may also include a microprocessor, microcontroller, digital signal processor, application specific integrated circuit, or the like.
An example of the arrangement of the camera module 12 of fig. 1 is shown in fig. 2. As shown in fig. 2, the camera module 12 includes an image sensor 14 and control and processing circuitry 44. Control and processing circuitry 44 may correspond to image processing and data formatting circuitry 16 in fig. 1. Image sensor 14 may include an array of pixels 32, such as pixels 100 (sometimes referred to herein as image sensor pixels, imaging pixels, or image pixels 100), and may also include control circuitry 40 and 42. Control and processing circuitry 44 may be coupled to row control circuitry 40 and may be coupled to column control and sense circuitry 42 via data paths 26. Row control circuitry 40 may receive row addresses from control and processing circuitry 44 and may supply corresponding row control signals (e.g., dual conversion gain control signals, pixel reset control signals, charge transfer control signals, halo control signals, row select control signals, or any other desired pixel control signals) to image pixels 100 via control paths 36. Column control and readout circuitry 42 may be coupled to columns of pixel array 32 via one or more conductive lines, such as column line 38. Column lines 38 may be coupled to each column of image pixels 100 in image pixel array 32 (e.g., each column of pixels may be coupled to a corresponding column line 38). Column lines 38 may be used to read out image signals from image pixels 100 and supply bias signals (e.g., bias currents or bias voltages) to image pixels 100. During an image pixel readout operation, a row of pixels in image pixel array 32 may be selected using row control circuitry 40, and image data associated with image pixels 100 of the row of pixels may be read out on column lines 38 by column control and readout circuitry 42.
Column control and readout circuitry 42 may include column circuitry such as column amplifiers for amplifying signals read out of array 32, sample and hold circuitry for sampling and storing signals read out of array 32, analog-to-digital converter circuitry for converting read out analog signals to corresponding digital signals, and column memory for storing the read out signals and any other desired data. Column control and readout circuitry 42 may output the digital pixel values over lines 26 to control and processing circuitry 44.
Array 32 may have any number of rows and columns. In general, the size of the array 32 and the number of rows and columns in the array 32 will depend on the particular implementation of the image sensor 14. Although rows and columns are generally described herein as horizontal and vertical, respectively, rows and columns may refer to any grid-like structure (e.g., features described herein as rows may be arranged vertically and features described herein as columns may be arranged horizontally).
Image array 32 may be provided with a color filter array having a plurality of color filter elements that allow a single image sensor to sample different colors of light. For example, image sensor pixels such as those in array 32 may be provided with a color filter array that allows a single image sensor to sample red, green, and blue light (RGB) using corresponding red, green, and blue image sensor pixels arranged in a Bayer mosaic pattern. The Bayer mosaic pattern consists of a repeating unit cell of 2 x 2 image pixels, where two green image pixels are diagonally opposite each other and adjacent to a red image pixel that is diagonally opposite a blue image pixel. In another suitable example, the green pixels in a Bayer pattern are replaced with broadband image pixels having broadband color filter elements (e.g., transparent color filter elements, yellow color filter elements, etc.). In yet another example, the image sensor may be a monochrome sensor, where each pixel is covered by the same type of color filter element (e.g., a transparent color filter element). These examples are merely exemplary, and in general, color filter elements of any desired color and any desired pattern may be formed over any desired number of image pixels 100.
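The Bayer layout described above can be sketched in a few lines of Python (a hypothetical illustration, not part of the patent; the GRBG phase chosen here is one of several equivalent phases):

```python
# Hypothetical sketch of the repeating 2 x 2 Bayer unit cell described
# above: green pixels on one diagonal, red and blue on the other.
def bayer_color(row, col):
    """Return the color filter element for the pixel at (row, col)."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

# A small 4 x 4 patch of the resulting color filter array.
patch = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
```

Swapping broadband (e.g., transparent or yellow) filter elements for the greens, as the text mentions, would only change the strings returned for the diagonal positions.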
If desired, the array 32 may be part of a stacked die arrangement, wherein the pixels 100 of the array 32 are divided between two or more stacked substrates. In such an arrangement, each of the pixels 100 in the array 32 may be divided between the two dies at any desired node within the pixel. For example, a node such as a floating diffusion node may be formed over both dies. A pixel circuit including a photodiode and circuitry coupled between the photodiode and a desired node (such as a floating diffusion node in this example) may be formed on the first die, and the remaining pixel circuits may be formed on the second die. The desired node may be formed on (i.e., as part of) a coupling structure (such as a conductive pad, a micro-pad, a conductive interconnect structure, or a conductive via) that connects the two dies. The coupling structure may have a first portion on a first die and a second portion on a second die before the two dies are bonded. The first die and the second die may be bonded to each other such that the first portion of the coupling structure and the second portion of the coupling structure are bonded together and electrically coupled. If desired, the first and second portions of the coupling structure may be compressively bonded to one another. However, this is merely illustrative. If desired, the first and second portions of the coupling structure formed on the respective first and second die may be bonded together using any metal-to-metal bonding technique, such as soldering or welding.
As described above, the desired node in the pixel circuit that is divided into two dies may be a floating diffusion node. Alternatively, the desired node in the pixel circuit that is divided over the two dies may be a node between the floating diffusion region and the gate of the source follower transistor (i.e., the floating diffusion node may be formed on the first die on which the photodiode is formed while the coupling structure may connect the floating diffusion node to the source follower transistor on the second die), a node between the floating diffusion region and the source-drain node of the transfer transistor (i.e., the floating diffusion node may be formed on the second die on which the photodiode is not provided), a node between the source-drain node of the source follower transistor and the row select transistor, or any other desired node of the pixel circuit.
In general, the array 32, row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be divided between two or more stacked substrates. In one example, the array 32 may be formed in a first substrate, and the row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be formed in a second substrate. In another example, the array 32 may be divided between a first substrate and a second substrate (using one of the above-described pixel division schemes), and the row control circuitry 40, column control and readout circuitry 42, and control and processing circuitry 44 may be formed in a third substrate.
The image sensor may be implemented in a vehicle security system. In a vehicle security system, images captured by an image sensor may be used by the vehicle security system to determine environmental conditions around the vehicle. For example, vehicle safety systems may include systems such as parking assist systems, automatic or semi-automatic cruise control systems, automatic braking systems, collision avoidance systems, lane keeping systems (sometimes referred to as lane drift avoidance systems), pedestrian detection systems, and the like. In at least some cases, the image sensor may form part of a semi-autonomous or autonomous unmanned vehicle.
To improve the performance of the image sensor, the image sensor may have binning capability. FIG. 3 illustrates an exemplary imaging pixel that may be included in an image sensor with selective pixel binning. As shown in fig. 3, the pixel 100 may include a photodiode 102 (PD). A transfer transistor 104 (TX) may be coupled to the photodiode. When the transfer transistor 104 is enabled, charge can be transferred from the photodiode 102 to the associated floating diffusion region 106 (FD). As shown, the floating diffusion region may have an associated capacitance CFD. The capacitance CFD may be formed by a depletion region in the semiconductor substrate between an n-type region and a p-type region, and is sometimes referred to as a floating diffusion capacitor (or floating diffusion region capacitor) CFD. The n-type region may form the upper plate of the floating diffusion capacitor, and the p-type region may form the lower plate. A reset transistor 108 (RST) may be coupled between the floating diffusion region 106 and a bias voltage source terminal 110. When the reset transistor 108 is enabled, the voltage of the floating diffusion region 106 may be reset.
The floating diffusion region 106 may be coupled to the gate of a source follower transistor 112 (SF). The source follower transistor is coupled between bias voltage source terminal 110 and a row select transistor 114 (RS). When the row select transistor 114 is enabled, the output voltage VOUT may be provided to column output line 116.
The pixel 100 may also include an anti-blooming transistor 118 (AB) coupled between the photodiode 102 and a bias voltage source terminal 120. When the anti-blooming transistor 118 is enabled, charge from the photodiode 102 may be cleared to the bias voltage source terminal 120.
The example of fig. 3 is merely exemplary. The imaging pixel can have any desired transistor architecture. For example, an imaging pixel may include a charge storage region (e.g., a storage capacitor, a storage gate, a storage diode, etc.), an imaging pixel may include a dual conversion gain capacitor and/or a dual conversion gain transistor, etc.
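As a rough behavioral model of the pixel of FIG. 3 (a sketch under our own simplifying assumptions: quantities expressed in volts, a unity-gain source follower, and method names that do not come from the patent):

```python
# Behavioral sketch of the FIG. 3 pixel: integrate, reset, transfer, read.
class Pixel:
    def __init__(self, vdd=2.8):
        self.vdd = vdd           # bias voltage source terminal (110)
        self.pd_charge = 0.0     # photodiode 102 charge, expressed as volts
        self.fd_voltage = 0.0    # voltage on floating diffusion capacitor CFD

    def integrate(self, signal):
        """Accumulate photo-generated charge on the photodiode."""
        self.pd_charge += signal

    def reset_fd(self):
        """Enable reset transistor 108 (RST): FD returns to the bias level."""
        self.fd_voltage = self.vdd

    def transfer(self):
        """Enable transfer transistor 104 (TX): charge moves onto CFD,
        pulling the floating diffusion voltage down from its reset level."""
        self.fd_voltage -= self.pd_charge
        self.pd_charge = 0.0

    def read(self):
        """Source follower 112 + row select 114: non-destructive readout."""
        return self.fd_voltage

pixel = Pixel()
pixel.integrate(0.5)   # 0.5 V worth of photocharge
pixel.reset_fd()
pixel.transfer()
```

Note that `read()` does not alter the floating diffusion voltage; this non-destructive property is what the voltage-domain binning below relies on.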
To allow selective pixel binning, transistors that enable binning in the voltage domain may be included in the image sensor. The floating diffusion regions of adjacent pixels may be coupled together for non-destructive binning. For example, additional transistors may be incorporated that allow selective summing of the voltages on the floating diffusion regions of different pixels.
Fig. 4 is a schematic diagram showing the concept of pixel binning in the voltage domain. For example, four imaging pixels may have respective floating diffusion capacitors CFD1, CFD2, CFD3, and CFD4. Switches (e.g., transistors) such as switches 122, 124, and 126 may be coupled between the floating diffusion regions. Each floating diffusion region may have its own respective voltage. Closing a switch, however, causes the voltage of one floating diffusion region to affect the voltage of the adjacent floating diffusion region.
For example, consider an example in which CFD1 has an associated voltage V1 (e.g., 1 V), CFD2 has an associated voltage V2 (e.g., 2 V), CFD3 has an associated voltage V3 (e.g., 3 V), and CFD4 has an associated voltage V4 (e.g., 4 V). When switches 122, 124, and 126 are all open, each floating diffusion region holds its own voltage. However, if switch 122 is closed, the voltage on CFD2 becomes V2 + V1 (not just V2). If both switches 122 and 124 are closed, the voltage on CFD3 becomes V3 + V2 + V1. If switches 122, 124, and 126 are all closed, VOUT = V4 + V3 + V2 + V1. If the switches are then all reopened, each floating diffusion region returns to its original voltage.
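The switch arithmetic above can be sketched as follows (a hypothetical model of FIG. 4's behavior as the text describes it, with switch 122 between CFD1 and CFD2, switch 124 between CFD2 and CFD3, and switch 126 between CFD3 and CFD4):

```python
# Sketch of voltage-domain binning via the FIG. 4 switch chain: closing a
# switch stacks the neighboring capacitor's voltage onto a node, and
# reopening all switches restores the original per-pixel voltages.
def node_voltages(v, closed):
    """v: intrinsic voltages [V1, V2, V3, V4]; closed: switch states
    [122, 124, 126]. Returns the observable voltage at each FD node."""
    out = []
    running = 0.0
    for i, vi in enumerate(v):
        # A node sees its own voltage plus everything chained below it.
        running = running + vi if i > 0 and closed[i - 1] else vi
        out.append(running)
    return out
```

With the example values from the text, closing all three switches makes the last node read V1 + V2 + V3 + V4 = 10 V, while reopening them (calling the function again with all switches open) recovers 1, 2, 3, and 4 V.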
In summary, selective coupling of floating diffusion regions between pixels allows pixel signals to be selectively binned in the voltage domain. Selective binning may increase the frame rate of the image sensor (since fewer total pixels need to be read out).
Fig. 5 is a circuit diagram illustrating a portion of an exemplary image sensor having transistors capable of selective binning in the voltage domain. For simplicity, in fig. 5, only the photodiode 102, the transfer transistor 104, the floating diffusion region 106, and the source follower transistor 112 of each pixel 100 are shown. However, it should be understood that each pixel in fig. 5 may include any of the components shown in fig. 3 or any other desired pixel component (e.g., storage capacitor, storage gate, storage diode, dual conversion gain capacitor, dual conversion gain transistor, etc.).
In addition, the image sensor may include transistors such as transistors T1, T2, and T3. Each transistor T1 may be coupled between a respective CFD (e.g., the p-type layer of the floating diffusion capacitor) and ground. Each transistor T2 may be coupled to the floating diffusion region (e.g., the n-type layer of the floating diffusion capacitor) of a first pixel. Each transistor T2 may also be coupled to the respective node between CFD (e.g., the p-type layer of the floating diffusion capacitor) and ground of an adjacent second pixel. In other words, each transistor T2 is coupled between the CFD and the transistor T1 of the adjacent second pixel. The transistors T2 may couple adjacent pixels in the same row of the image sensor.
Each transistor T3 may be coupled to the floating diffusion region (e.g., the n-type layer of the floating diffusion capacitor) of a first pixel. Each transistor T3 may also be coupled to the respective node between CFD (e.g., the p-type layer of the floating diffusion capacitor) and ground of an adjacent second pixel. In other words, each transistor T3 is coupled between the CFD and the transistor T1 of the adjacent second pixel. The transistors T3 may couple adjacent pixels in the same column of the image sensor. Transistors T2 and T3 may thus be coupled to the node between the CFD and the transistor T1 of a given pixel.
In fig. 5, each pixel is depicted as having a corresponding transistor T1, and a transistor T2 is depicted as being connected between each pair of adjacent pixels in the image sensor. However, transistor T3 is depicted as being connected only between some adjacent pairs of pixels. This example is merely exemplary. In general, each pixel may or may not be coupled to a respective transistor T1, T2, and/or T3. The more transistors T1, T2, and T3 included in the image sensor, the greater the flexibility provided in pixel binning. The arrangement of fig. 5 may provide 1 × 1 binning (e.g., no binning), 2 × 2 binning, and 4 × 4 binning capabilities. Additional binning modes (16 × 16, 32 × 32, 64 × 64, etc.) would also be possible if the pattern of fig. 5 were repeated across the image sensor.
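One way to picture the T2/T3 chain for an n × n binning group is as a serpentine path that visits every pixel once, with T2 hops within a row and T3 hops between rows. This is our own reading of figs. 5 to 8, not an explicit construction from the text:

```python
# Hypothetical serpentine ordering of the pixels in one n x n binning
# group, with each hop labeled by the transistor type that would link it.
def serpentine_chain(n):
    """Ordered (row, col) visit sequence covering an n x n pixel group."""
    path = []
    for r in range(n):
        cols = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        path.extend((r, c) for c in cols)
    return path

def hops(path):
    """'T2' for a same-row (column-neighbor) hop, 'T3' for a row change."""
    return ["T2" if a[0] == b[0] else "T3" for a, b in zip(path, path[1:])]
```

For a 4 × 4 group this chain uses fifteen hops (twelve T2 and three T3), which is consistent with the sparser placement of T3 transistors in fig. 5.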
Fig. 6 to 8 illustrate different merging modes of the image sensor shown in fig. 5. For simplicity, in fig. 6-8, only the photodiode 102, transfer transistor 104, floating diffusion region 106, and source follower transistor 112 of each pixel 100 are shown. It should be understood, however, that each pixel in fig. 6-8 may include any of the components shown in fig. 3 or any other desired pixel component (e.g., storage capacitor, storage gate, storage diode, dual conversion gain capacitor, dual conversion gain transistor, etc.).
In addition, for simplicity, only the transistors T1, T2, and T3 that are asserted in a given binning mode are depicted in fig. 6 to 8. It should be understood, however, that all of the transistors of fig. 5 are present in the image sensor of fig. 6 to 8; the deasserted transistors are simply not depicted.
Fig. 6 shows an exemplary 1 × 1 binning mode (e.g., no binning) in which each pixel is read out individually. As shown, transistor T1 of each pixel is asserted, coupling the floating diffusion region of each pixel to ground. This mode provides the highest-resolution image data.
Fig. 7 shows an exemplary 2 × 2 binning mode, in which the pixel signals from each 2 × 2 imaging pixel group are binned in the voltage domain prior to readout. As shown in fig. 7, a given 2 × 2 imaging pixel group including pixels 100-1, 100-2, 100-3, and 100-4 may be binned. Transistor T1 of pixel 100-1 is asserted, coupling the floating diffusion region of pixel 100-1 to ground. Additionally, the transistor T2 located between pixels 100-1 and 100-2 is asserted to couple the floating diffusion region of pixel 100-1 to the floating diffusion region of pixel 100-2. The transistor T3 located between pixels 100-2 and 100-3 is asserted to couple the floating diffusion region of pixel 100-2 to the floating diffusion region of pixel 100-3. The transistor T2 between pixels 100-3 and 100-4 is asserted to couple the floating diffusion region of pixel 100-3 to the floating diffusion region of pixel 100-4. Thus, the voltage at the floating diffusion region of pixel 100-4 will be equal to the sum of the floating diffusion voltages of pixels 100-1, 100-2, 100-3, and 100-4 (similar to VOUT = V1 + V2 + V3 + V4 in connection with FIG. 4). This effectively bins the pixel signal levels. Only the top-right pixel 100-4 of each 2 × 2 group then needs to be read out. This results in a frame rate four times faster than when every pixel is read out individually. Faster frame rates may be better for imaging moving objects (e.g., for better speed determination).
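The net effect of 2 × 2 binning on the data read out can be sketched as a simple block sum (illustrative only; the sensor forms these sums in the analog voltage domain, not digitally):

```python
# Sketch: each 2 x 2 group of pixel signals collapses to one summed value,
# so a quarter as many samples are read out per frame.
def bin2x2(img):
    """Sum each non-overlapping 2 x 2 block of a 2D list of pixel values."""
    h, w = len(img), len(img[0])
    return [[img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]
```

A 4 × 4 frame thus becomes a 2 × 2 frame of group sums, matching the four-fold frame-rate improvement described above.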
Fig. 8 shows an exemplary 4 × 4 binning mode, in which the pixel signals from each 4 × 4 imaging pixel group are binned in the voltage domain prior to readout. As shown in fig. 8, transistor T1 of the lower-right pixel is asserted, coupling the floating diffusion region of the lower-right pixel to ground. A chain of transistors T2 and T3 is also asserted between the floating diffusion regions of adjacent pixels until the upper-right pixel is reached. This effectively bins the pixel signal levels of all sixteen pixels depicted in fig. 8. Only the upper-right pixel of each 4 × 4 group then needs to be read out. This results in a frame rate sixteen times faster than when every pixel is read out individually.
When operating the image sensor of fig. 5 to 8, correlated double sampling may be used. The floating diffusion region may be reset, and the reset level of the floating diffusion region sampled, before the transfer transistor is asserted to transfer charge from the photodiode to the floating diffusion region. If desired, the floating diffusion region may be reset and sampled while all of the transistors T1, T2, and T3 are deasserted. After sampling the reset level, charge from the photodiode can be transferred to the corresponding floating diffusion region. The transistors T1, T2, and T3 may be deasserted during charge transfer. Alternatively, selected transistors T1, T2, and T3 may remain asserted during charge transfer. After the charge transfer, the transistors T1, T2, and T3 associated with a given binning mode may be asserted (e.g., the transistors of fig. 6 for the 1 × 1 binning mode, the transistors of fig. 7 for the 2 × 2 binning mode, and the transistors of fig. 8 for the 4 × 4 binning mode). The signal levels of the floating diffusion regions of the desired pixels can then be read out (e.g., one signal level for each binned group).
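The correlated double sampling arithmetic can be sketched as follows (a minimal illustration with our own example numbers; the polarity assumes, as in FIG. 3, that transferred charge pulls the floating diffusion below its reset level):

```python
# Sketch of correlated double sampling (CDS): the reset level is sampled
# before charge transfer, the signal level after, and subtracting the two
# cancels any offset common to both samples (e.g., reset noise).
def cds(reset_sample, signal_sample):
    """Return the offset-cancelled pixel signal."""
    return reset_sample - signal_sample

# A common offset on both samples drops out of the difference:
offset = 0.07                      # e.g., kTC reset noise, fixed offsets
reset_level = 2.8 + offset         # sampled before charge transfer
signal_level = reset_level - 0.5   # 0.5 V worth of transferred charge
recovered = cds(reset_level, signal_level)
```

Because the same offset appears in both samples, `recovered` is 0.5 V regardless of the value of `offset`.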
An exemplary method of operating the image sensor shown in fig. 5 to 8 will now be discussed. First, charge can be integrated on the photodiode. In one illustrative example, the accumulation time may begin by asserting an anti-blooming transistor (e.g., anti-blooming transistor 118 in fig. 3). In another example, the accumulation time may begin by asserting the transfer transistor and the reset transistor (e.g., transistors 104 and 108 in fig. 3) at the same time.
Next, the floating diffusion region may be reset to remove any accumulated charge from the floating diffusion region prior to reading out the photodiode. To reset the floating diffusion regions, all of the T1 transistors in the image sensor may be asserted to couple all of the floating diffusion region capacitors (C_FD) to ground. Next, the reset transistor (e.g., reset transistor 108) of each pixel may be asserted to reset the voltage of the floating diffusion region capacitor. After resetting the voltage of the floating diffusion region capacitors, the combination of transistors T1, T2, and T3 associated with a particular binning arrangement may be asserted (e.g., as shown in fig. 6 for 1 × 1 binning, in fig. 7 for 2 × 2 binning, and in fig. 8 for 4 × 4 binning). Once the desired transistors T1, T2, and T3 are asserted, the reset voltage of the pixels of interest (e.g., the pixels to be read out for that particular binning mode) may be sampled.
After sampling the reset voltage, all of the T1 transistors may be asserted (e.g., even if some of these transistors are not asserted later for the particular binning arrangement). The transfer transistors can then be activated, thereby transferring charge to the floating diffusion regions. After the charge transfer, the transfer transistors are deactivated. Then, the combination of transistors T1, T2, and T3 associated with the particular binning arrangement (e.g., the same combination of transistors T1, T2, and T3 as during reset signal sampling) may be asserted. Once the desired transistors T1, T2, and T3 are asserted, the signal voltage of the pixels of interest (e.g., the pixels read out for that particular binning mode) may be sampled.
Fig. 9 is a flowchart of exemplary method steps for operating the image sensor of figs. 5-8. At step 302, the photodiodes may be reset to start the accumulation time. For example, the photodiodes may be reset by asserting the anti-blooming transistor of each pixel. After the accumulation time, to begin readout, all of the T1 transistors may be asserted at step 304. Asserting the T1 transistors grounds each floating diffusion region, and the floating diffusion regions may then be reset by asserting the reset transistors of the pixels.
After the floating diffusion regions are reset at step 304, the combination of T1, T2, and T3 transistors associated with a first binning configuration may be asserted at step 306. Once the combination of T1, T2, and T3 transistors is asserted, the reset voltage of the floating diffusion regions may be sampled at step 308.
Because voltage pixel binning is non-destructive, pixels can be sampled in multiple binning modes in a single frame. This is optional, and a single binning mode can instead be sampled in each frame if desired. If samples in multiple binning modes are needed in a single frame, steps 306 and 308 may optionally be repeated as indicated by arrow 307 (e.g., for a second binning mode). For each unique binning mode, a respective unique set of transistors T1, T2, and T3 may be asserted at step 306, and a respective reset voltage may be obtained at step 308.
After all required reset voltage samples have been obtained, the method may proceed to step 310. At step 310, all T1 transistors may be asserted, and all T2 and T3 transistors may be deasserted. In this state, the transfer transistors can be activated to transfer charge from each photodiode to the corresponding floating diffusion region. Next, at step 312, the combination of T1, T2, and T3 transistors associated with the first binning mode is asserted. A signal voltage may then be obtained at step 314 from each relevant floating diffusion region associated with the binning mode. For example, in the 2 x 2 binning mode, only one floating diffusion region out of every four floating diffusion regions has a signal voltage that needs to be sampled. The signal voltage may be used together with a reset voltage to produce a correlated double sampling readout value.
If only one binning mode is sampled per frame, the readout of the frame may be complete after step 314. However, if multiple binning modes are sampled per frame, additional sampling may be performed. As shown in fig. 9, at optional step 316, a combination of T1, T2, and T3 transistors associated with a second binning mode (e.g., a different combination than in step 312) may be asserted. The signal voltage of the floating diffusion regions associated with the second binning mode is then sampled at step 318. Similar to step 314, only the relevant floating diffusion regions associated with the second binning mode may have their signal voltages sampled.
The reset voltage samples may be used to help correct the signal voltage obtained in step 318. There are several options for how to correct this signal voltage. First, the reset voltage from step 308, sampled with transistors T1, T2, and T3 asserted in the combination associated with the first binning mode, may be used (even if the first binning mode and the second binning mode have different combinations of T1, T2, and T3 asserted). In other words, the reset voltage sampled in conjunction with the first binning mode can still be used for correlated double sampling in the second binning mode. Another option is to use the reset voltage from step 308 sampled with transistors T1, T2, and T3 in the combination associated with the second binning mode. Yet another alternative is to obtain a reset voltage sample at step 320. At step 320, the floating diffusion regions may be reset (e.g., by asserting the T1 transistors and the reset transistors), the combination of transistors T1, T2, and T3 associated with the second binning mode may be asserted, and a second reset voltage may be sampled. Obtaining a reset voltage for correcting the signal voltage after the signal voltage has been sampled may be referred to as uncorrelated double sampling.
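The sequence of fig. 9 (steps 302-320) can be summarized in a small simulation. Everything below is an illustrative sketch: the step mapping follows the flowchart, but the voltage values, the 2 x 2 group size, and the mode labels are assumptions introduced for the example.

```python
# Simulated fig. 9 readout for one 2 x 2 pixel group, sampling two binning
# modes ("1x1" and "2x2") in a single frame. Voltages are in millivolts.

RESET_LEVEL = 2800  # illustrative FD reset voltage

def bin_read(fd, mode):
    """1x1: read every FD individually; 2x2: series-combine onto one FD."""
    return list(fd) if mode == "1x1" else [sum(fd)]

def read_frame(photo_signals, modes):
    """photo_signals: per-pixel FD voltage drop after charge transfer."""
    # Steps 304-308: ground and reset all FDs, then sample reset per mode.
    fd = [RESET_LEVEL] * len(photo_signals)
    reset = {m: bin_read(fd, m) for m in modes}
    # Steps 310-314: transfer charge (each FD drops), then sample per mode.
    fd = [RESET_LEVEL - s for s in photo_signals]
    signal = {m: bin_read(fd, m) for m in modes}
    # Correlated double sampling: reset minus signal recovers each
    # photo-signal (for the binned mode, the binned sum of photo-signals).
    return {m: [r - s for r, s in zip(reset[m], signal[m])] for m in modes}

out = read_frame([400, 500, 300, 200], ["1x1", "2x2"])
print(out["1x1"])  # [400, 500, 300, 200]
print(out["2x2"])  # [1400]
```

Because `bin_read` never modifies the stored FD voltages, the same frame is sampled in both modes, mirroring the non-destructive repetition via arrow 307 and optional steps 316-318.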
If not isolated, the floating diffusion capacitor C_FD may have an effective capacitance that is affected by adjacent circuit components. For the image sensors shown in figs. 5-8, the floating diffusion region of each pixel may be isolated from the substrate so that the floating diffusion regions may be independently connected in series. Fig. 10 shows an image sensor having a floating diffusion region isolated using an isolated p-well region, while fig. 11 shows an image sensor having a floating diffusion region isolated using deep trench isolation (DTI).
In fig. 10, the substrate 130 may include the photodiode 102. The substrate 130 may be a p-type epitaxial substrate with deep n-well 131 and photodiode 102. Isolated p-well 136 may isolate n + region 138 and p + region 140. The floating diffusion 106 may be formed from n + region 138. The transfer gate 104 is interposed between the photodiode 102 and the floating diffusion 106. Interlayer dielectric layers 132 and 134 (sometimes referred to as gate dielectrics) are formed below and above the transfer gate 104. Isolated p-well 136 isolates p + region 140 from p + region 142. This allows the photodiode 102 to remain grounded even while the floating diffusion region 138 is independently coupled to or decoupled from ground (via transistor T1). p + region 142 may be a ground contact coupled to transistor T1. T1 is coupled between p + region 142 (on one side of isolated p-well 136) and p + region 140 (on the other side of isolated p-well 136). The transistor T1 may be a semiconductor oxide transistor formed with an active channel of indium gallium zinc oxide (IGZO).
As shown in fig. 10, transistor T1 includes an active channel 190 (formed from IGZO), a metal contact 192, a gate 194, and a dielectric layer 196. All of the transistors T1, T2, and T3 may optionally be semiconductor oxide transistors (e.g., having a structure similar to that shown for T1 in fig. 10). Semiconductor oxide transistors have low leakage levels, which may be useful in the image sensors discussed herein for maintaining effective isolation of the floating diffusion regions. Semiconductor oxide transistors can also be formed within the metal stack without the need for additional silicon, resulting in efficient fabrication.
In fig. 10, the light collection area of the pixel may include a deep n-well 131 and a photodiode 102. Isolated p-well 136 may be formed of p-type epitaxial silicon.
The example of using isolated p-well 136 with a surrounding deep n-well to isolate floating diffusion region 138 is merely exemplary. Alternatively, deep trench isolation such as deep trench isolation 152 shown in fig. 11 may be used. The deep trench isolation may be formed from a material such as an oxide or a metal in a trench in the substrate 130. The deep trench isolation can extend from the front surface 198 of the substrate 130 to the back surface 199 of the substrate 130. A buried oxide (BOX) 188 may be formed on the back surface of the substrate 130. The isolated p-well 136 may still be formed around the floating diffusion region 138. The light collection area of the pixel in fig. 11 includes the photodiode 102 (without the additional deep n-well of fig. 10). Although not explicitly shown in fig. 11, a semiconductor oxide transistor may be coupled between p + regions 140 and 142 in fig. 11, similar to that depicted in fig. 10.
Fig. 12 is a cross-sectional side view of an image sensor showing another possible implementation for isolating a floating diffusion region (FD) in a given imaging pixel. As shown in fig. 12, the substrate 130 may include a photodiode 102. The photodiode 102 may be electrically connected to the n + region 158 and the n + region 154. n + regions 154 and 158 may be separated by shallow trench isolation 152. Metal layer 156 may electrically connect n + region 154 to n + region 158 across STI 152. The substrate 130 may be a p-type epitaxial substrate with a deep n-well 160.
Isolated p-well 136 may isolate n + regions 138 and 154 and p + region 140. The floating diffusion 106 may be formed from n + region 138. Transfer gate 104 is interposed between n + region 154 (electrically connected to photodiode 102) and the floating diffusion 106. Interlayer dielectric layers 132 and 134 (sometimes referred to as gate dielectrics) are formed below and above the transfer gate 104. Isolated p-well 136 isolates p + region 140 from p + region 142. p + region 142 may be a ground contact coupled to transistor T1. T1 is coupled between p + region 142 (on one side of STI 152) and p + region 140 (on the other side of STI 152). The transistor T1 may be a semiconductor oxide transistor with an active channel formed from a semiconductor oxide such as indium gallium zinc oxide (IGZO). T1 in fig. 12 may be formed within the metal stack without the need for additional silicon. Although only T1 is explicitly depicted in fig. 12, all of transistors T1, T2, and T3 may optionally be semiconductor oxide transistors.
Fig. 13 is a cross-sectional side view of an image sensor showing yet another possible embodiment for isolating a floating diffusion region (FD) in a given imaging pixel. The image sensor depicted in fig. 13 has a similar structure to that of fig. 12. However, instead of using an IGZO transistor for T1 as in fig. 12, the transistor T1 in fig. 13 is formed in the same manner as the transfer transistor 104 (e.g., using complementary metal oxide semiconductor, or CMOS, technology). p + region 140 may be coupled to n + region 174 on the other side of shallow trench isolation 152 by metal interconnect layer 172. T1 may have a gate formed over substrate 130 between n + region 174 and n + region 176. n + region 176 is in turn coupled to p + region 142 by metal interconnect layer 178. T1 may be asserted to selectively ground floating diffusion region 138 (e.g., by selectively coupling p + region 140 to p + region 142).
Fig. 14 is a state diagram illustrating exemplary binning modes of an image sensor having circuitry of the type shown in figs. 5-8, according to one embodiment. Processing circuitry (also referred to as control circuitry) in the imaging system may place the image sensor in a desired binning mode (sometimes referred to as a voltage binning mode). As shown, the image sensor may be operable in a first binning mode 202, a second binning mode 204, and a third binning mode 206. Each binning mode may have a different binning arrangement. For example, the first binning mode 202 may be a 1 × 1 binning mode, in which no binning occurs and each pixel is read out separately (as shown in fig. 6). Optionally, in the 1 × 1 binning mode, only a subset of the pixels may be read out to increase the frame rate (this is referred to as using a sub-window). The second binning mode 204 may be a 2 x 2 binning mode, in which the signals from each 2 x 2 group of pixels are binned and only one pixel out of every four pixels is read out (as shown in fig. 7). The third binning mode 206 may be a 4 x 4 binning mode, in which the signals from each 4 x 4 pixel group are binned and only one pixel out of every sixteen pixels is read out (as shown in fig. 8).
The image sensor may switch between modes based on user preferences/selections, based on information from the processing circuitry (e.g., based on whether a moving object is present in the scene or based on the magnitude of the speed of the moving object in the scene), etc. The image sensor may be part of a system having different modes of operation.
For example, in the first mode of operation, the image sensor may operate in the first binning mode 202. If the processing circuitry detects motion in the image data captured during the first binning mode, the image sensor may switch to the second binning mode 204 for speed determination. If the object is large enough, a centroid algorithm can be used for more accurate velocity determination. If the object moves fast enough (e.g., if the measured speed exceeds a given speed threshold), the image sensor may switch to the third binning mode 206 for better speed resolution.
In the second mode of operation, the image sensor may operate in the third binning mode 206. When motion is detected, the image sensor may switch to the first binning mode 202 for one frame in order to obtain a single higher-resolution frame for object recognition.
Since the binning is non-destructive, the image data of a single frame can be read out multiple times if desired (e.g., in a first binning mode, and then again in a second binning mode).
The example of fig. 14, in which the image sensor has three binning modes, is merely exemplary. In general, the image sensor may have any desired number of binning modes, each binning mode may have any desired binning arrangement, and the image sensor may switch between binning modes in any desired manner.
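The mode transitions described for the first operating mode can be sketched as a small state machine. The threshold value and the mode labels below are illustrative assumptions for the sketch; the patent does not specify them.

```python
# Illustrative state machine for the fig. 14 binning modes: start at 1x1,
# move to 2x2 when motion is detected, and move to 4x4 when the measured
# speed exceeds a threshold (threshold chosen arbitrarily for the sketch).

SPEED_THRESHOLD = 10.0  # arbitrary units; not a value from the patent

def next_mode(mode, motion_detected=False, measured_speed=0.0):
    if mode == "1x1" and motion_detected:
        return "2x2"  # bin for faster frames and speed determination
    if mode == "2x2" and measured_speed > SPEED_THRESHOLD:
        return "4x4"  # bin further for better speed resolution
    return mode

mode = next_mode("1x1", motion_detected=True)
mode = next_mode(mode, measured_speed=12.5)
print(mode)  # 4x4
```

The second operating mode would follow the same pattern with the transitions reversed (start at 4x4, drop to 1x1 for one frame when motion is detected).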
The voltage combining described herein may be applicable to both monolithic and stacked image sensors. In a stacked image sensor, two or more substrates (e.g., wafers) are connected with a conductive interconnect layer. For example, at any position in the circuit diagrams of fig. 3 and 5, an interconnect layer may be included, and the pixel circuit may be divided between two substrates.
The non-destructive voltage combining techniques described herein may also be used for read-out integrated circuits (ROICs). The ROIC may be coupled to the array of photosensitive elements by a conductive interconnect layer. For example, cadmium mercury telluride (HgCdTe) or another material (e.g., gallium arsenide) may be used to form the photosensitive element for infrared light detection. ROIC having selective combining capabilities described herein can be coupled to the photosensitive element through a conductive interconnect layer.
Fig. 15 is a circuit diagram of an exemplary ROIC with selective merge capability. In the image sensor 402 of fig. 15, a photosensitive region 404 (e.g., a region that generates charge in response to infrared light) is coupled to a readout integrated circuit (ROIC) 406. The photosensitive region 404 may be formed of cadmium mercury telluride (HgCdTe) or another material, such as gallium arsenide. The ROIC may include a transimpedance amplifier 408 (with an operational amplifier, capacitor, and transistor) that converts current into voltage. The capacitor 412 and the floating diffusion region 410 are coupled to the output of the transimpedance amplifier. The reset transistor 414 is coupled to the floating diffusion region. The floating diffusion region is coupled to the gate of source follower transistor 416. The readout capacitor 418 is coupled to one of the terminals of the source follower transistor. The readout capacitor may be coupled to transistors T1 (selectively coupling the readout capacitor to ground), T2 (selectively coupling the readout capacitor to the readout capacitor in an adjacent column), and T3 (selectively coupling the readout capacitor to the readout capacitor in an adjacent row), similar to that shown in connection with the floating diffusion capacitor of fig. 5. Fig. 15 also shows an additional source follower transistor 420 and row select transistor 422.
Although the readout capacitor 418 in fig. 15 is located at a different location and has a different application than the capacitor in fig. 5, the selective combining technique can be applied in a similar manner. This shows how the non-destructive voltage combining techniques described herein may be used in many applications (such as in ROIC) and is not limited to the floating diffusion voltage combining shown in fig. 5.
According to one embodiment, an image sensor may include an imaging pixel array, and the image sensor may include: a photodiode for a first imaging pixel of the imaging pixel array, wherein the photodiode is configured to generate charge in response to incident light; a first floating diffusion capacitor for the first imaging pixel; a transfer transistor configured to transfer charge from the photodiode to the first floating diffusion capacitor; a first transistor configured to selectively couple the first floating diffusion capacitor to ground; and a second transistor configured to selectively couple the first floating diffusion capacitor to a second floating diffusion capacitor of a second imaging pixel in the imaging pixel array.
According to another embodiment, the second imaging pixel may be adjacent to the first imaging pixel.
According to another embodiment, the first imaging pixel may be located in a first row and a first column of the imaging pixel array, and the second imaging pixel may be located in the first row and a second column of the imaging pixel array.
According to another embodiment, the image sensor may further include a third transistor configured to selectively couple the first floating diffusion capacitor to a third floating diffusion capacitor of a third imaging pixel in the imaging pixel array.
According to another embodiment, the third imaging pixel may be located in the second row and the first column of the imaging pixel array.
According to another embodiment, the first imaging pixel may be located in a first row and a first column of the imaging pixel array, and the second imaging pixel may be located in a second row and the first column of the imaging pixel array.
According to another embodiment, the first floating diffusion capacitor may be formed of a first depletion region located between a first n-type region and a first p-type region in the semiconductor substrate, and the second floating diffusion capacitor may be formed of a second depletion region located between a second n-type region and a second p-type region in the semiconductor substrate.
According to another embodiment, the first transistor may selectively couple the first p-type region to ground.
According to another embodiment, the second transistor may selectively couple the first n-type region to the second p-type region.
According to another embodiment, the second imaging pixel may be formed in an adjacent row and the same column as the first imaging pixel, and the image sensor may further include a third imaging pixel in the imaging pixel array and a third transistor configured to selectively couple the first n-type region to a third p-type region of a third floating diffusion capacitor of the third imaging pixel. The third imaging pixel may be formed in an adjacent column and the same row as the first imaging pixel.
According to another embodiment, the first floating diffusion capacitor may be formed of a first depletion region between a first n-type region and a first p-type region in the semiconductor substrate, the first transistor may selectively couple the first p-type region to ground, and the first transistor may be formed with an active channel of indium gallium zinc oxide.
According to another embodiment, the first p-type region may be isolated from additional p-type regions coupled to ground.
According to one embodiment, an image sensor may include: an imaging pixel array, each imaging pixel of the imaging pixel array including a photodiode configured to generate charge in response to incident light, a floating diffusion capacitor, and a transfer transistor configured to transfer charge from the photodiode to the floating diffusion capacitor; a plurality of transistors, each transistor coupled to at least the floating diffusion capacitor of a respective one of the imaging pixels; and a control circuit configured to operate in a first voltage combining mode in which a first subset of the plurality of transistors is asserted during readout and a second voltage combining mode in which a second subset of the plurality of transistors is asserted during readout, wherein the first subset and the second subset are different.
According to another embodiment, the first voltage combining mode may be a 2 × 2 voltage combining mode.
According to another embodiment, in the 2 × 2 voltage combining mode, voltages from the floating diffusion capacitors of four imaging pixels in a 2 × 2 arrangement may be combined on the floating diffusion capacitor of a single imaging pixel of the four imaging pixels.
According to another embodiment, the second voltage combining mode may be a 4 x 4 voltage combining mode in which voltages from floating diffusion capacitors of sixteen imaging pixels in a 4 x 4 arrangement are combined on floating diffusion capacitors of a single pixel of the sixteen imaging pixels.
According to another embodiment, the photodiode of the imaging pixel may be formed in a semiconductor substrate, and the floating diffusion capacitor of the imaging pixel may be formed by a depletion region located between an n-type region and an isolated p-type region in the semiconductor substrate.
According to one embodiment, an image sensor may include: an imaging pixel array, each imaging pixel in the imaging pixel array comprising a capacitor, a plurality of transistors, wherein each transistor in the plurality of transistors is coupled to the capacitor of at least one imaging pixel; and a control circuit configured to non-destructively combine voltages from two or more capacitors on a single capacitor using a plurality of transistors during readout.
According to another embodiment, each capacitor may be a floating diffusion capacitor formed by a respective depletion region located between an n-type region and an isolated p-type region in a semiconductor substrate.
According to another embodiment, the control circuit may be configured to non-destructively combine voltages from a first number of capacitors for a first readout in a first image frame, the control circuit may be configured to non-destructively combine voltages from a second number of capacitors for a second readout in the first image frame, and the second number may be different from the first number.
The foregoing is considered as illustrative only of the principles of the invention, and numerous modifications are possible to those skilled in the art. The above embodiments may be implemented individually or in any combination.

Claims (10)

1. An image sensor comprising an array of imaging pixels, the image sensor comprising:
a photodiode for a first imaging pixel of the array of imaging pixels, the photodiode configured to generate charge in response to incident light;
a first floating diffusion capacitor for the first imaging pixel;
a transfer transistor configured to transfer charge from the photodiode to the first floating diffusion capacitor;
a first transistor configured to selectively couple the first floating diffusion capacitor to ground; and
a second transistor configured to selectively couple the first floating diffusion capacitor to a second floating diffusion capacitor of a second imaging pixel in the imaging pixel array.
2. The image sensor of claim 1, wherein the second imaging pixel is adjacent to the first imaging pixel.
3. The image sensor of claim 2, wherein the first imaging pixel is located in a first row and a first column of the imaging pixel array, and wherein the second imaging pixel is located in a first row and a second column of the imaging pixel array.
4. The image sensor of claim 3, further comprising:
a third transistor configured to selectively couple the first floating diffusion capacitor to a third floating diffusion capacitor of a third imaging pixel in the imaging pixel array.
5. The image sensor of claim 4, wherein the third imaging pixel is located in a second row and the first column of the imaging pixel array.
6. The image sensor of claim 2, wherein the first imaging pixel is located in a first row and a first column of the imaging pixel array, and wherein the second imaging pixel is located in a second row and a first column of the imaging pixel array.
7. The image sensor of claim 1, wherein the first floating diffusion capacitor is formed by a first depletion region located between a first n-type region and a first p-type region in a semiconductor substrate, the second floating diffusion capacitor is formed by a second depletion region located between a second n-type region and a second p-type region in the semiconductor substrate, the first transistor selectively couples the first p-type region to ground, and the second transistor selectively couples the first n-type region to the second p-type region.
8. The image sensor of claim 7, wherein the second imaging pixel is formed in an adjacent row and the same column as the first imaging pixel, the image sensor further comprising:
a third imaging pixel in the array of imaging pixels, the third imaging pixel formed in an adjacent column and the same row as the first imaging pixel; and
a third transistor configured to selectively couple the first n-type region to a third p-type region of a third floating diffusion capacitor of the third imaging pixel.
9. The image sensor of claim 1, wherein the first floating diffusion capacitor is formed by a first depletion region located between a first n-type region and a first p-type region in a semiconductor substrate, the first transistor selectively couples the first p-type region to ground, and the first transistor is formed with an active channel of indium gallium zinc oxide.
10. The image sensor of claim 9, wherein the first p-type region is isolated from an additional p-type region coupled to ground.
CN201921132625.8U 2018-09-28 2019-07-18 Image sensor with a plurality of pixels Active CN210093338U (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862738072P 2018-09-28 2018-09-28
US62/738,072 2018-09-28
US16/225,704 2018-12-19
US16/225,704 US10785425B2 (en) 2018-09-28 2018-12-19 Image sensor with selective pixel binning

Publications (1)

Publication Number Publication Date
CN210093338U (en) 2020-02-18

Family

ID=69485160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201921132625.8U Active CN210093338U (en) 2018-09-28 2019-07-18 Image sensor with a plurality of pixels

Country Status (1)

Country Link
CN (1) CN210093338U (en)


Legal Events

Date Code Title Description
GR01 Patent grant