CN106561046B - High dynamic range imaging system with improved readout and method of system operation - Google Patents


Info

Publication number
CN106561046B
Authority
CN
China
Prior art keywords
signal
gain
pixel
photodiode
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610854201.7A
Other languages
Chinese (zh)
Other versions
CN106561046A (en)
Inventor
B·克雷默斯
M·H·因诺森特
Current Assignee
Semiconductor Components Industries LLC
Original Assignee
Semiconductor Components Industries LLC
Priority date
Filing date
Publication date
Application filed by Semiconductor Components Industries LLC filed Critical Semiconductor Components Industries LLC
Publication of CN106561046A
Application granted
Publication of CN106561046B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/57 Control of the dynamic range
    • H04N 25/59 Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/71 Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N 25/75 Circuitry for providing, modifying or processing image signals from the pixel array
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/76 Addressed sensors, e.g. MOS or CMOS sensors
    • H04N 25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
    • H04N 25/771 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components comprising storage means other than floating diffusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The present disclosure relates to high dynamic range imaging pixels with improved readout. An imaging system may include an image sensor having a dual gain pixel array. Each pixel may operate using a dual readout method in which all signals are read out in a high gain configuration in order to increase readout speed or reduce power consumption. Each pixel may operate using a dual readout and dual analog-to-digital conversion method in which two sets of calibration data are stored. A High Dynamic Range (HDR) image signal may be generated for each pixel based on the signals read out from the pixels and based on lighting conditions. The HDR image may be generated based on a combination of the high gain signal and the low gain signal, and one or both of the two sets of calibration data. The HDR image may be generated using a system of equations. The system of equations may include a function of light intensity.

Description

High dynamic range imaging system with improved readout and method of system operation
This application claims benefit of and priority to provisional patent application No. 62/235,817, filed October 1, 2015, which is hereby incorporated by reference in its entirety.
Technical Field
The present invention relates generally to image sensors and, more particularly, to methods and circuits for operating image sensor pixels with dual gain readout to produce High Dynamic Range (HDR) images.
Background
In conventional imaging systems, image artifacts may be caused by moving objects, moving or jittering cameras, flickering illumination, and objects with varying illumination in the image frames. Such artifacts may include, for example, missing portions of objects, edge color artifacts, and object distortions. Examples of objects with varying illumination include Light Emitting Diode (LED) traffic signs (which may flash hundreds of times per second) and LED brake lights or headlights of modern cars.
Although the electronic rolling shutter and global shutter modes produce images with different artifacts, the root cause of such artifacts is common to both modes of operation. Typically, the image sensor collects light in an asynchronous manner with respect to the scene being photographed. This means that parts of the image frame may not be exposed for a part of the frame duration. This is especially the case for bright scenes when the accumulation time is much shorter than the frame time used. When a scene includes moving or rapidly changing objects, regions of the image frame that are not fully exposed to the dynamic scene may cause object distortion, ghosting effects, and color artifacts. Similar effects may be observed when the camera moves or shakes during image capture operations.
Conventional imaging systems may also have images with artifacts associated with low dynamic range. Scenes with lighter and darker portions may create artifacts in conventional image sensors because portions of the image may be over-exposed or under-exposed.
Dual gain pixels are often used to improve the dynamic range of image sensors. They can be used in a fixed high gain or fixed low gain readout mode, or in a dual readout mode in which both gain modes are read out. In the dual readout mode, charge is either stored entirely on the photodiode or allowed to overflow to the floating diffusion node during accumulation. The combination of dual gain readout and overflow during accumulation provides the greatest increase in dynamic range.
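The dynamic-range benefit of allowing bright-scene charge to overflow into the additional storage can be illustrated with a back-of-the-envelope calculation. The sketch below is not from the patent; the full-well, overflow-capacity, and noise figures are illustrative assumptions:

```python
import math

def dynamic_range_db(max_signal_e, noise_floor_e):
    """Dynamic range in dB given full-scale signal and noise floor (in electrons)."""
    return 20 * math.log10(max_signal_e / noise_floor_e)

# Illustrative values, not taken from the patent:
pd_full_well_e = 10_000       # photodiode full-well capacity
overflow_capacity_e = 90_000  # extra charge storable on FD + gain-select capacitor
read_noise_e = 2.0            # high-gain read noise

dr_fixed = dynamic_range_db(pd_full_well_e, read_noise_e)
dr_overflow = dynamic_range_db(pd_full_well_e + overflow_capacity_e, read_noise_e)
# With these numbers, overflow extends dynamic range by 20*log10(10) = 20 dB
```

With the assumed 10x increase in total storable charge, the usable dynamic range grows by 20 dB while the low-light noise floor of the high-gain path is preserved.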
Dual gain pixels typically read out the captured high gain and low gain image data in respective high gain and low gain configurations. Switching between the high-gain and low-gain configurations causes electrical crosstalk. This crosstalk results in an undesirably large electrical offset between signals read out in the high gain configuration and signals read out in the low gain configuration. Such an offset may cause the amplitude of the pixel output signal to exceed the operating range of the analog readout circuitry in the imaging system.
Dual gain pixels typically read out captured image data using a method that requires four pixel readouts and four analog-to-digital (ADC) conversions when operating without a frame buffer, or three pixel readouts and three ADC conversions when operating with a frame buffer. In the latter case, the frame buffer must store a reference image used for offset correction between signals. Each additional readout and ADC conversion consumes additional power, and such increased power consumption is generally undesirable.
It is therefore desirable to have a High Dynamic Range (HDR) image sensor that does not exhibit large electrical offsets between pixel output signals and that requires fewer readouts and ADC conversions than conventional image sensors.
Drawings
Fig. 1 is an illustration of an exemplary electronic device having an image sensor, according to an embodiment.
Fig. 2 is a schematic diagram of an exemplary pixel array and associated readout circuitry for reading out image signals in an image sensor, according to an embodiment.
Fig. 3 is a circuit diagram and corresponding potential diagram of a dual gain image pixel.
Fig. 4 is a series of potential diagrams illustrating potential levels in a three-readout operation method under high and low illumination conditions, and the flow of charge through the circuit of fig. 3.
Fig. 5 is a timing diagram illustrating pixel states, control signal timing, and analog-to-digital conversion and sensor readout operation timing in the circuit of fig. 3 in the three readout operation method of fig. 4.
Fig. 6 is a timing diagram illustrating pixel states, control signal timing, and analog-to-digital conversion and sensor readout operation timing in the circuit of fig. 3 in a four readout operation method.
Fig. 7 is a graph showing the relationship between light intensity and the signal level of the pixel output signals for the three-readout and four-readout operation methods of figs. 5 and 6.
Fig. 8 is a graph showing the relationship of light intensity to signal level of a pixel output signal, and a method for mixing two pixel output signals to produce a single linear high dynamic range output signal.
Fig. 9 is a series of potential diagrams illustrating potential levels in a dual read operation method under low, medium, and high illumination conditions, and charge flow through the circuit of fig. 3, according to an embodiment.
Fig. 10 is a timing diagram illustrating pixel states, control signal timing, and analog-to-digital conversion and sensor readout operation timing in the circuit of fig. 3 in a dual readout operation method, according to an embodiment.
Fig. 11 is a graph illustrating light intensity versus signal level of a pixel output signal and a pixel output signal mixing method using two different mixing algorithms to obtain a linear high dynamic range output signal, according to an embodiment.
Detailed Description
Embodiments of the invention relate to image sensors and, more particularly, to image sensors having dual gain pixels with High Dynamic Range (HDR) output signals. It will be recognized by one skilled in the art that the exemplary embodiments of the present invention may be practiced without some or all of the specific details described herein. In other instances, well-known operations have not been described in detail so as not to unnecessarily obscure the present embodiments.
Imaging systems having digital camera modules are widely used in electronic devices such as digital cameras, computers, mobile phones, and other electronic devices. The digital camera module may include one or more image sensors that collect incident light to capture images.
In some cases, the imaging system may form part of a larger system, such as a surveillance system or the safety system of a vehicle (e.g., an automobile, a bus, or any other vehicle). In a vehicle safety system, images captured by the imaging system may be used to determine environmental conditions around the vehicle. For example, vehicle safety systems may include systems such as parking assist systems, automatic or semi-automatic cruise control systems, automatic braking systems, collision avoidance systems, lane keeping systems (sometimes referred to as lane drift prevention systems), and the like.
In at least some instances, the imaging system may form part of a semi-autonomous or autonomous unmanned vehicle. Such imaging systems may capture images and use these images to detect nearby vehicles. Vehicle safety systems may sometimes turn on a warning light, issue a warning, or may activate braking, active steering, or other active collision avoidance measures if a nearby vehicle is detected in the image. Vehicle safety systems may use images continuously captured by an imaging system having a digital camera module to help avoid collisions with objects (e.g., other automobiles or other environmental objects), to help avoid undesirable deviations (e.g., crossing lane markings), or to help the vehicle operate safely during any of its normal operating modes.
The image sensor may include an array of image pixels. A pixel in an image sensor may include a photosensitive element, such as a photodiode that converts incident light into electrical charge. The image sensor may have any number (e.g., hundreds or thousands or more) of pixels. A typical image sensor may, for example, have hundreds, thousands, or millions of pixels (e.g., megapixels).
The image sensor pixels may be dual gain pixels that use additional transistors and storage regions along with a dual gain readout method to improve the dynamic range of the pixel. The dual gain readout method used can be adjusted to reduce the electrical offset between the pixel output signals, reduce the number of analog-to-digital conversions (ADCs) required for readout, and eliminate the need for a frame buffer.
FIG. 1 is a schematic diagram of an exemplary imaging and response system including an imaging system that captures images using an image sensor. The system 100 of fig. 1 may be a vehicle safety system (e.g., an active braking system or other vehicle safety system), may be a surveillance system, or may be an electronic device (such as a camera, mobile phone, video camera, or other electronic device that captures digital image data).
As shown in fig. 1, system 100 may include an imaging system (such as imaging system 10) and a host subsystem (such as host subsystem 20). The imaging system 10 may include a camera module 12. The camera module 12 may include one or more image sensors 14 and one or more lenses. The lenses in the camera module 12 may, for example, comprise M × N individual lenses arranged in an M × N array. The individual image sensors 14 may be arranged, for example, in a corresponding M × N image sensor array. The values of M and N may each be greater than or equal to 1, may each be greater than or equal to 2, may exceed 10, or may be any other suitable value.
Each image sensor in camera module 12 may be the same or there may be different types of image sensors in a given image sensor array integrated circuit. Each image sensor may be, for example, a Video Graphics Array (VGA) sensor having a 480 x 640 image sensor pixel resolution. Other image sensor pixel arrangements may also be used for the image sensor if desired. For example, image sensors with a resolution higher than VGA resolution (e.g., high definition image sensors), image sensors with a resolution lower than VGA resolution, and/or image sensor arrays in which the image sensors are not identical may be used.
During image capture operations, each lens may focus light onto an associated image sensor 14. Image sensor 14 may include light sensitive elements (i.e., pixels) that convert light into digital data. An image sensor may have any number (e.g., hundreds, thousands, millions, or more) of pixels. A typical image sensor may, for example, have millions of pixels (e.g., megapixels). For example, the image sensor 14 may include a bias circuit (e.g., a source follower load circuit), a sample-and-hold circuit, a Correlated Double Sampling (CDS) circuit, an amplifier circuit, an analog-to-digital converter circuit, a data output circuit, a memory (e.g., a buffer circuit), an addressing circuit, and the like.
Still image data and video image data from the camera sensor 14 may be provided to the image processing and data formatting circuit 16 via path 28. The image processing and data formatting circuit 16 may be used to perform image processing functions such as data formatting, adjusting white balance and exposure, video image stabilization, face detection, and the like. The image processing and data formatting circuit 16 may also be used to compress raw camera image files as needed (e.g., into the Joint Photographic Experts Group (JPEG) format). In a typical architecture, sometimes referred to as a system-on-a-chip (SOC) arrangement, the camera sensor 14 and the image processing and data formatting circuitry 16 are implemented on a common semiconductor substrate (e.g., a common silicon image sensor integrated circuit die). The camera sensor 14 and the image processing circuit 16 may be formed on separate semiconductor substrates, if desired. For example, the camera sensor 14 and the image processing circuit 16 may be formed on separate semiconductor substrates that have been stacked.
Imaging system 10 (e.g., image processing and data formatting circuitry 16) may communicate the acquired image data to host subsystem 20 via path 18. The host subsystem 20 may include an active control system that communicates control signals for controlling vehicle functions (such as braking or steering) to external devices. Host subsystem 20 may include processing software for detecting objects in images, detecting motion of objects between image frames, determining distances of objects in images, filtering, or otherwise processing images provided by imaging system 10. Host subsystem 20 may include an alarm system configured to disable imaging system 10 and/or generate a warning (e.g., a warning light, audible warning, or other warning on the dashboard of a vehicle) if the verification image data associated with the image sensor indicates that the image sensor is not functioning properly.
The system 100 may provide a number of advanced functions to the user, if desired. For example, in a computer or advanced mobile phone, the user may be provided with the ability to run user applications. To achieve these functions, the host subsystem 20 of the system 100 may have input-output devices 22 (such as a keypad, input-output ports, joystick, and display) and storage and processing circuitry 24. The storage and processing circuitry 24 may include volatile and non-volatile memory (e.g., random access memory, flash memory, hard disk drives, solid state drives, etc.). The storage and processing circuitry 24 may also include a microprocessor, microcontroller, digital signal processor, application specific integrated circuit, or the like.
During operation of the imaging system 10, the camera module 12 may continuously capture image frames and provide them to the host subsystem 20. During image capture operations, verification circuitry associated with the image sensor 14 may occasionally operate (e.g., after each image frame is captured, after every other image frame is captured, after every five image frames are captured, during a portion of an image frame, etc.). Images captured while the verification circuitry is operating may include verification image data containing verification information. The verification image data may be provided to image processing circuitry 16 and/or storage and processing circuitry 24. Image processing circuitry 16 may be configured to compare the verification image data with predetermined data stored on image processing circuitry 16. After the comparison, image processing circuitry 16 may send status information or other verification information to host subsystem 20.
An example of the structure of the camera module 12 of fig. 1 is shown in fig. 2. As shown in fig. 2, the camera module 12 includes an image sensor 14 and control and processing circuitry 44. Control and processing circuitry 44 may correspond to image processing and data formatting circuitry 16 in fig. 1. Image sensor 14 may include an array of pixels, such as an array 32 of pixels 34 (sometimes referred to herein as image sensor pixels or image pixels 34). Control and processing circuitry 44 may be coupled to row control circuitry 40 and may be coupled to column control and readout circuitry 42 via data paths 26. Row control circuitry 40 may receive row addresses from control and processing circuitry 44 and may provide corresponding row control signals (e.g., dual conversion gain control signals, pixel reset control signals, charge transfer control signals, blooming control signals, row select control signals, or any other desired pixel control signals) to image pixels 34 via control paths 36. Column control and readout circuitry 42 may be coupled to columns of pixel array 32 via one or more conductive lines, such as column line 38. A column line 38 may be coupled to each column of image pixels 34 in image pixel array 32 (e.g., each column of pixels may be coupled to a corresponding column line 38). Column lines 38 may be used to read out image signals from image pixels 34 and to provide bias signals (e.g., bias currents or bias voltages) to image pixels 34. During an image pixel readout operation, a row of pixels in image pixel array 32 may be selected using row control circuitry 40, and image data associated with the image pixels 34 of that row of pixels may be read out on column lines 38 by column control and readout circuitry 42.
Column control and readout circuitry 42 may include column circuitry such as column amplifiers for amplifying signals read out of array 32, sample and hold circuitry for sampling and storing signals read out of array 32, analog-to-digital converter circuitry for converting read out analog signals to corresponding digital signals, and column memory for storing the read out signals and any other desired data. Column control and readout circuitry 42 may output the digital pixel values over lines 26 to control and processing circuitry 44.
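The row-by-row readout flow described above can be sketched conceptually. The following is a plain-Python stand-in, not the patent's circuitry; `sample_pixel` abstracts the analog chain (column amplifier, sample-and-hold, ADC) for one pixel:

```python
def read_frame(num_rows, num_cols, sample_pixel):
    """Conceptual sketch of row-wise sensor readout.

    sample_pixel(row, col) stands in for the per-pixel analog chain:
    column amplifier -> sample-and-hold -> analog-to-digital conversion.
    """
    frame = []
    for row in range(num_rows):  # row control circuitry selects one row at a time
        # all columns in the selected row are sampled and converted
        row_values = [sample_pixel(row, col) for col in range(num_cols)]
        frame.append(row_values)  # column memory stores the converted row
    return frame

# Usage with a hypothetical stand-in sampler:
frame = read_frame(2, 3, lambda r, c: r * 10 + c)
```

The design point being illustrated is that all columns of a selected row are read out in parallel by the column circuitry, while rows are addressed sequentially.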
Array 32 may have any number of rows and columns. In general, the size of the array 32 and the number of rows and columns in the array 32 will depend on the implementation of the image sensor 14. Although rows and columns are generally described herein as horizontal and vertical, respectively, rows and columns may refer to any grid-like structure (e.g., features described herein as rows may be arranged vertically and features described herein as columns may be arranged horizontally).
If desired, the array 32 may be part of a stacked die structure, wherein the pixels 34 of the array 32 may be divided among two or more stacked substrates. In such a configuration, each pixel 34 in the array 32 may be divided among the two dies at any desired node within the pixel. For example, a node such as a floating diffusion node may be formed across two dies. A pixel circuit including a photodiode and circuitry coupled between the photodiode and a desired node (such as a floating diffusion node in this example) may be formed on the first die, and the remaining pixel circuits may be formed on the second die. The desired node may be formed on (i.e., as part of) a coupling structure (such as a conductive pad, a micro-pad, a conductive interconnect structure, or a conductive via) that connects the two dies. The coupling structure may have a first portion on a first die and a second portion on a second die before the two dies are bonded. The first die and the second die may be bonded to each other such that the first portion of the coupling structure and the second portion of the coupling structure are bonded together and electrically coupled. If desired, the first and second portions of the coupling structure may be in compressive engagement with one another. However, this is merely exemplary. The first and second portions of the coupling structure formed on the respective first and second die may be bonded together using any known metal-to-metal bonding technique, such as soldering or welding, if desired.
As described above, the desired node in the pixel circuit that is divided into two dies may be a floating diffusion node. Alternatively, the desired node may be a node between the floating diffusion region and the gate of the source follower transistor (i.e., the floating diffusion node may be formed on the first die on which the photodiode is formed while the coupling structure may connect the floating diffusion node to the source follower transistor on the second die), a node between the floating diffusion region and the source-drain node of the transfer transistor (i.e., the floating diffusion node may be formed on the second die on which the photodiode is not provided), a node between the source-drain node of the source follower transistor and the row select transistor, or any other desired node of the pixel circuit.
Fig. 3 is a circuit diagram and corresponding potential diagram of a dual gain image pixel. As shown in fig. 3, a dual-gain image pixel 200 includes a photosensitive element 202 (e.g., a photodiode) having a first terminal connected to a ground terminal 222 and a second terminal coupled to a floating diffusion node (FD) 212 through a transfer transistor 204. The floating diffusion node 212 is coupled to a voltage source 220 through a gain select transistor 206 and a reset transistor 208. The gain selection capacitor 210 has a capacitance CGS and has a first terminal connected to ground terminal 222 and a second terminal coupled to a node interposed between gain select transistor 206 and reset transistor 208. The first terminal of the gain selection capacitor 210 may alternatively be coupled to a fixed potential (not shown), if desired. The source follower transistor 214 has a gate terminal coupled to the floating diffusion node 212, a first source-drain terminal coupled to the voltage source 220, and a second source-drain terminal coupled to a column output line 218 through a row select transistor 216.
The gate terminal of the transfer transistor 204 receives a control signal TX. The gate terminal of the gain selection transistor 206 receives the control signal GS. The gate terminal of the RESET transistor 208 receives the control signal RESET. The gate terminal of row select transistor 216 receives control signal RS. The voltage source 220 provides a voltage Vdd. The control signals TX, GS, RESET and RS are provided by row control circuitry (such as row control circuitry 40 in fig. 2).
The potential diagram 230 shown in fig. 3 corresponds to the voltage levels (V) at different locations within the dual-gain pixel 200 and is used to show the voltage levels and amounts of charge at those locations during the pixel operations of figs. 4 and 9. The photodiode region 232 corresponds to the voltage level at the photodiode 202. The transfer region 234 corresponds to the voltage level at the transfer transistor 204. The floating diffusion region 236 corresponds to the voltage level at the floating diffusion node 212. The gain select transistor region 238 corresponds to the voltage level at the gain select transistor 206. The gain selection storage region 240 corresponds to the voltage level at the gain selection capacitor 210. The reset region 242 corresponds to the voltage level at the reset transistor 208. The voltage source region 244 corresponds to the voltage level at the voltage source 220. Charge (represented by the darkened areas in figs. 4 and 9) accumulates in the photodiode region 232 during photodiode accumulation and is transferred to regions 236 and 240 during charge transfer and signal readout operations.
Fig. 4 shows a series of potential diagrams corresponding to the potential diagram 230 in fig. 3, illustrating the potential levels in the dual gain pixel 200 during respective periods of the three-readout operation method under low and high illumination conditions. Fig. 5 illustrates a timing diagram of the three-readout operation method of the dual-gain pixel 200. The timing diagram of fig. 5 shows the states of the pixel 200, the timing of control signals TX, GS, RESET, and RS, and the timing of ADC and sensor readout operations of an image sensor that includes the dual gain pixel 200. The timing diagram of fig. 5 corresponds to the potential diagrams of fig. 4. During time period t1, signals TX, GS, and RESET are asserted such that regions 234, 238, and 242 are set to a high voltage level to reset pixel 200. During time period t2, TX and RESET are deasserted such that regions 234 and 242 are set to a low voltage level.
Pixel exposure and overflow occur during time period t2-t3. Time t2 marks the beginning of photodiode charge accumulation and time t3 marks its end. Under low illumination conditions, all of the charge is contained in the photodiode region 232 at time t3 and no overflow occurs. Under high illumination conditions, the accumulated charge exceeds the capacity of the photodiode region 232 by time t3 and overflows from the photodiode region 232 into the floating diffusion region 236 and the gain selection storage region 240.
Pixel readout occurs during time periods t4-t8. During time period t4, while signals RESET, TX, and GS are deasserted (i.e., while the pixel 200 is in the high-gain configuration), the control signal RS is pulsed to read out the high-gain reset voltage HGR. Under low illumination conditions without charge overflow, the accumulated charge remains in the photodiode region 232 and does not contribute to HGR. Under high illumination conditions, overflow charge in the floating diffusion region 236 contributes to HGR. During time period t5, while signals GS and RESET are deasserted, signal TX is asserted to transfer charge from the photodiode region 232 to the floating diffusion region 236. Under low illumination conditions the photodiode charge is fully transferred, whereas under high illumination conditions some charge remains in the photodiode region 232. During time period t6, while signals RESET, TX, and GS are deasserted, signal RS is pulsed to read out the high-gain signal voltage HGS. During time period t7, while signal RESET is deasserted, signals TX and GS are asserted such that any charge remaining in the photodiode region 232 is distributed between the floating diffusion region 236 and the gain selection storage region 240. During time period t8, while signals RESET and TX are deasserted (i.e., while pixel 200 is in the low-gain configuration), signal RS is pulsed to read out the low-gain signal voltage LGS. The pixel is reset again during time period t9, in which signals RESET, TX, and GS are held asserted until a new pixel exposure and overflow period begins.
As shown in fig. 5, three ADC operations and two sensor readout operations are performed by the image sensor containing the dual gain pixel 200 for each captured image. The HGR, HGS, and LGS signals are each converted from analog to digital immediately after being read out. After the HGR and HGS signals undergo ADC conversion, HGR is subtracted from HGS to generate the high gain signal HG(S-R), and this signal is subsequently read out from the image sensor. After HG(S-R) is read out, LGS is read out from the image sensor.
It should be noted that during the pixel operation of fig. 5, the low gain reset voltage is not read out. Instead, a frame buffer is used to store a calibration voltage CAL that corresponds to the voltage on the floating diffusion node during pixel reset. During downstream processing, CAL is subtracted from LGS to produce the low gain signal. Adding the frame buffer requires additional hardware in the image sensor, but reduces the number of readouts performed for each captured image.
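The digital processing of the three-readout method can be sketched as follows. This is a minimal Python illustration assuming post-ADC digital values; the function name and sample values are hypothetical, not from the patent:

```python
def process_three_readout(hgr, hgs, lgs, cal):
    """Digital processing for the three-readout method (fig. 5 sketch).

    hgr, hgs, lgs: high-gain reset, high-gain signal, and low-gain signal
    samples after ADC conversion. cal: the stored calibration value from
    the frame buffer, standing in for the low-gain reset level that is
    never read out in this method.
    """
    hg = hgs - hgr  # high gain signal HG(S-R), correlated double sampling
    lg = lgs - cal  # low gain signal, offset-corrected with stored CAL
    return hg, lg

# Hypothetical digital sample values:
hg, lg = process_three_readout(hgr=100, hgs=600, lgs=350, cal=90)
```

Because CAL is captured once and reused, the frame buffer trades extra memory for one fewer readout and ADC conversion per pixel per frame.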
Fig. 6 shows a timing diagram of a four-readout method of operating the dual-gain pixel 200. The timing diagram of fig. 6 shows the state of pixel 200, the timing of the control signals TX, GS, RESET, and RS, and the timing of the ADC and sensor readout operations of an image sensor containing the dual-gain pixel 200. The operations of the four-readout method that occur during time periods t1-t8 are substantially the same as those described above in connection with fig. 5, and for the sake of brevity are not repeated here. In the four-readout method of fig. 6, pixel readout does not end with the readout of LGS. Instead, during time period t9, while signals TX and RS are deasserted, signals RESET and GS may be asserted to reset the pixel 200 to the voltage Vdd. During time period t10, while signal GS is asserted and signals TX and RESET are deasserted, signal RS is pulsed to read out the low-gain reset voltage LGR. The pixel is reset during time period t11, in which the signals RESET, TX, and GS are held asserted until a new pixel exposure and overflow period begins.
As shown in fig. 6, four ADC operations and two sensor readout operations are performed by an image sensor containing dual-gain pixels 200 for each captured image. Each of the signals HGR, HGS, LGS, and LGR is converted from analog to digital immediately after it is read out. After the HGR and HGS signals undergo ADC processing, HGR is subtracted from HGS to generate the high-gain signal HG(S-R), and this signal is then read out from the image sensor. After HG(S-R) is read out, LGR is subtracted from LGS to generate the low-gain signal LG(S-R), and this signal is then read out from the image sensor.
It should be noted that in the four-readout method of fig. 6, the low-gain signal LG(S-R) is generated based in part on the low-gain reset signal LGR read out at time t10, rather than on a stored calibration signal (e.g., the signal CAL described above in connection with fig. 5). This eliminates the need for a frame buffer to store the calibration signal. This approach increases the number of readouts required per captured image, but adds no frame-buffer hardware.
Fig. 7 is a graph showing the relationship between light intensity and the signal level (-V) of the signals read out using the three-readout and four-readout methods of figs. 4-6. Charge overflow begins at the light intensity level 702. In the three-readout method of figs. 4 and 5, the calibration signal CAL corresponding to voltage 740 may be stored in the frame buffer instead of reading out signal LGR. Above the light intensity level 702, the signals HGR and LGS and a portion of the signal HGS share the same slope 704; however, the signal HGS clips once the light intensity is large enough that the resulting HGS signal exceeds the operating range of the analog readout chain in the image sensor. The signals HGS and HGR are read out in the high-gain configuration, while the signals LGS and LGR are read out in the low-gain configuration.
Fig. 8 is a graph showing the relationship between light intensity and the signal level (-V) of the pixel output signals, and illustrates a method for blending two pixel output signals to produce a single linear high dynamic range output signal HDR. The high-gain signal HG corresponds to the high-gain signal HG(S-R) of figs. 5-7. The low-gain signal LG corresponds to the low-gain signal LG(S-R) or LG(S) of figs. 5-7. The high dynamic range signal HDR represents the actual signal output by the pixel after processing. For light intensities in range 824, the high-gain signal HG is output as HDR. For light intensities in range 826, the low-gain signal LG is amplified along path 822 and then output as HDR. The signal LG is used in range 826 because the signal HG may experience clipping 820 when its signal level exceeds the operating range of the image sensor's analog readout chain.
In the blending zone 828, HDR is defined as the sum of a portion of the high-gain signal HG and a portion of the amplified low-gain signal LG. For example, HDR can be calculated using the following equation (1),
HDR = (1 - α)(HG) + (α)(G)(LG)    (1)
where G is the gain ratio between HG and LG, used to amplify LG, and where α is any desired function (e.g., a linear function or a sigmoid function) that ranges from 0 to 1 as the light intensity ranges from the beginning of blending zone 828 to its end. Transitioning the value of HDR from HG to LG in this blended manner avoids noise spikes and is tolerant of errors in the assumed gain difference between HG and LG. Such blending introduces only a small non-linearity in the signal, compared with the discontinuity that forms when switching hard from HG to LG.
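Equation (1) can be illustrated with a short sketch. This is a hypothetical Python implementation that assumes a linear α ramp across the blending zone; the function name and the zone-threshold parameters are illustrative, not taken from the patent.

```python
# Illustrative sketch of equation (1). A linear alpha ramp across the
# blending zone is assumed; the patent allows any function of light
# intensity that runs from 0 to 1 (e.g., a sigmoid).

def blend_hdr(hg, lg, gain_ratio, intensity, zone_start, zone_end):
    """Blend the high-gain and amplified low-gain signals into one HDR value."""
    if intensity <= zone_start:        # range 824: output HG directly
        return hg
    if intensity >= zone_end:          # range 826: HG may clip, use G * LG
        return gain_ratio * lg
    # Blending zone 828: HDR = (1 - alpha) * HG + alpha * G * LG
    alpha = (intensity - zone_start) / (zone_end - zone_start)
    return (1.0 - alpha) * hg + alpha * gain_ratio * lg
```

Because α reaches exactly 0 and 1 at the zone boundaries, the blended output meets the pure-HG and pure-G·LG branches continuously, which is the property that avoids the hard-switch discontinuity.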
Fig. 9 shows a series of potential diagrams corresponding to the potential diagram 230 of fig. 3, illustrating the potential levels in the dual-gain pixel 200 during the respective periods of a dual-readout method of operating the dual-gain pixel 200 under high-, medium-, and low-illuminance conditions. Fig. 10 shows a timing diagram of the dual-readout method for the dual-gain pixel 200 of fig. 8. The timing diagram of fig. 10 shows the state of pixel 200, the timing of the control signals TX, GS, RESET, and RS, and the timing of the ADC and sensor readout operations of an image sensor containing the dual-gain pixel 200. The operations of the dual-readout method of figs. 9 and 10 that occur during time periods t1-t3 may be substantially the same as those described above in connection with fig. 5, and for the sake of brevity are not repeated here. In the dual-readout method of figs. 9 and 10, all signals can be read out from the pixel 200 in the high-gain configuration. Time periods t4-t6 may correspond to pixel readout. During time period t4, while the signals GS, TX, and RESET are deasserted (i.e., the high-gain configuration), the signal RS is pulsed to read out the high-gain reset voltage HGR. Under low-light conditions, the floating diffusion region 236 will contain little to no charge when HGR is read. Under medium- and high-light conditions, the floating diffusion region 236 will contain charge that overflowed from the photodiode region 232 during charge accumulation. During time period t5, signal TX may be asserted while signals GS, RS, and RESET are deasserted, so that the non-overflowing charge is transferred from the photodiode region 232 to the floating diffusion region 236. Under low- and medium-light conditions, the non-overflowing charge can be completely transferred from the photodiode region 232 to the floating diffusion region 236.
Under high-light conditions, the floating diffusion region 236 has limited capacity, so after the charge transfer during time period t5, some non-overflowing charge can remain in the photodiode region 232. During time period t6, while signals GS, TX, and RESET are deasserted, signal RS may be asserted to read out the high-gain signal voltage HGS. The pixel is reset again during time period t7, in which the signals RESET, TX, and GS are held asserted until a new pixel exposure and overflow period begins.
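The exposure-and-transfer behavior described above can be modeled with a toy sketch. The function name and the capacity values are hypothetical, chosen only to illustrate the low-, medium-, and high-light cases; real capacities are device-specific.

```python
# Toy model (charge in electrons, hypothetical capacities) of the behavior
# described above: during exposure, charge beyond the photodiode full well
# overflows onto the floating diffusion; at transfer time t5, the floating
# diffusion's limited capacity can leave some charge behind on the
# photodiode under very high light.

def expose_and_transfer(photo_charge, pd_full_well, fd_capacity):
    pd = min(photo_charge, pd_full_well)   # charge held by the photodiode
    fd = photo_charge - pd                 # overflow charge (sampled as HGR)
    moved = min(pd, fd_capacity - fd)      # transfer limited by FD headroom
    return pd - moved, fd + moved          # (left on photodiode, on FD)
```

With these assumed capacities, low light leaves the floating diffusion empty until transfer, medium light adds overflow charge that HGR captures, and high light leaves residual charge on the photodiode after transfer, matching the three illuminance cases of fig. 9.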
As shown in fig. 10, two ADC operations and two sensor readout operations are performed by an image sensor containing dual-gain pixels 200 for each captured image. Each of the signals HGR and HGS is converted from analog to digital immediately after it is read out. After the signals HGR and HGS undergo ADC processing, a high dynamic range image signal HDR (sometimes referred to as a high dynamic range signal HDR) is generated. HDR may be generated, for example, using image processing circuitry such as the image processing and data formatting circuitry 16 of fig. 1. When the light intensity is below a first threshold, HDR is calculated using equation (2) below. When the light intensity is between the first threshold and a second threshold, HDR is calculated using equation (3). When the light intensity is between the second threshold and a third threshold, HDR is calculated using equation (4). When the light intensity is between the third threshold and a fourth threshold, HDR is calculated using equation (5). When the light intensity is above the fourth threshold, HDR is calculated using equation (6).
HDR = HGS - HGR    (2)
HDR = HGS - HGR + (α)(G)(HGR - CAL1), α = [0..1]    (3)
HDR = HGS - HGR + (G)(HGR - CAL1)    (4)
HDR = (1 - β)((HGS - HGR) + (G)(HGR - CAL1)) + (β)(CAL2 + (G)(HGR - CAL1)), β = [0..1]    (5)
HDR = CAL2 + (G)(HGR - CAL1)    (6)
where G is the gain ratio between the HGR signal after the onset of overflow and the HGS signal before the onset of overflow (overflow being defined to begin at a particular light intensity level), where CAL1 is a stored calibration value corresponding to the value of HGR in the dark (i.e., CAL1 is a dark offset calibration voltage), where CAL2 is a stored calibration value corresponding to the value of (HGS - HGR) when the light intensity is between the second threshold and the third threshold (e.g., when charge begins to overflow from the photodiode), where α is any desired function (e.g., a linear function or a sigmoid function) ranging from 0 to 1 as the light intensity ranges from the first threshold to the second threshold, and where β is any desired function (e.g., a linear function or a sigmoid function) ranging from 0 to 1 as the light intensity ranges from the third threshold to the fourth threshold. The functions α and β may be functions of predefined light intensities. The calibration values CAL1 and CAL2 may be stored, for example, in respective frame buffers on the image sensor.
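The piecewise selection among equations (2)-(6) can be sketched as follows. For clarity the light intensity is treated as a known input, and the threshold values and the linear ramps assumed for α and β are hypothetical choices; the patent allows other functions (e.g., sigmoids) of light intensity.

```python
# Illustrative sketch of the piecewise HDR computation of equations
# (2)-(6), with thresholds t1 < t2 < t3 < t4 and linear alpha/beta ramps.

def compute_hdr(hgs, hgr, g, cal1, cal2, intensity, t1, t2, t3, t4):
    if intensity < t1:                           # equation (2)
        return hgs - hgr
    if intensity < t2:                           # equation (3)
        alpha = (intensity - t1) / (t2 - t1)
        return hgs - hgr + alpha * g * (hgr - cal1)
    if intensity < t3:                           # equation (4)
        return hgs - hgr + g * (hgr - cal1)
    if intensity < t4:                           # equation (5)
        beta = (intensity - t3) / (t4 - t3)
        return ((1.0 - beta) * ((hgs - hgr) + g * (hgr - cal1))
                + beta * (cal2 + g * (hgr - cal1)))
    return cal2 + g * (hgr - cal1)               # equation (6)
```

In practice the thresholds would be fixed by the sensor's overflow and clipping points, and G, CAL1, and CAL2 would come from calibration (e.g., the frame buffers mentioned above).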
The dual-readout method of figs. 9-10 may be more advantageous than the methods of figs. 4-6. Because the dual-readout method requires fewer readouts and fewer ADC operations than the three- and four-readout methods, it can achieve faster operation at the same power level, or reduced power consumption at the same operating speed. The dual-readout method also slightly increases the maximum photo-charge storage capacity of a pixel using the method. These advantages come with a trade-off: two calibration signals (i.e., external reference images) must be stored in the image sensor for HDR signal calibration. In addition, the dual-readout method performs signal readout only in the high-gain configuration, which avoids the electrical offset that arises between the high-gain and low-gain signals in conventional methods that combine high-gain-configuration and low-gain-configuration readouts.
Fig. 11 is a graph showing the relationship between light intensity and the signal level (-V) of the pixel output signals, and the decisions made to generate a linear high dynamic range output signal HDR using an improved blending method. The signals HGS and HGR shown in fig. 11 may correspond to those described in connection with figs. 9-10. The saturation point 1102 of HGS may be set by signal overflow rather than by analog readout chain clipping; analog readout chain clipping eventually causes additional saturation in region 1120. Overflow begins to occur at the light intensity level associated with the saturation point 1102. It should be noted that the light intensity at which overflow begins is lower than the light intensity at which the signal HGS saturates. Between the onset of overflow and the onset of HGS saturation, the signals HGS and HGR may have the same slope 1104. The onsets of saturation and overflow may be marked by respective light intensity thresholds.
In the dual-readout method of figs. 9 and 10, HDR is calculated using equation (2) for light intensities in region 1150. For light intensities in region 1156, HDR is calculated using equation (3), where the gain ratio G may correspond to path 1122. For light intensities in region 1152, HDR is calculated using equation (4). For light intensities in region 1158, HDR is calculated using equation (5). For light intensities in region 1154, HDR is calculated using equation (6). Regions 1150, 1152, 1154, 1156, and 1158 may sometimes be referred to herein as ranges of lighting conditions or ranges of light intensity values.
Point 1142 represents the light intensity and signal level corresponding to the calibration signal CAL1 employed in the dual readout method of fig. 9 and 10. Point 1140 represents the light intensity and signal level corresponding to the HGR and HGS values used to calculate the calibration signal CAL2 used in the dual readout method of fig. 9 and 10.
The improved blending method of fig. 11 may be more advantageous than the blending method of fig. 8. Because the signal HGS experiences clipping above a certain light intensity level, the calculation of HDR becomes less accurate near that clipping intensity. It is therefore advantageous to use the second blending method when the light intensity falls within region 1158, just before saturation occurs in region 1154. Using the second blending method in this manner prevents a discontinuity when transitioning from equation (4) in region 1152 to equation (6) in region 1154. Instead, the second blending method represented by equation (5) allows a smooth transition between regions 1152 and 1154.
Various embodiments have been described of a system (e.g., system 100 of fig. 1) that includes an imaging system and a host subsystem. According to one example, the imaging system may include an array of pixels arranged in rows and columns. Each pixel in the pixel array may include: a photodiode that accumulates charge in response to incident light; a floating diffusion node coupled to the photodiode via a transfer transistor; a gain selection storage node coupled to the floating diffusion node; and a readout circuit coupled to the floating diffusion node. The readout circuit may read out a first signal when the pixel is in a high-gain configuration. The first signal may be based on a first portion of the accumulated charge that overflows from the photodiode into the floating diffusion node and the gain selection storage node. The readout circuit may read out a second signal when the pixel is in the high-gain configuration. The second signal may be based on the first portion of the accumulated charge and on a second portion of the accumulated charge that is transferred to the floating diffusion node through the transfer transistor.
The imaging system may also include an image processing circuit that receives the first signal and the second signal from the readout circuit and generates a high dynamic range signal based on the first signal and the second signal. The high dynamic range signal may be generated based on the first signal and the second signal and based on the first calibration signal and the second calibration signal. The first calibration signal may be a dark offset calibration voltage. The second calibration signal may correspond to a predetermined difference between a high-gain signal voltage sampled at a certain light intensity level and a high-gain reset voltage. This light intensity level corresponds to the onset of charge overflow from the photodiode.
A gain selection transistor may be interposed between the floating diffusion node and the gain selection storage node. The high gain configuration may occur when the gain select transistor is disabled such that the floating diffusion node is isolated from the gain select storage node by the gain select transistor.
According to another example, a method of operating an imaging system may comprise: accumulating charge in response to incident light with a photodiode in a dual-gain pixel; reading out a first signal with a readout circuit when the pixel is in a high gain configuration, wherein the first signal is based on a first portion of the accumulated charge overflowing from the photodiode into the floating diffusion node and the gain selection storage node; transferring a second portion of the accumulated charge from the photodiode to a floating diffusion node in a high gain configuration using a transfer transistor; and reading out a second signal with the readout circuit when the pixel is in the high gain configuration, wherein the second signal is based on the first portion and the second portion of the charge accumulated at the floating diffusion node. The high gain configuration may include deasserting a gate signal for the gain select transistor to isolate the floating diffusion node from the gain select storage region.
The method may also include receiving, with an image processing circuit, the first signal and the second signal from the readout circuit, and generating a high dynamic range signal based on the first signal and the second signal. The high dynamic range signal may be generated based on the first signal and the second signal and based on a first calibration signal and a second calibration signal. The first calibration signal may be a dark offset calibration signal. The second calibration signal may be based on a predetermined difference between a high-gain signal voltage and a high-gain reset voltage, each sampled at a certain light intensity threshold. The light intensity threshold corresponds to the light intensity level at which charge overflow begins to occur at the photodiode.
The method may further include resetting the dual gain pixel to a pixel reset voltage after reading out the second signal.
According to another example, a method of operating an imaging system may comprise: during exposure, charge is accumulated in response to incident light using photodiodes in the pixels. Under high brightness conditions, a first portion of the accumulated charge may overflow from the photodiode into the storage node during exposure, and a second portion of the accumulated charge may remain on the photodiode during exposure. The method may further comprise: reading out a first signal with a readout circuit while the pixel is in a high gain configuration, wherein the first signal is based on the first portion of the accumulated charge; reading out a second signal with the readout circuit while the pixel is in the high gain configuration, wherein the second signal is based on the first and second portions of the accumulated charge; and generating, with the image processing circuit, a high dynamic range image signal. The high dynamic range image signal may be generated based on the first and second signals, and the first calibration signal in a first range of lighting conditions. The high dynamic range image signal may be generated based on the first and second signals, the first calibration signal, and the second calibration signal under a second range of lighting conditions.
The first range of lighting conditions may include low-light conditions in which no portion of the accumulated charge overflows from the photodiode. The second signal may become a clipped signal above a light intensity threshold. The second range of lighting conditions may include a range of light intensity values approximately equal to or greater than the light intensity threshold. The first calibration signal may be a dark offset calibration signal. The second calibration signal may be based on a predetermined difference between a high-gain signal voltage and a high-gain reset voltage, each sampled at a certain light intensity level. The high dynamic range image signal may additionally be based on predefined functions. These predefined functions may be functions of light intensity.
The foregoing is considered as illustrative only of the principles of the invention, and numerous modifications can be made by those skilled in the art without departing from the spirit and scope of the invention. The above embodiments may be implemented individually or in any combination.

Claims (19)

1. An imaging system, the imaging system comprising:
a pixel array arranged in rows and columns, each pixel in the pixel array comprising:
a photodiode that accumulates charge in response to incident light;
a floating diffusion node coupled to the photodiode via a transfer transistor;
a gain selection storage node coupled to the floating diffusion node; and
a readout circuit coupled to the floating diffusion node, wherein the readout circuit reads out a first signal when the pixel is in a high gain configuration, wherein the first signal is based on a first portion of the accumulated charge overflowing from the photodiode into the floating diffusion node and the gain selection storage node, wherein the readout circuit reads out a second signal when the pixel is in the high gain configuration, wherein the second signal is based on the first portion of the accumulated charge and on a second portion of the accumulated charge transferred to the floating diffusion node through the transfer transistor, and the pixel resets in response to the readout circuit reading out the second signal.
2. The imaging system of claim 1, further comprising:
an image processing circuit that receives the first signal and the second signal from the readout circuit and generates a high dynamic range signal based on the first signal and the second signal.
3. The imaging system of claim 2, wherein the high dynamic range signal is generated based on the first and second signals and based on first and second calibration signals.
4. The imaging system of claim 3, wherein the first calibration signal is a dark offset calibration voltage.
5. The imaging system of claim 4, wherein the second calibration signal corresponds to a predetermined difference between a high-gain signal voltage and a high-gain reset voltage sampled at a light intensity level, and wherein the light intensity level corresponds to a start of charge overflow from the photodiode.
6. The imaging system of claim 1, wherein a gain selection transistor is interposed between the floating diffusion node and the gain selection storage node.
7. The imaging system of claim 6, wherein the high gain configuration occurs when the gain select transistor is disabled such that the floating diffusion node is isolated from the gain select storage node by the gain select transistor.
8. A method of imaging system operation, the method comprising:
accumulating charge in response to incident light with a photodiode in a dual-gain pixel;
reading out a first signal with a readout circuit when the pixel is in a high gain configuration, wherein the first signal is based on a first portion of the accumulated charge overflowing from the photodiode into a floating diffusion node and a gain selection storage node;
transferring a second portion of the accumulated charge from the photodiode to the floating diffusion node in the high-gain configuration with a transfer transistor; and
reading out a second signal with the readout circuit while the pixel is in the high-gain configuration, wherein the second signal is based on the first and second portions of the accumulated charge at the floating diffusion node; and
resetting the dual-gain pixel to a pixel reset voltage in response to reading out the second signal.
9. The method of claim 8, wherein the high gain configuration comprises deasserting a gate signal for a gain select transistor in order to isolate the floating diffusion node from a gain select storage region.
10. The method of claim 8, further comprising:
with an image processing circuit, a first signal and a second signal are received from the readout circuit, and a high dynamic range signal is generated based on the first signal and the second signal.
11. The method of claim 10, wherein the high dynamic range signal is generated based on the first and second signals and based on first and second calibration signals.
12. The method of claim 11, wherein the first calibration signal is a dark offset calibration signal.
13. The method of claim 12, wherein the second calibration signal is based on a predetermined difference between a high-gain signal voltage and a high-gain reset voltage respectively sampled at a light intensity threshold, wherein the light intensity threshold corresponds to a light intensity level at which charge overflow begins to occur at the photodiode.
14. A method of imaging system operation, the method comprising:
accumulating charge in response to incident light with a photodiode in a pixel during an exposure period, wherein under high light conditions a first portion of the accumulated charge overflows from the photodiode into a storage node during the exposure period, and wherein a second portion of the accumulated charge remains at the photodiode during the exposure period;
reading out a first signal with a readout circuit when the pixel is in a high gain configuration, wherein the first signal is based on the first portion of the accumulated charge;
reading out a second signal with the readout circuit while the pixel is in the high gain configuration, and wherein the second signal is based on the first and second portions of the accumulated charge; and
generating, with an image processing circuit, a high dynamic range image signal, wherein the high dynamic range image signal is generated based on the first and second signals and a first calibration signal under a first range of lighting conditions, and wherein the high dynamic range image signal is generated based on the first and second signals, the first calibration signal, and a second calibration signal under a second range of lighting conditions.
15. The method of claim 14, wherein the first range of lighting conditions includes a low lighting condition in which no portion of the accumulated charge overflows the photodiode.
16. The method of claim 15, wherein the second signal is clipped above a light intensity threshold, and wherein the second range of lighting conditions comprises a range of light intensity values proximate to and greater than the light intensity threshold.
17. The method of claim 16, wherein the first calibration signal is a dark offset calibration signal.
18. The method of claim 17, wherein the second calibration signal is based on a predetermined difference between a high gain signal voltage and a high gain reset voltage respectively sampled at the optical intensity threshold.
19. The method of claim 14, wherein the high dynamic range image signal is additionally based on a predefined function, wherein the predefined function is a function of light intensity.
CN201610854201.7A 2015-10-01 2016-09-27 High dynamic range imaging system with improved readout and method of system operation Active CN106561046B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201562235817P 2015-10-01 2015-10-01
US62/235,817 2015-10-01
US15/145,643 2016-05-03
US15/145,643 US9843738B2 (en) 2015-10-01 2016-05-03 High dynamic range imaging pixels with improved readout

Publications (2)

Publication Number Publication Date
CN106561046A CN106561046A (en) 2017-04-12
CN106561046B true CN106561046B (en) 2020-09-04

Family

ID=58355981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610854201.7A Active CN106561046B (en) 2015-10-01 2016-09-27 High dynamic range imaging system with improved readout and method of system operation

Country Status (3)

Country Link
US (1) US9843738B2 (en)
CN (1) CN106561046B (en)
DE (1) DE102016218838A1 (en)

US9729808B2 (en) * 2013-03-12 2017-08-08 Tower Semiconductor Ltd. Single-exposure high dynamic range CMOS image sensor pixel with internal charge amplifier
US9412782B2 (en) * 2013-07-08 2016-08-09 BAE Systems Imaging Solutions Inc. Imaging array with improved dynamic range utilizing parasitic photodiodes within floating diffusion nodes of pixels
US9948875B2 (en) * 2015-10-01 2018-04-17 Semiconductor Components Industries, Llc High dynamic range imaging pixels with improved readout

Also Published As

Publication number Publication date
US20170099423A1 (en) 2017-04-06
US9843738B2 (en) 2017-12-12
CN106561046A (en) 2017-04-12
DE102016218838A1 (en) 2017-04-06

Similar Documents

Publication Publication Date Title
CN106561046B (en) High dynamic range imaging system with improved readout and method of system operation
CN106561047B (en) High dynamic range imaging system operation method with improved readout
US10072974B2 (en) Image sensors with LED flicker mitigation global shutter pixels
US9900481B2 (en) Imaging pixels having coupled gate structure
US11496704B2 (en) Photoelectric conversion device having select circuit with a switch circuit having a plurality of switches, and imaging system
US10504949B2 (en) Solid-state imaging device, method of driving solid-state imaging device, imaging system, and movable object
US9185273B2 (en) Imaging pixels with improved dynamic range
US10194103B2 (en) Solid-state imaging device and method of driving solid-state imaging device with clipping level set according to transfer operation frequency
US10841517B2 (en) Solid-state imaging device and imaging system
US10917588B2 (en) Imaging sensors with per-pixel control
JP7258629B2 (en) Imaging device, imaging system, and imaging device driving method
US9380232B2 (en) Image sensors with anti-eclipse circuitry
US20170024868A1 (en) High dynamic range imaging pixels with logarithmic response
US20210044766A1 (en) Photoelectric conversion device, imaging system, moving body, and exposure control device
US10425600B2 (en) Solid state imaging device
US8908071B2 (en) Pixel to pixel charge copier circuit apparatus, systems, and methods
US20210112212A1 (en) Imaging pixels having programmable dynamic range
CN110611773B (en) Image sensor and method of operating the same
US20230307483A1 (en) Photoelectric conversion device
US11310456B2 (en) Photoelectric conversion device and imaging system
US11258967B2 (en) Imaging device and method of driving imaging device
US20170026591A1 (en) Image sensor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant