US20220232179A1 - Image sensor - Google Patents

Image sensor

Info

Publication number
US20220232179A1
Authority
US
United States
Prior art keywords: pixels, sub, pixel, unit, charge detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/715,161
Inventor
Wontak CHOI
Kwi Sung Yoo
Jaejin JUNG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Priority to US17/715,161
Publication of US20220232179A1
Status: Abandoned

Classifications

    • H04N25/534: Control of the integration time by using differing integration times for different sensor regions depending on the spectral component
    • H04N25/57: Control of the dynamic range
    • H04N5/3537
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/84: Camera processing pipelines; Components thereof for processing colour signals
    • H04N25/13: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134: Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H04N25/533: Control of the integration time by using differing integration times for different sensor regions
    • H04N25/583: Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times
    • H04N25/585: Control of the dynamic range involving two or more exposures acquired simultaneously with pixels having different sensitivities within the sensor, e.g. fast or slow pixels or pixels having different sizes
    • H04N25/589: Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields, with different integration times, e.g. short and long exposures
    • H04N25/745: Circuitry for generating timing or clock signals
    • H04N25/75: Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N5/3535
    • H04N5/355
    • H04N5/35554
    • H04N5/35563
    • H04N5/35581
    • H04N9/0451
    • H04N9/77: Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • H04N9/7973: Processing of colour television signals in connection with recording for recording the signal in a plurality of channels, the bandwidth of each channel being less than the bandwidth of the signal, by dividing the luminance or colour component signal samples or frequency bands among a plurality of recording channels
    • H04N9/7976: Processing of colour television signals in connection with recording for recording the signal in a plurality of channels, the bandwidth of each channel being less than the bandwidth of the signal, by spectrum folding of the high frequency components of the luminance signal
    • H04N9/832: Transformation of the television signal for recording, the individual colour picture signal components being recorded simultaneously only, the recorded chrominance signal occupying a frequency band under the frequency band of the recorded brightness signal, using an increased bandwidth for the luminance or the chrominance signal
    • H04N25/11: Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N9/0455

Definitions

  • Embodiments relate to an image sensor, and more particularly, to an image sensor providing a high dynamic range mode and a driving method thereof.
  • The dynamic range indicates the maximum range over which an input signal can be processed without distortion. As the dynamic range becomes wider, an image obtained by the image sensor may become clearer across a wide illumination range.
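  • As an illustrative aside (not taken from the patent text), dynamic range is commonly quoted as the ratio between the largest and smallest signals a pixel can resolve, expressed in decibels; the short sketch below uses hypothetical full-well and noise-floor values only to show the arithmetic.

      # Illustrative only: dynamic range as the ratio of the largest to the
      # smallest resolvable signal, in decibels (all values are hypothetical).
      import math

      full_well_electrons = 6000.0   # assumed saturation capacity of a photodiode
      noise_floor_electrons = 3.0    # assumed read-noise floor

      dynamic_range_db = 20.0 * math.log10(full_well_electrons / noise_floor_electrons)
      print(f"dynamic range ~ {dynamic_range_db:.1f} dB")  # ~66 dB for these numbers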
  • an image sensor for sensing an image signal of a plurality of illumination ranges includes a first unit pixel that includes a first sub-pixel and a second sub-pixel, a second unit pixel that includes a third sub-pixel and a fourth sub-pixel, a timing controller that applies a first effective integration time to the first sub-pixel and the fourth sub-pixel such that a first sensing signal and a fourth sensing signal are generated from the first sub-pixel and the fourth sub-pixel and applies a second effective integration time shorter than the first effective integration time to the second sub-pixel and the third sub-pixel such that a second sensing signal and a third sensing signal are generated from the second sub-pixel and the third sub-pixel, and an analog-to-digital converter that performs an averaging operation on the first sensing signal and the fourth sensing signal or on the second sensing signal and the third sensing signal.
  • a driving method of an image sensor including first to fourth sub-pixels constituting a unit color pixel includes sampling a first sensing signal and a second sensing signal from the first sub-pixel and the second sub-pixel respectively by applying a first effective integration time to the first sub-pixel and the second sub-pixel, sampling a third sensing signal and a fourth sensing signal from the third sub-pixel and the fourth sub-pixel respectively by applying a second effective integration time shorter than the first effective integration time to the third sub-pixel and the fourth sub-pixel, and performing an averaging operation on the first sensing signal and the second sensing signal and performing the averaging operation on the third sensing signal and the fourth sensing signal.
  • the first sub-pixel and the fourth sub-pixel share a first charge detection node
  • the second sub-pixel and the third sub-pixel share a second charge detection node.
  • an image sensor for sensing an image signal of a plurality of illumination ranges includes a first unit pixel that includes a first sub-pixel, a second sub-pixel, and a third sub-pixel sharing a first charge detection node, a second unit pixel that includes a fourth sub-pixel, a fifth sub-pixel, and a sixth sub-pixel sharing a second charge detection node, and a third unit pixel that includes a seventh sub-pixel, an eighth sub-pixel, and a ninth sub-pixel sharing a third charge detection node, and the first unit pixel, the second unit pixel, and the third unit pixel output sensing signals individually by using the first charge detection node, the second charge detection node, and the third charge detection node.
  • FIG. 1 illustrates an image sensor according to an embodiment.
  • FIG. 2 illustrates a diagram of a unit color pixel in FIG. 1 according to an embodiment.
  • FIGS. 3A to 3C illustrate circuit diagrams of structures of unit pixels.
  • FIG. 4 illustrates a timing diagram of a control method for implementing a high dynamic range (HDR) of a unit color pixel according to an embodiment.
  • FIG. 5 illustrates a flowchart of an HDR sensing method of an image sensor according to an embodiment.
  • FIGS. 6A, 6B, and 6C illustrate flowcharts describing a high illumination mode, a middle illumination mode, and a low illumination mode of FIG. 5 , respectively.
  • FIG. 7 illustrates image signals sensed from a unit color pixel according to an embodiment.
  • FIG. 8 illustrates image signals sensed from a unit color pixel according to an embodiment.
  • FIG. 9 illustrates a unit color pixel in FIG. 1 according to another embodiment.
  • FIGS. 10A and 10B illustrate circuit diagrams of structures of unit pixels illustrated in FIG. 9 .
  • FIG. 11 illustrates a timing diagram of a control method for performing an HDR sensing operation on a unit color pixel having a 2×2 pixel size in FIGS. 10A and 10B .
  • FIG. 12 illustrates a diagram of a unit color pixel in FIG. 1 according to another embodiment.
  • FIG. 13 illustrates a circuit diagram of a structure of a unit pixel in FIG. 12 .
  • FIG. 14 illustrates a diagram of a unit color pixel illustrated in FIG. 1 according to another embodiment.
  • FIG. 15 illustrates a circuit diagram of a structure of a unit pixel in FIG. 14 .
  • FIG. 1 illustrates an image sensor according to an embodiment.
  • an image sensor 100 may include a pixel array 110 , a row decoder 120 , an analog-to-digital converter (ADC) 130 , an output buffer 140 , and a timing controller 150 .
  • the pixel array 110 may include a plurality of pixel sensors arranged two-dimensionally. Each of the pixel sensors converts a light signal into an electrical signal.
  • the pixel array 110 may be controlled by signals, which are provided from the row decoder 120 for the purpose of driving the pixel sensors, e.g., a selection signal SEL, a reset signal RG, and a transmission signal TG. Also, electrical signals that are sensed by the pixel sensors in response to the signals for driving pixel sensors are provided to the analog-to-digital converter 130 through a plurality of column lines CLm.
  • the plurality of pixel sensors included in the pixel array 110 are divided into unit pixel groups UPG each sensing a blue (B) color, a green (G 1 /G 2 ) color, and a red (R) color.
  • the unit pixel group UPG may include unit color pixels UCP for sensing the colors, respectively.
  • Each of the unit color pixels UCP may include a color filter capable of selectively transmitting a corresponding color.
  • the color filter includes filters to sense the red, green, and blue colors.
  • the color filter may include filters for sensing yellow, cyan, magenta, and green colors.
  • the color filter may include filters for sensing red, green, blue, and white colors.
  • Each of the unit color pixels UCP includes a plurality of unit pixels UP.
  • One unit pixel UP includes a plurality of sub-pixels SP.
  • One unit pixel UP includes a plurality of photoelectric conversion elements sharing one charge detection node (e.g., a floating diffusion region).
  • One photoelectric conversion element may correspond to one sub-pixel SP.
  • In a unit color pixel (UCP) structure operating in a high dynamic range mode (below, interchangeable with an HDR mode), a plurality of sensing signals corresponding to the same effective integration time EIT may be obtained from one unit color pixel UCP. That is, to implement the HDR mode, the unit pixels UP constituting the unit color pixel UCP may output sensing signals at the same time. In this case, the speed at which sensing data are output in the HDR mode of the unit color pixel UCP may be improved, thus increasing the frame rate.
  • a configuration and an operation of the unit color pixel UCP will be more fully described with reference to drawings below.
  • the row decoder 120 may select one of rows of the pixel array 110 under control of the timing controller 150 .
  • the row decoder 120 generates the selection signal SEL in response to a control signal TC 1 from the timing controller 150 for the purpose of selecting one or more of a plurality of rows.
  • the row decoder 120 may sequentially activate (or enable) the reset signal RG and the transmission signal TG with regard to pixels corresponding to the selected row. In this case, sensing signals for each illumination, which are generated from the unit color pixels UCP of the selected row, are sequentially transmitted to the analog-to-digital converter 130 .
  • the analog-to-digital converter 130 converts the sensing signals generated from the unit color pixels UCP into digital signals in response to a control signal TC 2 from the timing controller 150 .
  • the analog-to-digital converter 130 may perform an averaging operation on sensing signals of a certain illumination, which are generated from one or more unit color pixels UCP.
  • the analog-to-digital converter 130 may perform an analogue binning operation.
  • the analog-to-digital converter 130 may sample HDR sensing signals in a correlated double sampling manner and may then convert the sampled HDR sensing signals into digital signals.
  • a correlated double sampler (CDS) may be further included in front of the analog-to-digital converter 130 .
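  • A minimal sketch of the correlated double sampling idea referred to above, assuming hypothetical reset-level and signal-level samples per column line; subtracting the two samples removes the reset offset common to both before digitization (column names CLi/CLj/CLk follow the description, the numeric values are assumptions).

      # Minimal sketch of correlated double sampling (CDS): for each column,
      # subtract the sampled signal level from the sampled reset level so that
      # the reset offset common to both samples is cancelled before A/D conversion.
      # Column names mirror the description; all sample values are assumed.
      def correlated_double_sample(reset_samples, signal_samples):
          return {col: reset_samples[col] - signal_samples[col] for col in signal_samples}

      reset_samples  = {"CLi": 1.20, "CLj": 1.18, "CLk": 1.21}   # volts, assumed
      signal_samples = {"CLi": 0.85, "CLj": 0.60, "CLk": 0.95}   # volts, assumed

      print(correlated_double_sample(reset_samples, signal_samples))
      # e.g. {'CLi': 0.35, 'CLj': 0.58, 'CLk': 0.26}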
  • the output buffer 140 may latch image data provided from the analog-to-digital converter 130 in the unit of column.
  • the output buffer 140 may temporarily store image data output from the analog-to-digital converter 130 in response to a control signal TC 3 from the timing controller 150 and may then output the latched image data sequentially by using a column decoder.
  • the timing controller 150 controls the pixel array 110 , the row decoder 120 , the analog-to-digital converter 130 , the output buffer 140 , etc.
  • the timing controller 150 may supply control signals, such as a clock signal and a timing control signal, to the pixel array 110 , the row decoder 120 , the analog-to-digital converter 130 , the output buffer 140 , etc.
  • the timing controller 150 may include a logic control circuit, a phase locked loop (PLL) circuit, a timing control circuit, a communication interface circuit, etc.
  • the configuration of the image sensor 100 is briefly described above.
  • Because the unit color pixels UCP constituting the pixel array 110 are able to simultaneously output a plurality of sensing signals corresponding to the same illumination range, the averaging operation is possible.
  • the image sensor 100 is able to perform the averaging operation on sensing signals at a high speed, thus improving a binning speed.
  • Because the unit color pixels UCP constituting the pixel array 110 simultaneously output a plurality of sensing signals corresponding to the same illumination range, it is possible to implement a high-resolution HDR mode in the case of skipping the binning or averaging operation.
  • FIG. 2 is a diagram illustrating the unit color pixel UCP illustrated in FIG. 1 .
  • An example where one unit color pixel UCP includes a plurality of unit pixels UP and each unit pixel UP includes three sub-pixels SP will be described with reference to FIG. 2 .
  • the unit color pixel UCP includes three unit pixels UP.
  • the unit pixel UP may include three photoelectric conversion elements and one floating diffusion region FD.
  • one unit pixel UP includes three sub-pixels having different effective integration times EIT.
  • a sub-pixel L 1 of a unit pixel UP 1 is a sub-pixel having the longest effective integration time EIT from among three sub-pixels thereof.
  • a sub-pixel S 1 of the unit pixel UP 1 is a sub-pixel having the shortest effective integration time EIT from among the three sub-pixels.
  • a sub-pixel M 1 of the unit pixel UP 1 is a sub-pixel having a middle effective integration time EIT from among the three sub-pixels.
  • a unit pixel UP 2 may have substantially the same structure as the unit pixel UP 1 , but may be different from the unit pixel UP 1 in terms of the arrangement of sub-pixels and the order of allocating effective integration times. That is, a sub-pixel S 2 in the first row of the unit pixel UP 2 may have the shortest effective integration time EIT from among three sub-pixels thereof. A sub-pixel M 2 in the second row of the unit pixel UP 2 may have the middle effective integration time EIT from among the three sub-pixels. A sub-pixel L 2 in the third row may have the longest effective integration time EIT of the effective integration times of the three sub-pixels of the unit pixel UP 2 .
  • a unit pixel UP 3 may have substantially the same structure as the unit pixels UP 1 and UP 2 , but may be different from the unit pixels UP 1 and UP 2 in terms of the arrangement of sub-pixels and the order of allocating effective integration times. That is, a sub-pixel M 3 placed in the first row of the unit pixel UP 3 may have the middle effective integration time EIT from among three sub-pixels thereof. A sub-pixel L 3 placed in the second row of the unit pixel UP 3 may have the longest effective integration time EIT of the effective integration times of the three sub-pixels. A sub-pixel S 3 corresponding to the third row may have the shortest effective integration time EIT of the effective integration times of the three sub-pixels of the unit pixel UP 3 .
  • the unit color pixel UCP that performs the HDR mode sensing operation may include three unit pixels UP, in each of which a floating diffusion region is shared. Each unit pixel UP may include three sub-pixels. Accordingly, a unit color pixel may have a pixel structure in which 3×3 pixels constituting three 1×3 unit pixels are arranged. Unit pixels may simultaneously output signals of sub-pixels having the same effective integration time EIT in the high dynamic range (HDR) sensing operation. Because the averaging operation can be performed on sensing signals output at the same time, high-speed binning and analog-to-digital conversion are possible.
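  • A small sketch of this 3×3 grouping, assuming the sub-pixel naming used above (L/M/S for the long, middle, and short effective integration times); it only illustrates how one sub-pixel per unit pixel is selected so that the three unit pixels can be read out together in each illumination mode, and is not a definitive implementation.

      # Sketch of the 3x3 unit color pixel: three 1x3 unit pixels (UP1..UP3),
      # each holding one long-, one middle-, and one short-EIT sub-pixel that
      # share that unit pixel's floating diffusion node.
      # Names mirror the description; the data structure itself is illustrative.
      unit_color_pixel = {
          "UP1": {"L": "L1", "M": "M1", "S": "S1"},   # shares FD1
          "UP2": {"L": "L2", "M": "M2", "S": "S2"},   # shares FD2
          "UP3": {"L": "L3", "M": "M3", "S": "S3"},   # shares FD3
      }

      def subpixels_for_mode(eit_class):
          """Return the one sub-pixel per unit pixel read in a given mode
          ('L' = high-illumination, 'M' = middle, 'S' = low-illumination)."""
          return [subs[eit_class] for subs in unit_color_pixel.values()]

      print(subpixels_for_mode("L"))  # ['L1', 'L2', 'L3'] -> read out simultaneously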
  • FIGS. 3A to 3C are circuit diagrams illustrating structures of unit pixels according to embodiments.
  • the unit pixel UP 1 may include a plurality of photoelectric conversion elements PD 1 , PD 2 , and PD 3 , a plurality of transmission transistors TX 1 , TX 2 , and TX 3 , a reset transistor RX 1 , a selection transistor SX 1 , and a drive transistor DX 1 .
  • the unit pixel UP 1 may further include a conversion gain transistor (CGX) and a capacitor (CAP) for implementing a conversion gain changing circuit.
  • the photoelectric conversion elements PD 1 , PD 2 , and PD 3 may be photosensitive elements that generate and integrate charges depending on the amount of incident light or the intensity of the incident light.
  • Each of the photoelectric conversion elements PD 1 , PD 2 , and PD 3 may be a photo diode, a photo transistor, a photo gate, a pinned photo diode (PPD), or a combination thereof.
  • the transmission transistors TX 1 , TX 2 , and TX 3 transmit charges integrated in the photoelectric conversion elements PD 1 , PD 2 , and PD 3 connected thereto to a first charge detection node FD 1 (i.e., a floating diffusion region).
  • the transmission transistors TX 1 , TX 2 , and TX 3 are controlled by charge transmission signals TG_L 1 , TG_S 1 , and TG_M 1 , respectively.
  • the transmitted photoelectrons may be accumulated at the first charge detection node FD 1 , which has a physically provided capacitance.
  • the drive transistor DX 1 may be controlled depending on the amount of photoelectrons accumulated at the first charge detection node FD 1 .
  • the reset transistor RX 1 may reset charges accumulated at the first charge detection node FD 1 .
  • a drain terminal of the reset transistor RX 1 is connected to the first charge detection node FD 1 , and a source terminal thereof is connected to a pixel power supply voltage VPIX.
  • the drive transistor DX 1 may be a source follower buffer amplifier that generates a source-drain current in proportion to the amount of charges of the first charge detection node FD 1 , which are input to a gate terminal of the drive transistor DX 1 .
  • the drive transistor DX 1 amplifies a potential change of the first charge detection node FD 1 and outputs the amplified signal to a column line CLi through the selection transistor SX 1 .
  • a source terminal of the drive transistor DX 1 may be connected to the pixel power supply voltage VPIX, and a drain terminal of the drive transistor DX 1 may be connected to a source terminal of the selection transistor SX 1 .
  • the selection transistor SX 1 may select the unit pixels UP 1 to be read in the unit of row.
  • an electrical signal output from the drain terminal of the drive transistor DX 1 may be provided to the column line CLi through the selection transistor SX 1 .
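  • As a rough behavioural model only (component values assumed, not taken from the patent), the readout chain described above can be approximated as follows: the floating diffusion voltage drops by Q/C_FD as photoelectrons are transferred, and the source follower reproduces that change on the column line with a gain slightly below one.

      # Rough behavioural model of the FD / source-follower readout chain.
      # C_FD, the follower gain, and the charge counts are assumed values.
      ELECTRON_CHARGE = 1.602e-19   # coulombs

      def column_voltage(n_electrons, v_reset=2.8, c_fd=2.0e-15, sf_gain=0.85):
          """Approximate column-line voltage after transferring n_electrons
          onto the floating diffusion (source-follower gain < 1)."""
          v_fd = v_reset - (n_electrons * ELECTRON_CHARGE) / c_fd
          return sf_gain * v_fd

      print(column_voltage(0))       # reset level, ~2.38 V
      print(column_voltage(5000))    # ~2.04 V after 5000 electrons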
  • a circuit structure of the unit pixel UP 1 of a 1×3 pixel size for constituting the unit color pixel UCP of a 3×3 pixel size is described above.
  • the unit pixel UP 1 may accumulate charges by using one charge detection node (or a floating diffusion region) marked by “FD 1 ”.
  • the unit pixel UP 2 may include a plurality of photoelectric conversion elements PD 4 , PD 5 , and PD 6 , a plurality of transmission transistors TX 4 , TX 5 , and TX 6 , a reset transistor RX 2 , a selection transistor SX 2 , and a drive transistor DX 2 .
  • the unit pixel UP 3 may include a plurality of photoelectric conversion elements PD 7 , PD 8 , and PD 9 , a plurality of transmission transistors TX 7 , TX 8 , and TX 9 , a reset transistor RX 3 , a selection transistor SX 3 , and a drive transistor DX 3 .
  • the unit color pixel UCP of the 3×3 pixel size includes unit pixels UP of the 1×3 pixel size capable of outputting sensing signals independently of each other. Accordingly, unit pixels corresponding to the same effective integration time EIT may output sensing signals at the same time. The output sensing signals may be merged through the averaging operation.
  • FIG. 4 is a timing diagram illustrating a control method for implementing a high dynamic range (HDR) of a unit color pixel according to an embodiment.
  • sensing signals corresponding to the same effective integration time EIT may be simultaneously output from a selected unit color pixel UCP. That is, from a time T 0 to a time T 6 , charges integrated by the photoelectric conversion elements PD 1 , PD 6 , and PD 8 having the longest effective integration time EIT for high-illumination sensing are sensed. From the time T 6 to a time T 9 , charges integrated by the photoelectric conversion elements PD 3 , PD 5 , and PD 7 having the middle effective integration time EIT for middle-illumination sensing are sensed. From the time T 9 to a time T 11 , charges integrated by the photoelectric conversion elements PD 2 , PD 4 , and PD 9 having the shortest effective integration time EIT for low-illumination sensing are sensed.
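  • The per-row schedule above can be summarized as three readout phases; the short sketch below restates it as data and a loop, where only the signal and photodiode names come from the description and the loop structure is illustrative rather than an actual controller implementation.

      # Sketch of the per-row HDR readout schedule of FIG. 4: in each phase the
      # FD nodes are reset, one transfer gate per unit pixel is pulsed, and the
      # three column lines are read out together.
      readout_phases = [
          ("T0-T6",  ["TG_L1", "TG_L2", "TG_L3"], ["PD1", "PD6", "PD8"]),  # high illumination
          ("T6-T9",  ["TG_M1", "TG_M2", "TG_M3"], ["PD3", "PD5", "PD7"]),  # middle illumination
          ("T9-T11", ["TG_S1", "TG_S2", "TG_S3"], ["PD2", "PD4", "PD9"]),  # low illumination
      ]

      for window, transfer_gates, photodiodes in readout_phases:
          print(f"{window}: reset FD1/FD2/FD3, pulse {transfer_gates}, "
                f"read {photodiodes} on CLi/CLj/CLk simultaneously")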
  • a control operation of the unit color pixel UCP for high-illumination sensing may be performed from the time T 0 to the time T 6 .
  • the reset signal RG is maintained at a high level from the time T 0 to the time T 1 for the purpose of resetting charge detection nodes FD 1 , FD 2 , and FD 3 of the unit pixels UP 1 , UP 2 , and UP 3 .
  • the reset transistors RX 1 , RX 2 , and RX 3 are turned on.
  • the reset signal RG transitions to a low level.
  • the reset transistors RX 1 , RX 2 , and RX 3 are turned off.
  • the charge detection nodes FD 1 , FD 2 , and FD 3 may be in a state where charge accumulation is possible.
  • the selection transistors SX 1 , SX 2 , and SX 3 are turned on. In the case where the selection transistors SX 1 , SX 2 , and SX 3 are turned on, it is possible to output sensing signals.
  • the transmission signals TG_L 1 , TG_L 2 , and TG_L 3 transition to the high level for the purpose of turning on the transmission transistors TX 1 , TX 6 , and TX 8 of the sub-pixels L 1 , L 2 , and L 3 corresponding to the longest effective integration time.
  • the remaining charge transmission signals TG_M 1 , TG_M 2 , TG_M 3 , TG_S 1 , TG_S 2 , and TG_S 3 may be maintained at the low level.
  • currents flow through the drive transistors DX 1 , DX 2 , and DX 3 , the gate terminals of which are respectively connected to the charge detection nodes FD 1 , FD 2 , and FD 3 , in proportion to the amount of charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the drive transistor DX 1 of the unit pixel UP 1 amplifies a potential change of the charge detection node FD 1 and outputs the amplified signal to the column line CLi through the selection transistor SX 1 .
  • the drive transistor DX 2 of the unit pixel UP 2 amplifies a potential change of the charge detection node FD 2 and outputs the amplified signal to the column line CLj through the selection transistor SX 2 .
  • the drive transistor DX 3 of the unit pixel UP 3 amplifies a potential change of the charge detection node FD 3 and outputs the amplified signal to the column line CLk through the selection transistor SX 3 .
  • the selection transistors SX 1 , SX 2 , and SX 3 are turned off. In this case, sensing signals of the unit pixels UP 1 , UP 2 , and UP 3 are blocked from being output.
  • the reset transistors RX 1 , RX 2 , and RX 3 are turned on.
  • the charge detection nodes FD 1 , FD 2 , and FD 3 of the unit pixels UP 1 , UP 2 , and UP 3 are reset to the pixel power supply voltage VPIX.
  • a control operation for middle-illumination sensing is performed from the time T 6 to the time T 9 .
  • the transitions of the selection signal SEL and the reset signal RG from the time T 6 to the time T 9 are the same as those from the time T 0 to the time T 6 , and thus, additional description will be omitted to avoid redundancy.
  • photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD 3 , PD 5 , and PD 7 having the effective integration time EIT of a middle length.
  • the transmission signals TG_M 1 , TG_M 2 , and TG_M 3 transition to the high level for the purpose of turning on the transmission transistors TX 3 , TX 5 , and TX 7 of the sub-pixels M 1 , M 2 , and M 3 corresponding to the effective integration time of the middle length.
  • the photoelectrons integrated by the photoelectric conversion elements PD 3 , PD 5 , and PD 7 are transmitted to the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the photoelectrons are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • currents flow through the drive transistors DX 1 , DX 2 , and DX 3 , the gate terminals of which are respectively connected to the charge detection nodes FD 1 , FD 2 , and FD 3 , in proportion to the amount of charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the drive transistor DX 2 of the unit pixel UP 2 amplifies a potential change of the charge detection node FD 2 and outputs the amplified signal to the column line CLj through the selection transistor SX 2 .
  • the drive transistor DX 1 of the unit pixel UP 1 amplifies a potential change of the charge detection node FD 1 and outputs the amplified signal to the column line CLi through the selection transistor SX 1 .
  • the drive transistor DX 3 of the unit pixel UP 3 amplifies a potential change of the charge detection node FD 3 and outputs the amplified signal to the column line CLk through the selection transistor SX 3 .
  • a control operation for low-illumination sensing is performed from the time T 9 to the time T 11 .
  • After the time T 9 , in a state where the reset signal RG transitions to the low level and the selection signal SEL transitions to the high level, photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD 2 , PD 4 , and PD 9 having the effective integration time EIT of the shortest length.
  • the transmission signals TG_S 1 , TG_S 2 , and TG_S 3 transition to the high level for the purpose of turning on the transmission transistors TX 2 , TX 4 , and TX 9 of the sub-pixels S 1 , S 2 , and S 3 corresponding to the shortest effective integration time.
  • the photoelectrons integrated by the photoelectric conversion elements PD 2 , PD 4 , and PD 9 are transmitted to the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the photoelectrons are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • currents flow through the drive transistors DX 1 , DX 2 , and DX 3 , the gate terminals of which are respectively connected to the charge detection nodes FD 1 , FD 2 , and FD 3 , in proportion to the amount of charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the drive transistor DX 3 of the unit pixel UP 3 amplifies a potential change of the charge detection node FD 3 and outputs the amplified signal to the column line CLk through the selection transistor SX 3 .
  • the drive transistor DX 1 of the unit pixel UP 1 amplifies a potential change of the charge detection node FD 1 and outputs the amplified signal to the column line CLi through the selection transistor SX 1 .
  • the drive transistor DX 2 of the unit pixel UP 2 amplifies a potential change of the charge detection node FD 2 and outputs the amplified signal to the column line CLj through the selection transistor SX 2 .
  • sensing signals are able to be simultaneously output from sub-pixels of the unit color pixel UCP in the HDR mode.
  • the sensing signals output from the unit pixels UP 1 , UP 2 , and UP 3 make it possible to perform the averaging operation. Accordingly, the frame rate of the image sensor 100 according to an embodiment may be improved in the high dynamic range (HDR) mode.
  • FIG. 5 is a flowchart illustrating a high dynamic range (HDR) sensing method of an image sensor according to an embodiment.
  • sensing signals corresponding to the same effective integration time EIT may be simultaneously output through the charge detection nodes FD 1 , FD 2 , and FD 3 independently provided in the unit color pixel UCP.
  • one unit color pixel UCP may be selected through the row decoder 120 of the image sensor 100 .
  • a plurality of unit color pixels present in the same row may be simultaneously selected.
  • a high-illumination mode (HIM) sensing operation is performed on the selected unit color pixel UCP.
  • the high-illumination mode (HIM) sensing operation may refer to an operation of sensing sub-pixels having the longest effective integration time EIT from among sub-pixels of the unit color pixel UCP. For example, photoelectrons integrated by the photoelectric conversion elements PD 1 , PD 6 , and PD 8 of the high-illumination sub-pixels L 1 , L 2 , and L 3 of FIG. 2 corresponding to the longest effective integration time EIT are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 . Afterwards, sensing signals corresponding to the charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • a middle-illumination mode (MIM) sensing operation is performed on the selected unit color pixel UCP.
  • the middle-illumination mode (MIM) sensing operation may refer to an operation of sensing sub-pixels having the middle effective integration time EIT from among the sub-pixels of the unit color pixel UCP.
  • photoelectrons integrated in the photoelectric conversion elements PD 3 , PD 5 , and PD 7 of the middle-illumination sub-pixels M 1 , M 2 , and M 3 of FIG. 2 corresponding to the middle effective integration time EIT are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • sensing signals corresponding to the charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • low-illumination mode (LIM) sensing operation is performed on the selected unit color pixel UCP.
  • the low-illumination mode (LIM) sensing operation may refer to an operation of sensing sub-pixels having the shortest effective integration time EIT from among the sub-pixels of the unit color pixel UCP.
  • photoelectrons integrated by the photoelectric conversion elements PD 2 , PD 4 , and PD 9 of the low-illumination sub-pixels S 1 , S 2 , and S 3 of FIG. 2 corresponding to the shortest effective integration time EIT are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • sensing signals corresponding to the charges accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • a binning operation is performed on the sensing signals output from the selected unit color pixel UCP.
  • the averaging operation may be performed on the high-illumination mode sensing signals simultaneously output from the selected unit color pixel UCP.
  • the averaging operation may be performed on sensing signals output from a plurality of unit color pixels UCP corresponding to the same color.
  • Unit color pixels UCP that are present in the same row or in the same column may be selected as the plurality of unit color pixels UCP targeted for the averaging operation.
  • the plurality of unit color pixels UCP targeted for the averaging operation may be selected from groups of unit color pixels UCP that correspond to the same color and are distributed in a given region.
  • a sensing signal processed through the binning operation is converted into digital data.
  • a high dynamic range (HDR) image may be generated by combining pieces of image data corresponding to the low illumination mode (LIM), the middle illumination mode (MIM), and the high illumination mode (HIM).
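  • The patent states only that the LIM, MIM, and HIM image data are combined; the sketch below assumes one common combination rule (use the longest exposure that did not saturate and rescale it by its effective-integration-time ratio), and all numeric values, EIT ratios, and the saturation threshold are assumptions rather than values from the description.

      # Illustrative HDR merge of the three per-mode samples of one color pixel.
      # The rule and all numbers are assumed; the patent does not specify them.
      def merge_hdr(sample_long, sample_mid, sample_short,
                    eit_long=16.0, eit_mid=4.0, eit_short=1.0, saturation=4095):
          for sample, eit in ((sample_long, eit_long),
                              (sample_mid, eit_mid),
                              (sample_short, eit_short)):
              if sample < saturation:                 # longest exposure that did not clip
                  return sample * (eit_long / eit)    # rescale to a common exposure scale
          return saturation * (eit_long / eit_short)  # everything clipped

      print(merge_hdr(4095, 2000, 600))   # long clipped -> use mid: 2000 * 4 = 8000.0
      print(merge_hdr(1500, 400, 100))    # long usable  -> 1500.0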
  • a way to generate the HDR image by using an image sensor of a unit color pixel (UCP) structure according to embodiments is described above.
  • In the unit color pixel (UCP) structure, sensing signals corresponding to the same illumination may be simultaneously output, and the simultaneously output sensing signals may be processed through the averaging operation, such as addition or subtraction.
  • With the unit color pixel (UCP) structure, the time taken for image sensing in the HDR mode is markedly decreased, and the HDR image may be obtained at a high frame rate.
  • FIGS. 6A, 6B, and 6C are flowcharts for describing a high illumination mode, a middle illumination mode, and a low illumination mode of FIG. 5 , respectively. Operation S 120 corresponding to the high-illumination mode (HIM) sensing operation will be more fully described with reference to FIGS. 6A and 5 .
  • the charge detection nodes FD 1 , FD 2 , and FD 3 of the unit pixels UP 1 , UP 2 , and UP 3 are reset to perform the high-illumination mode (HIM) sensing operation on the selected unit color pixel UCP.
  • the reset signal RG is set to the high level, and reset transistors, for example, the reset transistors RX 1 , RX 2 , and RX 3 of the unit pixels UP 1 , UP 2 , and UP 3 are turned on by the reset signal RG.
  • the charge transmission signals TG_L 1 , TG_L 2 , and TG_L 3 transition to the high level.
  • photoelectrons integrated by the photoelectric conversion elements PD 1 , PD 6 , and PD 8 are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the averaging operation may be performed on the high-illumination mode sensing signals output to the column lines CLi, CLj, and CLk.
  • the high-illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • a plurality of sub-pixels that perform a sensing operation in the high illumination mode may simultaneously output sensing signals.
  • When the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high-dynamic range (HDR) mode, and it is possible to improve a frame rate.
  • Operation S 130 corresponding to the middle-illumination mode (MIM) sensing operation will be more fully described with reference to FIGS. 6B and 5 .
  • the charge detection nodes FD 1 , FD 2 , and FD 3 of the unit pixels UP 1 , UP 2 , and UP 3 are reset to perform the middle-illumination mode (MIM) sensing operation on the selected unit color pixel UCP.
  • When the reset transistors RX 1 , RX 2 , and RX 3 are turned on in response to the reset signal RG transitioning to the high level, charges present at the charge detection nodes FD 1 , FD 2 , and FD 3 are discharged to the pixel power supply voltage (VPIX) terminal.
  • the charge transmission signals TG_M 1 , TG_M 2 , and TG_M 3 transition to the high level.
  • photoelectrons integrated by the photoelectric conversion elements PD 3 , PD 5 , and PD 7 are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the averaging operation may be performed on the middle-illumination mode sensing signals output to the column lines CLi, CLj, and CLk.
  • the middle-illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • a plurality of sub-pixels that perform a sensing operation in the middle illumination mode may simultaneously output sensing signals.
  • When the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high-dynamic range (HDR) mode, and it is possible to improve a frame rate.
  • Operation S 140 corresponding to the low-illumination mode (LIM) sensing operation will be more fully described with reference to FIGS. 6C and 5 .
  • the charge detection nodes FD 1 , FD 2 , and FD 3 of the unit pixels UP 1 , UP 2 , and UP 3 are reset to perform the low-illumination mode (LIM) sensing operation on the selected unit color pixel UCP.
  • When the reset transistors RX 1 , RX 2 , and RX 3 are turned on in response to the reset signal RG transitioning to the high level, charges present at the charge detection nodes FD 1 , FD 2 , and FD 3 are discharged to the pixel power supply voltage (VPIX) terminal.
  • voltages of the charge detection nodes FD 1 , FD 2 , and FD 3 may be reset to a level of the pixel power supply voltage VPIX.
  • the charge transmission signals TG_S 1 , TG_S 2 , and TG_S 3 transition to the high level.
  • photoelectrons integrated by the photoelectric conversion elements PD 2 , PD 4 , and PD 9 are accumulated at the charge detection nodes FD 1 , FD 2 , and FD 3 .
  • the averaging operation may be performed on the low illumination mode sensing signals output to the column lines CLi, CLj, and CLk.
  • the low illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • a plurality of sub-pixels that perform a sensing operation in the low illumination mode may simultaneously output sensing signals.
  • When the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high dynamic range (HDR) mode, and it is possible to improve a frame rate.
  • FIG. 7 illustrates another embodiment in which image signals sensed from a unit color pixel are processed.
  • a simultaneous sensing and readout operation of sub-pixels having the same effective integration time EIT may be performed on the unit color pixels UCP disposed in the same row.
  • the pixel array 110 may include a plurality of unit pixel groups (UPG) 112 and 114 including a plurality of unit color pixels 112 a to 112 d and 114 a to 114 d according to embodiments.
  • a unit pixel group UPG means a combination of unit color pixels UCP respectively corresponding to colors “R”, G 1 , G 2 , and “B” to be sensed.
  • Unit color pixels 112 a and 114 a may be selected from the unit pixel groups 112 and 114 .
  • a sensing operation of the high dynamic range (HDR) mode may be performed on the unit color pixels 112 a and 114 a.
  • the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 112 a and high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 114 a may be output to a binning circuit 135 .
  • the binning circuit 135 may perform the averaging operation on the high-illumination mode sensing signals of the sub-pixels L 1 , L 2 , and L 3 included in the unit color pixels 112 a and 114 a.
  • the high-illumination mode sensing signals output from the unit color pixels 112 a and 114 a are converted into digital image data by the analog-to-digital converter 130 after being processed by the binning circuit 135 .
  • 1×1 pixel data may be generated based on a sensing result of two unit color pixels 112 a and 114 a each having a 3×3 pixel size.
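  • A minimal sketch of the row-wise binning just described, assuming hypothetical sensed levels for the sub-pixels L 1 , L 2 , and L 3 of the two same-color unit color pixels 112 a and 114 a ; the actual binning circuit 135 operates in the analog domain, so this digital average is only an illustration of the averaging result.

      # Sketch of the row-wise binning of FIG. 7: the high-illumination signals
      # from sub-pixels L1/L2/L3 of two same-color unit color pixels (112a, 114a)
      # are averaged into a single value before analog-to-digital conversion.
      # All sample values are assumed.
      ucp_112a = {"L1": 0.82, "L2": 0.79, "L3": 0.81}   # assumed sensed levels
      ucp_114a = {"L1": 0.80, "L2": 0.84, "L3": 0.78}

      def bin_high_illumination(*unit_color_pixels):
          samples = [v for ucp in unit_color_pixels for v in ucp.values()]
          return sum(samples) / len(samples)

      print(bin_high_illumination(ucp_112a, ucp_114a))   # one value per two 3x3 UCPs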
  • the HDR sensing and binning operation that is performed in the unit of a plurality of unit color pixels 112 a and 114 a may be identically applied to the unit color pixels 112 b and 114 b .
  • an HDR sensing and binning operation that is performed in the unit of a plurality of unit color pixels 112 a and 114 a may be identically applied to the unit color pixels 112 c and 114 c and the unit color pixels 112 d and 114 d.
  • FIG. 8 illustrates another embodiment in which image signals sensed from a unit color pixel are processed.
  • a simultaneous sensing and readout operation of sub-pixels having the same effective integration time EIT may be performed on the unit color pixels UCP disposed in the same column.
  • the pixel array 110 may include a plurality of unit pixel groups 112 , 113 , 114 , and 115 including a plurality of unit color pixels 112 a to 112 d , 113 a to 113 d , 114 a to 114 d , and 115 a to 115 d according to embodiments.
  • Unit color pixels 112 a , 113 a , 114 a , and 115 a may be selected from the unit pixel groups 112 , 113 , 114 , and 115 for the high dynamic range (HDR) sensing mode.
  • the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 112 a and high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 113 a may be output to a binning circuit 136 .
  • the binning circuit 136 may perform the averaging operation on the high illumination mode sensing signals of the sub-pixels L 1 , L 2 , and L 3 included in the unit color pixels 112 a and 113 a.
  • the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 114 a and high-illumination mode sensing signals sensed from the sub-pixels L 1 , L 2 , and L 3 of the unit color pixel 115 a may be output to a binning circuit 137 .
  • the binning circuit 137 may perform the averaging operation on the high-illumination mode sensing signals of the sub-pixels L 1 , L 2 , and L 3 included in the unit color pixels 114 a and 115 a.
  • the high-illumination mode sensing signals output from the unit color pixels 112 a , 113 a , 114 a , and 115 a are converted into digital image data by the analog-to-digital converter 130 after being processed by the binning circuits 136 and 137 .
  • 1×1 pixel data may be generated based on a sensing result of two unit color pixels 112 a and 113 a each having a 3×3 pixel size and present in the same column.
  • the column-based HDR sensing and binning operation may be identically applied to the remaining unit color pixels.
  • FIG. 9 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1 .
  • the unit color pixel UCP may include two unit pixels UP 1 and UP 2 each including two sub-pixels. That is, the unit color pixel UCP may be implemented with 2×2 pixels constituting two unit pixels each having a 1×2 pixel size.
  • the one unit color pixel UCP includes the two unit pixels UP 1 and UP 2 .
  • Each of the unit pixels UP 1 and UP 2 includes two sub-pixels.
  • the unit pixel UP 1 may include sub-pixels L 1 and S 1 and one charge detection node FD 1 .
  • the sub-pixels L 1 and S 1 having different effective integration times EIT correspond to photoelectric conversion elements PD 1 and PD 2 .
  • the unit pixel UP 2 may include sub-pixels S 2 and L 2 and one charge detection node FD 2 .
  • the sub-pixels S 2 and L 2 correspond to photoelectric conversion elements PD 3 and PD 4 .
  • sub-pixels correspond to different effective integration times EIT.
  • the sub-pixel L 1 of the unit pixel UP 1 may have a relatively long effective integration time EIT for high-illumination mode sensing.
  • the sub-pixel S 1 of the unit pixel UP 1 has an effective integration time EIT shorter than the sub-pixel L 1 .
  • the unit pixel UP 2 may have substantially the same structure as the unit pixel UP 1 . That is, the sub-pixel S 2 placed in the first row of the unit pixel UP 2 may have a short effective integration time EIT.
  • the sub-pixel L 2 placed in the second row of the unit pixel UP 2 may have a long effective integration time EIT.
  • the unit color pixel UCP that performs the HDR mode sensing operation includes two unit pixels UP, in each of which a charge detection node is shared.
  • Each unit pixel UP may include two sub-pixels.
  • a unit color pixel may have a pixel structure of a 2×2 pixel size, in which two unit pixels each having a 1×2 pixel size are arranged.
  • unit pixels according to embodiments may output signals of sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
  • FIGS. 10A and 10B are circuit diagrams illustrating structures of unit pixels illustrated in FIG. 9 .
  • the unit pixel UP 1 of a 1×2 pixel size may include a plurality of photoelectric conversion elements PD 1 and PD 2 , a plurality of transmission transistors TX 1 and TX 2 , the reset transistor RX 1 , the selection transistor SX 1 , and the drive transistor DX 1 .
  • the plurality of photoelectric conversion elements PD 1 and PD 2 , the plurality of transmission transistors TX 1 and TX 2 , the reset transistor RX 1 , the selection transistor SX 1 , and the drive transistor DX 1 are substantially the same as those of FIG. 3A ; thus, additional description of their functions will be omitted to avoid redundancy.
  • the HDR mode sensing operation corresponding to two effective integration times EIT may be performed on the unit pixel UP 1 of the 1×2 pixel size. That is, with regard to the unit pixel UP 1 of the 1×2 pixel size, the HDR mode sensing operation is possible in two modes, i.e., the high illumination mode and the low illumination mode, by using the transmission signals TG_L 1 and TG_S 1 .
  • the unit pixel UP 2 of a 1×2 pixel size may include a plurality of photoelectric conversion elements PD 3 and PD 4 , a plurality of transmission transistors TX 3 and TX 4 , the reset transistor RX 2 , the selection transistor SX 2 , and the drive transistor DX 2 .
  • the plurality of photoelectric conversion elements PD 3 and PD 4 , the plurality of transmission transistors TX 3 and TX 4 , the reset transistor RX 2 , the selection transistor SX 2 , and the drive transistor DX 2 are substantially the same as those of FIG. 3B ; thus, additional description of their functions will be omitted to avoid redundancy.
  • the unit color pixel UCP of the 2×2 pixel size includes unit pixels UP of the 1×2 pixel size capable of outputting sensing signals independently of each other. Accordingly, the unit color pixel UCP may perform the HDR mode sensing operation in two modes, i.e., the high illumination mode and the low illumination mode. In addition, it is possible to perform the averaging operation on high-illumination mode sensing signals or low-illumination mode sensing signals that are simultaneously output from respective unit pixels. Accordingly, a frame rate may be improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
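  • A short sketch of the two-mode readout of this 2×2 variant, assuming the sub-pixel naming above (L 1 /S 1 in UP 1 and S 2 /L 2 in UP 2 ) and hypothetical sensed levels; it only illustrates that the two unit pixels output one matching-EIT sub-pixel each, which are then averaged per mode.

      # Sketch of the two-mode readout of the 2x2 unit color pixel: in each mode
      # one sub-pixel per unit pixel (L1/L2 or S1/S2) is read out on CLi/CLj at
      # the same time and the two signals are averaged.  Sample values are assumed.
      two_by_two_ucp = {
          "UP1": {"high": "L1", "low": "S1"},   # shares FD1, outputs on CLi
          "UP2": {"high": "L2", "low": "S2"},   # shares FD2, outputs on CLj
      }
      sensed = {"L1": 0.71, "L2": 0.69, "S1": 0.12, "S2": 0.14}   # assumed levels

      for mode in ("high", "low"):
          signals = [sensed[two_by_two_ucp[up][mode]] for up in two_by_two_ucp]
          print(mode, sum(signals) / len(signals))   # averaged per-mode signal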
  • FIG. 11 is a timing diagram illustrating a control method for performing an HDR mode sensing operation on a unit color pixel having a 2×2 pixel size illustrated in FIGS. 10A and 10B .
  • sensing signals corresponding to the same effective integration time EIT may be simultaneously accumulated at the charge detection nodes FD 1 and FD 2 .
  • From the time T 0 to the time T 6 , charges integrated by the photoelectric conversion elements PD 1 and PD 4 for high-illumination sensing are sensed.
  • charges integrated by the photoelectric conversion elements PD 2 and PD 3 for low-illumination sensing are sensed.
  • a control operation of the unit color pixel UCP for high-illumination sensing may be performed from the time T 0 to the time T 6 .
  • the reset signal RG is maintained at the high level from the time T 0 to the time T 1 for the purpose of resetting charge detection nodes FD 1 and FD 2 of the unit pixels UP 1 and UP 2 .
  • the reset transistors RX 1 and RX 2 are turned on, and the charge detection nodes FD 1 and FD 2 are reset.
  • the reset signal RG transitions to the low level.
  • the reset transistors RX 1 and RX 2 may be turned off in response to the reset signal RG transitioning to the low level, and the charge detection nodes FD 1 and FD 2 may be set to a state capable of accumulating charges.
  • the selection transistors SX 1 and SX 2 are turned on in response to the selection signal SEL transitioning to the high level, and the output of sensed data is possible.
  • the transmission signals TG_L 1 and TG_L 2 transition to the high level for the purpose of turning on the transmission transistors TX 1 and TX 4 of the sub-pixels L 1 and L 2 .
  • the remaining charge transmission signals TG_S 1 and TG_S 2 may be maintained at the low level.
  • photoelectrons integrated by the photoelectric conversion elements PD 1 and PD 4 are transmitted to the charge detection nodes FD 1 and FD 2 . That is, the photoelectrons are accumulated at the charge detection nodes FD 1 and FD 2 .
  • the drive transistor DX 1 and DX 2 are respectively connected to the charge detection nodes FD 1 and FD 2 , in proportion to the amount of charges accumulated at the charge detection nodes FD 1 and FD 2 .
  • the drive transistor DX 1 of the unit pixel UP 1 amplifies a potential change of the charge detection node FD 1 and outputs the amplified signal to the column line CLi through the selection transistor SX 1 .
  • the drive transistor DX 2 of the unit pixel UP 2 amplifies a potential change of the charge detection node FD 2 and outputs the amplified signal to the column line CLj through the selection transistor SX 2 .
  • The selection transistors SX1 and SX2 are turned off. In this case, sensing signals of the unit pixels UP1 and UP2 are blocked from being output.
  • The reset transistors RX1 and RX2 are turned on.
  • The charge detection nodes FD1 and FD2 of the unit pixels UP1 and UP2 are reset to the pixel power supply voltage VPIX.
  • A control operation for low-illumination sensing is performed from the time T6 to the time T9.
  • The transitions of the selection signal SEL and the reset signal RG from the time T6 to the time T9 are the same as those from the time T0 to the time T6, and thus, additional description will be omitted to avoid redundancy.
  • Photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD2 and PD3 provided for the low-illumination sensing operation.
  • The transmission signals TG_S1 and TG_S2 transition to the high level for the purpose of turning on the transmission transistors TX2 and TX3 of the sub-pixels S1 and S2 provided for the low-illumination mode sensing operation.
  • During a high period of the charge transmission signals TG_S1 and TG_S2, photoelectrons integrated by the photoelectric conversion elements PD2 and PD3 are transmitted to the charge detection nodes FD1 and FD2. That is, the photoelectrons are accumulated at the charge detection nodes FD1 and FD2.
  • The drive transistor DX1 of the unit pixel UP1 amplifies a potential change of the charge detection node FD1 and outputs the amplified signal to the column line CLi through the selection transistor SX1, and the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2.
  • A way to sense the unit color pixel UCP of the 2×2 pixel size in the high dynamic range (HDR) mode is described above; signals sensed from a plurality of sub-pixels in the high illumination mode or the low illumination mode are able to be output simultaneously.
  • The sensing signals output from the unit pixels UP1 and UP2 make it possible to perform the averaging operation. Accordingly, the frame rate of the image sensor 100 according to an embodiment may be improved in the high dynamic range (HDR) mode.
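  • As an informal illustration of the readout just described, the following Python sketch models the two-phase operation of FIG. 11 for one 2×2 unit color pixel and the averaging of the two simultaneously output signals per phase. It is a behavioral sketch only; the dictionary layout, charge values, and function name are illustrative assumptions, not part of the embodiments.

```python
# Behavioral sketch (illustrative, not part of the patent): a 2x2 unit color
# pixel as two 1x2 unit pixels, each with one long-EIT (L) and one short-EIT (S)
# sub-pixel. Phase 1 reads the two L sub-pixels together, phase 2 the two S
# sub-pixels, and each pair is merged by the averaging (binning) operation.

def read_unit_color_pixel_2x2(up1, up2):
    """up1/up2: integrated charge per sub-pixel, e.g. {'L': 910.0, 'S': 55.0}."""
    # Phase 1 (T0..T6): TG_L1/TG_L2 high -> L1 and L2 transferred to FD1/FD2, output together.
    high_mode = (up1['L'] + up2['L']) / 2      # averaging of the two high-illumination signals
    # Phase 2 (T6..T9): TG_S1/TG_S2 high -> S1 and S2 transferred and output together.
    low_mode = (up1['S'] + up2['S']) / 2       # averaging of the two low-illumination signals
    return {'high_illumination': high_mode, 'low_illumination': low_mode}

print(read_unit_color_pixel_2x2({'L': 910.0, 'S': 55.0}, {'L': 930.0, 'S': 57.0}))
```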
  • FIG. 12 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1.
  • The unit color pixel UCP may include four unit pixels UP1, UP2, UP3, and UP4 each including four sub-pixels. That is, the unit color pixel UCP may be implemented with 4×4 pixels constituting four unit pixels each having a 1×4 pixel size.
  • One unit color pixel UCP includes four unit pixels UP1, UP2, UP3, and UP4.
  • Each of the unit pixels UP1, UP2, UP3, and UP4 includes four sub-pixels.
  • The unit pixel UP1 may include sub-pixels L1, M1, E1, and S1 and one charge detection node FD1.
  • The sub-pixels L1, M1, E1, and S1 having different effective integration times EIT correspond to photoelectric conversion elements PD1, PD2, PD3, and PD4.
  • The unit pixel UP2 may include sub-pixels L2, M2, E2, and S2 and one charge detection node FD2.
  • The unit pixels UP3 and UP4 have the same structures as the unit pixels UP1 and UP2 except for different sub-pixel arrangements.
  • The unit color pixel UCP that performs the HDR mode sensing operation includes four unit pixels UP, in each of which a charge detection node is shared. Each of the unit pixels UP may include four sub-pixels. Accordingly, the unit color pixel UCP may have a pixel structure of a 4×4 pixel size, in which four unit pixels each having a 1×4 pixel size are arranged. In this structure, unit pixels according to embodiments may output signals of four sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
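  • For readers who find a tabulation helpful, the short sketch below writes the 4×4 case out as a lookup table and groups the sub-pixels that are read out together for one effective integration time. The per-position ordering of labels within each unit pixel is written in one fixed order purely for illustration (the embodiments use different arrangements per unit pixel), and the helper name is an assumption.

```python
# Illustrative model of the 4x4 unit color pixel: four 1x4 unit pixels UP1..UP4,
# each holding one sub-pixel per effective integration time class L, M, E, and S.

UCP_4x4 = {
    'UP1': ['L1', 'M1', 'E1', 'S1'],   # shares charge detection node FD1
    'UP2': ['L2', 'M2', 'E2', 'S2'],   # shares FD2
    'UP3': ['L3', 'M3', 'E3', 'S3'],   # shares FD3
    'UP4': ['L4', 'M4', 'E4', 'S4'],   # shares FD4
}

def same_eit_group(ucp, eit):
    """Sub-pixels transferred and output together for one effective integration time."""
    return [sp for subs in ucp.values() for sp in subs if sp.startswith(eit)]

print(same_eit_group(UCP_4x4, 'L'))    # ['L1', 'L2', 'L3', 'L4'] -> averaged after readout
```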
  • FIG. 13 is a circuit diagram illustrating a structure of a unit pixel illustrated in FIG. 12.
  • The unit pixel UP1 of a 1×4 pixel size may include a plurality of photoelectric conversion elements PD1, PD2, PD3, and PD4, a plurality of transmission transistors TX1, TX2, TX3, and TX4, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1.
  • The plurality of photoelectric conversion elements PD1, PD2, PD3, and PD4, the plurality of transmission transistors TX1, TX2, TX3, and TX4, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1 are substantially the same as those of FIG. 3A. Thus, additional description of their functions will be omitted to avoid redundancy.
  • The HDR mode sensing operation corresponding to four effective integration times EIT may be performed on the four unit pixels UP each having the 1×4 pixel size. That is, with regard to the unit pixel UP1 of the 1×4 pixel size, the HDR mode sensing operation is possible in four illumination ranges by using the transmission signals TG_L1, TG_S1, TG_M1, and TG_E1.
  • The unit color pixel UCP of the 4×4 pixel size includes unit pixels UP of the 1×4 pixel size capable of outputting sensing signals independently of each other. Accordingly, sensing signals corresponding to the same illumination may be generated from the unit pixels UP of the unit color pixel UCP, and the averaging operation may be performed on the generated sensing signals. This means that a frame rate is improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
  • FIG. 14 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1.
  • The unit color pixel UCP may include five unit pixels UP1, UP2, UP3, UP4, and UP5 each including five sub-pixels. That is, the unit color pixel UCP may be implemented with 5×5 pixels constituting five unit pixels each having a 1×5 pixel size.
  • One unit color pixel UCP includes five unit pixels UP1, UP2, UP3, UP4, and UP5.
  • Each of the unit pixels UP1, UP2, UP3, UP4, and UP5 includes five sub-pixels.
  • The unit pixel UP1 may include sub-pixels L1, M1, E1, S1, and A1 and one charge detection node FD1.
  • The sub-pixels L1, M1, E1, S1, and A1 having different effective integration times EIT correspond to photoelectric conversion elements PD1, PD2, PD3, PD4, and PD5.
  • The unit pixel UP2 may include sub-pixels L2, M2, E2, S2, and A2 and one charge detection node FD2.
  • The unit pixels UP3, UP4, and UP5 including charge detection nodes FD3, FD4, and FD5 have the same structures as the unit pixels UP1 and UP2 except for different sub-pixel arrangements.
  • The unit color pixel UCP that performs the HDR mode sensing on one color includes five unit pixels UP, in each of which a charge detection node is shared. Each of the unit pixels UP may include five sub-pixels. Accordingly, the unit color pixel UCP may have a pixel structure of a 5×5 pixel size, in which five unit pixels each having a 1×5 pixel size are arranged. In this structure, unit pixels may output signals of five sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
  • FIG. 15 is a circuit diagram illustrating a structure of a unit pixel of FIG. 14.
  • The unit pixel UP1 of a 1×5 pixel size may include a plurality of photoelectric conversion elements PD1, PD2, PD3, PD4, and PD5, a plurality of transmission transistors TX1, TX2, TX3, TX4, and TX5, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1.
  • The plurality of photoelectric conversion elements PD1, PD2, PD3, PD4, and PD5, the plurality of transmission transistors TX1, TX2, TX3, TX4, and TX5, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1 are substantially the same as those of FIG. 3A. Thus, additional description of their functions will be omitted to avoid redundancy.
  • The HDR mode sensing operation corresponding to five effective integration times EIT may be performed on the five unit pixels UP each having the 1×5 pixel size. That is, with regard to the unit pixel UP1 of the 1×5 pixel size, the HDR mode sensing operation is possible in five illumination ranges by using the transmission signals TG_L1, TG_S1, TG_M1, TG_E1, and TG_A1.
  • The unit color pixel UCP of the 5×5 pixel size includes unit pixels UP of the 1×5 pixel size capable of outputting sensing signals independently of each other. Accordingly, sensing signals corresponding to the same illumination may be generated from the unit pixels UP of the unit color pixel UCP, and the averaging operation may be performed on the generated sensing signals. This means that a frame rate is improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
  • Unit color pixels having 2×2, 3×3, 4×4, and 5×5 pixel sizes that provide the advantages according to embodiments are described above. If necessary, the unit color pixel UCP may be implemented to have a pixel size in which the number of rows and the number of columns are different, such as 2×3 or 4×3, or the unit color pixel UCP may be implemented to have a pixel size larger than the 5×5 pixel size.
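  • The variations enumerated above (2×2 through 5×5, and non-square sizes such as 2×3 or 4×3) can be summarized by a single parameterized sketch. The builder below is a hedged generalization: it assumes one unit pixel per row, one sub-pixel per effective integration time, and a per-row rotation of the EIT labels like the 3×3 example; none of these choices is mandated by the embodiments.

```python
# Hedged generalization of the unit color pixel sizes discussed above: R unit
# pixels of 1 x C size, each sharing one charge detection node and covering C
# effective integration times, with the EIT order rotated per unit pixel.

def build_unit_color_pixel(rows, cols, eit_labels):
    assert cols == len(eit_labels), 'one EIT label per sub-pixel of a unit pixel'
    ucp = {}
    for r in range(rows):
        order = eit_labels[r % cols:] + eit_labels[:r % cols]   # rotate labels per row
        ucp[f'UP{r + 1}'] = [f'{label}{r + 1}' for label in order]
    return ucp

print(build_unit_color_pixel(2, 3, ['L', 'S', 'M']))   # e.g. one possible 2x3 unit color pixel
```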
  • Embodiments may provide an image sensor capable of performing a sensing operation under various illumination conditions without decreasing a frame rate. That is, an image sensor according to an embodiment may provide a high dynamic range (HDR) image while minimizing a decrease in a frame rate or a decrease in a resolution.

Abstract

An image sensor includes a first unit pixel including a first sub-pixel and a second sub-pixel, a second unit pixel including a third sub-pixel and a fourth sub-pixel, a timing controller configured to apply a first effective integration time to the first sub-pixel and the fourth sub-pixel, such that a first sensing signal and a fourth sensing signal are generated from the first sub-pixel and the fourth sub-pixel, respectively, and to apply a second effective integration time shorter than the first effective integration time to the second sub-pixel and the third sub-pixel, such that a second sensing signal and a third sensing signal are generated from the second sub-pixel and the third sub-pixel, respectively, and an analog-to-digital converter configured to perform an averaging operation on the first sensing signal and the fourth sensing signal or on the second sensing signal and the third sensing signal.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 16/745,508 filed Jan. 17, 2020, which is incorporated by reference herein in its entirety.
  • Korean Patent Application No. 10-2019-0078901, filed on Jul. 1, 2019, in the Korean Intellectual Property Office, and entitled: “Image Sensor and Driving Method Thereof,” is incorporated by reference herein in its entirety.
  • BACKGROUND
  • 1. Field
  • Embodiments relate to an image sensor, and more particularly, to an image sensor providing a high dynamic range mode and a driving method thereof.
  • 2. Description of the Related Art
  • One of important criteria for determining the quality of an image sensor is a dynamic range. In general, the dynamic range indicates a maximum range capable of processing an input signal without distortion of the input signal. As the dynamic range becomes wider, an image obtained by the image sensor may become clearer within a wide illumination range.
  • SUMMARY
  • According to an exemplary embodiment, an image sensor for sensing an image signal of a plurality of illumination ranges includes a first unit pixel that includes a first sub-pixel and a second sub-pixel, a second unit pixel that includes a third sub-pixel and a fourth sub-pixel, a timing controller that applies a first effective integration time to the first sub-pixel and the fourth sub-pixel such that a first sensing signal and a fourth sensing signal are generated from the first sub-pixel and the fourth sub-pixel and applies a second effective integration time shorter than the first effective integration time to the second sub-pixel and the third sub-pixel such that a second sensing signal and a third sensing signal are generated from the second sub-pixel and the third sub-pixel, and an analog-to-digital converter that performs an averaging operation on the first sensing signal and the fourth sensing signal or on the second sensing signal and the third sensing signal.
  • According to an exemplary embodiment, a driving method of an image sensor including first to fourth sub-pixels constituting a unit color pixel includes sampling a first sensing signal and a second sensing signal from the first sub-pixel and the second sub-pixel respectively by applying a first effective integration time to the first sub-pixel and the second sub-pixel, sampling a third sensing signal and a fourth sensing signal from the third sub-pixel and the fourth sub-pixel respectively by applying a second effective integration time shorter than the first effective integration time to the third sub-pixel and the fourth sub-pixel, and performing an averaging operation on the first sensing signal and the second sensing signal and performing the averaging operation on the third sensing signal and the fourth sensing signal. The first sub-pixel and the fourth sub-pixel share a first charge detection node, and the second sub-pixel and the third sub-pixel share a second charge detection node.
  • According to an exemplary embodiment, an image sensor for sensing an image signal of a plurality of illumination ranges includes a first unit pixel that includes a first sub-pixel, a second sub-pixel, and a third sub-pixel sharing a first charge detection node, a second unit pixel that includes a fourth sub-pixel, a fifth sub-pixel, and a sixth sub-pixel sharing a second charge detection node, and a third unit pixel that includes a seventh sub-pixel, an eighth sub-pixel, and a ninth sub-pixel sharing a third charge detection node, and the first unit pixel, the second unit pixel, and the third unit pixel output sensing signals individually by using the first charge detection node, the second charge detection node, and the third charge detection node.
  • BRIEF DESCRIPTION OF THE FIGURES
  • Features will become apparent to those of skill in the art by describing in detail exemplary embodiments with reference to the attached drawings, in which:
  • FIG. 1 illustrates an image sensor according to an embodiment.
  • FIG. 2 illustrates a diagram of a unit color pixel in FIG. 1 according to an embodiment.
  • FIGS. 3A to 3C illustrate circuit diagrams of structures of unit pixels.
  • FIG. 4 illustrates a timing diagram of a control method for implementing a high dynamic range (HDR) of a unit color pixel according to an embodiment.
  • FIG. 5 illustrates a flowchart of an HDR sensing method of an image sensor according to an embodiment.
  • FIGS. 6A, 6B, and 6C illustrate flowcharts describing a high illumination mode, a middle illumination mode, and a low illumination mode of FIG. 5, respectively.
  • FIG. 7 illustrates image signals sensed from a unit color pixel according to an embodiment.
  • FIG. 8 illustrates image signals sensed from a unit color pixel according to an embodiment.
  • FIG. 9 illustrates a unit color pixel in FIG. 1 according to another embodiment.
  • FIGS. 10A and 10B illustrate circuit diagrams of structures of unit pixels illustrated in FIG. 9.
  • FIG. 11 illustrates a timing diagram of a control method for performing an HDR sensing operation on a unit color pixel having a 2×2 pixel size in FIGS. 10A and 10B.
  • FIG. 12 illustrates a diagram of a unit color pixel in FIG. 1 according to another embodiment.
  • FIG. 13 illustrates a circuit diagram of a structure of a unit pixel in FIG. 12.
  • FIG. 14 illustrates a diagram of a unit color pixel illustrated in FIG. 1 according to another embodiment.
  • FIG. 15 illustrates a circuit diagram of a structure of a unit pixel in FIG. 14.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an image sensor according to an embodiment. Referring to FIG. 1, an image sensor 100 may include a pixel array 110, a row decoder 120, an analog-to-digital converter (ADC) 130, an output buffer 140, and a timing controller 150.
  • The pixel array 110 may include a plurality of pixel sensors arranged two-dimensionally. Each of the pixel sensors converts a light signal into an electrical signal. The pixel array 110 may be controlled by signals, which are provided from the row decoder 120 for the purpose of driving the pixel sensors, e.g., a selection signal SEL, a reset signal RG, and a transmission signal TG. Also, electrical signals that are sensed by the pixel sensors in response to the signals for driving pixel sensors are provided to the analog-to-digital converter 130 through a plurality of column lines CLm.
  • The plurality of pixel sensors included in the pixel array 110 are divided into unit pixel groups UPG each sensing a blue (B) color, a green (G1/G2) color, and a red (R) color. The unit pixel group UPG may include unit color pixels UCP for sensing the colors, respectively. Each of the unit color pixels UCP may include a color filter capable of selectively transmitting a corresponding color. For example, as illustrated in FIG. 1, the color filter includes filters to sense the red, green, and blue colors. In another example, the color filter may include filters for sensing yellow, cyan, magenta, and green colors. In yet another example, the color filter may include filters for sensing red, green, blue, and white colors.
  • Each of the unit color pixels UCP includes a plurality of unit pixels UP. One unit pixel UP includes a plurality of sub-pixels SP. One unit pixel UP includes a plurality of photoelectric conversion elements sharing one charge detection node (e.g., a floating diffusion region). One photoelectric conversion element may correspond to one sub-pixel SP. According to a unit color pixel (UCP) structure, in a high dynamic range mode (hereinafter interchangeably referred to as an HDR mode), a plurality of sensing signals corresponding to the same effective integration time EIT may be obtained from one unit color pixel UCP. That is, to implement the HDR mode, the unit pixels UP constituting the unit color pixel UCP may output sensing signals at the same time. In this case, a speed at which sensing data are output may be improved in the HDR mode of the unit color pixel UCP, thus increasing a frame rate. A configuration and an operation of the unit color pixel UCP will be more fully described with reference to drawings below.
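  • To make the hierarchy above concrete, the following sketch models a unit color pixel as unit pixels that each own one shared charge detection node and several sub-pixels, and shows all unit pixels transferring and outputting their same-EIT sub-pixel at once. The class names, fields, and numeric charges are illustrative assumptions; they are not the claimed structure itself.

```python
# Minimal data model (illustrative) of the UCP / UP / SP hierarchy described above.
from dataclasses import dataclass
from typing import List

@dataclass
class SubPixel:
    name: str            # e.g. 'L1'
    eit: str             # effective integration time class: 'L', 'M', or 'S'
    charge: float = 0.0  # photoelectrons integrated by the photoelectric conversion element

@dataclass
class UnitPixel:
    sub_pixels: List[SubPixel]
    fd_charge: float = 0.0   # the single shared charge detection node (floating diffusion)

    def transfer(self, eit):
        """Turn on only the transmission transistor of the sub-pixel with this EIT class."""
        for sp in self.sub_pixels:
            if sp.eit == eit:
                self.fd_charge += sp.charge
                sp.charge = 0.0

@dataclass
class UnitColorPixel:
    unit_pixels: List[UnitPixel]

    def readout(self, eit):
        """All unit pixels reset, transfer, and output the same-EIT sub-pixel at the same time."""
        signals = []
        for up in self.unit_pixels:
            up.fd_charge = 0.0    # reset of the charge detection node
            up.transfer(eit)
            signals.append(up.fd_charge)
        return signals

ucp = UnitColorPixel([
    UnitPixel([SubPixel('L1', 'L', 8.0), SubPixel('S1', 'S', 0.5), SubPixel('M1', 'M', 2.0)]),
    UnitPixel([SubPixel('S2', 'S', 0.6), SubPixel('M2', 'M', 2.1), SubPixel('L2', 'L', 7.9)]),
])
print(ucp.readout('L'))           # both long-EIT signals are available simultaneously
```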
  • The row decoder 120 may select one of rows of the pixel array 110 under control of the timing controller 150. The row decoder 120 generates the selection signal SEL in response to a control signal TC1 from the timing controller 150 for the purpose of selecting one or more of a plurality of rows. The row decoder 120 may sequentially activate (or enable) the reset signal RG and the transmission signal TG with regard to pixels corresponding to the selected row. In this case, sensing signals for each illumination, which are generated from the unit color pixels UCP of the selected row, are sequentially transmitted to the analog-to-digital converter 130.
  • The analog-to-digital converter 130 converts the sensing signals generated from the unit color pixels UCP into digital signals in response to a control signal TC2 from the timing controller 150. For example, the analog-to-digital converter 130 may perform an averaging operation on sensing signals of a certain illumination, which are generated from one or more unit color pixels UCP. For example, the analog-to-digital converter 130 may perform an analog binning operation. The analog-to-digital converter 130 may sample HDR sensing signals in a correlated double sampling manner and may then convert the sampled HDR sensing signals into digital signals. To this end, a correlated double sampler (CDS) may be further included in front of the analog-to-digital converter 130.
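  • As a rough illustration of this column path, the sketch below subtracts a reset sample from a signal sample (correlated double sampling), averages two same-illumination samples, and quantizes the result. The voltage levels, full-scale value, and bit depth are assumed for the example only.

```python
# Illustrative column-path sketch: correlated double sampling, averaging (binning),
# then analog-to-digital conversion of the merged sample.

def correlated_double_sample(reset_level, signal_level):
    # The pixel output voltage drops as charge accumulates at the charge detection node,
    # so the difference (reset - signal) is proportional to the sensed light.
    return reset_level - signal_level

def average_and_convert(samples, full_scale=1.0, bits=10):
    avg = sum(samples) / len(samples)            # averaging operation on same-EIT signals
    avg = min(max(avg / full_scale, 0.0), 1.0)   # clamp to the converter's input range
    return round(avg * (2 ** bits - 1))          # digital output code

cds = [correlated_double_sample(0.95, 0.40), correlated_double_sample(0.94, 0.41)]
print(average_and_convert(cds))                  # one digital code for the binned pair
```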
  • The output buffer 140 may latch image data provided from the analog-to-digital converter 130 in the unit of column. The output buffer 140 may temporarily store image data output from the analog-to-digital converter 130 in response to a control signal TC3 from the timing controller 150 and may then output the latched image data sequentially by using a column decoder.
  • The timing controller 150 controls the pixel array 110, the row decoder 120, the analog-to-digital converter 130, the output buffer 140, etc. The timing controller 150 may supply control signals, such as a clock signal and a timing control signal, to the pixel array 110, the row decoder 120, the analog-to-digital converter 130, the output buffer 140, etc. The timing controller 150 may include a logic control circuit, a phase locked loop (PLL) circuit, a timing control circuit, a communication interface circuit, etc.
  • The configuration of the image sensor 100 according to an embodiment is briefly described above. In particular, as the unit color pixels UCP constituting the pixel array 110 are able to simultaneously output a plurality of sensing signals corresponding to the same illumination range, the averaging operation is possible. According to the above description, the image sensor 100 is able to perform the averaging operation on sensing signals at a high speed, thus improving a binning speed. According to an embodiment, it is possible to implement the image sensor 100 that provides a high frame rate in the HDR mode.
  • In another embodiment, because the unit color pixels UCP constituting the pixel array 110 simultaneously output a plurality of sensing signals corresponding to the same illumination range, it is possible to implement the HDR mode of a high resolution in the case of skipping the binning or averaging operation.
  • FIG. 2 is a diagram illustrating the unit color pixel UCP illustrated in FIG. 1. An example where one unit color pixel UCP includes a plurality of unit pixels UP and each unit pixel UP includes three sub-pixels SP will be described with reference to FIG. 2.
  • The unit color pixel UCP includes three unit pixels UP. The unit pixel UP may include three photoelectric conversion elements and one floating diffusion region FD. Here, one unit pixel UP includes three sub-pixels having different effective integration times EIT. For example, a sub-pixel L1 of a unit pixel UP1 is a sub-pixel having the longest effective integration time EIT from among three sub-pixels thereof. A sub-pixel S1 of the unit pixel UP1 is a sub-pixel having the shortest effective integration time EIT from among the three sub-pixels. A sub-pixel M1 of the unit pixel UP1 is a sub-pixel having a middle effective integration time EIT from among the three sub-pixels.
  • A unit pixel UP2 may have substantially the same structure as the unit pixel UP1, but may be different from the unit pixel UP1 in terms of the arrangement of sub-pixels and the order of allocating effective integration times. That is, a sub-pixel S2 in the first row of the unit pixel UP2 may have the shortest effective integration time EIT from among three sub-pixels thereof. A sub-pixel M2 in the second row of the unit pixel UP2 may have the middle effective integration time EIT from among the three sub-pixels. A sub-pixel L2 in the third row may have the longest effective integration time EIT of effective integration times of the three sub-pixels of the unit pixel UP2.
  • A unit pixel UP3 may have substantially the same structure as the unit pixels UP1 and UP2, but may be different from the unit pixels UP1 and UP2 in terms of the arrangement of sub-pixels and the order of allocating effective integration times. That is, a sub-pixel M3 placed in the first row of the unit pixel UP3 may have the middle effective integration time EIT from among three sub-pixels thereof. A sub-pixel L3 placed in the second row of the unit pixel UP3 may have the longest effective integration time EIT of effective integration times of the three sub-pixels. A sub-pixel S3 corresponding to the third row may have the shortest effective integration time EIT of effective integration times of the three sub-pixels of the unit pixel UP3.
  • The unit color pixel UCP that performs the HDR mode sensing operation may include three unit pixels UP sharing a floating diffusion region. Each unit pixel UP may include three sub-pixels. Accordingly, a unit color pixel may have a pixel structure in which 3×3 pixels constituting three 1×3 unit pixels are arranged. Unit pixels may simultaneously output signals of sub-pixels having the same effective integration time EIT in the high dynamic range (HDR) sensing operation. Because the averaging operation can be performed on sensing signals output at the same time, high-speed binning and analog-to-digital conversion are possible.
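  • Written out as a table, the 3×3 arrangement described above looks like the sketch below. The row order of the unit pixel UP1 is inferred from the rotation used by UP2 and UP3 and is only illustrative; the grouping of photoelectric conversion elements per unit pixel follows FIGS. 3A to 3C.

```python
# The 3x3 unit color pixel, one row of the table per unit pixel (row order of UP1 inferred).
EIT_MAP_3x3 = [
    ['L1', 'S1', 'M1'],   # UP1: shares FD1 (photoelectric conversion elements PD1, PD2, PD3)
    ['S2', 'M2', 'L2'],   # UP2: shares FD2 (PD4, PD5, PD6)
    ['M3', 'L3', 'S3'],   # UP3: shares FD3 (PD7, PD8, PD9)
]

# Sub-pixels that are sensed together in the HDR operation, grouped by EIT class.
groups = {eit: [sp for row in EIT_MAP_3x3 for sp in row if sp[0] == eit]
          for eit in ('L', 'M', 'S')}
print(groups)   # {'L': ['L1', 'L2', 'L3'], 'M': [...], 'S': [...]}
```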
  • FIGS. 3A to 3C are circuit diagrams illustrating structures of unit pixels according to embodiments. Referring to FIG. 3A, the unit pixel UP1 may include a plurality of photoelectric conversion elements PD1, PD2, and PD3, a plurality of transmission transistors TX1, TX2, and TX3, a reset transistor RX1, a selection transistor SX1, and a drive transistor DX1. The unit pixel UP1 may further include a conversion gain transistor (CGX) and a capacitor (CAP) for implementing a conversion gain changing circuit.
  • In detail, the photoelectric conversion elements PD1, PD2, and PD3 may be photosensitive elements that generate and integrate charges depending on the amount of incident light or the intensity of the incident light. Each of the photoelectric conversion elements PD1, PD2, and PD3 may be a photo diode, a photo transistor, a photo gate, a pinned photo diode (PPD), or a combination thereof.
  • The transmission transistors TX1, TX2, and TX3 transmit charges integrated in the photoelectric conversion elements PD1, PD2, and PD3 connected thereto to a first charge detection node FD1 (i.e., a floating diffusion region). The transmission transistors TX1, TX2, and TX3 are controlled by charge transmission signals TG_L1, TG_S1, and TG_M1, respectively.
  • The transmitted photoelectrons may be accumulated at the first charge detection node FD1 having a capacity provided physically. The drive transistor DX1 may be controlled depending on the amount of photoelectrons accumulated at the first charge detection node FD1.
  • The reset transistor RX1 may reset charges accumulated at the first charge detection node FD1. In detail, a drain terminal of the reset transistor RX1 is connected to the first charge detection node FD1, and a source terminal thereof is connected to a pixel power supply voltage VPIX. When the reset transistor RX1 is turned on, the pixel power supply voltage VPIX connected to the source electrode of the reset transistor RX1 is supplied to the first charge detection node FD1. Accordingly, charges accumulated at the first charge detection node FD1 may be discharged when the reset transistor RX1 is turned on, and thus, the first charge detection node FD1 may be reset.
  • The drive transistor DX1 may be a source follower buffer amplifier that generates a source-drain current in proportion to the amount of charges of the first charge detection node FD1, which are input to a gate terminal of the drive transistor DX1. The drive transistor DX1 amplifies a potential change of the first charge detection node FD1 and outputs the amplified signal to a column line CLi through the selection transistor SX1. A source terminal of the drive transistor DX1 may be connected to the pixel power supply voltage VPIX, and a drain terminal of the drive transistor DX1 may be connected to a source terminal of the selection transistor SX1.
  • The selection transistor SX1 may select the unit pixels UP1 to be read in the unit of row. When the selection transistor SX1 is turned on by the selection signal SEL provided from the row decoder 120, an electrical signal output from the drain terminal of the drive transistor DX1 may be provided to the column line CLi through the selection transistor SX1.
  • A circuit structure of the unit pixel UP1 of a 1×3 pixel size for constituting the unit color pixel UCP of a 3×3 pixel size is described above. The unit pixel UP1 may accumulate charges by using one charge detection node (or a floating diffusion region) marked by “FD1”.
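  • The sketch below captures the same signal chain behaviorally for a single 1×3 unit pixel: integration in the photodiodes, transfer of one photodiode's charge to the shared node FD1, and a source-follower style readout onto the column line. The gain value, flux numbers, and class name are assumptions made only for this example.

```python
# Behavioral sketch (illustrative) of the 1x3 unit pixel of FIG. 3A.
class UnitPixel1x3:
    def __init__(self):
        self.pd = {'TG_L1': 0.0, 'TG_S1': 0.0, 'TG_M1': 0.0}   # charge per photodiode, keyed by its TG
        self.fd1 = 0.0                                          # shared charge detection node

    def integrate(self, flux, eits):
        for tg, eit in eits.items():
            self.pd[tg] += flux * eit          # integrated charge ~ light flux x integration time

    def reset(self):
        self.fd1 = 0.0                         # RX1 on: FD1 reset toward VPIX (charge cleared)

    def transfer(self, tg):
        self.fd1 += self.pd[tg]                # that TX on: photoelectrons move to FD1
        self.pd[tg] = 0.0

    def readout(self, selected=True, gain=0.8):
        return gain * self.fd1 if selected else None   # DX1 source follower via SX1 to column line CLi

up1 = UnitPixel1x3()
up1.integrate(flux=3.0, eits={'TG_L1': 8.0, 'TG_M1': 2.0, 'TG_S1': 0.5})
up1.reset()
up1.transfer('TG_L1')
print(up1.readout())                           # high-illumination sample from sub-pixel L1
```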
  • Referring to FIG. 3B, the unit pixel UP2 may include a plurality of photoelectric conversion elements PD4, PD5, and PD6, a plurality of transmission transistors TX4, TX5, and TX6, a reset transistor RX2, a selection transistor SX2, and a drive transistor DX2. Also, referring to FIG. 3C, the unit pixel UP3 may include a plurality of photoelectric conversion elements PD7, PD8, and PD9, a plurality of transmission transistors TX7, TX8, and TX9, a reset transistor RX3, a selection transistor SX3, and a drive transistor DX3.
  • According to the above description, the unit color pixel UCP of the 3×3 pixel size includes unit pixels UP of the 1×3 pixel size capable of outputting sensing signals independently of each other. Accordingly, unit pixels corresponding to the same effective integration time EIT may output sensing signals at the same time. The output sensing signals may be merged through the averaging operation.
  • FIG. 4 is a timing diagram illustrating a control method for implementing a high dynamic range (HDR) of a unit color pixel according to an embodiment. Referring to FIG. 4, sensing signals corresponding to the same effective integration time EIT may be simultaneously output from a selected unit color pixel UCP. That is, from a time T0 to a time T6, charges integrated by the photoelectric conversion elements PD1, PD6, and PD8 having the longest effective integration time EIT for high-illumination sensing are sensed. From the time T6 to a time T9, charges integrated by the photoelectric conversion elements PD3, PD5, and PD7 having the middle effective integration time EIT for middle-illumination sensing are sensed. From the time T9 to a time T11, charges integrated by the photoelectric conversion elements PD2, PD4, and PD9 having the shortest effective integration time EIT for low-illumination sensing are sensed.
  • First, a control operation of the unit color pixel UCP for high-illumination sensing may be performed from the time T0 to the time T6. The reset signal RG is maintained at a high level from the time T0 to the time T1 for the purpose of resetting charge detection nodes FD1, FD2, and FD3 of the unit pixels UP1, UP2, and UP3. In this case, the reset transistors RX1, RX2, and RX3 are turned on. When the reset transistors RX1, RX2, and RX3 are turned on, charges accumulated at the charge detection nodes FD1, FD2, and FD3 may be discharged, and the charge detection nodes FD1, FD2, and FD3 may be reset.
  • At the time T1, the reset signal RG transitions to a low level. As the reset signal RG transitions to the low level, the reset transistors RX1, RX2, and RX3 are turned off. In this case, the charge detection nodes FD1, FD2, and FD3 may be in a state where charge accumulation is possible.
  • At the time T2, as the selection signal SEL transitions to the high level, the selection transistors SX1, SX2, and SX3 are turned on. In the case where the selection transistors SX1, SX2, and SX3 are turned on, it is possible to output sensing signals.
  • At the time T3, the transmission signals TG_L1, TG_L2, and TG_L3 transition to the high level for the purpose of turning on the transmission transistors TX1, TX6, and TX8 of the sub-pixels L1, L2, and L3 corresponding to the longest effective integration time. In this case, the remaining charge transmission signals TG_M1, TG_M2, TG_M3, TG_S1, TG_S2, and TG_S3 may be maintained at the low level. During a high period (T3 to T4) of the charge transmission signals TG_L1, TG_L2, and TG_L3, photoelectrons integrated by the photoelectric conversion elements PD1, PD6, and PD8 are transmitted to the charge detection nodes FD1, FD2, and FD3. That is, the photoelectrons are accumulated at the charge detection nodes FD1, FD2, and FD3.
  • Between the time T4 and the time T5, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. For example, the drive transistor DX1 of the unit pixel UP1 amplifies a potential change of the charge detection node FD1 and outputs the amplified signal to the column line CLi through the selection transistor SX1. Likewise, the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2, and the drive transistor DX3 of the unit pixel UP3 amplifies a potential change of the charge detection node FD3 and outputs the amplified signal to the column line CLk through the selection transistor SX3.
  • At the time T5, as the selection signal SEL transitions to the low level, the selection transistors SX1, SX2, and SX3 are turned off. In this case, sensing signals of the unit pixels UP1, UP2, and UP3 are blocked from being output.
  • At the time T6, as the reset signal RG transitions to the high level, the reset transistors RX1, RX2, and RX3 are turned on. When the reset transistors RX1, RX2, and RX3 are turned on, the charge detection nodes FD1, FD2, and FD3 of the unit pixels UP1, UP2, and UP3 are reset to the pixel power supply voltage VPIX.
  • A control operation for middle-illumination sensing is performed from the time T6 to the time T9. The transitions of the selection signal SEL and the reset signal RG from the time T6 to the time T9 are the same as those from the time T0 to the time T6, and thus, additional description will be omitted to avoid redundancy. In a state where the reset signal RG transitions to the low level and the selection signal SEL transitions to the high level, photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD3, PD5, and PD7 having the effective integration time EIT of a middle length.
  • At the time T8, the transmission signals TG_M1, TG_M2, and TG_M3 transition to the high level for the purpose of turning on the transmission transistors TX3, TX5, and TX7 of the sub-pixels M1, M2, and M3 corresponding to the effective integration time of the middle length. During a high period of the charge transmission signals TG_M1, TG_M2, and TG_M3, the photoelectrons integrated by the photoelectric conversion elements PD3, PD5, and PD7 are transmitted to the charge detection nodes FD1, FD2, and FD3. That is, the photoelectrons are accumulated at the charge detection nodes FD1, FD2, and FD3. Then, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. For example, the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2. Likewise, the drive transistor DX1 of the unit pixel UP1 amplifies a potential change of the charge detection node FD1 and outputs the amplified signal to the column line CLi through the selection transistor SX1, and the drive transistor DX3 of the unit pixel UP3 amplifies a potential change of the charge detection node FD3 and outputs the amplified signal to the column line CLk through the selection transistor SX3.
  • A control operation for low-illumination sensing is performed from the time T9 to the time T11. After the time T9, in a state where the reset signal RG transitions to the low level and the selection signal SEL transitions to the high level, photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD2, PD4, and PD9 having the effective integration time EIT of the shortest length.
  • At the time T10, the transmission signals TG_S1, TG_S2, and TG_S3 transition to the high level for the purpose of turning on the transmission transistors TX2, TX4, and TX9 of the sub-pixels S1, S2, and S3 corresponding to the shortest effective integration time. During a high period of the charge transmission signals TG_S1, TG_S2, and TG_S3, the photoelectrons integrated by the photoelectric conversion elements PD2, PD4, and PD9 are transmitted to the charge detection nodes FD1, FD2, and FD3. That is, the photoelectrons are accumulated at the charge detection nodes FD1, FD2, and FD3. Then, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. For example, the drive transistor DX3 of the unit pixel UP3 amplifies a potential change of the charge detection node FD3 and outputs the amplified signal to the column line CLk through the selection transistor SX3. Likewise, the drive transistor DX1 of the unit pixel UP1 amplifies a potential change of the charge detection node FD1 and outputs the amplified signal to the column line CLi through the selection transistor SX1, and the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2.
  • An example in which sensing signals are able to be simultaneously output from sub-pixels of the unit color pixel UCP in the HDR mode is described above. The sensing signals output from the unit pixels UP1, UP2, and UP3 make it possible to perform the averaging operation. Accordingly, the frame rate of the image sensor 100 according to an embodiment may be improved in the high dynamic range (HDR) mode.
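  • The full sequence of FIG. 4 can be condensed into the event list below; the time labels are symbolic and only the ordering and grouping of signals matter. This is a reading aid rather than a definition of the timing.

```python
# Condensed event-list sketch of the FIG. 4 control sequence (times are symbolic).
SEQUENCE = [
    ('T0-T1', 'RG high',                 'reset charge detection nodes FD1/FD2/FD3'),
    ('T2',    'SEL high',                'enable output through SX1/SX2/SX3'),
    ('T3-T4', 'TG_L1/TG_L2/TG_L3 high',  'transfer PD1/PD6/PD8 (high-illumination sensing)'),
    ('T4-T5', 'read CLi/CLj/CLk',        'average the three long-EIT signals'),
    ('T6',    'RG high',                 'reset before the next phase'),
    ('T8',    'TG_M1/TG_M2/TG_M3 high',  'transfer PD3/PD5/PD7 (middle-illumination sensing)'),
    ('T9',    'RG high',                 'reset before the next phase'),
    ('T10',   'TG_S1/TG_S2/TG_S3 high',  'transfer PD2/PD4/PD9 (low-illumination sensing)'),
]

for time, signal, action in SEQUENCE:
    print(f'{time:6} {signal:26} -> {action}')
```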
  • FIG. 5 is a flowchart illustrating a high dynamic range (HDR) sensing method of an image sensor according to an embodiment. Referring to FIG. 5, sensing signals corresponding to the same effective integration time EIT may be simultaneously output through the charge detection nodes FD1, FD2, and FD3 independently provided in the unit color pixel UCP.
  • In operation S110, one unit color pixel UCP may be selected through the row decoder 120 of the image sensor 100. A plurality of unit color pixels present in the same row may be simultaneously selected.
  • In operation S120, a high-illumination mode (HIM) sensing operation is performed on the selected unit color pixel UCP. The high-illumination mode (HIM) sensing operation may refer to an operation of sensing sub-pixels having the longest effective integration time EIT from among sub-pixels of the unit color pixel UCP. For example, photoelectrons integrated by the photoelectric conversion elements PD1, PD6, and PD8 of the high-illumination sub-pixels L1, L2, and L3 of FIG. 2 corresponding to the longest effective integration time EIT are accumulated at the charge detection nodes FD1, FD2, and FD3. Afterwards, sensing signals corresponding to the charges accumulated at the charge detection nodes FD1, FD2, and FD3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • In operation S130, a middle-illumination mode (MIM) sensing operation is performed on the selected unit color pixel UCP. The middle-illumination mode (MIM) sensing operation may refer to an operation of sensing sub-pixels having the middle effective integration time EIT from among the sub-pixels of the unit color pixel UCP. For example, referring to FIGS. 2 and 3A to 3C, photoelectrons integrated in the photoelectric conversion elements PD3, PD5, and PD7 of the middle-illumination sub-pixels M1, M2, and M3 of FIG. 2 corresponding to the middle effective integration time EIT are accumulated at the charge detection nodes FD1, FD2, and FD3. Afterwards, sensing signals corresponding to the charges accumulated at the charge detection nodes FD1, FD2, and FD3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • In operation S140, a low-illumination mode (LIM) sensing operation is performed on the selected unit color pixel UCP. The low-illumination mode (LIM) sensing operation may refer to an operation of sensing sub-pixels having the shortest effective integration time EIT from among the sub-pixels of the unit color pixel UCP. For example, referring to FIGS. 2 and 3A to 3C, photoelectrons integrated by the photoelectric conversion elements PD2, PD4, and PD9 of the low-illumination sub-pixels S1, S2, and S3 of FIG. 2 corresponding to the shortest effective integration time EIT are accumulated at the charge detection nodes FD1, FD2, and FD3. Afterwards, sensing signals corresponding to the charges accumulated at the charge detection nodes FD1, FD2, and FD3 may be simultaneously output to the column lines CLi, CLj, and CLk.
  • In operation S150, a binning operation is performed on the sensing signals output from the selected unit color pixel UCP. For example, the averaging operation may be performed on the high-illumination mode sensing signals simultaneously output from the selected unit color pixel UCP. Alternatively, the averaging operation may be performed on sensing signals output from a plurality of unit color pixels UCP corresponding to the same color. Unit color pixels UCP that are present in the same row or in the same column may be selected as the plurality of unit color pixels UCP targeted for the averaging operation. Alternatively, the plurality of unit color pixels UCP targeted for the averaging operation may be selected from groups of unit color pixels UCP that correspond to the same color and are distributed in a given region.
  • In operation S160, a sensing signal processed through the binning operation is converted into digital data. Afterwards, a high dynamic range (HDR) image may be generated by combining pieces of image data corresponding to the low illumination mode (LIM), the middle illumination mode (MIM), and the high illumination mode (HIM).
  • A way to generate the HDR image by using an image sensor of a unit color pixel (UCP) structure according to embodiments is described above. In the unit color pixel (UCP) structure according to embodiments, sensing signals corresponding to the same illumination may be simultaneously output, and the simultaneously output sensing signals may be processed through the averaging operation such as addition or subtraction. According to the unit color pixel (UCP) structure, a time taken for image sensing of the HDR mode is markedly decreased, and the HDR image may be obtained with a high frame rate.
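  • The following sketch is one simplified way, and not the patent's prescribed algorithm, to carry out the final combination step: each mode's digital value is rescaled by its effective integration time and the longest exposure that is not saturated is kept. The EIT values, saturation threshold, and full-scale code are assumptions of the example.

```python
# Simplified per-pixel HDR combination of the three exposure results (illustrative only).
def combine_hdr(him, mim, lim, eit_h=8.0, eit_m=2.0, eit_s=0.5, full_scale=1023):
    """him / mim / lim: digital codes from the high, middle, and low illumination modes."""
    saturation = 0.9 * full_scale
    if him < saturation:               # long exposure usable: best signal-to-noise ratio
        return him / eit_h
    if mim < saturation:               # otherwise fall back to the middle exposure
        return mim / eit_m
    return lim / eit_s                 # very bright scene: use the shortest exposure

print(combine_hdr(him=1020, mim=700, lim=180))   # bright pixel resolved by the middle exposure
```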
  • FIGS. 6A, 6B, and 6C are flowcharts for describing a high illumination mode, a middle illumination mode, and a low illumination mode of FIG. 5, respectively. Operation S120 corresponding to the high-illumination mode (HIM) sensing operation will be more fully described with reference to FIGS. 6A and 5.
  • In operation S121, the charge detection nodes FD1, FD2, and FD3 of the unit pixels UP1, UP2, and UP3 are reset to perform the high-illumination mode (HIM) sensing operation on the selected unit color pixel UCP. To this end, the reset signal RG is set to the high level, and reset transistors, for example, the reset transistors RX1, RX2, and RX3 of the unit pixels UP1, UP2, and UP3 are turned on by the reset signal RG. When the reset transistors RX1, RX2, and RX3 are turned on, charges present at the charge detection nodes FD1, FD2, and FD3 are discharged to a pixel power supply voltage (VPIX) terminal. As a result, voltages of the charge detection nodes FD1, FD2, and FD3 may be reset to a level of the pixel power supply voltage VPIX.
  • In operation S123, the charge transmission signals TG_L1, TG_L2, and TG_L3 transition to the high level. In this case, photoelectrons integrated by the photoelectric conversion elements PD1, PD6, and PD8 are accumulated at the charge detection nodes FD1, FD2, and FD3.
  • In operation S125, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. A voltage level corresponding to the amount of charges accumulated at each of the charge detection nodes FD1, FD2, and FD3 is amplified as a source-drain current of each of the drive transistors DX1, DX2, and DX3. The amplified signals may be output to the column lines CLi, CLj, and CLk through the selection transistors SX1, SX2, and SX3.
  • In operation S127, the averaging operation may be performed on the high-illumination mode sensing signals output to the column lines CLi, CLj, and CLk. For example, the high-illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • According to the high-illumination mode sensing method corresponding to operation S120, a plurality of sub-pixels that perform a sensing operation in the high illumination mode may simultaneously output sensing signals. In addition, as the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high-dynamic range (HDR) mode, and it is possible to improve a frame rate.
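  • Operations S121 to S127 can be summarized in a single function, as in the sketch below, where each unit pixel is reduced to a dictionary of per-sub-pixel charge and a floating-diffusion value. The gain factor and the data layout are illustrative assumptions.

```python
# One-function sketch of the high-illumination mode sensing steps S121-S127 (illustrative).
def high_illumination_mode(unit_pixels, long_key='L', gain=0.8):
    column_signals = []
    for up in unit_pixels:
        up['FD'] = 0.0                          # S121: reset the charge detection node
        up['FD'] += up[long_key]                # S123: TG_L high, photoelectrons transferred to FD
        column_signals.append(gain * up['FD'])  # S125: source-follower output onto the column line
    return sum(column_signals) / len(column_signals)   # S127: averaging of the simultaneous outputs

ups = [{'L': 900.0, 'M': 250.0, 'S': 60.0, 'FD': 0.0} for _ in range(3)]
print(high_illumination_mode(ups))
```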
  • Operation S130 corresponding to the middle-illumination mode (MIM) sensing operation will be more fully described with reference to FIGS. 6B and 5.
  • In operation S131, the charge detection nodes FD1, FD2, and FD3 of the unit pixels UP1, UP2, and UP3 are reset to perform the middle-illumination mode (MIM) sensing operation on the selected unit color pixel UCP. As the reset transistors RX1, RX2, and RX3 are turned on in response to the reset signal RG transitioning to the high level, charges present at the charge detection nodes FD1, FD2, and FD3 are discharged to the pixel power supply voltage (VPIX) terminal. As a result, voltages of the charge detection nodes FD1, FD2, and FD3 may be reset to a level of the pixel power supply voltage VPIX.
  • In operation S133, the charge transmission signals TG_M1, TG_M2, and TG_M3 transition to the high level. In this case, photoelectrons integrated by the photoelectric conversion elements PD3, PD5, and PD7 are accumulated at the charge detection nodes FD1, FD2, and FD3.
  • In operation S135, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. A voltage level corresponding to the amount of charges accumulated at each of the charge detection nodes FD1, FD2, and FD3 is amplified as a source-drain current of each of the drive transistors DX1, DX2, and DX3. The amplified signals may be output to the column lines CLi, CLj, and CLk through the selection transistors SX1, SX2, and SX3.
  • In operation S137, the averaging operation may be performed on the middle-illumination mode sensing signals output to the column lines CLi, CLj, and CLk. For example, the middle-illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • According to the middle-illumination mode sensing method corresponding to operation S130, a plurality of sub-pixels that perform a sensing operation in the middle illumination mode may simultaneously output sensing signals. In addition, as the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high-dynamic range (HDR) mode, and it is possible to improve a frame rate.
  • Operation S140 corresponding to the low-illumination mode (LIM) sensing operation will be more fully described with reference to FIGS. 6C and 5.
  • In operation S141, the charge detection nodes FD1, FD2, and FD3 of the unit pixels UP1, UP2, and UP3 are reset to perform the low-illumination mode (LIM) sensing operation on the selected unit color pixel UCP. As the reset transistors RX1, RX2, and RX3 are turned on in response to the reset signal RG transitioning to the high level, charges present at the charge detection nodes FD1, FD2, and FD3 are discharged to the pixel power supply voltage (VPIX) terminal. As a result, voltages of the charge detection nodes FD1, FD2, and FD3 may be reset to a level of the pixel power supply voltage VPIX.
  • In operation S143, the charge transmission signals TG_S1, TG_S2, and TG_S3 transition to the high level. In this case, photoelectrons integrated by the photoelectric conversion elements PD2, PD4, and PD9 are accumulated at the charge detection nodes FD1, FD2, and FD3.
  • In operation S145, currents flow through the drive transistors DX1, DX2, and DX3, the gate terminals of which are respectively connected to the charge detection nodes FD1, FD2, and FD3, in proportion to the amount of charges accumulated at the charge detection nodes FD1, FD2, and FD3. A voltage level corresponding to the amount of charges accumulated at each of the charge detection nodes FD1, FD2, and FD3 is amplified as a source-drain current of each of the drive transistors DX1, DX2, and DX3. The amplified signals may be output to the column lines CLi, CLj, and CLk through the selection transistors SX1, SX2, and SX3.
  • In operation S147, the averaging operation may be performed on the low illumination mode sensing signals output to the column lines CLi, CLj, and CLk. For example, the low illumination mode sensing signals output to the column lines CLi, CLj, and CLk may be merged to one sensing signal.
  • According to the low-illumination mode sensing method corresponding to operation S140, a plurality of sub-pixels that perform a sensing operation in the low illumination mode may simultaneously output sensing signals. In addition, as the averaging operation is performed on the output sensing signals, it is possible to perform a sensing operation at a high speed in the high dynamic range (HDR) mode, and it is possible to improve a frame rate.
  • FIG. 7 illustrates another embodiment in which image signals sensed from a unit color pixel are processed. Referring to FIG. 7, a simultaneous sensing and readout operation of sub-pixels having the same effective integration time EIT may be performed on the unit color pixels UCP disposed in the same row.
  • The pixel array 110 may include a plurality of unit pixel groups (UPG) 112 and 114 including a plurality of unit color pixels 112 a to 112 d and 114 a to 114 d according to embodiments. Here, a unit pixel group UPG means a combination of unit color pixels UCP respectively corresponding to colors “R”, G1, G2, and “B” to be sensed. Unit color pixels 112 a and 114 a may be selected from the unit pixel groups 112 and 114. A sensing operation of the high dynamic range (HDR) mode may be performed on the unit color pixels 112 a and 114 a.
  • In the sensing operation of the high dynamic range (HDR) mode associated with the unit color pixels 112 a and 114 a, the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 112 a and high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 114 a may be output to a binning circuit 135. The binning circuit 135 may perform the averaging operation on the high-illumination mode sensing signals of the sub-pixels L1, L2, and L3 included in the unit color pixels 112 a and 114 a.
  • The high-illumination mode sensing signals output from the unit color pixels 112 a and 114 a are converted into digital image data by the analog-to-digital converter 130 after being processed by the binning circuit 135. As a result, in the high dynamic range (HDR) sensing mode, 1×1 pixel data may be generated based on a sensing result of two unit color pixels 112 a and 114 a each having a 3×3 pixel size. The HDR sensing and binning operation that is performed in the unit of a plurality of unit color pixels 112 a and 114 a may be identically applied to the unit color pixels 112 b and 114 b. Also, an HDR sensing and binning operation that is performed in the unit of a plurality of unit color pixels 112 a and 114 a may be identically applied to the unit color pixels 112 c and 114 c and the unit color pixels 112 d and 114 d.
  • FIG. 8 illustrates another embodiment in which image signals sensed from a unit color pixel are processed. Referring to FIG. 8, a simultaneous sensing and readout operation of sub-pixels having the same effective integration time EIT may be performed on the unit color pixels UCP disposed in the same column.
  • The pixel array 110 may include a plurality of unit pixel groups 112, 113, 114, and 115 including a plurality of unit color pixels 112 a to 112 d, 113 a to 113 d, 114 a to 114 d, and 115 a to 115 d according to embodiments. Unit color pixels 112 a, 113 a, 114 a, and 115 a may be selected from the unit pixel groups 112, 113, 114, and 115 for the high dynamic range (HDR) sensing mode.
  • In the sensing operation of the high dynamic range (HDR) mode associated with the unit color pixels 112 a and 113 a present in the same column, the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 112 a and high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 113 a may be output to a binning circuit 136. The binning circuit 136 may perform the averaging operation on the high illumination mode sensing signals of the sub-pixels L1, L2, and L3 included in the unit color pixels 112 a and 113 a.
  • In the sensing operation of the high dynamic range (HDR) mode associated with the unit color pixels 114 a and 115 a present in the same column, the averaging operation may be performed on sub-pixels of two unit color pixels. That is, high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 114 a and high-illumination mode sensing signals sensed from the sub-pixels L1, L2, and L3 of the unit color pixel 115 a may be output to a binning circuit 137. The binning circuit 137 may perform the averaging operation on the high-illumination mode sensing signals of the sub-pixels L1, L2, and L3 included in the unit color pixels 114 a and 115 a.
  • The high-illumination mode sensing signals output from the unit color pixels 112 a, 113 a, 114 a, and 115 a are converted into digital image data by the analog-to-digital converter 130 after being processed by the binning circuits 136 and 137. As a result, in the high dynamic range (HDR) sensing mode, 1×1 pixel data may be generated based on a sensing result of two unit color pixels 112 a and 113 a each having a 3×3 pixel size and present in the same column. The column-based HDR sensing and binning operation may be identically applied to the remaining unit color pixels.
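  • The binning paths of FIGS. 7 and 8 differ only in whether the two same-color unit color pixels come from the same row or the same column; in both cases the same-EIT signals are merged before conversion, as in the hedged sketch below. The signal values and dictionary layout are invented for the example.

```python
# Illustrative sketch of the binning performed across two same-color 3x3 unit color pixels:
# the six same-EIT sub-pixel signals are averaged by the binning circuit, so the pair
# yields one HDR sample per illumination mode before analog-to-digital conversion.

def bin_two_ucps(ucp_a, ucp_b, eit):
    """ucp_a / ucp_b: dicts mapping an EIT class to the three sub-pixel signals of one UCP."""
    merged = ucp_a[eit] + ucp_b[eit]           # six same-EIT signals, output together
    return sum(merged) / len(merged)           # averaging by the binning circuit

ucp_112a = {'L': [0.90, 0.92, 0.91], 'M': [0.40, 0.41, 0.39], 'S': [0.10, 0.11, 0.10]}
ucp_114a = {'L': [0.88, 0.93, 0.90], 'M': [0.42, 0.40, 0.41], 'S': [0.09, 0.12, 0.11]}
print(bin_two_ucps(ucp_112a, ucp_114a, 'L'))   # one binned high-illumination value
```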
  • FIG. 9 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1. Referring to FIG. 9, the unit color pixel UCP may include two unit pixels UP1 and UP2 each including two sub-pixels. That is, the unit color pixel UCP may be implemented with 2×2 pixels constituting two unit pixels each having a 1×2 pixel size.
  • The one unit color pixel UCP includes the two unit pixels UP1 and UP2. Each of the unit pixels UP1 and UP2 includes two sub-pixels. The unit pixel UP1 may include sub-pixels L1 and S1 and one charge detection node FD1. The sub-pixels L1 and S1 having different effective integration times EIT correspond to photoelectric conversion elements PD1 and PD2. The unit pixel UP2 may include sub-pixels S2 and L2 and one charge detection node FD2. The sub-pixels S2 and L2 correspond to photoelectric conversion elements PD3 and PD4.
  • Here, sub-pixels correspond to different effective integration times EIT. For example, the sub-pixel L1 of the unit pixel UP1 may have a relatively long effective integration time EIT for high-illumination mode sensing. The sub-pixel S1 of the unit pixel UP1 has an effective integration time EIT shorter than the sub-pixel L1. The unit pixel UP2 may have substantially the same structure as the unit pixel UP1. That is, the sub-pixel S2 placed in the first row of the unit pixel UP2 may have a short effective integration time EIT. The sub-pixel L2 placed in the second row of the unit pixel UP2 may have a long effective integration time EIT.
  • The unit color pixel UCP that performs the HDR mode sensing operation according to embodiments includes two unit pixels UP, in each of which a charge detection node is shared. Each unit pixel UP may include two sub-pixels. Accordingly, a unit color pixel may have a pixel structure of a 2×2 pixel size, in which two unit pixels each having a 1×2 pixel size are arranged. In this structure, unit pixels according to embodiments may output signals of sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
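The grouping of same-EIT sub-pixels for simultaneous readout can be illustrated with a short sketch. The dictionary layout and the function name below are assumptions made only for illustration of the 2×2 arrangement described above.

```python
# Minimal sketch (hypothetical layout): a 2x2 unit color pixel built from two
# 1x2 unit pixels UP1 (L1, S1) and UP2 (S2, L2), each sharing one charge
# detection node. Sub-pixels with the same effective integration time (EIT)
# are read out together, so their signals can be averaged in one step.

UNIT_COLOR_PIXEL_2x2 = {
    "UP1": {"L1": "long_EIT", "S1": "short_EIT"},   # shares FD1
    "UP2": {"S2": "short_EIT", "L2": "long_EIT"},   # shares FD2
}

def same_eit_groups(unit_color_pixel):
    """Group sub-pixel names by effective integration time for simultaneous readout."""
    groups = {}
    for unit_pixel in unit_color_pixel.values():
        for sub_pixel, eit in unit_pixel.items():
            groups.setdefault(eit, []).append(sub_pixel)
    return groups

print(same_eit_groups(UNIT_COLOR_PIXEL_2x2))
# {'long_EIT': ['L1', 'L2'], 'short_EIT': ['S1', 'S2']}
```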
  • FIGS. 10A and 10B are circuit diagrams illustrating structures of the unit pixels illustrated in FIG. 9. Referring to FIG. 10A, the unit pixel UP1 of a 1×2 pixel size may include a plurality of photoelectric conversion elements PD1 and PD2, a plurality of transmission transistors TX1 and TX2, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1. These elements are substantially the same as those of FIG. 3A, and thus additional description of their functions will be omitted to avoid redundancy.
  • The HDR mode sensing operation corresponding to two effective integration times EIT may be performed on the unit pixel UP1 of the 1×2 pixel size. That is, with regard to the unit pixel UP1 of the 1×2 pixel size, the HDR mode sensing operation is possible in two modes, i.e., the high illumination mode and the low illumination mode, by using the transmission signals TG_L1 and TG_S1.
  • Referring to FIG. 10B, the unit pixel UP2 of a 1×2 pixel size may include a plurality of photoelectric conversion elements PD3 and PD4, a plurality of transmission transistors TX3 and TX4, the reset transistor RX2, the selection transistor SX2, and the drive transistor DX2. These elements are substantially the same as those of FIG. 3B, and thus additional description of their functions will be omitted to avoid redundancy.
  • According to the description given with reference to FIGS. 10A and 10B, the unit color pixel UCP of the 2×2 pixel size includes unit pixels UP of the 1×2 pixel size capable of outputting sensing signals independently of each other. Accordingly, the unit color pixel UCP may perform the HDR mode sensing operation in two modes, i.e., the high illumination mode and the low illumination mode. In addition, it is possible to perform the averaging operation on high-illumination mode sensing signals or low-illumination mode sensing signals that are simultaneously output from respective unit pixels. Accordingly, a frame rate may be improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
  • FIG. 11 is a timing diagram illustrating a control method for performing an HDR mode sensing operation on a unit color pixel having a 2×2 pixel size illustrated in FIGS. 10A and 10B. Referring to FIG. 11, in a selected unit color pixel UCP, sensing signals corresponding to the same effective integration time EIT may be simultaneously accumulated at the charge detection nodes FD1 and FD2. From the time T0 to the time T6, charges integrated by the photoelectric conversion elements PD1 and PD4 for high-illumination sensing are sensed. From the time T6 to the time T9, charges integrated by the photoelectric conversion elements PD2 and PD3 for low-illumination sensing are sensed.
  • First, a control operation of the unit color pixel UCP for high-illumination sensing may be performed from the time T0 to the time T6. The reset signal RG is maintained at the high level from the time T0 to the time T1 for the purpose of resetting charge detection nodes FD1 and FD2 of the unit pixels UP1 and UP2. In this case, the reset transistors RX1 and RX2 are turned on, and the charge detection nodes FD1 and FD2 are reset.
  • At the time T1, the reset signal RG transitions to the low level. The reset transistors RX1 and RX2 may be turned off in response to the reset signal RG transitioning to the low level, and the charge detection nodes FD1 and FD2 may be set to a state capable of accumulating charges.
  • At the time T2, the selection signal SEL transitions to the high level. The selection transistors SX1 and SX2 are turned on in response to the selection signal SEL transitioning to the high level, and the output of sensed data becomes possible.
  • At the time T3, the transmission signals TG_L1 and TG_L2 transition to the high level for the purpose of turning on the transmission transistors TX1 and TX4 of the sub-pixels L1 and L2. In this case, the remaining charge transmission signals TG_S1 and TG_S2 may be maintained at the low level. During a high period (T3 to T4) of the charge transmission signals TG_L1 and TG_L2, photoelectrons integrated by the photoelectric conversion elements PD1 and PD4 are transmitted to the charge detection nodes FD1 and FD2. That is, the photoelectrons are accumulated at the charge detection nodes FD1 and FD2.
  • Between the time T4 and the time T5, currents flow through the drive transistors DX1 and DX2, the gate terminals of which are respectively connected to the charge detection nodes FD1 and FD2, in proportion to the amount of charges accumulated at the charge detection nodes FD1 and FD2. For example, the drive transistor DX1 of the unit pixel UP1 amplifies a potential change of the charge detection node FD1 and outputs the amplified signal to the column line CLi through the selection transistor SX1. Likewise, the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2.
  • At the time T5, as the selection signal SEL transitions to the low level, the selection transistors SX1 and SX2 are turned off. In this case, sensing signals of the unit pixels UP1 and UP2 are blocked from being output.
  • At the time T6, as the reset signal RG transitions to the high level, the reset transistors RX1 and RX2 are turned on. When the reset transistors RX1 and RX2 are turned on, the charge detection nodes FD1 and FD2 of the unit pixels UP1 and UP2 are reset to the pixel power supply voltage VPIX.
  • A control operation for low-illumination sensing is performed from the time T6 to the time T9. The transitions of the selection signal SEL and the reset signal RG from the time T6 to the time T9 are the same as those from the time T0 to the time T6, and thus, additional description will be omitted to avoid redundancy. In a state where the reset signal RG transitions to the low level and the selection signal SEL transitions to the high level, photoelectrons corresponding to an incident light are integrated by the photoelectric conversion elements PD2 and PD3 provided for the low-illumination sensing operation.
  • At the time T8, the transmission signals TG_S1 and TG_S2 transition to the high level for the purpose of turning on the transmission transistors TX2 and TX3 of the sub-pixels S1 and S2 provided for the low-illumination mode sensing operation. During a high period of the charge transmission signals TG_S1 and TG_S2, photoelectrons integrated by the photoelectric conversion elements PD2 and PD3 are transmitted to the charge detection nodes FD1 and FD2. That is, the photoelectrons are accumulated at the charge detection nodes FD1 and FD2. Then, currents flow through the drive transistors DX1 and DX2, the gate terminals of which are respectively connected to the charge detection nodes FD1 and FD2, in proportion to the amount of charges accumulated at the charge detection nodes FD1 and FD2. For example, the drive transistor DX2 of the unit pixel UP2 amplifies a potential change of the charge detection node FD2 and outputs the amplified signal to the column line CLj through the selection transistor SX2.
  • A way to sense the unit color pixel UCP of the 2×2 pixel size in the high dynamic range (HDR) mode is described above. As described above, signals sensed from a plurality of sub-pixels may be output in the high-illumination mode or the low-illumination mode, and the sensing signals output from the unit pixels UP1 and UP2 make it possible to perform the averaging operation. Accordingly, the frame rate of the image sensor 100 according to an embodiment may be improved in the high dynamic range (HDR) mode.
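As a compact summary of the control sequence of FIG. 11, the following sketch lists the main control events from T0 to T9 described above. The event-list representation is an assumption made only for illustration; in the device, the control signals are generated by the row decoder.

```python
# Illustrative sketch of the FIG. 11 control sequence for the 2x2 unit color
# pixel, expressed as an ordered list of (time, action) steps. The list form is
# an assumption for illustration; the actual signals RG, SEL, TG_L1/TG_L2 and
# TG_S1/TG_S2 are hardware control signals, not software events.

HDR_SEQUENCE_2x2 = [
    ("T0-T1", "RG high: reset FD1/FD2 through RX1/RX2"),
    ("T1",    "RG low: FD1/FD2 ready to accumulate charge"),
    ("T2",    "SEL high: SX1/SX2 on, outputs enabled"),
    ("T3-T4", "TG_L1/TG_L2 high: PD1/PD4 charge moved to FD1/FD2 (long EIT)"),
    ("T4-T5", "DX1/DX2 drive column lines CLi/CLj with high-illumination signals"),
    ("T5",    "SEL low: outputs blocked"),
    ("T6",    "RG high: FD1/FD2 reset to VPIX, start of low-illumination phase"),
    ("T8",    "TG_S1/TG_S2 high: PD2/PD3 charge moved to FD1/FD2 (short EIT)"),
    ("T8-T9", "DX1/DX2 drive column lines CLi/CLj with low-illumination signals"),
]

for time, action in HDR_SEQUENCE_2x2:
    print(f"{time:>6}: {action}")
```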
  • FIG. 12 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1. Referring to FIG. 12, the unit color pixel UCP may include four unit pixels UP1, UP2, UP3, and UP4 each including four sub-pixels. That is, the unit color pixel UCP may be implemented with 4×4 pixels constituting four unit pixels each having a 1×4 pixel size.
  • In this embodiment, one unit color pixel UCP includes four unit pixels UP1, UP2, UP3, and UP4. Each of the unit pixels UP1, UP2, UP3, and UP4 includes four sub-pixels. The unit pixel UP1 may include sub-pixels L1, M1, E1, and S1 and one charge detection node FD1. The sub-pixels L1, M1, E1, and S1 having different effective integration times EIT correspond to photoelectric conversion elements PD1, PD2, PD3, and PD4. The unit pixel UP2 may include sub-pixels L2, M2, E2, and S2 and one charge detection node FD2. The unit pixels UP3 and UP4 have the same structures as the unit pixels UP1 and UP2 except for different sub-pixel arrangements.
  • The unit color pixel UCP that performs the HDR mode sensing operation according to embodiments with regard to one color includes four unit pixels UP, in each of which a charge detection node is shared. Each of the unit pixels UP may include four sub-pixels. Accordingly, the unit color pixel UCP may have a pixel structure of a 4×4 pixel size, in which four unit pixels each having a 1×4 pixel size are arranged. In this structure, unit pixels according to embodiments may output signals of four sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
  • FIG. 13 is a circuit diagram illustrating a structure of a unit pixel illustrated in FIG. 12. Referring to FIG. 13, the unit pixel UP1 of a 1×4 pixel size may include a plurality of photoelectric conversion elements PD1, PD2, PD3, and PD4, a plurality of transmission transistors TX1, TX2, TX3, and TX4, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1. These elements are substantially the same as those of FIG. 3A, and thus additional description of their functions will be omitted to avoid redundancy.
  • The HDR mode sensing operation corresponding to four effective integration times EIT may be performed on the four unit pixels UP each having the 1×4 pixel size. That is, with regard to the unit pixel UP1 of the 1×4 pixel size, the HDR mode sensing operation is possible in four illumination ranges by using the transmission signals TG_L1, TG_S1, TG_M1, and TG_E1.
  • According to the description given with reference to FIGS. 12 and 13, the unit color pixel UCP of the 4×4 pixel size includes unit pixels UP of the 1×4 pixel size capable of outputting sensing signals independently of each other. Accordingly, sensing signals corresponding to the same illumination may be generated from the unit pixels UP of the unit color pixel UCP, and the averaging operation may be performed on the generated sensing signals. This means that a frame rate is improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
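The averaging of same-EIT signals across the four 1×4 unit pixels can be sketched as follows. The dictionary of sensed values and the function name are hypothetical and serve only to illustrate the four-way averaging per effective integration time.

```python
# Minimal sketch (hypothetical data): averaging same-EIT sensing signals across
# the four 1x4 unit pixels of a 4x4 unit color pixel. Each unit pixel provides
# one sample per effective integration time (L, M, E, S), so each averaged
# value combines four sub-pixel signals.

import statistics

# Hypothetical sensed values, one dict per unit pixel UP1..UP4.
unit_pixels_4x4 = {
    "UP1": {"L": 0.91, "M": 0.62, "E": 0.33, "S": 0.12},
    "UP2": {"L": 0.88, "M": 0.60, "E": 0.35, "S": 0.11},
    "UP3": {"L": 0.90, "M": 0.64, "E": 0.31, "S": 0.13},
    "UP4": {"L": 0.89, "M": 0.61, "E": 0.34, "S": 0.12},
}

def average_per_eit(unit_pixels):
    """Average the sensing signals of sub-pixels sharing the same EIT."""
    eits = next(iter(unit_pixels.values())).keys()
    return {eit: statistics.mean(up[eit] for up in unit_pixels.values()) for eit in eits}

print(average_per_eit(unit_pixels_4x4))
# {'L': 0.895, 'M': 0.6175, 'E': 0.3325, 'S': 0.12}
```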
  • FIG. 14 is a diagram illustrating another example of the unit color pixel UCP illustrated in FIG. 1. Referring to FIG. 14, the unit color pixel UCP may include five unit pixels UP1, UP2, UP3, UP4, and UP5 each including five sub-pixels. That is, the unit color pixel UCP may be implemented with 5×5 pixels constituting five unit pixels each having a 1×5 pixel size.
  • In this embodiment, one unit color pixel UCP includes five unit pixels UP1, UP2, UP3, UP4, and UP5. Each of the unit pixels UP1, UP2, UP3, UP4, and UP5 includes five sub-pixels. The unit pixel UP1 may include sub-pixels L1, M1, E1, S1, and A1 and one charge detection node FD1. The sub-pixels L1, M1, E1, S1, and A1 having different effective integration times EIT correspond to photoelectric conversion elements PD1, PD2, PD3, PD4, and PD5. The unit pixel UP2 may include sub-pixels L2, M2, E2, S2, and A2 and one charge detection node FD2. The unit pixels UP3, UP4, and UP5 including charge detection nodes FD3, FD4, and FD5 have the same structures as the unit pixels UP1 and UP2 except for different sub-pixel arrangements.
  • The unit color pixel UCP that performs the HDR mode sensing on one color includes five unit pixels UP, in each of which a charge detection node is shared. Each of the unit pixels UP may include five sub-pixels. Accordingly, the unit color pixel UCP may have a pixel structure of a 5×5 pixel size, in which five unit pixels each having a 1×5 pixel size are arranged. In this structure, unit pixels may output signals of five sub-pixels having the same effective integration time EIT in the HDR mode sensing operation. Because the averaging operation is able to be performed on the output sensing signals, high-speed binning and analog-to-digital conversion are possible.
  • FIG. 15 is a circuit diagram illustrating a structure of a unit pixel of FIG. 14. Referring to FIG. 15, the unit pixel UP1 of a 1×5 pixel size may include a plurality of photoelectric conversion elements PD1, PD2, PD3, PD4, and PD5, a plurality of transmission transistors TX1, TX2, TX3, TX4, and TX5, the reset transistor RX1, the selection transistor SX1, and the drive transistor DX1. These elements are substantially the same as those of FIG. 3A, and thus additional description of their functions will be omitted to avoid redundancy.
  • The HDR mode sensing operation corresponding to five effective integration times EIT may be performed on the five unit pixels UP each having the 1×5 pixel size. That is, with regard to the unit pixel UP1 of the 1×5 pixel size, the HDR mode sensing operation is possible in five illumination ranges by using the transmission signals TG_L1, TG_S1, TG_M1, TG_E1, and TG_A1.
  • According to the description given with reference to FIGS. 14 and 15, the unit color pixel UCP of the 5×5 pixel size includes unit pixels UP of the 1×5 pixel size capable of outputting sensing signals independently of each other. Accordingly, sensing signals corresponding to the same illumination may be generated from the unit pixels UP of the unit color pixel UCP, and the averaging operation may be performed on the generated sensing signals. This means that a frame rate is improved in the HDR mode sensing operation by adopting an image sensor according to embodiments.
  • Unit color pixels having 2×2, 3×3, 4×4, and 5×5 pixel sizes that provide the advantages according to embodiments are described above. If necessary, the unit color pixel UCP may be implemented to have a pixel size in which the number of rows and the number of columns are different, such as 2×3 or 4×3, or the unit color pixel UCP may be implemented to have a pixel size larger than the 5×5 pixel size.
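For the generalization mentioned above, the following sketch summarizes, under the assumption that each unit pixel occupies one column of the unit color pixel and contributes one sub-pixel per effective integration time, how the pixel size determines the number of EITs and the number of signals averaged per EIT. The function name and field names are illustrative only.

```python
# Sketch of the generalization: a unit color pixel with an arbitrary
# rows x columns pixel size, built from `columns` unit pixels of size 1 x rows,
# each sharing one charge detection node. Each unit pixel contributes one
# sub-pixel per effective integration time, so `rows` distinct EITs can be
# sensed and each EIT is averaged over `columns` sub-pixel signals.

def describe_unit_color_pixel(rows, columns):
    return {
        "pixel_size": f"{rows}x{columns}",
        "unit_pixels": columns,                 # one shared FD node per unit pixel
        "sub_pixels_per_unit_pixel": rows,
        "effective_integration_times": rows,
        "signals_averaged_per_eit": columns,
    }

for rows, columns in [(2, 2), (3, 3), (2, 3), (4, 3), (5, 5)]:
    print(describe_unit_color_pixel(rows, columns))
```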
  • By way of summation and review, when a certain color is saturated due to a narrow dynamic range, the image sensor fails to properly express an original color of the image. Therefore, attempts have been made to implement a high dynamic range (HDR) pixel, e.g., by adjusting a light integration time at the image sensor or by increasing a capacity of a floating diffusion (FD) region.
  • However, the above techniques that are applied to the image sensor require a relatively large area or cause a decrease in a frame rate of the image sensor. Accordingly, a technology for providing a high frame rate while implementing the high dynamic range (HDR) is required.
  • In contrast, embodiments provide an image sensor capable of performing a sensing operation under various illumination conditions without decreasing a frame rate. That is, an image sensor according to an embodiment may provide a high dynamic range (HDR) image while minimizing a decrease in frame rate or resolution.
  • Example embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. In some instances, as would be apparent to one of ordinary skill in the art as of the filing of the present application, features, characteristics, and/or elements described in connection with a particular embodiment may be used singly or in combination with features, characteristics, and/or elements described in connection with other embodiments unless otherwise specifically indicated. Accordingly, it will be understood by those of skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.

Claims (20)

What is claimed is:
1. An image sensor, comprising:
a plurality of unit color pixels each including a plurality of sub-pixels arranged in a matrix of rows and columns, each of the plurality of unit color pixels including:
a first unit pixel including first to third sub-pixels included in a first column of the matrix;
a second unit pixel including fourth to sixth sub-pixels included in a second column of the matrix; and
a third unit pixel including seventh to ninth sub-pixels included in a third column of the matrix, wherein:
the first to third sub-pixels share a first charge detection node, the fourth to sixth sub-pixels share a second charge detection node, and the seventh to ninth sub-pixels share a third charge detection node, and
the first, fourth, and seventh sub-pixels generate a first set of sensing signals at a first time point, the second, fifth, and eighth sub-pixels generate a second set of sensing signals at a second time point, and the third, sixth, and ninth sub-pixels generate a third set of sensing signals at a third time point.
2. The image sensor of claim 1, wherein:
a first effective integration time is applied to the first, fourth, and seventh sub-pixels to generate the first set of sensing signals at the first time point,
a second effective integration time is applied to the second, fifth, and eighth sub-pixels to generate the second set of sensing signals at the second time point, and
a third effective integration time is applied to the third, sixth, and ninth sub-pixels to generate the third set of sensing signals at the third time point.
3. The image sensor of claim 1, wherein the first, fourth, and seventh sub-pixels are included in different rows to each other, the second, fifth, and eighth sub-pixels are included in different rows to each other, and the third, sixth, and ninth sub-pixels are included in different rows to each other.
4. The image sensor of claim 3, wherein:
the first, sixth, and eighth sub-pixels are included in a first row of the matrix,
the second, fourth, and ninth sub-pixels are included in a second row of the matrix, and
the third, fifth, and seventh sub-pixels are included in a third row of the matrix.
5. The image sensor of claim 1, further comprising a binning circuit configured to perform an averaging operation for the first set of sensing signals, the second set of sensing signals, and the third set of sensing signals, respectively.
6. The image sensor of claim 1, wherein the plurality of unit color pixels includes first to fourth unit color pixels arranged in a 2 by 2 structure to constitute a first unit pixel group.
7. The image sensor of claim 6, wherein:
the plurality of unit color pixels further includes fifth to eighth unit color pixels arranged in a 2 by 2 structure to constitute a second unit pixel group, and
the fifth to eighth unit color pixels correspond to the first to fourth unit color pixels, respectively.
8. The image sensor of claim 7, further comprising a binning circuit configured to perform an averaging operation for each pair of corresponding unit color pixels in the first and second unit pixel groups.
9. The image sensor of claim 8, wherein the first unit pixel group is adjacent to the second unit pixel group in a row direction or a column direction.
10. The image sensor of claim 1, wherein:
the first to third sub-pixels share a first read-out circuit, connected to the first charge detection node, including a first reset transistor, a first drive transistor, and a first selection transistor,
the fourth to sixth sub-pixels share a second read-out circuit, connected to the second charge detection node, including a second reset transistor, a second drive transistor, and a second selection transistor, and
the seventh to ninth sub-pixels share a third read-out circuit, connected to the third charge detection node, including a third reset transistor, a third drive transistor, and a third selection transistor.
11. The image sensor of claim 10, wherein:
the first to third read-out circuits are connected to first to third column lines, respectively,
the first, fourth, and seventh sub-pixels transmit the first set of sensing signals through the first to third column lines, respectively, in response to the first to third selection transistors being turned on after the first time point,
the second, fifth, and eighth sub-pixels transmit the second set of sensing signals through the first to third column lines, respectively, in response to the first to third selection transistors being turned on after the second time point, and
the third, sixth, and ninth sub-pixels transmit the third set of sensing signals through the first to third column lines, respectively, in response to the first to third selection transistors being turned on after the third time point.
12. The image sensor of claim 11, further comprising a binning circuit connected to the first to third column lines, wherein the binning circuit is configured to:
perform an averaging operation for the first set of sensing signals transmitted from the first to third column lines in response to the first to third selection transistors being turned on after the first time point,
perform an averaging operation for the second set of sensing signals transmitted from the first to third column lines in response to the first to third selection transistors being turned on after the second time point, and
perform an averaging operation for the third set of sensing signals transmitted from the first to third column lines in response to the first to third selection transistors being turned on after the third time point.
13. An image sensor, comprising:
a first unit pixel including first to third sub-pixels sharing a first charge detection node;
a second unit pixel including fourth to sixth sub-pixels sharing a second charge detection node;
a third unit pixel including seventh to ninth sub-pixels sharing a third charge detection node; and
an analog-to-digital converter, wherein:
the first, fourth, and seventh sub-pixels are configured to have applied thereto a first effective integration time to generate a first set of sensing signals at a first time point, the second, fifth, and eighth sub-pixels are configured to have applied thereto a second effective integration time to generate a second set of sensing signals at a second time point, and the third, sixth, and ninth sub-pixels are configured to have applied thereto a third effective integration time to generate a third set of sensing signals at a third time point,
the first to third time points are different from each other, and
the analog-to-digital converter is configured to perform an averaging operation for the first set of sensing signals, the second set of sensing signals, and the third set of sensing signals, respectively.
14. The image sensor of claim 13, wherein the first to third effective integration times are different from each other.
15. The image sensor of claim 14, wherein:
the first effective integration time is further applied to the fourth and seventh sub-pixels,
the second effective integration time is further applied to the fifth and eighth sub-pixels, and
the third effective integration time is further applied to the sixth and ninth sub-pixels.
16. The image sensor of claim 12, wherein the first to third unit pixels constitute a unit color pixel corresponding to a single color.
17. An image sensor, comprising:
a row decoder configured to generate first and second transmission signals that transition to a high level at a first time point, and to generate third and fourth transmission signals that transition to a high level at a second time point;
a first unit pixel including first and third photoelectric conversion elements connected to a first charge detection node through first and third transmission transistors, respectively, the first and third transmission transistors operating in response to the first and third transmission signals, respectively;
a second unit pixel including second and fourth photoelectric conversion elements connected to a second charge detection node through second and fourth transmission transistors, respectively, the second and fourth transmission transistors operating in response to the second and fourth transmission signals, respectively; and
an analog-to-digital converter configured to:
average a first set of sensing signals read out from the first and second charge detection nodes according to the transition at the first time point, and
average a second set of sensing signals read out from the first and second charge detection nodes according to the transition at the second time point.
18. The image sensor of claim 17, wherein:
the first and third photoelectric conversion elements integrate photoelectrons during a first effective integration time, and
the second and fourth photoelectric conversion elements integrate photoelectrons during a second effective integration time, which is shorter than the first effective integration time.
19. The image sensor of claim 17, wherein the analog-to-digital converter is connected to the first charge detection node through a first read-out circuit and a first column line, and connected to the second charge detection node through a second read-out circuit and a second column line.
20. The image sensor of claim 17, wherein the first and second unit pixels constitute a unit color pixel corresponding to a single color.
US17/715,161 2019-07-01 2022-04-07 Image sensor Abandoned US20220232179A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/715,161 US20220232179A1 (en) 2019-07-01 2022-04-07 Image sensor

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
KR1020190078901A KR20210002966A (en) 2019-07-01 2019-07-01 Image sensor and driving method thereof
KR10-2019-0078901 2019-07-01
US16/745,508 US11336845B2 (en) 2019-07-01 2020-01-17 Image sensor and driving method thereof
US17/715,161 US20220232179A1 (en) 2019-07-01 2022-04-07 Image sensor

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/745,508 Continuation US11336845B2 (en) 2019-07-01 2020-01-17 Image sensor and driving method thereof

Publications (1)

Publication Number Publication Date
US20220232179A1 true US20220232179A1 (en) 2022-07-21

Family

ID=73919541

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/745,508 Active 2040-07-31 US11336845B2 (en) 2019-07-01 2020-01-17 Image sensor and driving method thereof
US17/715,161 Abandoned US20220232179A1 (en) 2019-07-01 2022-04-07 Image sensor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/745,508 Active 2040-07-31 US11336845B2 (en) 2019-07-01 2020-01-17 Image sensor and driving method thereof

Country Status (4)

Country Link
US (2) US11336845B2 (en)
KR (1) KR20210002966A (en)
CN (1) CN112188124A (en)
DE (1) DE102020105687A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210002966A (en) * 2019-07-01 2021-01-11 삼성전자주식회사 Image sensor and driving method thereof

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11336845B2 (en) * 2019-07-01 2022-05-17 Samsung Electronics Co., Ltd. Image sensor and driving method thereof

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7084905B1 (en) * 2000-02-23 2006-08-01 The Trustees Of Columbia University In The City Of New York Method and apparatus for obtaining high dynamic range images
JP4449936B2 (en) * 2006-03-31 2010-04-14 ソニー株式会社 Imaging apparatus, camera system, and driving method thereof
KR100830587B1 (en) * 2007-01-10 2008-05-21 삼성전자주식회사 Image sensor and method of displaying a image using the same
US7855740B2 (en) * 2007-07-20 2010-12-21 Eastman Kodak Company Multiple component readout of image sensor
JP5484548B2 (en) * 2010-04-08 2014-05-07 キヤノン株式会社 Image processing apparatus and control method thereof
FR2970598B1 (en) 2011-01-17 2013-08-16 Commissariat Energie Atomique IMAGING DEVICE WITH A HIGH DYNAMIC RANGE
US9191556B2 (en) * 2011-05-19 2015-11-17 Foveon, Inc. Imaging array having photodiodes with different light sensitivities and associated image restoration methods
JP6120508B2 (en) * 2011-10-03 2017-04-26 キヤノン株式会社 Imaging device and imaging apparatus
US9635287B2 (en) * 2011-10-11 2017-04-25 Raytheon Company Method and apparatus for integrated sensor to provide higher resolution, lower frame rate and lower resolution, higher frame rate imagery simultaneously
US9304301B2 (en) * 2012-12-26 2016-04-05 GM Global Technology Operations LLC Camera hardware design for dynamic rearview mirror
WO2014138697A1 (en) * 2013-03-08 2014-09-12 Pelican Imaging Corporation Systems and methods for high dynamic range imaging using array cameras
US9571763B2 (en) * 2014-01-10 2017-02-14 Omnivision Technologies, Inc. Split pixel high dynamic range sensor
DE102014201181A1 (en) * 2014-01-23 2015-07-23 Robert Bosch Gmbh Camera system, in particular for a vehicle, and method for determining image information of a time-pulsed signal source
US9888198B2 (en) * 2014-06-03 2018-02-06 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with sub-pixel resolution capabilities
US20150363912A1 (en) * 2014-06-12 2015-12-17 Samsung Electronics Co., Ltd. Rgbw demosaic method by combining rgb chrominance with w luminance
US20160037043A1 (en) * 2014-08-01 2016-02-04 Omnivision Technologies, Inc. High dynamic range (hdr) images free of motion artifacts
TWI552594B (en) * 2014-10-27 2016-10-01 聯詠科技股份有限公司 Color filter array for image sensing device and manufacturing method thereof
US9774801B2 (en) 2014-12-05 2017-09-26 Qualcomm Incorporated Solid state image sensor with enhanced charge capacity and dynamic range
US9467633B2 (en) * 2015-02-27 2016-10-11 Semiconductor Components Industries, Llc High dynamic range imaging systems having differential photodiode exposures
US9749556B2 (en) * 2015-03-24 2017-08-29 Semiconductor Components Industries, Llc Imaging systems having image sensor pixel arrays with phase detection capabilities
KR102382183B1 (en) * 2015-05-20 2022-04-01 삼성전자주식회사 Image Sensor For Improving Signal-to-Noise Ratio and Random Noise, and Image Processing System Including The Same
US9819889B2 (en) 2015-08-07 2017-11-14 Omnivision Technologies, Inc. Method and system to implement a stacked chip high dynamic range image sensor
CN208093559U (en) 2015-09-17 2018-11-13 半导体元件工业有限责任公司 image pixel
US10666881B2 (en) 2015-09-24 2020-05-26 Sony Semiconductor Solutions Corporation Solid-state image sensor and electronic device
JPWO2017077775A1 (en) * 2015-11-05 2018-08-23 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device, imaging device, and electronic device
EP3169071B1 (en) 2015-11-16 2020-01-29 InterDigital VC Holdings, Inc. Backward-compatible encoding of a hdr picture
JP6733159B2 (en) * 2015-12-01 2020-07-29 株式会社ニコン Imaging device and imaging device
JP6785429B2 (en) * 2015-12-03 2020-11-18 パナソニックIpマネジメント株式会社 Imaging device
US11201186B2 (en) 2016-01-20 2021-12-14 Sony Corporation Solid-state imaging device, driving method therefor, and electronic apparatus
CA3017935C (en) * 2016-03-16 2020-04-21 BAE Systems Imaging Solutions Inc. High dynamic range imaging sensor array
US10044960B2 (en) * 2016-05-25 2018-08-07 Omnivision Technologies, Inc. Systems and methods for detecting light-emitting diode without flickering
US10033949B2 (en) * 2016-06-16 2018-07-24 Semiconductor Components Industries, Llc Imaging systems with high dynamic range and phase detection pixels
US10306191B2 (en) * 2016-11-29 2019-05-28 Cista System Corp. System and method for high dynamic range image sensing
KR20180074368A (en) * 2016-12-23 2018-07-03 삼성전자주식회사 Method for Processing Image and the Electronic Device supporting the same
KR20180079519A (en) * 2016-12-30 2018-07-11 삼성전자주식회사 Image sensor
JP2018138139A (en) 2017-02-24 2018-09-06 ソニー・オリンパスメディカルソリューションズ株式会社 Medical imaging device and medical observation system
US10593712B2 (en) * 2017-08-23 2020-03-17 Semiconductor Components Industries, Llc Image sensors with high dynamic range and infrared imaging toroidal pixels
US10313613B2 (en) * 2017-10-24 2019-06-04 Semiconductor Components Industries, Llc High dynamic range image sensors with flicker and fixed pattern noise mitigation
US10638055B2 (en) * 2018-01-15 2020-04-28 Qualcomm Incorporated Aperture simulation
CN108270977A (en) * 2018-03-06 2018-07-10 广东欧珀移动通信有限公司 Control method and device, imaging device, computer equipment and readable storage medium storing program for executing
JP7146424B2 (en) * 2018-03-19 2022-10-04 キヤノン株式会社 Photoelectric conversion device and imaging system
US10447951B1 (en) * 2018-04-11 2019-10-15 Qualcomm Incorporated Dynamic range estimation with fast and slow sensor pixels

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11336845B2 (en) * 2019-07-01 2022-05-17 Samsung Electronics Co., Ltd. Image sensor and driving method thereof

Also Published As

Publication number Publication date
CN112188124A (en) 2021-01-05
KR20210002966A (en) 2021-01-11
US20210006761A1 (en) 2021-01-07
US11336845B2 (en) 2022-05-17
DE102020105687A1 (en) 2021-01-07

Similar Documents

Publication Publication Date Title
US11089253B2 (en) Image sensor with controllable conversion gain
US20180352200A1 (en) Solid-state imaging device, driving method, and electronic device
US8537241B2 (en) Image sensor with sensitivity control and sensitivity based wide dynamic range
JP5355079B2 (en) CMOS image sensor pixel with selective binning mechanism
US9438828B2 (en) Photoelectric conversion apparatus and imaging system using the same
EP1271930B1 (en) Image sensing apparatus capable of outputting image with converted resolution, its control method, and image sensing system
US8130302B2 (en) Methods and apparatus providing selective binning of pixel circuits
US9241117B2 (en) Image pickup apparatus
JP4611296B2 (en) Charge binning image sensor
JP4630901B2 (en) Rod and cone response sensors
US10785432B2 (en) Image sensor
US20160014304A1 (en) Solid-state imaging device
KR20180079519A (en) Image sensor
EP1569278A2 (en) Amplifying solid-state image pickup device
KR100920166B1 (en) Amplification type solid-state image pickup device
US9001240B2 (en) Common element pixel architecture (CEPA) for fast speed readout
US20220232179A1 (en) Image sensor
KR101580178B1 (en) An image sensor including the same
KR101585978B1 (en) A image sensor
JP4252247B2 (en) CMOS image sensor that can increase sensitivity
US7969492B2 (en) Image pickup apparatus
US9006631B2 (en) Image sensor and row averaging method for image sensor
US11025851B2 (en) Fast image sensor with pixel binning
US9762839B2 (en) Image capturing apparatus, image capturing system, and method for driving image capturing apparatus
EP1874044B1 (en) Solid state imaging device

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION