US20080211943A1 - Solid-state image pickup device - Google Patents

Solid-state image pickup device

Info

Publication number
US20080211943A1
US20080211943A1
Authority
US
United States
Prior art keywords
signal
pixel
color filters
solid
image pickup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/967,585
Other versions
US7911507B2
Inventor
Yoshitaka Egawa
Hiroto Honda
Yoshinori Iida
Goh Itoh
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Assigned to KABUSHIKI KAISHA TOSHIBA. Assignment of assignors interest (see document for details). Assignors: HONDA, HIROTO; IIDA, YOSHINORI; ITOH, GOH; EGAWA, YOSHITAKA
Publication of US20080211943A1
Application granted
Publication of US7911507B2
Legal status: Expired - Fee Related
Adjusted expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/135 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H04N25/58 Control of the dynamic range involving two or more exposures
    • H04N25/581 Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N25/583 Control of the dynamic range involving two or more exposures acquired simultaneously with different integration times
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H04N25/58 Control of the dynamic range involving two or more exposures
    • H04N25/587 Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields
    • H04N25/589 Control of the dynamic range involving two or more exposures acquired sequentially, e.g. using the combination of odd and even image fields with different integration times, e.g. short and long exposures

Definitions

  • FIG. 2 is a waveform diagram showing the operation timing of the CMOS image sensor shown in FIG. 1.
  • The amplitude of the read pulse READ is controlled by the output signal VREAD of the pulse amplitude control circuit 34.
  • The storage time TL can be controlled by the ES register 23 in units of 1 H.
  • The storage time TH can be controlled by the WD register 24 in units of 1 H.
  • Control can be performed in units of less than 1 H by changing the input pulse position of the pulse selector circuit 21.
  • The pulse signals RESETn, READn, ADRESn are supplied to the pixel unit 11 in synchronization with a horizontal synchronizing pulse HP, thereby reading the signal charge converted photoelectrically and accumulated by the photodiode PD.
  • The amplitude of the read pulse READ at this time is set to the low level Vm.
  • The signal charge read in the first read operation is discharged in such a manner that the read pulse READ of the low level Vm is input at time t2, in the middle of the storage time 520 H, and a part of the signal charge in the photodiode PD is read out.
  • The signal accumulated again in the period between time t2 and time t4 is read from the photodiode PD at time t4.
  • The amplitude of the reference waveform is set to an intermediate level and reading is done.
  • The intermediate level is adjusted automatically in the image sensor so that the light-shielded pixel (OB) section of the pixel unit 11 is at a 64 LSB level.
  • The signal READn is turned on, thereby outputting the signal.
  • A triangular waveform is generated as a reference waveform in a 0.5-H period, the first half of the horizontal scanning period, thereby performing 10-bit AD conversion.
  • The AD-converted signal (digital data) is held in the latch circuit 14. After the AD conversion has been completed, the AD-converted signal is stored in the line memory MSTH.
  • The pulse signals RESETn, READn, ADRESn are supplied to the pixel unit 11 after 0.5 H has elapsed since the first read operation, thereby reading the signal charge converted photoelectrically and accumulated by the photodiode PD.
  • The amplitude of the read pulse READ at this time is set to the high level Vn.
  • The signal charge left in the photodiode PD is read by inputting the pulse signals READn and ADRESn without applying the pulse signal RESETn.
  • The signal at time t4 is used as the reset level of the sensing section.
  • The read-out signal is added to the STH signal stored in the sensing section after the pulse signal READn is turned on, and the resulting signal is output.
  • A triangular waveform is generated as a reference waveform in a 0.5-H period, the last half of the horizontal scanning period, thereby performing 10-bit AD conversion.
  • The AD-converted signal is held in the latch circuit 14.
  • The AD-converted signal is stored in the line memory MSTL. In this way, the signals (digital data) stored in the line memories MSTH, MSTL are supplied as data OUT0 to OUT9 simultaneously to the linear conversion circuit 31 in the next one horizontal scanning period, thereby processing the signal pixel by pixel.
  • FIGS. 3A, 3B, and 3C are photoelectric conversion characteristic diagrams showing the operation of the linear conversion circuit 31 shown in FIG. 1.
  • FIG. 3A shows the signal STH.
  • FIG. 3B shows the signal STL.
  • FIG. 3C shows the signal SF.
  • The abscissa axis indicates the quantity of light.
  • The ordinate axis indicates the digital output level.
  • The W, R, G, B signals rise at the same inclination as that of the signal STL of FIG. 3B up to an output level of Knee1.
  • From the output level Knee1 on, they are output in such a manner that they are suppressed to 1/4 the inclination, i.e., the storage time ratio TH/TL.
  • A saturation level occurs within a 10-bit output.
  • The amplifying circuit GA, whose gain is determined by the storage time ratio, quadruples the signal STH. When the gain is quadrupled in this way, the output level Knee1 is shifted to a quadrupled output level of Knee2.
  • The inclination is also quadrupled, giving W×4, G×4, R×4, and B×4.
  • The inclination is the same as that up to the output level Knee2 of the signal STL shown in FIG. 3B. Since the sensing unit is not reset in the period between time t4 and time t5, as shown in FIG. 2, the signal STH is added to the signal STL at and above Vm on the ordinate axis of FIG. 3B, and the resulting signal is output.
  • The saturation level of the signal STL is 10 bits, at which the signal is clipped by AD conversion.
  • The signal STL is output up to the output level Knee2.
  • In this way, the linear conversion circuit 31 processes the signals STL and STH, which differ in storage time, so as to make their storage times virtually equal (an illustrative sketch of this combining step is given at the end of this section).
  • When the quantity of light was set at point P1, where the G signal was saturated, the W signal was saturated at a light quantity of 0.5.
  • The W signal is extended at a light quantity of 1 to an AD-conversion 11-bit level, twice that of the G signal, in a wide dynamic range operation (WDR). That is, using the wide dynamic range operation enables the W signal to be set to a light quantity of 1 as in the past.
  • Conventionally, the light quantity of the G signal was limited to the same level as that of noise at the smallest subject light quantity.
  • The sensitivity was doubled with respect to the G signal, with the result that the smallest subject light quantity was improved to a light quantity of 1/2 (from observation results).
  • The saturation level of the W signal can be improved further. This makes it possible to shift point P1 to a light quantity of 2 or 4 on the right side. That is, the light quantity reaching 10 bits can be shifted to 2 or 4, which further improves the dynamic range. At this time, the output of the image sensor shown in FIG. 1 is made larger than 10 bits, i.e., 12 bits or 14 bits.
  • FIG. 4 shows examples of the processing at the signal generator circuit 32 shown in FIG. 1.
  • The signal generator circuit 32 includes a color generator circuit, a color arrangement conversion circuit, and a line memory.
  • G in the R line of an ordinary RGB Bayer arrangement is replaced with the W signal. Such a 2×2 pattern is input repeatedly.
  • FIG. 5(a) schematically shows a conventional subtraction method of generating RGB signals from the W signal.
  • FIG. 5(b) schematically shows the ratio multiplying method of generating RGB signals from the W signal in the first embodiment.
  • When a G signal was generated from the W signal by the conventional subtraction method, it was calculated by subtracting "subtraction coefficient × (R+B)" from the W signal, as shown by (a) in FIG. 5, which resulted in high random noise.
  • In the first embodiment, a G signal is calculated by multiplying the W signal by the ratio multiplying coefficient G/W, as shown by (b) in FIG. 5, which enables random noise to be suppressed.
  • Gw11 = W11 − Kg × ((R11 + R12)/2 + (B11 + B21)/2)
  • Rw11 = W11 − Kr × ((G11 + G12 + G21 + G22)/4 + (B11 + B21)/2)
  • Bw11 = W11 − Kb × ((G11 + G12 + G21 + G22)/4 + (R11 + R12)/2)
  • Here, Kg, Kr, and Kb are coefficients for adjusting the amount of signal obtained from the spectroscopic characteristics.
  • FIG. 4 shows the ratio multiplying method of calculating a W (white) signal level and RGB signal levels from the 8 surrounding pixels and generating Rw, Gw, and Bw signals from pixel W11 on the basis of the ratio of RGB to W:
  • Wrgb11 = (G11 + G12 + G21 + G22)/4 + (R11 + R21)/2 + (B11 + B21)/2
  • Gw11 = W11 × ((G11 + G12 + G21 + G22)/4) / Wrgb11
  • Rw11 = W11 × ((R11 + R21)/2) / Wrgb11
  • Bw11 = W11 × ((B11 + B21)/2) / Wrgb11
  • FIG. 4 also shows the pixel array subjected to the RGB Bayer arrangement conversion at the color arrangement conversion circuit.
  • The color-generated Gw11 is used directly as pixel W11.
  • A method of processing each of G, R, and B will be described using the processing of pixels G22, R12, and B21 as an example.
  • For pixel G22, pieces of information on the four surrounding pixels Gw11, Gw12, Gw21, and Gw22 are added.
  • The S/N of pixel R12 is improved by adding information on Rw11 and Rw12 generated from the W pixels on both sides.
  • The S/N of pixel B21 is improved by adding information on Bw11 and Bw21 generated from the W pixels above and below pixel B21 (an illustrative sketch of the ratio multiplying and averaging computations is given at the end of this section).
  • G22w = ((Gw11 + Gw12 + Gw21 + Gw22)/4 + G22)/2
  • R12w = ((Rw11 + Rw12)/2 + R12)/2
  • B21w = ((Bw11 + Bw21)/2 + B21)/2
  • The signal generator circuit 32 outputs data DOUT0 to DOUT9 converted into a Bayer arrangement.
  • The S/N of the luminance signal Y was improved by about 4.5 dB at an ordinary light quantity.
  • Effective use of the W signal realized twice the sensitivity determined by a conventional G signal.
  • Since the W signal can be prevented from being saturated even when the W signal obtained from the high-sensitivity W pixels is used, the standard setting light quantity input to the pixel unit need not be shifted to the low-light-quantity side.
  • Since the RGB signals are obtained from the W signal using the ratio multiplying method, noise in the RGB signals can be reduced.
  • Since RGB resolution information can be increased, false color signals can be reduced.
  • Since the conversion of the output signal into the RGB Bayer arrangement enables a general-purpose signal processing IC to be used, products can be commercialized early.
  • The dynamic range can be extended, which makes it possible to realize an image sensor capable of covering a low to a high light quantity.
  • Since RGB signals can be extracted even if many W pixels are arranged, this produces the effect of apparently increasing the number of RGB pixels.
  • Next, a CMOS image sensor according to a second embodiment of the invention will be explained.
  • The same parts as those of the configuration of the first embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • The second embodiment is such that the dynamic range extending method of the first embodiment is modified.
  • FIG. 6 is a block diagram schematically showing the configuration of the CMOS image sensor according to the second embodiment.
  • In the second embodiment, the image sensor outputs the signal at twice the speed of the first embodiment (it may instead output at the same speed as in the first embodiment).
  • A PRE signal processing circuit 4, which includes a frame memory 41, combines two frames of data into one frame of signal. To obtain a small-light-quantity signal in a first frame, the storage time is made longer and signal S1st is output. The signal S1st is stored in the frame memory 41. In a second frame, the storage time is set to 1/4 of the storage time of the first frame and signal S2nd is output from the sensor.
  • The linear conversion circuit 42 of the PRE signal processing circuit 4 amplifies signal S2nd so that its gain is quadrupled. If signal S1st output from the frame memory 41 is saturated in 10 bits, signal S2nd is output as signal SF (an illustrative sketch of this two-frame combination is given at the end of this section). Thereafter, as in the first embodiment, 12-bit data DOUT0 to DOUT11 are output via the signal generator circuit 32 to a signal processing IC (not shown) in a subsequent stage.
  • FIGS. 7A, 7B, and 7C are photoelectric conversion characteristic diagrams showing the operation of the linear conversion circuit 42 shown in FIG. 6.
  • FIG. 7A shows signal S1st.
  • FIG. 7B shows signal S2nd.
  • FIG. 7C shows signal SF.
  • The abscissa axis indicates the quantity of light.
  • The ordinate axis indicates the digital output level.
  • In signal S1st shown in FIG. 7A, the G signal is saturated in 10 bits at a light quantity of 1. Since the W signal has twice the sensitivity of the G signal, it is saturated at a light quantity of 0.5.
  • In signal S2nd shown in FIG. 7B, since the storage time is set to 1/4, the W signal is saturated at a light quantity of 2. If the gain of the W signal is quadrupled, an 11-bit signal can be reproduced at a light quantity of 1.
  • Signal S1st is switched to the signal (W×4) obtained by quadrupling signal S2nd in gain at the 10-bit saturation level of 1023 LSB, which enables the W signal to be reproduced as a linear signal in up to 12 bits at a light quantity of 2. Shifting the standard light quantity from the conventional point P1 to point P2 makes it possible to extend the dynamic range to twice that of a conventional one.
  • As for the noise level (the lowest subject illuminance), point P3, previously determined by the G signal, can be shifted to point P4, determined by the W signal, which makes it possible to reduce the light quantity to 1/2.
  • This doubles the sensitivity of the pixel unit.
  • If the storage time in the second frame is made still smaller, to 1/8 or 1/16 of that of the first frame, the dynamic range can be quadrupled or octupled.
  • A third embodiment of the invention is such that a signal processing circuit is incorporated into the CMOS image sensor of the first embodiment to form a one-chip sensor.
  • The remaining configuration is the same as that of the first embodiment.
  • The same parts as those of the configuration of the first embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • FIG. 8 is a block diagram schematically showing the configuration of the CMOS image sensor according to the third embodiment.
  • A signal processing circuit 51 is added to the CMOS image sensor of FIG. 1, thereby producing a one-chip configuration.
  • The signal converted into an RGB Bayer arrangement at the signal generator circuit 32 is supplied to the signal processing circuit 51.
  • The signal processing circuit 51 carries out normal processes, including a white balance process, a color separation interpolating process, an edge enhancement process, a γ correction process, and a color tuning process using an RGB matrix.
  • RGB signals or YUV signals can be output as the outputs DOUT0 to DOUT7 of the CMOS image sensor.
  • A fourth embodiment of the invention is such that a signal processing circuit is added to the PRE signal processing circuit 4 of the second embodiment so as to produce a 2-chip configuration.
  • The remaining configuration is the same as that of the second embodiment.
  • The same parts as those of the configuration of the second embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • FIG. 9 is a block diagram schematically showing the configuration of the CMOS image sensor according to the fourth embodiment.
  • A signal processing circuit 51 is added to the PRE signal processing circuit 4 of the CMOS image sensor of FIG. 6.
  • The signal converted into an RGB Bayer arrangement at the signal generator circuit 32 is supplied to the signal processing circuit 51.
  • The signal processing circuit 51 carries out normal processes, including a white balance process, a color separation interpolating process, an edge enhancement process, a γ correction process, and a color tuning process using an RGB matrix.
  • RGB signals or YUV signals can be output as the outputs DOUT0 to DOUT7 of the CMOS image sensor.
  • The fifth embodiment is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 10 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the fifth embodiment.
  • Color filters are arranged using 4×4 pixels as a unit.
  • The R and B color signals, which require less resolution information, are reduced to 1/2, and the G signal is arranged above and below and on the right and left sides of each pixel.
  • Four W pixels are arranged equally in the remaining positions.
  • R pixels and B pixels are arranged diagonally at the four remaining pixels.
  • A 4×4 basic array including 8 G pixels, 4 W pixels, 2 R pixels, and 2 B pixels is repeated.
  • A ratio multiplying method is used for the processing at the color generator circuit, as in FIG. 4.
  • Part (b) of FIG. 10 shows a ratio multiplying method of calculating a W (white) signal level and RGB signal levels from the 8 pixels around, for example, pixel W11, and generating Rw, Gw, and Bw signals from pixel W11 on the basis of the ratio of RGB to W:
  • Wrgb11 = (G11 + G12 + G22 + G31)/4 + (R11 + R21)/2 + (B11 + B21)/2
  • Gw11 = W11 × ((G11 + G21 + G22 + G31)/4) / Wrgb11
  • Rw11 = W11 × ((R11 + R21)/2) / Wrgb11
  • Bw11 = W11 × ((B11 + B21)/2) / Wrgb11
  • FIG. 10 also shows the pixel array subjected to the RGB Bayer arrangement conversion at the color arrangement conversion circuit.
  • The color-generated Gw11 is used directly as pixel W11.
  • A method of processing each of G, R, and B will be described using the processing of pixels G22, R12, and B21 as an example.
  • For pixel G22, information on the left and right pixels Gw11 and Gw12 is added.
  • The S/N of pixel R21 is improved by adding information on Rw10, Rw11, Rw20, and Rw21 created from the W pixels at the four oblique positions.
  • The S/N of pixel B21 is improved by adding information on Bw11, Bw12, Bw21, and Bw22 generated from the W pixels at the four oblique positions.
  • G22w = ((Gw11 + Gw12)/2 + G22)/2
  • R12w = ((Rw10 + Rw11 + Rw20 + Rw21)/4 + R21)/2
  • B21w = ((Bw11 + Bw12 + Bw21 + Bw22)/4 + B21)/2
  • The S/N of the luminance signal Y was improved by about 3 dB at an ordinary light quantity.
  • Effective use of the W signal realized twice the sensitivity determined by a conventional G signal.
  • A sixth embodiment of the invention is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 11 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the sixth embodiment.
  • Color filters are arranged using 4×4 pixels as a unit.
  • The R and B color signals, which require less resolution information, are reduced to 1/2, and the W signal is arranged above and below and on the right and left sides of each pixel.
  • Four G pixels are arranged equally in the remaining positions.
  • R pixels and B pixels are arranged diagonally at the four remaining pixels.
  • A 4×4 basic array including 8 W pixels, 4 G pixels, 2 R pixels, and 2 B pixels is repeated.
  • A ratio multiplying method is used for the processing at the color generator circuit, as in FIG. 4.
  • Part (b) of FIG. 11 shows a ratio multiplying method of calculating a W (white) signal level and RGB signal levels from the 4 pixels around, for example, pixel W22, and generating Rw, Gw, and Bw signals from pixel W22 on the basis of the ratio of RGB to W:
  • Wrgb22 = (G11 + G12)/2 + R11 + B21
  • Gw22 = W22 × ((G11 + G12)/2) / Wrgb22
  • Rw22 = W22 × R11 / Wrgb22
  • Bw22 = W22 × B21 / Wrgb22
  • A seventh embodiment of the invention is such that the color filter array of the first embodiment is modified and four line memories are provided in the signal generator circuit 32, thereby improving the S/N by vertical five-line processing.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 12 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the seventh embodiment.
  • The color filter array is the same as that of FIG. 11 in the sixth embodiment.
  • A ratio multiplying method is used for the processing at the color generator circuit, as in FIG. 4.
  • Part (b) of FIG. 12 shows an example of calculating a W (white) signal level and RGB signal levels from 7 horizontal pixels and 5 vertical lines, focusing on pixel W32, and generating Rw, Gw, and Bw signals from pixel W32 on the basis of the ratio of RGB to W:
  • Wrgb32 = (G12 + G13 + G21 + G22 + G23 + G24 + G32 + G33)/8 + (R11 + R21/2 + R22/2)/2 + (B21 + B11/2 + B12/2)/2
  • In the seventh embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance. Moreover, since the G signals from the 8 pixels arranged around the target pixel are used, the S/N can be improved more than in the sixth embodiment.
  • An eighth embodiment of the invention is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 13 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the eighth embodiment.
  • Color filters are arranged using 4×4 pixels as a unit in such a manner that the square array of FIG. 10 is inclined at an angle of 45 degrees.
  • The R and B color signals, which require less resolution information, are reduced to 1/2, and the G signal is arranged above and below and on the right and left sides of each pixel.
  • Four W pixels are arranged equally in the remaining positions.
  • R pixels and B pixels are arranged diagonally at the four remaining pixels.
  • A 4×4 basic array including 8 G pixels, 4 W pixels, 2 R pixels, and 2 B pixels is repeated.
  • A ratio multiplying method is used for the processing at the color generator circuit, as in FIG. 4.
  • Part (b) of FIG. 13 shows an example of calculating a W (white) signal level and RGB signal levels from the 8 pixels around, for example, pixel W32, and generating Rw, Gw, and Bw signals from pixel W32 on the basis of the ratio of RGB to W:
  • Wrgb32 = (G33 + G34 + G43 + G44)/4 + (R12 + R22)/2 + (B22 + B23)/2
  • Gw32 = W32 × ((G33 + G34 + G43 + G44)/4) / Wrgb32
  • Rw32 = W32 × ((R12 + R22)/2) / Wrgb32
  • Bw32 = W32 × ((B22 + B23)/2) / Wrgb32
  • A ninth embodiment of the invention is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 14 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the ninth embodiment.
  • The color filter array of FIG. 12 is inclined at an angle of 45 degrees, using 4×4 pixels as a unit.
  • The R and B color signals, which require less resolution information, are reduced to 1/2, and the W signal is arranged above and below and on the right and left sides of each pixel.
  • Four G pixels are arranged equally in the remaining positions.
  • R pixels and B pixels are arranged diagonally at the four remaining pixels.
  • A 4×4 basic array including 8 W pixels, 4 G pixels, 2 R pixels, and 2 B pixels is repeated.
  • A ratio multiplying method is used for the processing at the color generator circuit, as in FIG. 4.
  • Part (b) of FIG. 14 shows an example of calculating a W (white) signal level and RGB signal levels from the 4 pixels around, for example, pixel W33, and generating Rw, Gw, and Bw signals from pixel W33 on the basis of the ratio of RGB to W:
  • Wrgb33 = (G32 + G41)/2 + R22 + B11
  • Gw33 = W33 × ((G32 + G41)/2) / Wrgb33
  • Rw33 = W33 × R22 / Wrgb33
  • Bw33 = W33 × B11 / Wrgb33
  • A tenth embodiment of the invention is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 15 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor according to the tenth embodiment.
  • The color filter array is the same as that of FIG. 14 in the ninth embodiment.
  • FIG. 15 shows an example of generating Rw, Gw, and Bw signals from the signal of pixel W33:
  • Wrgb33 = (G21 + G22 + G31 + G32 + G42 + G43 + G51 + G52)/8 + (R22 + R11/2 + R21/2)/2 + (B11 + B12/2 + B21/2)/2
  • In the tenth embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance. Moreover, since the G signals from the 8 pixels arranged around the target pixel are used, the S/N can be improved more than in the ninth embodiment.
  • The G signals used in the equations are illustrative and not restrictive; the G signal from another G pixel may be used.
  • FIG. 16 is a block diagram schematically showing the configuration of a CMOS image sensor according to the eleventh embodiment.
  • Next to the pixel unit 11, there are provided a pulse selector circuit (selector) 21, a signal read vertical register (VR register) 22, a storage time control vertical register (ESRGB register) 25, and a storage time control vertical register (ESW register) 26.
  • The ESRGB register 25 is a register which controls the storage time of the R, G, and B pixels.
  • The ESW register 26 is a register which controls the storage time of the W pixels.
  • Each row of color filters arranged in the pixel unit 11 is provided with a W line.
  • The storage time is controlled by a special electronic shutter so as to prevent the high-sensitivity W pixels from being saturated. Since the W signal has twice the sensitivity of the G signal, the storage time of the W signal is set to 1/2 of the storage time of the remaining RGB signals using the special electronic shutter. At a low light quantity, the lowest subject illuminance is improved to 1/2 by using the doubled sensitivity relative to the G signal, without applying the electronic shutter to the W pixels.
  • In the linear conversion circuit 52, when the electronic shutter operates only on the W pixels, only the signal from the W pixels is amplified by the storage time ratio at the gain circuit GA, and the result is switched back into the original array to produce the SF signal.
  • At a low light quantity, the linear conversion circuit 52 switches to the RGB signal side so that the W signal passes through. This switching operation enables the signal generator circuit 32 in a subsequent stage to be used without modification even at a low light quantity.
  • In the embodiments described above, the dynamic range extended mode has been used to prevent the output from the W pixel from being saturated, or only the light incident on the W pixel has been controlled by the electronic shutter. If neither of the two methods is applied, the output from the W pixel will be saturated at a light quantity of 0.5 or more in a standard image sensor, as shown in FIG. 7A. As a measure against this, an example of generating a W signal by estimating it from the pixels around the W pixel is shown in a twelfth embodiment of the invention (an illustrative sketch of this estimation is given at the end of this section):
  • W11 = kg × (G11 + G12 + G21 + G22)/4 + kr × (R11 + R12)/2 + kb × (B11 + B21)/2
  • The thirteenth embodiment is such that the color filter array of the first embodiment is modified.
  • The remaining configuration is the same as that of the first embodiment.
  • FIG. 17 shows a color filter array in the CMOS image sensor of the thirteenth embodiment.
  • This color filter array is such that 10 W pixels, 2 G pixels, 2 R pixels, and 2 B pixels are provided in a 4×4 basic array.
  • FIG. 18 shows another color filter array.
  • This color filter array is such that 12 W pixels, 2 G pixels, an R pixel, and a B pixel are provided in a 4×4 basic array.
  • With this arrangement, the number of G pixels, the number of R pixels, and the number of B pixels are virtually increased to 14, 13, and 13, respectively. Accordingly, since the color resolution is high, the number of false color signals is small. Moreover, since the color signal requires a smaller amount of information, the S/N can be improved by an adding process. As compared with the Bayer arrangement, the increase in the number of G, R, and B pixels improves the G signal by about 2 dB, the R signal by about 5 dB, and the B signal by about 5 dB.
  • While in the above embodiments the signal generator circuit 32 has generated RGB signals from the W signal of the W pixel and converted them into the RGB Bayer arrangement, and the signal processing circuit 51 has performed conventional signal processing as shown in FIGS. 8 and 9, the signal generator circuit 32 may be eliminated and the signal SF may be input directly to the signal processing circuit 51, thereby performing signal processing suitable for WRGB pixels.
  • A wide variety of color filter arrays can be considered, and the aforementioned embodiments can be applied to them, provided that the four colors of WRGB are basically included.
  • The W pixel saturated signal quantity control circuit, which is applied to the dynamic range extending method for increasing the saturated signal quantity, can also be applied to other various dynamic range extending methods.
  • As described above, the embodiments can provide a solid-state image pickup device capable of preventing the W signal obtained from W (white) pixels from being saturated and of improving the sensitivity and S/N by a signal process using the W signal.
  • The above embodiments may be not only practiced independently but also combined suitably.
  • The embodiments include inventions of different stages, and therefore various inventions can be extracted by suitably combining a plurality of the component elements disclosed in the embodiments.
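
The following is a minimal NumPy sketch of the combining step described for the linear conversion circuit 31: dark-level subtraction, gain by the storage-time ratio TL/TH, and selection of the larger signal. The function name, default values, and sample inputs are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def combine_exposures(stl, sth, t_long=520, t_short=130, dark_level=64):
    """Combine long-exposure STL and short-exposure STH into one signal SF
    that is linear with respect to the quantity of light (illustrative model)."""
    sa = np.clip(np.asarray(stl, dtype=float) - dark_level, 0, None)  # STL minus dark level (signal SA)
    sb = np.clip(np.asarray(sth, dtype=float) - dark_level, 0, None)  # STH minus dark level (signal SB)
    sc = sb * (t_long / t_short)   # gain circuit GA: slope equalized by the ratio TL/TH (signal SC)
    return np.maximum(sa, sc)      # comparator A and switch 31-3: keep the larger signal

# Toy values: the long exposure saturates at 1023 LSB and hands over to the gained-up short exposure.
print(combine_exposures([100, 500, 1023, 1023], [73, 173, 320, 600]))
```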
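
Below is a small illustrative Python sketch of the ratio-multiplying color generation (FIG. 4 and FIG. 5(b)) and of the Bayer arrangement conversion averaging. The helper names and sample pixel values are assumptions for illustration; the patent specifies only the equations themselves.

```python
def ratio_multiply(w, g_avg, r_avg, b_avg):
    """Ratio-multiplying color generation at a W pixel: scale W by each colour's
    share of the white level Wrgb estimated from the surrounding pixels."""
    wrgb = g_avg + r_avg + b_avg
    return (w * g_avg / wrgb, w * r_avg / wrgb, w * b_avg / wrgb)  # (Gw, Rw, Bw)

def bayer_average(neighbor_values, own_value):
    """Bayer arrangement conversion step: average the values generated at the
    surrounding W pixels with the pixel's own colour value (e.g. G22w)."""
    return (sum(neighbor_values) / len(neighbor_values) + own_value) / 2

# Example for a W pixel whose neighbour averages are G = 400, R = 180, B = 140 LSB.
gw, rw, bw = ratio_multiply(w=700, g_avg=400, r_avg=180, b_avg=140)
print(round(gw), round(rw), round(bw))
# Example of the conversion step for a G pixel surrounded by four Gw values.
print(bayer_average([388, 392, 385, 390], 380))
```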
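
This is a hedged sketch of the second embodiment's two-frame combination: the long-storage frame S1st is used until it reaches the 10-bit saturation level, after which the short-storage frame S2nd, multiplied by the storage-time ratio, takes over. Names and numeric values are illustrative assumptions.

```python
import numpy as np

def combine_frames(s1st, s2nd, storage_ratio=4, sat_level=1023):
    """Use S1st until it saturates in 10 bits, then switch to S2nd multiplied by
    the storage-time ratio (4x for a 1/4 storage time in the second frame)."""
    s1 = np.asarray(s1st, dtype=float)
    s2 = np.asarray(s2nd, dtype=float) * storage_ratio
    return np.where(s1 >= sat_level, s2, s1)

print(combine_frames([300, 1023, 1023], [80, 300, 900]))  # -> [ 300. 1200. 3600.]
```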
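
Finally, a sketch of the twelfth embodiment's estimation of a saturated W pixel from its surrounding G, R, and B pixels. The coefficient values for kg, kr, and kb are placeholders; the patent defines them only as coefficients derived from the spectral characteristics.

```python
def estimate_w(g_vals, r_vals, b_vals, kg=2.0, kr=2.0, kb=2.0):
    """Estimate W11 = kg*avg(G) + kr*avg(R) + kb*avg(B) from the surrounding pixels
    (coefficient values here are placeholder assumptions)."""
    avg = lambda xs: sum(xs) / len(xs)
    return kg * avg(g_vals) + kr * avg(r_vals) + kb * avg(b_vals)

# Four G, two R and two B neighbours around a saturated W pixel.
print(estimate_w([400, 410, 395, 405], [180, 176], [140, 150]))
```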

Abstract

In a pixel unit, cells are arranged in rows and columns two-dimensionally. Each of the cells accumulates signal charge obtained by photoelectrically converting light incident on photoelectric conversion section and outputs a voltage corresponding to the accumulated signal charge. On the cells, W, R, G, and B color filters are provided. Analog signals output from the W pixel, R pixel, G pixel, and B pixel are converted into digital signals by an analog/digital converter circuit, which outputs a W signal, an R signal, a G signal, and a B signal separately. A W signal saturated signal quantity is controlled by a saturated signal quantity control circuit. Then, a signal generator circuit corrects the R signal, the G signal, and the B signal using the W signal, the R signal, the G signal, and B signal output from the analog/digital converter circuit and outputs the corrected R, G, and B signals.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is based upon and claims the benefit of priority from prior Japanese Patent Application No. 2007-000718, filed Jan. 5, 2007, the entire contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • This invention relates to a solid-state image pickup device, and more particularly to a CMOS image sensor used for a mobile-phone with an image sensor, a digital camera, or a video camera.
  • 2. Description of the Related Art
  • A wide variety of arrangements of color filters used in image sensors, ranging from complementary color filters to primary-color Bayer arrangements, have been proposed together with methods of processing the signals. With the further microfabrication of pixels in recent years, image sensors with pixels of the order of 2 μm have been put to practical use and the development of 1.75-μm pixels and 1.4-μm pixels is now in progress. In microscopic pixels of the order of 2 μm or less, since the quantity of incident light decreases significantly, deterioration by noise is liable to take place. In this connection, as a method of improving the sensitivity of microscopic pixels, an image sensor using a white (W) color filter has been proposed (e.g., refer to Jpn. Pat. Appln. KOKAI Publication No. 8-23542, Jpn. Pat. Appln. KOKAI Publication No. 2003-318375, Jpn. Pat. Appln. KOKAI Publication No. 2004-304706, or Jpn. Pat. Appln. KOKAI Publication No. 2005-295381).
  • However, since white (W) pixels are highly sensitive, the white (W) signals obtained from the W pixels get saturated easily. In addition to this problem, since the Y signal (luminance signal) is equal to the W signal, there is a problem with color reproducibility. Normally, the color reproducibility of the RGB signals produced from the YUV signals becomes worse unless the Y signal is generated at the ratio Y = 0.59G + 0.3R + 0.11B. Furthermore, in the above patent documents, an effective signal process using W pixels has not been carried out.
  • BRIEF SUMMARY OF THE INVENTION
  • According to an aspect of the invention, there is provided a solid-state image pickup device comprising: a pixel unit in which cells are arranged in rows and columns two-dimensionally on a semiconductor substrate, each of the cells having photoelectric conversion section, accumulating signal charge obtained by photoelectrically converting light incident on the photoelectric conversion section, and outputting a voltage corresponding to the accumulated signal charge; W (white), R (red), G (green), and B (blue) color filters provided on the cells in the pixel unit; an analog/digital converter circuit which converts analog signals output from a W pixel, an R pixel, a G pixel, and a B pixel on whose cells the W (white), R (red), G (green), and B (blue) color filters are provided respectively into digital signals, and outputs a W signal, an R signal, a G signal, and a B signal separately; a saturated signal quantity control circuit which controls the saturated signal quantity of the W pixel; and a signal generator circuit which corrects and generates the R signal, the G signal, and the B signal using the W signal, the R signal, the G signal, and B signal output from the analog/digital converter circuit.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 is a block diagram schematically showing the configuration of a CMOS image sensor according to a first embodiment of the invention;
  • FIG. 2 is a waveform diagram showing the operation timing of the CMOS image sensor shown in FIG. 1;
  • FIGS. 3A, 3B, and 3C are photoelectric conversion characteristic diagrams showing the operation of a linear conversion circuit 31 shown in FIG. 1;
  • FIG. 4 shows an example of the processing at a signal generator circuit 32 shown in FIG. 1;
  • FIG. 5 schematically shows a method of generating RGB signals from a conventional W signal and a method of generating RGB signals from a W signal of the first embodiment;
  • FIG. 6 is a block diagram schematically showing the configuration of a CMOS image sensor according to a second embodiment of the invention;
  • FIGS. 7A, 7B, and 7C are photoelectric conversion characteristic diagrams showing the operation of a linear conversion circuit 42 shown in FIG. 6;
  • FIG. 8 is a block diagram schematically showing the configuration of a CMOS image sensor according to a third embodiment of the invention;
  • FIG. 9 is a block diagram schematically showing the configuration of a CMOS image sensor according to a fourth embodiment of the invention;
  • FIG. 10 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to a fifth embodiment of the invention;
  • FIG. 11 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to a sixth embodiment of the invention;
  • FIG. 12 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to a seventh embodiment of the invention;
  • FIG. 13 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to an eighth embodiment of the invention;
  • FIG. 14 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to a ninth embodiment of the invention;
  • FIG. 15 shows a color filter array and an example of the processing at the signal generator circuit in a CMOS image sensor according to a tenth embodiment of the invention;
  • FIG. 16 is a block diagram schematically showing the configuration of a CMOS image sensor according to an eleventh embodiment of the invention;
  • FIG. 17 shows a color filter array in a CMOS image sensor according to a thirteenth embodiment of the invention; and
  • FIG. 18 shows another color filter array in a CMOS image sensor according to the thirteenth embodiment.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Hereinafter, referring to the accompanying drawings, an amplification CMOS image sensor functioning as a solid-state image pickup device according to an embodiment of the invention will be explained. In explanation, the same parts are indicated by the same reference numerals throughout the drawings.
  • First Embodiment
  • First, a CMOS image sensor according to a first embodiment of the invention will be explained.
  • FIG. 1 is a block diagram schematically showing the configuration of a CMOS image sensor according to the first embodiment.
  • A sensor core unit 1 includes a pixel unit 11, a column noise cancel circuit (CDS) 12, a column analog digital converter (ADC) 13, a latch circuit 14, two line memories (a first line memory MSTH and a second line memory MSTL), and a horizontal shift register 15.
  • Light is caused to pass through a lens 2 and enter the pixel unit 11, which then generates charge corresponding to the quantity of incident light by photoelectric conversion. In the pixel unit 11, cells (pixels) CE are arranged in rows and columns two-dimensionally. Each of the cells is composed of four transistors Ta, Tb, Tc, Td and photoelectric conversion section, such as a photodiode PD. Pulse signals ADRESn, RESETn, READn are supplied to each cell. Source follower circuit load transistors TLM are arranged horizontally in the lower part of the pixel unit 11. One end of the current path of each of the load transistors TLM is connected to a vertical signal line VLIN. The other end of the current path is connected to the ground point. A color filter formed in the upper part of a photodiode PD is a 4-color filter obtained by substituting W (white) for one of the Gs in the Bayer arrangement using 3 colors: G (green), R (red), G (green), and B (blue) in conventional 2×2 pixels. W is realized by enabling light to pass through in the entire wavelength region without using a color filter.
  • An analog signal corresponding to the signal charge generated at the pixel unit 11 is supplied via the CDS 12 to the ADC 13. The ADC 13 converts the analog signal into a digital signal, which is latched in the latch circuit 14. The digital signal latched in the latch circuit 14 is accumulated in the line memories MSTH, MSTL. The accumulated signal is selected and read by the horizontal shift register 15 sequentially. Then, the signal is read from the sensor core unit 1 sequentially. Specifically, the line memories MSTH, MSTL store two signals STL (long-time storage) and STH (short-time storage) differing in storage time. The digital signals OUT0 to OUT9 read from the line memories MSTH, MSTL are supplied to a linear conversion circuit 31 which performs conversion so as to make a signal linear with respect to the quantity of light. The linear conversion circuit 31 combines the two signals STL and STH into a signal SF. The combined signal SF is supplied to a signal generator circuit 32 in a subsequent stage. The signal generator circuit 32 generates RGB signals from the W signal, further converts the signals into an RGB Bayer arrangement, and outputs RGB 10-bit digital outputs DOUT0 to DOUT9 to a signal processing IC in a subsequent stage (not shown).
  • Next to the pixel unit 11, there are provided a pulse selector circuit (selector) 21, a signal read vertical register (VR register) 22, a storage time control vertical register (ES register, long storage time control register) 23, and a storage time control vertical register (WD register, short storage time control register) 24.
  • The reading of the pixel unit 11 and the control of the column noise cancel circuit (CDS) 12 are performed by pulse signals S1 to S4, READP, IRESET/IADRES/IREAD, VRR, ESR, WDR output from a timing generator (TG) 33. That is, the timing generator (TG) 33 functions as a control circuit.
  • The pulse signals S2 to S4 are supplied to the CDS 12. The pulse signal READP is supplied to a pulse amplitude control circuit 34. The output signal VREAD from the pulse amplitude control circuit 34 is supplied to the pulse selector circuit 21. Also supplied to the pulse selector circuit 21 are the pulse signals IRESET/IADRES/IREAD. The pulse signal VRR is supplied to the VR register 22, the pulse signal ESR is supplied to the ES register 23, and the pulse signal WDR is supplied to the WD register 24. The registers 22, 23, 24 select a vertical line of the pixel unit 11 and supply the pulse signals RESET/ADRES/READ (represented by RESETn, ADRESn, READn in FIG. 1) to the pixel unit 11 via the pulse selector circuit 21. The address pulse signal ADRESn is supplied to the gate of the row select transistor Ta in the cell, the reset pulse signal RESETn is supplied to the gate of the reset transistor Tc in the cell, and the read pulse signal READn is supplied to the gate of the read transistor Td in the cell. To the pixel unit 11, a bias voltage VVL is applied from a bias generator circuit (bias 1) 35. The bias voltage VVL is supplied to the gate of the source follower circuit load transistor TLM.
  • A reference voltage generator circuit (VREF) 36 is a circuit which operates in response to a main clock signal MCK and generates reference waveforms VREFTL, VREFTH for AD conversion at the ADC 13. The amplitude of the reference waveforms is controlled by data DATA input to a serial interface (serial I/F) 37. A command input to the serial interface 37 is supplied to a command decoder 38, which decodes the command and supplies the decoded signal together with the main clock signal MCK to the timing generator 33. To perform AD conversion twice in one horizontal scanning period, the reference voltage generator circuit 36 generates triangular waveforms VREFTL and VREFTH and supplies these to the ADC 13. The pulse signal READP output from the timing generator 33 is supplied to the pulse amplitude control circuit 34. The pulse amplitude control circuit 34 controls the amplitude to generate a 3-valued pulse signal VREAD and supplies the signal VREAD to the pulse selector circuit 21.
  • The linear conversion circuit 31 is a circuit which converts and combines two signals STL (long-time storage) and STH (short-time storage) differing in storage time so that the resulting signal may be linear with respect to the quantity of light. The linear conversion circuit 31 includes two subtraction circuits (-dark) 31-1, 31-2 which subtract a black-level dark signal, a gain circuit GA which amplifies the output of the subtraction circuit 31-2, a comparison circuit A, and a switch 31-3. To the linear conversion circuit 31, a short exposure time (charge storage time) signal STH stored in the line memory MSTH and a long exposure time signal STL stored in the line memory MSTL are input simultaneously.
  • In an analog/digital converting operation at the ADC 13, since a dark level is set to a 64 LSB level, the dark level 64 is subtracted from the output signals STL, STH of the line memories MSTL, MSTH at the subtraction circuits 31-1, 31-2. The signal SB subjected to the subtraction process is amplified by the gain circuit GA, thereby generating a signal SC. If the exposure time of the signal STL and that of the signal STH are TL and TH respectively, the gain quantity can be calculated from the ratio of TL/TH. By multiplying the signal SB by the gain, the inclinations can be made equivalently the same even if the photoelectric conversion characteristic curves differ in inclination. The comparison circuit A compares the signal SC with the signal SA obtained by subtracting the dark level from the STL signal. The larger signal is selected at the switch circuit 31-3. As a result, the signal SA is combined smoothly with the signal SC obtained by multiplying the signal SB by the gain. The output signal SF of the linear conversion circuit 31 is increased in the number of bits and is output in 12 bits.
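  • As an illustration of this combining step, the following sketch (assumed variable and function names; the circuit itself operates on hardware line memories) subtracts the 64 LSB dark level, amplifies the short-storage-time signal by the storage time ratio TL/TH, and selects the larger branch pixel by pixel:

```python
import numpy as np

def linear_combine(stl, sth, t_long=520, t_short=130, dark=64):
    """Sketch of the linear conversion: stl and sth are the 10-bit outputs
    for the long and short storage times. The dark level is subtracted from
    both, the short-time branch is multiplied by the storage time ratio
    TL/TH (= 4 here), and the larger value is selected for each pixel,
    giving a 12-bit signal SF that is linear with the quantity of light."""
    sa = np.clip(stl.astype(np.int32) - dark, 0, None)   # SA: long-exposure branch
    sb = np.clip(sth.astype(np.int32) - dark, 0, None)   # SB: short-exposure branch
    sc = sb * (t_long // t_short)                         # SC: gain circuit GA
    sf = np.maximum(sa, sc)                               # comparison circuit A + switch
    return np.clip(sf, 0, 4095)                           # 12-bit output SF
```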
  • FIG. 2 is a waveform diagram showing the operation timing of the CMOS image sensor shown in FIG. 1. In the first embodiment, suppose the storage time during which the photodiodes PD in vertical line n perform photoelectric conversion and accumulate charge is TL = 520 H. Moreover, let the short storage time TH be TH = 130 H, ¼ of the long storage time TL, which collects ¼ the quantity of light. In the long storage time TL, the amplitude of the read pulse signal READ is controlled at a high level (Vn = 2.8 V). In the short storage time TH, the amplitude of the read pulse signal READ is controlled at a low level (Vm = 1 V). To generate read pulse signals READ differing in amplitude level, the amplitude of the read pulse READ is controlled by the output signal VREAD of the pulse amplitude control circuit 34. The storage time TL can be controlled by the ES register 23 in units of 1 H. The storage time TH can be controlled by the WD register 24 in units of 1 H. Moreover, control can be performed in units of less than 1 H by changing the input pulse position of the pulse selector circuit 21.
  • At the time (t4) of a first operation of reading the signal charge stored in the photodiode PD, the pulse signals RESETn, READn, ADRESn are supplied to the pixel unit 11 in synchronization with a horizontal synchronizing pulse HP, thereby reading the signal charge converted photoelectrically and accumulated by the photodiode PD. The amplitude of the read pulse READ at this time is set to the low level Vm. Before this first read operation, the read pulse READ at the low level Vm is input at time t2, in the middle of the 520-H storage time, so that part of the signal charge in the photodiode PD is read out and discharged. The charge accumulated again in the period between time t2 and time t4 is then read from the photodiode PD at time t4.
  • When the reset level of the sensing unit, obtained by turning the pulse signal RESETn on and off, is taken in, the amplitude of the reference waveform is set to an intermediate level and reading is done. The intermediate level is adjusted automatically in the image sensor so that the light-shielded pixel (OB) section of the pixel unit 11 is at a 64 LSB level. Next, the signal READn is turned on, thereby outputting the signal. For the read-out signal, a triangular waveform is generated as a reference waveform in a 0.5-H period, the first half of the horizontal scanning period, thereby performing 10-bit AD conversion. The AD-converted signal (digital data) is held in the latch circuit 14. After the AD conversion has been completed, the AD-converted signal is stored in the line memory MSTH.
  • At the time (t5) of a second operation of reading from the photodiode PD, the pulse signals RESETn, READn, ADRESn are supplied to the pixel unit 11 after 0.5 H has elapsed since the first read operation, thereby reading the signal charge converted photoelectrically and accumulated by the photodiode PD. The amplitude of the read pulse READ at this time is set to the high level Vn.
  • The signal charge left in the photodiode PD is read by inputting the pulse signals READn and ADRESn without applying the pulse signal RESETn. The signal at time t4 is used as the reset level of the sensing section. The read-out signal is added to the STH signal stored in the sensing section after the pulse signal READn is made on and the resulting signal is output. For the read-out signal, a triangular waveform is generated as a reference waveform in a 0.5-H period, the last half of the horizontal scanning period, thereby performing 10-bit AD conversion. The AD-converted signal is held in the latch circuit 14. After the AD conversion has been completed, the AD-converted signal is stored in the line memory MSTL. In this way, the signals (digital data) stored in the line memories MSTH, MSTL are supplied as data OUT0 to OUT9 simultaneously to the linear conversion circuit 31 in the next one horizontal scanning period, thereby processing the signal in pixels.
  • FIGS. 3A, 3B, and 3C are photoelectric conversion characteristic diagrams showing the operation of the linear conversion circuit 31 shown in FIG. 1. FIG. 3A shows the signal STH. FIG. 3B shows the signal STL. FIG. 3C shows the signal SF. The abscissa axis indicates the quantity of light. The ordinate axis indicates the digital output level.
  • In the signal STH of FIG. 3A, the W, R, G, B signals rise at the same inclination as that of the signal STL of FIG. 3B up to an output level of Knee1. From the output level Knee1 on, they are output in such a manner that they are suppressed to ¼ the inclination of the storage time ratio of TH/TL. At this time, a saturation level occurs within a 10-bit output. In the linear conversion circuit 31, the amplifying circuit GA determined by the storage time ratio quadruples the signal STH. When the gain is quadrupled in this way, the output level Knee1 is shifted to a quadrupled output level of Knee2. Then, at the output level Knee2 or more, the inclination is quadrupled, giving W×4, G×4, R×4, and B×4. The inclination is the same as that up to the output level Knee2 of the signal STL shown in FIG. 3B. Since the sensing unit is not reset in the period between time t4 and time t5 as shown in FIG. 2, the signal STH is added to the signal STL at and above Vm on the ordinate axis of FIG. 3B and the resulting signal is output. The saturation level of the signal STL is 10 bits at which the signal is clipped by AD conversion. In the signal SF of FIG. 3C (the linearly-converted last output), the signal STL is output up to the output level Knee2. Beyond the output level Knee2, the signal STL is changed to the signal STH quadrupled in gain, which is then output. As described above, the linear conversion circuit 31 processes the signals STL and STH differing in storage time so as to make their storage times equal spuriously.
  • Previously, when the standard quantity of light was set at point P1, where the G signal saturates, the W signal saturated at a light quantity of 0.5. In the first embodiment, the wide dynamic range operation (WDR) extends the W signal at a light quantity of 1 to an 11-bit AD conversion level, twice that of the G signal. That is, using the wide dynamic range operation allows the standard light quantity to remain at 1 even with the W signal, as in the past. Previously, the smallest subject light quantity was limited to the point at which the G signal fell to the noise level. Because the W signal has twice the sensitivity of the G signal, using the W signal improves the smallest subject light quantity to ½ (from observation results). If the storage time ratio is made smaller than ¼, the saturation level of the W signal can be improved further. This makes it possible to shift point P1 to a light quantity of 2 or 4 on the right side. That is, the light quantity at which the output reaches 10 bits can be shifted to 2 or 4, which further improves the dynamic range. At this time, the output of the image sensor shown in FIG. 1 is made larger than 10 bits, for example 12 bits or 14 bits.
  • FIG. 4 shows examples of processing at the signal generator circuit 32 of FIG. 1. The signal generator circuit 32 includes a color generator circuit, a color arrangement conversion circuit, and a line memory. In the signal SF input to the signal generator circuit 32, shown by (a) in FIG. 4, the G pixel in the R line of an ordinary RGB Bayer arrangement is replaced with the W signal. Such a 2×2 pattern is input repeatedly.
  • First, the processing at the color generator circuit which generates RGB signals from the W signal will be explained. In FIG. 5, (a) schematically shows a conventional subtraction method of generating RGB signals from the W signal. In FIG. 5, (b) schematically shows a ratio multiplying method of generating RGB signals from the W signal in the first embodiment.
  • For example, when a G signal was generated from the W signal, a G signal was calculated in a conventional subtraction method by subtracting “subtraction coefficient×(R+B)” from the W signal as shown by (a) in FIG. 5, which resulted in high random noise. In contrast, in the ratio multiplying method used in the first embodiment, a G signal is calculated by multiplying the W signal by “the ratio multiplying coefficient G/W” as shown by (b) in FIG. 5, which enables random noise to be suppressed.
  • Here, concrete calculation examples will be described, focusing attention on pixel W11.
  • In conventional ordinary processing, the following equations are given:
  • Gw11 = W11 - Kg * ((R11 + R12)/2 + (B11 + B21)/2)
    Rw11 = W11 - Kr * ((G11 + G12 + G21 + G22)/4 + (B11 + B21)/2)
    Bw11 = W11 - Kb * ((G11 + G12 + G21 + G22)/4 + (R11 + R12)/2)
  • where Kg, Kr, and Kb are coefficients for adjusting the amount of signal obtained from the spectroscopic characteristic.
  • In contrast, in the first embodiment, in FIG. 4, (b) shows a ratio multiplying method of calculating a W (white) signal level and RGB signal levels from 8 surrounding pixels and generating Rw, Gw, and Bw signals from pixel W11 on the basis of the ratio of RGB to W:
  • Wrgb11 = (G11 + G12 + G21 + G22)/4 + (R11 + R21)/2 + (B11 + B21)/2
    Gw11 = W11 * ((G11 + G12 + G21 + G22)/4) / Wrgb11
    Rw11 = W11 * ((R11 + R21)/2) / Wrgb11
    Bw11 = W11 * ((B11 + B21)/2) / Wrgb11
  • The experimental results have shown that the S/N of the Gw signal generated from W was improved by about 4 dB with respect to the G signal, the S/N of the Rw signal was improved by about 3 dB with respect to the R signal, and the S/N of the Bw signal was improved by about 3 dB with respect to the B signal. In this processing method, the S/N is greatly improved and, because no subtraction coefficient K is used, no adjustment of K is necessary. Moreover, increasing the number of surrounding pixels enables the S/N to be improved further.
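  • A minimal sketch of this ratio multiplying calculation around pixel W11 is given below (assumed function and argument names; the surrounding pixel values are passed in as tuples). The same routine applies, with different neighborhoods and weights, to the arrays of the later embodiments.

```python
def ratio_multiply_w11(w11, g_pixels, r_pixels, b_pixels):
    """Generate Rw11, Gw11, Bw11 from W11 by the ratio multiplying method.

    g_pixels: the four surrounding G values; r_pixels / b_pixels: the two
    surrounding R / B values. Wrgb11 approximates a white level from the
    colour pixels; each colour is then recovered by scaling W11 with the
    corresponding ratio instead of subtracting as in the conventional method."""
    g_avg = sum(g_pixels) / 4.0
    r_avg = sum(r_pixels) / 2.0
    b_avg = sum(b_pixels) / 2.0
    wrgb11 = g_avg + r_avg + b_avg
    gw11 = w11 * g_avg / wrgb11
    rw11 = w11 * r_avg / wrgb11
    bw11 = w11 * b_avg / wrgb11
    return rw11, gw11, bw11
```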
  • In FIG. 4, (c) shows a pixel array subjected to the RGB Bayer arrangement conversion at the color arrangement conversion circuit. The color-generated Gw11 is used directly as pixel W11. A method of processing each of G, R, and B will be described using the processing of pixels G22, R12, and B21 as an example. To make the same improvement in the S/N as that of Gw11 and obtain the same resolution level as that of Gw11, the information of the four pixels Gw11, Gw12, Gw21, Gw22 around pixel G22 is added. Furthermore, the S/N of pixel R12 is improved by adding the information of Rw11 and Rw12 generated from the W pixels on both sides. Similarly, the S/N of pixel B21 is improved by adding the information of Bw11 and Bw21 generated from the W pixels above and below pixel B21.
  • The following calculations are done sequentially, thereby converting the data into a Bayer arrangement:
  • G22w = ((Gw11 + Gw12 + Gw21 + Gw22)/4 + G22)/2
    R12w = ((Rw11 + Rw12)/2 + R12)/2
    B21w = ((Bw11 + Bw21)/2 + B21)/2
  • Then, the signal generator circuit 32 outputs data DOUT0 to DOUT9 converted into a Bayer arrangement.
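  • The conversion into the Bayer arrangement can be sketched as follows (assumed function names): each colour pixel keeps half of its own value and takes the other half from the average of the values generated from the surrounding W pixels.

```python
def bayer_g22(gw11, gw12, gw21, gw22, g22):
    # G pixel: average of the four surrounding Gw values, then average with G22.
    return ((gw11 + gw12 + gw21 + gw22) / 4.0 + g22) / 2.0

def bayer_r12(rw11, rw12, r12):
    # R pixel: average of the Rw values on both sides, then average with R12.
    return ((rw11 + rw12) / 2.0 + r12) / 2.0

def bayer_b21(bw11, bw21, b21):
    # B pixel: average of the Bw values above and below, then average with B21.
    return ((bw11 + bw21) / 2.0 + b21) / 2.0
```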
  • In experiments, in the result of processing the luminance signal Y=0.59G+0.3R+0.11B of the YUV signal, the S/N of the luminance signal Y was improved by about 4.5 dB at an ordinary light quantity. Moreover, at the lowest subject illuminance, effective use of the W signal realized twice the sensitivity determined by a conventional G signal.
  • With the first embodiment configured as described above, since the W signal can be prevented from being saturated even if the W signal obtained from high-sensitivity W pixels is used, the standard setting light quantity input to the pixel unit will never be shifted to the low light quantity side. Moreover, since the RGB signals are obtained from the W signal using the ratio multiplying method, noise in the RGB signals can be improved. In addition, since RGB resolution information can be increased, false color signals can be reduced. Additionally, since the conversion of the output signal into the RGB Bayer arrangement enables a general-purpose signal processing IC to be used, products can be commercialized early. Moreover, in combination with a dynamic range extending mode, the dynamic range can be extended, which makes it possible to realize an image sensor capable of covering a low to a high light quantity. Furthermore, in the first embodiment, since RGB signals can be extracted even if many W pixels are arranged, this produces the effect of apparently increasing the number of RGB pixels.
  • Second Embodiment
  • Next, a CMOS image sensor according to a second embodiment of the invention will be explained. The same parts as those of the configuration of the first embodiment are indicated by the same reference numerals and an explanation of them will be omitted. The second embodiment is such that the dynamic range extending method is modified in the first embodiment.
  • FIG. 6 is a block diagram schematically showing the configuration of the CMOS image sensor according to the second embodiment. The image sensor outputs the signal at twice the speed of the first embodiment (the image sensor may output at the same speed as that of the first embodiment). A PRE signal processing circuit 4, which includes a frame memory 41, combines two frames of data into one frame of signal. To obtain a small-light-quantity signal in a first frame, the storage time is made longer and signal S1st is output. The signal S1st is stored in the frame memory 41. In a second frame, the storage time is set to ¼ of the storage time of the first frame and signal S2nd is output from the sensor. At this time, the linear conversion circuit 42 of the PRE signal processing circuit 4 amplifies signal S2nd so that the gain of signal S2nd may be quadrupled. If signal S1st output from the frame memory 41 is saturated in 10 bits, signal S2nd is output as signal SF. Thereafter, as in the first embodiment, 12-bit data DOUT0 to DOUT11 are output via the signal generator circuit 32 to a signal processing IC (not shown) in a subsequent stage.
  • FIGS. 7A, 7B, and 7C are photoelectric conversion characteristic diagrams showing the operation of the linear conversion circuit 42 shown in FIG. 6. FIG. 7A shows signal S1st, FIG. 7B shows signal S2nd, and FIG. 7C shows signal SF. The abscissa axis indicates the quantity of light. The ordinate axis indicates the digital output level. In signal S1st shown in FIG. 7A, the G signal is saturated in 10 bits at a light quantity of 1. Since the W signal has twice the sensitivity of the G signal, it is saturated at a light quantity of 0.5. In signal S2nd shown in FIG. 7B, since the storage time is set to ¼, the W signal is saturated at a light quantity of 2. If the gain of the W signal is quadrupled, an 11-bit signal can be reproduced at a light quantity of 1.
  • In the combined signal SF shown in FIG. 7C, signal S1st is switched to the signal (W×4) obtained by quadrupling signal S2nd in gain at the 10-bit saturation level of 1023 LSB, which enables the W signal to be reproduced as a linear signal in up to 12 bits at a light quantity of 2. Shifting the standard light quantity from the conventional point P1 to point P2 makes it possible to extend the dynamic range to twice that of a conventional one.
  • Furthermore, if the lowest subject illuminance is taken as the point where the signal level equals the noise level, point P3, previously determined by the G signal, can be shifted to point P4, determined by the W signal, which makes it possible to reduce the light quantity to ½. This doubles the sensitivity of the pixel unit. Moreover, if the storage time in the second frame is made still smaller, to ⅛ or 1/16 of that of the first frame, the dynamic range can be quadrupled or octupled.
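  • A sketch of this two-frame combination is given below (assumed names; the ¼ storage time ratio of the embodiment is used as the default):

```python
import numpy as np

def combine_frames(s1st, s2nd, ratio=4, sat_level=1023):
    """Combine a long-storage frame s1st and a short-storage frame s2nd
    (both 10-bit arrays) into the 12-bit signal SF: where the first frame
    has reached the 10-bit saturation level, the second frame multiplied
    by the storage time ratio is used instead."""
    long_branch = s1st.astype(np.int32)
    short_branch = s2nd.astype(np.int32) * ratio
    sf = np.where(long_branch >= sat_level, short_branch, long_branch)
    return np.clip(sf, 0, 4095)
```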
  • Third Embodiment
  • A third embodiment of the invention is such that a signal processing circuit is incorporated into the CMOS image sensor of the first embodiment to form a one-chip sensor. The remaining configuration is the same as that of the first embodiment. The same parts as those of the configuration of the first embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • FIG. 8 is a block diagram schematically showing the configuration of the CMOS image sensor according to the third embodiment. A signal processing circuit 51 is added to the CMOS image sensor of FIG. 1, thereby producing a one-chip configuration. As shown in FIG. 8, the signal converted into an RGB Bayer arrangement at the signal generator circuit 32 is supplied to the signal processing circuit 51. The signal processing circuit 51 carries out normal processes, including a white balance process, a color separation interpolating process, an edge enhancement process, a γ correction process, and a color tuning process using an RGB matrix. As a result, RGB signals or YUV signals can be output as the outputs DOUT0 to DOUT7 of the CMOS image sensor.
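  • As a very rough sketch of these post-processing stages (the gains, gamma value, and matrix below are placeholders, not values taken from the embodiment; colour separation interpolation and edge enhancement are omitted):

```python
import numpy as np

def post_process(rgb, wb_gains=(1.8, 1.0, 1.5), gamma=2.2, color_matrix=np.eye(3)):
    """rgb: demosaiced float image of shape (H, W, 3) scaled to [0, 1].
    Applies white balance, RGB-matrix colour tuning, and gamma correction."""
    out = rgb * np.asarray(wb_gains)                 # white balance
    out = np.clip(out @ color_matrix.T, 0.0, 1.0)    # colour tuning (RGB matrix)
    return out ** (1.0 / gamma)                      # gamma correction
```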
  • Fourth Embodiment
  • A fourth embodiment of the invention is such that a signal processing circuit is added to the PRE signal processing circuit 4 of the second embodiment so as to produce a 2-chip configuration. The remaining configuration is the same as that of the second embodiment. The same parts as those of the configuration of the second embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • FIG. 9 is a block diagram schematically showing the configuration of the CMOS image sensor according to the fourth embodiment. A signal processing circuit 51 is added to the PRE signal processing circuit 4 of the CMOS image sensor of FIG. 6. As shown in FIG. 9, the signal converted into an RGB Bayer arrangement at the signal generator circuit 32 is supplied to the signal processing circuit 51. The signal processing circuit 51 carries out normal processes, including a white balance process, a color separation interpolating process, an edge enhancement process, a γ correction process, and a color tuning process using an RGB matrix. As a result, RGB signals or YUV signals can be output as the outputs DOUT0 to DOUT7 of the CMOS image sensor.
  • Fifth Embodiment
  • Next, a color filter array and a signal generator circuit in a CMOS image sensor according to a fifth embodiment of the invention will be explained. The fifth embodiment is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 10 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the fifth embodiment. In the signal SF as shown by (a) in FIG. 10 input to the signal generator circuit 32, color filters are arranged using 4×4 pixels as a unit. As compared with an ordinary RGB Bayer arrangement, the R and B color signals which require less resolution information are reduced to ½ and the G signal is arranged above and below and on the right and left sides of each pixel. Four W pixels are arranged equally in the remaining positions. R pixels and B pixels are arranged diagonally at the four remaining pixels. A 4×4 basic array including 8 G pixels, 4 W pixels, 2 R pixels, and 2 B pixels is repeated.
  • Hereinafter, the process of generating RGB signals from the W signal at the color generator circuit will be described.
  • In the fifth embodiment, a ratio multiplying method is used for the processing at the color generator circuit as in FIG. 4. In FIG. 10, (b) shows a ratio multiplying method of calculating a W (white) signal level and RGB signal levels from 8 pixels around, for example, pixel W11, and generating Rw, Gw, and Bw signals from pixel W11 on the basis of the ratio of RGB to W:
  • Wrgb11 = (G11 + G12 + G22 + G31)/4 + (R11 + R21)/2 + (B11 + B21)/2
    Gw11 = W11 * ((G11 + G21 + G22 + G31)/4) / Wrgb11
    Rw11 = W11 * ((R11 + R21)/2) / Wrgb11
    Bw11 = W11 * ((B11 + B21)/2) / Wrgb11
  • The experimental results have shown that the S/N of the Gw signal generated from W was improved by about 3 dB with respect to the G signal, the S/N of the Rw signal was improved by about 4.5 dB with respect to the R signal, and the S/N of the Bw signal was improved by about 4.5 dB with respect to the B signal as in the first embodiment.
  • In FIG. 10, (c) shows a pixel array subjected to the RGB Bayer arrangement conversion at the color arrangement conversion circuit. The color-generated Gw11 is used directly as pixel W11. A method of processing each of G, R, and B will be described using the processing of pixels G22, R12, and B21 as an example. To make the same improvement in the S/N as that of Gw11 and obtain the same resolution level as that of Gw11, the information of the left and right pixels Gw11 and Gw12 is added. Furthermore, the S/N of pixel R21 is improved by adding the information of Rw10, Rw11, Rw20, and Rw21 created from the W pixels at the four oblique positions. Similarly, the S/N of pixel B21 is improved by adding the information of Bw11, Bw12, Bw21, and Bw22 generated from the W pixels at the four oblique positions.
  • The following calculations are done sequentially, thereby converting the data into a Bayer arrangement:
  • G22w = ((Gw11 + Gw12)/2 + G22)/2
    R12w = ((Rw10 + Rw11 + Rw20 + Rw21)/4 + R21)/2
    B21w = ((Bw11 + Bw12 + Bw21 + Bw22)/4 + B21)/2
  • In experiments, in the result of processing the luminance signal Y=0.59G+0.3R+0.11B of the YUV signal, the S/N of the luminance signal Y was improved by about 3 dB at an ordinary light quantity. Moreover, at the lowest subject illuminance, effective use of the W signal realized twice the sensitivity determined by a conventional G signal.
  • Sixth Embodiment
  • A sixth embodiment of the invention is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 11 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the sixth embodiment. In the signal SF as shown by (a) in FIG. 11 input to the signal generator circuit 32, color filters are arranged using 4×4 pixels as a unit. As compared with an ordinary RGB Bayer arrangement, the R and B color signals which require less resolution information are reduced to ½ and the W signal is arranged above and below and on the right and left sides of each pixel. Four G pixels are arranged equally in the remaining positions. R pixels and B pixels are arranged diagonally at the four remaining pixels. A 4×4 basic array including 8 W pixels, 4 G pixels, 2 R pixels, and 2 B pixels is repeated.
  • Hereinafter, the process of generating RGB signals from the W signal at the color generator circuit will be described.
  • In the sixth embodiment, a ratio multiplying method is used for the processing at the color generator circuit as in FIG. 4. In FIG. 11, (b) shows a ratio multiplying method of calculating a W (white) signal level and RGB signal levels from 4 pixels around, for example, pixel W22, and generating Rw, Gw, and Bw signals from pixel W22 on the basis of the ratio of RGB to W:
  • Wrgb22 = (G11 + G12)/2 + R11 + B21
    Gw22 = W22 * ((G11 + G12)/2) / Wrgb22
    Rw22 = W22 * R11 / Wrgb22
    Bw22 = W22 * B21 / Wrgb22
  • With the sixth embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance.
  • Seventh Embodiment
  • A seventh embodiment of the invention is such that the color filter array is modified in the first embodiment and four line memories are provided in the signal generator circuit 32, thereby improving the S/N by vertical five-line processing. The remaining configuration is the same as that of the first embodiment.
  • FIG. 12 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the seventh embodiment. The color filter array is the same as that of FIG. 11 in the sixth embodiment.
  • Hereinafter, the process of generating RGB signals from the W signal at the color generator circuit will be described.
  • In the seventh embodiment, a ratio multiplying method is used for the processing at the color generator circuit as in FIG. 4. In FIG. 12, (b) shows an example of calculating a W (white) signal level and RGB signal levels from 7 horizontal pixels and 5 vertical lines, focusing on pixel W32, and generating Rw, Gw, and Bw signals from pixel W32 on the basis of the ratio of RGB to W:
  • Wrgb32 = (G12 + G13 + G21 + G22 + G23 + G24 + G32 + G33)/8 + (R11 + (R21/2) + (R22/2))/2 + (B21 + (B11/2) + (B12/2))/2
    Gw32 = W32 * ((G12 + G13 + G21 + G22 + G23 + G24 + G32 + G33)/8) / Wrgb32
    Rw32 = W32 * ((R11 + (R21/2) + (R22/2))/2) / Wrgb32
    Bw32 = W32 * ((B21 + (B11/2) + (B12/2))/2) / Wrgb32
  • With the seventh embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance. Moreover, since the G signals from 8 pixels arranged around the target pixel are used, the S/N can be improved more than in the sixth embodiment.
  • Eighth Embodiment
  • An eighth embodiment of the invention is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 13 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the eighth embodiment. In the signal SF as shown by (a) in FIG. 13 input to the signal generator circuit 32, color filters are arranged using 4×4 pixels as a unit in such a manner that the square array of FIG. 10 is inclined at an angle of 45 degrees. As compared with an ordinary RGB Bayer arrangement, the R and B color signals which require less resolution information are reduced to ½ and the G signal is arranged above and below and on the right and left sides of each pixel. Four W pixels are arranged equally in the remaining positions. R pixels and B pixels are arranged diagonally at the four remaining pixels. A 4×4 basic array including 8 G pixels, 4 W pixels, 2 R pixels, and 2 B pixels is repeated.
  • Hereinafter, the process of generating RGB signals from the W signal at the color generator circuit will be described.
  • In the eighth embodiment, a ratio multiplying method is used for the processing at the color generator circuit as in FIG. 4. In FIG. 13, (b) shows an example of calculating a W (white) signal level and RGB signal levels from 8 pixels around, for example, pixel W32, and generating Rw, Gw, and Bw signals from pixel W32 on the basis of the ratio of RGB to W:
  • Wrgb32 = (G33 + G34 + G43 + G44)/4 + (R12 + R22)/2 + (B22 + B23)/2
    Gw32 = W32 * ((G33 + G34 + G43 + G44)/4) / Wrgb32
    Rw32 = W32 * ((R12 + R22)/2) / Wrgb32
    Bw32 = W32 * ((B22 + B23)/2) / Wrgb32
  • The experimental results have shown that the S/N of the Gw signal generated from W was improved by about 3 dB with respect to the G signal, the S/N of the Rw signal was improved by about 4.5 dB with respect to the R signal, and the S/N of the Bw signal was improved by about 4.5 dB with respect to the B signal as in the first embodiment.
  • Ninth Embodiment
  • A ninth embodiment of the invention is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 14 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor of the ninth embodiment. In the signal SF as shown by (a) in FIG. 14, the color filter array of FIG. 12 is inclined at an angle of 45 degrees using 4×4 pixels as a unit. As compared with an ordinary RGB Bayer arrangement, the R and B color signals which require less resolution information are reduced to ½ and the W signal is arranged above and below and on the right and left sides of each pixel. Four G pixels are arranged equally in the remaining positions. R pixels and B pixels are arranged diagonally at the four remaining pixels. A 4×4 basic array including 8 W pixels, 4 G pixels, 2 R pixels, and 2 B pixels is repeated.
  • Hereinafter, the process of generating RGB signals from the W signal at the color generator circuit will be described.
  • In the ninth embodiment, a ratio multiplying method is used for the processing at the color generator circuit as in FIG. 4. In FIG. 14, (b) shows an example of calculating a W (white) signal level and RGB signal levels from 4 pixels around, for example, pixel W33, and generating Rw, Gw, and Bw signals from pixel W33 on the basis of the ratio of RGB to W:
  • Wrgb33 = (G32 + G41)/2 + R22 + B11
    Gw33 = W33 * ((G32 + G41)/2) / Wrgb33
    Rw33 = W33 * R22 / Wrgb33
    Bw33 = W33 * B11 / Wrgb33
  • With the ninth embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance.
  • Tenth Embodiment
  • A tenth embodiment of the invention is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 15 shows a color filter array and an example of the processing at the signal generator circuit in the CMOS image sensor according to the tenth embodiment. The color filter array is the same as that of FIG. 14 in the ninth embodiment.
  • The process of generating RGB signals from the W signal at the color generator circuit is carried out using the ratio multiplying method as described below. In FIG. 15, (b) shows an example of generating Rw, Gw, and Bw signals from the signal of pixel W33:
  • Wrgb33 = (G21 + G22 + G31 + G32 + G42 + G43 + G51 + G52)/8 + (R22 + (R11/2) + (R21/2))/2 + (B11 + (B12/2) + (B21/2))/2
    Gw33 = W33 * ((G21 + G22 + G31 + G32 + G42 + G43 + G51 + G52)/8) / Wrgb33
    Rw33 = W33 * ((R22 + (R11/2) + (R21/2))/2) / Wrgb33
    Bw33 = W33 * ((B11 + (B12/2) + (B21/2))/2) / Wrgb33
  • With the tenth embodiment, increasing the number of pixels of the W signal makes it possible to improve the S/N and resolution at a low illuminance. Moreover, since the G signals from 8 pixels arranged around the target pixel are used, the S/N can be improved more than in the ninth embodiment. The G signal used in the equations is illustrative and not restrictive. The G signal from another G pixel may be used.
  • Eleventh Embodiment
  • In an eleventh embodiment of the invention, the configuration of a CMOS image sensor corresponding to the color filter arrays shown in FIGS. 14 and 15 will be explained. The same parts as those of the configuration of the first embodiment are indicated by the same reference numerals and an explanation of them will be omitted.
  • FIG. 16 is a block diagram schematically showing the configuration of a CMOS image sensor according to the eleventh embodiment. Next to the pixel unit 11, there are provided a pulse selector circuit (selector) 21, a signal read vertical register (VR register) 22, a storage time control vertical register (ESRGB register) 25, and a storage time control vertical register (ESW register) 26. The ESRGB register 25 is a register which controls the storage time at an R pixel, a G pixel, or a B pixel. The ESW register 26 is a register which controls the storage time at a W pixel. Every row of the color filter array in the pixel unit 11 contains W pixels. Therefore, the storage time is controlled by a special electronic shutter so as to prevent the high-sensitivity W pixels from being saturated. Since the W signal has twice the sensitivity of the G signal, the storage time of the W pixels is set to ½ of the storage time of the remaining RGB pixels using the special electronic shutter. At a low light quantity, the electronic shutter is not applied to the W pixels, and the lowest subject illuminance is improved to ½ by using the double sensitivity relative to the G signal. In the linear conversion circuit 52, when the electronic shutter operates only on the W pixels, only the signal from the W pixels is amplified by the storage time ratio at the gain circuit GA, and the signals are then returned to the original array to produce the signal SF.
  • Since the W pixel is not controlled by the electronic shutter at a low light quantity, the storage time of the W signal becomes equal to that of the RGB signals. For this reason, to prevent the signal from the W pixel from being amplified, the linear conversion circuit 52 switches to the RGB signal side to cause the W signal to pass through. The switching operation enables the signal generator circuit 32 in a subsequent stage to be used without modification even at a low light quantity.
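  • A sketch of this switching behaviour in the linear conversion circuit 52 is shown below (assumed names): when the W-only electronic shutter is active, only the W pixels are multiplied by the storage time ratio; at a low light quantity the W signal passes through unchanged.

```python
import numpy as np

def equalize_w(raw, w_mask, storage_ratio=2, shutter_on=True):
    """raw: sensor output array; w_mask: boolean array marking W pixel positions.
    With the W-only shutter active, the W storage time is 1/storage_ratio of
    the RGB storage time, so W pixels are amplified by storage_ratio to make
    all colours correspond to the same exposure again."""
    out = raw.astype(np.float64)
    if shutter_on:
        out[w_mask] *= storage_ratio
    return out
```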
  • Twelfth Embodiment
  • In the first, second, and eleventh embodiments, the dynamic range extending mode has been used to prevent the output from the W pixel from being saturated, or the electronic shutter has been applied only to the W pixel. If neither method is applied, the output from the W pixel of a standard image sensor will be saturated at a light quantity of 0.5 or more, as shown in FIG. 7A. As a measure against this, a twelfth embodiment of the invention shows an example of generating a W signal by estimating it from the pixels around the W pixel.
  • Explanation will be given using the color filter array shown by (a) in FIG. 4. When pixel W11 has been saturated, a W signal of a saturated signal level or more can be newly generated from the following equation:
  • W11 = kg * (G11 + G12 + G21 + G22)/4 + kr * (R11 + R12)/2 + kb * (B11 + B21)/2
  • This enables a measure against a saturated signal of the W pixel to be taken. That is, even when the W signal has been saturated, a W signal of the saturated signal level or more can be obtained and used. In the equation, kg, kr, and kb indicate white balance coefficients. The remaining configuration and effects are the same as those of the above embodiments.
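  • A sketch of this estimate (assumed function and argument names; kg, kr, kb are the white balance coefficients of the equation above):

```python
def estimate_w11(g_pixels, r_pixels, b_pixels, kg=1.0, kr=1.0, kb=1.0):
    """Estimate a substitute W11 level from the surrounding colour pixels
    when the W pixel itself has saturated.
    g_pixels: (G11, G12, G21, G22); r_pixels: (R11, R12); b_pixels: (B11, B21)."""
    return (kg * sum(g_pixels) / 4.0
            + kr * sum(r_pixels) / 2.0
            + kb * sum(b_pixels) / 2.0)
```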
  • Thirteenth Embodiment
  • Next, a color filter array in a CMOS image sensor according to a thirteenth embodiment of the invention will be explained. The thirteenth embodiment is such that the color filter array is modified in the first embodiment. The remaining configuration is the same as that of the first embodiment.
  • FIG. 17 shows a color filter array in the CMOS image sensor of the thirteenth embodiment. The color filter array is such that 10 W pixels, 2 G pixels, 2 R pixels, and 2 B pixels are provided in a 4×4 basic array. FIG. 18 shows another color filter array. The color filter array is such that 12 W pixels, 2 G pixels, an R pixel, and a B pixel are provided in a 4×4 basic array.
  • In the color filter arrays shown in FIGS. 17 and 18, since use of the ratio multiplying method with less RGB color information enables RGB signals to be generated from each W pixel, the number of G pixels, the number of R pixels, and the number of B pixels are virtually increased by 14, 13, and 13, respectively. Accordingly, since the color resolution is high, the number of false signals is small. Moreover, since the color signal requires less information amount, the S/N can be improved by an adding process. As compared with the Bayer arrangement, an increase in the number of G, R, and B pixels improves the G signal by about 2 dB, the R signal by about 5 dB, and the B signal by about 5 dB.
  • Increasing the number of surrounding pixels used for the ratio coefficient enables the S/N to be improved further.
  • While in the above embodiments, the signal generator circuit 32 has generated RGB signals from the W signal of the W pixel and converted the RGB signals into the RGB Bayer arrangement, and the signal processing circuit 51 has performed conventional signal processing as shown in FIGS. 8 and 9, the signal generator circuit 32 may be eliminated and the signal SF may be input directly to the signal processing circuit 51, thereby performing signal processing suitable for WRGB pixels. A wide variety of color filter arrays can be considered and the aforementioned embodiments can be applied to them, provided that the four colors of WRGB are basically included.
  • Furthermore, by adding a transistor and a capacitance to a pixel cell (CE) and storing the signal charge overflowing the photodiode (PD) into the added capacitance, a W pixel saturated signal quantity control circuit can be applied to the dynamic range extending method for increasing the saturated signal quantity. Moreover, by using a direct current to control the gate voltage of the read transistor of the photodiode (PD) in the pixel cell (CE), the W pixel saturated signal quantity control circuit can be applied to the method for increasing the saturated signal quantity. In addition, the W pixel saturated signal quantity control circuit can be applied to other various dynamic range extending methods.
  • With the invention, it is possible to provide a solid-state image pickup device capable of preventing a W signal obtained from W (white) pixels from being saturated and improving the sensitivity and S/N by a signal process using the W signal.
  • Furthermore, the above embodiments may not only be practiced independently but also be combined suitably. Moreover, the embodiments include inventions at different stages, and therefore various inventions can be extracted by suitably combining a plurality of the component elements disclosed in the embodiments.
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.

Claims (15)

1. A solid-state image pickup device comprising:
a pixel unit in which cells are arranged in rows and columns two-dimensionally on a semiconductor substrate, each of the cells having a photoelectric conversion section, accumulating signal charge obtained by photoelectrically converting light incident on the photoelectric conversion section, and outputting a voltage corresponding to the accumulated signal charge;
W (white), R (red), G (green), and B (blue) color filters provided on the cells in the pixel unit;
an analog/digital converter circuit which converts analog signals output from a W pixel, an R pixel, a G pixel, and a B pixel on whose cells the W (white), R (red), G (green), and B (blue) color filters are provided respectively into digital signals, and outputs a W signal, an R signal, a G signal, and a B signal separately;
a saturated signal quantity control circuit which controls the saturated signal quantity of the W pixel; and
a signal generator circuit which corrects and generates the R signal, the G signal, and the B signal using the W signal, the R signal, the G signal, and B signal output from the analog/digital converter circuit.
2. The solid-state image pickup device according to claim 1, wherein the saturated signal quantity control circuit includes a storage time control circuit which accumulates signal charge in such a manner that the storage time during which signal charge is stored in the photoelectric conversion section is caused to differ.
3. The solid-state image pickup device according to claim 2, further comprising a linear conversion circuit which processes signals in such a manner that the signals differing in the storage time have the same storage time spuriously.
4. The solid-state image pickup device according to claim 1, wherein the saturated signal quantity control circuit includes an electronic shutter for W pixel only which independently controls only the storage time of signal charge in the W pixel.
5. The solid-state image pickup device according to claim 1, wherein the signal generator circuit includes a ratio multiplying section which calculates a W signal by adding an R signal, a G signal, and a B signal obtained from an R pixel, a G pixel, and a B pixel around the W pixel and multiplies the W signal of the W pixel by the ratio of the R signal to the W signal, the ratio of the G signal to the W signal, and the ratio of the B signal to the W signal, thereby generating an R signal, a G signal, and a B signal.
6. The solid-state image pickup device according to claim 5, further comprising a color arrangement conversion circuit which makes a conversion into a 2-row, 2-column Bayer arrangement including two G signals by performing an arithmetic operation on the R signal, G signal, and B signal generated from the W signal by the ratio multiplying section and the R signal, G signal, and B signal output from the analog/digital converter circuit.
7. The solid-state image pickup device according to claim 1, further comprising a noise cancel circuit which is provided in front of the analog/digital converter circuit and cancels noise in the analog signal.
8. The solid-state image pickup device according to claim 1, wherein the W (white) color filter allows light to pass through in the entire wavelength range.
9. The solid-state image pickup device according to claim 1, wherein the W (white), R (red), G (green), and B (blue) color filters include a 2-row, 2-column array, with the R (red) and B (blue) color filters arranged on one diagonal and the W (white) and G (green) color filters arranged on the other diagonal.
10. The solid-state image pickup device according to claim 1, wherein the W (white), R (red), G (green), and B (blue) color filters include a 4-row, 4-column array, with the G (green) color filters arranged every other pixel in the row and column directions, the W (white) color filters arranged equally at four of the remaining pixels, and each of the R (red) and B (blue) color filters arranged at the still remaining pixels diagonally.
11. The solid-state image pickup device according to claim 10, wherein the 4-row, 4-column array is inclined at an angle of 45 degrees.
12. The solid-state image pickup device according to claim 1, wherein the W (white), R (red), G (green), and B (blue) color filters include a 4-row, 4-column array, with the W (white) color filters arranged every other pixel in the row and column directions, the G (green) color filters arranged equally at four of the remaining pixels, and each of the R (red) and B (blue) color filters arranged at the still remaining pixels diagonally.
13. The solid-state image pickup device according to claim 12, wherein the 4-row, 4-column array is inclined at an angle of 45 degrees.
14. The solid-state image pickup device according to claim 1, wherein the W (white), R (red), G (green), and B (blue) color filters include a 4-row, 4-column array, with ten W (white), two G (green), two R (red), and two B (blue) color filters being arranged.
15. The solid-state image pickup device according to claim 1, wherein the W (white), R (red), G (green), and B (blue) color filters include a 4-row, 4-column array, with twelve W (white), two G (green), one R (red), and one B (blue) color filters being arranged.
US11/967,585 2007-01-05 2007-12-31 Solid-state image pickup device in which the saturated signal quantity of a W pixel is controlled Expired - Fee Related US7911507B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2007000718A JP5085140B2 (en) 2007-01-05 2007-01-05 Solid-state imaging device
JP2007-000718 2007-01-05

Publications (2)

Publication Number Publication Date
US20080211943A1 true US20080211943A1 (en) 2008-09-04
US7911507B2 US7911507B2 (en) 2011-03-22

Family

ID=39632176

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/967,585 Expired - Fee Related US7911507B2 (en) 2007-01-05 2007-12-31 Solid-state image pickup device in which the saturated signal quantity of a W pixel is controlled

Country Status (4)

Country Link
US (1) US7911507B2 (en)
JP (1) JP5085140B2 (en)
CN (1) CN101222642B (en)
TW (1) TWI399086B (en)

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090073284A1 (en) * 2007-09-19 2009-03-19 Kenzo Isogawa Imaging apparatus and method
US20090167893A1 (en) * 2007-03-05 2009-07-02 Fotonation Vision Limited RGBW Sensor Array
US20090295973A1 (en) * 2008-05-20 2009-12-03 Texas Instruments Japan, Ltd. Solid-State Image Pickup Device
US20100128149A1 (en) * 2008-11-24 2010-05-27 Samsung Electronics Co., Ltd. Color filter array, image sensor including the color filter array and system including the image sensor
EP2194721A2 (en) 2008-12-08 2010-06-09 Sony Corporation solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
US20100232692A1 (en) * 2009-03-10 2010-09-16 Mrityunjay Kumar Cfa image with synthetic panchromatic image
US20100245636A1 (en) * 2009-03-27 2010-09-30 Mrityunjay Kumar Producing full-color image using cfa image
US20100265370A1 (en) * 2009-04-15 2010-10-21 Mrityunjay Kumar Producing full-color image with reduced motion blur
US20100302423A1 (en) * 2009-05-27 2010-12-02 Adams Jr James E Four-channel color filter array pattern
US20100309350A1 (en) * 2009-06-05 2010-12-09 Adams Jr James E Color filter array pattern having four-channels
US20100309347A1 (en) * 2009-06-09 2010-12-09 Adams Jr James E Interpolation for four-channel color filter array
US20110115957A1 (en) * 2008-07-09 2011-05-19 Brady Frederick T Backside illuminated image sensor with reduced dark current
US20110176036A1 (en) * 2010-01-15 2011-07-21 Samsung Electronics Co., Ltd. Image interpolation method using bayer pattern conversion, apparatus for the same, and recording medium recording the method
US8077234B2 (en) 2007-07-27 2011-12-13 Kabushiki Kaisha Toshiba Image pickup device and method for processing an interpolated color signal
US8119435B2 (en) 2008-07-09 2012-02-21 Omnivision Technologies, Inc. Wafer level processing for backside illuminated image sensors
US8139130B2 (en) 2005-07-28 2012-03-20 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
US8169486B2 (en) 2006-06-05 2012-05-01 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8194296B2 (en) 2006-05-22 2012-06-05 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
US8199222B2 (en) 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8212882B2 (en) 2007-03-25 2012-07-03 DigitalOptics Corporation Europe Limited Handheld article with movement discrimination
US8237831B2 (en) * 2009-05-28 2012-08-07 Omnivision Technologies, Inc. Four-channel color filter array interpolation
US8244053B2 (en) 2004-11-10 2012-08-14 DigitalOptics Corporation Europe Limited Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
US8270751B2 (en) 2004-11-10 2012-09-18 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US8274715B2 (en) 2005-07-28 2012-09-25 Omnivision Technologies, Inc. Processing color and panchromatic pixels
US8417055B2 (en) 2007-03-05 2013-04-09 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US8416339B2 (en) 2006-10-04 2013-04-09 Omni Vision Technologies, Inc. Providing multiple video signals from single sensor
US20140049670A1 (en) * 2009-09-15 2014-02-20 Bum Suk Kim Image sensor for outputting rgb bayer signal through internal conversion and image processing apparatus including the same
US8698924B2 (en) 2007-03-05 2014-04-15 DigitalOptics Corporation Europe Limited Tone mapping for low-light video frame enhancement
US20140253766A1 (en) * 2013-03-08 2014-09-11 Kabushiki Kaisha Toshiba Solid-state imaging device
US20140328538A1 (en) * 2013-05-03 2014-11-06 Samsung Electronics Co., Ltd. Image signal processing device and method and image processing system using the same
US8989516B2 (en) 2007-09-18 2015-03-24 Fotonation Limited Image processing method and apparatus
US9160897B2 (en) 2007-06-14 2015-10-13 Fotonation Limited Fast motion estimation method
US20160112659A1 (en) * 2012-01-24 2016-04-21 Sony Corporation Image processing apparatus and image processing method, and program
CN106791461A (en) * 2016-11-25 2017-05-31 维沃移动通信有限公司 A kind of exposal control method, exposure control circuit and mobile terminal
US9699429B2 (en) 2012-03-27 2017-07-04 Sony Corporation Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image
EP4036617A4 (en) * 2019-09-26 2022-11-16 Sony Semiconductor Solutions Corporation Imaging device

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100976284B1 (en) 2007-06-07 2010-08-16 가부시끼가이샤 도시바 Image pickup device
US7855740B2 (en) * 2007-07-20 2010-12-21 Eastman Kodak Company Multiple component readout of image sensor
JP4484944B2 (en) 2008-04-01 2010-06-16 富士フイルム株式会社 Imaging device and driving method of imaging device
JP5115335B2 (en) * 2008-05-27 2013-01-09 ソニー株式会社 Solid-state imaging device and camera system
JP4661912B2 (en) * 2008-07-18 2011-03-30 ソニー株式会社 Solid-state imaging device and camera system
JP4683121B2 (en) * 2008-12-08 2011-05-11 ソニー株式会社 Solid-state imaging device, signal processing method for solid-state imaging device, and imaging device
JP5139350B2 (en) * 2009-02-24 2013-02-06 株式会社東芝 Image processing apparatus, image processing method, and imaging apparatus
JP5359465B2 (en) * 2009-03-31 2013-12-04 ソニー株式会社 Solid-state imaging device, signal processing method for solid-state imaging device, and imaging device
US8218068B2 (en) 2009-04-01 2012-07-10 Omnivision Technologies, Inc. Exposing pixel groups in producing digital images
US8179458B2 (en) * 2009-10-13 2012-05-15 Omnivision Technologies, Inc. System and method for improved image processing
JP4547462B1 (en) * 2009-11-16 2010-09-22 アキュートロジック株式会社 IMAGING ELEMENT, IMAGING ELEMENT DRIVE DEVICE, IMAGING ELEMENT DRIVE METHOD, IMAGE PROCESSING DEVICE, PROGRAM, AND IMAGING DEVICE
JP5724185B2 (en) * 2010-03-04 2015-05-27 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5434761B2 (en) * 2010-04-08 2014-03-05 株式会社ニコン Imaging device and imaging apparatus
JP2011239252A (en) * 2010-05-12 2011-11-24 Panasonic Corp Imaging device
JP5664141B2 (en) * 2010-11-08 2015-02-04 ソニー株式会社 Solid-state imaging device and camera system
JP5141757B2 (en) * 2010-12-13 2013-02-13 ソニー株式会社 Imaging device, imaging and signal processing method
JP5212536B2 (en) * 2011-12-09 2013-06-19 ソニー株式会社 Solid-state imaging device, signal processing method for solid-state imaging device, and imaging device
CN104025579B (en) * 2011-12-27 2016-05-04 富士胶片株式会社 Solid camera head
JP5500193B2 (en) * 2012-03-21 2014-05-21 ソニー株式会社 Solid-state imaging device, imaging device, imaging and signal processing method
JP6012375B2 (en) * 2012-09-28 2016-10-25 株式会社メガチップス Pixel interpolation processing device, imaging device, program, and integrated circuit
CN103066097B (en) * 2013-01-31 2015-05-06 南京邮电大学 High-sensitivity solid-state color image sensor
JP2015109588A (en) * 2013-12-05 2015-06-11 株式会社東芝 Signal processor and imaging system
JP6302272B2 (en) * 2014-02-06 2018-03-28 株式会社東芝 Image processing apparatus, image processing method, and imaging apparatus
JP5884847B2 (en) * 2014-03-12 2016-03-15 ソニー株式会社 Solid-state imaging device, signal processing method for solid-state imaging device, and imaging device
KR102184714B1 (en) * 2014-04-11 2020-12-02 에스케이하이닉스 주식회사 Image sensing device
CN104637465B (en) * 2014-12-31 2017-04-05 广东威创视讯科技股份有限公司 The bright chroma compensation method of display device local and system
JP2016197794A (en) * 2015-04-03 2016-11-24 株式会社シグマ Imaging device
CN109246373B (en) * 2018-10-31 2021-03-02 上海集成电路研发中心有限公司 Method and device for adjusting pixel arrangement of image output by image sensor
KR20210135380A (en) 2020-05-04 2021-11-15 삼성전자주식회사 Image sensor

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0823542A (en) 1994-07-11 1996-01-23 Canon Inc Image pickup device
TW420956B (en) 1998-05-08 2001-02-01 Matsushita Electric Ind Co Ltd Solid state color camera
JP4179719B2 (en) 1999-10-07 2008-11-12 Toshiba Corp Solid-state imaging device
US7154549B2 (en) * 2000-12-18 2006-12-26 Fuji Photo Film Co., Ltd. Solid state image sensor having a single-layered electrode structure
EP1324592A1 (en) * 2001-12-19 2003-07-02 STMicroelectronics Limited Image sensor with improved noise cancellation
JP3988457B2 (en) * 2001-12-25 2007-10-10 Sony Corp Imaging apparatus and signal processing method for solid-state imaging device
JP4159307B2 (en) 2002-04-23 2008-10-01 Fujifilm Corp Method for reproducing captured images
JP2004304706A (en) * 2003-04-01 2004-10-28 Fuji Photo Film Co Ltd Solid-state imaging apparatus and interpolation processing method thereof
JP4665422B2 (en) 2004-04-02 2011-04-06 Sony Corp Imaging device
JP4882297B2 (en) * 2004-12-10 2012-02-22 Sony Corp Physical information acquisition apparatus and semiconductor device manufacturing method
JP2006253876A (en) * 2005-03-09 2006-09-21 Sony Corp Physical quantity distribution sensor and drive method of physical quantity distribution sensor
JP4855704B2 (en) * 2005-03-31 2012-01-18 Toshiba Corp Solid-state imaging device
JP5151075B2 (en) 2005-06-21 2013-02-27 Sony Corp Image processing apparatus, image processing method, imaging apparatus, and computer program
JP4144630B2 (en) * 2006-04-14 2008-09-03 Sony Corp Imaging device
JP4241754B2 (en) * 2006-04-14 2009-03-18 Sony Corp Imaging device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6714243B1 (en) * 1999-03-22 2004-03-30 Biomorphic VLSI, Inc. Color filter pattern
US20050248667A1 (en) * 2004-05-07 2005-11-10 Dialog Semiconductor Gmbh Extended dynamic range in color imagers
US20070257998A1 (en) * 2004-12-16 2007-11-08 Fujitsu Limited Imaging apparatus, imaging element, and image processing method
US20070076269A1 (en) * 2005-10-03 2007-04-05 Konica Minolta Photo Imaging, Inc. Imaging unit and image sensor
US20070097240A1 (en) * 2005-10-28 2007-05-03 Yoshitaka Egawa Amplification-type CMOS image sensor of wide dynamic range
US20090167893A1 (en) * 2007-03-05 2009-07-02 Fotonation Vision Limited RGBW Sensor Array
US20090040353A1 (en) * 2007-08-10 2009-02-12 Takeshi Yamamoto Imaging apparatus and method of driving solid-state imaging device
US20090213256A1 (en) * 2008-02-26 2009-08-27 Sony Corporation Solid-state imaging device and camera
US20100141812A1 (en) * 2008-12-08 2010-06-10 Sony Corporation Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus

Cited By (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8494300B2 (en) 2004-11-10 2013-07-23 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US8285067B2 (en) 2004-11-10 2012-10-09 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US8270751B2 (en) 2004-11-10 2012-09-18 DigitalOptics Corporation Europe Limited Method of notifying users regarding motion artifacts based on image analysis
US8244053B2 (en) 2004-11-10 2012-08-14 DigitalOptics Corporation Europe Limited Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts
US8139130B2 (en) 2005-07-28 2012-03-20 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
US8711452B2 (en) 2005-07-28 2014-04-29 Omnivision Technologies, Inc. Processing color and panchromatic pixels
US8330839B2 (en) 2005-07-28 2012-12-11 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
US8274715B2 (en) 2005-07-28 2012-09-25 Omnivision Technologies, Inc. Processing color and panchromatic pixels
US8194296B2 (en) 2006-05-22 2012-06-05 Omnivision Technologies, Inc. Image sensor with improved light sensitivity
US8520082B2 (en) 2006-06-05 2013-08-27 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8169486B2 (en) 2006-06-05 2012-05-01 DigitalOptics Corporation Europe Limited Image acquisition method and apparatus
US8416339B2 (en) 2006-10-04 2013-04-09 Omnivision Technologies, Inc. Providing multiple video signals from single sensor
US8698924B2 (en) 2007-03-05 2014-04-15 DigitalOptics Corporation Europe Limited Tone mapping for low-light video frame enhancement
US20090167893A1 (en) * 2007-03-05 2009-07-02 Fotonation Vision Limited RGBW Sensor Array
US8417055B2 (en) 2007-03-05 2013-04-09 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US8264576B2 (en) 2007-03-05 2012-09-11 DigitalOptics Corporation Europe Limited RGBW sensor array
US8199222B2 (en) 2007-03-05 2012-06-12 DigitalOptics Corporation Europe Limited Low-light video frame enhancement
US8878967B2 (en) 2007-03-05 2014-11-04 DigitalOptics Corporation Europe Limited RGBW sensor array
US8212882B2 (en) 2007-03-25 2012-07-03 DigitalOptics Corporation Europe Limited Handheld article with movement discrimination
US9160897B2 (en) 2007-06-14 2015-10-13 Fotonation Limited Fast motion estimation method
US8077234B2 (en) 2007-07-27 2011-12-13 Kabushiki Kaisha Toshiba Image pickup device and method for processing an interpolated color signal
US8989516B2 (en) 2007-09-18 2015-03-24 Fotonation Limited Image processing method and apparatus
US20090073284A1 (en) * 2007-09-19 2009-03-19 Kenzo Isogawa Imaging apparatus and method
US20090295973A1 (en) * 2008-05-20 2009-12-03 Texas Instruments Japan, Ltd. Solid-State Image Pickup Device
US8149296B2 (en) * 2008-05-20 2012-04-03 Texas Instruments Incorporated Solid-state image pickup device
US20110115957A1 (en) * 2008-07-09 2011-05-19 Brady Frederick T Backside illuminated image sensor with reduced dark current
US8119435B2 (en) 2008-07-09 2012-02-21 Omnivision Technologies, Inc. Wafer level processing for backside illuminated image sensors
US20100128149A1 (en) * 2008-11-24 2010-05-27 Samsung Electronics Co., Ltd. Color filter array, image sensor including the color filter array and system including the image sensor
US8350935B2 (en) * 2008-11-24 2013-01-08 Samsung Electronics Co., Ltd. Color filter array, image sensor including the color filter array and system including the image sensor
US8436925B2 (en) * 2008-12-08 2013-05-07 Sony Corporation Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
US20100141812A1 (en) * 2008-12-08 2010-06-10 Sony Corporation Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
EP2194721A3 (en) * 2008-12-08 2013-01-02 Sony Corporation Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
EP2194721A2 (en) 2008-12-08 2010-06-09 Sony Corporation Solid-state imaging device, method for processing signal of solid-state imaging device, and imaging apparatus
WO2010066381A1 (en) * 2008-12-09 2010-06-17 Fotonation Ireland Limited Rgbw sensor array
US8224082B2 (en) 2009-03-10 2012-07-17 Omnivision Technologies, Inc. CFA image with synthetic panchromatic image
US20100232692A1 (en) * 2009-03-10 2010-09-16 Mrityunjay Kumar Cfa image with synthetic panchromatic image
US8068153B2 (en) 2009-03-27 2011-11-29 Omnivision Technologies, Inc. Producing full-color image using CFA image
US20100245636A1 (en) * 2009-03-27 2010-09-30 Mrityunjay Kumar Producing full-color image using cfa image
US20100265370A1 (en) * 2009-04-15 2010-10-21 Mrityunjay Kumar Producing full-color image with reduced motion blur
US8045024B2 (en) 2009-04-15 2011-10-25 Omnivision Technologies, Inc. Producing full-color image with reduced motion blur
US8203633B2 (en) * 2009-05-27 2012-06-19 Omnivision Technologies, Inc. Four-channel color filter array pattern
US20100302423A1 (en) * 2009-05-27 2010-12-02 Adams Jr James E Four-channel color filter array pattern
US8237831B2 (en) * 2009-05-28 2012-08-07 Omnivision Technologies, Inc. Four-channel color filter array interpolation
US8125546B2 (en) 2009-06-05 2012-02-28 Omnivision Technologies, Inc. Color filter array pattern having four-channels
US20100309350A1 (en) * 2009-06-05 2010-12-09 Adams Jr James E Color filter array pattern having four-channels
US8253832B2 (en) 2009-06-09 2012-08-28 Omnivision Technologies, Inc. Interpolation for four-channel color filter array
US20100309347A1 (en) * 2009-06-09 2010-12-09 Adams Jr James E Interpolation for four-channel color filter array
US9380243B2 (en) * 2009-09-15 2016-06-28 Samsung Electronics Co., Ltd. Image sensor for outputting RGB Bayer signal through internal conversion and image processing apparatus including the same
US20140049670A1 (en) * 2009-09-15 2014-02-20 Bum Suk Kim Image sensor for outputting RGB Bayer signal through internal conversion and image processing apparatus including the same
US20110176036A1 (en) * 2010-01-15 2011-07-21 Samsung Electronics Co., Ltd. Image interpolation method using bayer pattern conversion, apparatus for the same, and recording medium recording the method
US8576296B2 (en) * 2010-01-15 2013-11-05 Samsung Electronics Co., Ltd. Image interpolation method using Bayer pattern conversion, apparatus for the same, and recording medium recording the method
US20160112659A1 (en) * 2012-01-24 2016-04-21 Sony Corporation Image processing apparatus and image processing method, and program
US9445022B2 (en) * 2012-01-24 2016-09-13 Sony Corporation Image processing apparatus and image processing method, and program
US9699429B2 (en) 2012-03-27 2017-07-04 Sony Corporation Image processing apparatus, imaging device, image processing method, and program for reducing noise or false colors in an image
US10200664B2 (en) 2012-03-27 2019-02-05 Sony Corporation Image processing apparatus, image device, image processing method, and program for reducing noise or false colors in an image
US20140253766A1 (en) * 2013-03-08 2014-09-11 Kabushiki Kaisha Toshiba Solid-state imaging device
US9191636B2 (en) * 2013-03-08 2015-11-17 Kabushiki Kaisha Toshiba Solid-state imaging device having varying pixel exposure times
US20140328538A1 (en) * 2013-05-03 2014-11-06 Samsung Electronics Co., Ltd. Image signal processing device and method and image processing system using the same
US9324132B2 (en) * 2013-05-03 2016-04-26 Samsung Electronics Co., Ltd. Image signal processing device, image processing system having the same, and image signal processing method using the image signal processing device
CN106791461A (en) * 2016-11-25 2017-05-31 Vivo Mobile Communication Co Ltd Exposure control method, exposure control circuit and mobile terminal
EP4036617A4 (en) * 2019-09-26 2022-11-16 Sony Semiconductor Solutions Corporation Imaging device

Also Published As

Publication number Publication date
US7911507B2 (en) 2011-03-22
JP2008172289A (en) 2008-07-24
CN101222642A (en) 2008-07-16
CN101222642B (en) 2011-09-21
JP5085140B2 (en) 2012-11-28
TWI399086B (en) 2013-06-11
TW200836553A (en) 2008-09-01

Similar Documents

Publication Publication Date Title
US7911507B2 (en) Solid-state image pickup device in which the saturated signal quantity of a W pixel is controlled
US7586523B2 (en) Amplification-type CMOS image sensor of wide dynamic range
KR101464750B1 (en) Solid-state imaging device, signal processing device and signal processing method for solid-state imaging device, and imaging apparatus
US7969484B2 (en) Solid-state image sensing device
US7786921B2 (en) Data processing method, data processing apparatus, semiconductor device, and electronic apparatus
US7800526B2 (en) Data processing method, semiconductor device for detecting physical quantity distribution, and electronic apparatus
KR101424033B1 (en) Solid-state imaging device, method for driving solid-state imaging device, and imaging device
JP4691930B2 (en) Physical information acquisition method, physical information acquisition device, physical quantity distribution sensing semiconductor device, program, and imaging module
US8259189B2 (en) Electronic camera
US20080218598A1 (en) Imaging method, imaging apparatus, and driving device
US20070076269A1 (en) Imaging unit and image sensor
KR20130138360A (en) Image processing apparatus, image pickup apparatus, image processing method, and program
JP2010268529A (en) Solid-state imaging apparatus and electronic apparatus
KR101939402B1 (en) Solid-state imaging device and driving method thereof, and electronic apparatus using the same
JP4745735B2 (en) Image input apparatus and control method thereof
JP2008263547A (en) Imaging apparatus
JP4501350B2 (en) Solid-state imaging device and imaging device
JP2014241581A (en) Imaging apparatus, and control method and program for imaging apparatus
JP2008042298A (en) Solid-state image pickup device
JP2009055433A (en) Imaging apparatus
JP6160139B2 (en) Imaging apparatus and method
KR100761376B1 (en) Image sensor with expanding dynamic range
JP2014175778A (en) Imaging apparatus and imaging method
JP2012235534A (en) Imaging apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EGAWA, YOSHITAKA;HONDA, HIROTO;IIDA, YOSHINORI;AND OTHERS;REEL/FRAME:020574/0654;SIGNING DATES FROM 20080128 TO 20080129

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EGAWA, YOSHITAKA;HONDA, HIROTO;IIDA, YOSHINORI;AND OTHERS;SIGNING DATES FROM 20080128 TO 20080129;REEL/FRAME:020574/0654

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: 7.5 YR SURCHARGE - LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1555); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20230322