WO2010131533A1 - Image capturing apparatus and control method for the same - Google Patents

Image capturing apparatus and control method for the same

Info

Publication number
WO2010131533A1
Authority
WO
WIPO (PCT)
Prior art keywords
correction
pixel
pixels
pixel area
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/JP2010/056275
Other languages
English (en)
French (fr)
Inventor
Mie Ishii
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to CN201080020918.2A priority Critical patent/CN102422633B/zh
Priority to US13/255,923 priority patent/US8792021B2/en
Publication of WO2010131533A1 publication Critical patent/WO2010131533A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/67Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N25/671Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction
    • H04N25/677Noise processing, e.g. detecting, correcting, reducing or removing noise applied to fixed-pattern noise, e.g. non-uniformity of response for non-uniformity detection or correction for reducing the column or line fixed pattern noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/618Noise processing, e.g. detecting, correcting, reducing or removing noise for random or high-frequency noise

Definitions

  • the present invention relates to noise correction in an image capturing apparatus, and in particular relates to the correction of stripe noise.
  • CMOS image sensors have often been used in digital single-lens reflex cameras and video cameras.
  • An increase in the number of pixels, an increase in image capturing speed, and an increase in ISO speed (an improvement of sensitivity) have been required for such CMOS image sensors.
  • Pixel size tends to become smaller due to an increase in the number of pixels, and this means that less electric charge can be accumulated in each pixel. Meanwhile, in order to accommodate an increase in ISO speed, a larger gain needs to be applied to the obtained electric charge. Although the original optical signal component is amplified when gain is applied, noise generated by circuits and the like is also amplified, and therefore high ISO speed images have more random noise than low ISO speed images. Also, one method of realizing high-speed image capturing is multichannelization, in which the image sensor is provided with a plurality of output paths and readout is performed simultaneously for a plurality of pixels. However, since the amount of noise varies depending on the output path, there is the problem that the amount of noise differs for each CH (channel).
  • FIG. 9 shows an overall layout of the CMOS image sensor.
  • the CMOS image sensor includes an aperture pixel area (effective pixel area) 903 having aperture pixels (effective pixels), and a vertical optical black area (VOB, first reference pixel area) 902 and a horizontal optical black area (HOB, second reference pixel area) 901 that have shielded pixels (reference pixels).
  • the HOB 901 is provided adjacent to the head (on the left side) of the aperture pixel area 903 in the horizontal direction, and is an area shielded so that light does not enter.
  • the VOB 902 is provided adjacent to the head (on the top side) of the aperture pixel area 903 in the vertical direction, and is an area shielded so that light does not enter.
  • the aperture pixel area 903 and the optical black areas 901 and 902 have the same structure, and the aperture pixel area 903 is not shielded, whereas the optical black areas 901 and 902 are shielded.
  • the pixels in the optical black areas are called OB pixels.
  • OB pixels are used to obtain a reference signal whose signal level is a reference, that is to say a black reference signal.
  • the aperture pixels of the aperture pixel area 903 each accumulate an electric charge generated according to incident light, and output the electric charge.
  • FIG. 10 shows an example of a circuit of a unit pixel (corresponding to one pixel) in the CMOS image sensor.
  • a photodiode (hereinafter, called a PD) 1001 receives an optical image formed by an imaging lens, generates an electric charge, and accumulates the electric charge.
  • Reference numeral 1002 indicates a transfer switch that is configured by a MOS transistor.
  • Reference numeral 1004 indicates a floating diffusion (hereinafter, called an FD).
  • the electric charge accumulated by the PD 1001 is transferred to the FD 1004 via the transfer MOS transistor 1002, and then converted to a voltage and output from a source follower amplifier 1005.
  • Reference numeral 1006 indicates a selection switch that collectively outputs one row-worth of pixel signals to a vertical output line 1007.
  • Reference numeral 1003 indicates a reset switch that, with use of a power source VDD, resets the FD 1004.
  • FIG. 11 is a block diagram showing an exemplary configuration of a CMOS image sensor. Note that although FIG. 11 shows a 3 × 3 pixel configuration, normally the number of pixels is high, such as several millions or several tens of millions.
  • a vertical shift register 1101 outputs signals from row select lines Pres1, Ptx1, Psel1, and the like to a pixel area 1108.
  • the pixel area 1108 has the configuration shown in FIG. 9, and has a plurality of pixel cells Pixel. Even-numbered columns and odd-numbered columns of the pixel cells Pixel output pixel signals to vertical signal lines of CH1 and CH2 respectively.
  • a constant current source 1107 is connected as a load to the vertical signal lines.
  • a readout circuit 1102 receives an input of a pixel signal from a vertical signal line, outputs the pixel signal to a differential amplifier 1105 via an n-channel MOS transistor 1103, and outputs a noise signal to the differential amplifier 1105 via an n-channel MOS transistor 1104.
  • a horizontal shift register 1106 controls the switching on/off of the transistors 1103 and 1104, and the differential amplifier 1105 outputs a difference between the pixel signal and the noise signal. Note that although the output path configuration in FIG. 11 is a two-channel configuration including CH1 and CH2, high-speed processing is made possible by increasing the number of output paths. For example, if a total of eight output paths (in other words, four output paths both above and below in the image sensor configuration) are provided, eight pixels can be processed at the same time.
  • Using the differential amplifier described above enables obtaining an output signal from which noise unique to the CMOS image sensor has been removed.
  • a substantially uniform level difference occurs in each column. This is called vertical pattern noise.
  • the pixels have a common power source and GND. If the power source and GND fluctuate during a readout operation, the pixels read out at that time have a substantially uniform level difference. Normally, readout is performed in an image sensor row-by-row, from left to right, beginning at the top left of the screen. The level difference occurring due to fluctuation of the power source and the GND appears as a different level difference for substantially each row. This is called horizontal pattern noise.
  • Japanese Patent Laid-Open No. 7-67038 discloses a method of calculating a line average value for pixel signals of OB pixels, and subtracting the line average value from the pixel signals of aperture pixels in that row.
  • correction methods of this kind are disclosed in Japanese Patent Laid-Open No. 7-67038, Japanese Patent Laid-Open No. 2005-167918, and the like.
  • according to Japanese Patent Laid-Open No. 2005-167918, if the stripe noise is reduced to between 1/8 and 1/10 of the random noise, the stripe noise becomes buried in the random noise, and thus becomes difficult to see.
  • Japanese Patent Laid-Open No. 2005-167918 discloses a method in which noise is mitigated by adding random noise.
  • an image capturing apparatus includes: an image sensor having an effective pixel area composed of effective pixels that photoelectrically convert an object image, and a reference pixel area composed of reference pixels that output pixel signals to be a reference; a correction means for correcting pixel signals output from the effective pixel area with use of a correction value calculated based on the pixel signals output from the reference pixel area; and a determination means for determining whether correction is to be performed by the correction means, in accordance with values of a statistical measure of the pixel signals output from the reference pixel area.
  • a control method for an image capturing apparatus is a control method for an image capturing apparatus provided with an image sensor having an effective pixel area composed of effective pixels that photoelectrically convert an object image, and a reference pixel area composed of reference pixels that output pixel signals to be a reference, the control method including the steps of: calculating values of a statistical measure of the pixel signals output from the reference pixel area; calculating a correction value for correcting pixel signals output from the effective pixel area, based on the pixel signals output from the reference pixel area; correcting the pixel signals output from the effective pixel area with use of the correction value; and determining whether correction is to be performed in the correction step, according to the values of a statistical measure.
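  • As a rough, non-authoritative illustration of the claimed flow, the following Python sketch computes a statistical measure from the reference pixels, uses it to decide whether to correct, and then corrects each row of the effective pixel area; the function name, array layout, and parameters are assumptions made for illustration, not values taken from the patent.

```python
import numpy as np

def correct_stripe_noise(raw, hob_width, vob_height, black_ref,
                         sigma_threshold=None, alpha=1.0):
    """Hedged sketch: raw is the full readout with VOB rows on top and
    HOB columns on the left; black_ref is the preset black reference level."""
    vob = raw[:vob_height, hob_width:]      # first reference pixel area (VOB)
    sigma_vob = vob.std()                   # statistical measure of reference pixels

    # Determination: skip correction when the reference statistic is too large.
    if sigma_threshold is not None and sigma_vob > sigma_threshold:
        return raw.astype(np.float64)       # correction not performed

    corrected = raw.astype(np.float64)
    for i in range(vob_height, raw.shape[0]):       # effective rows only
        hob_row = raw[i, :hob_width]                # second reference pixel area (HOB)
        v_i = alpha * (hob_row.mean() - black_ref)  # per-row correction value
        corrected[i, hob_width:] -= v_i             # correct effective pixels of row i
    return corrected
```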
  • FIG. 1 is an overall block diagram showing a configuration of an image capturing apparatus according to Embodiment 1 of the present invention.
  • FIG. 2 is a cross-sectional view of a CMOS image sensor.
  • FIG. 3 is a diagram showing an example of a circuit corresponding to one column in a readout circuit block shown in FIG. 11.
  • FIG. 4 is a timing chart showing an example of operations performed by the CMOS image sensor.
  • FIG. 5 is a diagram showing an example of an image obtained by the image capturing apparatus.
  • FIG. 6 is a flowchart of horizontal stripe noise correction processing according to Embodiment 1 of the present invention.
  • FIG. 7 is a flowchart of horizontal stripe noise correction processing according to Embodiment 2 of the present invention.
  • FIG. 8 is a flowchart of horizontal stripe noise correction processing according to Embodiment 3 of the present invention.
  • FIG. 9 is a diagram showing an overall layout of the CMOS image sensor.
  • FIG. 10 is a diagram showing an example of a circuit of a unit pixel (corresponding to one pixel) in the CMOS image sensor.
  • FIG. 11 is a block diagram showing an exemplary configuration of a CMOS image sensor.
  • FIG. 1 is an overall block diagram showing a configuration of an image capturing apparatus according to Embodiment 1 of the present invention.
  • an image sensor 101 is a CMOS image sensor that photoelectrically converts an object image formed by an imaging lens (not shown).
  • An AFE 102 is an analog front end, which is a signal processing circuit that performs amplification, black level adjustment (OB clamp) , and the like on signals from the image sensor 101.
  • the AFE 102 receives an OB clamp timing, an OB clamp target level, and the like from a timing generation circuit 110, and performs processing in accordance with these.
  • the AFE 102 also converts processed analog signals into digital signals.
  • a DFE 103 is a digital front end that receives digital signals of pixels obtained by the conversion performed by the AFE 102, and performs digital processing such as image signal correction and pixel rearrangement.
  • Reference numeral 105 indicates an image processing apparatus that performs developing processing, and also processing such as displaying an image on a display circuit 108 and recording an image to a recording medium 109 via a control circuit 106.
  • the control circuit 106 also receives instructions from a control unit 107 and performs control such as sending instructions to the timing generation circuit 110.
  • a CompactFlash (registered trademark) memory or the like is used as the recording medium 109.
  • a memory circuit 104 is used as a work memory in the developing stage in the image processing apparatus 105.
  • the memory circuit 104 is also used as a buffer memory for when image capturing is performed in succession and developing processing is not completed on time.
  • the control unit 107 includes, for example, a power source switch for starting a digital camera, and a shutter switch that instructs the start of imaging preparation operations such as photometric processing and ranging processing, and the start of a series of image capturing operations for driving a mirror and a shutter, processing signals read out from the image sensor 101, and writing the resulting signals to the recording medium 109.
  • the configurations of pixel areas of the image sensor 101 are similar to the configurations in FIG. 9, and specifically the image sensor 101 includes an aperture pixel area (effective pixel area) 903 having aperture pixels (effective pixels), and a vertical optical black area (VOB, first reference pixel area) 902 and a horizontal optical black area (HOB, second reference pixel area) 901 that have shielded pixels (reference pixels) that are shielded such that light does not enter.
  • FIG. 2 is a cross-sectional view of the CMOS image sensor.
  • An AL1, an AL2, and an AL3 (205, 204, and 203 in FIG. 2) are wiring layers, and are formed of aluminum or the like.
  • the AL3 (203) is also used for light shielding, and a pixel 1 and a pixel 2, which are OB pixels, are shielded by the AL3.
  • MLs (201) are microlenses that converge light onto photodiodes PD (207).
  • CFs (202) are color filters.
  • PTXs (206) are transfer switches that transfer the electric charge accumulated in the PDs (207) to the FDs (208).
  • the circuit of a unit pixel (corresponding to one pixel) of the CMOS image sensor according to the present embodiment is similar to the configuration in FIG. 10, and therefore a detailed description thereof has been omitted.
  • the overall configuration of the CMOS image sensor according to the present embodiment is similar to the configuration in FIG. 11.
  • the gate of the transfer MOS transistor 1002 in FIG. 10 is connected to a first row select line Ptx1 (FIG. 11) disposed extending in the horizontal direction.
  • the gates of similar transfer MOS transistors 1002 of other pixel cells Pixel disposed in the same row are also connected to the first row select line Ptx1 in common.
  • the gate of a reset MOS transistor 1003 in FIG. 10 is connected to a second row select line Pres1 (FIG. 11) disposed extending in the horizontal direction.
  • the gates of similar reset MOS transistors 1003 of other pixel cells Pixel disposed in the same row are also connected to the second row select line Pres1 in common.
  • the gate of a select MOS transistor 1006 in FIG. 10 is connected to a third row select line Psel1 disposed extending in the horizontal direction.
  • the gates of similar select MOS transistors 1006 of other pixel cells Pixel disposed in the same row are also connected to the third row select line Psel1 in common, and the first to third row select lines Ptx1, Pres1, and Psel1 are connected to a vertical shift register 1101, and are thus driven.
  • pixel cells Pixel and row select lines having a similar configuration are provided in the remaining rows shown in FIG. 11 as well. These row select lines include row select lines Ptx2 and Ptx3, Pres2 and Pres3, and Psel2 and Psel3, which are likewise driven by the vertical shift register 1101.
  • the source of the select MOS transistor 1006 is connected to a terminal Vout of a vertical signal line disposed extending in the vertical direction.
  • FIG. 3 is a diagram showing an example of a circuit corresponding to one column in the readout circuit 1102 block shown in FIG. 11. The portion enclosed in dashed lines is the portion corresponding to the column, and the terminal Vout is connected to each vertical signal line.
  • FIG. 4 is a timing chart showing an example of operations performed by the CMOS image sensor.
  • the gate line Pres1 of the reset MOS transistor 1003 changes to the high level prior to the readout of the signal electric charge from the photodiode 1001. Accordingly, the gate of the amplification MOS transistor is reset to a reset power source voltage.
  • a gate line PcOr (FIG. 3) of a clamp switch changes to the high level, and thereafter the gate line Psel1 of the select MOS transistor 1006 changes to the high level.
  • reset signals (noise signals) having reset noise superimposed thereon are read out to the vertical signal line Vout, and clamped by clamp capacitors CO in the columns.
  • the gate line PcOr of the clamp switch returns to the low level, and thereafter a gate line Pctn of a transfer switch on the noise signal side changes to the high level, and the reset signals are held in noise holding capacitors Ctn provided in the columns.
  • a gate line Pcts of a transfer switch on the pixel signal side is changed to the high level, and thereafter the gate line Ptx1 of the transfer MOS transistor 1002 changes to the high level, and the signal electric charge of the photodiode 1001 is transferred to the gate of a source follower amplifier 1005 and also read out to the vertical signal line Vout at the same time.
  • the gate line Ptx1 of the transfer MOS transistor 1002 returns to the low level, and thereafter the gate line Pcts of the transfer switch on the pixel signal side changes to the low level. Accordingly, changed portions (optical signal components) from the reset signals are read out to signal holding capacitors Cts provided in the columns. As a result of the operations up to this point, the signal electric charges of the pixels Pixel connected in the first row are held in the holding capacitors Ctn and Cts connected in the respective columns.
  • the gates of horizontal transfer switches in the columns sequentially change to the high level in accordance with signals Ph supplied from a horizontal shift register 1106.
  • the voltages held in the signal holding capacitors Ctn and Cts are sequentially read out by horizontal output lines Chn and Chs, difference processing is performed thereon by an output amplifier, and the resulting signals are sequentially output to an output terminal OUT.
  • the horizontal output lines Chn and Chs are reset to reset voltages VCHRN and VCHRS by a reset switch. This completes the readout of the pixel cells Pixel connected in the first row.
  • FIG. 5 shows an example of images obtained by the processing described above. There is a time difference between Pctn and Pcts, and if the power source and GND fluctuate during such time, the signal level of the entire row uniformly changes. Horizontal stripe noise appears since such fluctuation is different for each row. Since more gain is applied when high ISO speed imaging is performed (when high sensitivity imaging is performed), the noise is also amplified, and therefore the horizontal stripe noise becomes prominent.
  • FIG. 6 is a flowchart of horizontal stripe noise correction processing according to Embodiment 1 of the present invention.
  • the following is a description of the stripe noise correction method according to the present embodiment with reference to this flowchart. Note that the description in the present embodiment is based on the assumption that correction is performed after acquiring an image that has not been developed yet.
  • the correction value and correction coefficient that appear in the following description are defined as follows.
  • the correction value is a value obtained from HOB signals for each row, and the pixels in each row are corrected in accordance with expressions that are described later.
  • the correction coefficient is defined as a coefficient by which a shift amount is multiplied, where the shift amount is an amount of shift from a black reference value calculated from the HOB signals.
  • first, readout is started in step S601. The readout is performed from left to right row-by-row, starting at the upper left of the pixel configuration layouts shown in FIGS. 5 and 9.
  • the VOB area is provided at the top of the screen in the pixel configurations in FIGS. 5 and 9, and first a standard deviation σVOB of pixel signals output from the VOB area is calculated (step S602) (first calculation step).
  • although the pixel area targeted for calculation may be any area as long as OB pixels are included, it is better for the calculation to be performed using pixel signals from as many pixels as possible (first predetermined area) in order to properly determine the state of the image.
  • σVOB is substantially equal to the standard deviation σ of the overall image (the same applies to σHOB as well; in other words, σVOB ≈ σHOB ≈ σ of the overall image).
  • in step S603, a determination is made as to whether correction is to be performed, based on the calculated value of σVOB. If σVOB is less than or equal to σth_VOB, which is a threshold value set in advance, processing proceeds to step S604, and correction is performed. If σVOB is greater than σth_VOB, processing proceeds to step S609, and correction is not executed. The reason for this is that if σ of the image (here, σVOB) is high, properly obtaining the correction value is difficult, and there is the risk of performing erroneous correction, that is to say, increasing the amount of noise.
  • a correction coefficient α is determined according to the value of σVOB. Normally, since erroneous correction tends to be performed when the amount of shift from the black reference value calculated from the HOB signals is set as the correction value, a favorable correction result is obtained by setting the correction coefficient to a value of 1 or less, determining the correction value, and then executing correction. In particular, there is a stronger tendency for overcorrection to be performed as the number of columns in the HOB decreases, or as the amount of random noise in the image increases. In other words, it is desirable for α to be lower as σVOB is higher (A).
  • the correction coefficient α may be caused to reflect the width of the HOB as well (B).
  • for example, α can be set to 0.5 if the width of the HOB is 100, and to 1.0 if the width of the HOB is 400.
  • the correction coefficient α may be given as a table or a function in the cases of (A) and (B).
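  • Conditions (A) and (B) only require that α decrease as σVOB increases and increase with the HOB width, so one hedged way to express them as a function is sketched below; only the two width examples (0.5 at a width of 100, 1.0 at a width of 400) come from the text, while the σVOB breakpoints are placeholder assumptions.

```python
import numpy as np

def correction_coefficient(sigma_vob, hob_width,
                           sigma_breaks=(2.0, 8.0), alpha_at_breaks=(1.0, 0.5)):
    """Illustrative coefficient satisfying (A) and (B); breakpoints are assumed."""
    # (A) alpha becomes lower as sigma_vob becomes higher (linear interpolation).
    alpha_a = float(np.interp(sigma_vob, sigma_breaks, alpha_at_breaks))
    # (B) alpha reflects the HOB width: 0.5 at width 100 and 1.0 at width 400
    #     (the example values in the text), interpolated in between.
    alpha_b = float(np.interp(hob_width, (100, 400), (0.5, 1.0)))
    return min(alpha_a, alpha_b)    # never exceeds 1, per the text's guidance
```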
  • in step S605, in order to determine the correction value for an i-th row, an integrated value Si of pixel signals output from the HOB (second predetermined area) is calculated (i being a vertical coordinate) (second calculation step). Since correction only needs to be performed for effective pixels, step S605 may begin to be executed when the readout row reaches the effective pixel area. Also, in the case in which there are multiple channels as the output paths as in FIG. 11, an integrated value of pixel signals output from the HOB may be calculated for each output path, or, since horizontal stripes are constant for each row regardless of the CH (channel) and color, an integrated value of all pixel signals output from HOB pixels in a row, regardless of the output path, may be calculated. Alternatively, in consideration of simplifying the calculation and the like, an integrated value may be calculated for each of the colors R, G, and B.
  • in step S606, a correction value Vi for the i-th row is determined (i being a vertical coordinate).
  • the correction value is calculated according to expression (1) (third calculation step). Specifically, an average value is calculated by dividing the integrated value calculated in step S605 by the number of data pieces used in the calculation of the integrated value, a black reference level set in advance is subtracted from the average value, and the result is multiplied by the correction coefficient α.
  • correction value Vi = α × (Si / number of data pieces − black reference value) ... (1)
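  • A minimal sketch of Expression (1), assuming the HOB pixel signals of the i-th row are available as a sequence (the names below are mine, not the patent's):

```python
def correction_value(hob_row_signals, black_reference, alpha):
    """Expression (1): Vi = alpha * (Si / number of data pieces - black reference)."""
    s_i = float(sum(hob_row_signals))            # integrated value Si (step S605)
    n = len(hob_row_signals)                     # number of data pieces
    return alpha * (s_i / n - black_reference)   # correction value Vi (step S606)
```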
  • a correction value that reflects the state of random noise in an image is determined, which enables the execution of stripe noise correction without newly increasing the amount of noise.
  • in the case where the amount of random noise is large, correction is not performed, or processing in which the correction amount is reduced by reducing the correction coefficient is executed, thus enabling the execution of stripe noise correction without newly increasing the amount of noise.
  • the reason for this is that the amount of noise often ends up being increased when the calculated correction value is larger than the proper correction value due to a large amount of random noise, and the present embodiment addresses this issue.
  • although an integrated value of the HOB is calculated in step S605 using only the signals of pixels in the row on which correction is to be performed in the present embodiment, the integrated value may also be calculated using HOB pixels in several rows above and below.
  • if a pixel having an extremely abnormal value, such as a defective pixel, is included in the HOB, a more proper correction value can be calculated by adding processing such as clipping such a pixel to a certain level before calculating the integrated value, or not using (skipping) such a pixel in the calculation of the integrated value.
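  • The clipping and skipping options mentioned above could be sketched as follows; the clip level and skip threshold are assumed parameters, not values given in the patent.

```python
import numpy as np

def robust_row_integral(hob_row_signals, clip_level=None, skip_above=None):
    """Integrate one row of HOB signals while suppressing abnormal pixels."""
    data = np.asarray(hob_row_signals, dtype=np.float64)
    if skip_above is not None:
        data = data[data <= skip_above]      # skip pixels with abnormal values
    if clip_level is not None:
        data = np.minimum(data, clip_level)  # clip remaining pixels to a fixed level
    return data.sum(), data.size             # Si and the number of data pieces used
```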
  • although a description is given in the present embodiment in which correction processing is performed after acquiring an image, such processing may be performed in the AFE 102 at the same time as readout.
  • Embodiment 2
  • the following is a description of Embodiment 2 of the present invention with reference to the flowchart shown in FIG. 7. Note that the processing up to and including the acquisition of an image that has not been developed yet is similar to that in Embodiment 1, and therefore a description thereof has been omitted.
  • first, readout is started in step S701. Similarly to Embodiment 1, the readout is performed from left to right row-by-row, starting at the upper left of the pixel configuration layouts shown in FIGS. 5 and 9.
  • the standard deviation σVOB of pixel signals output from the VOB area is calculated (step S702).
  • although the pixel area targeted for calculation may be any area as long as OB pixels are included, it is better for the calculation to be performed using as many pixels as possible in order to properly determine the state of the image.
  • in step S703, the integrated value Si of the pixel signals output from the HOB pixels in the i-th row is calculated and held in a memory, and the integrated value calculation is executed through to the last row of the image.
  • the pixel signals may be demultiplexed into channels before executing the integrated value calculation. Also, in the calculation of the integrated value Si of the pixel signals output from the HOB pixels in the i-th row, not only the pixel signals output from the HOB pixels in the i-th row, but also signals output from HOB pixels in several higher/lower rows may be used.
  • in step S704, a standard deviation σVline of the integrated values S0 to Sn of the pixel signals output from the HOB that were calculated in step S703 is obtained (fourth calculation step).
  • in step S705, a determination is made as to whether correction is to be performed, based on the value of σVOB obtained in step S702 and the value of σVline obtained in step S704.
  • the ratio σVline/σVOB is calculated, and if the result is greater than or equal to a determination value K, processing proceeds to step S706, and correction is executed. If the result is less than the determination value K, processing proceeds to step S710, and correction is not executed.
  • σVline reflects the magnitude and amount of stripe noise in the image, and if σVline is approximately greater than or equal to 0.1 times σVOB, which reflects the random noise component of the image, the stripe noise can be confirmed visually as well, and therefore correction is executed.
  • if σVline is less than 0.1 times σVOB, the stripe noise is not prominent. In this case, correction is not executed, since there is the risk of undesirably creating stripe noise if correction is executed.
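  • A hedged sketch of the determination in steps S702 to S705: here σVline is taken over the per-row averages (Si divided by the pixel count) so that it is on the same scale as σVOB, which is how the 0.1-times comparison above is read; the array layout and function name are assumptions.

```python
import numpy as np

def should_correct(hob_area, vob_area, k=0.1):
    """Embodiment-2-style determination: correct only if stripes stand out of the noise.

    hob_area : 2-D array of HOB pixels, one row per image row (rows 0..n).
    vob_area : 2-D array of VOB pixels.
    """
    sigma_vob = vob_area.std()                    # random noise estimate (step S702)
    s = hob_area.sum(axis=1)                      # integrated values S0..Sn (step S703)
    sigma_vline = (s / hob_area.shape[1]).std()   # spread of the row levels (step S704)
    return sigma_vline / sigma_vob >= k           # determination with K ~ 0.1 (step S705)
```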
  • the correction coefficient α is determined according to the value of σVOB (step S706).
  • the correction coefficient is calculated similarly to as in Embodiment 1.
  • the correction value Vi for the i-th row is determined (i being a vertical coordinate).
  • the correction value is calculated in accordance with Expression (1) shown in Embodiment 1. Specifically, an average value is calculated by dividing the integrated value calculated in step S703 by the number of data pieces used in the calculation of the integrated value, and then a black reference level set in advance is subtracted from the average value. The result is then multiplied by the correction coefficient α determined in step S706, thus obtaining the correction value for that row.
  • in step S708, correction is performed on the effective pixels in the i-th row in accordance with Expression (2), with use of the correction value Vi.
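  • Expression (2) itself is not reproduced in this excerpt; assuming it simply subtracts the row correction value from every effective pixel of that row, the application step could look like the following sketch.

```python
import numpy as np

def apply_row_correction(image, row_index, v_i, hob_width):
    """Assumed form of Expression (2): subtract Vi from each effective pixel of row i."""
    image = np.asarray(image, dtype=np.float64)   # work in float to avoid truncation
    image[row_index, hob_width:] -= v_i
    return image
```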
  • the following is a description of Embodiment 3 of the present invention with reference to the flowchart shown in FIG. 8. Note that the processing up to and including readout for all pixels is similar to that in Embodiment 1. Also, the processing up to and including the determination of whether to execute correction is substantially the same as in Embodiment 2. First, readout is started in step S801. Similarly to Embodiment 1, the readout is performed from left to right row-by-row, starting at the upper left of the pixel configuration layouts shown in FIGS. 5 and 9. Next, the standard deviation σVOB of pixel signals output from the VOB area is calculated (step S802). Similarly to Embodiment 1, although the pixel area targeted for calculation may be any area as long as OB pixels are included, it is better for the calculation to be performed using as many pixels as possible in order to properly determine the state of the image.
  • in step S803, the integrated value Si and the standard deviation σi of the pixel signals output from the HOB pixels in the i-th row are calculated and held in a memory, and the calculation is executed through to the last row of the image.
  • the pixel signals may be demultiplexed into channels before executing the integrated value calculation. Also, in the calculation of the integrated value Si and the standard deviation σi of signals of the i-th row, not only the pixel signals output from the HOB pixels in the i-th row, but also signals output from HOB pixels in several higher/lower rows may be used.
  • in step S804, the standard deviation σVline of the integrated values S0 to Sn of the pixel signals output from the HOB that were calculated in step S803 is obtained.
  • in step S805, a determination is made as to whether correction is to be performed, based on the value of σVOB obtained in step S802 and the value of σVline obtained in step S804.
  • the ratio σVline/σVOB is calculated, and if the result is greater than or equal to the determination value K, processing proceeds to step S806, and correction is executed. If the result is less than the determination value K, processing proceeds to step S810, and correction is not executed.
  • in step S806, the correction coefficient αi for the i-th row is calculated using the standard deviation σi that was calculated in step S803. If σi is high, the degree of reliability is low, since the integrated value Si of the HOB pixels in the i-th row has been calculated using pixel data that includes a large amount of variation. For this reason, the correction coefficient αi is set to a low value.
  • conversely, if σi is low, the correction coefficient αi is set to 1, or to a number less than 1 but close to 1.
  • correction coefficient αi = β × σVOB / σi ... (4)
  • here, β is a constant that is determined arbitrarily.
  • if the value calculated by Expression (4) exceeds 1, the correction coefficient αi is set to 1, since there is the risk of over-correction if the correction coefficient exceeds 1. Note that the expression for calculating the correction coefficient in the present embodiment is merely an example, and the present invention is not limited to this.
  • the correction value is calculated in accordance with Expression (5). Specifically, an average value is calculated by dividing the integrated value calculated in step S803 by the number of data pieces used in the calculation of the integrated value, and then a black reference level set in advance is subtracted from the average value. The result is then multiplied by the correction coefficient αi determined in step S806, thus obtaining the correction value for that row.
  • correction value Vi = αi × (Si / number of data pieces − black reference value) ... (5)
  • in step S808, correction is performed on the effective pixels in the i-th row in accordance with Expression (2), with use of the correction value Vi.
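  • Putting steps S802 to S808 together in one hedged sketch: the exact form of Expression (4) is partly illegible in this text, so the coefficient below follows the reconstruction αi = β × σVOB / σi capped at 1, and the array layout, β, and K are assumptions rather than values from the patent.

```python
import numpy as np

def embodiment3_correction(image, hob_width, vob_height, black_ref, beta=1.0, k=0.1):
    """Hedged sketch of the Embodiment 3 flow (per-row coefficient alpha_i)."""
    img = image.astype(np.float64)
    vob = img[:vob_height, hob_width:]
    hob = img[vob_height:, :hob_width]          # HOB pixels of the effective rows

    sigma_vob = vob.std()                       # step S802
    s = hob.sum(axis=1)                         # integrated values Si       (step S803)
    sigma_i = hob.std(axis=1)                   # per-row deviations sigma_i (step S803)
    sigma_vline = (s / hob_width).std()         # step S804 (row-level scale)

    if sigma_vline / sigma_vob < k:             # step S805: determination
        return img                              # correction not executed

    # Expression (4) as reconstructed: alpha_i = beta * sigma_vob / sigma_i, capped at 1.
    alpha_i = np.minimum(1.0, beta * sigma_vob / np.maximum(sigma_i, 1e-12))
    v_i = alpha_i * (s / hob_width - black_ref)   # Expression (5): per-row correction value
    img[vob_height:, hob_width:] -= v_i[:, None]  # apply per row (step S808)
    return img
```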
  • the reference pixels included in the reference pixel areas do not need to include photodiodes. In such a case, the reference pixels do not need to be shielded.
  • a determination is made as to whether correction is to be performed, based on the value of the standard deviation of pixel signals that is a reference, thus enabling suppressing the occurrence of new stripe noise in an image due to over-correction. Also, horizontal stripe noise can be effectively corrected by changing the correction coefficient according to the value of the standard deviation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Solid State Image Pick-Up Elements (AREA)
PCT/JP2010/056275 2009-05-11 2010-03-31 Image capturing apparatus and control method for the same Ceased WO2010131533A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201080020918.2A CN102422633B (zh) 2009-05-11 2010-03-31 摄像设备及其控制方法
US13/255,923 US8792021B2 (en) 2009-05-11 2010-03-31 Image capturing apparatus and control method for the same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-114979 2009-05-11
JP2009114979A JP5489527B2 (ja) 2009-05-11 2009-05-11 撮像装置及びその制御方法

Publications (1)

Publication Number Publication Date
WO2010131533A1 true WO2010131533A1 (en) 2010-11-18

Family

ID=43084911

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/056275 Ceased WO2010131533A1 (en) 2009-05-11 2010-03-31 Image capturing apparatus and control method for the same

Country Status (4)

Country Link
US (1) US8792021B2 (en)
JP (1) JP5489527B2 (en)
CN (1) CN102422633B (en)
WO (1) WO2010131533A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139494A (zh) * 2011-12-02 2013-06-05 佳能株式会社 摄像装置

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5787856B2 (ja) * 2012-09-24 2015-09-30 株式会社東芝 固体撮像装置
JP6037170B2 (ja) * 2013-04-16 2016-11-30 ソニー株式会社 固体撮像装置およびその信号処理方法、並びに電子機器
EP3236652A4 (en) * 2014-12-19 2018-07-11 Olympus Corporation Endoscope and endoscope system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006121478A (ja) * 2004-10-22 2006-05-11 Canon Inc 撮像装置
JP2007201735A (ja) * 2006-01-25 2007-08-09 Canon Inc 撮像装置及びその制御方法
WO2007111264A1 (ja) * 2006-03-24 2007-10-04 Nikon Corporation 信号処理方法、信号処理システム、係数生成装置、およびデジタルカメラ
JP2008067060A (ja) * 2006-09-07 2008-03-21 Canon Inc 撮像装置
JP2009081528A (ja) * 2007-09-25 2009-04-16 Nikon Corp 撮像装置

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0767038A (ja) 1993-08-24 1995-03-10 Sony Corp 固体撮像装置
EP1143706A3 (en) * 2000-03-28 2007-08-01 Fujitsu Limited Image sensor with black level control and low power consumption
US7113210B2 (en) * 2002-05-08 2006-09-26 Hewlett-Packard Development Company, L.P. Incorporating pixel replacement for negative values arising in dark frame subtraction
JP4383827B2 (ja) * 2003-10-31 2009-12-16 キヤノン株式会社 撮像装置、白傷補正方法、コンピュータプログラム、及びコンピュータ読み取り可能な記録媒体
JP4144517B2 (ja) 2003-12-05 2008-09-03 ソニー株式会社 固体撮像装置、撮像方法
JP2006025148A (ja) 2004-07-07 2006-01-26 Sony Corp 信号処理装置及び方法
JP4396425B2 (ja) * 2004-07-07 2010-01-13 ソニー株式会社 固体撮像装置及び信号処理方法
JP4742652B2 (ja) * 2005-04-14 2011-08-10 富士フイルム株式会社 撮像装置
JP2007027845A (ja) * 2005-07-12 2007-02-01 Konica Minolta Photo Imaging Inc 撮像装置
JP4827524B2 (ja) * 2005-12-26 2011-11-30 キヤノン株式会社 撮像装置
US7760258B2 (en) * 2007-03-07 2010-07-20 Altasens, Inc. Apparatus and method for stabilizing image sensor black level

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006121478A (ja) * 2004-10-22 2006-05-11 Canon Inc 撮像装置
JP2007201735A (ja) * 2006-01-25 2007-08-09 Canon Inc 撮像装置及びその制御方法
WO2007111264A1 (ja) * 2006-03-24 2007-10-04 Nikon Corporation 信号処理方法、信号処理システム、係数生成装置、およびデジタルカメラ
JP2008067060A (ja) * 2006-09-07 2008-03-21 Canon Inc 撮像装置
JP2009081528A (ja) * 2007-09-25 2009-04-16 Nikon Corp 撮像装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103139494A (zh) * 2011-12-02 2013-06-05 佳能株式会社 摄像装置
US9432603B2 (en) 2011-12-02 2016-08-30 Canon Kabushiki Kaisha Imaging apparatus

Also Published As

Publication number Publication date
US8792021B2 (en) 2014-07-29
JP5489527B2 (ja) 2014-05-14
CN102422633A (zh) 2012-04-18
CN102422633B (zh) 2015-01-21
US20120044390A1 (en) 2012-02-23
JP2010263585A (ja) 2010-11-18

Similar Documents

Publication Publication Date Title
US11089256B2 (en) Image sensor with correction of detection error
KR101494243B1 (ko) 촬상장치 및 그 구동방법
US7999866B2 (en) Imaging apparatus and processing method thereof
US8975569B2 (en) Solid-state imaging device, driving method thereof, and solid-state imaging system to perform difference processing using effective and reference pixels
US7679658B2 (en) Solid-state image pickup apparatus
US9544512B2 (en) Image capturing apparatus and method of reading out pixel signals from an image sensor
US9093351B2 (en) Solid-state imaging apparatus
US8422819B2 (en) Image processing apparatus having a noise reduction technique
JP5322816B2 (ja) 撮像装置およびその制御方法
US8023022B2 (en) Solid-state imaging apparatus
JP2012231333A (ja) 撮像装置及びその制御方法、プログラム
US7630007B2 (en) Driving method for solid-state imaging device and solid-state imaging device
US8339482B2 (en) Image capturing apparatus with correction using optical black areas, control method therefor and program
US9800810B2 (en) Imaging apparatus and imaging system
US10321075B2 (en) Imaging apparatus and imaging system
US8792021B2 (en) Image capturing apparatus and control method for the same
US7787036B2 (en) Imaging apparatus configured to correct noise
JP5224900B2 (ja) 固体撮像装置
JP6071323B2 (ja) 撮像装置、その制御方法、および制御プログラム

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080020918.2

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10774791

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 13255923

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10774791

Country of ref document: EP

Kind code of ref document: A1