WO2021090663A1 - Image capture device and method for correcting pixel signal in image capture device - Google Patents

Info

Publication number
WO2021090663A1
Authority
WO
WIPO (PCT)
Application number
PCT/JP2020/039150
Other languages
French (fr)
Japanese (ja)
Inventor
Susumu Oki (進 大木)
Original Assignee
Sony Semiconductor Solutions Corporation
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2021090663A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/62 Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
    • H04N 25/70 SSIS architectures; Circuits associated therewith

Definitions

  • the present technology relates to an imaging device and a pixel signal correction processing method in the imaging device.
  • Patent Document 1 discloses a technique for reducing deterioration of color reproducibility due to incomplete transfer, particularly deterioration of color reproducibility at high sensitivity.
  • specifically, Patent Document 1 discloses an imaging apparatus in which a color ratio correction coefficient for each color, generated using output signals of different colors included in the imaging signals that the imaging means outputs at different exposure amounts, is stored in a storage means for each settable shooting sensitivity; when a subject is shot, a correction means that corrects the imaging signal output from the imaging means is controlled using the stored coefficients, the color ratio correction coefficients corresponding to the shooting sensitivity set for the shot are acquired from the storage means, and the output signals of different colors included in the imaging signal of the subject are corrected using the color ratio correction coefficient of the corresponding color among the acquired coefficients.
  • Patent Document 2 discloses a technique for improving the accuracy of a pixel signal output from an image pickup element included in an image pickup device. Specifically, Patent Document 2 discloses a device comprising a storage means that stores in advance, for each pixel, incomplete-transfer charge amount information corresponding to the amount of charge that remains in the charge storage unit when charge is transferred from the charge storage unit to the charge holding unit of each of the plurality of pixels, and a correction means that corrects the pixel signal output by the signal output unit based on the incomplete-transfer charge amount information stored by the storage means.
  • techniques such as those disclosed in the above documents had to acquire a correction coefficient or correction data for each pixel in advance and store it in a storage means in order to correct the deterioration of the linearity of the pixel characteristics caused by incomplete charge transfer or reduced transfer efficiency. There is also the problem that the amount of correction data to be acquired and stored in advance becomes enormous when changes in the operating temperature and the exposure time must be accommodated.
  • an object of the present technology is to provide an imaging device and a pixel signal correction processing method in the imaging device that can secure the linearity of the pixel characteristics without lowering the charge transfer efficiency.
  • the present technology for solving the above problems is configured to include the following invention-specific matters or technical features.
  • the present technology is an imaging device including a plurality of pixels capable of accumulating electric charges generated according to the amount of incident light.
  • the plurality of pixels include a first pixel formed so as to have a first saturated charge amount, and a second pixel formed so as to have a second saturated charge amount smaller than the first saturated charge amount.
  • the correction processing method includes reading out the electric charge from a plurality of pixels accumulating electric charges generated according to the amount of incident light, and correcting a signal based on the amount of the electric charge read out from the plurality of pixels.
  • the plurality of pixels include a first pixel formed to have a first saturated charge amount and a second pixel formed to have a second saturated charge amount smaller than the first saturated charge amount.
  • the correction corrects a signal based on the amount of electric charge transferred from any of the plurality of pixels, based on a sensitivity ratio based on the amount of electric charge transferred from the first pixel and a sensitivity ratio based on the amount of electric charge transferred from the second pixel.
  • the term "means" does not simply denote a physical means, and also covers the case where the function of that means is realized by software. Further, the function of one means may be realized by two or more physical means, and the functions of two or more means may be realized by one physical means.
  • the term "system" means a logical assembly of a plurality of devices (or functional modules that realize specific functions); whether or not each device or functional module is contained in a single housing is not particularly limited.
  • FIG. 1 is a diagram showing an example of a configuration of an imaging device according to an embodiment of the present technology.
  • the image pickup device 1 includes, for example, a control unit 10, a pixel array unit 20, a vertical drive unit 30, a horizontal drive unit 40, and a column processing unit 50. Further, the image pickup device 1 may include a signal processing unit 60 and an image memory 70.
  • the imaging device 1 can be configured, for example, as a system on chip (SOC), but is not limited to this.
  • the control unit 10 is a circuit that comprehensively controls the image pickup device 1.
  • the control unit 10 may include a timing generator (not shown) that generates various timing signals.
  • the control unit 10 controls the operations of the vertical drive unit 30, the horizontal drive unit 40, and the column processing unit 50 according to various timing signals generated by the timing generator based on, for example, a clock signal supplied from the outside.
  • the pixel array unit 20 includes a group of photoelectric conversion elements arranged in an array that generates and stores electric charges according to the intensity of incident light.
  • a buried photodiode is one form of the photoelectric conversion element 222 (see FIG. 4A).
  • each of the photoelectric conversion elements 222, or a set of several of them, may constitute one pixel.
  • Each pixel typically includes a color filter (see FIG. 4A) and is configured to receive light of a color component corresponding to the color filter.
  • as the arrangement pattern, a quad arrangement and a Bayer arrangement, for example, are known, but the arrangement is not limited to these.
  • in the following description, the up-down direction of the pixel array unit 20 is referred to as the column direction or the vertical direction, and the left-right direction as the row direction or the horizontal direction.
  • the details of the pixel configuration in the pixel array unit 20 will be described later.
  • the vertical drive unit 30 includes a shift register, an address decoder (not shown), and the like. Under the control of the control unit 10, the vertical drive unit 30 drives, for example, the pixel group of the pixel array unit 20 row by row in the vertical direction. In the present disclosure, the vertical drive unit 30 can include a read-out scanning circuit 32 that performs scanning for reading out signals, and a sweep-out scanning circuit 34 that sweeps out (resets) unnecessary charges from the photoelectric conversion elements 222 (see FIG. 4A).
  • the read-out scanning circuit 32 sequentially selects and scans the pixel groups of the pixel array unit 20 row by row in order to read out a signal based on the electric charge from each pixel.
  • the sweep-out scanning circuit 34 performs sweep scanning on a read row, on which a read operation is to be performed by the read-out scanning circuit 32, ahead of that read operation by a time corresponding to the shutter speed of the electronic shutter.
  • the so-called electronic shutter operation is performed by sweeping (resetting) unnecessary charges by the sweep scanning circuit 34.
  • the electronic shutter operation refers to an operation of sweeping out the electric charge of the photoelectric conversion element 222 and newly starting exposure (accumulation of electric charge).
  • the signal based on the electric charge read out by the read operation of the read-out scanning circuit 32 corresponds to the amount of light energy incident after the immediately preceding read operation or electronic shutter operation. The period from the read timing of the immediately preceding read operation, or the sweep timing of the electronic shutter operation, to the read timing of the current read operation is the charge accumulation time (exposure time) in the pixel.
  • the horizontal drive unit 40 includes a shift register, an address decoder (not shown), and the like. Under the control of the control unit 10, the horizontal drive unit 40 drives, for example, the pixel group of the pixel array unit 20 column by column in the horizontal direction. By selectively driving the pixels with the vertical drive unit 30 and the horizontal drive unit 40, a signal based on the electric charge accumulated in each selected pixel is output to the column processing unit 50.
  • the column processing unit 50 performs predetermined processing such as CDS (Correlated Double Sampling) on the signals output from each of the pixels in the selected row of the pixel array unit 20. Specifically, the column processing unit 50 receives the difference signals output from each of the pixels in the selected row and acquires, for each pixel in the row, a signal corresponding to the level (potential) difference indicated by the difference signal. The column processing unit 50 can also remove fixed-pattern noise from the acquired signals. The column processing unit 50 converts the signals thus processed into digital signals with an A/D conversion unit (not shown) and outputs them as pixel signals.
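  • as a minimal illustration of the CDS step (ours, not the patent's: the function name and array values are hypothetical), the following Python sketch subtracts the sampled reset level from the sampled signal level for one row of pixels, which cancels offsets common to both samples:

        import numpy as np

        def correlated_double_sampling(reset_level: np.ndarray,
                                       signal_level: np.ndarray) -> np.ndarray:
            # CDS output: the signal sample minus the reset sample. Offsets
            # common to both samples (reset noise, per-column offsets that
            # appear as fixed-pattern noise) cancel in the difference.
            return signal_level - reset_level

        # One row of a hypothetical sensor, in ADC counts.
        reset = np.array([512.0, 515.0, 510.0, 514.0])
        signal = np.array([612.0, 820.0, 510.5, 1014.0])
        print(correlated_double_sampling(reset, signal))  # differences: 100, 305, 0.5, 500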
  • the signal processing unit 60 is a circuit that has at least an arithmetic processing function and performs various signal processing such as arithmetic processing on the pixel signal output from the column processing unit 50.
  • a digital signal processor (DSP) is an aspect of the signal processing unit 60.
  • the image memory 70 temporarily stores data necessary for signal processing by the signal processing unit 60.
  • the signal processing unit 60 of the present disclosure performs processing for correcting the signals of normal pixels and/or correction pixels based on the pixel signals from the normal pixels and/or the correction pixels.
  • the signal processing unit 60 may be configured to perform all or part of the above-mentioned arithmetic processing in the column processing unit 50.
  • FIG. 2 is a diagram for explaining an example of the pixel arrangement of the pixel array unit 20 according to the embodiment of the present technology.
  • each of the plurality of pixels includes a color filter, and the pixels of each color component corresponding to the color filter are arranged according to a predetermined arrangement pattern.
  • Color filters include, but are not limited to, for example, three types of filters, red, green and blue.
  • some of the plurality of pixels are formed to function as correction pixels.
  • the pixels indicated by hatching are shown as correction pixels.
  • the correction pixel is a pixel formed so as to secure charge transfer capability at the expense of a smaller saturated charge amount than a normal pixel. By performing correction processing using signals based on such correction pixels, it is possible to prevent the spatial resolution from dropping in the low-illuminance region.
  • FIGS. 2(a) and 2(b) show an example of the arrangement of the pixels of each color component according to the Bayer arrangement.
  • the red pixel R, the green pixel G, and the blue pixel B are arranged in a ratio of 1: 2: 1.
  • the green pixels G may be distinguished as green pixels Gb and Gr depending on, for example, the pixel adjacent in the horizontal direction.
  • four pixels of red pixel R, green pixels Gb and Gr, and blue pixel B form one pixel block, and the pixel block groups are arranged in an array to form a Bayer array.
  • one of the green pixels G, that is, the green pixel Gr, is formed as the correction green pixel Gr'. Alternatively, the green pixel Gb may be formed as the correction green pixel Gb'.
  • FIG. 2(b) shows an example in which the correction pixels are arranged so that the interval between correction pixels is larger than the interval between the pixels of the same color component. That is, the correction green pixels Gr' shown in FIG. 2(b) are thinned out compared with FIG. 2(a). In this example, every other green pixel Gr is formed as a correction green pixel Gr', but the interval can be set as appropriate. As another example, the correction pixels (not limited to the green pixels) may be arranged randomly in the pixel array of the pixel array unit 20.
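  • the following sketch (ours; the 2×2 tile orientation Gr/R/B/Gb and the thinning interval are illustrative assumptions, not taken from the figures) builds such a Bayer color-filter map in which every other green pixel Gr is marked as a correction pixel Gr':

        import numpy as np

        def bayer_mask(rows: int, cols: int, thin: int = 2) -> np.ndarray:
            # Assumed 2x2 Bayer tile; the real orientation depends on the sensor.
            tile = np.array([["Gr", "R"], ["B", "Gb"]])
            mask = np.tile(tile, (rows // 2, cols // 2)).astype(object)
            # Replace every `thin`-th Gr (in row-major order) with Gr'.
            gr_rows, gr_cols = np.where(mask == "Gr")
            for i, (r, c) in enumerate(zip(gr_rows, gr_cols)):
                if i % thin == 0:
                    mask[r, c] = "Gr'"
            return mask

        print(bayer_mask(4, 4))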
  • FIGS. 3A and 3B are diagrams for explaining an example of the pixel arrangement of the pixel array unit 20 according to the embodiment of the present technology.
  • FIGS. 3A and 3B show an example of an array of pixels in which the pixels of each color component are arranged according to a quad array.
  • the quad array is an array in which four adjacent pixels have the same color component.
  • the pixel block group of each color component has a Bayer array.
  • the quad array forms a pixel block composed of 2 ⁇ 2 pixels for each color component, and the pixel blocks are arranged according to the Bayer array.
  • the pixel array as shown in the figure is sometimes referred to as a quad Bayer array.
  • in this example, one pixel (for example, the lower-left pixel of each pixel block in the figure) of the four adjacent pixels having the same color component is formed as a correction pixel. Alternatively, two of the four adjacent pixels having the same color component may be formed as correction pixels. Since this increases the number of correction pixels, it is possible to prevent the spatial resolution from dropping in the low-illuminance region.
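  • a corresponding sketch for the quad (quad-Bayer) case (ours; the 2×2 same-color blocks and the lower-left marking follow the figure description, everything else is an assumption):

        import numpy as np

        def quad_bayer_mask(blocks_v: int, blocks_h: int) -> np.ndarray:
            # 2x2 same-color blocks arranged in a Bayer pattern; the lower-left
            # pixel of each block is marked as a correction pixel (color + "'").
            bayer = np.array([["Gr", "R"], ["B", "Gb"]])
            mask = np.empty((4 * blocks_v, 4 * blocks_h), dtype=object)
            for bv in range(2 * blocks_v):
                for bh in range(2 * blocks_h):
                    color = bayer[bv % 2, bh % 2]
                    r0, c0 = 2 * bv, 2 * bh
                    mask[r0:r0 + 2, c0:c0 + 2] = color
                    mask[r0 + 1, c0] = color + "'"
            return mask

        print(quad_bayer_mask(1, 1))  # one 4x4 quad-Bayer unit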
  • alternatively, some of the four red pixels R and some of the four blue pixels B may be formed as correction pixels.
  • in this case, the green pixels G include no correction pixels. Therefore, for example, by making effective use of the signals read from the green pixels G in the re-mosaic processing for conversion to a different pixel array, the re-mosaic processing can be performed efficiently.
  • as yet another example, some of the green pixels G adjacent to the red pixels R in the horizontal direction may be formed as correction pixels.
  • in this case, the number of correction pixels can be reduced, and with it the calculation load required for the correction processing.
  • FIG. 4A is a partial vertical sectional view showing an example of a schematic structure of pixels in a pixel array unit according to an embodiment of the present technology. In the figure, adjacent normal pixels 200a and correction pixels 200b are shown.
  • the pixel 200 in the pixel array unit 20 includes, for example, a microlens 201, a filter layer 210, a semiconductor substrate 220, and a wiring layer 230, and these are laminated in this order. That is, the imaging device 1 (see FIG. 1) of the present disclosure is a back-illuminated imaging device in which a wiring layer 230 is provided on a surface opposite to the surface of the semiconductor substrate 220 to be irradiated with light.
  • the microlens 201 is formed on the surface of the pixel 200 and collects the incident light on the irradiation surface of the semiconductor substrate 220 via the filter layer 210.
  • in this example, one microlens 201 is formed for one pixel 200; however, this is not limiting, and one microlens 201 may be formed for a plurality of (for example, two or four) pixels 200.
  • the filter layer 210 includes a color filter so that each pixel 200 receives light of a specific color component.
  • Color filters include, but are not limited to, for example, red, green and blue filters. As mentioned above, in the present disclosure, the color filter is configured to follow a quad or Bayer sequence.
  • the semiconductor substrate 220 includes a photoelectric conversion element 222 that receives incident light and accumulates electric charges.
  • the photoelectric conversion element 222 is, for example, an embedded photodiode composed of a P-type semiconductor region 224 and an N-type semiconductor region 226.
  • the N-type semiconductor region 226 of this example is a charge storage region in which charges can be stored.
  • the N-type semiconductor region 226 of the photoelectric conversion element 222 that functions as the correction pixel 200b is formed smaller (that is, with a smaller saturated charge amount) than the N-type semiconductor region 226 of the photoelectric conversion element 222 that functions as the normal pixel 200a.
  • the normal pixel 200a and the correction pixel 200b are formed, for example, by applying a resist for forming the N-type semiconductor region 226 on the epitaxial growth layer and performing implantation (impurity injection) in the semiconductor manufacturing process.
  • the correction pixel 200b is formed, for example, by reducing the area of the resist opening so that the amount of implanted impurities is reduced.
  • the wiring layer 230 includes a transfer gate electrode 232 and a metal wiring 234.
  • the transfer gate electrode 232 is electrically connected to the photoelectric conversion element 222.
  • a gate voltage is applied to the transfer gate electrode 232 under the control of the control unit 10. By controlling the gate voltage to the transfer gate electrode 232 during the operation of the image pickup device 1, the charge accumulated in the charge storage region is transferred to the floating diffusion 236 (see FIG. 5).
  • the structure of the normal pixel 200a and that of the correction pixel 200b are basically the same, but as described above, the correction pixel 200b differs from the normal pixel 200a in that its photoelectric conversion element 222 in the semiconductor substrate 220 is formed smaller. Because the photoelectric conversion element 222 of the correction pixel 200b is formed smaller than that of the normal pixel 200a, the saturated charge amount of the correction pixel 200b is smaller than that of the normal pixel 200a, while the charge transfer efficiency of the correction pixel 200b is improved.
  • FIG. 5 is a partial cross-sectional plan view for explaining pixel electrodes in the pixel array portion of the present technology.
  • the figure shows pixel blocks according to a quad array.
  • one of four adjacent pixels 200 of the same color is formed as the correction pixel 200b.
  • a floating diffusion 236 is formed in the central portion of the four adjacent pixels 200, and the transfer gate electrode 232 of each pixel 200 is arranged so as to surround the floating diffusion 236. That is, the four adjacent pixels 200 share one floating diffusion 236. In this example, the transfer gate electrode 232 of the correction pixel 200b is formed so that its area is larger than that of the transfer gate electrode 232 of the normal pixel 200a. As a result, the charge transfer from the charge storage region of the correction pixel 200b to the floating diffusion 236 can be performed more efficiently.
  • an amplification transistor 238, a selection transistor 239, and the like are arranged around each pixel 200.
  • the charge-based signal transferred to the floating diffusion 236 via the transfer gate electrode 232 is amplified by the amplification transistor 238 and output as a pixel signal via the selection transistor 239.
  • when the transfer gate electrode 232 of the correction pixel 200b is sufficiently large and has good charge transfer characteristics, the charge storage region of the correction pixel 200b (the N-type semiconductor region 226 in this example) may be formed to be the same as, or only slightly smaller than, the charge storage region of the normal pixel 200a, as shown in FIG. 4B. Conversely, when the charge storage region of the correction pixel 200b is sufficiently small and the charge transfer characteristics are good, the area of the transfer gate electrode 232 of the correction pixel 200b may be formed to be the same as, or only slightly larger than, that of the transfer gate electrode 232 of the normal pixel 200a, as shown in FIG. 4C.
  • FIG. 6 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to the embodiment of the present technology.
  • the correction process is executed by the signal processing unit 60.
  • the correction process for the pixel signal output from the pixel array unit 20 of the quad array shown in FIG. 3A will be described as an example.
  • the signal processing unit 60 receives a pixel signal output from the column processing unit 50 in units of rows (S601).
  • the received pixel signal is temporarily held in the image memory 70 for correction processing and the like.
  • the signal processing unit 60 calculates the sensitivity ratios of the normal pixels and the sensitivity ratios of the correction pixels for the pixels of predetermined color components based on the received pixel signals (S602). Specifically, in this example, the signal processing unit 60 calculates the sensitivity ratio of the normal red pixel R and the sensitivity ratio of the corrected red pixel R', and calculates the sensitivity ratio of the normal blue pixel B and the sensitivity ratio of the corrected blue pixel B'.
  • the sensitivity ratio of the normal red pixel R is, for example, the ratio of the charge amount of the normal red pixel R to the charge amount of the normal green pixel Gr, and the sensitivity ratio of the corrected red pixel R' is, for example, the ratio of the charge amount of the corrected red pixel R' to the charge amount of the corrected green pixel Gr'.
  • similarly, the sensitivity ratio of the normal blue pixel B is, for example, the ratio of the charge amount of the normal blue pixel B to the charge amount of the normal green pixel Gb, and the sensitivity ratio of the corrected blue pixel B' is, for example, the ratio of the charge amount of the corrected blue pixel B' to the charge amount of the corrected green pixel Gb'.
  • the signal processing unit 60 then calculates, for the pixels of the above-mentioned predetermined color components, the difference between the calculated sensitivity ratio of the correction pixel and the sensitivity ratio of the normal pixel (S603). That is, the signal processing unit 60 calculates the absolute value of the difference between the sensitivity ratio of the corrected red pixel R' and that of the normal red pixel R, and the absolute value of the difference between the sensitivity ratio of the corrected blue pixel B' and that of the normal blue pixel B.
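  • in shorthand of our own (not the patent's), with Q_X denoting the charge amount read from pixel X, step S603 evaluates

        \Delta_R = \lvert Q_{R'} / Q_{Gr'} - Q_{R} / Q_{Gr} \rvert
        \Delta_B = \lvert Q_{B'} / Q_{Gb'} - Q_{B} / Q_{Gb} \rvert

    and the following steps compare \Delta_R and \Delta_B against the reference values ref_R and ref_B.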
  • the signal processing unit 60 determines whether or not the absolute value of the difference between the sensitivity ratio of the corrected red pixel R' and the sensitivity ratio of the normal red pixel R is smaller than an image quality parameter reference value ref_R (S604).
  • the image quality parameter reference value ref_R is a numerical value (for example, 0.1) adjusted and determined in advance with respect to the image quality of the red pixels. That is, when the absolute value of the difference is smaller than ref_R, the charge amount of the normal red pixel R is sufficient compared with the charge amount of the corrected red pixel R', which means that there is no problem in the charge transfer of the normal red pixel R.
  • when the signal processing unit 60 determines that the absolute value of the difference is smaller than the reference value ref_R (Yes in S604), it next determines whether or not the absolute value of the difference between the sensitivity ratio of the corrected blue pixel B' and the sensitivity ratio of the normal blue pixel B is smaller than an image quality parameter reference value ref_B (S605).
  • the image quality parameter reference value ref_B is a numerical value (for example, 0.1) adjusted and determined in advance with respect to the image quality of the blue pixels. When the absolute value of the difference is smaller than ref_B, the charge amount of the normal blue pixel B is sufficient compared with the charge amount of the corrected blue pixel B', which means that there is no problem in the charge transfer of the normal blue pixel B.
  • when the signal processing unit 60 determines that the absolute value of the difference is smaller than the reference value ref_B (Yes in S605), it corrects the signal of the correction pixel 200b. Specifically, as shown in FIG. 7A, the signal processing unit 60 corrects the signal based on the charge from the correction pixel 200b using the signals based on the charges from the three adjacent normal pixels 200a of the same color component in the same pixel block. At this time, the signal processing unit 60 may, for example, calculate the average of the charges from the normal pixels 200a of the same color component adjacent to the correction pixel 200b and correct the signal based on the charge from the correction pixel 200b according to the signal based on that average. In this case, the signal processing unit 60 outputs the signals based on the charges of the normal pixels 200a as they are, without correction. After correcting the signal of the correction pixel 200b, the signal processing unit 60 sends the signals to the subsequent stage (S611) and ends the processing of the pixel signals.
  • when the signal processing unit 60 determines that the absolute value of the difference between the sensitivity ratio of the corrected blue pixel B' and the sensitivity ratio of the normal blue pixel B is not smaller than the image quality parameter reference value ref_B (No in S605), it corrects the signals of the normal blue pixel B and the normal green pixel Gb (S607). Specifically, the signal processing unit 60 corrects the signals based on the charges of the normal blue pixel B and the normal green pixel Gb using the signals based on the charges from the corrected blue pixel B' and the corrected green pixel Gb' of the same pixel block, respectively. Alternatively, the signal processing unit 60 may be configured to correct only the signal of the normal blue pixel B. After correcting the signals of the normal blue pixel B and the normal green pixel Gb, the signal processing unit 60 sends the signals to the subsequent stage (S611) and ends the processing of the pixel signals.
  • on the other hand, when the signal processing unit 60 determines that the absolute value of the difference between the sensitivity ratio of the corrected red pixel R' and the sensitivity ratio of the normal red pixel R is not smaller than the image quality parameter reference value ref_R (No in S604), it determines whether or not the calculated absolute value of the difference between the sensitivity ratio of the corrected blue pixel B' and the sensitivity ratio of the normal blue pixel B is smaller than the reference value ref_B. When the signal processing unit 60 determines that this absolute value is smaller than ref_B, it corrects the signals of the normal red pixel R and the normal green pixel Gr (S609). Specifically, the signal processing unit 60 corrects the signals based on the charges of the normal red pixel R and the normal green pixel Gr using the signals based on the charges from the corrected red pixel R' and the corrected green pixel Gr' of the same pixel block, respectively. Alternatively, the signal processing unit 60 may correct only the signal of the normal red pixel R. After correcting the signals of the normal red pixel R and the normal green pixel Gr, the signal processing unit 60 sends the signals to the subsequent stage (S611) and ends the processing of the pixel signals.
  • when the signal processing unit 60 determines that the absolute value of the difference between the sensitivity ratio of the corrected blue pixel B' and the sensitivity ratio of the normal blue pixel B is also not smaller than the reference value ref_B, it corrects the signals of the normal pixels 200a, then sends the signals to the subsequent stage (S611) and ends the processing of the pixel signals.
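  • the branch structure above can be summarized in the following Python sketch (ours, not the patent's; the per-branch corrections are illustrative stand-ins, since the patent describes them only at the level of "correct X based on Y", and the block values are hypothetical mean charge amounts):

        REF_R = 0.1  # image quality parameter reference values; the text gives
        REF_B = 0.1  # 0.1 as an example, tuned in advance per color.

        def process_block(block: dict) -> dict:
            # `block` maps pixel labels of one quad pixel block to charge amounts.
            sr_R, sr_Rc = block["R"] / block["Gr"], block["R'"] / block["Gr'"]
            sr_B, sr_Bc = block["B"] / block["Gb"], block["B'"] / block["Gb'"]
            out = dict(block)
            if abs(sr_Rc - sr_R) < REF_R:        # S604: red ratios agree
                if abs(sr_Bc - sr_B) < REF_B:    # S605: blue ratios agree too
                    # High illuminance: correct the correction pixels from the
                    # same-color normal pixels; normal pixels pass through as-is.
                    for c in ("R", "Gr", "B", "Gb"):
                        out[c + "'"] = block[c]
                else:                            # S607: correct normal B and Gb
                    out["B"], out["Gb"] = block["B'"], block["Gb'"]
            elif abs(sr_Bc - sr_B) < REF_B:      # S609: correct normal R and Gr
                out["R"], out["Gr"] = block["R'"], block["Gr'"]
            else:
                # Low illuminance: correct the normal pixels from the
                # correction pixels of the same color component.
                for c in ("R", "Gr", "B", "Gb"):
                    out[c] = block[c + "'"]
            return out                           # then sent downstream (S611)

        block = {"R": 410.0, "R'": 150.0, "Gr": 500.0, "Gr'": 180.0,
                 "B": 380.0, "B'": 140.0, "Gb": 500.0, "Gb'": 180.0}
        print(process_block(block))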
  • in this way, the signal processing unit 60 can correct the correction pixels 200b in the high-illuminance region and the normal pixels 200a in the low-illuminance region. This makes it possible to prevent deterioration of the linearity of the pixel characteristics. Furthermore, the signal processing unit 60 can perform the correction for each color component of the pixels 200.
  • FIG. 8 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to the embodiment of the present technology.
  • the correction process is executed by the signal processing unit 60.
  • the correction process for the pixel signal output from the pixel array unit 20 of the Bayer array shown in FIG. 2 will be described as an example.
  • the green pixel Gr is used as the correction pixel.
  • the signal processing unit 60 receives a pixel signal output from the column processing unit 50 in units of rows (S801).
  • the received pixel signal is temporarily held in the image memory 70 for correction processing and the like.
  • the signal processing unit 60 calculates the sensitivity ratio of the normal pixel and the sensitivity ratio of the correction pixel for the pixel of the predetermined color component based on the received pixel signal (S802). Specifically, the signal processing unit 60 calculates the sensitivity ratio of the normal green pixel Gb and the sensitivity ratio of the corrected green pixel Gr', respectively.
  • the sensitivity ratio of the normal green pixel Gb is, for example, the ratio of the charge amount of the normal green pixel Gb to the charge amount of the normal red pixel R, and the sensitivity ratio of the corrected green pixel Gr' is, for example, the ratio of the charge amount of the corrected green pixel Gr' to the charge amount of the normal red pixel R.
  • the signal processing unit 60 then calculates, for the pixels of the predetermined color components, the difference between the calculated sensitivity ratio of the correction pixel and the sensitivity ratio of the normal pixel (S803). That is, the signal processing unit 60 calculates the absolute value of the difference between the sensitivity ratio of the corrected green pixel Gr' and the sensitivity ratio of the normal green pixel Gb.
  • the signal processing unit 60 determines whether or not this absolute value is smaller than an image quality parameter reference value ref_G (S804).
  • the image quality parameter reference value ref_G is, for example, a numerical value determined in advance (for example, 0.1). When the signal processing unit 60 determines that the absolute value of the difference is smaller than ref_G (Yes in S804), it corrects the signal of the corrected green pixel Gr' (S805). Specifically, as shown in FIG. 9A, the signal processing unit 60 corrects the signal based on the charge from the corrected green pixel Gr' using the signals based on the charges from the normal green pixels Gb of the same color component.
  • the normal green pixel Gb having the same color component is a pixel arranged around the correction green pixel Gr'.
  • the signal processing unit 60 may correct the signal based on the charge from the correction pixel 200b, for example, according to the output amount of the charge from the normal pixel 200a of the different color component adjacent to the correction pixel 200b.
  • when the signal processing unit 60 has corrected the signal of the corrected green pixel Gr', it sends the signal to the subsequent stage (S806) and ends the processing of the pixel signal.
  • on the other hand, when the signal processing unit 60 determines that the absolute value of the difference between the sensitivity ratio of the corrected green pixel Gr' and the sensitivity ratio of the normal green pixel Gb is not smaller than the image quality parameter reference value ref_G (No in S804), it corrects the signal of the normal green pixel Gb. Specifically, as shown in FIG. 9B, the signal processing unit 60 corrects the signal based on the charge from the normal green pixel Gb using the signal based on the charge from the corrected green pixel Gr' of the same color component in the same pixel block. After correcting the signal of the normal green pixel Gb, the signal processing unit 60 sends the signal to the subsequent stage (S806) and ends the processing of the pixel signal.
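  • a compact sketch of this single-ratio variant (again ours; ref_G and the step labels follow the text, while the substitution used as the "correction" is an illustrative stand-in):

        REF_G = 0.1  # image quality parameter reference value ref_G (example value)

        def process_bayer_block(q_R: float, q_Gb: float, q_Gr_c: float):
            # One Bayer pixel block: normal red R, normal green Gb, corrected
            # green Gr'. Returns the (Gb, Gr') values after correction.
            sr_Gb = q_Gb / q_R      # sensitivity ratio of normal green Gb (S802)
            sr_Gr = q_Gr_c / q_R    # sensitivity ratio of corrected green Gr'
            if abs(sr_Gr - sr_Gb) < REF_G:   # S804
                return q_Gb, q_Gb            # correct Gr' based on Gb (S805)
            return q_Gr_c, q_Gr_c            # otherwise correct Gb based on Gr'

        print(process_bayer_block(400.0, 520.0, 500.0))  # -> (520.0, 520.0)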
  • the signal processing unit 60 can correct the correction pixels 200b in the high illuminance region and can correct the normal pixels 200a in the low illuminance region. This makes it possible to prevent deterioration of the linearity of the pixel characteristics.
  • the image pickup device 1 described above can be applied to various electronic devices, such as imaging apparatuses including digital still cameras and digital video cameras, mobile phones having an imaging function, and other devices having an imaging function.
  • FIG. 10 is a block diagram showing an example of the configuration of an imaging device as an electronic device to which the present technology is applied.
  • the image pickup device 100 may include an optical system 12, a shutter device 14, an image pickup device 1, a control unit 10, a signal processing unit 60, an image memory 70, and a monitor 80.
  • the optical system 12 may be composed of one or a plurality of lenses.
  • the optical system 12 guides light from the subject to the image pickup device 1 and forms an image on the pixel array unit 20 of the image pickup device 1. The focus and drive of the lens of the optical system 12 are adjusted and controlled under the control of the control unit 10.
  • the shutter device 14 performs an electronic shutter operation by sweeping out (resetting) unnecessary charges by the sweeping scanning circuit 34 of the imaging device 1.
  • the shutter device 14 controls the light irradiation period and the light receiving period of the image pickup device 1 according to the control of the control unit 10.
  • the monitor 80 displays the image data signal-processed by the signal processing unit 60, so that the user of the image pickup apparatus (for example, the photographer) can check it.
  • steps, actions, or functions may be performed in parallel or in a different order as long as the results remain consistent.
  • the steps, actions, and functions described here are provided merely as examples; some of them can be omitted or combined with one another, and other steps, actions, or functions may be added, to the extent that this does not deviate from the gist of the invention.
  • the technology according to the present disclosure can be applied to various products.
  • for example, the technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 11 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technique according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • for example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the image pickup unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • based on the received image, the vehicle exterior information detection unit 12030 may perform processing for detecting objects such as people, vehicles, obstacles, signs, or characters on the road surface, or distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
  • the microcomputer 12051 calculates the control target value of the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • for example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation for the vehicle, follow-up driving based on the inter-vehicle distance, constant-speed driving, vehicle collision warning, and vehicle lane departure warning.
  • further, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, the microcomputer 12051 can perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • for example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, and can perform cooperative control for anti-glare purposes, such as switching from high beam to low beam.
  • the audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger or the outside of the vehicle of the information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
  • FIG. 12 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has image pickup units 12101, 12102, 12103, 12104, 12105 as the image pickup unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100, for example.
  • the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting the preceding vehicle, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 12 shows an example of the photographing range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
  • for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, in particular as a preceding vehicle, the nearest three-dimensional object that is on the traveling path of the vehicle 12100 and traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured behind the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
  • for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can see and obstacles that are difficult for the driver to see. The microcomputer 12051 then determines the collision risk, which indicates the degree of danger of collision with each obstacle, and in a situation where the collision risk is equal to or higher than a set value and there is a possibility of collision, it can output an alarm to the driver via the audio speaker 12061 or the display unit 12062, or perform forced deceleration or avoidance steering via the drive system control unit 12010, thereby providing driving support for collision avoidance.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
  • such pedestrian recognition is performed, for example, by a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
  • when the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a rectangular contour line for emphasizing the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
  • the above is an example of a vehicle control system to which the technology according to the present disclosure can be applied.
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above. Specifically, some of the plurality of pixels in the pixel array unit of the imaging unit 12031 may be formed so as to function as correction pixels. By applying the technology according to the present disclosure to the imaging unit 12031, it is possible to perform correction processing on the correction pixels or the normal pixels and to prevent deterioration of image quality caused by deterioration of the linearity of the pixel characteristics.
  • FIG. 13 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) can be applied.
  • FIG. 13 illustrates how the surgeon (doctor) 11131 is performing surgery on patient 11132 on patient bed 11133 using the endoscopic surgery system 11000.
  • the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
  • the endoscope 11100 is composed of a lens barrel 11101 in which a region having a predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101.
  • in the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.
  • An opening in which an objective lens is fitted is provided at the tip of the lens barrel 11101.
  • a light source device 11203 is connected to the endoscope 11100, and the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101 and is irradiated through the objective lens toward the observation target in the body cavity of the patient 11132.
  • the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the image sensor, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • the image signal is transmitted as RAW data to the camera control unit (CCU: Camera Control Unit) 11201.
  • the CCU11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing on the image signal for displaying an image based on the image signal, such as development processing (demosaic processing).
  • the display device 11202 displays an image based on the image signal processed by the CCU 11201 under the control of the CCU 11201.
  • the light source device 11203 is composed of, for example, a light source such as an LED (Light Emitting Diode), and supplies irradiation light to the endoscope 11100 when photographing an operating part or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204.
  • the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) by the endoscope 11100.
  • the treatment tool control device 11205 controls the drive of the energy treatment tool 11112 for cauterizing or incising tissue, sealing blood vessels, or the like.
  • the pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate the body cavity, for the purpose of securing the field of view of the endoscope 11100 and securing the working space of the operator.
  • Recorder 11207 is a device capable of recording various information related to surgery.
  • the printer 11208 is a device capable of printing various information related to surgery in various formats such as text, images, and graphs.
  • the light source device 11203 that supplies the irradiation light to the endoscope 11100 when photographing the surgical site can be composed of, for example, an LED, a laser light source, or a white light source composed of a combination thereof.
  • when a white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the light source device 11203 can adjust the white balance of the captured image.
  • in this case, by irradiating the observation target with the laser light from each of the RGB laser light sources in a time-division manner and controlling the drive of the image sensor of the camera head 11102 in synchronization with the irradiation timing, images corresponding to each of R, G, and B can be captured in a time-division manner. According to this method, a color image can be obtained without providing color filters on the image sensor.
  • further, the drive of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling the drive of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity, acquiring images in a time-division manner, and combining the images, a so-called high-dynamic-range image free of blocked-up shadows and blown-out highlights can be generated.
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
  • in special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the surface layer of the mucous membrane is photographed with high contrast by irradiating light in a band narrower than the irradiation light (that is, white light) in normal observation, utilizing the wavelength dependence of light absorption in body tissue.
  • alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, it is possible to irradiate body tissue with excitation light and observe the fluorescence from the body tissue (autofluorescence observation), or to locally inject a reagent such as indocyanine green (ICG) into body tissue and irradiate the body tissue with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 may be configured to be capable of supplying narrow band light and / or excitation light corresponding to such special light observation.
  • FIG. 14 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU11201 shown in FIG.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • CCU11201 includes a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and CCU11201 are communicatively connected to each other by a transmission cable 11400.
  • the lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101.
  • the observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and incident on the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
The image pickup unit 11402 is composed of an image pickup element. The image pickup unit 11402 may be composed of one image pickup element (so-called single-plate type) or a plurality of image pickup elements (so-called multi-plate type). When the image pickup unit 11402 is of the multi-plate type, for example, image signals corresponding to each of R, G, and B may be generated by the respective image pickup elements, and a color image may be obtained by synthesizing them. Alternatively, the image pickup unit 11402 may be configured to have a pair of image pickup elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display. The 3D display enables the operator 11131 to more accurately grasp the depth of biological tissue in the surgical site. When the image pickup unit 11402 is of the multi-plate type, a plurality of lens units 11401 may be provided corresponding to the respective image pickup elements.

Further, the image pickup unit 11402 does not necessarily have to be provided on the camera head 11102. For example, the image pickup unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
The drive unit 11403 is composed of an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the image pickup unit 11402 can be adjusted as appropriate.
The communication unit 11404 is composed of a communication device for transmitting and receiving various kinds of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the image pickup unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.

Further, the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies it to the camera head control unit 11405. The control signal includes information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.

The imaging conditions described above, such as the frame rate, exposure value, magnification, and focus, may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function are mounted on the endoscope 11100.
The camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal received from the CCU 11201 via the communication unit 11404.

The communication unit 11411 is composed of a communication device for transmitting and receiving various kinds of information to and from the camera head 11102. The communication unit 11411 receives the image signal transmitted from the camera head 11102 via the transmission cable 11400. Further, the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102. The image signal and the control signal can be transmitted by electric communication, optical communication, or the like.
The image processing unit 11412 performs various kinds of image processing on the image signal, which is RAW data, transmitted from the camera head 11102.

The control unit 11413 performs various kinds of control related to the imaging of the surgical site and the like by the endoscope 11100 and to the display of the captured image obtained by that imaging. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.

Further, the control unit 11413 causes the display device 11202 to display a captured image showing the surgical site or the like, based on the image signal that has undergone image processing by the image processing unit 11412. At that time, the control unit 11413 may recognize various objects in the captured image by using various image recognition techniques. For example, by detecting the edge shape, color, and the like of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, a specific biological site, bleeding, mist during use of the energy treatment tool 11112, and the like. When causing the display device 11202 to display the captured image, the control unit 11413 may use the recognition results to superimpose various kinds of surgery support information on the image of the surgical site. Superimposing the surgery support information and presenting it to the operator 11131 can reduce the burden on the operator 11131 and enable the operator 11131 to proceed with the surgery reliably.
The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electric signal cable compatible with electric signal communication, an optical fiber compatible with optical communication, or a composite cable thereof. Here, in the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.

The above is an example of an endoscopic surgery system to which the technology according to the present disclosure can be applied. Among the configurations described above, the technology according to the present disclosure can be applied to the image pickup unit 11402 of the camera head 11102. Specifically, some of the plurality of pixels in the pixel array unit of the image pickup unit 11402 may be formed so as to function as correction pixels. By applying the technology according to the present disclosure, correction processing using the correction pixels or the normal pixels can be performed, and deterioration of image quality due to deterioration of per-pixel linearity can be prevented. Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to other systems, for example, a microscopic surgery system.
(1) An imaging device including a plurality of pixels capable of accumulating charges generated according to the amount of incident light, wherein the plurality of pixels include: a first pixel formed to have a first saturated charge amount; and a second pixel formed to have a second saturated charge amount smaller than the first saturated charge amount.

(2) The plurality of pixels consist of an array of pixel blocks each including a plurality of the first pixels and at least one of the second pixels. The imaging device according to (1) above.

(3) Each of the pixel blocks is composed of four adjacent pixels having the same color component, and the array of the pixel blocks constitutes a Bayer array. The imaging device according to (2) above.

(4) The plurality of first pixels in the pixel block are provided so as to correspond to any of predetermined color components, and the at least one second pixel in the pixel block is provided so as to correspond to any of the color components. The imaging device according to (3) above.

(5) The at least one second pixel is provided so as to correspond to the green component. The imaging device according to (1) or (4) above.

(6) The at least one second pixel is provided so as to correspond to a color component other than the green component.

(7) The second pixels are arranged diagonally to each other in the pixel block.

(8) The plurality of second pixels are arranged so that the distance between the second pixels is larger than the size of the pixel block. The imaging device according to (2) or (7) above.

(9) The plurality of pixels are provided so as to correspond to any one of the predetermined color components according to the Bayer arrangement. The imaging device according to (1) above.

(10) The plurality of second pixels are provided so as to correspond to either a green component adjacent to a blue component or a green component adjacent to a red component in one direction of the Bayer array. The imaging device according to (9) above.

(11) The plurality of second pixels are arranged so that the distance between the second pixels is larger than the distance between any of the color components.

(12) The area of the transfer gate electrode for transferring charge from the charge storage region in the second pixel is larger than the area of the transfer gate electrode for transferring charge from the charge storage region in the first pixel. The imaging device according to any one of (1) to (11) above.

(13) The charge storage region for accumulating charge in the second pixel is smaller than the charge storage region for accumulating charge in the first pixel. The imaging device according to any one of (1) to (12) above.

(14) A signal processing circuit for processing a signal based on the amount of charge transferred from the plurality of pixels is further provided, and the signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the amount of charge transferred from the first pixel and the amount of charge transferred from the second pixel.

(15) The signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, on the basis of a first sensitivity ratio based on the amount of charge transferred from each of the plurality of first pixels and a second sensitivity ratio based on the amount of charge transferred from the second pixel. The imaging device according to (14) above.

(16) The signal processing circuit calculates the difference between the first sensitivity ratio and the second sensitivity ratio, and corrects a signal based on the amount of charge transferred from any of the plurality of pixels based on the calculated difference and a predetermined reference value. The imaging device according to (14) or (15) above.

(17) In the signal processing circuit, when the difference between the first sensitivity ratio regarding the red component and the second sensitivity ratio is smaller than the predetermined reference value and the difference between the first sensitivity ratio regarding the blue component and the second sensitivity ratio is smaller than the predetermined reference value, the signal based on the amount of charge transferred from the second pixel is corrected.

(18) In the signal processing circuit, when the difference between the first sensitivity ratio regarding the red component and the second sensitivity ratio is smaller than the predetermined reference value and the difference between the first sensitivity ratio regarding the blue component and the second sensitivity ratio is not smaller than the predetermined reference value, a signal based on the amount of charge transferred from the first pixel of at least the blue component is corrected based on the signal based on the amount of charge transferred from the second pixel; or, when the difference between the first sensitivity ratio regarding the red component and the second sensitivity ratio is not smaller than the predetermined reference value and the difference between the first sensitivity ratio regarding the blue component and the second sensitivity ratio is smaller than the predetermined reference value, a signal based on the amount of charge transferred from the first pixel of at least the red component is corrected based on the signal based on the amount of charge transferred from the second pixel. The imaging device according to (16) or (17) above.

(19) In the signal processing circuit, when the difference between the first sensitivity ratio regarding the red component and the second sensitivity ratio is not smaller than the predetermined reference value and the difference between the first sensitivity ratio regarding the blue component and the second sensitivity ratio is not smaller than the predetermined reference value, the signal based on the amount of charge transferred from the first pixel of the same color component is corrected based on the signal based on the amount of charge transferred from the second pixel. The imaging device according to any one of (16) to (18) above.

(20) A signal correction processing method in an imaging device, wherein the plurality of pixels include a first pixel formed to have a first saturated charge amount and a second pixel formed to have a second saturated charge amount smaller than the first saturated charge amount, and the correcting corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the sensitivity ratio based on the amount of charge transferred from the first pixel and the sensitivity ratio based on the amount of charge transferred from the second pixel.
1 Imaging device, 10 Control unit, 20 Pixel array unit, 200a Normal pixel, 200b Correction pixel, 201 Microlens, 210 Filter layer, 220 Semiconductor substrate, 222 Photoelectric conversion element, 224 P-type semiconductor region, 226 N-type semiconductor region, 230 Wiring layer, 232 Transfer gate electrode, 234 Metal wiring, 236 Floating diffusion, 238 Amplification transistor, 239 Selection transistor, 30 Vertical drive unit, 32 Read scanning circuit, 34 Sweep scanning circuit, 40 Horizontal drive unit, 50 Column processing unit, 60 Signal processing unit, 70 Image memory

Abstract

The purpose of the present invention is to ensure the linearity of pixel output characteristics without reducing charge transfer efficiency. The present invention provides an image capture device comprising a plurality of pixels capable of accumulating charges generated in accordance with the amount of incident light. The plurality of pixels are arrayed so as to include a first pixel formed so as to have a first saturation charge amount, and a second pixel formed so as to have a second saturation charge amount smaller than the first saturation charge amount.

Description

Imaging device and pixel signal correction processing method in imaging device
 The present technology relates to an imaging device and a pixel signal correction processing method in the imaging device.
 In recent years, the pixels of imaging devices using CCDs (Charge Coupled Devices) and CMOS (Complementary MOS) sensors have become increasingly dense and miniaturized. Correspondingly, the saturated charge amount per unit area of a pixel is required to be improved so that the pixel characteristics do not deteriorate. Generally, in order to increase the saturated charge amount of a pixel, the potential well is deepened so that more charge can be accumulated in the charge storage region of the pixel.
 However, if the potential well is deepened, problems such as blooming, in which charge overflows into surrounding pixels, occur, and the charge transfer efficiency decreases. In other words, since deepening the potential well lowers the charge transfer efficiency, there is a trade-off between the saturated charge amount and the charge transfer efficiency. The decrease in transfer efficiency deteriorates the linearity between the magnitude of the light energy incident on a pixel and the amount of charge the pixel outputs, particularly in the low-illuminance region. Due to this deterioration of linearity, the white balance in the low-illuminance region is lost, and so-called coloring occurs in which a specific color component (for example, the green component) stands out. Therefore, several techniques have been proposed to prevent the deterioration of image quality caused by such deterioration of linearity.
 For example, Patent Document 1 below discloses a technique for reducing the deterioration of color reproducibility due to incomplete transfer, particularly at high sensitivity. Specifically, Patent Document 1 discloses an imaging apparatus that stores, in a storage means and for each settable shooting sensitivity, per-color color-ratio correction coefficients generated using output signals of different colors included in the imaging signals output from the imaging means for different exposure amounts; controls a correction means that corrects the imaging signal output from the imaging means by using the stored color-ratio correction coefficients; acquires from the storage means the color-ratio correction coefficients corresponding to the shooting sensitivity set when shooting the subject; and corrects the output signals of the different colors included in the imaging signal of the subject output from the imaging means by using, among the acquired color-ratio correction coefficients, the color-ratio correction coefficient of the corresponding color.
 Further, Patent Document 2 below discloses a technique for improving the accuracy of pixel signals output from the image sensor of an imaging apparatus. Specifically, Patent Document 2 discloses a device including: a storage means that stores in advance, for each pixel corresponding to a charge storage unit, incomplete-transfer charge amount information corresponding to the amount of incompletely transferred charge remaining in the charge storage unit when charge is transferred from the charge storage unit to a charge holding unit; and a correction means that corrects the pixel signal output by a signal output unit based on the incomplete-transfer charge amount information stored by the storage means.
Patent Document 1: Japanese Unexamined Patent Publication No. 2013-150051. Patent Document 2: Japanese Unexamined Patent Publication No. 2012-231421.
 Techniques such as those disclosed in the above documents need to acquire in advance a correction coefficient or correction data for each pixel and store it in a storage means in order to correct the deterioration of the linearity of pixel characteristics caused by incomplete charge transfer or reduced transfer efficiency. In addition, there is the problem that the amount of correction data to be acquired and stored in advance becomes enormous in order to cope with changes in operating temperature and exposure time.
 Therefore, an object of the present technology is to provide an imaging device, and a pixel signal correction processing method in the imaging device, that can ensure the linearity of pixel characteristics without lowering the charge transfer efficiency.
 The present technology for solving the above problems is configured to include the following invention-specifying matters or technical features.
 The present technology according to one aspect is an imaging device including a plurality of pixels capable of accumulating charges generated according to the amount of incident light. In the imaging device, the plurality of pixels include a first pixel formed so as to have a first saturated charge amount, and a second pixel formed so as to have a second saturated charge amount smaller than the first saturated charge amount.
 The present technology according to another aspect is a pixel signal correction processing method in an imaging device. The correction processing method includes reading out charges from a plurality of pixels that have accumulated charges generated according to the amount of incident light, and correcting a signal based on the amount of the charges read out from the plurality of pixels. Here, the plurality of pixels include a first pixel formed to have a first saturated charge amount and a second pixel formed to have a second saturated charge amount smaller than the first saturated charge amount. Further, the correcting corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on a sensitivity ratio based on the amount of charge transferred from the first pixel and a sensitivity ratio based on the amount of charge transferred from the second pixel.
 Note that, in the present specification and the like, a means does not simply mean a physical means, but also includes the case where the function of the means is realized by software. Further, the function of one means may be realized by two or more physical means, and the functions of two or more means may be realized by one physical means.
 Further, a "system" means a logical assembly of a plurality of devices (or functional modules that realize specific functions), and it does not matter whether each device or functional module is in a single housing.
 Other technical features, objects, and effects or advantages of the present technology will be clarified by the following embodiments described with reference to the attached drawings. The effects described in the present specification are merely examples and are not limiting, and other effects may be obtained.
FIG. 1 is a diagram showing an example of the configuration of an imaging device according to an embodiment of the present technology.
FIG. 2 is a diagram explaining an example of the arrangement of pixels in a pixel array unit according to an embodiment of the present technology.
FIG. 3A is a diagram explaining an example of the arrangement of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 3B is a diagram explaining an example of the arrangement of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 4A is a partial vertical sectional view showing an example of the schematic structure of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 4B is a partial vertical sectional view showing an example of the schematic structure of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 4C is a partial vertical sectional view showing an example of the schematic structure of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 5 is a partial cross-sectional plan view for explaining the electrodes of pixels in the pixel array unit according to an embodiment of the present technology.
FIG. 6 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology.
FIG. 7 is a diagram explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology.
FIG. 8 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology.
FIG. 9 is a diagram explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology.
FIG. 10 is a block diagram showing an example of the configuration of an imaging apparatus as an electronic device to which the technology according to the present disclosure is applied.
FIG. 11 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
FIG. 12 is a diagram showing examples of the installation positions of imaging units.
FIG. 13 is a diagram showing an example of the schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (the present technology) can be applied.
FIG. 14 is a block diagram showing an example of the functional configuration of the camera head and CCU shown in FIG. 13.
 Hereinafter, embodiments of the present technology will be described with reference to the drawings. However, the embodiments described below are merely examples, and there is no intention to exclude the application of various modifications and techniques not specified below. The present technology can be implemented with various modifications (for example, combining the embodiments) within a range not departing from its spirit. In the following description of the drawings, the same or similar parts are denoted by the same or similar reference numerals. The drawings are schematic and do not necessarily match actual dimensions, ratios, and the like; even between drawings, parts may be included whose dimensional relationships and ratios differ from each other.
[First Embodiment]
 FIG. 1 is a diagram showing an example of the configuration of an imaging device according to an embodiment of the present technology. As shown in the figure, the imaging device 1 includes, for example, a control unit 10, a pixel array unit 20, a vertical drive unit 30, a horizontal drive unit 40, and a column processing unit 50. The imaging device 1 may further include a signal processing unit 60 and an image memory 70. The imaging device 1 can be configured, for example, as a system-on-chip (SOC), but is not limited to this.
 The control unit 10 is a circuit that comprehensively controls the imaging device 1. The control unit 10 may include a timing generator (not shown) that generates various timing signals. The control unit 10 controls the operations of the vertical drive unit 30, the horizontal drive unit 40, and the column processing unit 50 according to various timing signals generated by the timing generator based on, for example, a clock signal supplied from the outside.
 The pixel array unit 20 includes a group of photoelectric conversion elements arranged in an array, which generate and accumulate charges according to the intensity of incident light. An embedded photodiode is one form of the photoelectric conversion element 222 (see FIG. 4A). Each of, or several of, the plurality of photoelectric conversion elements 222 may constitute one pixel. Each pixel typically includes a color filter (see FIG. 4A) and is configured to receive light of the color component corresponding to the color filter. Known pixel arrangements include, for example, the quad arrangement and the Bayer arrangement, but the arrangement is not limited to these. In the figure, the up-down direction of the pixel array unit 20 is referred to as the column direction or vertical direction, and the left-right direction is defined as the row direction or horizontal direction. Details of the pixel configuration in the pixel array unit 20 will be described later.
 The vertical drive unit 30 includes a shift register, an address decoder (not shown), and the like. Under the control of the control unit 10, the vertical drive unit 30 drives, for example, the pixels of the pixel array unit 20 in the vertical direction, row by row in order. In the present disclosure, the vertical drive unit 30 may include a readout scanning circuit 32 that performs scanning for reading out signals, and a sweep scanning circuit 34 that performs scanning for sweeping out (resetting) unnecessary charges from the photoelectric conversion elements 222 (see FIG. 4A).
 The readout scanning circuit 32 sequentially selects and scans the pixels of the pixel array unit 20 row by row in order to read out a charge-based signal from each pixel.
 The sweep scanning circuit 34 performs sweep scanning on a read row, on which a readout operation is to be performed by the readout scanning circuit 32, ahead of that readout operation by a time corresponding to the operating speed of the electronic shutter. The so-called electronic shutter operation is performed by the sweeping out (resetting) of unnecessary charges by the sweep scanning circuit 34. The electronic shutter operation refers to an operation of discarding the charge of the photoelectric conversion element 222 and newly starting exposure (charge accumulation).
 The charge-based signal read out by the readout operation of the readout scanning circuit 32 corresponds to the amount of light energy incident after the immediately preceding readout operation or electronic shutter operation. The period from the readout timing of the immediately preceding readout operation or the sweep timing of the electronic shutter operation to the readout timing of the current readout operation is the charge accumulation time of the pixel.
 The horizontal drive unit 40 includes a shift register, an address decoder (not shown), and the like. Under the control of the control unit 10, the horizontal drive unit 40 drives, for example, the pixels of the pixel array unit 20 in the horizontal direction, column by column in order. By the selective driving of pixels by the vertical drive unit 30 and the horizontal drive unit 40, a signal based on the charge accumulated in the selected pixel is output to the column processing unit 50.
 The column processing unit 50 performs predetermined processing, for example CDS (Correlated Double Sampling) processing, on the signal output from each pixel in the selected row of the pixel array unit 20. Specifically, the column processing unit 50 receives the difference signal output from each pixel in the selected row and obtains the level (potential) difference indicated by the difference signal, thereby acquiring a signal for each pixel of one row. The column processing unit 50 can also remove fixed-pattern noise from the acquired signal. The column processing unit 50 converts the signal subjected to such predetermined processing into a digital signal by an A/D conversion unit (not shown) and outputs it as a pixel signal.
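As a rough illustration of the CDS step described above, the following Python sketch (a numerical simplification with hypothetical names, not the column circuit itself) derives a pixel value as the difference between the level sampled after charge transfer and the reset level sampled beforehand, which cancels the offset common to both samples of the same pixel:

```python
import numpy as np

def correlated_double_sampling(reset_level, signal_level):
    """Return the CDS output for one row of pixels.

    reset_level: samples taken just after the floating diffusion is reset.
    signal_level: samples taken after the charge is transferred.
    Subtracting the two cancels the reset (kTC) noise and the
    pixel-to-pixel offset common to both samples.
    """
    return signal_level - reset_level

# Example: one row of 4 pixels (arbitrary ADC counts).
reset = np.array([102.0, 98.5, 101.2, 99.7])
signal = np.array([530.0, 240.1, 101.9, 860.3])
print(correlated_double_sampling(reset, signal))
```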
 The signal processing unit 60 is a circuit that has at least an arithmetic processing function and performs various kinds of signal processing, such as arithmetic processing, on the pixel signals output from the column processing unit 50. A digital signal processor (DSP) is one form of the signal processing unit 60. The image memory 70 temporarily stores data necessary for the signal processing performed by the signal processing unit 60. As will be described later, the signal processing unit 60 of the present disclosure performs processing for correcting the signals of normal pixels and/or correction pixels based on the pixel signals from the normal pixels and/or correction pixels. Note that the signal processing unit 60 may be configured to perform all or part of the arithmetic processing described above for the column processing unit 50.
 FIG. 2 is a diagram for explaining an example of the pixel arrangement of the pixel array unit 20 according to an embodiment of the present technology. In the pixel array unit 20, each of the plurality of pixels includes a color filter, and the pixels of the color components corresponding to the color filters are arranged according to a predetermined arrangement pattern. The color filters include, for example, three types of filters, red, green, and blue, but are not limited to these. In the present disclosure, some of the plurality of pixels are formed so as to function as correction pixels. In the figure, the pixels indicated by hatching are the correction pixels. As will be described later, a correction pixel is a pixel formed to have a smaller saturated charge amount than a normal pixel so as to secure charge transfer capability. By performing correction processing using signals based on such correction pixels, it is possible to prevent the spatial resolution from decreasing in the low-illuminance region.
 FIGS. 2(a) and 2(b) show examples of the arrangement of the pixels of each color component according to the Bayer arrangement. In the Bayer arrangement, red pixels R, green pixels G, and blue pixels B are arranged in a ratio of 1:2:1. A green pixel G may be referred to as a green pixel Gb or Gr depending on, for example, the pixel adjacent to it in the horizontal direction. Typically, four pixels, a red pixel R, green pixels Gb and Gr, and a blue pixel B, form one pixel block, and these pixel blocks are arranged in an array to form the Bayer arrangement. In the example shown in FIG. 2(a), one of the green pixels G in each group of four adjacent pixels (that is, the green pixel Gr) is formed as a correction green pixel Gr'. Alternatively, although not shown, the green pixel Gb may be formed as a correction green pixel Gb'.
 FIG. 2(b) shows an example in which the correction pixels are arranged so that the spacing between correction pixels is larger than the spacing between pixels of the same color component. That is, the correction green pixels Gr' shown in FIG. 2(b) are thinned out compared with FIG. 2(a). In this example, every other green pixel Gr is formed as a correction green pixel Gr', but the interval can be set as appropriate. As another example, the correction pixels (not limited to green pixels) may be arranged randomly in the pixel arrangement of the pixel array unit 20.
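To make the arrangement concrete, the following Python sketch (an illustration under assumed indexing conventions, not code from the present disclosure) builds a small Bayer color map and marks every other Gr site as a correction pixel, as in the thinned-out arrangement of FIG. 2(b):

```python
import numpy as np

def bayer_with_correction(rows, cols, gr_stride=2):
    """Build a Bayer color map and a mask of correction pixels.

    Colors per 2x2 block: R at (0,0), Gr at (0,1), Gb at (1,0), B at (1,1).
    Every `gr_stride`-th Gr site along a row is marked as a correction
    pixel (smaller saturated charge amount, better transfer efficiency).
    """
    colors = np.empty((rows, cols), dtype="<U2")
    colors[0::2, 0::2] = "R"
    colors[0::2, 1::2] = "Gr"
    colors[1::2, 0::2] = "Gb"
    colors[1::2, 1::2] = "B"

    correction = np.zeros((rows, cols), dtype=bool)
    # Gr sites sit on even rows, odd columns; thin them out by gr_stride.
    correction[0::2, 1::2 * gr_stride] = True
    return colors, correction

colors, mask = bayer_with_correction(4, 8)
print(colors)
print(mask.astype(int))
```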
 FIGS. 3A and 3B are diagrams for explaining examples of the pixel arrangement of the pixel array unit 20 according to an embodiment of the present technology. Specifically, FIGS. 3A and 3B show examples of pixel arrangements in which the pixels of each color component are arranged according to a quad arrangement. The quad arrangement is an arrangement in which four adjacent pixels have the same color component. As shown in the figures, if four adjacent pixels of the same color component are regarded as one pixel block, the pixel blocks of the respective color components form a Bayer arrangement. In other words, the quad arrangement forms, for each color component, a pixel block of 2×2 pixels and arranges the pixel blocks according to the Bayer arrangement. In this sense, the pixel arrangement shown in the figures is sometimes called a quad Bayer arrangement.
 In the example shown in FIG. 3A(a), one of the four adjacent pixels of the same color component (for example, the lower-left pixel of each pixel block in the figure) is formed as a correction pixel.
 Alternatively, for example, as shown in FIG. 3A(b), two of the four adjacent pixels of the same color component (for example, the pixels located diagonally in each pixel block in the figure) may be formed as correction pixels. Since this increases the number of correction pixels, it becomes possible to prevent the spatial resolution from decreasing in the low-illuminance region.
 Further, for example, as shown in FIG. 3B(c), part of each group of four red pixels R and four blue pixels B may be formed as correction pixels. In this example, the green pixels G include no correction pixels. Therefore, for example, by making effective use of the signals read from the green pixels G in re-mosaic processing for converting to a different pixel arrangement, the re-mosaic processing can be performed efficiently.
 Further, for example, as shown in FIG. 3B(d), some of the green pixels G adjacent to red pixels R in the horizontal direction may be formed as correction pixels. This reduces the number of correction pixels, so the computational load of the correction processing can be reduced.
 FIG. 4A is a partial vertical sectional view showing an example of the schematic structure of pixels in the pixel array unit according to an embodiment of the present technology. The figure shows an adjacent normal pixel 200a and correction pixel 200b.
 As shown in the figure, a pixel 200 in the pixel array unit 20 includes, for example, a microlens 201, a filter layer 210, a semiconductor substrate 220, and a wiring layer 230, which are laminated in this order. That is, the imaging device 1 (see FIG. 1) of the present disclosure is a back-illuminated imaging device in which the wiring layer 230 is provided on the surface of the semiconductor substrate 220 opposite to the surface irradiated with light.
 The microlens 201 is formed on the surface of the pixel 200 and collects incident light onto the irradiated surface of the semiconductor substrate 220 via the filter layer 210. In this example, one microlens 201 is formed for one pixel 200, but the configuration is not limited to this, and one microlens 201 may be formed for a plurality of (for example, two or four) pixels 200.
 The filter layer 210 includes color filters so that each pixel 200 receives light of a specific color component. The color filters include, for example, red, green, and blue filters, but are not limited to these. As described above, in the present disclosure, the color filters are configured to follow a quad arrangement or a Bayer arrangement.
 The semiconductor substrate 220 includes a photoelectric conversion element 222 that receives incident light and accumulates charge. The photoelectric conversion element 222 is, for example, an embedded photodiode composed of a P-type semiconductor region 224 and an N-type semiconductor region 226. The N-type semiconductor region 226 in this example is a charge storage region capable of accumulating charge. In this example, the N-type semiconductor region 226 of the photoelectric conversion element 222 functioning as the correction pixel 200b is formed to be smaller than the N-type semiconductor region 226 of the photoelectric conversion element 222 functioning as the normal pixel 200a (that is, so that the saturated charge amount is smaller). The normal pixel 200a and the correction pixel 200b are formed, for example, in the semiconductor manufacturing process by applying a resist for forming the N-type semiconductor region 226 on the epitaxial growth layer and performing implantation (impurity injection). The correction pixel 200b is formed so that the amount of implanted impurities is smaller, for example by making the area of the resist opening smaller.
 The wiring layer 230 includes a transfer gate electrode 232 and metal wiring 234. The transfer gate electrode 232 is electrically connected to the photoelectric conversion element 222, and a gate voltage is applied to it under the control of the control unit 10. During operation of the imaging device 1, the gate voltage applied to the transfer gate electrode 232 is controlled so that the charge accumulated in the charge storage region is transferred to the floating diffusion 236 (see FIG. 5).
 The normal pixel 200a and the correction pixel 200b basically have the same structure, but as described above, the correction pixel 200b differs from the normal pixel 200a in that its photoelectric conversion element 222 in the semiconductor substrate 220 is formed smaller. Since the photoelectric conversion element 222 of the correction pixel 200b is formed smaller than that of the normal pixel 200a in this way, the saturated charge amount of the correction pixel 200b becomes smaller than that of the normal pixel 200a, while the charge transfer efficiency of the correction pixel 200b improves.
 FIG. 5 is a partial cross-sectional plan view for explaining the pixel electrodes in the pixel array unit of the present technology. The figure shows a pixel block according to the quad arrangement. In this example, one of four adjacent pixels 200 of the same color is formed as the correction pixel 200b.
 As shown in the figure, a floating diffusion 236 is formed at the center of the four adjacent pixels 200, and the transfer gate electrode 232 of each pixel 200 is arranged so as to surround the floating diffusion 236. That is, the four adjacent pixels 200 share one floating diffusion 236. In this example, the transfer gate electrode 232 of the correction pixel 200b is formed to have a larger area than the transfer gate electrode 232 of the normal pixel 200a. This allows charge to be transferred more efficiently from the charge storage region of the correction pixel 200b to the floating diffusion 236.
 Around each pixel 200, for example, an amplification transistor 238, a selection transistor 239, and the like are arranged. A signal based on the charge transferred to the floating diffusion 236 via the transfer gate electrode 232 is amplified by the amplification transistor 238 and output as a pixel signal via the selection transistor 239.
 Note that, in cases where the charge transfer characteristics are sufficient, such as when the transfer gate electrode 232 of the correction pixel 200b described above is sufficiently large, the charge storage region of the correction pixel 200b (the N-type semiconductor region 226 in this example) may be formed to be the same as, or only slightly smaller than, the charge storage region of the normal pixel 200a, as shown for example in FIG. 4B. Conversely, when the charge storage region of the correction pixel 200b is sufficiently small and the charge transfer characteristics are good, the area of the transfer gate electrode 232 of the correction pixel 200b may be formed to be the same as, or only slightly larger than, that of the transfer gate electrode 232 of the normal pixel 200a, as shown in FIG. 4C.
 FIG. 6 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology. This correction processing is executed by the signal processing unit 60. In the following, the correction processing for the pixel signals output from the quad-arrangement pixel array unit 20 shown in FIG. 3A is described as an example.
 As shown in the figure, the signal processing unit 60 receives the pixel signals output row by row from the column processing unit 50 (S601). The received pixel signals are temporarily held in the image memory 70 for correction processing and the like.
 Subsequently, based on the received pixel signals, the signal processing unit 60 calculates the sensitivity ratio of the normal pixels and the sensitivity ratio of the correction pixels for the pixels of predetermined color components (S602). Specifically, in this example, the signal processing unit 60 calculates the sensitivity ratio of the normal red pixel R and the sensitivity ratio of the correction red pixel R', and also calculates the sensitivity ratio of the normal blue pixel B and the sensitivity ratio of the correction blue pixel B'. The sensitivity ratio of the normal red pixel R is, for example, the ratio of the charge amount of the normal red pixel R to the charge amount of the normal green pixel Gr, and the sensitivity ratio of the correction red pixel R' is, for example, the ratio of the charge amount of the correction red pixel R' to the charge amount of the correction green pixel Gr'. Similarly, the sensitivity ratio of the normal blue pixel B is, for example, the ratio of the charge amount of the normal blue pixel B to the charge amount of the normal green pixel Gb, and the sensitivity ratio of the correction blue pixel B' is, for example, the ratio of the charge amount of the correction blue pixel B' to the charge amount of the correction green pixel Gb'.
 Next, for the pixels of the above predetermined color components, the signal processing unit 60 calculates the difference between the calculated sensitivity ratio of the correction pixel and the sensitivity ratio of the normal pixel (S603). That is, the signal processing unit 60 calculates the absolute value |Δs_R| of the difference between the calculated sensitivity ratio of the correction red pixel R' and the sensitivity ratio of the normal red pixel R, and the absolute value |Δs_B| of the difference between the sensitivity ratio of the correction blue pixel B' and the sensitivity ratio of the normal blue pixel B.
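Steps S602 and S603 can be restated as simple formulas. The following Python sketch (a hypothetical helper, assuming the per-pixel charge amounts are available as plain numbers) computes the sensitivity ratios and their absolute differences:

```python
def sensitivity_differences(r, gr, b, gb, r_c, gr_c, b_c, gb_c):
    """Compute |Δs_R| and |Δs_B| from per-color charge amounts.

    r, gr, b, gb: charge amounts of the normal R, Gr, B, Gb pixels.
    r_c, gr_c, b_c, gb_c: charge amounts of the corresponding
    correction pixels R', Gr', B', Gb'.
    """
    s_r = r / gr          # sensitivity ratio of the normal red pixel
    s_r_c = r_c / gr_c    # sensitivity ratio of the correction red pixel
    s_b = b / gb          # sensitivity ratio of the normal blue pixel
    s_b_c = b_c / gb_c    # sensitivity ratio of the correction blue pixel
    return abs(s_r_c - s_r), abs(s_b_c - s_b)
```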
 Next, the signal processing unit 60 determines whether the absolute difference |Δs_R| between the calculated sensitivity ratio of the correction red pixel R' and the sensitivity ratio of the normal red pixel R is smaller than an image quality parameter reference value ref_R (S604). The image quality parameter reference value ref_R is a numerical value (for example, 0.1) adjusted and determined in advance for the image quality of the red pixels. That is, when the absolute difference |Δs_R| is smaller than ref_R, the charge amount of the normal red pixel R is sufficient when compared with that of the correction red pixel R', which means that there is no problem in the charge transfer of the normal red pixel R. When the signal processing unit 60 determines that |Δs_R| is smaller than ref_R (Yes in S604), it subsequently determines whether the absolute difference |Δs_B| between the calculated sensitivity ratio of the correction blue pixel B' and the sensitivity ratio of the normal blue pixel B is smaller than an image quality parameter reference value ref_B (S605). The image quality parameter reference value ref_B is a numerical value (for example, 0.1) adjusted and determined in advance for the image quality of the blue pixels. That is, when the absolute difference |Δs_B| is smaller than ref_B, the charge amount of the normal blue pixel B is sufficient when compared with that of the correction blue pixel B', which means that there is no problem in the charge transfer of the normal blue pixel B.
 When the signal processing unit 60 determines that |Δs_R| is smaller than ref_R (Yes in S604) and that |Δs_B| is smaller than ref_B (Yes in S605), the signal processing unit 60 corrects the signals of the correction pixels 200b (S606). In other words, the signal processing unit 60 corrects the signals of the correction pixels 200b when the amount of charge output from the pixels is at the OF (overflow) level (that is, the maximum). Specifically, as shown in FIG. 7(a), the signal processing unit 60 corrects the signal based on the charge from a correction pixel 200b based on the signals based on the charges from the three adjacent normal pixels 200a of the same color component in the same pixel block. At this time, the signal processing unit 60 may, for example, calculate the average of the charges from the normal pixels 200a of the same color component adjacent to the correction pixel 200b, and correct the signal based on the charge from the correction pixel 200b to match the signal based on that average. In this example, the signal processing unit 60 outputs the signals based on the charges of the normal pixels 200a as they are, without correction.
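A minimal sketch of this correction step, assuming one 2×2 same-color block of the quad arrangement held as a NumPy array with the correction pixel at the lower-left position as in FIG. 3A(a), might look as follows:

```python
import numpy as np

def correct_correction_pixel(block, corr_row=1, corr_col=0):
    """Replace the correction pixel of one 2x2 same-color block with
    the average of the three adjacent normal pixels (FIG. 7(a) case).

    block: 2x2 array of pixel values of one quad pixel block.
    (corr_row, corr_col): position of the correction pixel in the block;
    the lower-left position follows the example of FIG. 3A(a).
    """
    out = block.astype(float).copy()
    mask = np.ones_like(out, dtype=bool)
    mask[corr_row, corr_col] = False          # the three normal pixels
    out[corr_row, corr_col] = out[mask].mean()
    return out

# Example: a near-saturated block where the correction pixel reads low.
print(correct_correction_pixel(np.array([[1023, 1023], [700, 1023]])))
```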
 信号処理部60は、補正画素200bの信号を補正すると、信号を後段に送り(S611)、当該画素信号についての処理を終了する。 When the signal processing unit 60 corrects the signal of the correction pixel 200b, the signal processing unit 60 sends the signal to the subsequent stage (S611) and ends the processing for the pixel signal.
 When the signal processing unit 60 determines that |Δs_R| is smaller than ref_R (Yes in S604) but that |Δs_B| is not smaller than ref_B (No in S605), it corrects the signals of the normal blue pixel B and the normal green pixel Gb (S607). That is, the signals of the normal blue pixel B and the normal green pixel Gb are corrected when the amount of charge output from the blue pixels is intermediate (between the high level (high illuminance region) and the low level (low illuminance region)). Specifically, as shown in FIG. 7(b), the signal processing unit 60 corrects the signals based on the charges of the normal blue pixel B and the normal green pixel Gb on the basis of the signals based on the charges from the corrected blue pixel B' and the corrected green pixel Gb' of the same pixel block, respectively. In this case, the signal processing unit 60 may be configured to correct only the signal of the normal blue pixel B.

 When the signal processing unit 60 has corrected the signals of the normal blue pixel B and the normal green pixel Gb, it sends the signals to the subsequent stage (S611) and ends the processing for the pixel signal.
 On the other hand, when the signal processing unit 60 determines that |Δs_R| is not smaller than ref_R (No in S604), it determines whether the absolute value |Δs_B| of the difference between the calculated sensitivity ratio of the corrected blue pixel B' and the sensitivity ratio of the normal blue pixel B is smaller than the reference value ref_B (S608). When the signal processing unit 60 determines that |Δs_R| is not smaller than ref_R (No in S604) and that |Δs_B| is smaller than ref_B (Yes in S608), it corrects the signals of the normal red pixel R and the normal green pixel Gr (S609). That is, in this example, the signals of the normal red pixel R and the normal green pixel Gr are corrected when the amount of charge output from the red pixels is between the high level and the low level. Specifically, as shown in FIG. 7(c), the signal processing unit 60 corrects the signals based on the charges of the normal red pixel R and the normal green pixel Gr on the basis of the signals based on the charges from the corrected red pixel R' and the corrected green pixel Gr' of the same pixel block, respectively. In this case, the signal processing unit 60 may correct only the signal of the normal red pixel R.

 When the signal processing unit 60 has corrected the signals of the normal red pixel R and the normal green pixel Gr, it sends the signals to the subsequent stage (S611) and ends the processing for the pixel signal.
 When the signal processing unit 60 determines that |Δs_R| is not smaller than ref_R (No in S604) and that |Δs_B| is not smaller than ref_B (No in S608), it corrects the signals of the normal pixels 200a (S610). That is, the normal pixels 200a are corrected when the amount of output charge is at an extremely low level. Specifically, as shown in FIG. 7(d), the signal processing unit 60 corrects the signals based on the charges of the three normal pixels 200a on the basis of the signal based on the charge from the adjacent correction pixel 200b in the same pixel block.

 When the signal processing unit 60 has corrected the signals of the normal pixels 200a, it sends the signals to the subsequent stage (S611) and ends the processing for the pixel signal.

 In this way, in the case of the quad pixel arrangement, the signal processing unit 60 can correct the correction pixel 200b in the high illuminance region and can correct the normal pixels 200a in the low illuminance region. This prevents deterioration of the linearity of the pixel characteristics. The signal processing unit 60 can also correct the pixels 200 for each color component.
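 For reference, the branch structure of S604 to S611 for the quad arrangement can be summarized in code. The following is a minimal sketch, not the implementation of the signal processing unit 60 itself: the pixel-block data structure, the value substitution used as the correction, and the reference values REF_R and REF_B (0.1 here) are assumptions made for illustration, since the patent does not specify the correction formula in detail.

```python
# Minimal sketch of the quad-arrangement correction flow (S604-S611).
# `block` holds, per color component, the three normal-pixel values and the
# one correction-pixel value of a 2x2 group of quad blocks; the sensitivity
# differences |delta_s| are assumed to have been computed beforehand (S602-S603).

REF_R = 0.1  # image quality parameter reference value ref_R (assumed)
REF_B = 0.1  # image quality parameter reference value ref_B (assumed)

def correct_quad_block(block, ds_r, ds_b):
    """block: dict color ('R','Gr','Gb','B') -> {'normal': [v1,v2,v3], 'corr': v}
    ds_r, ds_b: |delta_s_R| and |delta_s_B| for this block."""
    if ds_r < REF_R and ds_b < REF_B:
        # S606: OF (overflow) level; correct each correction pixel to the
        # average of the three adjacent normal pixels of the same color.
        for c in block.values():
            c['corr'] = sum(c['normal']) / len(c['normal'])
    elif ds_r < REF_R:
        # S607: blue charge is intermediate; correct normal B and Gb using
        # the correction pixels B' and Gb' of the same block (simple
        # substitution stands in for the unspecified correction).
        for color in ('B', 'Gb'):
            ref = block[color]['corr']
            block[color]['normal'] = [ref] * len(block[color]['normal'])
    elif ds_b < REF_B:
        # S609: red charge is intermediate; correct normal R and Gr.
        for color in ('R', 'Gr'):
            ref = block[color]['corr']
            block[color]['normal'] = [ref] * len(block[color]['normal'])
    else:
        # S610: extremely low level; correct the normal pixels from the
        # adjacent correction pixel of each color.
        for c in block.values():
            c['normal'] = [c['corr']] * len(c['normal'])
    return block  # S611: pass the (corrected) signals to the next stage
```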
 FIG. 8 is a flowchart for explaining an example of pixel signal correction processing in the imaging device according to an embodiment of the present technology. The correction processing is executed by the signal processing unit 60. In the following, the correction processing for the pixel signals output from the pixel array unit 20 with the Bayer array shown in FIG. 2 is described as an example. In this example, the green pixel Gr serves as the correction pixel.

 As shown in the figure, the signal processing unit 60 receives the pixel signals output row by row from the column processing unit 50 (S801). The received pixel signals are temporarily held in the image memory 70 for the correction processing and the like.
 Subsequently, based on the received pixel signals, the signal processing unit 60 calculates the sensitivity ratio of the normal pixel and the sensitivity ratio of the correction pixel for the pixels of a predetermined color component (S802). Specifically, the signal processing unit 60 calculates the sensitivity ratio of the normal green pixel Gb and the sensitivity ratio of the corrected green pixel Gr'. The sensitivity ratio of the normal green pixel Gb is, for example, the ratio of the charge amount of the normal green pixel Gb to the charge amount of the normal red pixel R, and the sensitivity ratio of the corrected green pixel Gr' is, for example, the ratio of the charge amount of the corrected green pixel Gr' to the charge amount of the normal red pixel R.

 Next, the signal processing unit 60 calculates the difference between the calculated sensitivity ratio of the correction pixel and the sensitivity ratio of the normal pixel for the pixels of the predetermined color component (S803). That is, the signal processing unit 60 calculates the absolute value |Δs_G| of the difference between the calculated sensitivity ratio of the normal green pixel Gb and the sensitivity ratio of the corrected green pixel Gr'.

 Next, the signal processing unit 60 determines whether the calculated absolute value |Δs_G| of the difference between the sensitivity ratio of the normal green pixel Gb and the sensitivity ratio of the corrected green pixel Gr' is smaller than an image quality parameter reference value ref_G (S804). The reference value ref_G is, for example, a numerical value (for example, 0.1) adjusted and determined in advance. In other words, when |Δs_G| is smaller than ref_G, the charge amount of the normal green pixel Gb is sufficient compared with the charge amount of the corrected green pixel Gr', that is, there is no problem in the charge transfer of the normal green pixel Gb.
 When the signal processing unit 60 determines that |Δs_G| is smaller than ref_G (Yes in S804), it corrects the signal of the corrected green pixel Gr' (S805). Specifically, as shown in FIG. 9(a), the signal processing unit 60 corrects the signal based on the charge from the corrected green pixel Gr' on the basis of the signals based on the charges from the normal green pixels Gb of the same color component. The normal green pixels Gb of the same color component are the pixels arranged around the corrected green pixel Gr'. At this time, the signal processing unit 60 may, for example, correct the signal based on the charge from the correction pixel 200b in accordance with the amount of charge output from the normal pixels 200a of different color components adjacent to the correction pixel 200b. When the signal processing unit 60 has corrected the signal of the corrected green pixel Gr', it sends the signal to the subsequent stage (S806) and ends the processing for the pixel signal.

 On the other hand, when the signal processing unit 60 determines that |Δs_G| is not smaller than ref_G (No in S804), it corrects the signal of the normal green pixel Gb (S806). Specifically, as shown in FIG. 9(b), the signal processing unit 60 corrects the signal based on the charge from the normal green pixel Gb on the basis of the signal based on the charge from the corrected green pixel Gr' of the same color component in the same pixel block. When the signal processing unit 60 has corrected the signal of the normal green pixel Gb, it sends the signal to the subsequent stage (S806) and ends the processing for the pixel signal.

 In this way, in the case of the Bayer pixel arrangement, the signal processing unit 60 can correct the correction pixel 200b in the high illuminance region and can correct the normal pixels 200a in the low illuminance region. This prevents deterioration of the linearity of the pixel characteristics.
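 The Bayer-array flow of S801 to S806 is compact enough to sketch as well. The following is a minimal illustration under the same caveats as above: the function name, the use of a single Gb value in place of an average over the surrounding Gb pixels, and the reference value REF_G are assumptions, and the simple substitution stands in for the unspecified correction formula. The sketch assumes a nonzero red charge amount.

```python
REF_G = 0.1  # image quality parameter reference value ref_G (assumed)

def correct_bayer_unit(r, gr_corr, gb, b):
    """One Bayer unit: charge amounts of R, Gr (correction pixel), Gb, B.
    Returns the (possibly corrected) unit as a tuple (S802-S806)."""
    # S802: sensitivity ratios relative to the normal red pixel R (r > 0).
    s_gb = gb / r        # sensitivity ratio of the normal green pixel Gb
    s_gr = gr_corr / r   # sensitivity ratio of the corrected green pixel Gr'
    # S803: absolute value of the difference between the two ratios.
    ds_g = abs(s_gb - s_gr)
    if ds_g < REF_G:
        # S805: high illuminance; correct Gr' from the surrounding Gb pixels
        # (a single Gb value stands in for their average here).
        gr_corr = gb
    else:
        # S806: low illuminance; correct Gb from Gr' of the same block.
        gb = gr_corr
    return r, gr_corr, gb, b  # send to the subsequent stage
```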
 <Example of application to electronic devices>
 The imaging device 1 described above can be applied to various electronic devices, for example, imaging apparatuses such as digital still cameras and digital video cameras, mobile phones having an imaging function, and other equipment having an imaging function.
 FIG. 10 is a block diagram showing an example of the configuration of an imaging apparatus as an electronic device to which the present technology is applied. As shown in the figure, the imaging apparatus 100 may include an optical system 12, a shutter device 14, the imaging device 1, the control unit 10, the signal processing unit 60, the image memory 70, and a monitor 80.

 The optical system 12 may be composed of one or more lenses. The optical system 12 guides light from a subject to the imaging device 1 and forms an image on the pixel array unit 20 of the imaging device 1. The optical system 12 also performs focus adjustment and drive control of the lenses under the control of the control unit 10.

 As described above, the shutter device 14 performs an electronic shutter operation by sweeping out (resetting) unnecessary charge by means of the sweep scanning circuit 34 of the imaging device 1. The shutter device 14 controls the light irradiation period and the light receiving period of the imaging device 1 under the control of the control unit 10.

 The monitor 80 displays the image data obtained through the signal processing performed by the signal processing unit 60. A user of the imaging apparatus (for example, a photographer) can observe the image data on the monitor 80 through an eyepiece (not shown).
 The embodiments described above are examples for explaining the present technology and are not intended to limit the present technology to these embodiments alone. The present technology can be implemented in various forms without departing from its gist.

 For example, in the methods disclosed herein, the steps, operations, or functions may be performed in parallel or in a different order as long as the results are not inconsistent. The described steps, operations, and functions are provided merely as examples; some of them may be omitted or combined into one without departing from the gist of the invention, and other steps, operations, or functions may be added.

 Although various embodiments are disclosed herein, a specific feature (technical matter) in one embodiment may, with appropriate improvement, be added to another embodiment or replace a specific feature in that other embodiment, and such forms are also included in the gist of the present technology.
 <Example of application to mobile bodies>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any kind of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 11 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.

 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 11, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.

 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.

 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like.

 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of light received. The imaging unit 12031 can output the electrical signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.

 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
 The microcomputer 12051 can calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following travel based on the inter-vehicle distance, vehicle speed maintenance travel, vehicle collision warning, vehicle lane departure warning, and the like.

 The microcomputer 12051 can also perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.

 The microcomputer 12051 can also output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at anti-glare measures, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.

 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the vehicle occupants or the outside of the vehicle of information. In the example of FIG. 11, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 12 is a diagram showing an example of the installation positions of the imaging unit 12031.

 In FIG. 12, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.

 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper part of the windshield in the vehicle interior mainly acquire images of the front of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images of the rear of the vehicle 12100. The front images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 FIG. 12 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.

 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.

 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can determine the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and thereby extract, as a preceding vehicle, in particular the closest three-dimensional object on the traveling path of the vehicle 12100 that is traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). Further, the microcomputer 12051 can set in advance the inter-vehicle distance to be secured ahead of the preceding vehicle, and can perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
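 As an illustration of the preceding-vehicle extraction just described, the following sketch selects the closest on-path object that moves in roughly the same direction as the own vehicle and whose absolute speed, derived from the relative speed, is 0 km/h or more. The object representation, the heading threshold, and the helper names are assumptions for illustration only; the patent does not specify this logic in code.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    distance_m: float          # distance from the own vehicle
    relative_speed_mps: float  # d(distance)/dt, from successive frames
    on_path: bool              # lies on the traveling path of the vehicle
    heading_offset_deg: float  # heading difference from the own vehicle

def extract_preceding_vehicle(objects, own_speed_mps, max_heading_deg=10.0):
    """Pick the closest on-path object traveling in substantially the same
    direction at an absolute speed of 0 km/h or more (illustrative only)."""
    candidates = [
        o for o in objects
        if o.on_path
        and abs(o.heading_offset_deg) <= max_heading_deg
        # absolute speed = own speed + relative speed (closing speed negative)
        and own_speed_mps + o.relative_speed_mps >= 0.0
    ]
    return min(candidates, key=lambda o: o.distance_m, default=None)
```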
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify and extract three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, and use the data for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines the collision risk, which indicates the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.

 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize pedestrians by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so as to superimpose a rectangular contour line for emphasis on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
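 The recognize-then-outline flow can be sketched as follows. Note that this is not the feature-point and pattern-matching procedure of the system above, which is not specified in code; OpenCV's stock HOG people detector is used purely as a stand-in to show detection followed by the superimposed rectangular contour.

```python
import cv2

# Stand-in detector: OpenCV's default HOG person detector replaces the
# unspecified feature-point extraction and pattern matching described above.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def outline_pedestrians(frame):
    """Detect pedestrians and superimpose rectangular contour lines."""
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame
```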
 The above is an example of a vehicle control system to which the technology according to the present disclosure can be applied. Among the configurations described above, the technology according to the present disclosure can be applied to the imaging unit 12031. Specifically, some of the plurality of pixels in the pixel array unit of the imaging unit 12031 may be formed so as to function as correction pixels. By applying the technology according to the present disclosure to the imaging unit 12031, correction processing of the correction pixels or the normal pixels can be performed, and deterioration of image quality due to deterioration of the linearity of the pixel characteristics can be prevented.
 <Example of application to an endoscopic surgery system>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
 FIG. 13 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (the present technology) can be applied.

 FIG. 13 illustrates an operator (physician) 11131 performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As shown in the figure, the endoscopic surgery system 11000 is composed of an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.

 The endoscope 11100 is composed of a lens barrel 11101, a region of which of a predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.

 An opening into which an objective lens is fitted is provided at the tip of the lens barrel 11101. A light source device 11203 is connected to the endoscope 11100; the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101 and is emitted through the objective lens toward the observation target in the body cavity of the patient 11132. The endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 An optical system and an imaging element are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the imaging element by the optical system. The observation light is photoelectrically converted by the imaging element, and an electrical signal corresponding to the observation light, that is, an image signal corresponding to the observation image, is generated. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.

 The CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and a display device 11202. Furthermore, the CCU 11201 receives the image signal from the camera head 11102 and performs various kinds of image processing on the image signal for displaying an image based on the image signal, such as development processing (demosaic processing).

 The display device 11202 displays an image based on the image signal subjected to the image processing by the CCU 11201, under the control of the CCU 11201.

 The light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), for example, and supplies the endoscope 11100 with irradiation light for imaging the surgical site and the like.

 The input device 11204 is an input interface for the endoscopic surgery system 11000. The user can input various kinds of information and instructions to the endoscopic surgery system 11000 via the input device 11204. For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, and the like) of the endoscope 11100.

 The treatment tool control device 11205 controls the driving of the energy treatment tool 11112 for cauterizing or incising tissue, sealing blood vessels, and the like. The pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 in order to inflate the body cavity for the purpose of securing the field of view of the endoscope 11100 and securing the working space of the operator. The recorder 11207 is a device capable of recording various kinds of information related to the surgery. The printer 11208 is a device capable of printing various kinds of information related to the surgery in various formats such as text, images, and graphs.
 The light source device 11203, which supplies the endoscope 11100 with irradiation light for imaging the surgical site, can be composed of, for example, an LED, a laser light source, or a white light source composed of a combination thereof. When the white light source is composed of a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so that the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to capture images corresponding to R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the imaging element of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter on the imaging element.

 The driving of the light source device 11203 may also be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling the driving of the imaging element of the camera head 11102 in synchronization with the timing of the change of the light intensity to acquire images in a time-division manner and combining the images, an image of high dynamic range without so-called blocked-up shadows and blown-out highlights can be generated.
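 A minimal sketch of the time-division high-dynamic-range composition just described: two frames captured under weaker and stronger illumination are merged, with the scaled weaker-illumination frame used wherever the stronger-illumination frame is blown out. The blending rule, the bit depth, and the gain factor are assumptions for illustration; the actual composition method is not specified.

```python
import numpy as np

def merge_hdr(frame_low, frame_high, gain, sat=4095):
    """Combine frames captured at two illumination intensities (12-bit here).
    `frame_low` was captured under light `gain` times weaker than `frame_high`."""
    low = frame_low.astype(np.float64) * gain  # bring both to a common scale
    high = frame_high.astype(np.float64)
    # Where the strong-illumination frame is saturated (blown out), trust the
    # scaled weak-illumination frame; elsewhere keep the strong one, which
    # has less noise in the shadows.
    merged = np.where(frame_high >= sat, low, high)
    return merged
```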
 The light source device 11203 may also be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed, in which the wavelength dependence of light absorption in body tissue is utilized and light in a narrower band than the irradiation light used in normal observation (that is, white light) is emitted, so that predetermined tissue such as blood vessels in the surface layer of the mucous membrane is imaged with high contrast. Alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, it is possible, for example, to irradiate body tissue with excitation light and observe the fluorescence from the body tissue (autofluorescence observation), or to locally inject a reagent such as indocyanine green (ICG) into body tissue and irradiate the body tissue with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image. The light source device 11203 can be configured to be able to supply narrow band light and/or excitation light corresponding to such special light observation.
 FIG. 14 is a block diagram showing an example of the functional configuration of the camera head 11102 and the CCU 11201 shown in FIG. 13.

 The camera head 11102 has a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.

 The lens unit 11401 is an optical system provided at the connection with the lens barrel 11101. The observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.

 The imaging unit 11402 is composed of an imaging element. The imaging unit 11402 may be composed of one imaging element (a so-called single-plate type) or a plurality of imaging elements (a so-called multi-plate type). When the imaging unit 11402 is of the multi-plate type, for example, image signals corresponding to R, G, and B may be generated by the respective imaging elements and combined to obtain a color image. Alternatively, the imaging unit 11402 may be configured to have a pair of imaging elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display. The 3D display enables the operator 11131 to grasp the depth of the biological tissue in the surgical site more accurately. When the imaging unit 11402 is of the multi-plate type, a plurality of lens units 11401 may also be provided corresponding to the respective imaging elements.
 The imaging unit 11402 does not necessarily have to be provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101, immediately behind the objective lens.

 The drive unit 11403 is composed of an actuator and, under the control of the camera head control unit 11405, moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis. This allows the magnification and focus of the image captured by the imaging unit 11402 to be adjusted as appropriate.

 The communication unit 11404 is composed of a communication device for transmitting and receiving various kinds of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.

 The communication unit 11404 also receives from the CCU 11201 a control signal for controlling the driving of the camera head 11102 and supplies it to the camera head control unit 11405. The control signal includes information on the imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.

 The above imaging conditions such as the frame rate, exposure value, magnification, and focus may be specified by the user as appropriate, or may be set automatically by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function are mounted on the endoscope 11100.

 The camera head control unit 11405 controls the driving of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
 The communication unit 11411 is composed of a communication device for transmitting and receiving various kinds of information to and from the camera head 11102. The communication unit 11411 receives the image signal transmitted from the camera head 11102 via the transmission cable 11400.

 The communication unit 11411 also transmits to the camera head 11102 a control signal for controlling the driving of the camera head 11102. The image signal and the control signal can be transmitted by electrical communication, optical communication, or the like.

 The image processing unit 11412 performs various kinds of image processing on the image signal, which is RAW data transmitted from the camera head 11102.

 The control unit 11413 performs various kinds of control related to the imaging of the surgical site and the like by the endoscope 11100 and the display of the captured image obtained by imaging the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.

 The control unit 11413 also causes the display device 11202 to display the captured image showing the surgical site and the like, based on the image signal subjected to the image processing by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition technologies. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body parts, bleeding, mist during use of the energy treatment tool 11112, and the like. When causing the display device 11202 to display the captured image, the control unit 11413 may use the recognition results to superimpose various kinds of surgery support information on the image of the surgical site. By superimposing the surgery support information and presenting it to the operator 11131, the burden on the operator 11131 can be reduced and the operator 11131 can proceed with the surgery reliably.

 The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable compatible with electrical signal communication, an optical fiber compatible with optical communication, or a composite cable of these.

 Here, in the illustrated example, communication is performed by wire using the transmission cable 11400, but the communication between the camera head 11102 and the CCU 11201 may also be performed wirelessly.
 The above is an example of an endoscopic surgery system to which the technology according to the present disclosure can be applied. Among the configurations described above, the technology according to the present disclosure can be applied to the imaging unit 11402 of the camera head 11102. Specifically, some of the plurality of pixels in the pixel array unit of the imaging unit 11402 may be formed so as to function as correction pixels. By applying the technology according to the present disclosure to the imaging unit 11402, correction processing of the correction pixels or the normal pixels can be performed, and deterioration of image quality due to deterioration of the linearity of the pixel characteristics can be prevented.

 Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to, for example, a microscopic surgery system or the like.
 The present technology may also be configured to include the following technical matters.
(1)
An imaging device comprising a plurality of pixels capable of accumulating charge generated according to the amount of incident light, wherein the plurality of pixels include: a first pixel formed to have a first saturation charge amount; and a second pixel formed to have a second saturation charge amount smaller than the first saturation charge amount.
(2)
The imaging device according to (1) above, wherein the plurality of pixels consist of an array of pixel blocks each including a plurality of the first pixels and at least one of the second pixels.
(3)
The imaging device according to (1) or (2) above, wherein each pixel block consists of four adjacent pixels of the same color component, and the array of pixel blocks forms a Bayer array.
(4)
The imaging device according to (3) above, wherein the plurality of first pixels in the pixel block are each provided so as to correspond to one of predetermined color components, and the at least one second pixel in the pixel block is provided so as to correspond to one of the color components.
(5)
The imaging device according to (1) or (4) above, wherein the at least one second pixel is provided so as to correspond to a green component.
(6)
The imaging device according to (1) or (4) above, wherein the at least one second pixel is provided so as to correspond to a color component other than the green component.
(7)
The imaging device according to (2) above, wherein the second pixels are arranged at positions diagonal to each other in the pixel block.
(8)
The imaging device according to (2) or (7) above, wherein the plurality of second pixels are arranged such that the interval between the second pixels is larger than the size of the pixel block.
(9)
The imaging device according to (1) above, wherein the plurality of pixels are each provided so as to correspond to one of predetermined color components according to a Bayer array.
(10)
The imaging device according to (9) above, wherein the plurality of second pixels are provided so as to correspond to either a green component adjacent to a blue component or a green component adjacent to a red component in one direction of the Bayer array.
(11)
The imaging device according to (9) or (10) above, wherein the plurality of second pixels are arranged such that the interval between the second pixels is larger than the interval between pixels of the corresponding color component.
(12)
The imaging device according to any one of (1) to (11) above, wherein the area of the transfer gate electrode for transferring charge from the charge accumulation region of the second pixel is larger than the area of the transfer gate electrode for transferring charge from the charge accumulation region of the first pixel.
(13)
The imaging device according to any one of (1) to (12) above, wherein the charge accumulation region for accumulating charge in the second pixel is smaller than the charge accumulation region for accumulating charge in the first pixel.
(14)
The imaging device according to any one of (1) to (13) above, further comprising a signal processing circuit that processes signals based on the amounts of charge transferred from the plurality of pixels, wherein the signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the amount of charge transferred from the first pixel and the amount of charge transferred from the second pixel.
(15)
The imaging device according to (14) above, wherein the signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on a first sensitivity ratio based on the amounts of charge transferred from the respective first pixels and a second sensitivity ratio based on the amount of charge transferred from the second pixel.
(16)
The imaging device according to (14) or (15) above, wherein the signal processing circuit calculates a difference between the first sensitivity ratio and the second sensitivity ratio and corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the calculated difference and a predetermined reference value.
(17)
The imaging device according to (16) above, wherein the signal processing circuit corrects the signal based on the amount of charge transferred from the second pixel when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is smaller than the predetermined reference value.
(18)
The imaging device according to (16) or (17) above, wherein, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is not smaller than the predetermined reference value, the signal processing circuit corrects at least the signal based on the amount of charge transferred from the first pixel of the blue component, based on the signal based on the amount of charge transferred from the second pixel, and, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is not smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is smaller than the predetermined reference value, the signal processing circuit corrects at least the signal based on the amount of charge transferred from the first pixel of the red component, based on the signal based on the amount of charge transferred from the second pixel.
(19)
The imaging device according to any one of (16) to (18) above, wherein, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is not smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is not smaller than the predetermined reference value, the signal processing circuit corrects, for each same color component, the signal based on the amount of charge transferred from the first pixel, based on the signal based on the amount of charge transferred from the second pixel.
(20)
A method for correcting a signal in an imaging device, the method including: reading charge from a plurality of pixels that have accumulated charge generated according to the amount of incident light; and correcting a signal based on the amount of the charge read from the plurality of pixels, wherein the plurality of pixels include a first pixel formed to have a first saturation charge amount and a second pixel formed to have a second saturation charge amount smaller than the first saturation charge amount, and the correcting includes correcting a signal based on the amount of charge transferred from any of the plurality of pixels, based on a sensitivity ratio based on the amount of charge transferred from the first pixel and a sensitivity ratio based on the amount of charge transferred from the second pixel.
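Items (16) to (19) above amount to a small decision procedure over the per-color sensitivity-ratio differences. The following is a minimal Python sketch of one reading of that procedure; the function name, the use of absolute differences, the returned labels, and the numbers in the usage line are illustrative assumptions and are not taken from the specification.

```python
def select_correction_target(r1_red, r2_red, r1_blue, r2_blue, ref):
    """Decide which signals to correct from the sensitivity-ratio
    differences for the red and blue components.

    r1_*: first sensitivity ratio (derived from the first pixels)
    r2_*: second sensitivity ratio (derived from the second pixels)
    ref:  the predetermined reference value (threshold)
    """
    # Per item (16): compare |first ratio - second ratio| against ref.
    red_small = abs(r1_red - r2_red) < ref
    blue_small = abs(r1_blue - r2_blue) < ref

    if red_small and blue_small:
        # Item (17): both differences small -> correct the second pixels.
        return "correct second-pixel signals"
    if red_small:
        # Item (18), first branch: blue difference not small -> correct at
        # least the blue first pixels using the second-pixel signals.
        return "correct first-pixel signals (blue) from second pixels"
    if blue_small:
        # Item (18), second branch: red difference not small -> correct at
        # least the red first pixels using the second-pixel signals.
        return "correct first-pixel signals (red) from second pixels"
    # Item (19): neither difference small -> correct the first pixels of
    # each color component using the second pixels of the same color.
    return "correct first-pixel signals per color from same-color second pixels"


# Hypothetical values, for illustration only.
print(select_correction_target(r1_red=0.52, r2_red=0.50,
                               r1_blue=0.41, r2_blue=0.48, ref=0.05))
```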
1 … imaging device
10 … control unit
20 … pixel array unit
 200a … normal pixel
 200b … correction pixel
 201 … microlens
 210 … filter layer
 220 … semiconductor substrate
 222 … photoelectric conversion element
 224 … P-type semiconductor region
 226 … N-type semiconductor region
 230 … wiring layer
 232 … transfer gate electrode
 234 … metal wiring
 236 … floating diffusion
 238 … amplification transistor
 239 … selection transistor
30 … vertical drive unit
 32 … readout scanning circuit
 34 … sweep scanning circuit
40 … horizontal drive unit
50 … column processing unit
60 … signal processing unit
70 … image memory

Claims (20)

  1.  An imaging device comprising a plurality of pixels capable of accumulating charge generated according to the amount of incident light, wherein
     the plurality of pixels include:
     a first pixel formed to have a first saturation charge amount; and
     a second pixel formed to have a second saturation charge amount smaller than the first saturation charge amount.
  2.  The imaging device according to claim 1, wherein the plurality of pixels consist of an array of pixel blocks each including a plurality of the first pixels and at least one of the second pixels.
  3.  The imaging device according to claim 2, wherein each pixel block consists of four adjacent pixels of the same color component, and the array of pixel blocks forms a Bayer array.
  4.  The imaging device according to claim 3, wherein the plurality of first pixels in the pixel block are each provided so as to correspond to one of predetermined color components, and the at least one second pixel in the pixel block is provided so as to correspond to one of the color components.
  5.  The imaging device according to claim 4, wherein the at least one second pixel is provided so as to correspond to a green component.
  6.  The imaging device according to claim 4, wherein the at least one second pixel is provided so as to correspond to a color component other than the green component.
  7.  The imaging device according to claim 2, wherein the second pixels are arranged at positions diagonal to each other in the pixel block.
  8.  The imaging device according to claim 2, wherein the plurality of second pixels are arranged such that the interval between the second pixels is larger than the size of the pixel block.
  9.  The imaging device according to claim 1, wherein the plurality of pixels are each provided so as to correspond to one of predetermined color components according to a Bayer array.
  10.  The imaging device according to claim 9, wherein the plurality of second pixels are provided so as to correspond to either a green component adjacent to a blue component or a green component adjacent to a red component in one direction of the Bayer array.
  11.  The imaging device according to claim 9, wherein the plurality of second pixels are arranged such that the interval between the second pixels is larger than the interval between pixels of the corresponding color component.
  12.  The imaging device according to claim 1, wherein the area of the transfer gate electrode for transferring charge from the charge accumulation region of the second pixel is larger than the area of the transfer gate electrode for transferring charge from the charge accumulation region of the first pixel.
  13.  The imaging device according to claim 1, wherein the charge accumulation region for accumulating charge in the second pixel is smaller than the charge accumulation region for accumulating charge in the first pixel.
  14.  The imaging device according to claim 1, further comprising a signal processing circuit that processes signals based on the amounts of charge transferred from the plurality of pixels, wherein the signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the amount of charge transferred from the first pixel and the amount of charge transferred from the second pixel.
  15.  The imaging device according to claim 14, wherein the signal processing circuit corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on a first sensitivity ratio based on the amounts of charge transferred from the respective first pixels and a second sensitivity ratio based on the amount of charge transferred from the second pixel.
  16.  The imaging device according to claim 15, wherein the signal processing circuit calculates a difference between the first sensitivity ratio and the second sensitivity ratio and corrects a signal based on the amount of charge transferred from any of the plurality of pixels, based on the calculated difference and a predetermined reference value.
  17.  The imaging device according to claim 16, wherein the signal processing circuit corrects the signal based on the amount of charge transferred from the second pixel when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is smaller than the predetermined reference value.
  18.  The imaging device according to claim 16, wherein, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is not smaller than the predetermined reference value, the signal processing circuit corrects at least the signal based on the amount of charge transferred from the first pixel of the blue component, based on the signal based on the amount of charge transferred from the second pixel, and, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is not smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is smaller than the predetermined reference value, the signal processing circuit corrects at least the signal based on the amount of charge transferred from the first pixel of the red component, based on the signal based on the amount of charge transferred from the second pixel.
  19.  The imaging device according to claim 16, wherein, when the difference between the first sensitivity ratio and the second sensitivity ratio for the red component is not smaller than the predetermined reference value and the difference between the first sensitivity ratio and the second sensitivity ratio for the blue component is not smaller than the predetermined reference value, the signal processing circuit corrects, for each same color component, the signal based on the amount of charge transferred from the first pixel, based on the signal based on the amount of charge transferred from the second pixel.
  20.  A method for correcting a pixel signal in an imaging device, the method comprising:
     reading charge from a plurality of pixels that have accumulated charge generated according to the amount of incident light; and
     correcting a pixel signal based on the amount of the charge read from the plurality of pixels, wherein
     the plurality of pixels include a first pixel formed to have a first saturation charge amount and a second pixel formed to have a second saturation charge amount smaller than the first saturation charge amount, and
     the correcting includes correcting a pixel signal based on the amount of charge transferred from any of the plurality of pixels, based on a sensitivity ratio based on the amount of charge transferred from the first pixel and a sensitivity ratio based on the amount of charge transferred from the second pixel.
PCT/JP2020/039150 2019-11-05 2020-10-16 Image capture device and method for correcting pixel signal in image capture device WO2021090663A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019-200453 2019-11-05
JP2019200453 2019-11-05

Publications (1)

Publication Number Publication Date
WO2021090663A1 true WO2021090663A1 (en) 2021-05-14

Family

ID=75849735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/039150 WO2021090663A1 (en) 2019-11-05 2020-10-16 Image capture device and method for correcting pixel signal in image capture device

Country Status (2)

Country Link
TW (1) TW202123683A (en)
WO (1) WO2021090663A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009303043A (en) * 2008-06-16 2009-12-24 Panasonic Corp Solid-state imaging device and signal processing method thereof
WO2014041845A1 (en) * 2012-09-12 2014-03-20 富士フイルム株式会社 Imaging device and signal processing method
JP2015153975A (en) * 2014-02-18 2015-08-24 ソニー株式会社 Solid state image sensor, manufacturing method of the same, and electronic apparatus

Also Published As

Publication number Publication date
TW202123683A (en) 2021-06-16

Similar Documents

Publication Publication Date Title
US11923387B2 (en) Imaging device and electronic apparatus
WO2021131318A1 (en) Solid-state imaging device and electronic apparatus
WO2021015009A1 (en) Solid-state imaging device and electronic apparatus
US11756971B2 (en) Solid-state imaging element and imaging apparatus
WO2021124975A1 (en) Solid-state imaging device and electronic instrument
JP2021034496A (en) Imaging element and distance measuring device
US11889206B2 (en) Solid-state imaging device and electronic equipment
WO2021100338A1 (en) Solid-state image capture element
WO2021090663A1 (en) Image capture device and method for correcting pixel signal in image capture device
JP2022015325A (en) Solid-state imaging device and electronic device
JP2021125716A (en) Solid imaging device and electronic apparatus
WO2024057805A1 (en) Imaging element and electronic device
WO2023080011A1 (en) Imaging device and electronic apparatus
WO2021171796A1 (en) Solid-state imaging device and electronic apparatus
WO2022102433A1 (en) Image capture device
WO2022158170A1 (en) Photodetector and electronic device
WO2023013156A1 (en) Imaging element and electronic device
WO2022259855A1 (en) Semiconductor device, method for manufacturing same, and electronic apparatus
WO2021186911A1 (en) Imaging device and electronic apparatus
WO2021002213A1 (en) Solid-state imaging device, operation method for same, and electronic apparatus
US20240113148A1 (en) Imaging element and imaging device
WO2020075380A1 (en) Storage circuit and imaging device
TW202410428A (en) Light detection device
JP2021125574A (en) Light receiving element, solid-state image sensor, and electronic device
JP2021072397A (en) Solid-state image pickup device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20886037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20886037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP