WO2023021774A1 - Imaging device and electronic apparatus incorporating the same

Imaging device and electronic apparatus incorporating the same

Info

Publication number
WO2023021774A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
light
pixels
light receiving
receiving pixels
Prior art date
Application number
PCT/JP2022/013825
Other languages
English (en)
Japanese (ja)
Inventor
Takuma Nagata
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2023021774A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 - Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/46 - Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled, by combining or binning pixels
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 - Control of the SSIS exposure
    • H04N 25/57 - Control of the dynamic range
    • H04N 25/58 - Control of the dynamic range involving two or more exposures
    • H04N 25/581 - Control of the dynamic range involving two or more exposures acquired simultaneously
    • H04N 25/585 - Control of the dynamic range involving two or more exposures acquired simultaneously with pixels having different sensitivities within the sensor, e.g. fast or slow pixels or pixels having different sizes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/70 - SSIS architectures; Circuits associated therewith
    • H04N 25/76 - Addressed sensors, e.g. MOS or CMOS sensors
    • H04N 25/77 - Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components

Definitions

  • The present disclosure relates to imaging devices and electronic devices, and in particular to imaging devices that perform autofocus based on the image plane phase difference.
  • Conventionally, when generating the image plane phase difference information, sensitivity difference correction of the input image is performed first, pixel addition is then performed, and the pixel values after the addition are used to generate the image plane phase difference information.
  • The present disclosure proposes an imaging device and an electronic device capable of suppressing an increase in device size.
  • An imaging device according to the present disclosure includes a pixel array including a plurality of light-receiving pixels, a reading unit that generates an image signal composed of pixel values read from each of the light-receiving pixels, and a signal processing unit that processes the image signal output from the reading unit.
  • The pixel array includes a plurality of pixel pairs each composed of at least two light-receiving pixels that share one on-chip lens.
  • The signal processing unit includes an addition unit that performs a first addition process of adding the pixel values read from the first light-receiving pixel in each of at least two of the pixel pairs and adding the pixel values read from a second light-receiving pixel, different from the first light-receiving pixel, in each of those pixel pairs, and a generation unit that generates information about the image plane phase difference based on the pixel values obtained by the first addition process.
  • FIG. 1 is a block diagram showing a configuration example of an imaging device according to a first embodiment;
  • FIG. 2 is an explanatory diagram showing a configuration example of the pixel array shown in FIG. 1;
  • FIG. 3 is an explanatory diagram showing a configuration example of the light receiving pixels shown in FIG. 2;
  • FIG. 4 is a circuit diagram showing a configuration example of a pixel block shown in FIG. 2;
  • FIG. 5 is a circuit diagram showing a configuration example of another pixel block shown in FIG. 2;
  • FIG. 6 is an explanatory diagram showing a connection example of a plurality of pixel blocks shown in FIG. 2;
  • FIG. 7 is a block diagram showing a configuration example of the reading unit shown in FIG. 1;
  • FIG. 8 is an explanatory diagram showing a configuration example of the image signal shown in FIG. 1;
  • FIG. 9 is an explanatory diagram showing an example of the number of effective pixels in the imaging device shown in FIG. 1;
  • FIG. 10 is an explanatory diagram showing an operation example of a plurality of imaging modes in the imaging device shown in FIG. 1;
  • FIG. 11 is a block diagram showing a more detailed configuration example of the phase difference data generation unit according to the first embodiment;
  • FIG. 12 is a diagram of one unit extracted from an arrangement example of the light receiving pixels in the pixel array shown in FIG. 2;
  • FIGS. 13 to 16 are diagrams (No. 1 to No. 4) for explaining the same-color pixel addition processing performed on pixel values read from each pixel block according to the first embodiment;
  • FIG. 17 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to the first embodiment;
  • FIG. 18 is a diagram of one unit extracted from an arrangement example of the light receiving pixels in the pixel array shown in FIG. 2;
  • FIGS. 19 to 26 are diagrams (No. 1 to No. 8) for explaining the same-color pixel addition processing executed in HDR mode on pixel values read from each pixel block according to the first embodiment;
  • FIG. 27 is a diagram showing the left pixel total values and right pixel total values of low-luminance pixels and high-luminance pixels, respectively, finally obtained based on pixel values read from a unit according to the first embodiment;
  • FIG. 28 is a diagram of one unit extracted from an arrangement example of light-receiving pixels in a pixel array according to a first modification of the first embodiment;
  • FIGS. 29 to 32 are diagrams (No. 1 to No. 4) for explaining the same-color pixel addition processing performed on pixel values read from each pixel block according to the first modification of the first embodiment;
  • FIG. 33 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to the first modification of the first embodiment;
  • FIG. 34 is a diagram of one unit extracted from an arrangement example of light-receiving pixels in a pixel array according to a second modification of the first embodiment;
  • FIG. 35 is a diagram for explaining the same-color pixel addition processing executed on pixel values read out for image plane phase difference detection from the unit according to the second modification of the first embodiment;
  • FIG. 36 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to the second modification of the first embodiment;
  • FIG. 37 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to a third modification of the first embodiment;
  • FIG. 38 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to a fourth modification of the first embodiment;
  • FIG. 39 is a diagram showing the left pixel total values and right pixel total values finally obtained based on pixel values read from a unit according to a fifth modification of the first embodiment;
  • FIG. 40 is a block diagram showing a configuration example of an imaging device according to a second embodiment;
  • FIG. 41 is an explanatory diagram showing a configuration example of the pixel array shown in FIG. 40;
  • FIG. 42 is an explanatory diagram showing a connection example of a plurality of pixel blocks shown in FIG. 41;
  • FIG. 43 is an explanatory diagram showing an operation example of the pixel array shown in FIG. 41;
  • FIG. 44 is an explanatory diagram showing a configuration example of a pixel array according to a modification of the second embodiment;
  • FIG. 45 is an explanatory diagram showing a connection example of a plurality of pixel blocks shown in FIG. 44;
  • FIG. 46 is an explanatory diagram showing a usage example of an imaging device;
  • FIG. 47 is a block diagram showing an example of a schematic configuration of a vehicle control system;
  • FIG. 48 is an explanatory diagram showing an example of installation positions of an outside-vehicle information detection unit and an imaging unit.
  • 1. First Embodiment
    1.1 Configuration Example
    1.2 General Operation Example
    1.3 Generation of Image Plane Phase Difference Information (Phase Difference Data DF)
    1.3.1 More Detailed Configuration Example of the Phase Difference Data Generation Unit
    1.3.2 Concrete Example of Same-Color Pixel Addition Processing in Imaging Mode MC
    1.3.3 Concrete Example of Same-Color Pixel Addition Processing in HDR Mode
    1.4 Actions and Effects
    1.5 Modifications
    1.5.1 First Modification
    1.5.2 Second Modification
    1.5.3 Third Modification
    1.5.4 Fourth Modification
    1.5.5 Fifth Modification
  • 2. Second Embodiment
  • FIG. 1 is a block diagram showing a configuration example of an imaging apparatus according to this embodiment.
  • The imaging device 1 includes a pixel array 11, a driving section 12, a reference signal generating section 13, a reading section 20, a signal processing section 15, and an imaging control section 18.
  • The signal processing unit 15 may be arranged on the same chip (including stacked chips) as the chip on which the pixel array 11 is provided, or may be arranged on a different chip.
  • The pixel array 11 has a plurality of light receiving pixels P arranged in a matrix.
  • Each light receiving pixel P is configured to generate a signal SIG including a pixel voltage Vpix corresponding to the amount of light received.
  • FIG. 2 shows an example of the arrangement of the light receiving pixels P in the pixel array 11, and FIG. 3 shows an example of a schematic cross-sectional structure of the pixel array 11.
  • The pixel array 11 has multiple pixel blocks 100 and multiple lenses 101.
  • The plurality of pixel blocks 100 includes pixel blocks 100R, 100Gr, 100Gb, and 100B.
  • In the pixel array 11, the plurality of light-receiving pixels P are arranged in units (units U) of four pixel blocks 100 (pixel blocks 100R, 100Gr, 100Gb, and 100B).
  • The pixel block 100R has eight light-receiving pixels P (light-receiving pixels PR) including red (R) color filters 55, and the pixel block 100Gr has ten light-receiving pixels P (light-receiving pixels PGr) including green (G) color filters 55.
  • The pixel block 100Gb has ten light-receiving pixels P (light-receiving pixels PGb) including green (G) color filters 55, and the pixel block 100B has eight light-receiving pixels P (light-receiving pixels PB) including blue (B) color filters 55. In FIG. 2, the difference in color of the color filters is expressed using hatching.
  • The arrangement pattern of the light-receiving pixels PR in the pixel block 100R and the arrangement pattern of the light-receiving pixels PB in the pixel block 100B are the same; likewise, the arrangement patterns of the light-receiving pixels PGr in the pixel block 100Gr and of the light-receiving pixels PGb in the pixel block 100Gb are identical to each other.
  • In each unit U, the pixel block 100Gr is located at the upper left, the pixel block 100R at the upper right, the pixel block 100B at the lower left, and the pixel block 100Gb at the lower right.
  • In other words, the pixel blocks 100R, 100Gr, 100Gb, and 100B are arranged in a so-called Bayer arrangement with the pixel block 100 as a unit.
  • The pixel array 11 includes a semiconductor substrate 51, semiconductor regions 52, an insulating layer 53, a multilayer wiring layer 54, color filters 55, and a light shielding film 116.
  • The semiconductor substrate 51 is a support substrate on which the imaging device 1 is formed, and is a P-type semiconductor substrate.
  • The semiconductor regions 52 are provided at positions corresponding to the plurality of light receiving pixels P within the semiconductor substrate 51, and are doped with an N-type impurity to form the photodiodes PD.
  • The insulating layer 53 is provided at the boundaries between the light receiving pixels P arranged side by side in the XY plane within the semiconductor substrate 51; in this example, it is a DTI (Deep Trench Isolation) formed using an oxide film or the like.
  • The multilayer wiring layer 54 is provided on the surface of the semiconductor substrate 51 opposite to the light incident surface S of the pixel array 11, and includes a plurality of wiring layers and an interlayer insulating film.
  • The wiring in the multilayer wiring layer 54 connects, for example, transistors (not shown) provided on the surface of the semiconductor substrate 51 with the driving section 12 and the reading section 20.
  • The color filters 55 are provided on the semiconductor substrate 51 on the light incident surface S of the pixel array 11.
  • The light shielding film 116 is provided on the light incident surface S of the pixel array 11 so as to surround each pair of light receiving pixels P arranged side by side in the X direction (hereinafter also referred to as a pixel pair 90).
  • The plurality of lenses 101 are so-called on-chip lenses, and are provided on the color filters 55 on the light incident surface S of the pixel array 11.
  • A light shielding film 56 may also be arranged on the light incident surface S to optically separate adjacent pixels.
  • Each lens 101 is provided above two light receiving pixels P (one pixel pair 90) arranged side by side in the X direction. Accordingly, four lenses 101 are provided above the eight light-receiving pixels P of the pixel block 100R, five lenses 101 above the ten light-receiving pixels P of the pixel block 100Gr, five lenses 101 above the ten light-receiving pixels P of the pixel block 100Gb, and four lenses 101 above the eight light-receiving pixels P of the pixel block 100B.
  • The lenses 101 are arranged side by side in the X and Y directions.
  • The lenses 101 arranged in the Y direction are shifted from one another by one light receiving pixel P in the X direction.
  • In other words, the pixel pairs 90 aligned in the Y direction are arranged with a shift of one light receiving pixel P in the X direction.
  • The imaging device 1 generates phase difference data DF based on the so-called image plane phase difference detected by the plurality of pixel pairs 90.
  • In a camera equipped with the imaging device 1, the defocus amount is determined based on this phase difference data DF, and the position of the photographing lens is moved based on the defocus amount. In this way, the camera can realize autofocus.
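  • The correlation step that turns left/right pixel profiles into a phase difference is not spelled out in this disclosure; a common generic approach is to find the shift that minimizes the mean absolute difference between the two profiles. The sketch below (Python, with a hypothetical function name and search window) illustrates that idea only, not the claimed method:

    def phase_shift(left, right, max_shift=4):
        """Integer shift of `right` relative to `left` that best aligns
        the two profiles (minimum mean absolute difference)."""
        best, best_err = 0, float("inf")
        n = len(left)
        for s in range(-max_shift, max_shift + 1):
            pairs = [(i, i + s) for i in range(n) if 0 <= i + s < n]
            err = sum(abs(left[i] - right[j]) for i, j in pairs) / len(pairs)
            if err < best_err:
                best, best_err = s, err
        return best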
  • FIG. 4 shows a configuration example of the pixel block 100Gr.
  • FIG. 5 shows a configuration example of the pixel block 100R.
  • FIG. 6 shows a wiring example of the pixel blocks 100R, 100Gr, 100Gb and 100B.
  • In FIG. 6, the plurality of pixel blocks 100 are drawn apart from each other.
  • The pixel array 11 has multiple control lines TRGL, multiple control lines RSTL, multiple control lines SELL, and multiple signal lines VSL.
  • The control line TRGL extends in the X direction (the horizontal direction in FIGS. 4 to 6) and has one end connected to the driving section 12.
  • A control signal STRG is supplied from the driving section 12 to the control line TRGL.
  • The control line RSTL extends in the X direction and has one end connected to the driving section 12.
  • A control signal SRST is supplied from the driving section 12 to the control line RSTL.
  • The control line SELL extends in the X direction and has one end connected to the drive unit 12.
  • A control signal SSEL is supplied from the drive unit 12 to the control line SELL.
  • The signal line VSL extends in the Y direction (the vertical direction in FIGS. 4 to 6) and has one end connected to the reading section 20.
  • The signal line VSL transmits the signal SIG generated by the light receiving pixel P to the reading unit 20.
  • The pixel block 100Gr (FIG. 4) has ten photodiodes PD, ten transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL. The ten photodiodes PD and ten transistors TRG respectively correspond to the ten light-receiving pixels PGr included in the pixel block 100Gr.
  • The transistors TRG, RST, AMP, and SEL are N-type MOS (Metal Oxide Semiconductor) transistors in this example.
  • The photodiode PD is a photoelectric conversion element that generates an amount of charge corresponding to the amount of light received and accumulates the generated charge inside.
  • The photodiode PD has its anode grounded and its cathode connected to the source of the transistor TRG.
  • Each transistor TRG has its gate connected to a control line TRGL, its source connected to the cathode of the photodiode PD, and its drain connected to the floating diffusion FD. The gates of the ten transistors TRG are connected to different control lines TRGL among ten control lines TRGL (control lines TRGL1 to TRGL6 and TRGL9 to TRGL12 in this example).
  • The floating diffusion FD is configured to accumulate the charge transferred from the photodiodes PD via the transistors TRG.
  • The floating diffusion FD is configured using, for example, a diffusion layer formed on the surface of the semiconductor substrate. In FIG. 4, the floating diffusion FD is shown using a capacitive element symbol.
  • The gate of the transistor RST is connected to the control line RSTL, its drain is supplied with the power supply voltage VDD, and its source is connected to the floating diffusion FD.
  • The gate of the transistor AMP is connected to the floating diffusion FD, its drain is supplied with the power supply voltage VDDH, and its source is connected to the drain of the transistor SEL.
  • The gate of the transistor SEL is connected to the control line SELL, its drain is connected to the source of the transistor AMP, and its source is connected to the signal line VSL.
  • In each light receiving pixel P, the charge accumulated in the photodiode PD is first discharged by turning on the transistors TRG and RST based on the control signals STRG and SRST.
  • The exposure period T then starts, and an amount of charge corresponding to the amount of light received is accumulated in the photodiode PD.
  • After the exposure period T ends, the light receiving pixel P outputs the signal SIG including the reset voltage Vreset and the pixel voltage Vpix to the signal line VSL.
  • Specifically, the light-receiving pixel P is electrically connected to the signal line VSL by turning on the transistor SEL based on the control signal SSEL.
  • As a result, the transistor AMP is connected to a constant current source 21 (described later) of the reading section 20 and operates as a so-called source follower.
  • During the P-phase (pre-charge phase) period TP, after the voltage of the floating diffusion FD has been reset by turning on the transistor RST, the light-receiving pixel P outputs a voltage corresponding to the voltage of the floating diffusion FD at that time as the reset voltage Vreset.
  • During the D-phase (data phase) period TD, after the charge has been transferred from the photodiode PD to the floating diffusion FD by turning on the transistor TRG, the light receiving pixel P outputs a voltage corresponding to the voltage of the floating diffusion FD at that time as the pixel voltage Vpix.
  • The difference voltage between the pixel voltage Vpix and the reset voltage Vreset corresponds to the amount of light received by the light receiving pixel P during the exposure period T.
  • In this manner, the light-receiving pixel P outputs the signal SIG including the reset voltage Vreset and the pixel voltage Vpix to the signal line VSL.
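  • This two-sample readout is a correlated double sampling scheme: offsets common to both samples cancel in the difference. A minimal numeric sketch (Python; the voltage values are hypothetical, and the convention that charge transfer lowers the floating diffusion voltage is an assumption consistent with the source-follower readout described above):

    def received_light_signal(v_reset: float, v_pix: float) -> float:
        """Light-dependent component of SIG: the difference between the
        P-phase sample (reset level) and the D-phase sample (pixel level)."""
        return v_reset - v_pix

    # Hypothetical levels: the FD resets to 2.8 V and settles at 2.3 V
    # after the photo-charge is transferred.
    signal = received_light_signal(2.8, 2.3)  # 0.5 V of photo-signal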
  • The pixel block 100R (FIG. 5) has eight photodiodes PD, eight transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL.
  • The eight photodiodes PD and the eight transistors TRG respectively correspond to the eight light receiving pixels PR included in the pixel block 100R.
  • The gates of the eight transistors TRG are connected to different control lines TRGL among eight control lines TRGL (control lines TRGL1, TRGL2, and TRGL5 to TRGL10 in this example).
  • The pixel blocks 100Gr and 100R belonging to the same row arranged in the X direction are connected to a plurality of control lines TRGL among the same twelve control lines TRGL (control lines TRGL1 to TRGL12).
  • The control lines TRGL1 to TRGL12 are arranged in this order from bottom to top in FIG. 6.
  • Specifically, the pixel block 100Gr is connected to ten control lines TRGL (control lines TRGL1 to TRGL6 and TRGL9 to TRGL12) out of the twelve control lines TRGL (control lines TRGL1 to TRGL12), and the pixel block 100R is connected to eight control lines TRGL (control lines TRGL1, TRGL2, and TRGL5 to TRGL10) out of the twelve.
  • The pixel blocks 100Gr and 100R belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL.
  • The pixel blocks 100Gr belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • Likewise, the pixel blocks 100R belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • The pixel block 100B has eight photodiodes PD, eight transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL, similarly to the pixel block 100R (FIG. 5). The eight photodiodes PD and eight transistors TRG respectively correspond to the eight light receiving pixels PB included in the pixel block 100B. The gates of the eight transistors TRG are connected to different control lines TRGL among eight control lines TRGL.
  • The pixel block 100Gb has ten photodiodes PD, ten transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL, like the pixel block 100Gr (FIG. 4).
  • The ten photodiodes PD and ten transistors TRG respectively correspond to the ten light-receiving pixels PGb included in the pixel block 100Gb.
  • The gates of the ten transistors TRG are connected to different control lines TRGL among ten control lines TRGL.
  • The pixel blocks 100B and 100Gb belonging to the same row arranged in the X direction are connected to a plurality of control lines TRGL out of the same twelve control lines TRGL.
  • The pixel blocks 100B and 100Gb belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL.
  • The pixel blocks 100B belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • The pixel blocks 100Gb belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • The drive unit 12 (FIG. 1) drives the plurality of light receiving pixels P in the pixel array 11 based on instructions from the imaging control unit 18. Specifically, the drive unit 12 supplies a plurality of control signals STRG to the plurality of control lines TRGL in the pixel array 11, a plurality of control signals SRST to the plurality of control lines RSTL, and a plurality of control signals SSEL to the plurality of control lines SELL, thereby driving the plurality of light-receiving pixels P in the pixel array 11.
  • The reference signal generation unit 13 generates the reference signal RAMP based on instructions from the imaging control unit 18.
  • The reference signal RAMP has a so-called ramp waveform whose voltage level gradually changes over time during the periods in which the reading unit 20 performs AD conversion (the P-phase period TP and the D-phase period TD).
  • The reference signal generation unit 13 supplies this reference signal RAMP to the reading unit 20.
  • The reading unit 20 generates the image signal Spic0 by performing AD conversion on the signal SIG supplied from the pixel array 11 via the signal lines VSL, based on instructions from the imaging control unit 18.
  • FIG. 7 shows a configuration example of the reading unit 20.
  • FIG. 7 also shows the reference signal generating unit 13, the signal processing unit 15, and the imaging control unit 18.
  • The reading unit 20 has a plurality of constant current sources 21, a plurality of AD (Analog to Digital) conversion units ADC, and a transfer control unit 27.
  • The constant current sources 21 and the AD conversion units ADC are provided corresponding to the plurality of signal lines VSL, respectively.
  • The constant current source 21 and AD conversion unit ADC corresponding to one signal line VSL are described below.
  • The constant current source 21 is configured to apply a predetermined current to the corresponding signal line VSL.
  • One end of the constant current source 21 is connected to the corresponding signal line VSL, and the other end is grounded.
  • The AD conversion unit ADC is configured to perform AD conversion based on the signal SIG on the corresponding signal line VSL.
  • The AD conversion unit ADC has capacitive elements 22 and 23, a comparison circuit 24, a counter 25, and a latch 26.
  • One end of the capacitive element 22 is connected to the signal line VSL and is supplied with the signal SIG, and the other end is connected to the comparison circuit 24.
  • One end of the capacitive element 23 is supplied with the reference signal RAMP from the reference signal generation unit 13, and the other end is connected to the comparison circuit 24.
  • The comparison circuit 24 is configured to generate the signal CP by performing a comparison operation based on the signal SIG supplied from the light receiving pixel P via the signal line VSL and the capacitive element 22 and on the reference signal RAMP supplied from the reference signal generation section 13 via the capacitive element 23. The comparison circuit 24 sets its operating point by setting the voltages of the capacitive elements 22 and 23 based on the control signal AZ supplied from the imaging control section 18. After that, the comparison circuit 24 compares the reset voltage Vreset included in the signal SIG with the voltage of the reference signal RAMP in the P-phase period TP, and compares the pixel voltage Vpix with the voltage of the reference signal RAMP in the D-phase period TD.
  • The counter 25 is configured to count the pulses of the clock signal CLK supplied from the imaging control section 18, based on the signal CP supplied from the comparison circuit 24. Specifically, the counter 25 generates the count value CNTP by counting the pulses of the clock signal CLK until the signal CP transitions in the P-phase period TP, and outputs the count value CNTP as a multi-bit digital code. Likewise, the counter 25 generates the count value CNTD by counting the pulses of the clock signal CLK until the signal CP transitions in the D-phase period TD, and outputs the count value CNTD as a multi-bit digital code.
  • The latch 26 is configured to temporarily hold the digital code supplied from the counter 25 and to output the digital code to the bus wiring BUS based on instructions from the transfer control section 27.
  • The transfer control unit 27 is configured to control the latches 26 of the plurality of AD conversion units ADC so that they sequentially output their digital codes to the bus wiring BUS, based on the control signal CTL supplied from the imaging control unit 18.
  • With this configuration, the reading unit 20 uses the bus wiring BUS to sequentially transfer the digital codes supplied from the plurality of AD conversion units ADC to the signal processing unit 15 as the image signal Spic0.
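  • The comparator/counter pair described above is a single-slope ADC. The sketch below (Python) models one conversion; the ramp parameters are hypothetical, and taking CNTD - CNTP as the pixel code is the usual digital correlated-double-sampling step, assumed here rather than stated in the source:

    def single_slope_adc(v_in: float, ramp_start: float,
                         volts_per_tick: float) -> int:
        """Count clock ticks until a falling ramp crosses v_in,
        mimicking the comparison circuit 24 and counter 25."""
        ticks, ramp = 0, ramp_start
        while ramp > v_in:
            ramp -= volts_per_tick
            ticks += 1
        return ticks

    # Hypothetical levels: Vreset = 2.8 V (P phase), Vpix = 2.3 V (D phase).
    cnt_p = single_slope_adc(2.8, ramp_start=3.0, volts_per_tick=0.001)
    cnt_d = single_slope_adc(2.3, ramp_start=3.0, volts_per_tick=0.001)
    pixel_code = cnt_d - cnt_p  # ~500 counts, proportional to Vreset - Vpix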
  • The signal processing unit 15 (FIG. 1) is configured to generate the image signal Spic by performing predetermined signal processing based on the image signal Spic0 and on instructions from the imaging control unit 18.
  • The signal processing unit 15 has an image data generation unit 16 and a phase difference data generation unit 17.
  • The image data generation unit 16 is configured to generate image data DP representing a captured image by performing predetermined image processing based on the image signal Spic0.
  • The phase difference data generation unit 17 is configured to generate phase difference data DF indicating the image plane phase difference by performing predetermined image processing based on the image signal Spic0.
  • The signal processing unit 15 generates the image signal Spic including the image data DP generated by the image data generation unit 16 and the phase difference data DF generated by the phase difference data generation unit 17.
  • FIG. 8 shows an example of the image signal Spic.
  • The signal processing unit 15 generates the image signal Spic by, for example, alternately arranging the image data DP relating to multiple rows of light-receiving pixels P and the phase difference data DF relating to multiple rows of light-receiving pixels P, and outputs this image signal Spic.
  • The imaging control unit 18 supplies control signals to the drive unit 12, the reference signal generation unit 13, the reading unit 20, and the signal processing unit 15, and controls the operation of these circuits, thereby controlling the operation of the imaging device 1.
  • A control signal Sctl is supplied to the imaging control unit 18 from the outside.
  • This control signal Sctl includes, for example, information about the zoom magnification of the so-called electronic zoom.
  • The imaging control unit 18 controls the operation of the imaging device 1 based on the control signal Sctl.
  • The light-receiving pixel P corresponds to a specific example of the "light-receiving pixel" in the present disclosure.
  • The pixel pair 90 corresponds to a specific example of the "pixel pair" in the present disclosure.
  • The pixel block 100 corresponds to a specific example of the "pixel block" in the present disclosure.
  • The pixel block 100Gr corresponds to a specific example of the "first pixel block" in the present disclosure.
  • The pixel block 100R corresponds to a specific example of the "second pixel block" in the present disclosure.
  • The lens 101 corresponds to a specific example of the "lens" in the present disclosure.
  • The control line TRGL corresponds to a specific example of the "control line" in the present disclosure.
  • The insulating layer 53 corresponds to a specific example of the "insulating layer" in the present disclosure.
  • Referring to FIG. 1, the drive unit 12 sequentially drives the plurality of light receiving pixels P in the pixel array 11 based on instructions from the imaging control unit 18.
  • The reference signal generation unit 13 generates the reference signal RAMP based on instructions from the imaging control unit 18.
  • Each light-receiving pixel P outputs the reset voltage Vreset as the signal SIG during the P-phase period TP, and outputs the pixel voltage Vpix corresponding to the amount of received light as the signal SIG during the D-phase period TD.
  • The reading unit 20 generates the image signal Spic0 based on the signal SIG supplied from the pixel array 11 via the signal lines VSL and on instructions from the imaging control unit 18.
  • In the signal processing unit 15, the image data generation unit 16 performs predetermined image processing based on the image signal Spic0 to generate the image data DP representing the captured image, and the phase difference data generation unit 17 performs predetermined image processing based on the image signal Spic0 to generate the phase difference data DF representing the image plane phase difference.
  • The signal processing unit 15 then generates the image signal Spic including the image data DP and the phase difference data DF.
  • The imaging control unit 18 supplies control signals to the drive unit 12, the reference signal generation unit 13, the reading unit 20, and the signal processing unit 15, and controls the operation of these circuits, thereby controlling the operation of the imaging device 1.
  • In particular, the imaging control unit 18 controls the operation of the imaging device 1 based on the control signal Sctl, which includes information about the zoom magnification of the electronic zoom. The zoom operation of the imaging device 1 is described below.
  • FIG. 9 shows an example of the number of light-receiving pixels P reflected in the captured image (the number of effective pixels) when the zoom magnification is changed from 1x to 10x.
  • The solid line indicates the number of effective pixels of the imaging device 1.
  • FIGS. 10(A) to 10(C) show an example of the zoom operation in the imaging device 1: (A) shows the operation at a zoom magnification of 1x, (B) at a zoom magnification of 2x, and (C) at a zoom magnification of 3x.
  • The imaging device 1 has three imaging modes M (imaging modes MA, MB, and MC).
  • The imaging control unit 18 selects one of the three imaging modes MA to MC based on the information about the zoom magnification included in the control signal Sctl. Specifically, as shown in FIG. 9, the imaging control unit 18 selects the imaging mode MA when the zoom magnification is less than 2, the imaging mode MB when the zoom magnification is 2 or more and less than 3, and the imaging mode MC when the zoom magnification is 3 or more.
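  • The selection rule reduces to two thresholds. A direct transcription (Python; the function name is hypothetical):

    def select_imaging_mode(zoom: float) -> str:
        """Mode selection of the imaging control unit 18 versus zoom magnification."""
        if zoom < 2.0:
            return "MA"  # 4 pixel values per unit U (block binning)
        elif zoom < 3.0:
            return "MB"  # 16 pixel values per unit U (intermediate binning)
        else:
            return "MC"  # 36 pixel values per unit U (all-pixel readout)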
  • In the imaging mode MA, the imaging device 1 obtains four pixel values V (pixel values VR, VGr, VGb, and VB) in each of the plurality of units U, as shown in FIG. 10(A). In this way, the imaging device 1 generates the image data DP by generating pixel values V at a ratio of 4 pixel values per 36 light receiving pixels P.
  • When the number of light receiving pixels P in the pixel array 11 is 108 [Mpix], pixel values V for 12 [Mpix] are calculated. As a result, the number of effective pixels is 12 [Mpix], as shown in FIG. 9.
  • In this imaging mode MA, when the zoom magnification is increased from 1, the number of effective pixels decreases according to the magnification. When the zoom magnification reaches 2, the imaging mode M switches to the imaging mode MB.
  • In the imaging mode MB, the imaging device 1 obtains 16 pixel values V for each of the plurality of units U, as shown in FIG. 10(B). In this manner, the imaging device 1 generates the image data DP by generating pixel values V at a ratio of 16 pixel values per 36 light receiving pixels P.
  • When the number of light receiving pixels P in the pixel array 11 is 108 [Mpix], pixel values V for 48 [Mpix] are calculated. In practice, since the zoom magnification is 2x, the imaging range is narrowed to 1/4, so the number of effective pixels is 12 [Mpix] (FIG. 9).
  • In this imaging mode MB, when the zoom magnification is increased from 2, the number of effective pixels decreases according to the magnification. When the zoom magnification reaches 3, the imaging mode M switches to the imaging mode MC.
  • In the imaging mode MC, the imaging device 1 obtains 36 pixel values V for each of the plurality of units U, as shown in FIG. 10(C). In this manner, the imaging device 1 generates the image data DP by generating pixel values V at a ratio of 36 pixel values per 36 light receiving pixels P.
  • When the number of light receiving pixels P in the pixel array 11 is 108 [Mpix], a captured image of 108 [Mpix] can be obtained. In practice, since the zoom magnification is 3x, the imaging range is narrowed to 1/9, so the number of effective pixels is 12 [Mpix] (FIG. 9).
  • As described above, the imaging device 1 is provided with three imaging modes M, which reduces the change in the image quality of the captured image when the zoom magnification is changed. If, for example, the imaging mode MB were omitted and only the two imaging modes MA and MC were provided, with the imaging mode MA selected while the zoom magnification is less than 3, the number of effective pixels would change sharply at the mode switch.
  • Since the imaging device 1 is provided with three imaging modes M, the change in the number of effective pixels when the zoom magnification is changed can be kept small.
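  • The arithmetic above can be checked in a few lines (Python; this builds on the hypothetical select_imaging_mode sketch and assumes the 108 Mpix array and per-unit binning ratios given in the text):

    TOTAL_MPIX = 108
    RATIO = {"MA": 4 / 36, "MB": 16 / 36, "MC": 36 / 36}

    def effective_mpix(zoom: float) -> float:
        """Binning ratio of the selected mode applied to the cropped
        (1 / zoom^2) portion of the array."""
        return TOTAL_MPIX * RATIO[select_imaging_mode(zoom)] / zoom ** 2

    for z in (1.0, 2.0, 3.0):
        print(z, effective_mpix(z))  # 12.0 Mpix at each mode-switch point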
  • Generation of Image Plane Phase Difference Information (Phase Difference Data DF)
  • Conventionally, in an imaging device that performs autofocus based on the image plane phase difference, pixel values after sensitivity difference correction are used to generate the image plane phase difference information.
  • However, pixel array patterns have become more complicated as pixels have become finer, and the sensitivity difference correction processing has become correspondingly more complicated, so that the circuit scale and memory capacity required for it have tended to increase.
  • In the imaging device 1 according to the present embodiment, the pixel blocks 100R, 100Gr, 100Gb, and 100B that constitute the unit U, the minimum unit of repetition, are each composed of eight or ten light-receiving pixels P (a so-called DecaOcta array). The imaging device 1 operates by switching between three binning stages: the imaging mode MA (see FIG. 10A), in which each of the pixel blocks 100R, 100Gr, 100Gb, and 100B is read out as one pixel; the imaging mode MC (see FIG. 10C), in which all effective pixels P are read out individually; and the imaging mode MB (see FIG. 10B), which reads out at a resolution intermediate between the imaging modes MA and MC.
  • The DecaOcta-array imaging device 1 also supports an HDR (High Dynamic Range) mode in which the dynamic range for luminance of some or all of the light receiving pixels P is wider than in the normal operation mode (the SDR mode described later).
  • In the present embodiment, the pixel arrangement pattern of the image signal Spic0 input to the sensitivity difference correction process in the phase difference data generation unit 17 is simplified. This suppresses complication of the sensitivity difference correction processing in the phase difference data generation unit 17. More specifically, simplifying the pixel array pattern of the image signal Spic0 makes it possible to reduce the number of parallel channels for the sensitivity difference correction processing for phase difference detection, and to reduce the memory capacity required for the sensitivity difference correction coefficients. As a result, even if the pixel array pattern is complicated, an increase in circuit scale and memory capacity, and hence an increase in device size, can be suppressed.
  • In addition, regardless of the imaging mode, the pixel arrangement pattern of the image signal Spic0 input to the sensitivity difference correction process can be uniformly converted into a simple array pattern, so the lightened sensitivity difference correction processing according to the present embodiment can be applied to almost all modes.
  • FIG. 11 is a block diagram showing a more detailed configuration example of the phase difference data generation unit according to this embodiment.
  • As shown in FIG. 11, the phase difference data generation unit 17 includes a same-color pixel addition unit 171, a sensitivity difference correction unit 172, a phase difference information generation unit 173, a phase difference detection unit 174, and a memory 175.
  • Based on the plurality of pixel values included in the image signal Spic1 after the same-color pixel addition processing by the same-color pixel addition unit 171, the sensitivity difference correction unit 172 and the phase difference information generation unit 173 function as a generation unit that generates information regarding the image plane phase difference between left and right (or upper and lower) light receiving pixels (for example, the phase difference information or the phase difference data DF described later).
  • This generation unit may also include the phase difference detection unit 174.
  • The same-color pixel addition unit 171 executes, on the digital image signal Spic0 read from the pixel array 11 and input to the phase difference data generation unit 17, pixel addition processing that, for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B, sums the pixel values of the light receiving pixels P located on the left side of the pixel pairs 90 (hereinafter also referred to as left pixels), and pixel addition processing that sums the pixel values of the light receiving pixels P located on the right side (hereinafter also referred to as right pixels).
  • As a result, the same-color pixel addition unit 171 outputs, for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B, an image signal Spic1 composed of pairs of a left total pixel value (hereinafter also referred to as the left pixel total value) and a right total pixel value (hereinafter also referred to as the right pixel total value).
  • This makes it possible to reduce the resolution of the image signal Spic1 input to the sensitivity difference correction unit 172 to the minimum unit, independently of the number of pixels of the pixel blocks 100R, 100Gr, 100Gb, and 100B in the image signal Spic0. The number of parallel channels for the sensitivity difference correction processing can therefore be reduced, which reduces the circuit scale, and the capacity of the memory 175 that stores the sensitivity difference correction coefficient table can also be reduced; as a result, an increase in device size can be suppressed.
  • Note that the present embodiment can be applied to imaging devices with any pixel array pattern, including pixel array patterns in which the pixel blocks 100R, 100Gr, 100Gb, and 100B are not rectangular (not squares or rectangles) and pixel array patterns in which the numbers of pixels constituting the pixel blocks 100R, 100Gr, 100Gb, and 100B are not uniform. A sketch of the reduction is shown below.
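  • A minimal model of the same-color pixel addition (Python; the (block, side, value) bookkeeping is hypothetical, but the reduction of each block to one left/right pair mirrors the same-color pixel addition unit 171):

    from collections import defaultdict

    def same_color_pixel_addition(pixels):
        """Collapse each pixel block to a (left total, right total) pair.
        `pixels` is an iterable of (block_id, side, value), side in {"L", "R"}."""
        totals = defaultdict(lambda: {"L": 0, "R": 0})
        for block_id, side, value in pixels:
            totals[block_id][side] += value
        return {b: (t["L"], t["R"]) for b, t in totals.items()}

    # In mode MC, block 100Gr contributes 5 left and 5 right values.
    spic0 = [("Gr", "L", v) for v in (10, 12, 11, 9, 13)] + \
            [("Gr", "R", v) for v in (8, 11, 10, 9, 12)]
    print(same_color_pixel_addition(spic0))  # {'Gr': (55, 50)}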
  • Note that in the imaging mode MA, a pair of the left pixel total value and the right pixel total value is read out directly for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B. That is, the image signal Spic0 input to the signal processing unit 15 is already composed of the total pixel value of the left pixels P and the total pixel value of the right pixels P for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B. Therefore, in the imaging mode MA, the same-color pixel addition unit 171 may be bypassed and the image signal Spic0 may be input directly to the sensitivity difference correction unit 172.
  • In the HDR mode, in which light-receiving pixels P operating with a low dynamic range (hereinafter also referred to as low-luminance pixels) and light-receiving pixels P operating with a high dynamic range (hereinafter also referred to as high-luminance pixels) coexist, the left pixel total value and the right pixel total value of the low-luminance pixels P and the left pixel total value and the right pixel total value of the high-luminance pixels P may be determined separately.
  • The sensitivity difference correction unit 172 acquires the sensitivity difference correction coefficient table from the memory 175, and corrects the difference in luminance value (pixel value) between the left pixel total value and the right pixel total value caused by the sensitivity difference between the left pixels and the right pixels by multiplying the left pixel total value and the right pixel total value by the coefficients managed in this table.
  • For example, the sensitivity difference correction coefficient table may have a table structure in which pairs of sensitivity difference correction coefficients corresponding one-to-one to the pixel blocks 100R, 100Gr, 100Gb, and 100B constituting the pixel array 11 are arranged in a matrix according to the arrangement of the pixel blocks 100R, 100Gr, 100Gb, and 100B on the pixel array 11.
  • Each pair consists of a sensitivity difference correction coefficient for correcting the left pixel total value (hereinafter also referred to as the left sensitivity difference correction coefficient) and a sensitivity difference correction coefficient for correcting the right pixel total value (hereinafter also referred to as the right sensitivity difference correction coefficient). Values obtained in advance by experience, simulation, experiment, or the like may be used as these coefficient values.
  • Note that, for the left pixel total value and the right pixel total value of the high-luminance pixels P, the residual sensitivity difference can be kept small even when the same sensitivity difference correction coefficient table as that for the low-luminance pixels P is used. This suggests that operation is possible without problems even if the same sensitivity difference correction coefficient table is used in the SDR mode and the HDR mode.
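  • Applying the table is a per-block multiply. A sketch (Python; the coefficient values and the dictionary keyed by block position are hypothetical stand-ins for the matrix-of-pairs table described above):

    def apply_sensitivity_correction(totals, coeff_table):
        """Multiply each block's (left, right) totals by its stored
        (left coefficient, right coefficient) pair."""
        return {
            pos: (left * coeff_table[pos][0], right * coeff_table[pos][1])
            for pos, (left, right) in totals.items()
        }

    coeffs = {"Gr": (1.00, 1.02), "R": (0.98, 1.01),
              "B": (1.03, 0.99), "Gb": (1.00, 1.01)}
    totals = {"Gr": (55, 50), "R": (40, 42), "B": (38, 37), "Gb": (54, 52)}
    corrected = apply_sensitivity_correction(totals, coeffs)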
  • When operating in the SDR mode, the phase difference information generation unit 173 calculates, for the image signal Spic1 after the sensitivity difference correction, the total value or average value of the left pixel total values and of the right pixel total values for each unit U, thereby generating, for each unit U, a pair (hereinafter also referred to as a luminance pair) of luminance information of the left pixels P (hereinafter also referred to as left luminance information) and luminance information of the right pixels P (hereinafter also referred to as right luminance information).
  • Then, the phase difference information generation unit 173 generates the phase difference information by obtaining the ratio, the difference, or the like between the generated left luminance information and right luminance information.
  • When operating in the HDR mode, the phase difference information generation unit 173 generates, for each unit U, left luminance information of the light receiving pixels P that are left pixels and low-luminance pixels (also referred to as left low-luminance information), right luminance information of the light receiving pixels P that are right pixels and low-luminance pixels (also referred to as right low-luminance information), left luminance information of the light receiving pixels P that are left pixels and high-luminance pixels (also referred to as left high-luminance information), and right luminance information of the light receiving pixels P that are right pixels and high-luminance pixels (also referred to as right high-luminance information).
  • Next, the phase difference information generation unit 173 generates the left luminance information by adding, to the left high-luminance information, a value obtained by multiplying the left low-luminance information by a predetermined coefficient. Similarly, it generates the right luminance information by adding, to the right high-luminance information, a value obtained by multiplying the right low-luminance information by a predetermined coefficient. Then, the phase difference information generation unit 173 generates the phase difference information by obtaining the ratio, the difference, or the like between the generated left luminance information and right luminance information.
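  • The HDR combination and the final comparison fit in a few lines (Python; the coefficient k and the luminance values are hypothetical, and whether a ratio or a difference is used is left open by the text):

    def combine_hdr_luminance(high: float, low: float, k: float) -> float:
        """Per-side HDR luminance: high-luminance info plus the
        low-luminance info scaled by a predetermined coefficient k."""
        return high + k * low

    left = combine_hdr_luminance(high=120.0, low=30.0, k=4.0)   # 240.0
    right = combine_hdr_luminance(high=110.0, low=28.0, k=4.0)  # 222.0
    ratio, diff = left / right, left - right  # candidate phase difference information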
  • White balance adjustment may be performed before the phase difference information generation unit 173 obtains the phase difference information. This white balance adjustment processing may be performed within the phase difference information generation unit 173, or at a processing stage preceding the phase difference information generation unit 173; for example, a processing unit for white balance adjustment may be added to the output stage of the sensitivity difference correction unit 172.
  • Further, not only for white balance adjustment but for various adjustment purposes, the phase difference information generation unit 173 may generate the left luminance information and the right luminance information by multiplying the left pixel total values and the right pixel total values for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B by predetermined coefficients and then summing them.
  • The phase difference detection unit 174 generates and outputs the phase difference data DF based on the phase difference information output from the phase difference information generation unit 173. Note that the phase difference detection unit 174 may be omitted when the phase difference information is used as the phase difference data DF as it is.
  • The memory 175 stores the sensitivity difference correction coefficient table described above.
  • The memory 175 may be provided as a memory dedicated to the phase difference data generation unit 17, or as a memory shared with the image data generation unit 16.
  • FIG. 12 is a diagram of one unit U extracted from the arrangement example of the light receiving pixels P in the pixel array 11 shown in FIG. 2. FIGS. 13 to 16 are diagrams for explaining the same-color pixel addition processing performed on the pixel values read from each pixel block.
  • FIG. 17 is a diagram showing the left pixel total values and the right pixel total values finally obtained based on the pixel values read out from the unit U.
  • In the following, a case where the imaging mode is the imaging mode MC is described as an example.
  • A case where the arrangement pattern of the light receiving pixels P is the DecaOcta arrangement is also illustrated.
  • As shown in FIG. 12, one unit U is composed of a pixel block 100Gr made up of ten light receiving pixels P, a pixel block 100R made up of eight light receiving pixels P, a pixel block 100B made up of eight light receiving pixels P, and a pixel block 100Gb made up of ten light receiving pixels P.
  • In FIG. 12, GrL1 to GrL5 denote the left pixels in the pixel pairs 90 of the pixel block 100Gr and the pixel values VGr read from those left pixels, and GrR1 to GrR5 denote the right pixels and the pixel values VGr read from those right pixels.
  • Similarly, RL1 to RL4 denote the left pixels and the pixel values VR read from those left pixels, and RR1 to RR4 denote the right pixels and the pixel values VR read from those right pixels.
  • BL1 to BL4 denote the left pixels and the pixel values VB read from those left pixels, and BR1 to BR4 denote the right pixels and the pixel values VB read from those right pixels.
  • GbL1 to GbL5 denote the left pixels and the pixel values VGb read from those left pixels, and GbR1 to GbR5 denote the right pixels and the pixel values VGb read from those right pixels.
  • As shown in FIG. 13, the same-color pixel addition unit 171 adds the pixel values GrL1, GrL2, GrL3, GrL4, and GrL5 read from the left pixels P, among the pixel values VGr read from the pixel block 100Gr, thereby calculating the left pixel total value GrL of the pixel block 100Gr.
  • Similarly, the same-color pixel addition unit 171 adds the pixel values GrR1, GrR2, GrR3, GrR4, and GrR5 read from the right pixels P, among the pixel values VGr read from the pixel block 100Gr, thereby calculating the right pixel total value GrR of the pixel block 100Gr.
  • The same applies to the other pixel blocks 100R, 100B, and 100Gb (FIGS. 14 to 16): the pixel values RL1 to RL4, BL1 to BL4, or GbL1 to GbL5 read from the left pixels P are added to calculate the left pixel total values RL, BL, and GbL of the pixel blocks 100R, 100B, and 100Gb, and the pixel values RR1 to RR4, BR1 to BR4, or GbR1 to GbR5 read from the right pixels P are added to calculate the right pixel total values RR, BR, and GbR of the pixel blocks 100R, 100B, and 100Gb.
  • Then, as shown in FIG. 17, the same-color pixel addition unit 171 outputs the image signal Spic1 in which a total pixel value pair 111Gr of the left pixel total value GrL and the right pixel total value GrR is arranged at the position of the pixel block 100Gr, a total pixel value pair 111R of the left pixel total value RL and the right pixel total value RR is arranged at the position of the pixel block 100R, and total pixel value pairs of the left pixel total value BL and the right pixel total value BR and of the left pixel total value GbL and the right pixel total value GbR are arranged at the positions of the pixel blocks 100B and 100Gb, respectively. A worked numeric example follows below.
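  • A worked example of the mode-MC reduction for one unit U (Python; the 10-bit pixel values are hypothetical). The 36 read-out values collapse to four (left total, right total) pairs:

    gr_left = [512, 498, 505, 520, 501]   # GrL1..GrL5
    gr_right = [507, 495, 500, 515, 498]  # GrR1..GrR5
    r_left, r_right = [430, 441, 428, 435], [425, 438, 430, 433]

    GrL, GrR = sum(gr_left), sum(gr_right)  # pair 111Gr: (2536, 2515)
    RL, RR = sum(r_left), sum(r_right)      # pair 111R: (1734, 1726)
    # Pairs for blocks 100B and 100Gb are formed the same way from
    # BL1..BL4 / BR1..BR4 and GbL1..GbL5 / GbR1..GbR5.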
  • The same-color pixel addition processing in the imaging mode MB can easily be understood from the above description by replacing the 36 pixel values V read from one unit U in the imaging mode MC with 16 pixel values, so a detailed description is omitted here.
  • In the imaging mode MA, the number of pixel values V read out from one unit U is four, and the left pixel total values and right pixel total values are read out directly from the pixel array 11, so the same-color pixel addition processing may be bypassed.
• FIG. 18 is a diagram of one unit U extracted from the arrangement example of the light-receiving pixels P in the pixel array 11. FIGS. 19 to 26 are diagrams for explaining the same-color pixel addition process performed on the pixel values read out from each pixel block.
  • FIG. 27 is a diagram showing left pixel total values and right pixel total values of low-luminance pixels and high-luminance pixels, respectively, which are finally obtained based on the pixel values read out from the unit U.
  • the arrangement pattern of the light-receiving pixels P is the DecaOcta arrangement.
• GrL1L to GrL3L denote the left pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGr read from each left pixel.
• GrR1L to GrR3L denote the right pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGr read from each right pixel.
• GrL1H to GrL2H denote the left pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGr read from each left pixel.
• GrR1H to GrR2H denote the right pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGr read from each right pixel.
• RL1L to RL2L denote the left pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VR read from each left pixel.
• RR1L to RR2L denote the right pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VR read from each right pixel.
• RL1H to RL2H denote the left pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VR read from each left pixel.
• RR1H to RR2H denote the right pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VR read from each right pixel.
• BL1L to BL2L denote the left pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VB read from each left pixel.
• BR1L to BR2L denote the right pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VB read from each right pixel.
• BL1H to BL2H denote the left pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VB read from each left pixel.
• BR1H to BR2H denote the right pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VB read from each right pixel.
• GbL1L to GbL3L denote the left pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGb read from each left pixel.
• GbR1L to GbR3L denote the right pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGb read from each right pixel.
• GbL1H to GbL2H denote the left pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGb read from each left pixel.
• GbR1H to GbR2H denote the right pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGb read from each right pixel.
• The same-color pixel addition unit 171 adds the pixel values GrL1H and GrL2H read from the left pixels P, among the pixel values VGr read from the high-luminance pixels PH in the pixel block 100Gr, to calculate the left pixel total value GrLH of the high-luminance pixels PH in the pixel block 100Gr.
• Similarly, the same-color pixel addition unit 171 adds the pixel values GrR1H and GrR2H read from the right pixels P, among the pixel values VGr read from the high-luminance pixels PH in the pixel block 100Gr, to calculate the right pixel total value GrRH of the high-luminance pixels PH in the pixel block 100Gr.
• The same-color pixel addition unit 171 also adds the pixel values GrL1L, GrL2L, and GrL3L read from the left pixels P, among the pixel values VGr read from the low-luminance pixels PL in the pixel block 100Gr, to calculate the left pixel total value GrLL of the low-luminance pixels PL in the pixel block 100Gr.
• Similarly, the same-color pixel addition unit 171 adds the pixel values GrR1L, GrR2L, and GrR3L read from the right pixels P, among the pixel values VGr read from the low-luminance pixels PL in the pixel block 100Gr, to calculate the right pixel total value GrRL of the low-luminance pixels PL in the pixel block 100Gr.
• Similarly, for the other pixel blocks 100R, 100B, and 100Gb, the pixel values RL1H to RL2H, BL1H to BL2H, or GbL1H to GbL2H read from the high-luminance pixels PH among the left pixels P are added to calculate the left pixel total values RLH, BLH, and GbLH of the high-luminance pixels PH, and the pixel values RR1H to RR2H, BR1H to BR2H, or GbR1H to GbR2H read from the high-luminance pixels PH among the right pixels P are added to calculate the right pixel total values RRH, BRH, and GbRH of the high-luminance pixels PH in the pixel blocks 100R, 100B, and 100Gb (see FIGS. 21, 23, and 25).
• Likewise, the pixel values RL1L to RL2L, BL1L to BL2L, or GbL1L to GbL3L read from the low-luminance pixels PL among the left pixels P are added to calculate the left pixel total values RLL, BLL, and GbLL of the low-luminance pixels PL, and the pixel values RR1L to RR2L, BR1L to BR2L, or GbR1L to GbR3L read from the low-luminance pixels PL among the right pixels P are added to calculate the right pixel total values RRL, BRL, and GbRL of the low-luminance pixels PL in the pixel blocks 100R, 100B, and 100Gb (see FIGS. 22, 24, and 26).
• As a result, the same-color pixel addition unit 171 outputs an image signal Spic1 in which, in the pixel block 100Gr, a total pixel value pair 111GrH of the left pixel total value GrLH and the right pixel total value GrRH of the high-luminance pixels PH and a total pixel value pair 111GrL of the left pixel total value GrLL and the right pixel total value GrRL of the low-luminance pixels PL are arranged; in the pixel block 100R, a total pixel value pair 111RH of the left pixel total value RLH and the right pixel total value RRH of the high-luminance pixels PH and a total pixel value pair 111RL of the left pixel total value RLL and the right pixel total value RRL of the low-luminance pixels PL are arranged; in the pixel block 100B, a total pixel value pair 111BH of the left pixel total value BLH and the right pixel total value BRH of the high-luminance pixels PH and a total pixel value pair of the left pixel total value BLL and the right pixel total value BRL of the low-luminance pixels PL are arranged; and in the pixel block 100Gb, a total pixel value pair 111GbH of the left pixel total value GbLH and the right pixel total value GbRH of the high-luminance pixels PH and a total pixel value pair 111GbL of the left pixel total value GbLL and the right pixel total value GbRL of the low-luminance pixels PL are arranged (a per-group addition sketch follows below).
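• As a rough Python sketch (names, grouping layout, and values are assumptions, not the patent's implementation), the HDR-mode addition can be modeled by summing each pixel block's pairs separately per sensitivity group:

    # Sum left/right pixels separately for the low- ("L") and high- ("H")
    # luminance groups, yielding two total pixel value pairs per pixel block.
    def add_by_group(pairs):
        totals = {"H": [0, 0], "L": [0, 0]}
        for left, right, group in pairs:
            totals[group][0] += left
            totals[group][1] += right
        return {g: tuple(t) for g, t in totals.items()}

    # Pixel block 100Gr: three low-luminance pairs and two high-luminance pairs.
    gr_pairs = [(10, 11, "L"), (9, 10, "L"), (11, 10, "L"), (30, 31, "H"), (29, 30, "H")]
    gr_totals = add_by_group(gr_pairs)  # {"H": (GrLH, GrRH), "L": (GrLL, GrRL)}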
• The phase difference information generation unit 173 calculates left low-luminance information of the low-luminance pixels PL among the left pixels from the left pixel total values GrLL, RLL, BLL, and GbLL, calculates right low-luminance information of the low-luminance pixels PL among the right pixels from the right pixel total values GrRL, RRL, BRL, and GbRL, and likewise calculates left high-luminance information and right high-luminance information of the high-luminance pixels PH from the corresponding high-luminance total values.
• The phase difference information generation unit 173 generates left luminance information and right luminance information by adding the calculated pieces of luminance information, each multiplied by a coefficient as necessary, and generates phase difference information by obtaining a ratio, a difference, or the like between the left luminance information and the right luminance information, as sketched below.
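• A minimal Python sketch of this step, assuming illustrative coefficients and total values (the actual coefficients and the choice between ratio and difference are not fixed by the text):

    def luminance_info(totals, coeffs=(1.0, 1.0, 1.0, 1.0)):
        # Weighted sum of the (Gr, R, B, Gb) left or right pixel total values.
        return sum(c * t for c, t in zip(coeffs, totals))

    # Illustrative low-luminance totals (GrLL, RLL, BLL, GbLL) and (GrRL, RRL, BRL, GbRL).
    left_low = luminance_info((30.0, 41.0, 15.0, 31.0))
    right_low = luminance_info((33.0, 40.0, 16.0, 33.0))

    phase_ratio = left_low / right_low   # ratio-based phase difference information
    phase_delta = left_low - right_low   # difference-based alternative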
• In this way, the image signal Spic0 input to the phase difference data generation unit 17 is converted into an image signal Spic1 of one left pixel and one right pixel (corresponding to the left pixel total value and the right pixel total value) for each of the pixel blocks 100R, 100Gr, 100Gb, and 100B.
• Consequently, the resolution of the image signal Spic1 input to the sensitivity difference correction unit 172 is unified regardless of the resolution of the image signal Spic0 (with or without binning), in other words, regardless of the mode the imaging apparatus 1 is executing, so the sensitivity difference correction processing according to this embodiment can be applied to substantially all modes.
• Moreover, since the resolution of the image signal Spic1 input to the sensitivity difference correction unit 172 can be reduced to units of the pixel blocks 100R, 100Gr, 100Gb, and 100B at a minimum, regardless of the number of pixels in each of the pixel blocks 100R, 100Gr, 100Gb, and 100B in the image signal Spic0, it is possible to reduce the circuit scale by reducing the number of parallel channels for the sensitivity difference correction processing and to reduce the capacity of the memory 175 for storing the sensitivity difference correction coefficient table. As a result, it becomes possible to suppress an increase in device size.
• Furthermore, since the image signal Spic1 can be output with a uniform resolution regardless of the pixel arrangement of the input image signal Spic0, this embodiment can also be applied to imaging devices with any pixel array pattern, such as patterns in which the pixel blocks 100R, 100Gr, 100Gb, and 100B are not square or rectangular, or patterns in which the numbers of pixels constituting the pixel blocks 100R, 100Gr, 100Gb, and 100B are not uniform.
• Not only when a plurality of stages of binning modes are provided, but also when it is possible to switch between an SDR (Standard Dynamic Range) mode using a normal dynamic range and an HDR mode, the pixel arrangement pattern of the image signal Spic0 input to the sensitivity difference correction processing can be uniformly converted into a simple arrangement pattern, so there is also the advantage that the lightened sensitivity difference correction processing according to this embodiment can be applied to almost all modes.
• In the above, the pixel pairs 90 in the pixel array 11 are all composed of two light-receiving pixels P adjacent to each other in the X direction, but part or all of the pixel pairs 90 may be composed of two light-receiving pixels P adjacent to each other in the Y direction.
• When all the pixel pairs 90 are composed of the light-receiving pixels P adjacent in the X direction, autofocus based on the horizontal image plane phase difference becomes possible.
• When all the pixel pairs 90 are composed of the light-receiving pixels P adjacent in the Y direction, autofocus based on the vertical image plane phase difference becomes possible.
• When a part of the pixel pairs 90 is composed of the light-receiving pixels P adjacent in the X direction and the rest is composed of the light-receiving pixels P adjacent in the Y direction, autofocus based on both the horizontal and vertical image plane phase differences becomes possible.
• Each of the pixel blocks 100R, 100Gr, 100Gb, and 100B constituting the unit U is composed of a total of 16 light-receiving pixels P of 4×4 pixels.
  • a case of the QQBC arrangement is illustrated.
• FIG. 28 is a diagram of one unit U extracted from the arrangement example of the light-receiving pixels P in the pixel array 11. FIGS. 29 to 32 are diagrams for explaining the same-color pixel addition process performed on the pixel values read out from each pixel block.
  • FIG. 33 is a diagram showing the left pixel total value and the right pixel total value finally obtained based on the pixel values read out from the unit U.
• Here, the case where the imaging mode is the imaging mode MC is described; the case where the imaging mode is the imaging mode MB is similarly applicable.
• Each of the pixel blocks 100R, 100Gr, 100Gb, and 100B has a total of 16 light-receiving pixels P arranged in 4×4 pixels.
• One on-chip lens 101 is arranged across a total of four light-receiving pixels P arranged in a 2×2 matrix.
• Each of the pixel blocks 100R, 100Gr, 100Gb, and 100B is therefore provided with a total of four on-chip lenses 101 arranged in a 2×2 matrix.
• Two light-receiving pixels P that are adjacent in the row direction (X direction) form a pixel pair 90X that can be used to acquire the horizontal image plane phase difference, while two light-receiving pixels P that are adjacent in the column direction (Y direction) form a pixel pair 90Y that can be used to acquire the vertical image plane phase difference.
• GrL1u, GrL1d, GrL2u, GrL2d, GrL3u, GrL3d, GrL4u, and GrL4d denote the left pixels in the pixel pairs 90X and the pixel values VGr read from each left pixel.
• GrR1u, GrR1d, GrR2u, GrR2d, GrR3u, GrR3d, GrR4u, and GrR4d denote the right pixels in the pixel pairs 90X and the pixel values VGr read from each right pixel.
• GrL1u, GrR1u, GrL2u, GrR2u, GrL3u, GrR3u, GrL4u, and GrR4u denote the upper light-receiving pixels P (hereinafter also referred to as upper pixels) in the pixel pairs 90Y and the pixel values VGr read from each upper pixel.
• GrL1d, GrR1d, GrL2d, GrR2d, GrL3d, GrR3d, GrL4d, and GrR4d denote the lower light-receiving pixels P (hereinafter also referred to as lower pixels) in the pixel pairs 90Y and the pixel values VGr read from each lower pixel.
• Similarly, RL1u, RL1d, RL2u, RL2d, RL3u, RL3d, RL4u, and RL4d denote the left pixels and the pixel values VR read from each left pixel; RR1u, RR1d, RR2u, RR2d, RR3u, RR3d, RR4u, and RR4d denote the right pixels and the pixel values VR read from each right pixel; RL1u, RR1u, RL2u, RR2u, RL3u, RR3u, RL4u, and RR4u denote the upper pixels and the pixel values VR read from each upper pixel; and RL1d, RR1d, RL2d, RR2d, RL3d, RR3d, RL4d, and RR4d denote the lower pixels and the pixel values VR read from each lower pixel.
• BL1u, BL1d, BL2u, BL2d, BL3u, BL3d, BL4u, and BL4d denote the left pixels and the pixel values VB read from each left pixel; BR1u, BR1d, BR2u, BR2d, BR3u, BR3d, BR4u, and BR4d denote the right pixels and the pixel values VB read from each right pixel; BL1u, BR1u, BL2u, BR2u, BL3u, BR3u, BL4u, and BR4u denote the upper pixels and the pixel values VB read from each upper pixel; and BL1d, BR1d, BL2d, BR2d, BL3d, BR3d, BL4d, and BR4d denote the lower pixels and the pixel values VB read from each lower pixel.
• GbL1u, GbL1d, GbL2u, GbL2d, GbL3u, GbL3d, GbL4u, and GbL4d denote the left pixels and the pixel values VGb read from each left pixel; GbR1u, GbR1d, GbR2u, GbR2d, GbR3u, GbR3d, GbR4u, and GbR4d denote the right pixels and the pixel values VGb read from each right pixel; GbL1u, GbR1u, GbL2u, GbR2u, GbL3u, GbR3u, GbL4u, and GbR4u denote the upper pixels and the pixel values VGb read from each upper pixel; and GbL1d, GbR1d, GbL2d, GbR2d, GbL3d, GbR3d, GbL4d, and GbR4d denote the lower pixels and the pixel values VGb read from each lower pixel.
• The same-color pixel addition unit 171 adds the pixel values GrL1u, GrL1d, GrL2u, GrL2d, GrL3u, GrL3d, GrL4u, and GrL4d read from the left pixels P, among the pixel values VGr read from the pixel block 100Gr, to calculate the left pixel total value GrL of the pixel block 100Gr.
• Similarly, the same-color pixel addition unit 171 adds the pixel values GrR1u, GrR1d, GrR2u, GrR2d, GrR3u, GrR3d, GrR4u, and GrR4d read from the right pixels P to calculate the right pixel total value GrR of the pixel block 100Gr.
• The same-color pixel addition unit 171 also adds the pixel values GrL1u, GrR1u, GrL2u, GrR2u, GrL3u, GrR3u, GrL4u, and GrR4u read from the upper pixels P to calculate the upper pixel total value Gru of the pixel block 100Gr.
• Similarly, the same-color pixel addition unit 171 adds the pixel values GrL1d, GrR1d, GrL2d, GrR2d, GrL3d, GrR3d, GrL4d, and GrR4d read from the lower pixels P to calculate the lower pixel total value Grd of the pixel block 100Gr.
• Similarly, for the other pixel blocks 100R, 100B, and 100Gb, the pixel values RL1u to RL4u and RL1d to RL4d, BL1u to BL4u and BL1d to BL4d, or GbL1u to GbL4u and GbL1d to GbL4d read from the left pixels P are added to calculate the left pixel total values RL, BL, and GbL of the pixel blocks 100R, 100B, and 100Gb, and the pixel values RR1u to RR4u and RR1d to RR4d, BR1u to BR4u and BR1d to BR4d, or GbR1u to GbR4u and GbR1d to GbR4d read from the right pixels P are added to calculate the right pixel total values RR, BR, and GbR of the pixel blocks 100R, 100B, and 100Gb.
• The upper pixel total values Ru, Bu, and Gbu and the lower pixel total values Rd, Bd, and Gbd of the pixel blocks 100R, 100B, and 100Gb are obtained in the same manner.
• As a result, the same-color pixel addition unit 171 outputs an image signal Spic1h in which a total pixel value pair 111Grh of the left pixel total value GrL and the right pixel total value GrR is arranged in the pixel block 100Gr, a total pixel value pair 111Rh of the left pixel total value RL and the right pixel total value RR is arranged in the pixel block 100R, a total pixel value pair 111Bh of the left pixel total value BL and the right pixel total value BR is arranged in the pixel block 100B, and a total pixel value pair 111Gbh of the left pixel total value GbL and the right pixel total value GbR is arranged in the pixel block 100Gb.
• Similarly, an image signal Spic1v is output in which a total pixel value pair 111Grv of the upper pixel total value Gru and the lower pixel total value Grd is arranged in the pixel block 100Gr, a total pixel value pair 111Rv of the upper pixel total value Ru and the lower pixel total value Rd is arranged in the pixel block 100R, a total pixel value pair 111Bv of the upper pixel total value Bu and the lower pixel total value Bd is arranged in the pixel block 100B, and a total pixel value pair 111Gbv of the upper pixel total value Gbu and the lower pixel total value Gbd is arranged in the pixel block 100Gb (a sketch of the horizontal and vertical additions follows below).
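• A minimal NumPy sketch (the array layout is an assumption) of how the same 16 pixel values of a 4×4 pixel block yield both the horizontal (left/right) and vertical (upper/lower) totals:

    import numpy as np

    block = np.arange(16, dtype=float).reshape(4, 4)  # stand-in 4x4 pixel block

    left_total = block[:, 0::2].sum()   # left pixel of every pixel pair 90X
    right_total = block[:, 1::2].sum()  # right pixel of every pixel pair 90X
    upper_total = block[0::2, :].sum()  # upper pixel of every pixel pair 90Y
    lower_total = block[1::2, :].sum()  # lower pixel of every pixel pair 90Y
    # (left_total, right_total) feeds Spic1h; (upper_total, lower_total) feeds Spic1v.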
• Each of the pixel blocks 100R, 100Gr, 100Gb, and 100B that make up the unit U has a pixel array composed of a total of nine light-receiving pixels P of 3×3 pixels.
• In this modification, the pixel values added for detecting the image plane phase difference are not necessarily pixel values read from light-receiving pixels P of the same color, so "same-color pixel addition" shall be read simply as "addition".
• FIG. 34 is a view of one unit U extracted from the arrangement example of the light-receiving pixels P in the pixel array 11. FIGS. 35A and 35B are diagrams for explaining the same-color pixel addition process performed on the pixel values read from the unit U for image plane phase difference detection.
  • FIG. 36 is a diagram showing the left pixel total value and the right pixel total value finally obtained based on the pixel values read out from the unit U.
• Each of the pixel blocks 100R, 100Gr, 100Gb, and 100B has a total of nine light-receiving pixels P arranged in 3×3 pixels.
• A total of four light-receiving pixels P of 2×2 pixels located at the lower center of the unit U are used as light-receiving pixels for image plane phase difference detection (hereinafter also called image plane detection pixels).
• Specifically, two light-receiving pixels PB1 and PB2 positioned at the lower right of the pixel block 100B and two light-receiving pixels PGb1 and PGb2 positioned at the lower left of the pixel block 100Gb, adjacent to the light-receiving pixels PB1 and PB2, respectively, are used as image plane detection pixels. Therefore, in this modification, the light-receiving pixels PB1 and PGb1 form a pixel pair 90-1 sharing one on-chip lens 101, and the light-receiving pixels PB2 and PGb2 form a pixel pair 90-2 sharing one on-chip lens 101.
• The same-color pixel addition unit 171 calculates the left pixel total value BL by adding the pixel values BL1 and BL2 read from the light-receiving pixels PB1 and PB2, which are the left pixels, and calculates the right pixel total value GbR by adding the pixel values GbR1 and GbR2 read from the light-receiving pixels PGb1 and PGb2, which are the right pixels.
• As a result, the same-color pixel addition unit 171 outputs an image signal Spic1 in which each unit U is composed of a total pixel value pair 310 of the left pixel total value BL and the right pixel total value GbR.
• At this time, the memory 175 may store a sensitivity difference correction coefficient table that takes into account the difference in light sensitivity between the light-receiving pixels PB1 and PB2 and the light-receiving pixels PGb1 and PGb2. As a result, even if there is a difference in light sensitivity between the light-receiving pixels PB1 and PB2 and the light-receiving pixels PGb1 and PGb2, phase difference information can be generated based on the pixel values read out from these pixels (see the sketch below).
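• As a sketch under an assumed data layout (the actual table format in the memory 175 is not specified by the text, and the coefficient values are placeholders), applying such a sensitivity difference correction coefficient table could look like:

    # Hypothetical table: per-position gains compensating the light sensitivity
    # difference between the PB and PGb image plane detection pixels.
    correction_table = {
        ("PB", "left"): 1.00,    # placeholder coefficient
        ("PGb", "right"): 0.97,  # placeholder coefficient
    }

    def correct_pair(left_total, right_total):
        left = left_total * correction_table[("PB", "left")]
        right = right_total * correction_table[("PGb", "right")]
        return left, right

    corrected = correct_pair(11.0, 12.5)  # e.g. (BL, GbR) of the total pixel value pair 310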
• Here, the pixel arrangement pattern of the pixel array 11 is the DecaOcta arrangement exemplified in the first embodiment, and a case of detecting an image plane phase difference using only a part of the pixel values will be exemplified.
  • FIG. 37 is a diagram showing the left pixel total value and the right pixel total value finally obtained based on the pixel values read out from the unit U.
• The same-color pixel addition unit 171 calculates a left pixel total value GbL and a right pixel total value GbR using the pixel values (pixel values GbL1 to GbL5 and GbR1 to GbR5 in this example) read from one of the pixel blocks 100R, 100Gr, 100Gb, and 100B included in the unit U (the pixel block 100Gb in this example), and outputs an image signal Spic1 having, for each unit U, a total pixel value pair 410 based on these values.
• In this modification, the pixel block 100Gb is used as the pixel block for detecting the image plane phase difference, but another pixel block may be used as well. Further, the case where all the light-receiving pixels P constitute pixel pairs 90 with adjacent pixels is illustrated, but the present invention is not limited to this; each light-receiving pixel P not used for detecting the image plane phase difference may be provided with a separate on-chip lens 101 that does not span adjacent pixels.
  • FIG. 38 is a diagram showing the left pixel total value and the right pixel total value finally obtained based on the pixel values read from the unit U.
• In this case, the read pixel values (pixel values GbL1 and GbR1 in this example) are used as they are as the left pixel total value GbL1 and the right pixel total value GbR1.
• In other words, the same-color pixel addition processing in the same-color pixel addition unit 171 may be bypassed. That is, the same-color pixel addition unit 171 may extract the pixel values (GbL1 and GbR1) of the target pixel pair 90 from the input image signal Spic0 and output an image signal Spic1 having the total pixel value pair 510 for each unit U (see the sketch below).
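• A minimal sketch of this bypass (the Spic0 layout here is an assumption): the pair's two pixel values pass through unchanged as the total pixel value pair 510.

    # Assumed layout: spic0 maps a (block, pair-index) key to a (left, right) tuple.
    def bypass_pair(spic0, key=("Gb", 0)):
        left, right = spic0[key]
        return left, right  # used directly as GbL1 and GbR1

    spic0 = {("Gb", 0): (10.0, 11.0)}
    pair_510 = bypass_pair(spic0)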
• In this modification, the pixel pair 90 used for detecting the image plane phase difference is exemplified as one pixel pair 90 in the pixel block 100Gb, but pixel pairs 90 in the other pixel blocks 100R, 100Gr, and 100B may be used for detecting the image plane phase difference instead.
• Also, the case where all the light-receiving pixels P constitute pixel pairs 90 with adjacent pixels is illustrated, but the present invention is not limited to this; each light-receiving pixel P not used for detecting the image plane phase difference may be provided with a separate on-chip lens 101 that does not span adjacent pixels.
  • the pixel arrangement pattern of the pixel array 11 is the DecaOcta arrangement illustrated in the first embodiment.
• In this modified example, a case where the imaging device 1 operates in the HDR mode is illustrated.
  • FIG. 39 is a diagram showing the left pixel total value and the right pixel total value finally obtained based on the pixel values read from the unit U.
• In this modification, one of the pixel blocks 100R, 100Gr, 100Gb, and 100B included in the unit U is used as a pixel block for image plane phase difference detection.
• GbL1L to GbL3L denote the left pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGb read from each left pixel.
• GbR1L to GbR3L denote the right pixels in the pixel pairs 90 of the low-luminance pixels PL and the pixel values VGb read from each right pixel.
• GbL1H to GbL2H denote the left pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGb read from each left pixel.
• GbR1H to GbR2H denote the right pixels in the pixel pairs 90 of the high-luminance pixels PH and the pixel values VGb read from each right pixel.
• The same-color pixel addition unit 171 adds the pixel values GbL1H and GbL2H read from the left pixels P, among the pixel values VGb read from the high-luminance pixels PH in the pixel block 100Gb, to calculate the left pixel total value GbLH of the high-luminance pixels PH in the pixel block 100Gb. Similarly, the same-color pixel addition unit 171 adds the pixel values GbR1H and GbR2H read from the right pixels P to calculate the right pixel total value GbRH of the high-luminance pixels PH in the pixel block 100Gb.
• The same-color pixel addition unit 171 also adds the pixel values GbL1L, GbL2L, and GbL3L read from the left pixels P, among the pixel values VGb read from the low-luminance pixels PL in the pixel block 100Gb, to calculate the left pixel total value GbLL of the low-luminance pixels PL in the pixel block 100Gb.
• Similarly, the same-color pixel addition unit 171 adds the pixel values GbR1L, GbR2L, and GbR3L read from the right pixels P to calculate the right pixel total value GbRL of the low-luminance pixels PL in the pixel block 100Gb.
• As a result, the same-color pixel addition unit 171 outputs an image signal Spic1 having, for each unit U, a total pixel value pair 610 composed of a total pixel value pair 111H of the left pixel total value GbLH and the right pixel total value GbRH of the high-luminance pixels PH and a total pixel value pair 111L of the left pixel total value GbLL and the right pixel total value GbRL of the low-luminance pixels PL.
• In this modification, the pixel block 100Gb is used as the pixel block for detecting the image plane phase difference, but another pixel block may be used as well. Further, the case where all the light-receiving pixels P constitute pixel pairs 90 with adjacent pixels is illustrated, but the present invention is not limited to this; each light-receiving pixel P not used for detecting the image plane phase difference may be provided with a separate on-chip lens 101 that does not span adjacent pixels.
• In the above, one unit U includes the pixel blocks 100Gr and 100Gb each including ten light-receiving pixels P and the pixel blocks 100R and 100B each including eight light-receiving pixels P, but it is not limited to this.
  • FIG. 40 shows a configuration example of the imaging device 2 according to this embodiment.
  • the imaging device 2 includes a pixel array 31 , a driving section 32 and a signal processing section 35 .
• FIG. 41 shows an example of the arrangement of the light-receiving pixels P in the pixel array 31.
• The pixel array 31 has multiple pixel blocks 300 and multiple lenses 101.
  • the plurality of pixel blocks 300 includes pixel blocks 300R, 300Gr, 300Gb, and 300B.
  • a plurality of light-receiving pixels P are arranged in units (units U) of four pixel blocks 300 (pixel blocks 300R, 300Gr, 300Gb, and 300B).
• The pixel block 300R has eight light-receiving pixels P (light-receiving pixels PR) including red (R) color filters 55, and the pixel block 300Gr has eight light-receiving pixels P (light-receiving pixels PGr) including green (G) color filters 55.
• The pixel block 300Gb has eight light-receiving pixels P (light-receiving pixels PGb) including green (G) color filters 55, and the pixel block 300B has eight light-receiving pixels P (light-receiving pixels PB) including blue (B) color filters 55.
  • the arrangement pattern of the light-receiving pixels PR in the pixel block 300R, the arrangement pattern of the light-receiving pixels PGr in the pixel block 300Gr, the arrangement pattern of the light-receiving pixels PGb in the pixel block 300Gb, and the arrangement pattern of the light-receiving pixels PB in the pixel block 300B are the same.
  • the pixel block 300Gr is located at the upper left
  • the pixel block 300R is located at the upper right
  • the pixel block 300B is located at the lower left
  • the pixel block 300Gb is located at the lower right.
  • the pixel blocks 300R, 300Gr, 300Gb, and 300B are arranged in a so-called Bayer arrangement with the pixel block 300 as a unit.
  • a plurality of light-receiving pixels P are arranged side by side in an oblique direction. That is, in the pixel array 11 (FIG. 2) according to the first embodiment described above, the plurality of light receiving pixels P are arranged side by side in the X direction and the Y direction, but in the pixel array 31 (FIG. 41) , a plurality of light-receiving pixels P are arranged in parallel in an oblique direction. As a result, the two light receiving pixels P in the pixel pair 90 are also arranged side by side in an oblique direction. A lens 101 is provided above the pixel pair 90 .
• In the following, the oblique direction toward the upper left is simply called the oblique direction, and the direction perpendicular to the oblique direction is called the reverse oblique direction.
• In each pixel pair 90, the light-receiving pixel P positioned on the diagonally upper-left side is also referred to as the left pixel, and the light-receiving pixel P positioned on the diagonally lower-right side is also referred to as the right pixel.
• The configuration of the pixel block 300 (pixel blocks 300R, 300Gr, 300Gb, 300B) is the same as that of the pixel block 100R (FIG. 5) according to the above-described first embodiment, and includes eight photodiodes PD, eight transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL. The eight photodiodes PD and eight transistors TRG correspond to the eight light-receiving pixels P included in the pixel block 300, respectively.
  • FIG. 42 shows a wiring example of pixel blocks 300R, 300Gr, 300Gb, and 300B.
  • the plurality of pixel blocks 300 are drawn apart from each other for convenience of explanation.
• Like the pixel array 11 according to the first embodiment described above, the pixel array 31 has a plurality of control lines TRGL, a plurality of control lines RSTL, a plurality of control lines SELL, and a plurality of signal lines VSL.
  • the control line TRGL extends in the X direction and has one end connected to the driving section 32 .
  • the control line RSTL extends in the X direction and has one end connected to the driving section 32 .
  • the control line SELL extends in the X direction and has one end connected to the driving section 32 .
  • the signal line VSL extends in the Y direction and has one end connected to the readout section 20 .
  • the pixel blocks 300Gr and pixel blocks 300R belonging to the same row and arranged in the X direction are connected to the same eight control lines TRGL. Although not shown, the pixel blocks 300Gr and 300R belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL. Pixel blocks 300Gr belonging to the same column and arranged in the Y direction are connected to one signal line VSL. Similarly, pixel blocks 300R belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • the pixel blocks 300B and 300Gb belonging to the same row and arranged in the X direction are connected to the same eight control lines TRGL.
  • the pixel blocks 300B and 300Gb belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL.
  • Pixel blocks 300B belonging to the same column and arranged in the Y direction are connected to one signal line VSL.
  • the pixel blocks 300Gb belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • the line density of the control lines TRGL can be reduced more than in the pixel array 11 according to the first embodiment.
  • the line density of the control lines TRGL is four per side of the light receiving pixel P.
  • the drive unit 32 ( FIG. 40 ) is configured to drive the plurality of light receiving pixels P in the pixel array 31 based on instructions from the imaging control unit 18 .
  • the signal processing unit 35 is configured to perform predetermined signal processing based on the image signal Spic0 and an instruction from the imaging control unit 18 to generate the image signal Spic.
  • the signal processor 35 has an image data generator 36 and a phase difference data generator 37 .
  • the phase difference data generation section 37 may have, for example, the same configuration as the phase difference data generation section 17 described with reference to FIG. 11 in the first embodiment.
• The imaging device 2 selects the first imaging mode M when the zoom magnification is less than 2, selects the second imaging mode M when the zoom magnification is 2 or more and less than 2√2, and can select the third imaging mode M when the zoom magnification is 2√2 or more (a selection sketch follows below).
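• A small Python sketch of this mode selection, assuming the thresholds given above (2 and 2√2, the latter being a reconstruction of a mis-encoded character in the source):

    import math

    def select_imaging_mode(zoom: float) -> str:
        # Thresholds follow the description above; 2*sqrt(2) is an assumed reading.
        if zoom < 2.0:
            return "first imaging mode M"
        if zoom < 2.0 * math.sqrt(2.0):
            return "second imaging mode M"
        return "third imaging mode M"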
• The light-receiving pixel PGr associated with green may detect not only green light but also red light LR and blue light LB leaking from adjacent pixel blocks; similarly, the light-receiving pixel PGb associated with green may detect not only green light but also red light LR and blue light LB.
  • the arrangement position of the pixel block 300Gr and the arrangement position of the pixel block 300Gb are symmetrical with respect to the pixel block 300R and similarly with respect to the pixel block 300B. Therefore, for example, the amount of red light LR leaking from the pixel block 300R to the pixel block 300Gr is substantially the same as the amount of red light LR leaking from the pixel block 300R to the pixel block 300Gb.
• Also, the amount of blue light LB leaking from the pixel block 300B to the pixel block 300Gr is substantially the same as the amount of blue light LB leaking from the pixel block 300B to the pixel block 300Gb. Therefore, for example, regardless of the balance between the light intensity of the red light LR and the light intensity of the blue light LB, it is possible to reduce the possibility of a difference occurring between the amounts of light received by the light-receiving pixels PGr and PGb.
• In this way, the image signal Spic0 input to the phase difference data generation unit 37 is converted into an image signal Spic1 of one left pixel and one right pixel (corresponding to the left pixel total value and the right pixel total value) for each of the pixel blocks 300R, 300Gr, 300Gb, and 300B.
  • the resolution of the image signal Spic1 input to the sensitivity difference correction unit 172 is unified regardless of the resolution of the image signal Spic0 (whether or not there is binning), in other words, regardless of the mode being executed by the imaging device 2. Therefore, the sensitivity difference correction processing according to this embodiment can be applied to substantially all modes.
• Moreover, since the resolution of the image signal Spic1 input to the sensitivity difference correction unit 172 can be reduced to units of the pixel blocks 300R, 300Gr, 300Gb, and 300B at a minimum, regardless of the number of pixels in each of the pixel blocks 300R, 300Gr, 300Gb, and 300B in the image signal Spic0, it is possible to reduce the circuit scale by reducing the number of parallel channels for the sensitivity difference correction processing and to reduce the capacity of the memory 175 for storing the sensitivity difference correction coefficient table. As a result, it becomes possible to suppress an increase in device size.
• Furthermore, since the image signal Spic1 can be output with a uniform resolution regardless of the pixel arrangement of the input image signal Spic0, this embodiment can also be applied to imaging devices with any pixel array pattern, such as patterns in which the pixel blocks 300R, 300Gr, 300Gb, and 300B are not square or rectangular, or patterns in which the numbers of pixels constituting the pixel blocks 300R, 300Gr, 300Gb, and 300B are not uniform.
• Not only when a plurality of stages of binning modes are provided, but also when it is possible to switch between an SDR (Standard Dynamic Range) mode using a normal dynamic range and an HDR mode, the pixel arrangement pattern of the image signal Spic0 input to the sensitivity difference correction processing can be uniformly converted into a simple arrangement pattern, so there is also the advantage that the lightened sensitivity difference correction processing according to this embodiment can be applied to almost all modes.
• In the above, the pixel pairs 90 in the pixel array 31 are all composed of two light-receiving pixels P adjacent in the oblique direction, but part or all of the pixel pairs 90 may be composed of two light-receiving pixels P adjacent in the reverse oblique direction.
  • An imaging device 2A includes a pixel array 31A, a driving section 32A, and a signal processing section 35A, like the imaging device 2 (FIG. 40).
  • FIG. 44 shows an example of the arrangement of the light receiving pixels P in the pixel array 31A.
  • the pixel array 31A has a plurality of pixel blocks 300 and a plurality of lenses 101, like the pixel array 31 (FIG. 41).
  • the plurality of pixel blocks 300 includes pixel blocks 300R1, 300R2, 300Gr1, 300Gr2, 300Gb1, 300Gb2, 300B1, and 300B2.
• The pixel block 300 (pixel blocks 300R1, 300R2, 300Gr1, 300Gr2, 300Gb1, 300Gb2, 300B1, 300B2) has four light-receiving pixels P.
  • each of the pixel blocks 300R1 and 300R2 has four light-receiving pixels PR
  • each of the pixel blocks 300Gr1 and 300Gr2 has four light-receiving pixels PGr
• each of the pixel blocks 300Gb1 and 300Gb2 has four light-receiving pixels PGb
  • each of the pixel blocks 300B1 and 300B2 has four light receiving pixels PB.
  • the pixel block 300 has four photodiodes PD, four transistors TRG, a floating diffusion FD, and transistors RST, AMP, and SEL.
  • Four photodiodes PD and four transistors TRG correspond to four light receiving pixels P included in the pixel block 300, respectively.
  • FIG. 45 shows a wiring example of pixel blocks 300R1, 300R2, 300Gr1, 300Gr2, 300Gb1, 300Gb2, 300B1, and 300B2.
  • the plurality of pixel blocks 300 are drawn apart from each other for convenience of explanation.
  • the pixel blocks 300Gr1, 300Gr2, 300R1, and 300R2 belonging to the same row and arranged in the X direction are connected to the same four control lines TRGL.
  • the pixel blocks 300Gr1, 300Gr2, 300R1, 300R2 belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL.
  • Pixel blocks 300Gr1 belonging to the same column aligned in the Y direction are connected to one signal line VSL
  • pixel blocks 300Gr2 belonging to the same column aligned in the Y direction are connected to one signal line VSL.
  • the pixel blocks 300R1 belonging to the same column arranged in the Y direction are connected to one signal line VSL
  • the pixel blocks 300R2 belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • pixel blocks 300B1, 300B2, 300Gb1, and 300Gb2 belonging to the same row and arranged in the X direction are connected to the same four control lines TRGL.
  • the pixel blocks 300B1, 300B2, 300Gb1 and 300Gb2 belonging to the same row and arranged in the X direction are connected to one control line RSTL and one control line SELL.
  • Pixel blocks 300B1 belonging to the same column aligned in the Y direction are connected to one signal line VSL
  • pixel blocks 300B2 belonging to the same column aligned in the Y direction are connected to one signal line VSL.
  • the pixel blocks 300Gb1 belonging to the same column arranged in the Y direction are connected to one signal line VSL
  • the pixel blocks 300Gb2 belonging to the same column arranged in the Y direction are connected to one signal line VSL.
  • the line density of the control lines TRGL can be reduced more than in the pixel array 31 according to the modification.
• The drive unit 32A is configured to drive the plurality of light-receiving pixels P in the pixel array 31A based on instructions from the imaging control unit 18.
  • the signal processing unit 35A is configured to generate the image signal Spic by performing predetermined signal processing based on the image signal Spic0 and an instruction from the imaging control unit 18 .
  • the signal processor 35A has an image data generator 36A and a phase difference data generator 37A.
  • the phase difference data generator 37A may have, for example, the same configuration as the phase difference data generator 17 described with reference to FIG. 11 in the first embodiment.
• In the above, the two light-receiving pixels P in the pixel pair 90 are arranged side by side in an oblique direction, but the present invention is not limited to this, and they may be arranged side by side in another direction.
  • FIG. 46 shows a usage example (an example of an electronic device) of the imaging apparatuses 1 and 2 according to the above-described embodiments.
  • the imaging devices 1 and 2 described above can be used in various cases for sensing light such as visible light, infrared light, ultraviolet light, and X-rays, for example, as follows.
  • ⁇ Devices that capture images for viewing purposes, such as digital cameras and mobile devices with camera functions
  • Devices used for transportation such as in-vehicle sensors that capture images behind, around, and inside the vehicle, surveillance cameras that monitor running vehicles and roads, and ranging sensors that measure the distance between vehicles.
• Devices used in home appliances such as televisions, refrigerators, and air conditioners, which capture images of user gestures and operate the appliances according to those gestures
• Devices used for medical and health care, such as endoscopes and devices that perform angiography by receiving infrared light
• Devices used for security, such as surveillance cameras for crime prevention and cameras for personal authentication
• Devices used for beauty, such as skin measuring instruments for photographing the skin and microscopes for photographing the scalp
• Devices used for sports, such as action cameras and wearable cameras for sports applications
• Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
  • the technology according to the present disclosure (this technology) can be applied to various products.
• The technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as automobiles, electric vehicles, hybrid electric vehicles, motorcycles, bicycles, personal mobility devices, airplanes, drones, ships, and robots.
  • FIG. 47 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • a vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • vehicle control system 12000 includes drive system control unit 12010 , body system control unit 12020 , vehicle exterior information detection unit 12030 , vehicle interior information detection unit 12040 , and integrated control unit 12050 .
• As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
• For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
  • the body system control unit 12020 controls the operation of various devices equipped on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, winkers or fog lamps.
  • the body system control unit 12020 can receive radio waves transmitted from a portable device that substitutes for a key or signals from various switches.
  • the body system control unit 12020 receives the input of these radio waves or signals and controls the door lock device, power window device, lamps, etc. of the vehicle.
  • the vehicle exterior information detection unit 12030 detects information outside the vehicle in which the vehicle control system 12000 is installed.
  • the vehicle exterior information detection unit 12030 is connected with an imaging section 12031 .
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the exterior of the vehicle, and receives the captured image.
• Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing for detecting people, vehicles, obstacles, signs, characters on the road surface, and the like, or distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of received light.
  • the imaging unit 12031 can output the electric signal as an image, and can also output it as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared rays.
  • the in-vehicle information detection unit 12040 detects in-vehicle information.
  • the in-vehicle information detection unit 12040 is connected to, for example, a driver state detection section 12041 that detects the state of the driver.
• The driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
• The microcomputer 12051 can calculate control target values for the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
• For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or shock mitigation, follow-up driving based on inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, and vehicle lane deviation warning.
• The microcomputer 12051 can also perform cooperative control for the purpose of automatic driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information about the vehicle surroundings acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
• The microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
• For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, and can perform cooperative control aimed at anti-glare, such as switching from high beam to low beam.
• The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the passengers of the vehicle or the outside of the vehicle of information.
  • an audio speaker 12061, a display unit 12062 and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
  • FIG. 48 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose of the vehicle 12100, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior, for example.
  • An image pickup unit 12101 provided in the front nose and an image pickup unit 12105 provided above the windshield in the passenger compartment mainly acquire images in front of the vehicle 12100 .
  • Imaging units 12102 and 12103 provided in the side mirrors mainly acquire side images of the vehicle 12100 .
  • An imaging unit 12104 provided in the rear bumper or back door mainly acquires an image behind the vehicle 12100 .
  • Forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 48 shows an example of the imaging range of the imaging units 12101 to 12104.
• The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
• For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 determines the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the change in this distance over time (the relative velocity with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, the closest three-dimensional object on the course of the vehicle 12100 that travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured from the preceding vehicle and perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, cooperative control can be performed for the purpose of automatic driving, in which the vehicle travels autonomously without relying on the driver's operation (a simplified extraction sketch follows below).
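• As a simplified, illustrative Python sketch of this preceding-vehicle extraction (object fields and thresholds are assumptions, not the patent's implementation):

    from dataclasses import dataclass

    @dataclass
    class Object3D:
        distance_m: float   # distance within the imaging ranges 12111 to 12114
        speed_kmh: float    # speed in substantially the same direction as the vehicle
        on_course: bool     # whether the object lies on the course of the vehicle 12100

    def preceding_vehicle(objects, min_speed_kmh=0.0):
        # The closest on-course object traveling at the predetermined speed or more.
        candidates = [o for o in objects if o.on_course and o.speed_kmh >= min_speed_kmh]
        return min(candidates, key=lambda o: o.distance_m, default=None)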
• As another example, the microcomputer 12051 can classify three-dimensional object data into motorcycles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into those that are visible to the driver of the vehicle 12100 and those that are difficult to see. Then, the microcomputer 12051 judges the collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can output an alarm to the driver via the audio speaker 12061 and the display unit 12062, or perform forced deceleration and avoidance steering via the drive system control unit 12010, thereby providing driving support for collision avoidance.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not the pedestrian exists in the captured images of the imaging units 12101 to 12104 .
• Such recognition of a pedestrian is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
• When a pedestrian is recognized, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating the pedestrian at a desired position.
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
  • An imaging device mounted on a vehicle can improve the image quality of a captured image.
• As a result, the vehicle control system 12000 can realize, with high accuracy, a vehicle collision avoidance or collision mitigation function, a follow-up driving function based on the distance between vehicles, a vehicle speed maintenance driving function, a vehicle collision warning function, a vehicle lane deviation warning function, and the like.
• (1) An imaging device comprising: a pixel array including a plurality of light-receiving pixels; a reading unit that generates an image signal composed of pixel values read from each of the light-receiving pixels; and a signal processing unit that processes the image signal output from the reading unit, wherein the pixel array includes a plurality of pixel pairs each consisting of at least two light-receiving pixels sharing one on-chip lens, and the signal processing unit includes: an addition unit that performs a first addition process of adding pixel values read from first light-receiving pixels in each of at least two pixel pairs among the plurality of pixel pairs and adding pixel values read from second light-receiving pixels, different from the first light-receiving pixels, in each of the pixel pairs; and a generation unit that generates information about an image plane phase difference between the first light-receiving pixels and the second light-receiving pixels based on a plurality of pixel values included in the image signal after the first addition process.
• (2) The plurality of light-recei
• (3) The imaging device, wherein the generation unit includes a correction unit that corrects a plurality of pixel values included in the image signal after the first addition process based on a sensitivity difference between the plurality of light-receiving pixels.
• (4) The imaging device according to any one of (1) to (3), wherein the plurality of light-receiving pixels are arranged in a matrix, and at least one of the at least two pixel pairs includes the first light-receiving pixel and the second light-receiving pixel that are adjacent in the row direction.
• (5) The imaging device according to any one of (1) to (4), wherein the plurality of light-receiving pixels are arranged in a matrix, and at least one of the at least two pixel pairs includes the first light-receiving pixel and the second light-receiving pixel that are adjacent in the column direction.
  • (6) The imaging device according to any one of (1) to (5), wherein: the plurality of light receiving pixels are arranged in a matrix; each of the on-chip lenses is shared by four light receiving pixels arranged in 2 × 2; the at least two pixel pairs include, among the four light receiving pixels sharing the one on-chip lens, a first pixel pair consisting of the first light receiving pixel and the second light receiving pixel adjacent in the row direction, and a second pixel pair consisting of a third light receiving pixel and a fourth light receiving pixel adjacent in the column direction; the addition unit, in addition to the first addition process of adding the pixel values read from the first light receiving pixels in each of the first pixel pairs and adding the pixel values read from the second light receiving pixels in each of the first pixel pairs, further executes a second addition process of adding the pixel values read from the third light receiving pixels in each of the second pixel pairs and adding the pixel values read from the fourth light receiving pixels in each of the second pixel pairs; and the generation unit generates the information about the image plane phase difference between the first light receiving pixels and the second light receiving pixels based on the plurality of pixel values included in the image signal after the first addition process, and generates information about an image plane phase difference between the third light receiving pixels and the fourth light receiving pixels based on the plurality of pixel values included in the image signal after the second addition process.
  • (7) The imaging device according to any one of (1) to (6), wherein: the plurality of light receiving pixels include light receiving pixels operating with a sensitivity of a first dynamic range and light receiving pixels operating with a sensitivity of a second dynamic range wider than the first dynamic range; the at least two pixel pairs include a third pixel pair consisting of the first light receiving pixel and the second light receiving pixel that operate with the sensitivity of the first dynamic range, and a pixel pair that operates with the sensitivity of the second dynamic range; the first addition process includes a third addition process of adding the pixel values read from the first light receiving pixels in each of the third pixel pairs and adding the pixel values read from the second light receiving pixels in each of the third pixel pairs, and a fourth addition process of adding the corresponding pixel values read from the pixel pairs operating with the sensitivity of the second dynamic range; and the generation unit combines a first total pixel value calculated by the third addition process with a second total pixel value calculated by the fourth addition process, and generates the information about the image plane phase difference between the first light receiving pixels and the second light receiving pixels based on the combined plurality of pixel values.
  • (8) The imaging device according to any one of (1) to (7), wherein the plurality of light receiving pixels are grouped into a plurality of pixel blocks each including two or more light receiving pixels, and the addition unit adds the pixel values of the first light receiving pixels and adds the pixel values of the second light receiving pixels within the same pixel block.
  • (9) The imaging device according to (8), wherein: at least one of the plurality of pixel blocks includes two or more of the pixel pairs; the reading unit reads one or more pixel values from the first light receiving pixels in the two or more pixel pairs included in the at least one pixel block, and reads one or more pixel values from the second light receiving pixels in the two or more pixel pairs; and the addition unit performs, for each pixel block, the first addition process of adding the one or more pixel values read from the first light receiving pixels in the two or more pixel pairs and adding the one or more pixel values read from the second light receiving pixels in the two or more pixel pairs.
  • (10) The pixel array includes a plurality of unit units each serving as a repeating unit, each of the unit units being composed of two or more of the pixel blocks, and each of the pixel blocks being composed of the two or more light receiving pixels that receive light in the same wavelength band.
  • (11) Each of the unit units is composed of two pixel blocks each consisting of ten light receiving pixels and two pixel blocks each consisting of eight light receiving pixels.
  • (12) Each of the unit units is composed of four pixel blocks each consisting of 16 light receiving pixels arranged in 4 × 4.
  • (13) Each of the unit units is composed of four pixel blocks each consisting of nine light receiving pixels arranged in 3 × 3.
  • (14) The imaging device according to any one of (8) to (13), wherein the addition unit performs the first addition process of adding the pixel values read from the first light receiving pixels in at least one pixel block among the plurality of pixel blocks and adding the pixel values read from the second light receiving pixels in the at least one pixel block.
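The processing of configurations (1), (3), and (6) can be illustrated with a short sketch. The following Python/NumPy fragment is a minimal illustration only: it assumes a simplified data layout (per-block arrays of left and right pixel values), a single per-side gain as the sensitivity-difference model of the corrector (172), and a sum-of-absolute-differences search as the detector behind the phase difference information generator (173). The function names and all numeric values are assumptions of this sketch, not details taken from the disclosure.

    # Minimal sketch of the first addition process and the subsequent
    # generation of image plane phase difference information.
    import numpy as np

    def first_addition(left_px: np.ndarray, right_px: np.ndarray):
        """Sum the first (left) and second (right) pixel values of the pixel
        pairs within each same-color pixel block, as the adder (171) does.
        left_px, right_px: shape (n_blocks, pairs_per_block)."""
        return left_px.sum(axis=1), right_px.sum(axis=1)

    def correct_sensitivity(left_sum, right_sum, gain_left=1.0, gain_right=1.0):
        """Configuration (3): correct the pixel values *after* the addition
        for the sensitivity difference (a simple per-side gain is assumed)."""
        return left_sum * gain_left, right_sum * gain_right

    def image_plane_phase_difference(left_profile, right_profile, max_shift=8):
        """Estimate the phase difference as the integer block shift that
        minimizes the sum of absolute differences between the two profiles."""
        best_shift, best_cost = 0, np.inf
        n = len(left_profile)
        for s in range(-max_shift, max_shift + 1):
            a = left_profile[max(0, s): n + min(0, s)]
            b = right_profile[max(0, -s): n + min(0, -s)]
            cost = np.abs(a - b).mean()
            if cost < best_cost:
                best_shift, best_cost = s, cost
        return best_shift

    # Usage on synthetic data: 16 pixel blocks along a row, 4 pixel pairs each.
    rng = np.random.default_rng(0)
    scene = rng.uniform(100.0, 200.0, size=(16, 4))
    left = np.roll(scene, 1, axis=0)   # defocus displaces the left image...
    right = scene * 0.9                # ...and the right side is less sensitive
    l_sum, r_sum = first_addition(left, right)
    l_sum, r_sum = correct_sensitivity(l_sum, r_sum, gain_right=1.0 / 0.9)
    print(image_plane_phase_difference(l_sum, r_sum))  # prints 1 (one block)

Configuration (6) would run this flow twice, once for the row-adjacent pairs and once for the column-adjacent pairs, while configuration (7) would run the addition separately for the pixel pairs of each dynamic range and combine the resulting first and second total pixel values before detection.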
  • Reference Signs List: 1, 2 imaging device; 11, 31 pixel array; 12, 32 driving section; 13 reference signal generating section; 15, 35 signal processing section; 16, 36 image data generating section; 17, 37 phase difference data generating section; 18 imaging control section; 20 reading section; 21 constant current source; 22, 23 capacitive element; 24 comparison circuit; 25 counter; 26 latch; 51 semiconductor substrate; 52 semiconductor region; 53 insulating layer; 54 multilayer wiring layer; 55 color filter; 56 light shielding film; 90, 90-1, 90-2, 90X, 90Y pixel pair; 100B, 100Gb, 100Gr, 100R, 300B, 300B1, 300B2, 300Gb, 300Gb1, 300Gb2, 300Gr, 300Gr1, 300Gr2, 300R, 300R1, 300R2 pixel block; 101 on-chip lens; 111B, 111Gb, 111Gr, 1111, 511, 0111, 031 total pixel value pair; 171 same color pixel adder; 172 sensitivity difference corrector; 173 phase difference information generator; 174 phase difference detector; 175 memory; DF phase difference data; DP image data; FD floating diffusion; LR, LB light; P …

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

The present invention prevents an increase in the size of a device. According to one embodiment of the present invention, an imaging device comprises: a pixel array (11) provided with a plurality of light receiving pixels; a reading unit (20) for generating an image signal containing pixel values respectively read from the light receiving pixels; and a signal processor (15) for processing the image signal output by the reading unit. The pixel array contains a plurality of pixel pairs (90) each having at least two light receiving pixels that share one on-chip lens. The signal processor comprises: an adder (171) that executes a first addition process of adding pixel values read from a first light receiving pixel in each of at least two pixel pairs among the plurality of pixel pairs and adding pixel values read from a second light receiving pixel, different from the first, in each of those pixel pairs; and a generator (173) that generates information relating to an image plane phase difference between the first and second light receiving pixels on the basis of a plurality of pixel values included in the image signal after the first addition process.
PCT/JP2022/013825 2021-08-17 2022-03-24 Imaging device and electronic apparatus incorporating the same WO2023021774A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021132844A JP2023027618A (ja) 2021-08-17 2021-08-17 Imaging device and electronic apparatus
JP2021-132844 2021-08-17

Publications (1)

Publication Number Publication Date
WO2023021774A1 true WO2023021774A1 (fr) 2023-02-23

Family

ID=85240316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/013825 WO2023021774A1 (fr) Imaging device and electronic apparatus incorporating the same

Country Status (2)

Country Link
JP (1) JP2023027618A (fr)
WO (1) WO2023021774A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013100036A1 (fr) * 2011-12-27 2013-07-04 富士フイルム株式会社 Élément d'imagerie en couleurs
JP2015115344A (ja) * 2013-12-09 2015-06-22 株式会社東芝 固体撮像装置
JP2018019241A (ja) * 2016-07-27 2018-02-01 富士通株式会社 撮像装置
JP2020098968A (ja) * 2018-12-17 2020-06-25 オリンパス株式会社 撮像素子、撮像装置、および撮像方法
WO2020170565A1 (fr) * 2019-02-19 2020-08-27 ソニーセミコンダクタソリューションズ株式会社 Procédé de traitement de signaux et dispositif d'imagerie


Also Published As

Publication number Publication date
JP2023027618A (ja) 2023-03-02

Similar Documents

Publication Publication Date Title
US11888004B2 (en) Imaging apparatus having phase difference detection pixels receiving light transmitted through a same color filter
CN110383481B (zh) Solid-state imaging device and electronic device
JP7370413B2 (ja) Solid-state imaging device and electronic apparatus
US11336860B2 Solid-state image capturing device, method of driving solid-state image capturing device, and electronic apparatus
US20230402475A1 Imaging apparatus and electronic device
US20210385394A1 Solid-state imaging apparatus and electronic
US20230387155A1 Imaging apparatus
WO2019207927A1 (fr) Array antenna, solid-state imaging device, and electronic apparatus
WO2023021774A1 (fr) Imaging device and electronic apparatus incorporating the same
WO2023243222A1 (fr) Imaging device
WO2023132151A1 (fr) Image capturing element and electronic device
WO2023074177A1 (fr) Imaging device
WO2023286297A1 (fr) Solid-state imaging element, imaging device, and method for controlling solid-state imaging element
WO2023032416A1 (fr) Imaging device
WO2023079840A1 (fr) Imaging device and electronic apparatus
US12003878B2 Imaging device
WO2023021780A1 (fr) Imaging device, electronic apparatus, and information processing method
EP4398592A1 (fr) Imaging device
KR20240087828A (ko) Imaging device

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE