WO2020054184A1 - Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de commande d'élément d'imagerie à semi-conducteurs - Google Patents


Publication number
WO2020054184A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
reset
pixel data
unit
pixel
Prior art date
Application number
PCT/JP2019/026210
Other languages
English (en)
Japanese (ja)
Inventor
拓 永瀬
亮太郎 高田
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2020054184A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith
    • H04N25/76Addressed sensors, e.g. MOS or CMOS sensors
    • H04N25/78Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/616Noise processing, e.g. detecting, correcting, reducing or removing noise involving a correlated sampling function, e.g. correlated double sampling [CDS] or triple sampling

Definitions

  • The present technology relates to a solid-state imaging device, an imaging device, and a method for controlling a solid-state imaging device. More specifically, it relates to a solid-state imaging device that generates a pixel signal representing the difference between a reset level and a signal level, an imaging device, and a method for controlling the solid-state imaging device.
  • a correlated double sampling (CDS) process has been performed in a solid-state imaging device for the purpose of reducing fixed pattern noise and the like.
  • a data processing unit that performs the CDS processing calculates a difference between a reset level at the time of initialization of a pixel and a signal level at the end of exposure of the pixel, and outputs the difference as a pixel signal with reduced fixed pattern noise.
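The difference operation performed by the data processing unit can be sketched in a few lines (a minimal illustration of the CDS principle, not the patent's circuit; the numeric levels are hypothetical):

```python
def cds(reset_level: int, signal_level: int) -> int:
    """Correlated double sampling: subtracting the reset level from the
    signal level cancels the per-pixel offset (fixed pattern noise)."""
    return signal_level - reset_level

# The same 5-count offset appears in both samples of one pixel and cancels:
offset_free = cds(reset_level=100 + 5, signal_level=300 + 5)
assert offset_free == 200
```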
  • In such a solid-state imaging device, an ADC (Analog to Digital Converter) converts these levels into digital data, and the reset level and the signal level from the ADC are transferred to a data processing unit that performs the CDS process (for example, Patent Reference 1).
  • The present technology has been developed in view of such a situation, and has an object of reducing the number of wirings in the circuit that performs the CDS processing in a solid-state imaging device.
  • A first aspect of the present technology is a solid-state imaging device, and a method for controlling the same, the device including: a data output unit that repeatedly and alternately outputs predetermined reset data and signal data corresponding to an exposure amount; a pair of reset data holding units that alternately hold the repeatedly output reset data; a pixel data generation unit that repeats a process of generating, as pixel data, the difference between the reset data held in one of the pair of reset data holding units and the signal data output after that reset data; a pair of pixel data holding units that alternately hold the repeatedly generated pixel data; and an output-side selection unit that alternately selects the pair of pixel data holding units and outputs the pixel data held in the selected pixel data holding unit. This brings about the effect that pixel data is repeatedly generated from the reset data and the signal data alternately held in the pair of reset data holding units.
  • the data output unit may start outputting the reset data while outputting the signal data. This brings about an effect that the reading speed of the image data is increased.
  • an input-side selection unit for alternately selecting the pair of reset data holding units and supplying the reset data held in the selected reset data holding unit to the pixel data generation unit. It may be further provided. This brings about an effect that pixel data is generated from reset data and signal data read from a selected one of the pair of reset data holding units.
  • In the first aspect, the pixel data generation unit may include a pair of pixel data generation circuits. One of the pair may output, as the pixel data, the difference between the reset data held in one of the pair of reset data holding units and the signal data output after that reset data to one of the pair of pixel data holding units, and the other may output, as the pixel data, the difference between the reset data held in the other reset data holding unit and the signal data output after that reset data to the other of the pair of pixel data holding units. This brings about the effect that the wiring distance between the reset data holding unit and the pixel data generation circuit is shortened.
  • In the first aspect, a pixel that generates a predetermined reset level and a signal level corresponding to the exposure amount, and an analog-to-digital conversion unit that alternately performs a process of converting the reset level into a digital signal and holding it as the reset data and a process of converting the signal level into a digital signal and holding it as the signal data, may further be provided. This brings about the effect that the CDS processing is performed on AD-converted data for each pixel.
  • In the first aspect, the pixel may be disposed on a predetermined light receiving substrate, and the analog-to-digital conversion unit, the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output-side selection unit may be arranged on a predetermined circuit board. This brings about the effect that the circuit scale per substrate is reduced as compared with the case where all the circuits are provided on a single substrate.
  • Alternatively, in the first aspect, the pixel may be disposed on a predetermined light receiving substrate, the analog-to-digital conversion unit may be disposed on a first circuit board, and part of the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output-side selection unit may be disposed on the first circuit board with the rest disposed on a second circuit board.
  • A second aspect of the present technology is an imaging apparatus including: a data output unit that alternately and repeatedly outputs predetermined reset data and signal data corresponding to an exposure amount; a pair of reset data holding units that alternately hold the repeatedly output reset data; a pixel data generation unit that repeats a process of generating, as pixel data, the difference between the reset data held in one of the pair of reset data holding units and the signal data output after that reset data; a pair of pixel data holding units that alternately hold the repeatedly generated pixel data; an output-side selection unit that alternately selects the pair of pixel data holding units and outputs the pixel data held in the selected pixel data holding unit; and a signal processing unit that processes the pixel data. This brings about the effect that pixel data is repeatedly generated and processed from the reset data and signal data alternately held in the pair of reset data holding units.
  • In the second aspect, the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output-side selection unit may be arranged in a solid-state imaging device. This brings about the effect that the CDS processing is performed in the solid-state imaging device.
  • In the second aspect, the data output unit, the pair of reset data holding units, and the pixel data generation unit may be arranged in a solid-state imaging device, and the pair of pixel data holding units and the output-side selection unit may be disposed outside the solid-state imaging device. This brings about the effect that the circuit scale in the solid-state imaging device is reduced.
  • FIG. 1 is a block diagram illustrating a configuration example of an imaging device according to a first embodiment of the present technology.
  • FIG. 2 is a diagram illustrating an example of a stacked structure of the solid-state imaging device according to the first embodiment of the present technology.
  • FIG. 3 is a plan view illustrating a configuration example of a light receiving substrate according to the first embodiment of the present technology.
  • FIG. 4 is a block diagram illustrating a configuration example of a circuit board according to the first embodiment of the present technology.
  • FIG. 5 is a block diagram illustrating a configuration example of a cluster according to the first embodiment of the present technology.
  • FIG. 6 is a perspective view illustrating an example of a connection relationship between a pixel and a circuit in a cluster according to the first embodiment of the present technology.
  • FIG. 7 is a circuit diagram illustrating a configuration example of a P-phase transfer unit according to the first embodiment of the present technology.
  • FIG. 8 is a block diagram illustrating a configuration example of a data processing unit according to the first embodiment of the present technology.
  • FIG. 9 is a timing chart illustrating an example of an operation of the solid-state imaging device according to the first embodiment of the present technology.
  • FIG. 10 is a diagram for describing a circuit control method until a first CDS process is performed according to the first embodiment of the present technology.
  • FIG. 7 is a diagram for describing a circuit control method when performing the second and third CDS processes according to the first embodiment of the present technology.
  • FIG. 11 is a diagram for describing a control method in a comparative example having two frame memories according to the present technology.
  • FIG. 11 is a diagram for describing a control method in a comparative example having three frame memories of the present technology.
  • A flowchart illustrating an example of an operation of the solid-state imaging device according to the first embodiment of the present technology.
  • FIG. 13 is a block diagram illustrating a configuration example of a data processing unit according to a second embodiment of the present technology.
  • FIG. 11 is a diagram illustrating an example of a stacked structure of a solid-state imaging device according to a third embodiment of the present technology.
  • FIG. 13 is a block diagram illustrating a configuration example of a data processing unit according to a third embodiment of the present technology.
  • FIG. 21 is a block diagram illustrating a configuration example of a data processing unit according to a fourth embodiment of the present technology.
  • FIG. 21 is a block diagram illustrating a configuration example of a DSP (Digital Signal Processing) circuit according to a fourth embodiment of the present technology.
  • FIG. 2 is a block diagram illustrating a schematic configuration example of a vehicle control system.
  • FIG. 4 is an explanatory diagram illustrating an example of an installation position of an imaging unit.
  • First embodiment: an example in which data is alternately held in a pair of frame memories
  • Second embodiment: an example in which data is alternately held in a pair of frame memories and a pair of CDS processing units is arranged
  • Third embodiment: an example in which data is alternately held in a pair of frame memories and the circuit that performs CDS processing is distributed over two substrates
  • Fourth embodiment: an example in which data is alternately held in a pair of frame memories and part of the circuit is arranged outside the solid-state imaging device
  • FIG. 1 is a block diagram illustrating a configuration example of an imaging device 100 according to the first embodiment of the present technology.
  • The imaging device 100 is a device that captures image data, and includes an optical unit 110, a solid-state imaging device 200, and a DSP circuit 120. The imaging device 100 further includes a display unit 130, an operation unit 140, a bus 150, a frame memory 160, a storage unit 170, and a power supply unit 180.
  • As the imaging device 100, a digital camera such as a digital still camera, or a device having an imaging function such as a smartphone, a personal computer, or an in-vehicle camera is assumed.
  • the optical unit 110 collects light from a subject and guides the light to the solid-state imaging device 200.
  • the solid-state imaging device 200 generates image data by photoelectric conversion in synchronization with a vertical synchronization signal VSYNC.
  • the vertical synchronization signal VSYNC is a periodic signal of a predetermined frequency indicating the timing of imaging.
  • the solid-state imaging device 200 supplies the generated image data to the DSP circuit 120 via the signal line 209.
  • the DSP circuit 120 performs predetermined signal processing on image data from the solid-state imaging device 200.
  • the DSP circuit 120 outputs the processed image data to the frame memory 160 via the bus 150.
  • the DSP circuit 120 is an example of a signal processing unit described in the claims.
  • the display unit 130 displays image data.
  • As the display unit 130, a liquid crystal panel or an organic EL (Electro Luminescence) panel is assumed.
  • the operation unit 140 generates an operation signal according to a user operation.
  • the bus 150 is a common path through which the optical unit 110, the solid-state imaging device 200, the DSP circuit 120, the display unit 130, the operation unit 140, the frame memory 160, the storage unit 170, and the power supply unit 180 exchange data with each other.
  • the frame memory 160 stores image data.
  • the storage unit 170 stores various data such as image data.
  • the power supply unit 180 supplies power to the solid-state imaging device 200, the DSP circuit 120, the display unit 130, and the like.
  • FIG. 2 is a diagram illustrating an example of a stacked structure of the solid-state imaging device 200 according to the first embodiment of the present technology.
  • The solid-state imaging device 200 includes a circuit board 202 and a light receiving substrate 201 stacked on the circuit board 202. These substrates are electrically connected via connection parts such as vias. Note that, in addition to vias, they can also be connected by Cu-Cu bonding, bumps, or an inductive coupling communication technology such as TCI (ThruChip Interface).
  • FIG. 3 is a plan view illustrating a configuration example of the light receiving substrate 201 according to the first embodiment of the present technology.
  • the pixel array unit 210 is provided on the light receiving substrate 201.
  • a plurality of pixel blocks 211 are arranged in a two-dimensional lattice.
  • a plurality of pixels 212 are arranged in each of the pixel blocks 211.
  • Eight pixels 212 in 2 rows × 4 columns are arranged in each pixel block 211. Note that the number of pixels in the pixel block 211 is not limited to eight.
  • the pixel 212 generates an analog signal by photoelectric conversion.
  • The level of the analog signal generated when the pixel 212 is initialized is hereinafter referred to as the "reset level", and the level of the analog signal corresponding to the exposure amount generated at the end of the exposure of the pixel 212 is hereinafter referred to as the "signal level".
  • FIG. 4 is a block diagram illustrating a configuration example of the circuit board 202 according to the first embodiment of the present technology.
  • The circuit board 202 includes a DAC (Digital to Analog Converter) 220, a vertical drive circuit 230, a timing control circuit 240, a time code generation unit 250, an AD conversion unit 260, and a data processing unit 270.
  • the DAC 220 generates an analog ramp signal RMP that changes in a slope shape by DA (Digital to Analog) conversion.
  • the DAC 220 supplies the generated ramp signal RMP to the AD converter 260.
  • the AD converter 260 converts, for each pixel, an analog signal from the pixel into a digital signal.
  • the clusters 300 are arranged in a two-dimensional lattice.
  • the cluster 300 is provided for each pixel block 211. If the number of pixel blocks 211 is N (N is an integer), N clusters 300 are also provided.
  • the pixel block 211 and the cluster 300 are connected one-to-one.
  • Each of the clusters 300 converts an analog signal from the corresponding pixel block 211 into a digital signal for each pixel and supplies the digital signal to the data processing unit 270.
  • a digital signal corresponding to the reset level of the analog signal is hereinafter referred to as “P-phase data”, and a digital signal corresponding to the signal level is hereinafter referred to as “D-phase data”.
  • the P-phase data is an example of reset data described in the claims
  • the D-phase data is an example of signal data described in the claims.
  • the vertical drive circuit 230 drives each of the clusters 300 under the control of the timing control circuit 240.
  • the timing control circuit 240 controls the DAC 220, the vertical drive circuit 230, and the data processing unit 270 in synchronization with the vertical synchronization signal VSYNC.
  • the time code generation unit 250 generates a time code. This time code indicates a time within a period during which the ramp signal changes in a slope shape.
  • For example, the time code generation unit 250 increments a count value in synchronization with a clock signal having a constant frequency, and generates data indicating the count value as the time code.
  • the time code generation unit 250 supplies the generated time code to the AD conversion unit 260.
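The role of the time code in this single-slope conversion can be sketched as follows. This is an assumed, simplified software model, not the patent's circuit: levels are given in integer DAC counts, the ramp drops one count per clock, and the data storage unit is modeled by the returned value.

```python
def single_slope_adc(input_level: int, ramp_start: int, max_code: int) -> int:
    """Latch the running time code at the moment the falling ramp
    crosses the input level (where the comparator output inverts).
    Levels are integer DAC counts; the ramp drops one count per clock."""
    for time_code in range(max_code):
        ramp = ramp_start - time_code      # ramp decreases in a slope shape
        if ramp <= input_level:            # comparator output inverts here
            return time_code               # data storage unit holds this code
    return max_code - 1                    # ramp ended without crossing

# A reset level near the ramp start is crossed early (small code);
# a lower signal level is crossed later (larger code).
```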
  • the data processing unit 270 executes a predetermined process including a CDS process on the P-phase data and the D-phase data from the AD conversion unit 260.
  • the data processing unit 270 supplies the DSP circuit 120 with image data including the processed data.
  • FIG. 5 is a block diagram illustrating a configuration example of the cluster 300 according to the first embodiment of the present technology.
  • the cluster 300 includes comparison circuits 311 to 318, data storage units 321 to 328, and data storage units 331 to 338.
  • A data transfer unit 261 is arranged for each column of clusters 300. When the number of columns of clusters 300 is M (M is an integer), M data transfer units 261 are arranged. When L clusters 300 (L is an integer) are arrayed in each column, those L clusters 300 share the one data transfer unit 261 corresponding to that column.
  • the data transfer unit 261 transfers data from the time code generation unit 250 to the cluster 300, and transfers data from the cluster 300 to the data processing unit 270.
  • the data transfer unit 261 includes a P-phase transfer unit 262, a write data transfer unit 264, and a D-phase transfer unit 265.
  • The P-phase transfer unit 262 transfers the P-phase data from the cluster 300 to the data processing unit 270.
  • the write data transfer unit 264 transfers the time code from the time code generation unit 250 to the cluster 300.
  • the D-phase transfer unit 265 transfers the D-phase data from the cluster 300 to the data processing unit 270.
  • the comparison circuits 311 to 318 are connected one-to-one with the eight pixels 212 in the pixel block 211 corresponding to the cluster 300.
  • In the left 2 rows × 2 columns of the pixel block 211, R (Red), Gb (Green), B (Blue), and Gr (Green) pixels are arranged; R, Gb, B, and Gr pixels are also arranged in the right 2 rows × 2 columns.
  • The left Gb pixel is connected to the comparison circuit 311, and the left B pixel is connected to the comparison circuit 312.
  • the left R pixel is connected to the comparison circuit 313, and the left Gr pixel is connected to the comparison circuit 314.
  • the right Gb pixel is connected to the comparison circuit 315, and the right B pixel is connected to the comparison circuit 316.
  • the right R pixel is connected to the comparison circuit 317, and the right Gr pixel is connected to the comparison circuit 318.
  • Comparison circuit 311 outputs the comparison result to data storage units 321 and 322, and comparison circuit 312 outputs the result to data storage units 323 and 324.
  • Comparison circuit 313 outputs the comparison result to data storage units 325 and 326, and comparison circuit 314 outputs the result to data storage units 327 and 328.
  • comparison circuit 315 outputs the comparison result to data storage units 331 and 332, and comparison circuit 316 outputs the result to data storage units 333 and 334.
  • Comparison circuit 317 outputs the comparison result to data storage units 335 and 336, and comparison circuit 318 outputs the result to data storage units 337 and 338.
  • the data storage unit 321 holds P-phase data.
  • the data storage unit 321 holds the time code from the write data transfer unit 264 as P-phase data at the timing when the comparison result is inverted after the output of the reset level. Then, the data storage unit 321 outputs the held P-phase data to the data processing unit 270 via the P-phase transfer unit 262.
  • the configuration of the data storage units 323, 325, 327, 331, 333, 335, and 337 is the same as that of the data storage unit 321.
  • the data storage unit 322 holds D-phase data.
  • the data storage unit 322 holds the time code from the write data transfer unit 264 as D-phase data at the timing when the comparison result is inverted after the output of the signal level. Then, the data storage unit 322 outputs the held D-phase data to the data processing unit 270 via the D-phase transfer unit 265.
  • the configuration of the data storage units 324, 326, 328, 332, 334, 336, and 338 is the same as that of the data storage unit 322.
  • the cluster 300 alternately converts the reset level and the signal level of each pixel from the pixel block 211 into digital signals.
  • This AD conversion is repeatedly executed in all pixels in synchronization with the vertical synchronization signal VSYNC.
  • the data transfer unit 261 repeatedly and alternately outputs the P-phase data and the D-phase data to the data processing unit 270 in synchronization with the vertical synchronization signal VSYNC.
  • the data transfer unit 261 is an example of a data output unit described in the claims.
  • the numbers of the comparison circuits and the data storage units in the cluster 300 are not limited to eight and sixteen, respectively.
  • When the number of pixels in the pixel block 211 is K (K is an integer), the number of comparison circuits in the cluster 300 is K, and the number of data storage units is K × 2.
  • FIG. 6 is a perspective view illustrating an example of a connection relationship between the pixel 212 and a circuit in the cluster 300 according to the first embodiment of the present technology.
  • Gb, B, R, and Gr pixels are arranged at coordinates (0, 0), (0, 1), (1, 0), and (1, 1) on the left side. Further, Gb pixels, B pixels, R pixels, and Gr pixels are also arranged at the coordinates (0, 2), (0, 3), (1, 2), and (1, 3) on the right side.
  • Such an array is called a Bayer array. Note that the pixels can also be arranged in patterns other than the Bayer array; for example, R, G, B, and W (White) pixels can be arranged.
  • the left Gb pixel is connected to the comparison circuit 311, and the left B pixel is connected to the comparison circuit 312.
  • the left R pixel is connected to the comparison circuit 313, and the left Gr pixel is connected to the comparison circuit 314.
  • the right Gb pixel is connected to the comparison circuit 315, and the right B pixel is connected to the comparison circuit 316.
  • the right R pixel is connected to the comparison circuit 317, and the right Gr pixel is connected to the comparison circuit 318.
  • In the figure, the data storage units 321 to 328 are arranged in the left memory 320, and the data storage units 331 to 338 are arranged in the right memory 330.
  • FIG. 7 is a circuit diagram illustrating a configuration example of the P-phase transfer unit 262 according to the first embodiment of the present technology.
  • the P-phase transfer unit 262 is provided with a shift register including a predetermined number of flip-flops 263. These flip-flops 263 are connected in series.
  • Each of the data storage units holding P-phase data, such as the left data storage units 321, 323, 325, and 327, is connected by wired-OR connection to the input terminal of a different one of the flip-flops 263.
  • the shift register holds the P-phase data bit by bit in synchronization with the clock signal CLK and outputs the data to the data processing unit 270.
  • the configurations of the write data transfer unit 264 and the D-phase transfer unit 265 are the same as those of the P-phase transfer unit 262.
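The bit-serial transfer through the chain of flip-flops can be modeled roughly as follows (a software sketch, not the actual circuit; the register contents are illustrative):

```python
def shift_out(register: list, clocks: int) -> list:
    """Model of a serial shift register: on each clock, every flip-flop
    takes its predecessor's bit and the last flip-flop drives the output."""
    reg = list(register)
    out = []
    for _ in range(clocks):
        out.append(reg[-1])       # last flip-flop drives the output line
        reg = [0] + reg[:-1]      # bits move one stage toward the output
    return out

# Shifting a 4-bit register out over 4 clocks emits its bits in
# output-end-first order.
```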
  • FIG. 8 is a block diagram illustrating a configuration example of the data processing unit 270 according to the first embodiment of the present technology.
  • the data processing unit 270 includes frame memories 271, 272, 275, and 276, selectors 273 and 277, and a CDS processing unit 274.
  • the AD converter 260 includes a clock supply unit 266 in addition to the cluster 300, the P-phase transfer unit 262, the write data transfer unit 264, and the D-phase transfer unit 265. Note that, in the figure, the cluster 300 is omitted.
  • the clock supply unit 266 supplies the clock signal CLK to each of the P-phase transfer unit 262, the write data transfer unit 264, and the D-phase transfer unit 265.
  • the frame memories 271 and 272 alternately hold P-phase data repeatedly output from the P-phase transfer unit 262.
  • the P-phase data from each of the M P-phase transfer units 262 is input to the frame memory 271 via the signal line 291.
  • P-phase data from each of the M P-phase transfer units 262 is input to the frame memory 272 via a signal line 292.
  • a control signal from the timing control circuit 240 is input to the frame memories 271 and 272.
  • This control signal is a signal for instructing writing or reading of data.
  • frame memories 271 and 272 fetch and hold P-phase data of all pixels.
  • Each of these frame memories has such a capacity that it can hold at least one frame of P-phase data.
  • frame memories 271 and 272 when reading is instructed, output the held P-phase data to selector 273 via signal lines 293 and 294.
  • the frame memories 271 and 272 are an example of a pair of reset data holding units described in the claims.
  • The selector 273 alternately selects the frame memories 271 and 272 in accordance with a selection signal from the timing control circuit 240, and outputs the P-phase data from the selected memory to the CDS processing unit 274 via the signal line 296. Note that the selector 273 is an example of an input-side selection unit described in the claims.
  • The CDS processing unit 274 repeats, in synchronization with the vertical synchronization signal VSYNC, the CDS process of generating, as pixel data, the difference between the P-phase data from the selector 273 and the D-phase data from the M D-phase transfer units 265.
  • the CDS processing unit 274 supplies the generated pixel data to the frame memories 275 and 276 via the signal lines 297 and 298.
  • the CDS processing unit 274 is an example of a pixel data generation unit described in the claims.
  • the frame memories 275 and 276 alternately hold pixel data repeatedly generated by the CDS processing unit 274.
  • the capacity of each of these frame memories is large enough to hold at least one frame of pixel data.
  • Control signals from the timing control circuit 240 are also input to the frame memories 275 and 276. When writing is instructed by the control signal, the frame memories 275 and 276 fetch and hold the pixel data of all the pixels.
  • frame memories 275 and 276 output the held pixel data to selector 277 via signal lines 299 and 299-1.
  • the frame memories 275 and 276 are examples of a pair of pixel data holding units described in the claims.
  • the frame memories 275 and 276 hold pixel data in the order of output in units of clusters 300.
  • pixel data is arranged in raster (row) order. Therefore, the timing control circuit 240 transmits a control signal to the frame memories 275 and 276 so that the reading order of the pixels is in the raster order. Thereby, the data processing unit 270 can rearrange and output the pixel data in the raster order.
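The rearrangement can be sketched as follows (a simplified model with hypothetical cluster geometry: each received value is assumed to carry a known (row, column) position, and reading the frame memory row by row then yields raster order):

```python
def to_raster_order(cluster_data, positions, width, height):
    """Rearrange pixel data received in cluster-output order into raster order.
    `positions[i]` gives the (row, col) position of the i-th received value."""
    frame = [[0] * width for _ in range(height)]
    for value, (row, col) in zip(cluster_data, positions):
        frame[row][col] = value          # write at the pixel's true position
    # Reading the frame memory row by row yields raster order.
    return [frame[r][c] for r in range(height) for c in range(width)]
```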
  • The selector 277 alternately selects the frame memories 275 and 276 in accordance with a selection signal from the timing control circuit 240, and outputs pixel data from the selected memory to the DSP circuit 120. Note that the selector 277 is an example of an output-side selection unit described in the claims.
  • Each of the signal lines 291 to 299-1 is represented by one line for convenience of description, but physically each consists of a plurality of wired lines.
  • a total of M signal lines are wired as signal lines 291, one from each of the M P-phase transfer units 262.
  • FIG. 9 is a timing chart illustrating an example of an operation of the solid-state imaging device 200 according to the first embodiment of the present technology.
  • the vertical synchronization signal VSYNC falls, and after timing T0, the DAC 220 decreases the ramp signal RMP in a slope shape.
  • the left memory 320 holds P-phase data based on the result of comparison between the ramp signal RMP and the reset level.
  • the P-phase transfer unit 262 transfers the P-phase data to the frame memory 271 for a certain period after the timing T1, and the frame memory 271 holds the P-phase data.
  • the DAC 220 starts supplying the ramp signal RMP immediately after the timing T1, and the left memory 320 retains the D-phase data based on the comparison result between the ramp signal RMP and the signal level.
  • The vertical synchronization signal VSYNC falls, and after timing T2, the DAC 220 decreases the ramp signal RMP in a slope shape.
  • the left memory 320 holds the second P-phase data.
  • the D-phase transfer unit 265 starts the first transfer of the D-phase data.
  • The CDS processing unit 274 reads the first P-phase data from the frame memory 271 and obtains the difference from the first D-phase data as the CDS result (that is, pixel data).
  • the frame memory 275 holds the CDS result.
  • the P-phase transfer unit 262 starts the second transfer of the P-phase data, and the frame memory 272 holds the P-phase data.
  • the DAC 220 starts supplying the ramp-shaped ramp signal RMP, and the left memory 320 holds the second D-phase data.
  • the vertical synchronization signal VSYNC falls, and after timing T4, the DAC 220 decreases the ramp signal RMP in a slope shape.
  • the left memory 320 holds the third P-phase data.
  • the D-phase transfer unit 265 starts the second transfer of the D-phase data.
  • The CDS processing unit 274 reads the second P-phase data from the frame memory 272 and obtains the difference from the second D-phase data as the CDS result (that is, pixel data).
  • the frame memory 276 holds the CDS result.
  • the first CDS result is read from the frame memory 275 in raster order, and output from the selector 277.
  • the P-phase transfer unit 262 starts the third transfer of the P-phase data, and the frame memory 271 holds the P-phase data.
  • the DAC 220 starts supplying the ramp-shaped ramp signal RMP, and the left memory 320 holds the third D-phase data.
  • the same read control is repeatedly executed.
  • the cluster 300 including the left memory 320 alternately and repeatedly holds the time code as P-phase data and D-phase data in synchronization with the vertical synchronization signal VSYNC.
  • the P-phase transfer unit 262 repeatedly transfers the P-phase data in synchronization with the vertical synchronization signal VSYNC, while the D-phase transfer unit 265 repeatedly transfers the D-phase data in synchronization with the vertical synchronization signal VSYNC.
  • since the time until the comparison result for the reset level is inverted is relatively short, the time from the falling of the vertical synchronization signal VSYNC to the holding of the P-phase data is generally shorter than the transfer period of the D-phase data. Therefore, the P-phase data is held during the transfer of the D-phase data. For this reason, if the transfer of the P-phase data is started immediately after the P-phase data is held, a period occurs in which the P-phase data and the D-phase data are transferred simultaneously.
  • the frame memories 271 and 272 alternately hold the P-phase data output repeatedly.
  • while the P-phase data is written to one of the frame memories, the P-phase data is read from the other.
  • the CDS processing unit 274 generates a difference between the read P-phase data and the subsequently transferred D-phase data as pixel data.
  • the frame memories 275 and 276 alternately hold the repeatedly generated pixel data.
  • while the pixel data is written to one of the frame memories 275 and 276, the selector 277 reads and outputs the pixel data from the other in raster order.
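The ping-pong operation described above can be sketched in Python. This is a hypothetical model for illustration only; the class and method names are not part of the patent, and the per-frame alternation is simplified to an index toggle.

```python
# Hypothetical model of the data processing unit 270: P-phase (reset) data
# ping-pongs between frame memories 271/272, and CDS results ping-pong
# between frame memories 275/276.
class DataProcessingUnit:
    def __init__(self):
        self.p_mem = [None, None]    # models frame memories 271 and 272
        self.pix_mem = [None, None]  # models frame memories 275 and 276
        self.frame = 0

    def write_p_phase(self, p_data):
        # The timing control circuit alternates the destination each frame,
        # so the next P-phase transfer never collides with the previous one.
        self.p_mem[self.frame % 2] = p_data

    def process_d_phase(self, d_data):
        # CDS: the memory holding this frame's reset data is selected and
        # subtracted pixel by pixel from the D-phase data.
        p_data = self.p_mem[self.frame % 2]
        self.pix_mem[self.frame % 2] = [d - p for d, p in zip(d_data, p_data)]
        self.frame += 1

    def read_pixels(self):
        # Models selector 277 reading the memory written for the latest frame.
        return self.pix_mem[(self.frame - 1) % 2]
```

For example, a frame with reset levels `[10, 12]` and signal levels `[110, 132]` yields pixel data `[100, 120]`, and the next frame's P-phase data lands in the other memory of the pair.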
  • FIG. 10 is a diagram for describing a circuit control method until the first CDS process is performed in the first embodiment of the present technology.
  • a shows the state of the data processing unit 270 when holding the first P-phase data
  • b in the figure shows the state of the data processing unit 270 when performing the first CDS processing.
  • when the first P-phase data is transferred, the timing control circuit 240 writes the data in the frame memory 271 by a control signal, as illustrated in a of FIG. 10.
  • next, the first D-phase data is transferred, as illustrated in b of FIG. 10.
  • the timing control circuit 240 writes the second P-phase data in the frame memory 272 by the control signal.
  • the selector 273 selects the frame memory 271, reads out the first P-phase data from the memory, and supplies the data to the CDS processing unit 274.
  • the CDS processing unit 274 generates the difference between the first P-phase data and the D-phase data as net pixel data.
  • the timing control circuit 240 writes the first pixel data into the frame memory 275 according to the control signal.
  • FIG. 11 is a diagram for describing a circuit control method when performing the second and third CDS processes according to the first embodiment of the present technology.
  • a shows the state of the data processing unit 270 when performing the second CDS processing
  • b in the figure shows the state of the data processing unit 270 when performing the third CDS processing.
  • the second D-phase data is transferred. It is assumed that the third transfer of P-phase data is started during the second transfer of D-phase data.
  • the timing control circuit 240 writes the third P-phase data in the frame memory 271 by the control signal.
  • the selector 273 selects the frame memory 272, reads the second P-phase data from the memory, and supplies it to the CDS processing unit 274.
  • the CDS processing unit 274 generates a difference between the second P-phase data and the D-phase data as pixel data.
  • the timing control circuit 240 writes the second pixel data to the frame memory 276 according to the control signal.
  • the selector 277 selects the frame memory 275, reads out the first pixel data from the memory in raster order, and outputs the pixel data to the DSP circuit 120.
  • the third D-phase data is transferred. It is assumed that the fourth transfer of P-phase data is started during the third transfer of D-phase data.
  • the timing control circuit 240 writes the fourth P-phase data in the frame memory 272 according to the control signal.
  • the selector 273 selects the frame memory 271, reads out the third P-phase data from the memory, and supplies it to the CDS processing unit 274.
  • the CDS processing unit 274 generates a difference between the third P-phase data and the D-phase data as pixel data.
  • the timing control circuit 240 writes the third pixel data in the frame memory 275 by the control signal.
  • the selector 277 selects the frame memory 276, reads out the second pixel data from the memory in raster order, and outputs it to the DSP circuit 120.
  • as described above, one of the frame memories 271 and 272 is always free, so the timing control circuit 240 can write the next P-phase data to it. Therefore, even when the transfer of the next P-phase data is started during the transfer of the D-phase data, the data processing unit 270 can generate the pixel data by the CDS processing. In this configuration, the transfer of the next P-phase data can be started during the transfer of the D-phase data, so that the reading speed (frame rate) of the image data (frames) can be increased.
  • the frame memories 271 and 272 are used only for writing P-phase data, and the frame memories 275 and 276 are used only for writing pixel data. With this configuration, an increase in the number of wirings in the data processing unit 270 can be suppressed.
  • FIG. 12 is a diagram for explaining a control method in a comparative example having two frame memories according to the present technology.
  • a shows the state of the comparative example when the first P-phase data is held
  • b in the figure shows the state of the comparative example when the first CDS processing is executed.
  • the selectors A, B, C, and D, the frame memories A and B, and the CDS processor are arranged in the data processor of this comparative example.
  • the selector A selects one of the P-phase data and the pixel data and outputs it to the frame memory A
  • the selector B selects one of the P-phase data and the pixel data and outputs it to the frame memory B.
  • the selector C selects one of the frame memories A and B, reads out pixel data from the memory, and outputs the pixel data to a subsequent circuit.
  • the selector D selects one of the frame memories A and B, reads pixel data from the memory, and outputs the pixel data to the CDS processing unit.
  • when the first P-phase data is transferred, the selector A selects the P-phase data and supplies it to the frame memory A, as illustrated in a of FIG. 12, and the P-phase data is written in the frame memory A.
  • then, when the first D-phase data is transferred, the selector D reads the first P-phase data from the frame memory A, as illustrated in b of FIG. 12.
  • the difference between the P-phase data and the D-phase data is output as pixel data.
  • the selector B selects the pixel data and outputs it to the frame memory B. Pixel data is written to the frame memory B.
  • both the frame memories A and B are in use during the first D-phase data transfer. For this reason, even if the second P-phase data is transferred during the first transfer of the D-phase data, there is no memory to write it to, and the second and subsequent CDS processes cannot be executed. In this comparative example, therefore, the period of the vertical synchronization signal VSYNC needs to be made sufficiently long, and the reading speed (frame rate) of the image data is reduced as compared with the configuration of the first embodiment.
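The resource conflict in this comparative example can be made concrete with a small bookkeeping sketch. This is purely illustrative; the memory names and the `free_memory` helper are hypothetical, not part of the patent.

```python
# With only two shared frame memories, both are occupied during the first
# D-phase transfer, so the second P-phase data has nowhere to be written.
def free_memory(memories):
    """Return the name of a free memory, or None if all are in use."""
    return next((name for name, content in memories.items() if content is None), None)

# State during the first D-phase transfer in the two-memory comparative example:
memories = {"A": "P-phase data (frame 1)", "B": "pixel data (frame 1)"}
assert free_memory(memories) is None  # the second P-phase transfer must wait

# In the first embodiment, reset data and pixel data use separate memory
# pairs, so one reset-data memory is always free for the next P-phase data:
reset_memories = {"271": "P-phase data (frame 1)", "272": None}
assert free_memory(reset_memories) == "272"
```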
  • FIG. 13 is a diagram for describing a control method in a comparative example having three frame memories of the present technology.
  • a shows the state of the comparative example when the first P-phase data is held
  • b in the figure shows the state of the comparative example when the first CDS processing is executed.
  • the selectors A, B, C, D, and E, the frame memories A, B, and C, and the CDS processor are arranged in the data processor of this comparative example.
  • the selector A selects one of the P-phase data and the pixel data and outputs it to the frame memory A
  • the selector B selects one of the P-phase data and the pixel data and outputs it to the frame memory B.
  • the selector C selects one of the P-phase data and the pixel data and outputs it to the frame memory C.
  • the selector D selects one of the frame memories A, B, and C, reads pixel data from the memory, and outputs the pixel data to a subsequent circuit.
  • the selector E selects one of the frame memories A, B, and C, reads pixel data from the memory, and outputs the pixel data to the CDS processing unit.
  • when the first P-phase data is transferred, the selector A selects the P-phase data and supplies it to the frame memory A, as illustrated in a of FIG. 13, and the P-phase data is written in the frame memory A.
  • then, when the first D-phase data is transferred, the selector E reads the first P-phase data from the frame memory A, as illustrated in b of FIG. 13.
  • the difference between the P-phase data and the D-phase data is output as pixel data.
  • the selector C selects the pixel data and outputs it to the frame memory C. Pixel data is written in the frame memory C.
  • the selector B selects the P-phase data and outputs it to the frame memory B. The P-phase data is written into the frame memory B.
  • in this comparative example, the number of wirings in the data processing unit is increased as compared with the configuration of the first embodiment. Since the number of wirings is proportional to the number of arrows indicating data output destinations, a comparison of the arrows shows 11 in FIG. 11 but 18 in the comparative example of FIG. 13.
  • in these comparative examples, the frame memories are not divided into memories for writing only P-phase data and memories for writing only pixel data.
  • in this case, if the number of frame memories is two, it becomes difficult to improve the reading speed of the image data.
  • on the other hand, if the number of frame memories is three, the number of wirings increases.
  • the data processing unit 270 is provided with frame memories 271 and 272 for writing only P-phase data and frame memories 275 and 276 for writing only pixel data.
  • the data can be written to the empty one of the frame memories 271 and 272.
  • the read speed of the image data can be improved as compared with the comparative example having two frame memories.
  • the number of wirings can be reduced as compared with the comparative example having three frame memories. Further, the number of selectors can be reduced as compared with the comparative example. By reducing the number of wirings and the like, an increase in manufacturing cost can be suppressed.
  • FIG. 14 is a flowchart illustrating an example of an operation of the solid-state imaging device 200 according to the first embodiment of the present technology. This operation is started, for example, when a predetermined application for capturing image data is executed.
  • the cluster 300 in the solid-state imaging device 200 generates P-phase data, and the data transfer unit 261 transfers the data (Step S901).
  • the timing control circuit 240 writes P-phase data in one of the frame memories 271 and 272 (step S902).
  • the cluster 300 generates the D-phase data, and the data transfer unit 261 transfers the data (Step S903).
  • the CDS processing unit 274 generates pixel data by a CDS operation (Step S904).
  • the timing control circuit 240 writes the pixel data into one of the frame memories 275 and 276 (step S905), and the selector 277 reads out the pixel data from the other in raster order and outputs it (step S906).
  • the solid-state imaging device 200 repeatedly executes step S901 and subsequent steps in synchronization with the vertical synchronization signal VSYNC.
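The loop of steps S901 to S906 can be rendered end to end as a short sketch. This is a hypothetical illustration with made-up data values; for simplicity, each frame's pixel data is read back immediately, whereas the actual selector 277 reads from the other memory of the pair.

```python
# Hypothetical rendering of the flowchart of FIG. 14 (steps S901-S906).
def run_frames(frames):
    p_banks = [None, None]    # models frame memories 271 and 272
    pix_banks = [None, None]  # models frame memories 275 and 276
    outputs = []
    for n, (p_data, d_data) in enumerate(frames):
        bank = n % 2                          # alternate in sync with VSYNC
        p_banks[bank] = p_data                # S901-S902: transfer and write P-phase data
        pixels = [d - p for d, p in zip(d_data, p_banks[bank])]  # S903-S904: D-phase transfer and CDS
        pix_banks[bank] = pixels              # S905: write pixel data
        outputs.append(pix_banks[bank])       # S906: read out and output (simplified)
    return outputs
```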
  • the frame memories 271 and 272 alternately hold the P-phase data
  • the frame memories 275 and 276 alternately hold the pixel data after the CDS processing. This eliminates the need to arrange, for each frame memory, a selector for selecting either the P-phase data or the pixel data, and the number of wirings can be reduced as compared with a comparative example in which such a selector is arranged for each frame memory.
  • the selector 273 is arranged between the frame memories 271 and 272 and the CDS processing unit 274.
  • the wiring distance between the frame memories 271 and 272 and the CDS processing unit 274 is increased by the amount of the selector 273, and the propagation delay may be increased.
  • the data processing unit 270 according to the second embodiment differs from the first embodiment in that the selector 273 is omitted.
  • FIG. 15 is a block diagram illustrating a configuration example of the data processing unit 270 according to the second embodiment of the present technology.
  • the data processing unit 270 of the second embodiment differs from the first embodiment in that CDS processing units 278 and 279 are provided instead of the selector 273 and the CDS processing unit 274.
  • the D-phase data from the D-phase transfer unit 265 is input to both the CDS processing units 278 and 279.
  • the CDS processing unit 278 reads out the P-phase data stored in the frame memory 271 and generates a difference from the subsequently transferred D-phase data as pixel data.
  • the CDS processing unit 279 reads out the P-phase data held in the frame memory 272 and generates a difference from the subsequently transferred D-phase data as pixel data. Then, the CDS processing unit 278 outputs the pixel data to the frame memory 275, and the CDS processing unit 279 outputs the pixel data to the frame memory 276.
  • the CDS processing units 278 and 279 are an example of a pair of pixel data generation circuits described in the claims.
  • since the CDS processing unit 278 reads the P-phase data from the frame memory 271 and the CDS processing unit 279 reads the P-phase data from the frame memory 272, there is no need for a selector between the frame memories and the CDS processing units. Thus, the wiring distance between the frame memory and the CDS processing unit can be reduced, and the propagation delay can be reduced.
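The second embodiment's change relative to the first can be sketched as follows. The class and variable names here are hypothetical illustrations; the point is only that each CDS circuit is permanently paired with one reset-data memory, removing the input-side selector.

```python
# Hypothetical sketch of the second embodiment: CDS processing units 278 and
# 279 are each hardwired to one reset-data memory, so no input-side selector
# is needed between the frame memories and the CDS circuits.
class HardwiredCds:
    def __init__(self, reset_memory):
        self.reset_memory = reset_memory  # fixed association (no selector 273)

    def process(self, d_phase):
        # Subtract the held reset level from the D-phase data, pixel by pixel.
        return [d - p for d, p in zip(d_phase, self.reset_memory)]

mem_271 = [100, 101]             # P-phase data held in frame memory 271
mem_272 = [102, 103]             # P-phase data held in frame memory 272
cds_278 = HardwiredCds(mem_271)  # its output goes to frame memory 275
cds_279 = HardwiredCds(mem_272)  # its output goes to frame memory 276
```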
  • the circuits in the solid-state imaging device 200 are distributed and arranged on the light receiving substrate 201 and the circuit substrate 202. However, in this configuration, as the number of pixels increases, the scale of a circuit disposed on each of the substrates may increase.
  • the solid-state imaging device 200 according to the third embodiment is different from the first embodiment in that circuits in the solid-state imaging device 200 are dispersed and arranged on three substrates.
  • FIG. 16 is a diagram illustrating an example of a stacked structure of the solid-state imaging device 200 according to the third embodiment of the present technology.
  • the solid-state imaging device 200 according to the third embodiment includes an upper circuit board 203 and a lower circuit board 204 instead of the circuit board 202.
  • the upper circuit board 203 is disposed between the light receiving board 201 and the lower circuit board 204.
  • the upper circuit board 203 is an example of a first circuit board described in the claims
  • the lower circuit board 204 is an example of a second circuit board described in the claims.
  • FIG. 17 is a block diagram illustrating a configuration example of the data processing unit 270 according to the third embodiment of the present technology.
  • a part of the circuit in the data processing unit 270 is disposed on the upper circuit board 203 as the upper data processing unit 281, and the rest is disposed on the lower circuit board 204 as the lower data processing unit 282.
  • a circuit including the frame memories 271 and 272, the selector 273, and the CDS processing unit 274 is provided on the upper circuit board 203 as the upper data processing unit 281.
  • a circuit including the frame memories 275 and 276 and the selector 277 is provided on the lower circuit board 204 as the lower data processing unit 282.
  • the DAC 220, the vertical drive circuit 230, the AD converter 260, and the time code generator 250 are arranged on the upper circuit board 203 as in the first embodiment.
  • in this example, the frame memories 271 and 272, the selector 273, and the CDS processing unit 274 in the data processing unit 270 are arranged on the upper circuit board 203, and the rest are arranged on the lower circuit board 204; however, the circuits arranged on each substrate are not limited to this configuration.
  • the circuits in the solid-state imaging device 200 are dispersed and arranged on three substrates.
  • the circuit scale for each substrate can be reduced.
  • the circuit for CDS processing and the circuit for converting the reading order of the pixels into the raster order are arranged in the solid-state imaging device 200.
  • therefore, the circuit scale of the solid-state imaging device 200 may increase as the number of pixels increases.
  • the imaging apparatus 100 according to the fourth embodiment is different from the first embodiment in that a circuit for converting the reading order of pixels into a raster order is arranged outside the solid-state imaging device 200.
  • FIG. 18 is a block diagram illustrating a configuration example of the data processing unit 270 according to the fourth embodiment of the present technology.
  • the data processing unit 270 according to the fourth embodiment differs from the first embodiment in that the frame memories 275 and 276 and the selector 277 are not provided.
  • the CDS processing unit 274 according to the fourth embodiment outputs pixel data to the DSP circuit 120.
  • FIG. 19 is a block diagram illustrating a configuration example of a DSP circuit 120 according to the fourth embodiment of the present technology.
  • the DSP circuit 120 includes a timing control circuit 121, frame memories 122 and 123, a selector 124, and a post-processing unit 125.
  • the configurations of the timing control circuit 121, the frame memories 122 and 123, and the selector 124 are the same as those of the timing control circuit 240, the frame memories 275 and 276, and the selector 277 of the first embodiment.
  • the selector 124 supplies the selected pixel data to the post-processing unit 125.
  • the post-processing unit 125 executes various image processing such as demosaic processing and white balance correction processing. Note that the post-processing unit 125 is an example of a signal processing unit described in the claims.
  • a circuit for converting the reading order of the pixels into the raster order is arranged in the DSP circuit 120 outside the solid-state imaging device 200.
  • with this configuration, the circuit scale of the solid-state imaging device 200 can be reduced as compared with the case where that circuit is arranged in the solid-state imaging device 200.
  • the technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of moving object such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 20 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a moving object control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 12000 includes a plurality of electronic control units connected via communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an inside information detection unit 12040, and an integrated control unit 12050.
  • as a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and a vehicle-mounted network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • for example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
  • the body control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body-related control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a head lamp, a back lamp, a brake lamp, a blinker, and a fog lamp.
  • radio waves transmitted from a portable device that substitutes for a key, or signals of various switches, can be input to the body system control unit 12020.
  • the body control unit 12020 receives the input of these radio waves or signals and controls a door lock device, a power window device, a lamp, and the like of the vehicle.
  • Out-of-vehicle information detection unit 12030 detects information external to the vehicle on which vehicle control system 12000 is mounted.
  • an imaging unit 12031 is connected to the outside-of-vehicle information detection unit 12030.
  • the out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
  • the out-of-vehicle information detection unit 12030 may perform an object detection process or a distance detection process of a person, a vehicle, an obstacle, a sign, a character on a road surface, or the like based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of received light.
  • the imaging unit 12031 can output an electric signal as an image or can output the information as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects information in the vehicle.
  • the in-vehicle information detection unit 12040 is connected to, for example, a driver status detection unit 12041 that detects the status of the driver.
  • the driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off, based on the detection information input from the driver state detection unit 12041.
  • the microcomputer 12051 calculates a control target value of the driving force generation device, the steering mechanism, or the braking device based on the information on the inside and outside of the vehicle acquired by the outside information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • for example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following travel based on the inter-vehicle distance, vehicle speed maintaining travel, vehicle collision warning, and vehicle lane departure warning.
  • in addition, the microcomputer 12051 controls the driving force generation device, the steering mechanism, the braking device, and the like based on the information about the surroundings of the vehicle obtained by the outside information detection unit 12030 or the in-vehicle information detection unit 12040, and can thereby perform cooperative control for automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information on the outside of the vehicle acquired by the outside information detection unit 12030.
  • for example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as switching from high beam to low beam, by controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the outside information detection unit 12030.
  • the sound image output unit 12052 transmits at least one of a sound signal and an image signal to an output device capable of visually or audibly notifying a passenger of the vehicle or the outside of the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 21 is a diagram illustrating an example of an installation position of the imaging unit 12031.
  • the image pickup unit 12031 includes image pickup units 12101, 12102, 12103, 12104, and 12105.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door of the vehicle 12100, and an upper portion of a windshield in the vehicle interior.
  • the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided above the windshield in the passenger compartment mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirror mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
  • the imaging unit 12105 provided above the windshield in the passenger compartment is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, and the like.
  • FIG. 21 shows an example of the imaging range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose
  • the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively
  • the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided in the rear bumper or the back door.
  • a bird's-eye view image of the vehicle 12100 viewed from above is obtained by superimposing image data captured by the imaging units 12101 to 12104.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements or an imaging element having pixels for detecting a phase difference.
  • for example, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of that distance (relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object on the traveling path of the vehicle 12100 that travels at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
  • furthermore, the microcomputer 12051 can set in advance an inter-vehicle distance to be maintained from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
  • for example, the microcomputer 12051 classifies three-dimensional object data relating to three-dimensional objects into motorcycles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects based on the distance information obtained from the imaging units 12101 to 12104, extracts them, and uses them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, the microcomputer 12051 outputs a warning to the driver via the audio speaker 12061 or the display unit 12062, or performs forced deceleration and avoidance steering via the drive system control unit 12010, thereby providing driving assistance for collision avoidance.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether the object is a pedestrian.
  • when the microcomputer 12051 determines that a pedestrian exists in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the sound image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. Further, the sound image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
  • the technology according to the present disclosure can be applied to, for example, the imaging unit 12031 in the configuration described above.
  • the imaging device 100 in FIG. 1 can be applied to the imaging unit 12031.
  • the present technology may have the following configurations.
  • (1) A solid-state imaging device comprising: a data output unit that alternately and repeatedly outputs predetermined reset data and signal data corresponding to an exposure amount; a pair of reset data holding units that alternately hold the repeatedly output reset data; a pixel data generation unit that repeats a process of generating, as pixel data, a difference between the reset data held in one of the pair of reset data holding units and the signal data output after the reset data; a pair of pixel data holding units that alternately hold the repeatedly generated pixel data; and an output-side selection unit that alternately selects the pair of pixel data holding units and outputs the pixel data held in the selected pixel data holding unit.
  • (2) The solid-state imaging device according to (1) above, wherein the data output unit starts outputting the reset data while outputting the signal data.
  • (3) The solid-state imaging device according to (1) above, further comprising an input-side selection unit that alternately selects the pair of reset data holding units and supplies the reset data held in the selected reset data holding unit to the pixel data generation unit.
  • (4) The solid-state imaging device according to (1) or (2) above, wherein the pixel data generation unit includes a pair of pixel data generation circuits, one of the pair of pixel data generation circuits generates, as the pixel data, a difference between the reset data held in one of the pair of reset data holding units and the signal data output after the reset data, and outputs the pixel data to one of the pair of pixel data holding units, and the other of the pair of pixel data generation circuits generates, as the pixel data, a difference between the reset data held in the other of the pair of reset data holding units and the signal data output after the reset data, and outputs the pixel data to the other of the pair of pixel data holding units.
  • (5) The solid-state imaging device according to any one of (1) to (4) above, further comprising: a pixel that generates a predetermined reset level and a signal level corresponding to an exposure amount; and an analog-to-digital conversion unit that performs a process of converting the reset level into a digital signal and holding it as the reset data and a process of converting the signal level into a digital signal and holding it as the signal data.
  • (6) The solid-state imaging device according to (5) above, wherein the pixel is disposed on a predetermined light receiving substrate, and the analog-to-digital conversion unit, the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output-side selection unit are disposed on a predetermined circuit board.
  • the pixel is disposed on a predetermined light receiving substrate,
  • the analog-to-digital converter is disposed on a first circuit board,
  • a part of the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output side selection unit are disposed on the first circuit board, and the rest is
  • the solid-state imaging device according to (5) which is disposed on a second circuit board.
  • a data output unit for alternately and repeatedly outputting predetermined reset data and signal data corresponding to the exposure amount;
  • a pair of reset data holding units that alternately hold the repeatedly output reset data,
  • a pixel data generation unit that repeats a process of generating a difference between the reset data held in one of the pair of reset data holding units and the signal data output after the reset data as pixel data,
  • a pair of pixel data holding units that alternately hold the repeatedly generated pixel data,
  • An output-side selection unit that alternately selects the pair of pixel data holding units and outputs the pixel data held in the selected pixel data holding unit;
  • An image pickup apparatus comprising: a signal processing unit that processes the pixel data.
  • the device according to (8) wherein the data output unit, the pair of reset data holding units, the pixel data generation unit, the pair of pixel data holding units, and the output side selection unit are arranged in a solid-state imaging device. Imaging device. (10) The data output unit, the pair of reset data holding units, and the pixel data generation unit are arranged in a solid-state imaging device, The imaging device according to (8), wherein the pair of pixel data holding units and the output side selection unit are arranged outside the solid-state imaging device.
  • (11) a data output procedure for alternately and repeatedly outputting predetermined reset data and signal data corresponding to the exposure amount;
  • Pixel data generation procedure to be repeated An output-side selecting step of alternately selecting a pair of pixel data holding units for alternately holding the repeatedly generated pixel data and outputting the pixel data held in the selected pixel data holding unit.
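The alternating (ping-pong) operation described in configurations (1) to (4) can be sketched in software. This is an illustrative model only, not the patent's implementation; all names (`cds_pipeline`, `reset_hold`, `pixel_hold`) are assumptions introduced for the sketch.

```python
# Hypothetical sketch of the double-buffered CDS flow: two reset data
# holding units and two pixel data holding units are addressed alternately,
# and each pixel datum is the difference between a held reset value and
# the signal value output after it (correlated double sampling).

def cds_pipeline(frames):
    """frames: iterable of (reset_level, signal_level) pairs, as emitted
    alternately and repeatedly by the data output unit."""
    reset_hold = [None, None]   # pair of reset data holding units
    pixel_hold = [None, None]   # pair of pixel data holding units
    out = []
    for i, (reset, signal) in enumerate(frames):
        bank = i % 2                 # input-side selection alternates banks
        reset_hold[bank] = reset     # hold the reset data in the selected unit
        # pixel data generation: signal minus the reset held for this bank
        pixel_hold[bank] = signal - reset_hold[bank]
        # output-side selection reads the bank just written
        out.append(pixel_hold[bank])
    return out
```

For example, `cds_pipeline([(10, 110), (12, 115)])` yields `[100, 103]`: each output is one signal sample with its own reset level subtracted.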
  • 100 Imaging device, 110 Optical unit, 120 Power supply unit
  • 200 Solid-state imaging device, 201 Light receiving substrate, 202 Circuit substrate, 203 Upper circuit substrate, 204 Lower circuit substrate, 210 Pixel array unit, 211 Pixel block, 212 Pixel, 220 DAC, 230 Vertical drive circuit, 240 Timing control circuit, 250 Time code generation unit, 260 AD conversion unit, 261 Data transfer unit, 262 P-phase transfer unit, 263 Flip-flop, 264 Write data transfer unit, 265 D-phase transfer unit, 266 Clock supply unit, 270 Data processing unit, 274, 278, 279 CDS (Correlated Double Sampling) processing unit, 281 Upper data processing unit, 282 Lower data processing unit, 300 Cluster, 311 to 318 Comparison circuit, 320 Left memory, 321 to 328, 331 to 338 Data storage unit, 330 Right memory, 12031 Imaging unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

An object of the present invention is to reduce the amount of wiring in the CDS processing circuit of a solid-state imaging element that performs CDS processing. A data output unit alternately and repeatedly outputs predetermined reset data and signal data corresponding to an exposure amount. A pair of reset data holding units alternately holds the repeatedly output reset data. A pixel data generation unit repeatedly performs a process of generating, as pixel data, the difference between the reset data held in one of the pair of reset data holding units and the signal data output after that reset data. A pair of pixel data holding units alternately holds the repeatedly generated pixel data. An output-side selection unit alternately selects one of the pair of pixel data holding units and outputs the pixel data held in the selected pixel data holding unit.
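The "pair of holding units addressed alternately" described in the abstract can be modeled as a small ping-pong buffer: while one bank is still being read, the next value can already be written into the other. A minimal sketch, with an assumed class name (`PingPong`) and illustrative data:

```python
class PingPong:
    """Two holding units written alternately, modeling the abstract's
    'pair of reset data holding units'. Illustrative model only."""

    def __init__(self):
        self.banks = [None, None]
        self.write_sel = 0  # input-side selection: which bank receives next

    def write(self, value):
        self.banks[self.write_sel] = value
        self.write_sel ^= 1  # alternate to the other holding unit

    def read(self, bank):
        return self.banks[bank]

pp = PingPong()
pp.write("reset_row0")  # held in bank 0
pp.write("reset_row1")  # bank 1 fills while bank 0 remains available
assert pp.read(0) == "reset_row0" and pp.read(1) == "reset_row1"
```

This overlap is what lets output of the next reset data begin while earlier data is still in use, without the two values overwriting each other.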
PCT/JP2019/026210 2018-09-12 2019-07-02 Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de commande d'élément d'imagerie à semi-conducteurs WO2020054184A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-170395 2018-09-12
JP2018170395A JP2020043504A (ja) 2018-09-12 2018-09-12 固体撮像素子、撮像装置、および、固体撮像素子の制御方法

Publications (1)

Publication Number Publication Date
WO2020054184A1 true WO2020054184A1 (fr) 2020-03-19

Family

ID=69777107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/026210 WO2020054184A1 (fr) 2018-09-12 2019-07-02 Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de commande d'élément d'imagerie à semi-conducteurs

Country Status (2)

Country Link
JP (1) JP2020043504A (fr)
WO (1) WO2020054184A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008141610A (ja) * 2006-12-04 2008-06-19 Matsushita Electric Ind Co Ltd 固体撮像装置及び撮像システム
JP2010232804A (ja) * 2009-03-26 2010-10-14 Victor Co Of Japan Ltd 固体撮像素子及びそれを用いた固体撮像装置
WO2018096813A1 (fr) * 2016-11-24 2018-05-31 ソニーセミコンダクタソリューションズ株式会社 Élément de capture d'image à semi-conducteur, dispositif de capture d'image à semi-conducteur, et procédé de commande d'élément de capture d'image à semi-conducteur


Also Published As

Publication number Publication date
JP2020043504A (ja) 2020-03-19

Similar Documents

Publication Publication Date Title
US11653122B2 (en) Solid-state image capturing element with floating diffusion layers processing a signal undergoing pixel addition
JP2020088722A (ja) 固体撮像素子、および、撮像装置
JP2020088480A (ja) 固体撮像素子、および、撮像装置
WO2020066245A1 (fr) Élément d'imagerie à semi-conducteurs et dispositif d'imagerie
TWI842952B (zh) 攝像裝置
TW202101959A (zh) 圖像辨識裝置及圖像辨識方法
US11743449B2 (en) Imaging device and electronic apparatus
WO2020137198A1 (fr) Dispositif de capture d'image et élément de capture d'image à semi-conducteurs
WO2020054184A1 (fr) Élément d'imagerie à semi-conducteurs, dispositif d'imagerie, et procédé de commande d'élément d'imagerie à semi-conducteurs
WO2020255496A1 (fr) Capteur d'image à semi-conducteurs, dispositif d'imagerie et procédé de commande de capteur d'image à semi-conducteurs
CN113661700B (zh) 成像装置与成像方法
JP2023027703A (ja) 撮像装置、電子機器及び情報処理方法
WO2018211985A1 (fr) Élément d'imagerie, procédé de commande d'élément d'imagerie, dispositif d'imagerie, et appareil électronique
WO2019208204A1 (fr) Dispositif d'imagerie
WO2020090459A1 (fr) Dispositif d'imagerie à semi-conducteur et équipement électronique
KR102724360B1 (ko) 촬상 장치 및 전자 기기
JP2020017552A (ja) 固体撮像素子、撮像装置、および、固体撮像素子の制御方法
WO2022113528A1 (fr) Élément d'imagerie à semi-conducteurs, procédé de commande d'élément d'imagerie à semi-conducteurs, et dispositif d'imagerie
CN218888598U (zh) 图像处理设备和车载系统
WO2022209368A1 (fr) Élément d'imagerie à semi-conducteurs et dispositif d'imagerie à semi-conducteurs
WO2020166284A1 (fr) Dispositif de capture d'image
WO2020166160A1 (fr) Élément d'imagerie à semi-conducteurs, dispositif d'imagerie et procédé de commande d'élément d'imagerie à semi-conducteurs
JP2023013368A (ja) 固体撮像素子、撮像装置、および、固体撮像素子の制御方法
WO2024200155A1 (fr) Dispositif et procédé de capteur
TW202416721A (zh) 固態成像裝置及電子裝置

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 19859403
    Country of ref document: EP
    Kind code of ref document: A1
NENP: Non-entry into the national phase
    Ref country code: DE
122 EP: PCT application non-entry in European phase
    Ref document number: 19859403
    Country of ref document: EP
    Kind code of ref document: A1