WO2020195936A1 - Solid-state imaging device and electronic apparatus - Google Patents
Solid-state imaging device and electronic apparatus
- Publication number
- WO2020195936A1 (PCT/JP2020/011056)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- unit
- pixels
- solid
- image sensor
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
- H04N25/76—Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/78—Readout circuits for addressed sensors, e.g. output amplifiers or A/D converters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/223—Analysis of motion using block-matching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/70—SSIS architectures; Circuits associated therewith
Definitions
- the present disclosure relates to a solid-state image sensor and an electronic device, and more particularly to a solid-state image sensor and an electronic device capable of reducing power consumption with a smaller circuit scale by using a pixel parallel ADC method.
- In an image sensor, as an ADC (Analog to Digital Converter) method for converting pixel signals read from the pixels arranged two-dimensionally in a pixel array unit from analog signals to digital signals, a method in which an ADC is arranged in parallel for each vertical column of pixels (hereinafter referred to as a column-parallel ADC method, or column ADC method) is used.
- Further, as an ADC method, a method in which an ADC is provided in each pixel arranged two-dimensionally in a pixel array unit (hereinafter referred to as a pixel-parallel ADC method) is known (see, for example, Patent Document 1).
- When the pixel-parallel ADC method is used as the ADC method, higher-speed imaging is possible than with the column-parallel ADC method, but it is required to reduce power consumption with a smaller circuit scale.
- This disclosure has been made in view of such a situation, and makes it possible to reduce power consumption with a smaller circuit scale by using a pixel parallel ADC method.
- The solid-state image sensor according to one aspect of the present disclosure includes a pixel array unit in which a plurality of pixels are arranged two-dimensionally, each pixel including a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion in the photoelectric conversion unit, and is a solid-state image sensor in which some pixels are thinned out when AD conversion results are read from the plurality of pixels.
- The electronic device according to one aspect of the present disclosure is equipped with a solid-state image sensor that includes a pixel array unit in which a plurality of pixels are arranged two-dimensionally, each pixel including a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion in the photoelectric conversion unit, and in which some pixels are thinned out when AD conversion results are read from the plurality of pixels.
- In one aspect of the present disclosure, a plurality of pixels, each including a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion in the photoelectric conversion unit, are arranged two-dimensionally in a pixel array unit, and some pixels are thinned out when the AD conversion results are read from the plurality of pixels.
- the solid-state image sensor or electronic device on one aspect of the present disclosure may be an independent device or an internal block constituting one device.
- FIG. 1 shows a general motion detection method in an electronic device.
- the electronic device 900 is composed of a column-parallel ADC method (column AD method) image sensor 901, a DSP (Digital Signal Processor) 902, and a frame memory 903. Further, the DSP 902 is provided with a cache memory 911.
- In the electronic device 900, when detecting a movement such as camera shake, the image sensor 901 captures an image larger than the angle of view, and the captured image is supplied to the DSP 902.
- the DSP 902 acquires an image corresponding to the search area while storing the captured image in the frame memory 903, executes processing such as block matching, and detects a motion vector.
- the block matching method for example, there is a representative point matching method.
- In the representative point matching method, the motion vector MV is detected by taking the sum of the differences between the pixel values of the pixels corresponding to the held template block and the pixel values of the pixels corresponding to the matching blocks (MB1, MB2, MB3, MB4, ...) within the search area SA (FIG. 2).
- In the electronic device 900, the motion vector MV is detected by the DSP 902 performing processing such as block matching using the frame memory 903, so that a dedicated circuit is required and memory accesses increase; as a result, the power consumption increases.
- the DSP 902 also requires a cache memory 911 for the matching block.
- the image sensor 901 of the column-parallel ADC system since the image sensor 901 of the column-parallel ADC system has a slow imaging speed, it is necessary to perform imaging by the image sensor 901 and processing such as block matching by the DSP 902 in parallel at the same time.
- the ADC method of the image sensor there is a pixel parallel ADC method in addition to the column parallel ADC method.
- When the pixel-parallel ADC method is used, faster imaging is possible than with the column-parallel ADC method, but it is required to reduce power consumption with a circuit scale smaller than that of the column-parallel ADC method.
- the present technology solves the above-mentioned problems and makes it possible to reduce power consumption with a smaller circuit scale by using the pixel parallel ADC method.
- the present technology proposes a motion detection method capable of reducing power consumption with a smaller circuit scale when motion detection is performed.
- Hereinafter, details of the motion detection method to which the present technology is applied will be described with reference to FIGS. 4 to 18.
- FIG. 4 shows an example of the configuration of a solid-state image sensor to which the present technology is applied.
- the solid-state image sensor 10 is an image sensor such as a back-illuminated CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- The solid-state imaging device 10 includes a pixel array unit 101, a repeater unit 102, a GC generation unit 103, an SRAM (Static RAM) 104, a signal processing unit 105, and a motion determination memory 106.
- a plurality of pixels 111 are arranged two-dimensionally in the pixel array unit 101.
- Each pixel 111 has a photoelectric conversion unit such as a photodiode, a pixel circuit including a pixel transistor, and an AD conversion unit (ADC) that converts a pixel signal output from the pixel circuit from an analog signal to a digital signal.
- a pixel parallel ADC method in which an ADC is provided in each pixel 111 two-dimensionally arranged in the pixel array unit 101 is adopted.
- The reading of the AD conversion result from each pixel 111 is performed via the repeater unit 102, in which a plurality of repeaters 131, each including a shift register in which flip-flops (FF: Flip Flop) are connected in multiple stages, are arranged. Further, this reading is performed in units of one pixel for each rectangular block (pixel block) called a cluster.
- Each pixel 111 is connected on a one-to-one basis to a latch portion (latch circuit) in a cluster 121, and a plurality of latch portions are grouped into one cluster. Further, one cluster 121 is connected to one flip-flop of the shift register of the repeater 131, which is configured as a repeater circuit.
- The repeater unit 102 is composed of a plurality of repeaters 131 arranged in parallel in strips in the vertical direction (the vertical direction in the drawing), and is configured to correspond to an image of one frame.
- a plurality of clusters 121 are vertically arranged so as to have a one-to-one relationship with flip-flops of shift registers arranged vertically in the repeater 131.
- The GC generation unit 103 inputs a Gray code (GC: Gray Code) to the plurality of repeaters 131 arranged in the repeater unit 102.
- the repeater 131 writes (Writes) and reads (Reads) the Gray code.
- FIG. 5 shows an example of the operation of the repeater 131.
- In the repeater 131, the Gray code from the GC generation unit 103 is input from the side opposite to the operating clock (CK) from the clock supply unit 133, is sequentially transferred in a bucket-brigade manner, and is output to the data processing unit 134.
- the Gray code is written to each latch portion, and the latch holding operation according to the inverting output (VCO) from the comparator of the ADC of the pixel 111 is performed.
- The Gray code transferred by the shift register is taken into the latch portion according to the inverting signal (VCO) of the comparator of the ADC of the pixel 111. Then, the latch data (Gray code) held in the latch portion is read out via the repeater 131, converted into a binary code, and correlated double sampling (CDS: Correlated Double Sampling) is performed using the SRAM 104.
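- As a rough, hypothetical illustration of this readout path (not part of the original disclosure), the following Python sketch converts a latched Gray-code word to binary and then applies correlated double sampling by subtracting the reset-phase conversion from the signal-phase conversion; the numeric values are made up for the example.

```python
def gray_to_binary(gray: int) -> int:
    """Convert a Gray-code word, as latched from the repeater, to plain binary."""
    binary = gray
    while gray:
        gray >>= 1
        binary ^= gray
    return binary


def cds(reset_gray: int, signal_gray: int) -> int:
    """Correlated double sampling (CDS): subtract the reset-level (P-phase)
    conversion from the signal-level (D-phase) conversion to cancel the
    pixel's reset offset."""
    return gray_to_binary(signal_gray) - gray_to_binary(reset_gray)


# Hypothetical Gray codes latched at the comparator inversion (VCO)
# for the reset and signal conversions of one pixel.
print(cds(reset_gray=0b0110, signal_gray=0b101101))  # digital pixel value after CDS
```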
- In this way, the AD conversion result of the pixel 111 is once held in the latch portion in the cluster 121, then transferred to the shift register in the repeater 131, and sequentially output to the outside of the repeater unit 102.
- the reading order of the cluster 121 and the latch portion and the pixel 111 to be selected can be arbitrarily changed by controlling the selection signal from the control circuit (not shown).
- The signal processing unit 105 performs predetermined signal processing based on the image obtained from the pixel signals, and outputs the processing result to the outside. For example, when motion detection is performed, the signal processing unit 105 performs signal processing such as block matching calculation. Further, a correlation value for motion determination is recorded in the motion determination memory 106.
- Cluster configuration: Here, the detailed configuration of the cluster 121 will be described with reference to FIGS. 6 and 7.
- FIG. 6 shows pixels 111 arranged two-dimensionally in the pixel array unit 101 in a grid pattern, and one square corresponds to one pixel 111. Further, here, the XY coordinates of the pixels 111 arranged in the pixel array unit 101 are referred to as pixels (n, m), and the XY coordinates of the cluster 121 are referred to as clusters (j, k).
- In the pixel array unit 101, an image for one frame is generated from the pixel signals output from the plurality of pixels 111; the pixels 111 within the range of one cluster correspond, for example, to the squares inside the thick frames of the lower-left cluster (0,0) and the upper-right cluster (j-1, k-1).
- FIG. 7 shows the details of the coordinates of the pixel 111 inside the cluster 121.
- the range of one cluster is 4 ⁇ 32 pixels, that is, 4 pixels in the X direction (horizontal direction) and 32 pixels in the Y direction (vertical direction).
- In FIG. 7, the clusters are drawn with gaps between them so that the numbers of pixels in the row and column directions ("4 pixels" and "32 pixels") can be shown; in reality, such gaps do not exist. Further, in each cluster 121, the coordinates of the pixels 111 within the cluster 121 are shown together with the coordinates of the cluster 121.
- That is, the lower-left cluster (0,0) consists of pixels (0,0) to (0,31), pixels (1,0) to (1,31), pixels (2,0) to (2,31), and pixels (3,0) to (3,31).
- The cluster (1,0) adjacent to the right of cluster (0,0) consists of pixels (4,0) to (4,31), pixels (5,0) to (5,31), pixels (6,0) to (6,31), and pixels (7,0) to (7,31), and the cluster (2,0) adjacent to the right of cluster (1,0) consists of pixels (8,0) to (8,31), pixels (9,0) to (9,31), pixels (10,0) to (10,31), and pixels (11,0) to (11,31).
- The cluster (0,1) adjacent above cluster (0,0) consists of pixels (0,32) to (0,63), pixels (1,32) to (1,63), pixels (2,32) to (2,63), and pixels (3,32) to (3,63).
- The cluster (1,1) adjacent to the right of cluster (0,1) consists of pixels (4,32) to (4,63), pixels (5,32) to (5,63), pixels (6,32) to (6,63), and pixels (7,32) to (7,63), and the cluster (2,1) adjacent to the right of cluster (1,1) consists of pixels (8,32) to (8,63), pixels (9,32) to (9,63), pixels (10,32) to (10,63), and pixels (11,32) to (11,63).
- Similarly, the upper-right cluster (j-1, k-1) consists of pixels (n-4, m-32) to (n-4, m-1), pixels (n-3, m-32) to (n-3, m-1), pixels (n-2, m-32) to (n-2, m-1), and pixels (n-1, m-32) to (n-1, m-1).
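- The pixel-to-cluster mapping enumerated above can be summarized compactly. The following Python sketch is a hypothetical illustration assuming the 4 × 32-pixel cluster size used in this example; the function and variable names are not from the disclosure.

```python
CLUSTER_W, CLUSTER_H = 4, 32  # 4 pixels in X, 32 pixels in Y per cluster


def pixel_to_cluster(x: int, y: int):
    """Return the cluster (j, k) containing pixel (x, y) and the pixel's
    offset inside that cluster."""
    cluster = (x // CLUSTER_W, y // CLUSTER_H)
    offset = (x % CLUSTER_W, y % CLUSTER_H)
    return cluster, offset


# Pixel (8, 32) belongs to cluster (2, 1) with in-cluster offset (0, 0),
# and pixel (3, 31) belongs to cluster (0, 0), matching the enumeration above.
print(pixel_to_cluster(8, 32))  # ((2, 1), (0, 0))
print(pixel_to_cluster(3, 31))  # ((0, 0), (3, 31))
```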
- FIG. 8 shows the configuration of a plurality of repeaters 131 arranged in the repeater unit 102 and a repeater selector 132 provided in the subsequent stage.
- j repeaters # 0 to # j-1 are arranged in parallel in a strip shape in the vertical direction (vertical direction in the drawing) as a repeater circuit.
- FIG. 9 shows the configuration of the cluster 121 inside the repeater 131.
- k clusters (0, 0) to (0, k-1) are arranged in the vertical direction.
- Similarly, in repeater #1, k clusters (1,0) to (1,k-1) are arranged in the vertical direction, and in repeater #2, k clusters (2,0) to (2,k-1) are arranged in the vertical direction. Although not shown because the pattern repeats, k clusters 121 are likewise arranged in the vertical direction in each of the repeaters #3 to #j-1.
- That is, in the repeater unit 102, j repeaters #0 to #j-1 are arranged in parallel in strips in the vertical direction, and in each repeater 131, k clusters 121 are bundled in the vertical direction.
- In the repeater unit 102, when attention is paid to one repeater 131 among the j repeaters #0 to #j-1, the repeater 131 of interest has the following structure.
- a shift register 141 in which a flip-flop 142 as a sequential circuit is connected in multiple stages is provided in the repeater 131 in the vertical direction.
- each of the k flip-flops 142 is connected to each of the k clusters # 0 to # k-1.
- each pixel 111 (ADC) arranged in the pixel array unit 101 is connected to the latch unit in the cluster 121 on a one-to-one basis, and a plurality of latch units are grouped into one cluster. Further, one cluster 121 is connected to one flip-flop 142 constituting the shift register 141 in the repeater 131.
- That is, the repeater unit 102 is composed of a plurality of repeaters 131 arranged in parallel in strips in the vertical direction, and is configured to correspond to an image of one frame.
- the cluster 121 is connected to the flip-flop 142 of the shift register 141 arranged vertically in the repeater 131 on a one-to-one basis, and a plurality of clusters 121 are arranged vertically.
- The repeater selector 132 selects the output of one of the repeaters #0 to #j-1 according to a selection signal from the control circuit (not shown), and outputs it to the subsequent stage.
- For example, when motion detection is performed, the repeater selector 132 selects the output of the repeater 131 according to the horizontal thinning amount and the horizontal coordinate shift in order to prepare the template block and the matching blocks.
- Specifically, the repeater selector 132 selects the outputs of the even-numbered repeaters among the repeaters #0 to #j-1, that is, the outputs of the clusters (0,0) to (0,k-1), the clusters (2,0) to (2,k-1), ..., and the clusters (j-2,0) to (j-2,k-1).
- In the above description, the repeaters 131 in the repeater unit 102 are arranged in parallel in strips in the vertical direction, but the repeaters 131 may be arranged in parallel not only in the vertical direction but also in the horizontal direction. Further, the arrangement of the clusters 121 in the repeater 131 is not limited to the vertical direction, and a plurality of clusters 121 may be arranged in the horizontal and vertical directions. Further, the number of stages and the degree of parallelism of the shift register 141 in the repeater 131 are arbitrary.
- In the representative point matching method, the difference between the pixel value of the representative point of the first image and the pixel value of the sampling point of the second image is calculated for each of a plurality of detection blocks, and the sum of the differences is called the cumulative correlation value (also called the matching error). Then, a plurality of cumulative correlation values are calculated while shifting the relative positions of the first image and the second image, and the shift of the relative positions is detected as the movement between the two blocks.
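- The cumulative correlation value described above is, in effect, a sum of absolute differences over the representative points for each candidate shift. The following NumPy sketch is a simplified, hypothetical software illustration of that search (it is not the circuit implementation of this disclosure); it assumes the tested shifts keep all sampled coordinates inside the second image.

```python
import numpy as np


def representative_point_matching(rep_values, rep_coords, second_image, search=8):
    """rep_values: pixel values at the representative points of the first image.
    rep_coords: (y, x) coordinates of those representative points.
    second_image: the second image as a 2-D array.
    Returns the shift (dy, dx) whose cumulative correlation value (matching
    error) is smallest, i.e. the detected motion vector MV."""
    best_error, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Sum of absolute differences over all detection blocks
            error = sum(
                abs(int(second_image[y + dy, x + dx]) - int(v))
                for (y, x), v in zip(rep_coords, rep_values)
            )
            if best_error is None or error < best_error:
                best_error, best_mv = error, (dy, dx)
    return best_mv
```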
- a template block is prepared in which 8 pixels are thinned out in the horizontal direction and 32 pixels are thinned out in the vertical direction.
- a matching block having the same thinning amount and whose coordinates are shifted by one pixel each in the horizontal direction and the vertical direction is prepared.
- To prepare these blocks, the configuration of the clusters 121 and the repeaters 131 described above can be used.
- For example, if pixel signals are read from pixel (0,0), pixel (8,0), pixel (0,32), and pixel (8,32), located at the lower left of cluster (0,0), cluster (2,0), cluster (0,1), and cluster (2,1), respectively, that is, of the clusters (2j, k), it is possible to prepare the template block and all of the matching blocks.
- the pixel of interest is represented by inverting black and white.
- The pixel signal is read out from the pixel 111 located at the lower left of each of the clusters (2j, k). Then, in the repeater 131, the pixels 111 in each cluster 121 are sequentially selected for each clock, and all the pixels 111 are read out by selecting 4 × 32 times.
- In this way, the pixel signals from the pixels 111 are read out via the shift registers 122 arranged in the vertical direction in each repeater 131 of the repeater unit 102. Further, the reading is performed by selecting the plurality of pixels 111 arranged in each cluster 121 one by one.
- This output is an image in which the pixels 111 are thinned out in the horizontal and vertical directions (a thinned-out image), and corresponds to the pixel signals read out while repeating the phase shift. Then, by adapting this output to a cluster size suitable for the matching block and a pixel selection order (thinning phase order) suitable for the search method, motion detection using the representative point matching method is performed.
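- A hypothetical sketch of the phase-shifted thinning described above is shown below: each thinned-out image keeps one pixel per block of the thinning size, and successive readouts shift the in-block phase by one pixel. The thinning amounts and array sizes are only examples.

```python
import numpy as np


def thinned_image(frame: np.ndarray, phase=(0, 0), step=(8, 32)) -> np.ndarray:
    """Return the thinned-out image that keeps the pixel at offset `phase`
    (x, y) inside every step[0] x step[1] block of `frame` (rows = y, cols = x)."""
    px, py = phase
    sx, sy = step
    return frame[py::sy, px::sx]


# Time-division readout with shifting phase: phases (0,0), (0,1), (1,0), ...
# yield the thinned-out images TI_1, TI_2, TI_i, each shifted by one pixel.
frame = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)  # dummy captured frame
ti_1 = thinned_image(frame, phase=(0, 0))
ti_2 = thinned_image(frame, phase=(0, 1))
```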
- FIG. 11 shows an example of the output of the thinned out template block and the matching block.
- The circular symbols (○) in the drawing represent the pixels 111 arranged two-dimensionally in the pixel array unit 101, and the plurality of circular symbols arranged in the horizontal and vertical directions represent a part of the total area in which the pixels 111 are arranged.
- Patterns are attached to the circular symbols representing the pixels 111, and circular symbols having the same pattern indicate pixels 111 that are arranged in different clusters but have the same phase. Specifically, for example, the upper-left pixel 111 in the figure can be expressed as phase (0,0), the pixel 111 adjacent below it as phase (0,1), the pixel 111 adjacent to its right as phase (1,0), and so on.
- Here, three pixels 111 whose phases are shifted within the same cluster 121 are illustrated by three types of patterns, but the other pixels 111 in the same cluster 121 are also read out while the phase shift is repeated.
- the direction from the left side to the right side in the figure is the direction of time
- the rectangular symbols in the figure represent the thinned images obtained from the pixel signals read while performing the phase shift in chronological order.
- three types of patterns are attached to the rectangular symbols, and these patterns correspond to the patterns attached to the circular symbols shown in FIG.
- That is, the first thinned-out image TI1 is an image obtained from the pixel signals of the pixels 111 of phase (0,0), the second thinned-out image TI2 is an image obtained from the pixel signals of the pixels 111 of phase (0,1), and the i-th thinned-out image TIi is an image obtained from the pixel signals of the pixels 111 of phase (1,0).
- FIG. 13 shows an example of the regions of the thinned-out images TI 1 , TI 2 , and TI i in the search area SA.
- the thinned-out image TI 1 is a region corresponding to the pixel 111 corresponding to the phase (0,0).
- the thinned-out image TI 2 is a region corresponding to the pixel 111 corresponding to the phase (0, 1), and is a region shifted downward by one pixel with respect to the region of the thinned-out image TI 1 .
- the thinned image TI i is a region corresponding to the pixel 111 corresponding to the phase (1, 0), and is a region shifted to the right by one pixel with respect to the region of the thinned image TI 1 .
- In this way, the thinned-out images TI can be repeatedly read out while shifting the phase. Therefore, for example, it is possible to prepare a template block thinned out by 8 pixels in the horizontal direction and 32 pixels in the vertical direction, and matching blocks with the same thinning amount whose coordinates are shifted by one pixel each in the horizontal and vertical directions.
- Further, since the pixel-parallel ADC method is used, the imaging process including AD conversion can be performed 100 times faster than when the column-parallel ADC method is used. Therefore, it is possible to realize a frame rate of, for example, 10,000 fps by reading out the pixel signals for the matching blocks in a time-division manner while shifting the phase.
- The signal processing unit 105 holds the pixel signals of the pixels 111 corresponding to the template block in memory, and obtains a correlation value by taking the sum of the differences between the pixel values of the pixels 111 corresponding to each sequentially read matching block and the pixel values of the pixels 111 corresponding to the held template block.
- the signal processing unit 105 detects the deviation of the coordinates of the matching block having a strong correlation as the motion vector MV.
- a correlation value for motion determination is used, and the smaller the absolute value of the sum of differences, the stronger the correlation.
- the motion detection cycle (frequency) can be changed by changing the frame rate of imaging or pausing block matching.
- the electronic device includes, for example, an imaging device such as a smartphone, a digital still camera, or a digital video camera.
- the signal processing unit 105 holds the thinned image TI including the pattern of the specific shape as a template block (S11), and sequentially acquires the thinned image TI sequentially read from the search area SA as a matching block (S12).
- the signal processing unit 105 detects the motion by summing the difference between the pixel value of each pixel 111 corresponding to the held template block and the pixel value of each pixel 111 corresponding to the matching block acquired sequentially.
- FIG. 15 shows an example of the effective pixel region after motion correction.
- In FIG. 15, the pixels 111 arranged two-dimensionally in the pixel array unit 101 are represented in a grid pattern, and the entire grid-like region is the imageable pixel region IA. Further, a region smaller than the imageable pixel region IA is defined as the effective pixel region EA. That is, the solid-state image sensor 10 captures an image larger than the angle of view in order to detect motion.
- For example, when the motion vector MV (the amount of movement in the horizontal and vertical directions) detected by the signal processing unit 105 indicates that the subject has moved toward the upper right in the drawing due to camera shake, the read start coordinates for the next imaging are changed from pixel (0,0) to pixel (4,4). As a result, the effective pixel area EA is also shifted toward the upper right in response to the movement of the subject caused by camera shake, so that the subject remains within the effective pixel area EA and camera shake correction is realized.
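- As a hedged illustration of this correction step (the names and region sizes below are hypothetical, not taken from the disclosure), the sketch shifts the read-start coordinates of the effective pixel area EA within the larger imageable pixel area IA according to the detected motion vector.

```python
def shifted_read_start(mv, start=(0, 0), effective=(1920, 1080), imageable=(1984, 1144)):
    """Return new read-start coordinates for the effective pixel area EA,
    shifted by motion vector mv = (dx, dy) and clamped so that EA stays
    inside the imageable pixel area IA."""
    dx, dy = mv
    x = min(max(start[0] + dx, 0), imageable[0] - effective[0])
    y = min(max(start[1] + dy, 0), imageable[1] - effective[1])
    return x, y


# A detected camera-shake motion of (+4, +4) pixels moves the read start
# from pixel (0, 0) to pixel (4, 4), as in the example of FIG. 15.
print(shifted_read_start((4, 4)))  # (4, 4)
```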
- As described above, in the solid-state imaging device 10, the pixel-parallel ADC method is adopted as the ADC method, and when the pixel signals (AD conversion results) from the pixels 111 (ADCs) arranged in the pixel array unit 101 are read out, the configuration of the clusters 121 and the repeaters 131 can be used in the same manner as when the thinned-out images for block matching are read out in a time-division manner.
- the solid-state image sensor 10 itself can be treated as if it were a frame memory when the template block and the matching block are prepared.
- Therefore, in the solid-state image sensor 10, it is not necessary for a DSP 902 to perform processing such as block matching using a dedicated frame memory 903 (DRAM, SRAM, etc.) as in the general motion detection method (FIGS. 1 to 3). Further, in the solid-state image sensor 10, it is not necessary to provide a dedicated circuit, and the power consumption does not increase. That is, it is possible to provide a structure and an algorithm that realize motion detection by block matching at low cost in the solid-state image sensor 10.
- the solid-state image sensor 10 employs the pixel parallel ADC method as the ADC method, it is advantageous when a large number of thinned-out images having different phases are required.
- Further, since the solid-state image sensor 10 employs the pixel-parallel ADC method and performs time-division readout at ultra-high speed, it is possible to acquire a captured image, in addition to the thinned-out images for block matching, in the imaging performed after the thinned-out readout.
- Further, since the shutter is an all-pixel simultaneous shutter that operates at a high frame rate, motion detection with few erroneous determinations is performed with almost no influence of focal plane distortion. This makes it possible to perform highly accurate motion correction.
- the pixel parallel ADC method can be used to reduce the power consumption with a smaller circuit scale.
- motion detection with reduced power consumption can be realized with a smaller circuit scale.
- FIG. 16 shows an example of the three-dimensional structure of the solid-state image sensor 10.
- The solid-state image sensor 10 has a structure in which a pixel substrate 100A, on which light is incident from the back surface side, and a logic substrate 100B, which is responsible for signal processing, are bonded together. At least the pixel array unit 101 and a DAC (Digital to Analog Converter) 107 are formed on the pixel substrate 100A. At least a latch repeater unit 108 and a logic calculation unit 109 are formed on the logic substrate 100B.
- a plurality of pixels 111 are arranged two-dimensionally in the pixel array unit 101.
- One single-slope ADC 151 is provided for each pixel 111, and the inverted output (VCO) of its comparator is connected to one latch circuit 152 provided in the latch repeater unit 108 (FIG. 17).
- the latch repeater unit 108 includes a repeater unit 102.
- That is, pairs of an ADC 151 and a latch circuit 152 are provided in n × m pairs, and the Gray code from the repeater unit 102 (repeater 131) is taken into the latch circuit 152 according to the inversion signal (VCO) of the comparator of the ADC 151.
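- The conversion performed by each ADC 151 / latch circuit 152 pair can be modelled behaviourally as follows; this Python sketch is a hypothetical model (parameter values are invented) in which the comparator output inverts when the ramp reference crosses the pixel signal and the Gray code present at that moment is latched.

```python
def to_gray(n: int) -> int:
    """Binary-to-Gray conversion, standing in for the code distributed
    by the GC generation unit."""
    return n ^ (n >> 1)


def single_slope_adc(sig: float, ramp_start: float, ramp_step: float, n_steps: int) -> int:
    """Behavioural model of one pixel ADC: the reference ramp (REF) sweeps
    downward; when REF crosses the pixel signal (SIG) the comparator inverts
    (VCO), and the Gray code for the elapsed step count is latched."""
    ref = ramp_start
    for count in range(n_steps):
        if ref <= sig:             # comparator inversion (VCO becomes active)
            return to_gray(count)  # value captured by the latch circuit
        ref -= ramp_step
    return to_gray(n_steps - 1)    # full scale if the ramp never crosses SIG


print(single_slope_adc(sig=0.35, ramp_start=1.0, ramp_step=0.01, n_steps=128))
```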
- FIG. 18 shows an example of the circuit configuration of the main part of the solid-state image sensor 10.
- the pixel 111 includes a pixel circuit 161, a differential amplifier circuit 162, a positive feedback circuit (PFB: Positive Feedback) 163, and a multiplex circuit (MUX: Multiplexer) 164. Further, in the pixel 111, the comparator 160 is composed of the differential amplifier circuit 162 and the positive feedback circuit 163. The comparator 160 constitutes a part of the ADC 151.
- The pixel circuit 161 includes a photodiode 171 as a photoelectric conversion unit, a transfer transistor 172, a reset transistor 173, an FD (Floating Diffusion) 174, and a discharge transistor 175.
- the transfer transistor 172 transfers the electric charge generated by the photodiode 171 to the FD174.
- the reset transistor 173 resets the electric charge held in the FD174.
- the FD174 is connected to the gate of the transistor 177 of the differential amplifier circuit 162.
- the transistor 177 of the differential amplifier circuit 162 also functions as an amplifier transistor of the pixel circuit 161.
- the discharge transistor 175 discharges the electric charge accumulated in the photodiode 171.
- The differential amplifier circuit 162 includes transistors 181 and 182 as a differential pair, transistors 183 and 184 forming a current mirror, and a transistor 185 as a constant current source that supplies a current corresponding to an input bias current (Vb).
- A reference signal (REF) output from the DAC 107 (FIG. 16) is input to the gate of the transistor 181, and the pixel signal (SIG) output from the pixel circuit 161 in the pixel 111 is input to the gate of the transistor 182.
- In the differential amplifier circuit 162, the reference signal (REF) input to the gate of the transistor 181 and the pixel signal (SIG) input to the gate of the transistor 182 are compared, and an output signal (VCO) corresponding to the comparison result is output.
- the positive feedback circuit 163 includes transistors 191 to 193 and a NOR circuit 194. Further, the NOR circuit 194 is configured to include transistors 195 to 198.
- the connection point between the drain of the transistor 182 and the drain of the transistor 184 is the output end of the differential amplifier circuit 162, and is connected to the drain of the transistor 191 in the positive feedback circuit 163 via the transistors 186 and 187.
- the output signal (VCO) output from the differential amplifier circuit 162 is input to the NOR circuit 194 in the positive feedback circuit 163, and is output as an inverted signal (VCO) of the comparator 160.
- the inverting signal (VCO) from the comparator 160 (positive feedback circuit 163) and the control signal WORD are input to the multiplex circuit 164.
- the inversion signal (VCO) from the comparator 160 is output to the latch circuit 152 in the latch repeater unit 108 by controlling the control signal WORD.
- the latch repeater unit 108 is composed of a repeater 131 and a latch circuit 152.
- Each pixel 111 (ADC 151) is connected to the latch circuit 152 in the cluster 121 on a one-to-one basis, and a plurality of latch circuits 152 are grouped into one cluster. Further, one cluster 121 is connected to one flip-flop 142 of the shift register 141 of the repeater 131 (FIG. 10).
- The AD conversion result from each pixel 111 (ADC 151) is held by the latch circuit 152 in the cluster 121, transferred to the shift register 141 in the repeater 131, and sequentially output to the outside of the latch repeater unit 108.
- In the above description, the case where the thinned-out image obtained by utilizing the configuration of the clusters 121 and the repeaters 131 is used for motion detection in the solid-state imaging device 10 has been illustrated, but the thinned-out image may be used for various other purposes, for example, displaying a display image corresponding to the thinned-out image on the display unit (for example, the display unit 1015 of FIG. 19) or storing the thinned-out image (data) in the storage unit (for example, the storage unit 1016 of FIG. 19).
- Further, the case where the representative point matching method is used as the method for detecting motion between images by image processing has been illustrated, but any other detection method may be used as long as motion can be detected using the thinned-out images obtained by utilizing the configuration of the clusters 121 and the repeaters 131.
- FIG. 19 shows a configuration example of an electronic device equipped with a solid-state image sensor to which the present technology is applied.
- the electronic device 1000 is, for example, an electronic device having an imaging function such as an imaging device such as a digital still camera or a video camera, or a mobile terminal device such as a smartphone or a tablet terminal.
- the electronic device 1000 includes a lens unit 1011, a solid-state imaging device 1012, a signal processing unit 1013, a control unit 1014, a display unit 1015, a storage unit 1016, an operation unit 1017, a communication unit 1018, and a power supply unit 1019. Further, in the electronic device 1000, the signal processing unit 1013 to the power supply unit 1019 are connected to each other via the bus 1021.
- the lens unit 1011 is composed of a zoom lens, a focus lens, and the like, and collects light from the subject.
- the light (subject light) focused by the lens unit 1011 is incident on the solid-state image sensor 1012.
- the solid-state image sensor 1012 is a solid-state image sensor to which the present technology is applied (for example, the solid-state image sensor 10 described above).
- the solid-state imaging device 1012 performs photoelectric conversion of the light (subject light) received through the lens unit 1011 and AD-converts the pixel signal obtained as a result, and supplies the signal obtained as a result to the signal processing unit 1013.
- the signal processing unit 1013 is composed of a signal processing circuit such as a DSP (Digital Signal Processor) circuit, and performs signal processing on a signal supplied from the solid-state imaging device 1012. For example, the signal processing unit 1013 generates image data of a still image or a moving image by performing signal processing on the signal from the solid-state imaging device 1012, and supplies the image data to the display unit 1015 or the storage unit 1016.
- the control unit 1014 is configured as, for example, a CPU (Central Processing Unit), a microprocessor, an FPGA (Field Programmable Gate Array), or the like.
- the control unit 1014 controls the operation of each unit of the electronic device 1000.
- the display unit 1015 is configured as a display device such as a liquid crystal panel or an organic EL (Electro Luminescence) panel.
- the display unit 1015 displays a still image or a moving image according to the image data supplied from the signal processing unit 1013.
- the storage unit 1016 is configured as a recording medium such as a semiconductor memory or a hard disk, for example.
- the storage unit 1016 records the image data supplied from the signal processing unit 1013. Further, the storage unit 1016 supplies the recorded image data according to the control from the control unit 1014.
- the operation unit 1017 is configured as a touch panel in combination with the display unit 1015 in addition to the physical buttons, for example.
- the operation unit 1017 outputs operation commands for various functions of the electronic device 1000 in response to an operation by the user.
- the control unit 1014 controls the operation of each unit based on the operation command supplied from the operation unit 1017.
- the communication unit 1018 is configured as, for example, a communication interface circuit.
- the communication unit 1018 exchanges data with an external device by wireless communication or wired communication according to a predetermined communication method.
- The power supply unit 1019 appropriately supplies various types of power serving as operating power to each of the supply targets from the signal processing unit 1013 to the communication unit 1018.
- the electronic device 1000 is configured as described above.
- This technology is applied to the solid-state image sensor 1012 as described above.
- By applying the present technology to the solid-state imaging device 1012, in the electronic device 1000, the solid-state imaging device 1012 can be operated with a smaller circuit scale and lower power consumption, and, for example, camera shake correction using motion detection can be realized.
- FIG. 20 is a diagram showing a usage example of a solid-state image sensor to which the present technology is applied.
- The solid-state image sensor 10 can be used in various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays, as shown below. That is, as shown in FIG. 20, the solid-state imaging device 10 can be used not only in the field of appreciation, in which images are taken for viewing, but also in devices used in, for example, the fields of transportation, home appliances, medical care and healthcare, security, beauty, sports, agriculture, and the like.
- For example, in the field of transportation, the solid-state imaging device 10 can be used in devices for traffic use, such as surveillance cameras and distance measuring sensors that measure the distance between vehicles.
- In the field of home appliances, the solid-state imaging device 10 can be used in devices such as television receivers, refrigerators, and air conditioners, in order to photograph a user's gestures and operate the devices according to those gestures.
- In the field of medical care and healthcare, the solid-state imaging device 10 can be used in devices such as endoscopes and devices that perform angiography by receiving infrared light.
- the solid-state image sensor 10 can be used in a device used for security such as a surveillance camera for crime prevention and a camera for personal authentication. Further, in the field of beauty, the solid-state image sensor 10 can be used in a device used for beauty such as a skin measuring device for photographing the skin and a microscope for photographing the scalp.
- the solid-state image sensor 10 can be used in a device used for sports, such as an action camera or a wearable camera for sports applications. Further, in the field of agriculture, the solid-state image sensor 10 can be used in a device used for agriculture, such as a camera for monitoring the state of a field or a crop.
- the technology related to this disclosure can be applied to various products.
- The technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
- FIG. 21 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a moving body control system to which the technique according to the present disclosure can be applied.
- the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
- the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
- a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
- the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
- For example, the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
- the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
- the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
- Radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
- the body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
- the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
- an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
- the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
- The vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like, based on the received image.
- the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
- the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
- the in-vehicle information detection unit 12040 detects the in-vehicle information.
- a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040.
- The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing.
- The microcomputer 12051 can calculate a control target value of the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
- For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, follow-up driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
- Further, the microcomputer 12051 can perform cooperative control for the purpose of automatic driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
- the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
- For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030, and can perform cooperative control for the purpose of preventing glare, such as switching from high beam to low beam.
- The audio image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to the passengers of the vehicle or to the outside of the vehicle.
- an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
- the display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
- FIG. 22 is a diagram showing an example of the installation position of the imaging unit 12031.
- the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, 12105 as imaging units 12031.
- the imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as, for example, the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100.
- the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
- the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
- the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
- the images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
- FIG. 22 shows an example of the photographing range of the imaging units 12101 to 12104.
- The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
- At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
- at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
- For example, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative velocity with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the traveling path of the vehicle 12100 and that travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more).
- Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured in front of the preceding vehicle, and can perform automatic braking control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automatic driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
- For example, the microcomputer 12051 can classify three-dimensional object data related to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects based on the distance information obtained from the imaging units 12101 to 12104, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. Then, the microcomputer 12051 determines the collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of a collision, the microcomputer 12051 can provide driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
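- The decision step described above can be sketched as a simple threshold check; the following minimal Python illustration uses hypothetical names and action strings and is not part of the vehicle control system itself.

```python
def collision_avoidance_support(collision_risk: float, risk_setpoint: float) -> list:
    """When the collision risk is at or above the set value, request a driver
    warning and, for avoidance, deceleration and steering support."""
    actions = []
    if collision_risk >= risk_setpoint:
        actions.append("warn driver via audio speaker 12061 / display unit 12062")
        actions.append("forced deceleration via drive system control unit 12010")
        actions.append("avoidance steering via drive system control unit 12010")
    return actions


print(collision_avoidance_support(collision_risk=0.8, risk_setpoint=0.5))
```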
- At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
- the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
- Such pedestrian recognition is performed by, for example, a procedure for extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure for performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not the object is a pedestrian.
- When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a square contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
- the above is an example of a vehicle control system to which the technology according to the present disclosure can be applied.
- the technique according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
- the solid-state image sensor 10 of FIG. 4 can be applied to the image pickup unit 12031.
- this technology can have the following configuration.
- the pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit.
- a solid-state image sensor in which some pixels are thinned out when reading AD conversion results from the plurality of pixels.
- The solid-state image sensor according to (1) or (2) above, further comprising a sequential circuit for reading and a reading unit for reading the AD conversion results in cluster units corresponding to pixel blocks.
- the sequential circuit includes a flip-flop.
- the read unit includes a plurality of repeater circuits including a shift register to which the flip-flops are connected in a number corresponding to the number of stages of the cluster.
- the solid-state image sensor according to (4), wherein the repeater circuit reads out the AD conversion result from the pixels in the cluster that are sequentially selected at a predetermined timing.
- the reading unit further includes a number of latch circuits corresponding to the plurality of pixels.
- the AD conversion unit included in the pixel and the latch circuit are connected in pairs.
- the reading unit further includes a selector for selecting the repeater circuit according to the thinning amount from the plurality of repeater circuits.
- the signal processing unit detects motion based on the plurality of thinned-out images.
- the motion detection includes motion detection using a representative point matching method.
- The signal processing unit holds a specific thinned-out image as a template block,
- the AD conversion results are read from the plurality of pixels arranged in the pixel array unit based on the motion vector detected in the first imaging.
- the solid-state imaging device according to any one of (1) to (10), wherein in the pixel array unit, the plurality of pixels arranged in a two-dimensional manner are regularly thinned out in pixel block units. (12) It is equipped with a pixel array unit in which a plurality of pixels are arranged two-dimensionally.
- the pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit.
- the pixel array unit is an electronic device equipped with a solid-state image sensor in which some pixels are thinned out when reading AD conversion results from the plurality of pixels.
- 10 solid-state imaging device, 100A pixel substrate, 100B logic substrate, 101 pixel array unit, 102 repeater unit, 103 GC generation unit, 104 SRAM, 105 signal processing unit, 106 motion determination memory, 107 DAC, 108 latch repeater unit, 111 pixel, 121 cluster, 131 repeater, 132 repeater selector, 141 shift register, 142 flip-flop, 151 ADC, 152 latch circuit, 161 pixel circuit, 162 differential amplifier circuit, 163 positive feedback circuit, 164 multiplexer circuit, 1000 electronic apparatus, 1012 solid-state imaging device
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure relates to a solid-state imaging device that can reduce power consumption with a smaller circuit scale by using a pixel-parallel ADC method, and to an electronic apparatus. Provided is a solid-state imaging device including a pixel array unit in which a plurality of pixels are arranged two-dimensionally. Each pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion performed by the photoelectric conversion unit. In the pixel array unit, some of the pixels are thinned out when AD conversion results are read from the plurality of pixels. The present technology can be applied, for example, to image sensors that use a pixel-parallel ADC method.
Description
The present disclosure relates to a solid-state imaging device and an electronic apparatus, and more particularly to a solid-state imaging device and an electronic apparatus that can reduce power consumption with a smaller circuit scale by using a pixel-parallel ADC method.
In image sensors, a scheme in which an ADC (Analog to Digital Converter) is arranged in parallel for each vertical column of pixels (hereinafter referred to as the column-parallel ADC method, or column ADC method) is used to convert the pixel signals read from the pixels arranged two-dimensionally in a pixel array unit from analog signals to digital signals.
As another ADC scheme, a method in which an ADC is provided in each of the pixels arranged two-dimensionally in the pixel array unit (hereinafter referred to as the pixel-parallel ADC method) is also known (see, for example, Patent Document 1).
When the pixel-parallel ADC method is used in an image sensor, higher-speed imaging becomes possible compared with the column-parallel ADC method, but it is still required to reduce power consumption with a smaller circuit scale.
The present disclosure has been made in view of such circumstances, and makes it possible to reduce power consumption with a smaller circuit scale by using the pixel-parallel ADC method.
A solid-state imaging device according to one aspect of the present disclosure includes a pixel array unit in which a plurality of pixels are arranged two-dimensionally, each pixel including a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit, and in the pixel array unit, some pixels are thinned out when AD conversion results are read from the plurality of pixels.
An electronic apparatus according to one aspect of the present disclosure is equipped with a solid-state imaging device that includes a pixel array unit in which a plurality of pixels are arranged two-dimensionally, each pixel including a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit, and in which some pixels are thinned out when AD conversion results are read from the plurality of pixels.
In the solid-state imaging device and the electronic apparatus according to one aspect of the present disclosure, some pixels are thinned out when AD conversion results are read from the plurality of pixels in the pixel array unit, in which the pixels, each including a photoelectric conversion unit and an AD conversion unit that AD-converts the pixel signal obtained by photoelectric conversion by the photoelectric conversion unit, are arranged two-dimensionally.
The solid-state imaging device or the electronic apparatus according to one aspect of the present disclosure may be an independent device or an internal block constituting a single device.
Hereinafter, embodiments of the technology according to the present disclosure (the present technology) will be described with reference to the drawings. The description will be given in the following order.
1. Embodiment of the present technology
2. Modification examples
3. Configuration of an electronic apparatus
4. Usage examples of the solid-state imaging device
5. Application examples to moving bodies
<1. Embodiment of the present technology>
FIG. 1 shows a general motion detection method in an electronic device.
In FIG. 1, the electronic device 900 is composed of a column-parallel ADC (column AD) image sensor 901, a DSP (Digital Signal Processor) 902, and a frame memory 903. The DSP 902 is further provided with a cache memory 911.
In the electronic device 900, when detecting a motion such as camera shake, the image sensor 901 captures an image larger than the angle of view and supplies the captured image to the DSP 902. While storing the captured image in the frame memory 903, the DSP 902 acquires an image corresponding to a search area, executes processing such as block matching, and detects a motion vector.
Here, an example of the block matching method is the representative point matching method. In the representative point matching method, the motion vector MV is detected within the search area SA by, for example, taking the sum of differences between the pixel values of the pixels corresponding to a held template block and the pixel values of the pixels corresponding to each matching block (MB1, MB2, MB3, MB4, ...) (FIG. 2).
As described above, in the general motion detection method, the motion vector MV is detected by the DSP 902 performing processing such as block matching using the frame memory 903, so a dedicated circuit is required and power consumption increases as memory accesses increase. In addition to the frame memory 903, the DSP 902 also requires the cache memory 911 for the matching blocks.
In addition, since the imaging speed of the column-parallel ADC image sensor 901 is slow, the imaging by the image sensor 901 and the processing such as block matching by the DSP 902 must be performed simultaneously in parallel.
For this reason, when performing the block matching processing, it is common practice to reduce the processing load by using an image obtained by thinning out pixels in the horizontal and vertical directions in the image sensor 901 (a thinned-out image). Specifically, as shown in FIG. 3, when the pixels arranged two-dimensionally in the pixel array unit of the image sensor 901 are represented by circular symbols, the pixels represented by hatched circles are thinned out.
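As a rough illustration of this kind of decimation, the sketch below thins out a two-dimensional pixel array by fixed horizontal and vertical steps. The step sizes, array shape, and function name are arbitrary values chosen for the example, not taken from the embodiment.

```python
import numpy as np

def thin_out(frame: np.ndarray, h_step: int, v_step: int) -> np.ndarray:
    """Keep one pixel every h_step columns and every v_step rows (simple decimation)."""
    return frame[::v_step, ::h_step]

# Example: a 64x64 frame thinned by 8 horizontally and 4 vertically.
frame = np.arange(64 * 64, dtype=np.uint16).reshape(64, 64)
thinned = thin_out(frame, h_step=8, v_step=4)
print(thinned.shape)  # (16, 8)
```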
Here, in addition to the column-parallel ADC method, there is the pixel-parallel ADC method as an ADC scheme for image sensors. When an image sensor uses the pixel-parallel ADC method, higher-speed imaging is possible than with the column-parallel ADC method, but, as with the column-parallel ADC method, it is required to reduce power consumption with a smaller circuit scale.
The present technology solves the above-described problems and makes it possible to reduce power consumption with a smaller circuit scale by using the pixel-parallel ADC method. In particular, the present technology proposes a motion detection method that can reduce power consumption with a smaller circuit scale when motion detection is performed. Details of the motion detection method to which the present technology is applied will be described below with reference to FIGS. 4 to 18.
(Configuration of the solid-state imaging device)
FIG. 4 shows an example of the configuration of a solid-state imaging device to which the present technology is applied.
The solid-state imaging device 10 is an image sensor such as a back-illuminated CMOS (Complementary Metal Oxide Semiconductor) image sensor.
In FIG. 4, the solid-state imaging device 10 includes a pixel array unit 101, a repeater unit 102, a GC generation unit 103, an SRAM (Static RAM) 104, a signal processing unit 105, and a motion determination memory 106.
A plurality of pixels 111 are arranged two-dimensionally in the pixel array unit 101. Each pixel 111 has a photoelectric conversion unit such as a photodiode, a pixel circuit including pixel transistors, and an AD conversion unit (ADC) that converts the pixel signal output from the pixel circuit from an analog signal to a digital signal.
That is, the solid-state imaging device 10 adopts, as its ADC scheme, the pixel-parallel ADC method in which an ADC is provided in each of the pixels 111 arranged two-dimensionally in the pixel array unit 101.
Here, the AD conversion results are read from (the ADC of) each pixel 111 via the repeater unit 102, in which a plurality of repeaters 131, each including a shift register formed by connecting flip-flops (FF) in multiple stages, are arranged. The readout is performed one pixel at a time for each rectangular block of pixels (pixel block) called a cluster.
That is, (the ADC of) each pixel 111 is connected one-to-one to a latch unit (latch circuit) in a cluster 121, and a plurality of latch units are grouped into one cluster. One cluster 121 is connected to one flip-flop of the shift register of a repeater 131 configured as a repeater circuit.
The repeater unit 102 consists of a plurality of repeaters 131 arranged in parallel as strips in the vertical direction (the vertical direction in the figure) and is configured to correspond to one frame of an image. A plurality of clusters 121 are arranged in the vertical direction so as to have a one-to-one relationship with the flip-flops of the shift register arranged vertically in the repeater 131.
The GC generation unit 103 inputs a Gray code (GC) to the plurality of repeaters 131 arranged in the repeater unit 102. Each repeater 131 writes and reads the Gray code. FIG. 5 shows an example of the operation of the repeater 131.
In FIG. 5, the Gray code from the GC generation unit 103 is input to the repeater 131 from the side opposite to the operating clock (CK) from the clock supply unit 133, and is sequentially transferred in a bucket-brigade manner and output to the data processing unit 134. As a result, the Gray code is written to each latch unit, and a latch-and-hold operation is performed in response to the inverted output (VCO) of the comparator in the ADC of the pixel 111.
That is, in the repeater 131, the Gray code transferred through the shift register is captured into the latch unit in response to the inverted signal (VCO) of the comparator in the ADC of the pixel 111. The latch data (Gray code) held in the latch unit is then read out via the repeater 131, converted into a binary code, and subjected to correlated double sampling (CDS) using the SRAM 104.
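The conversion from the latched Gray code to a binary value and the subsequent CDS subtraction can be illustrated with the following minimal sketch. It is a software model of the two operations named above, not the device's actual logic; the assumption that CDS takes the signal-level conversion minus the reset-level conversion is made for the example.

```python
def gray_to_binary(gray: int) -> int:
    """Convert a Gray-coded counter value to plain binary (b = g ^ (g >> 1) ^ (g >> 2) ^ ...)."""
    binary = gray
    gray >>= 1
    while gray:
        binary ^= gray
        gray >>= 1
    return binary

def cds(reset_code: int, signal_code: int) -> int:
    """Correlated double sampling: difference of the signal-level and reset-level conversions."""
    return gray_to_binary(signal_code) - gray_to_binary(reset_code)
```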
In this way, the AD conversion result of the pixel 111 is once held in the latch unit in the cluster 121, then transferred to the shift register in the repeater 131, and sequentially output to the outside of the repeater unit 102. The readout order of the clusters 121 and the latch units, and the pixel 111 to be selected, can be changed arbitrarily by controlling a selection signal from a control circuit (not shown).
In the solid-state imaging device 10, only one AD conversion result is selected from the plurality of latch units as the output of each cluster 121, and when the AD conversion results of all the clusters 121 have been output via the shift registers, the first readout pass is complete. Subsequently, an AD conversion result is selected from another latch unit, and when readout has been repeated as many times as there are latch units in a cluster 121, all the pixel signals of one frame of the image have been read out.
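A minimal sketch of this pass-based readout order is given below: in each pass one latch (pixel) position is selected per cluster, and one value per cluster is pushed out, so a full frame is complete after 4 x 32 passes for the cluster size of FIG. 7. The data container and the traversal order of clusters within a pass are simplifications assumed for the example.

```python
CLUSTER_W, CLUSTER_H = 4, 32  # cluster size per FIG. 7

def read_frame(latches, clusters_x, clusters_y):
    """latches[(cx, cy)][(px, py)] holds the AD result of one pixel;
    returns (cluster, intra-cluster position, value) tuples in readout order."""
    out = []
    for py in range(CLUSTER_H):              # one latch position per pass
        for px in range(CLUSTER_W):
            for cy in range(clusters_y):     # one value per cluster per pass,
                for cx in range(clusters_x): # shifted out through the repeaters
                    out.append(((cx, cy), (px, py), latches[(cx, cy)][(px, py)]))
    return out
```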
The signal processing unit 105 performs predetermined signal processing based on the image obtained from the pixel signals and outputs the processing result to the outside. For example, when motion detection is performed, the signal processing unit 105 performs signal processing for block matching calculation and the like. Correlation values for motion determination are recorded in the motion determination memory 106.
(Configuration of the cluster)
Here, the detailed configuration of the cluster 121 will be described with reference to FIGS. 6 and 7.
FIG. 6 represents the pixels 111 arranged two-dimensionally in the pixel array unit 101 as a grid, in which one square corresponds to one pixel 111. Here, the XY coordinates of a pixel 111 arranged in the pixel array unit 101 are written as pixel (n, m), and the XY coordinates of a cluster 121 are written as cluster (j, k).
One frame of an image is generated from the pixel signals output from the plurality of pixels 111 arranged in the pixel array unit 101; the pixels 111 within the range of one cluster correspond, for example, to the squares inside the thick frames of the lower-left cluster (0, 0) and the upper-right cluster (j-1, k-1).
FIG. 7 shows the details of the coordinates of the pixels 111 inside the clusters 121. In FIG. 7, the range of one cluster is 4 x 32 pixels, that is, 4 pixels in the X direction (horizontal direction) and 32 pixels in the Y direction (vertical direction).
In FIG. 7, for convenience of explanation, the clusters are drawn with gaps between them, and the numbers of pixels in the row and column directions, "4 pixels" and "32 pixels", are written in those gaps; in reality no such gaps exist. In each cluster 121, the coordinates of the pixels 111 in that cluster are also shown together with the coordinates of the cluster 121.
Specifically, the lower-left cluster (0, 0) consists of pixels (0, 0) to (0, 31), pixels (1, 0) to (1, 31), pixels (2, 0) to (2, 31), and pixels (3, 0) to (3, 31).
The cluster (1, 0) adjacent to the right of the cluster (0, 0) consists of pixels (4, 0) to (4, 31), pixels (5, 0) to (5, 31), pixels (6, 0) to (6, 31), and pixels (7, 0) to (7, 31), and the cluster (2, 0) adjacent to the right of the cluster (1, 0) consists of pixels (8, 0) to (8, 31), pixels (9, 0) to (9, 31), pixels (10, 0) to (10, 31), and pixels (11, 0) to (11, 31).
The cluster (0, 1) adjacent above the cluster (0, 0) consists of pixels (0, 32) to (0, 63), pixels (1, 32) to (1, 63), pixels (2, 32) to (2, 63), and pixels (3, 32) to (3, 63).
The cluster (1, 1) adjacent to the right of the cluster (0, 1) consists of pixels (4, 32) to (4, 63), pixels (5, 32) to (5, 63), pixels (6, 32) to (6, 63), and pixels (7, 32) to (7, 63), and the cluster (2, 1) adjacent to the right of the cluster (1, 1) consists of pixels (8, 32) to (8, 63), pixels (9, 32) to (9, 63), pixels (10, 32) to (10, 63), and pixels (11, 32) to (11, 63).
Further repetition is omitted from the figure, but the relationship between the coordinates of a cluster 121 and the coordinates of the pixels 111 in that cluster can be generalized as follows.
That is, as shown in the upper right of the figure, when j = n / 4 and k = m / 32, the cluster (j-1, k-1) consists of pixels (n-4, m-32) to (n-4, m-1), pixels (n-3, m-32) to (n-3, m-1), pixels (n-2, m-32) to (n-2, m-1), and pixels (n-1, m-32) to (n-1, m-1).
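Written as the containing-cluster form of the same relation (4 pixels horizontally and 32 pixels vertically per cluster), the mapping between pixel coordinates and cluster coordinates reduces to two integer divisions. The sketch below is just that relation written out; the helper names are chosen for the example.

```python
CLUSTER_W, CLUSTER_H = 4, 32  # cluster size per FIG. 7

def pixel_to_cluster(n: int, m: int):
    """Return the cluster (j, k) containing pixel (n, m) and the pixel's position inside it."""
    j, k = n // CLUSTER_W, m // CLUSTER_H
    local = (n % CLUSTER_W, m % CLUSTER_H)
    return (j, k), local

# Example: pixel (9, 33) belongs to cluster (2, 1) at local position (1, 1),
# consistent with cluster (2, 1) covering pixels (8, 32) to (11, 63).
print(pixel_to_cluster(9, 33))
```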
(Configuration of the repeater)
Next, the detailed configuration of the repeater unit 102 will be described with reference to FIGS. 8 to 10.
FIG. 8 shows the configuration of the plurality of repeaters 131 arranged in the repeater unit 102 and a repeater selector 132 provided after them. In FIG. 8, j repeaters #0 to #j-1 are arranged in the repeater unit 102 as repeater circuits, in parallel as strips in the vertical direction (the vertical direction in the figure).
FIG. 9 shows the configuration of the clusters 121 inside the repeaters 131. In FIG. 9, k clusters (0, 0) to (0, k-1) are arranged in the vertical direction in repeater #0.
In repeaters #1 and #2, k clusters (1, 0) to (1, k-1) and k clusters (2, 0) to (2, k-1) are likewise arranged in the vertical direction. Although not shown because it would be repetitive, k clusters 121 are similarly arranged in the vertical direction in each of repeaters #3 to #j-1.
In this way, in the repeater unit 102, the j repeaters #0 to #j-1 are arranged in parallel as vertical strips, and each repeater 131 bundles k clusters 121 in the vertical direction.
Here, if attention is paid to one of the j repeaters #0 to #j-1 in the repeater unit 102, the repeater 131 of interest has the structure shown in FIG. 10.
In FIG. 10, a shift register 141, in which flip-flops 142 serving as sequential circuits are connected in multiple stages, is provided vertically in the repeater 131. In the shift register 141, each of the k flip-flops 142 is connected to one of the k clusters #0 to #k-1.
That is, (the ADC of) each pixel 111 arranged in the pixel array unit 101 is connected one-to-one to a latch unit in a cluster 121, and a plurality of latch units are grouped into one cluster. One cluster 121 is connected to one flip-flop 142 constituting the shift register 141 in the repeater 131.
The repeater unit 102 is then made up of a plurality of repeaters 131 arranged in parallel as vertical strips, configured to correspond to one frame of an image. The clusters 121 are connected one-to-one to the flip-flops 142 of the shift register 141 arranged vertically in the repeater 131, and a plurality of clusters are arranged in the vertical direction.
Returning to the description of FIG. 8, in the repeater unit 102 the repeater selector 132 selects the output of any one of the repeaters #0 to #j-1 in accordance with a selection signal from a control circuit (not shown) and outputs it to the subsequent stage.
For example, when motion detection using the representative point matching method is performed, the repeater selector 132 selects the outputs of the repeaters 131 according to the horizontal thinning amount and the horizontal coordinate shift in order to prepare the template block and the matching blocks.
In the example of FIG. 8, the repeater selector 132 selects the outputs of the even-numbered repeaters 131 among the repeaters #0 to #j-1, that is, the outputs of clusters (0, 0) to (0, k-1), clusters (2, 0) to (2, k-1), ..., clusters (j-2, 0) to (j-2, k-1).
In the above description, the repeaters 131 in the repeater unit 102 are arranged in parallel as vertical strips, but they may also be arranged in parallel in the horizontal direction. The arrangement of the clusters 121 in a repeater 131 is not limited to the vertical direction; a plurality of clusters may be arranged in both the horizontal and vertical directions. Furthermore, the number of stages and the degree of parallelism of the shift register 141 in the repeater 131 are arbitrary.
(Example of motion detection)
Next, an example of motion detection using the representative point matching method will be described as the signal processing performed by the signal processing unit 105 (FIG. 4), with reference to FIGS. 11 to 15.
In the representative point matching method, the difference between the pixel value of a representative point of a first image and the pixel value of a sampling point of a second image is calculated for each of a plurality of detection blocks, and the sum of these differences is used as a cumulative correlation value (also called a matching error). A plurality of cumulative correlation values are calculated while shifting the relative position of the first image and the second image, and the relative position shift is detected as the motion between the two blocks.
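A minimal software sketch of this kind of search is given below: it computes a sum-of-absolute-differences score between a held template block and the candidate block at every offset in a search range and returns the offset with the smallest score. The block shapes and search range are placeholders, and the real device evaluates candidates from time-divided thinned-out readouts rather than from a stored frame.

```python
import numpy as np

def match_motion(template: np.ndarray, search: np.ndarray, max_dx: int, max_dy: int):
    """Return the (dx, dy) offset minimizing the sum of absolute differences
    between the template block and the block at that offset in the search area.
    search is assumed to be at least (template rows + max_dy) x (template cols + max_dx)."""
    th, tw = template.shape
    best, best_mv = None, (0, 0)
    for dy in range(max_dy + 1):
        for dx in range(max_dx + 1):
            candidate = search[dy:dy + th, dx:dx + tw]
            score = np.abs(candidate.astype(int) - template.astype(int)).sum()
            if best is None or score < best:
                best, best_mv = score, (dx, dy)
    return best_mv, best
```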
This example shows motion detection in the range of 0 to +7 pixels in one-pixel steps in the horizontal direction and 0 to +31 pixels in one-pixel steps in the vertical direction.
For this purpose, in motion detection using the representative point matching method, a template block thinned by 8 pixels in the horizontal direction and by 32 pixels in the vertical direction is prepared. As the search area, matching blocks with the same thinning amount, whose coordinates are shifted by one pixel at a time in the horizontal and vertical directions, are prepared.
In the solid-state imaging device 10, the above-described cluster 121 and repeater 131 configuration can be used when preparing the pixels 111 of the template block and the pixels 111 of the 8 x 32 matching blocks from the pixels 111 arranged in the pixel array unit 101.
That is, in the pixel coordinate example of FIG. 7 described above, focusing on the clusters (2j, k), the template block and all the matching blocks can be prepared by reading the pixel signals from, for example, the pixels located at the lower left of cluster (0, 0), cluster (2, 0), cluster (0, 1), and cluster (2, 1), namely pixel (0, 0), pixel (8, 0), pixel (0, 32), and pixel (8, 32). In the example of FIG. 7, these pixels of interest are shown with black and white inverted.
Here, since a template block and matching blocks thinned by 8 pixels horizontally and 32 pixels vertically are prepared, the pixel signal is read from the pixel located at the lower left of each cluster (2j, k). In the repeater 131, the pixels 111 in each cluster 121 are then selected sequentially, one per clock, and all the pixels 111 are read out after 4 x 32 selections.
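Assuming, for illustration only, a frame held as a 2-D array, the phase-indexed decimation described above (8 pixels horizontally, 32 pixels vertically, starting from a selectable intra-cluster position) can be written as a single slice. The function and constant names are chosen for the example.

```python
import numpy as np

H_STEP, V_STEP = 8, 32  # thinning amounts used for the template and matching blocks

def thinned_image(frame: np.ndarray, phase_x: int, phase_y: int) -> np.ndarray:
    """Thinned-out image obtained by taking one pixel per (2j, k) cluster,
    starting from intra-cluster position (phase_x, phase_y)."""
    return frame[phase_y::V_STEP, phase_x::H_STEP]

# The template block uses phase (0, 0); matching blocks use phases shifted by
# one pixel at a time, e.g. (0, 1), (1, 0), ..., up to (7, 31).
```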
As described above, when the pixel-parallel ADC method is adopted, the present technology reads the pixel signals from the pixels 111 via the shift registers 141 arranged vertically in each repeater 131 of the repeater unit 102, and the readout is performed by selecting the plurality of pixels 111 arranged in each cluster 121 one pixel at a time.
This output is therefore an image in which the pixels 111 have been thinned out in the horizontal and vertical directions (a thinned-out image), corresponding to pixel signals read out while the phase is shifted repeatedly. Motion detection using the representative point matching method is then performed by adapting this output to a cluster size suited to the matching blocks and to a pixel selection order (thinning phase order) suited to the search method.
FIG. 11 shows an example of the output of the thinned-out template block and matching blocks.
In FIG. 11, the circular symbols represent the pixels 111 arranged two-dimensionally in the pixel array unit 101, and the plurality of circular symbols arranged in the horizontal and vertical directions represent part of the whole area in which the pixels 111 are arranged.
In FIG. 11, the circular symbols representing the pixels 111 carry three kinds of patterns; circular symbols with the same pattern represent pixels 111 that are arranged in different clusters but have the same phase. Specifically, for example, the upper-left pixel 111 in the figure can be expressed as phase (0, 0), the pixel 111 adjacent below it as phase (0, 1), and the pixel 111 adjacent to its right as phase (1, 0).
In FIG. 11, for convenience of explanation, three pixels 111 with three kinds of patterns are illustrated as pixels whose phase is shifted within the same cluster 121, but the other pixels 111 in the same cluster 121 are likewise read out while the phase shift is repeated.
By reading out the pixel signals while repeating the phase shift for each cluster 121 in this way, thinned-out images (image frames) of different phases (thinning phases) are read out sequentially, as shown in FIG. 12.
In FIG. 12, time runs from left to right, and the rectangular symbols represent, in chronological order, the thinned-out images obtained from the pixel signals read out while shifting the phase. The rectangular symbols carry three kinds of patterns, which correspond to the patterns of the circular symbols shown in FIG. 11.
That is, in FIG. 12, among the thinned-out images arranged in time series, the first thinned-out image TI1 is an image obtained from the pixel signals of the pixels 111 of phase (0, 0), and the second thinned-out image TI2 is an image obtained from the pixel signals of the pixels 111 of phase (0, 1). The i-th thinned-out image TIi is an image obtained from the pixel signals of the pixels 111 of phase (1, 0).
FIG. 13 shows an example of the regions of the thinned-out images TI1, TI2, and TIi within the search area SA. In FIG. 13, the thinned-out image TI1 is the region corresponding to the pixels 111 of phase (0, 0). The thinned-out image TI2 is the region corresponding to the pixels 111 of phase (0, 1), shifted downward by one pixel relative to the region of TI1. The thinned-out image TIi is the region corresponding to the pixels 111 of phase (1, 0), shifted to the right by one pixel relative to the region of TI1.
By using the cluster 121 and repeater 131 configuration in this way, the thinned-out images TI can be read out repeatedly while shifting the phase, so that, for example, a template block thinned by 8 pixels horizontally and 32 pixels vertically, and matching blocks with the same thinning amount whose coordinates are shifted by one pixel at a time in the horizontal and vertical directions, can each be prepared.
When the pixel-parallel ADC method is adopted, imaging processing including AD conversion can be performed at a speed 100 times or more that of the column-parallel ADC method, so that a frame rate of, for example, 10,000 fps can be achieved by reading out the pixel signals for the matching blocks in a time-division manner while shifting the phase.
The signal processing unit 105 holds the pixel signals of the pixels 111 corresponding to the template block in memory, and obtains a correlation value by, for example, taking the sum of the differences between the pixel values of the pixels 111 corresponding to each sequentially read matching block and the pixel values of the pixels 111 corresponding to the held template block.
The signal processing unit 105 then detects, as the motion vector MV, the coordinate shift of the matching block with the strongest correlation. A correlation value for motion determination is used in this motion detection, and the smaller the absolute value of the difference sum, the stronger the correlation can be regarded. The motion detection cycle (frequency) can be changed by changing the imaging frame rate or by pausing the block matching.
Next, the flow of camera shake correction processing using motion detection will be described with reference to the flowchart of FIG. 14.
Here, it is assumed that camera shake occurs while a user is shooting a subject using an electronic apparatus equipped with the solid-state imaging device 10 (the electronic apparatus 1000 of FIG. 19 described later). The electronic apparatus includes, for example, imaging devices such as smartphones, digital still cameras, and digital video cameras.
The signal processing unit 105 holds a thinned-out image TI containing a pattern of a specific shape as the template block (S11), and sequentially acquires, as matching blocks, the thinned-out images TI read out in sequence from the search area SA (S12).
The signal processing unit 105 detects motion by taking the sum of the differences between the pixel values of the pixels 111 corresponding to the held template block and the pixel values of the pixels 111 corresponding to the sequentially acquired matching blocks.
Then, by selecting the readout start coordinates of the next imaging from the value of the motion vector MV detected in this way (S14), motion correction (camera shake correction) can be realized.
FIG. 15 shows an example of the effective pixel region after motion correction. In the example of FIG. 15, the pixels 111 arranged two-dimensionally in the pixel array unit 101 are shown as a grid, and the whole of this grid-like region is the imageable pixel region IA. A region smaller than the imageable pixel region IA is the effective pixel region EA. That is, the solid-state imaging device 10 performs motion detection by capturing an image larger than the angle of view.
At this time, in the solid-state imaging device 10, when the value of the motion vector MV detected due to camera shake (the amounts of movement in the horizontal and vertical directions) indicates that the subject has moved toward the upper right in the figure, the signal processing unit 105 changes the readout start coordinates for the next imaging from pixel (0, 0) to pixel (4, 4). The effective pixel region EA is thereby also shifted toward the upper right in response to the movement of the subject caused by camera shake, so that the subject can be kept within the effective pixel region EA and camera shake correction is realized.
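As a rough sketch of this correction step, the function below shifts the readout start coordinates of the effective window by the detected motion vector, clamping them so that the window stays inside the imageable pixel region. All names and sizes are illustrative and not taken from the embodiment.

```python
def next_read_start(start_x: int, start_y: int, mv: tuple,
                    full_w: int, full_h: int, eff_w: int, eff_h: int) -> tuple:
    """Shift the effective-window origin by the detected motion vector (dx, dy),
    keeping the window inside the imageable pixel region."""
    dx, dy = mv
    new_x = min(max(start_x + dx, 0), full_w - eff_w)
    new_y = min(max(start_y + dy, 0), full_h - eff_h)
    return new_x, new_y

# Example: MV = (4, 4) moves the readout start from (0, 0) to (4, 4),
# as in the FIG. 15 description (frame and window sizes are placeholders).
print(next_read_start(0, 0, (4, 4), full_w=1920, full_h=1080, eff_w=1900, eff_h=1060))
```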
As described above, in the present technology, the pixel-parallel ADC method is adopted as the ADC scheme of the solid-state imaging device 10, and when the pixel signals (AD conversion results) from (the ADCs of) the pixels 111 arranged in the pixel array unit 101 are read out to the signal processing unit 105 via the repeater unit 102, the cluster 121 and repeater 131 configuration can be used so that the readout order is equivalent to reading out thinned-out images for block matching in a time-division manner.
Here, for example, when motion detection using the representative point matching method is performed, the solid-state imaging device 10 itself can be treated as if it were a frame memory when the template block and the matching blocks are prepared.
Therefore, the solid-state imaging device 10 does not need to perform processing such as block matching with the DSP 902 using a dedicated frame memory 903 (DRAM, SRAM, or the like), as in the general motion detection method (FIGS. 1 to 3). In the solid-state imaging device 10 there is also no need to provide a dedicated circuit, and the power consumption does not increase. In other words, a structure and an algorithm that realize motion detection by block matching at low cost can be provided within the solid-state imaging device 10.
In addition, a camera shake detection function (a motion detection function not limited to the full screen), a camera shake correction function (a motion correction function not limited to the full screen), and the like can be incorporated into the solid-state imaging device 10. Furthermore, the cost and power consumption of the camera system as a whole can be reduced.
Moreover, since the solid-state imaging device 10 adopts the pixel-parallel ADC method as its ADC scheme, it is advantageous when a large number of thinned-out images with different phases are required.
Even in the case of normal block matching using normal images rather than thinned-out images, the pixel-parallel ADC method allows imaging at a higher speed than the column-parallel ADC method, so the performance is sufficient. In addition, since the solid-state imaging device 10 adopts the pixel-parallel ADC method and performs time-division readout at extremely high speed, a captured image can also be acquired, in addition to the thinned-out images for block matching, in the imaging after the thinned-out readout.
Furthermore, when the pixel-parallel ADC method is used, the solid-state imaging device 10 operates as an all-pixel simultaneous shutter at a high frame rate, so that motion detection with fewer erroneous determinations can be realized with almost no influence of focal plane distortion, and highly accurate motion correction can be performed.
As described above, the present technology can reduce power consumption with a smaller circuit scale by using the pixel-parallel ADC method. In particular, the motion detection method to which the present technology is applied can realize motion detection with reduced power consumption and a smaller circuit scale when the pixel-parallel ADC method is used.
(Structure of the pixel-parallel ADC)
FIG. 16 shows an example of the three-dimensional structure of the solid-state imaging device 10.
In FIG. 16, the solid-state imaging device 10 has a structure in which a pixel substrate 100A, on which light is incident from the back surface side, and a logic substrate 100B responsible for signal processing are bonded together. At least the pixel array unit 101 and a DAC (Digital to Analog Converter) 107 are formed on the pixel substrate 100A. At least a latch repeater unit 108 and a logic operation unit 109 are formed on the logic substrate 100B.
A plurality of pixels 111 are arranged two-dimensionally in the pixel array unit 101. Each pixel 111 is provided with one ADC 151 of the single-slope type, and the inverted output (VCO) of its comparator is connected to one latch circuit 152 provided in the latch repeater unit 108 (FIG. 17). The latch repeater unit 108 includes the repeater unit 102.
That is, when n x m pixels 111 are arranged two-dimensionally in the pixel array unit 101, n x m pairs of an ADC 151 and a latch circuit 152 are provided, and the Gray code from (the repeater 131 of) the repeater unit 102 is captured into the latch circuit 152 in response to the inverted signal (VCO) of the comparator of the ADC 151.
FIG. 18 shows an example of the circuit configuration of the main part of the solid-state imaging device 10.
The pixel 111 includes a pixel circuit 161, a differential amplifier circuit 162, a positive feedback circuit (PFB) 163, and a multiplexer circuit (MUX) 164. In the pixel 111, the differential amplifier circuit 162 and the positive feedback circuit 163 constitute a comparator 160. The comparator 160 forms part of the ADC 151.
The pixel circuit 161 includes a photodiode 171 as a photoelectric conversion unit, a transfer transistor 172, a reset transistor 173, an FD (Floating Diffusion) 174, and a discharge transistor 175.
The transfer transistor 172 transfers the charge generated by the photodiode 171 to the FD 174. The reset transistor 173 resets the charge held in the FD 174. The FD 174 is connected to the gate of a transistor 177 of the differential amplifier circuit 162.
As a result, the transistor 177 of the differential amplifier circuit 162 also functions as the amplification transistor of the pixel circuit 161. The discharge transistor 175 discharges the charge accumulated in the photodiode 171.
The differential amplifier circuit 162 includes transistors 181 and 182 forming a differential pair, transistors 183 and 184 forming a current mirror, and a transistor 185 serving as a constant current source that supplies a current corresponding to an input bias current (Vb).
Of the transistors 181 and 182 forming the differential pair, the reference signal (REF) output from the DAC 107 (FIG. 16) is input to the gate of the transistor 181, and the pixel signal (SIG) output from the pixel circuit 161 in the pixel 111 is input to the gate of the transistor 182.
In the differential amplifier circuit 162, the reference signal (REF) input to the gate of the transistor 181 is compared with the pixel signal (SIG) input to the gate of the transistor 182, and an output signal (VCO) corresponding to the result of the comparison between the reference signal (REF) and the pixel signal (SIG) is output.
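A behavioral sketch of this single-slope conversion is shown below: a ramp reference is swept while a Gray-coded count advances, and the code present when the comparator output changes is latched as the conversion result. It is a simplified software model (ideal comparator, fixed ramp step), not the transistor-level circuit described above.

```python
def binary_to_gray(value: int) -> int:
    """Gray code as distributed to the latches, modeled from a binary counter value."""
    return value ^ (value >> 1)

def single_slope_convert(sig: float, ramp_start: float, ramp_step: float, n_steps: int) -> int:
    """Sweep the reference ramp; latch the Gray code at the step where REF crosses SIG."""
    latched = binary_to_gray(n_steps - 1)  # default if the ramp never crosses
    for step in range(n_steps):
        ref = ramp_start + step * ramp_step
        if ref >= sig:             # comparator output (VCO) changes state here
            latched = binary_to_gray(step)
            break
    return latched
```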
The positive feedback circuit 163 includes transistors 191 to 193 and a NOR circuit 194. The NOR circuit 194 includes transistors 195 to 198.
The connection point between the drain of the transistor 182 and the drain of the transistor 184 serves as the output terminal of the differential amplifier circuit 162 and is connected to the drain of the transistor 191 in the positive feedback circuit 163 via transistors 186 and 187. The output signal (VCO) output from the differential amplifier circuit 162 is input to the NOR circuit 194 in the positive feedback circuit 163 and output as the inverted signal (VCO) of the comparator 160.
The inverted signal (VCO) from (the positive feedback circuit 163 of) the comparator 160 and a control signal WORD are input to the multiplexer circuit 164. By controlling the control signal WORD, the multiplexer circuit 164 outputs the inverted signal (VCO) from the comparator 160 to the latch circuit 152 in the latch repeater unit 108.
The latch repeater unit 108 is composed of the repeaters 131 and the latch circuits 152. (The ADC 151 of) each pixel 111 is connected one-to-one to a latch circuit 152 in a cluster 121, and a plurality of latch circuits 152 are grouped into one cluster. One cluster 121 is connected to one flip-flop 142 of the shift register 141 of a repeater 131 (FIG. 10).
The AD conversion result from (the ADC 151 of) each pixel 111 is held in the latch circuit 152 in the cluster 121, then transferred to the shift register 141 in the repeater 131, and sequentially output to the outside of the latch repeater unit 108.
<2. Modification examples>
In the above description, the thinned-out images obtained by using the cluster 121 and repeater 131 configuration in the solid-state imaging device 10 are used for motion detection, but the thinned-out images may also be used for other purposes, for example by displaying (a display image corresponding to) a thinned-out image on a display unit (for example, the display unit 1015 of FIG. 19) or by storing (the data of) a thinned-out image in a storage unit (for example, the storage unit 1016 of FIG. 19).
In the above description, the representative point matching method is used as the method for detecting motion between images by image processing, but any other detection method may be used as long as motion can be detected using the thinned-out images obtained by using the cluster 121 and repeater 131 configuration.
<3. Configuration of an electronic apparatus>
FIG. 19 shows a configuration example of an electronic apparatus equipped with a solid-state imaging device to which the present technology is applied.
The electronic apparatus 1000 is an electronic apparatus with an imaging function, for example an imaging device such as a digital still camera or a video camera, or a mobile terminal device such as a smartphone or a tablet terminal.
The electronic apparatus 1000 includes a lens unit 1011, a solid-state imaging device 1012, a signal processing unit 1013, a control unit 1014, a display unit 1015, a storage unit 1016, an operation unit 1017, a communication unit 1018, and a power supply unit 1019. In the electronic apparatus 1000, the signal processing unit 1013 through the power supply unit 1019 are connected to one another via a bus 1021.
レンズ部1011は、ズームレンズやフォーカスレンズ等から構成され、被写体からの光を集光する。レンズ部1011により集光された光(被写体光)は、固体撮像装置1012に入射される。
The lens unit 1011 is composed of a zoom lens, a focus lens, and the like, and collects light from the subject. The light (subject light) focused by the lens unit 1011 is incident on the solid-state image sensor 1012.
固体撮像装置1012は、本技術を適用した固体撮像装置(例えば、上述した固体撮像装置10)である。固体撮像装置1012は、レンズ部1011を介して受光した光(被写体光)を光電変換してその結果得られる画素信号をAD変換し、その結果得られる信号を、信号処理部1013に供給する。
The solid-state imaging device 1012 is a solid-state imaging device to which the present technology is applied (for example, the solid-state imaging device 10 described above). The solid-state imaging device 1012 photoelectrically converts the light (subject light) received through the lens unit 1011, AD-converts the resulting pixel signals, and supplies the resulting signals to the signal processing unit 1013.
信号処理部1013は、例えばDSP(Digital Signal Processor)回路等の信号処理回路から構成され、固体撮像装置1012から供給される信号に対する信号処理を行う。例えば、信号処理部1013は、固体撮像装置1012からの信号に対して信号処理を施すことで、静止画又は動画の画像データを生成し、表示部1015又は記憶部1016に供給する。
The signal processing unit 1013 is composed of a signal processing circuit such as a DSP (Digital Signal Processor) circuit, and performs signal processing on a signal supplied from the solid-state imaging device 1012. For example, the signal processing unit 1013 generates image data of a still image or a moving image by performing signal processing on the signal from the solid-state imaging device 1012, and supplies the image data to the display unit 1015 or the storage unit 1016.
制御部1014は、例えば、CPU(Central Processing Unit)やマイクロプロセッサ、FPGA(Field Programmable Gate Array)などとして構成される。制御部1014は、電子機器1000の各部の動作を制御する。
The control unit 1014 is configured as, for example, a CPU (Central Processing Unit), a microprocessor, an FPGA (Field Programmable Gate Array), or the like. The control unit 1014 controls the operation of each unit of the electronic device 1000.
表示部1015は、例えば、液晶パネルや有機EL(Electro Luminescence)パネル等の表示装置として構成される。表示部1015は、信号処理部1013から供給される画像データに応じた静止画又は動画を表示する。
The display unit 1015 is configured as a display device such as a liquid crystal panel or an organic EL (Electro Luminescence) panel. The display unit 1015 displays a still image or a moving image according to the image data supplied from the signal processing unit 1013.
記憶部1016は、例えば、半導体メモリやハードディスク等の記録媒体として構成される。記憶部1016は、信号処理部1013から供給される画像データを記録する。また、記憶部1016は、制御部1014からの制御に従い、記録されている画像データを供給する。
The storage unit 1016 is configured as a recording medium such as a semiconductor memory or a hard disk, for example. The storage unit 1016 records the image data supplied from the signal processing unit 1013. Further, the storage unit 1016 supplies the recorded image data according to the control from the control unit 1014.
操作部1017は、例えば、物理的なボタンのほか、表示部1015と組み合わせて、タッチパネルとして構成される。操作部1017は、ユーザによる操作に応じて、電子機器1000が有する各種の機能についての操作指令を出力する。制御部1014は、操作部1017から供給される操作指令に基づき、各部の動作を制御する。
The operation unit 1017 is configured as a touch panel in combination with the display unit 1015 in addition to the physical buttons, for example. The operation unit 1017 outputs operation commands for various functions of the electronic device 1000 in response to an operation by the user. The control unit 1014 controls the operation of each unit based on the operation command supplied from the operation unit 1017.
通信部1018は、例えば、通信インターフェース回路などとして構成される。通信部1018は、所定の通信方式に従い、無線通信又は有線通信によって、外部の機器との間でデータのやりとりを行う。
The communication unit 1018 is configured as, for example, a communication interface circuit. The communication unit 1018 exchanges data with an external device by wireless communication or wired communication according to a predetermined communication method.
電源部1019は、信号処理部1013ないし通信部1018の動作電源となる各種の電源を、これらの供給対象に対して適宜供給する。
The power supply unit 1019 appropriately supplies the various kinds of power serving as operating power for the signal processing unit 1013 through the communication unit 1018 to these supply targets.
電子機器1000は、以上のように構成される。
The electronic device 1000 is configured as described above.
本技術は、以上説明したように、固体撮像装置1012に適用される。固体撮像装置1012に本技術を適用することで、電子機器1000では、固体撮像装置1012を、より少ない回路規模で、低消費電力で動作させつつ、例えば動き検出を用いた手ぶれ補正を実現することができる。
As described above, the present technology is applied to the solid-state imaging device 1012. By applying the present technology to the solid-state imaging device 1012, the electronic device 1000 can operate the solid-state imaging device 1012 with a smaller circuit scale and lower power consumption while realizing, for example, camera shake correction using motion detection.
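The sketch below is a minimal, hedged example of how a motion vector detected in a previous frame might be used to shift the readout start position for camera shake correction; the crop window size, sign convention, and clamping behavior are assumptions for illustration and are not details taken from the disclosure.

```python
# Hedged sketch of motion correction by shifting the readout start
# position with the previously detected motion vector.
import numpy as np

def corrected_readout(pixel_array, start_row, start_col, motion_vector, out_h, out_w):
    dy, dx = motion_vector
    h, w = pixel_array.shape
    # Shift the readout window opposite to the detected motion and clamp
    # to the bounds of the pixel array (both choices are assumptions).
    row = int(np.clip(start_row - dy, 0, h - out_h))
    col = int(np.clip(start_col - dx, 0, w - out_w))
    return pixel_array[row:row + out_h, col:col + out_w]

frame = np.arange(64 * 64).reshape(64, 64)
stabilized = corrected_readout(frame, start_row=16, start_col=16,
                               motion_vector=(2, -3), out_h=32, out_w=32)
print(stabilized.shape)  # (32, 32)
```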
<4.固体撮像装置の使用例>
<4. Example of using a solid-state image sensor>
図20は、本技術を適用した固体撮像装置の使用例を示す図である。
FIG. 20 is a diagram showing a usage example of a solid-state image sensor to which the present technology is applied.
固体撮像装置10は、例えば、以下のように、可視光や、赤外光、紫外光、X線等の光をセンシングする様々なケースに使用することができる。すなわち、図20に示すように、鑑賞の用に供される画像を撮影する鑑賞の分野だけでなく、例えば、交通の分野、家電の分野、医療・ヘルスケアの分野、セキュリティの分野、美容の分野、スポーツの分野、又は、農業の分野などにおいて用いられる装置でも、固体撮像装置10を使用することができる。
The solid-state imaging device 10 can be used in various cases of sensing light such as visible light, infrared light, ultraviolet light, and X-rays, as described below. That is, as shown in FIG. 20, the solid-state imaging device 10 can be used not only in devices in the field of appreciation, which capture images for viewing, but also in devices used in, for example, the fields of transportation, home appliances, medical care and healthcare, security, beauty, sports, and agriculture.
具体的には、鑑賞の分野において、例えば、デジタルカメラやスマートフォン、カメラ機能付きの携帯電話機等の、鑑賞の用に供される画像を撮影するための装置(例えば、図19の電子機器1000)で、固体撮像装置10を使用することができる。
Specifically, in the field of appreciation, the solid-state imaging device 10 can be used in devices for capturing images for viewing (for example, the electronic device 1000 in FIG. 19), such as digital cameras, smartphones, and mobile phones with a camera function.
交通の分野において、例えば、自動停止等の安全運転や、運転者の状態の認識等のために、自動車の前方や後方、周囲、車内等を撮影する車載用センサ、走行車両や道路を監視する監視カメラ、車両間等の測距を行う測距センサ等の、交通の用に供される装置で、固体撮像装置10を使用することができる。
In the field of transportation, the solid-state imaging device 10 can be used in devices for traffic use, such as in-vehicle sensors that capture images of the front, rear, surroundings, and interior of an automobile for safe driving including automatic stopping and for recognizing the driver's condition, surveillance cameras that monitor traveling vehicles and roads, and distance measuring sensors that measure the distance between vehicles.
家電の分野において、例えば、ユーザのジェスチャを撮影して、そのジェスチャに従った機器操作を行うために、テレビ受像機や冷蔵庫、エアーコンディショナ等の家電に供される装置で、固体撮像装置10を使用することができる。また、医療・ヘルスケアの分野において、例えば、内視鏡や、赤外光の受光による血管撮影を行う装置等の、医療やヘルスケアの用に供される装置で、固体撮像装置10を使用することができる。
In the field of home appliances, the solid-state imaging device 10 can be used in devices for home appliances such as television receivers, refrigerators, and air conditioners, in order to photograph a user's gestures and operate the appliance according to those gestures. In the field of medical care and healthcare, the solid-state imaging device 10 can be used in devices for medical or healthcare use, such as endoscopes and devices that perform angiography by receiving infrared light.
セキュリティの分野において、例えば、防犯用途の監視カメラや、人物認証用途のカメラ等の、セキュリティの用に供される装置で、固体撮像装置10を使用することができる。また、美容の分野において、例えば、肌を撮影する肌測定器や、頭皮を撮影するマイクロスコープ等の、美容の用に供される装置で、固体撮像装置10を使用することができる。
In the field of security, the solid-state image sensor 10 can be used in a device used for security such as a surveillance camera for crime prevention and a camera for personal authentication. Further, in the field of beauty, the solid-state image sensor 10 can be used in a device used for beauty such as a skin measuring device for photographing the skin and a microscope for photographing the scalp.
スポーツの分野において、例えば、スポーツ用途等向けのアクションカメラやウェアラブルカメラ等の、スポーツの用に供される装置で、固体撮像装置10を使用することができる。また、農業の分野において、例えば、畑や作物の状態を監視するためのカメラ等の、農業の用に供される装置で、固体撮像装置10を使用することができる。
In the field of sports, the solid-state image sensor 10 can be used in a device used for sports, such as an action camera or a wearable camera for sports applications. Further, in the field of agriculture, the solid-state image sensor 10 can be used in a device used for agriculture, such as a camera for monitoring the state of a field or a crop.
<5.移動体への応用例>
<5. Application example to moving body>
本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、自動車、電気自動車、ハイブリッド電気自動車、自動二輪車、自転車、パーソナルモビリティ、飛行機、ドローン、船舶、ロボット等のいずれかの種類の移動体に搭載される装置として実現されてもよい。
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
図21は、本開示に係る技術が適用され得る移動体制御システムの一例である車両制御システムの概略的な構成例を示すブロック図である。
FIG. 21 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a moving body control system to which the technique according to the present disclosure can be applied.
車両制御システム12000は、通信ネットワーク12001を介して接続された複数の電子制御ユニットを備える。図21に示した例では、車両制御システム12000は、駆動系制御ユニット12010、ボディ系制御ユニット12020、車外情報検出ユニット12030、車内情報検出ユニット12040、及び統合制御ユニット12050を備える。また、統合制御ユニット12050の機能構成として、マイクロコンピュータ12051、音声画像出力部12052、及び車載ネットワークI/F(interface)12053が図示されている。
The vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001. In the example shown in FIG. 21, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050. Further, as a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (interface) 12053 are shown.
駆動系制御ユニット12010は、各種プログラムにしたがって車両の駆動系に関連する装置の動作を制御する。例えば、駆動系制御ユニット12010は、内燃機関又は駆動用モータ等の車両の駆動力を発生させるための駆動力発生装置、駆動力を車輪に伝達するための駆動力伝達機構、車両の舵角を調節するステアリング機構、及び、車両の制動力を発生させる制動装置等の制御装置として機能する。
The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmitting mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
ボディ系制御ユニット12020は、各種プログラムにしたがって車体に装備された各種装置の動作を制御する。例えば、ボディ系制御ユニット12020は、キーレスエントリシステム、スマートキーシステム、パワーウィンドウ装置、あるいは、ヘッドランプ、バックランプ、ブレーキランプ、ウィンカー又はフォグランプ等の各種ランプの制御装置として機能する。この場合、ボディ系制御ユニット12020には、鍵を代替する携帯機から発信される電波又は各種スイッチの信号が入力され得る。ボディ系制御ユニット12020は、これらの電波又は信号の入力を受け付け、車両のドアロック装置、パワーウィンドウ装置、ランプ等を制御する。
The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers, or fog lamps. In this case, radio waves transmitted from a portable device that substitutes for the key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
車外情報検出ユニット12030は、車両制御システム12000を搭載した車両の外部の情報を検出する。例えば、車外情報検出ユニット12030には、撮像部12031が接続される。車外情報検出ユニット12030は、撮像部12031に車外の画像を撮像させるとともに、撮像された画像を受信する。車外情報検出ユニット12030は、受信した画像に基づいて、人、車、障害物、標識又は路面上の文字等の物体検出処理又は距離検出処理を行ってもよい。
The vehicle exterior information detection unit 12030 detects information on the outside of the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like.
撮像部12031は、光を受光し、その光の受光量に応じた電気信号を出力する光センサである。撮像部12031は、電気信号を画像として出力することもできるし、測距の情報として出力することもできる。また、撮像部12031が受光する光は、可視光であっても良いし、赤外線等の非可視光であってもよい。
The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received. The image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
車内情報検出ユニット12040は、車内の情報を検出する。車内情報検出ユニット12040には、例えば、運転者の状態を検出する運転者状態検出部12041が接続される。運転者状態検出部12041は、例えば運転者を撮像するカメラを含み、車内情報検出ユニット12040は、運転者状態検出部12041から入力される検出情報に基づいて、運転者の疲労度合い又は集中度合いを算出してもよいし、運転者が居眠りをしていないかを判別してもよい。
The in-vehicle information detection unit 12040 detects information on the inside of the vehicle. For example, a driver state detection unit 12041 that detects the driver's state is connected to the in-vehicle information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing.
マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車内外の情報に基づいて、駆動力発生装置、ステアリング機構又は制動装置の制御目標値を演算し、駆動系制御ユニット12010に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車両の衝突回避あるいは衝撃緩和、車間距離に基づく追従走行、車速維持走行、車両の衝突警告、又は車両のレーン逸脱警告等を含むADAS(Advanced Driver Assistance System)の機能実現を目的とした協調制御を行うことができる。
The microcomputer 12051 can calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing ADAS (Advanced Driver Assistance System) functions including vehicle collision avoidance or impact mitigation, following driving based on the inter-vehicle distance, vehicle speed maintaining driving, vehicle collision warning, vehicle lane departure warning, and the like.
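As a hedged, purely illustrative example of computing a control target value for following driving based on the inter-vehicle distance, the sketch below uses a simple proportional law on the gap error and relative speed; the time-gap policy, gains, clamping limits, and signal names are assumptions introduced for this example and are not values from the disclosure.

```python
# Hedged sketch of a control target calculation for inter-vehicle
# distance based following driving. All parameters are illustrative.
def follow_control_target(gap_m, relative_speed_mps, ego_speed_mps,
                          time_gap_s=1.8, min_gap_m=5.0,
                          k_gap=0.3, k_speed=0.5):
    """Return a target acceleration [m/s^2] for the drive/brake system."""
    desired_gap = min_gap_m + time_gap_s * ego_speed_mps
    gap_error = gap_m - desired_gap
    target_accel = k_gap * gap_error + k_speed * relative_speed_mps
    return max(-3.0, min(1.5, target_accel))  # clamp to comfortable limits

# Preceding vehicle 30 m ahead, closing at 2 m/s, ego speed 20 m/s.
print(follow_control_target(gap_m=30.0, relative_speed_mps=-2.0, ego_speed_mps=20.0))
```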
また、マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車両の周囲の情報に基づいて駆動力発生装置、ステアリング機構又は制動装置等を制御することにより、運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。
Further, the microcomputer 12051 can perform cooperative control aimed at automated driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
また、マイクロコンピュータ12051は、車外情報検出ユニット12030で取得される車外の情報に基づいて、ボディ系制御ユニット12020に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車外情報検出ユニット12030で検知した先行車又は対向車の位置に応じてヘッドランプを制御し、ハイビームをロービームに切り替える等の防眩を図ることを目的とした協調制御を行うことができる。
Further, the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control for antiglare purposes, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
音声画像出力部12052は、車両の搭乗者又は車外に対して、視覚的又は聴覚的に情報を通知することが可能な出力装置へ音声及び画像のうちの少なくとも一方の出力信号を送信する。図21の例では、出力装置として、オーディオスピーカ12061、表示部12062及びインストルメントパネル12063が例示されている。表示部12062は、例えば、オンボードディスプレイ及びヘッドアップディスプレイの少なくとも一つを含んでいてもよい。
The audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger of the vehicle or the outside of the vehicle. In the example of FIG. 21, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices. The display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
図22は、撮像部12031の設置位置の例を示す図である。
FIG. 22 is a diagram showing an example of the installation position of the imaging unit 12031.
図22では、車両12100は、撮像部12031として、撮像部12101,12102,12103,12104,12105を有する。
In FIG. 22, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, 12105 as imaging units 12031.
撮像部12101,12102,12103,12104,12105は、例えば、車両12100のフロントノーズ、サイドミラー、リアバンパ、バックドア及び車室内のフロントガラスの上部等の位置に設けられる。フロントノーズに備えられる撮像部12101及び車室内のフロントガラスの上部に備えられる撮像部12105は、主として車両12100の前方の画像を取得する。サイドミラーに備えられる撮像部12102,12103は、主として車両12100の側方の画像を取得する。リアバンパ又はバックドアに備えられる撮像部12104は、主として車両12100の後方の画像を取得する。撮像部12101及び12105で取得される前方の画像は、主として先行車両又は、歩行者、障害物、信号機、交通標識又は車線等の検出に用いられる。
The imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as, for example, the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100. The imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100. The images in front acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
なお、図22には、撮像部12101ないし12104の撮影範囲の一例が示されている。撮像範囲12111は、フロントノーズに設けられた撮像部12101の撮像範囲を示し、撮像範囲12112,12113は、それぞれサイドミラーに設けられた撮像部12102,12103の撮像範囲を示し、撮像範囲12114は、リアバンパ又はバックドアに設けられた撮像部12104の撮像範囲を示す。例えば、撮像部12101ないし12104で撮像された画像データが重ね合わせられることにより、車両12100を上方から見た俯瞰画像が得られる。
Note that FIG. 22 shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
撮像部12101ないし12104の少なくとも1つは、距離情報を取得する機能を有していてもよい。例えば、撮像部12101ないし12104の少なくとも1つは、複数の撮像素子からなるステレオカメラであってもよいし、位相差検出用の画素を有する撮像素子であってもよい。
At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup element having pixels for phase difference detection.
例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を基に、撮像範囲12111ないし12114内における各立体物までの距離と、この距離の時間的変化(車両12100に対する相対速度)を求めることにより、特に車両12100の進行路上にある最も近い立体物で、車両12100と略同じ方向に所定の速度(例えば、0km/h以上)で走行する立体物を先行車として抽出することができる。さらに、マイクロコンピュータ12051は、先行車の手前に予め確保すべき車間距離を設定し、自動ブレーキ制御(追従停止制御も含む)や自動加速制御(追従発進制御も含む)等を行うことができる。このように運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。
For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative velocity with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, the closest three-dimensional object on the traveling path of the vehicle 12100 that travels in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). Further, the microcomputer 12051 can set in advance the inter-vehicle distance to be secured from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control can be performed for the purpose of automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
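The following hedged sketch illustrates this kind of preceding-vehicle extraction: relative velocity is estimated from the temporal change in distance, and the nearest on-path object moving in roughly the same direction at or above a speed threshold is selected. The thresholds, lane-width check, and data layout are assumptions for illustration only.

```python
# Hedged sketch of preceding-vehicle extraction from per-object distance
# samples. Thresholds and the object dictionary layout are illustrative.
def extract_preceding_vehicle(objects, ego_speed_mps, dt_s,
                              min_speed_mps=0.0, lane_half_width_m=1.8):
    candidates = []
    for obj in objects:
        rel_speed = (obj["dist_m"] - obj["prev_dist_m"]) / dt_s  # >0: pulling away
        obj_speed = ego_speed_mps + rel_speed
        on_path = abs(obj["lateral_m"]) <= lane_half_width_m
        same_direction = obj_speed >= min_speed_mps
        if on_path and same_direction:
            candidates.append(obj)
    if not candidates:
        return None
    return min(candidates, key=lambda c: c["dist_m"])  # closest on-path object

objects = [
    {"id": 1, "dist_m": 25.0, "prev_dist_m": 25.4, "lateral_m": 0.3},
    {"id": 2, "dist_m": 18.0, "prev_dist_m": 18.0, "lateral_m": 3.5},  # other lane
]
print(extract_preceding_vehicle(objects, ego_speed_mps=20.0, dt_s=0.1))
```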
例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を元に、立体物に関する立体物データを、2輪車、普通車両、大型車両、歩行者、電柱等その他の立体物に分類して抽出し、障害物の自動回避に用いることができる。例えば、マイクロコンピュータ12051は、車両12100の周辺の障害物を、車両12100のドライバが視認可能な障害物と視認困難な障害物とに識別する。そして、マイクロコンピュータ12051は、各障害物との衝突の危険度を示す衝突リスクを判断し、衝突リスクが設定値以上で衝突可能性がある状況であるときには、オーディオスピーカ12061や表示部12062を介してドライバに警報を出力することや、駆動系制御ユニット12010を介して強制減速や回避操舵を行うことで、衝突回避のための運転支援を行うことができる。
For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the data, and use it for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can see and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving support for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
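As one hedged way of illustrating such a collision-risk check, the sketch below computes a time-to-collision from distance and closing speed and compares it with set values to decide between warning the driver and requesting forced deceleration; the thresholds and the time-to-collision criterion are illustrative assumptions, not disclosed values.

```python
# Hedged sketch of a collision-risk decision based on time-to-collision (TTC).
def collision_risk_action(distance_m, closing_speed_mps,
                          warn_ttc_s=2.5, brake_ttc_s=1.2):
    if closing_speed_mps <= 0:          # not closing on the obstacle
        return "none"
    ttc = distance_m / closing_speed_mps
    if ttc <= brake_ttc_s:
        return "forced_deceleration"    # via the drive system control unit
    if ttc <= warn_ttc_s:
        return "warn_driver"            # via audio speaker / display unit
    return "none"

print(collision_risk_action(distance_m=20.0, closing_speed_mps=10.0))  # warn_driver
```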
撮像部12101ないし12104の少なくとも1つは、赤外線を検出する赤外線カメラであってもよい。例えば、マイクロコンピュータ12051は、撮像部12101ないし12104の撮像画像中に歩行者が存在するか否かを判定することで歩行者を認識することができる。かかる歩行者の認識は、例えば赤外線カメラとしての撮像部12101ないし12104の撮像画像における特徴点を抽出する手順と、物体の輪郭を示す一連の特徴点にパターンマッチング処理を行って歩行者か否かを判別する手順によって行われる。マイクロコンピュータ12051が、撮像部12101ないし12104の撮像画像中に歩行者が存在すると判定し、歩行者を認識すると、音声画像出力部12052は、当該認識された歩行者に強調のための方形輪郭線を重畳表示するように、表示部12062を制御する。また、音声画像出力部12052は、歩行者を示すアイコン等を所望の位置に表示するように表示部12062を制御してもよい。
At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is carried out, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 serving as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the outline of an object to determine whether or not the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a rectangular contour line for emphasizing the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
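The sketch below is a deliberately simplified, hedged stand-in for this recognition step: a pedestrian-shaped template is matched against an infrared image by normalized cross-correlation, and a rectangular contour is reported at the best match when the score exceeds a threshold. The template, score threshold, and matching criterion are illustrative assumptions and do not reproduce the feature-point procedure of the disclosure.

```python
# Hedged sketch: template matching as a simplified pattern-matching step
# for pedestrian recognition in an infrared image.
import numpy as np

def find_pedestrian(ir_image, template, threshold=0.8):
    th, tw = template.shape
    best_score, best_pos = -1.0, None
    t = (template - template.mean()) / (template.std() + 1e-6)
    for y in range(ir_image.shape[0] - th + 1):
        for x in range(ir_image.shape[1] - tw + 1):
            patch = ir_image[y:y + th, x:x + tw].astype(float)
            p = (patch - patch.mean()) / (patch.std() + 1e-6)
            score = (p * t).mean()      # normalized cross-correlation
            if score > best_score:
                best_score, best_pos = score, (x, y, tw, th)
    return best_pos if best_score >= threshold else None  # (x, y, w, h) rectangle

rng = np.random.default_rng(1)
image = rng.normal(size=(40, 40))
template = image[10:26, 5:13].copy()     # pretend this is a pedestrian shape
print(find_pedestrian(image, template))  # -> (5, 10, 8, 16)
```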
以上、本開示に係る技術が適用され得る車両制御システムの一例について説明した。本開示に係る技術は、以上説明した構成のうち、撮像部12031に適用され得る。具体的には、図4の固体撮像装置10は、撮像部12031に適用することができる。撮像部12031に本開示に係る技術を適用することにより、高フレームレートで、かつ、フォーカルプレーン歪み等の影響による誤判定の少ない動き検出を実現することが可能になる。
The above is an example of a vehicle control system to which the technology according to the present disclosure can be applied. The technique according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above. Specifically, the solid-state image sensor 10 of FIG. 4 can be applied to the image pickup unit 12031. By applying the technique according to the present disclosure to the imaging unit 12031, it is possible to realize motion detection at a high frame rate and with less erroneous determination due to the influence of focal plane distortion or the like.
なお、本技術の実施の形態は、上述した実施の形態に限定されるものではなく、本技術の要旨を逸脱しない範囲において種々の変更が可能である。
It should be noted that the embodiment of the present technology is not limited to the above-described embodiment, and various changes can be made without departing from the gist of the present technology.
また、本技術は、以下のような構成をとることができる。
In addition, this technology can have the following configuration.
(1)
複数の画素を2次元状に配置した画素アレイ部を備え、
前記画素は、光電変換部と、前記光電変換部による光電変換で得られる画素信号をAD変換するAD変換部とを含み、
前記画素アレイ部では、前記複数の画素からのAD変換結果を読み出すに際して、一部の画素が間引かれる
固体撮像装置。
(2)
前記一部の画素を除いた前記複数の画素から読み出された前記AD変換結果から得られる間引き画像に基づいて、所定の信号処理を行う信号処理部をさらに備える
前記(1)に記載の固体撮像装置。
(3)
前記AD変換結果を、読み出し用の順序回路と画素ブロックに応じたクラスタ単位で読み出す読み出し部をさらに備える
前記(1)又は(2)に記載の固体撮像装置。
(4)
前記順序回路は、フリップフロップを含み、
前記読み出し部は、前記クラスタの段数に応じた数の前記フリップフロップを接続したシフトレジスタを含むリピータ回路を複数含み、
前記クラスタと前記フリップフロップとは、対になって接続される
前記(3)に記載の固体撮像装置。
(5)
前記リピータ回路は、所定のタイミングで順次選択される前記クラスタ内の前記画素から前記AD変換結果を読み出す
前記(4)に記載の固体撮像装置。
(6)
前記読み出し部は、前記複数の画素に応じた数のラッチ回路をさらに含み、
前記画素に含まれる前記AD変換部と前記ラッチ回路とは、対になって接続され、
前記AD変換部に含まれるコンパレータの反転出力が前記ラッチ回路に入力される
前記(4)又は(5)に記載の固体撮像装置。
(7)
前記読み出し部は、複数の前記リピータ回路の中から、間引き量に応じた前記リピータ回路を選択するセレクタをさらに含む
前記(4)ないし(6)のいずれかに記載の固体撮像装置。
(8)
前記信号処理部は、複数の前記間引き画像に基づいて、動き検出を行う
前記(2)に記載の固体撮像装置。
(9)
前記動き検出は、代表点マッチング法を用いた動き検出を含み、
前記信号処理部は、
特定の間引き画像をテンプレートブロックとして保持し、
順次得られる間引き画像をマッチングブロックとして、当該マッチングブロックの画素値と、保持した前記テンプレートブロックの画素値との相関をとることで、動きベクトルを検出する
前記(8)に記載の固体撮像装置。
(10)
第1の撮像よりも時間的に後の第2の撮像において、前記第1の撮像で検出した前記動きベクトルに基づいて、前記画素アレイ部に配置された前記複数の画素から読み出される前記AD変換結果の読み出し開始位置を選択して動き補正を行う
前記(9)に記載の固体撮像装置。
(11)
前記画素アレイ部では、2次元状に配置された前記複数の画素が、画素ブロック単位で規則的に間引かれる
前記(1)ないし(10)のいずれかに記載の固体撮像装置。
(12)
複数の画素を2次元状に配置した画素アレイ部を備え、
前記画素は、光電変換部と、前記光電変換部による光電変換で得られる画素信号をAD変換するAD変換部とを含み、
前記画素アレイ部では、前記複数の画素からのAD変換結果を読み出すに際して、一部の画素が間引かれる
固体撮像装置を搭載した電子機器。 (1)
A solid-state image sensor including a pixel array unit in which a plurality of pixels are arranged two-dimensionally, wherein the pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit, and in the pixel array unit, some pixels are thinned out when the AD conversion results from the plurality of pixels are read.
(2)
The solid-state image sensor according to (1) above, further comprising a signal processing unit that performs predetermined signal processing based on a thinned-out image obtained from the AD conversion results read from the plurality of pixels excluding the some pixels.
(3)
The solid-state image sensor according to (1) or (2) above, further comprising a reading unit that reads the AD conversion results in cluster units corresponding to a sequential circuit for readout and pixel blocks.
(4)
The sequential circuit includes a flip-flop.
The read unit includes a plurality of repeater circuits including a shift register to which the flip-flops are connected in a number corresponding to the number of stages of the cluster.
The solid-state image sensor according to (3) above, wherein the cluster and the flip-flop are connected in pairs.
(5)
The solid-state image sensor according to (4), wherein the repeater circuit reads out the AD conversion result from the pixels in the cluster that are sequentially selected at a predetermined timing.
(6)
The reading unit further includes a number of latch circuits corresponding to the plurality of pixels.
The AD conversion unit included in the pixel and the latch circuit are connected in pairs.
The solid-state image sensor according to (4) or (5), wherein the inverted output of the comparator included in the AD conversion unit is input to the latch circuit.
(7)
The solid-state image sensor according to any one of (4) to (6) above, wherein the reading unit further includes a selector for selecting the repeater circuit according to the thinning amount from the plurality of repeater circuits.
(8)
The solid-state image sensor according to (2) above, wherein the signal processing unit detects motion based on the plurality of thinned-out images.
(9)
The motion detection includes motion detection using a representative point matching method.
The signal processing unit
Hold a specific thinned image as a template block,
The solid-state image sensor according to (8) above, wherein the motion vector is detected by correlating the pixel values of the matching block with the pixel values of the held template block, using the thinned images obtained in sequence as the matching block.
(10)
The solid-state image sensor according to (9) above, wherein, in second imaging that is temporally later than first imaging, motion correction is performed by selecting a readout start position of the AD conversion results read from the plurality of pixels arranged in the pixel array unit, based on the motion vector detected in the first imaging.
(11)
The solid-state imaging device according to any one of (1) to (10), wherein in the pixel array unit, the plurality of pixels arranged in a two-dimensional manner are regularly thinned out in pixel block units.
(12)
An electronic device equipped with a solid-state image sensor including a pixel array unit in which a plurality of pixels are arranged two-dimensionally, wherein the pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit, and in the pixel array unit, some pixels are thinned out when the AD conversion results from the plurality of pixels are read.
10 固体撮像装置, 100A 画素基板, 100B ロジック基板, 101 画素アレイ部, 102 リピータ部, 103 GC発生部, 104 SRAM, 105 信号処理部, 106 動き判定用メモリ, 107 DAC, 108 ラッチ・リピータ部, 111 画素, 121 クラスタ, 131 リピータ, 132 リピータセレクタ, 141 シフトレジスタ, 142 フリップフロップ, 151 ADC, 152 ラッチ回路, 161 画素回路, 162 差動増幅回路, 163 正帰還回路, 164 多重回路, 1000 電子機器, 1012 固体撮像装置
10 Solid-state imaging device, 100A pixel board, 100B logic board, 101 pixel array part, 102 repeater part, 103 GC generation part, 104 SRAM, 105 signal processing part, 106 motion judgment memory, 107 DAC, 108 latch repeater part, 111 pixels, 121 clusters, 131 repeaters, 132 repeater selectors, 141 shift registers, 142 flip-flops, 151 ADCs, 152 latch circuits, 161 pixel circuits, 162 differential amplification circuits, 163 positive feedback circuits, 164 multiple circuits, 1000 electronic devices. , 1012 Solid-state imaging device
Claims (12)
- 複数の画素を2次元状に配置した画素アレイ部を備え、
前記画素は、光電変換部と、前記光電変換部による光電変換で得られる画素信号をAD変換するAD変換部とを含み、
前記画素アレイ部では、前記複数の画素からのAD変換結果を読み出すに際して、一部の画素が間引かれる
固体撮像装置。 It is equipped with a pixel array unit in which a plurality of pixels are arranged two-dimensionally.
The pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit.
A solid-state image sensor, wherein in the pixel array unit, some pixels are thinned out when the AD conversion results from the plurality of pixels are read. - 前記一部の画素を除いた前記複数の画素から読み出された前記AD変換結果から得られる間引き画像に基づいて、所定の信号処理を行う信号処理部をさらに備える
請求項1に記載の固体撮像装置。 The solid-state image sensor according to claim 1, further comprising a signal processing unit that performs predetermined signal processing based on a thinned-out image obtained from the AD conversion results read from the plurality of pixels excluding the some pixels. - 前記AD変換結果を、読み出し用の順序回路と画素ブロックに応じたクラスタ単位で読み出す読み出し部をさらに備える
請求項1に記載の固体撮像装置。 The solid-state image sensor according to claim 1, further comprising a reading unit that reads the AD conversion results in cluster units corresponding to a sequential circuit for readout and pixel blocks. - 前記順序回路は、フリップフロップを含み、
前記読み出し部は、前記クラスタの段数に応じた数の前記フリップフロップを接続したシフトレジスタを含むリピータ回路を複数含み、
前記クラスタと前記フリップフロップとは、対になって接続される
請求項3に記載の固体撮像装置。 The sequential circuit includes a flip-flop.
The read unit includes a plurality of repeater circuits including a shift register to which the flip-flops are connected in a number corresponding to the number of stages of the cluster.
The solid-state image sensor according to claim 3, wherein the cluster and the flip-flop are connected in pairs. - 前記リピータ回路は、所定のタイミングで順次選択される前記クラスタ内の前記画素から前記AD変換結果を読み出す
請求項4に記載の固体撮像装置。 The solid-state image sensor according to claim 4, wherein the repeater circuit reads the AD conversion result from the pixels in the cluster that are sequentially selected at a predetermined timing. - 前記読み出し部は、前記複数の画素に応じた数のラッチ回路をさらに含み、
前記画素に含まれる前記AD変換部と前記ラッチ回路とは、対になって接続され、
前記AD変換部に含まれるコンパレータの反転出力が前記ラッチ回路に入力される
請求項4に記載の固体撮像装置。 The reading unit further includes a number of latch circuits corresponding to the plurality of pixels.
The AD conversion unit included in the pixel and the latch circuit are connected in pairs.
The solid-state image sensor according to claim 4, wherein the inverted output of the comparator included in the AD conversion unit is input to the latch circuit. - 前記読み出し部は、複数の前記リピータ回路の中から、間引き量に応じた前記リピータ回路を選択するセレクタをさらに含む
請求項4に記載の固体撮像装置。 The solid-state image sensor according to claim 4, wherein the reading unit further includes a selector that selects the repeater circuit according to the thinning amount from the plurality of repeater circuits. - 前記信号処理部は、複数の前記間引き画像に基づいて、動き検出を行う
請求項2に記載の固体撮像装置。 The solid-state image sensor according to claim 2, wherein the signal processing unit detects motion based on the plurality of thinned-out images. - 前記動き検出は、代表点マッチング法を用いた動き検出を含み、
前記信号処理部は、
特定の間引き画像をテンプレートブロックとして保持し、
順次得られる間引き画像をマッチングブロックとして、当該マッチングブロックの画素値と、保持した前記テンプレートブロックの画素値との相関をとることで、動きベクトルを検出する
請求項8に記載の固体撮像装置。 The motion detection includes motion detection using a representative point matching method.
The signal processing unit
Hold a specific thinned image as a template block,
The solid-state image sensor according to claim 8, wherein the thinned-out images obtained in sequence are used as matching blocks, and the motion vector is detected by correlating the pixel value of the matching block with the pixel value of the held template block. - 第1の撮像よりも時間的に後の第2の撮像において、前記第1の撮像で検出した前記動きベクトルに基づいて、前記画素アレイ部に配置された前記複数の画素から読み出される前記AD変換結果の読み出し開始位置を選択して動き補正を行う
請求項9に記載の固体撮像装置。 The solid-state image sensor according to claim 9, wherein, in second imaging that is temporally later than first imaging, motion correction is performed by selecting a readout start position of the AD conversion results read from the plurality of pixels arranged in the pixel array unit, based on the motion vector detected in the first imaging. - 前記画素アレイ部では、2次元状に配置された前記複数の画素が、画素ブロック単位で規則的に間引かれる
請求項1に記載の固体撮像装置。 The solid-state image sensor according to claim 1, wherein in the pixel array unit, the plurality of pixels arranged in a two-dimensional manner are regularly thinned out in pixel block units. - 複数の画素を2次元状に配置した画素アレイ部を備え、
前記画素は、光電変換部と、前記光電変換部による光電変換で得られる画素信号をAD変換するAD変換部とを含み、
前記画素アレイ部では、前記複数の画素からのAD変換結果を読み出すに際して、一部の画素が間引かれる
固体撮像装置を搭載した電子機器。 It is equipped with a pixel array unit in which a plurality of pixels are arranged two-dimensionally.
The pixel includes a photoelectric conversion unit and an AD conversion unit that AD-converts a pixel signal obtained by photoelectric conversion by the photoelectric conversion unit.
An electronic device equipped with a solid-state image sensor in which, in the pixel array unit, some pixels are thinned out when the AD conversion results from the plurality of pixels are read.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019063262A JP2020167441A (en) | 2019-03-28 | 2019-03-28 | Solid-state image pickup device and electronic apparatus |
JP2019-063262 | 2019-03-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020195936A1 true WO2020195936A1 (en) | 2020-10-01 |
Family
ID=72609410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/011056 WO2020195936A1 (en) | 2019-03-28 | 2020-03-13 | Solid-state imaging device and electronic apparatus |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2020167441A (en) |
WO (1) | WO2020195936A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112417967B (en) * | 2020-10-22 | 2021-12-14 | 腾讯科技(深圳)有限公司 | Obstacle detection method, obstacle detection device, computer device, and storage medium |
JP2023001788A (en) * | 2021-06-21 | 2023-01-06 | ソニーセミコンダクタソリューションズ株式会社 | Imaging apparatus and electronic equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012085233A (en) * | 2010-10-14 | 2012-04-26 | Sharp Corp | Video processing device, video processing method, and program |
JP2016184843A (en) * | 2015-03-26 | 2016-10-20 | ソニー株式会社 | Image sensor, processing method, and electronic apparatus |
WO2017018215A1 (en) * | 2015-07-27 | 2017-02-02 | ソニー株式会社 | Solid-state imaging device, control method therefor, and electronic apparatus |
- 2019-03-28: JP JP2019063262A patent/JP2020167441A/en active Pending
- 2020-03-13: WO PCT/JP2020/011056 patent/WO2020195936A1/en active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012085233A (en) * | 2010-10-14 | 2012-04-26 | Sharp Corp | Video processing device, video processing method, and program |
JP2016184843A (en) * | 2015-03-26 | 2016-10-20 | ソニー株式会社 | Image sensor, processing method, and electronic apparatus |
WO2017018215A1 (en) * | 2015-07-27 | 2017-02-02 | ソニー株式会社 | Solid-state imaging device, control method therefor, and electronic apparatus |
Also Published As
Publication number | Publication date |
---|---|
JP2020167441A (en) | 2020-10-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7171199B2 (en) | Solid-state imaging device and electronic equipment | |
US10939062B2 (en) | Solid-state imaging apparatus and electronic equipment | |
JP2022028982A (en) | Solid-state imaging device, signal processing chip, and electronic apparatus | |
WO2020110484A1 (en) | Solid-state image sensor, imaging device, and control method of solid-state image sensor | |
JP2018036102A (en) | Distance measurement device and method of controlling distance measurement device | |
JP7370413B2 (en) | Solid-state imaging devices and electronic equipment | |
JP2020072471A (en) | Solid state image sensor, imaging apparatus, and control method of solid state image sensor | |
WO2018190126A1 (en) | Solid-state imaging device and electronic apparatus | |
JP2020088722A (en) | Solid-state imaging element and imaging device | |
US20210218923A1 (en) | Solid-state imaging device and electronic device | |
WO2017163890A1 (en) | Solid state imaging apparatus, method for driving solid state imaging apparatus, and electronic device | |
WO2018074085A1 (en) | Rangefinder and rangefinder control method | |
WO2022019026A1 (en) | Information processing device, information processing system, information processing method, and information processing program | |
WO2020195936A1 (en) | Solid-state imaging device and electronic apparatus | |
WO2020100427A1 (en) | Solid-state image capture element, image capture device, and method for controlling solid-state image capture element | |
JPWO2019035369A1 (en) | Solid-state imaging device and driving method thereof | |
WO2019193801A1 (en) | Solid-state imaging element, electronic apparatus, and method for controlling solid-state imaging element | |
WO2019167608A1 (en) | Sensor element and electronic device | |
US20230162468A1 (en) | Information processing device, information processing method, and information processing program | |
WO2018139187A1 (en) | Solid-state image capturing device, method for driving same, and electronic device | |
WO2020090459A1 (en) | Solid-state imaging device and electronic equipment | |
WO2018211985A1 (en) | Imaging element, method for controlling imaging element, imaging device, and electronic apparatus | |
JP2020205507A (en) | Solid-state imaging element, imaging apparatus, and control method for solid-state imaging element | |
WO2022158246A1 (en) | Imaging device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20777443 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20777443 Country of ref document: EP Kind code of ref document: A1 |