WO2023219045A1 - Light-receiving device, control method, and distance measuring system - Google Patents


Info

Publication number
WO2023219045A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase difference
pixel
light
measurement
unit
Prior art date
Application number
PCT/JP2023/017181
Other languages
French (fr)
Japanese (ja)
Inventor
Yusuke Takatsuka (高塚 祐輔)
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2023219045A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C3/00: Measuring distances in line of sight; Optical rangefinders
    • G01C3/02: Details
    • G01C3/06: Use of electric means to obtain final indication
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/495: Counter-measures or counter-counter-measures using electronic or electro-optical means

Definitions

  • the present disclosure relates to a light receiving device, a control method, and a ranging system.
  • a light-receiving device is known that uses, as its light-receiving element, an element that generates a signal in response to photon reception.
  • measurement light is emitted from a light source toward the object, and is reflected by the object.
  • ToF (Time of Flight)
  • the distance measuring system may perform erroneous measurements in response to incident light originating from sources other than the measurement light.
  • the present disclosure provides a light receiving device, a control method, and a distance measuring system that can suppress the influence of incident light other than measurement light.
  • there is provided a light receiving device including: a pixel array unit having a measurement pixel used to measure the distance to a target object, and a plurality of paired phase difference pixels that pupil-divide incident light from the target object and detect a phase difference; a first distance measuring unit that generates a first distance value to the target object based on information regarding the difference between the timing at which the measurement pixel receives a photon and a predetermined time; and a second distance measuring unit that generates a second distance value to the target object based on information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
  • the measurement pixel has a photoelectric conversion unit that performs photoelectric conversion according to incident photons
  • the phase difference pixel may include a photoelectric conversion section that performs photoelectric conversion according to incident photons.
  • the photoelectric conversion section may be a single photon avalanche photodiode (SPAD).
  • SPAD (single photon avalanche photodiode)
  • a control unit that sets the drive period of the measurement pixel according to a second distance based on the output signals of the plurality of phase difference pixels may further be provided.
  • the first distance measuring unit generates the first distance value according to a value of a histogram having appearance frequency information of a difference value between a timing at which the measurement pixel receives a photon and a predetermined time
  • the second distance measuring section may generate the second distance value according to the phase difference using a signal value corresponding to the number of photons incident on each of the plurality of phase difference pixels.
  • the measurement pixel has a plurality of the photoelectric conversion units,
  • the measurement pixel and the plurality of paired phase difference pixels may constitute a unit, and a first conversion section corresponding to the unit may further be provided,
  • the first conversion unit may generate a reception signal having information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time.
  • the control unit may control the amount of laser light emitted by the first infrared pulse laser according to the second distance.
  • the plurality of phase difference pixels may generate a signal according to the laser light irradiated by the second infrared pulse laser.
  • the pixel array section and the first conversion section may be stacked, and the first conversion section corresponding to the unit may be arranged directly below the unit.
  • the second conversion unit may generate a second reception signal having information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
  • the second conversion unit may generate a third reception signal having information regarding the difference between the timing at which the phase difference pixel receives a photon and a predetermined time, and may generate the second reception signal according to the number of the third reception signals.
  • the measurement pixel and the plurality of paired phase difference pixels may generate signals in response to photons received during the same time period.
  • the measurement pixel and the plurality of paired phase difference pixels may be configured on different semiconductor chips.
  • the first chip including the measurement pixel, the second chip including the paired phase difference pixels, and the first infrared pulse laser may be arranged in an L-shape.
  • a pixel array section composed of a plurality of pixels; and a control unit that controls the pixel array section,
  • the pixel array section includes: a plurality of paired phase difference pixels that pupil-divide incident light from a target object and detect a phase difference; and a measurement pixel used to measure the distance to the target object,
  • the control unit may set a driving period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
  • the phase difference pixel has a photoelectric conversion unit that receives visible light and performs photoelectric conversion
  • the measurement pixel may include a photoelectric conversion unit that performs photoelectric conversion according to incident photons.
  • the photoelectric conversion section may further include an antireflection section on the incident side.
  • the photoelectric conversion section may have an on-chip lens formed of a high refractive index material on the incident side.
  • the photoelectric conversion section includes an on-chip lens and a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion,
  • the diffusion layer may be arranged according to the position of the main optical axis of the on-chip lens.
  • in the phase difference pixel, the photoelectric conversion section includes an aperture over the incident area and a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion,
  • the diffusion layer may be arranged according to the position of the aperture.
  • for two phase difference pixels among the plurality of phase difference pixels forming the pair, an elliptical on-chip lens may be provided over the photoelectric conversion sections of the two phase difference pixels.
  • the photoelectric conversion units of the two measurement pixels may include a circular on-chip lens disposed between the two measurement pixels.
  • the photoelectric conversion section of the phase difference pixel receives light through a color filter that transmits visible light
  • the photoelectric conversion section of the measurement pixel may receive light through a color filter that transmits infrared light.
  • a plurality of paired phase difference pixels that pupil-divide incident light from a target object and detect a phase difference;
  • a light receiving device including a lens that condenses incident light from the object.
  • FIG. 1 is a block diagram schematically showing an example of a schematic configuration of a ranging system.
  • FIG. 2 is a diagram showing a configuration example of a plurality of units arranged in the pixel array section.
  • FIG. 3 is a circuit diagram showing a configuration example of a measurement pixel.
  • FIG. 4 is a circuit diagram showing a configuration example of the phase difference pixel arranged on the left side of FIG. 3.
  • FIG. 5 is a circuit diagram showing a configuration example of the phase difference pixel arranged on the right side of FIG. 3.
  • FIG. 6 is a diagram showing an example of the circuit configuration of a reference pixel unit.
  • FIG. 7 is a schematic cross-sectional diagram of SPADs formed in the pixel array section.
  • FIG. 8 is a diagram showing an example of the arrangement of the TDC and the counter arranged on the substrate.
  • FIG. 9 is a block diagram showing a configuration example of the control unit.
  • FIG. 10 is a diagram schematically showing a state in which dot-pattern pulsed light reaches the light receiving surface of the pixel array section.
  • FIG. 11 is a diagram schematically showing a processing example of the TDC of a unit.
  • FIG. 12 is a diagram schematically showing a visible image and a ranging image.
  • FIG. 13 is a diagram illustrating phase difference detection.
  • FIG. 14 is a diagram showing the positions of the phase difference pixels on one horizontal axis of the pixel array section and the counter values.
  • FIG. 15 is a diagram schematically showing an example of interfering light.
  • FIG. 16 is a diagram showing an example of the histogram of a reference pixel and the histogram of a measurement pixel when not under attack in FIG. 15.
  • FIG. 17 is a diagram showing the histogram of a reference pixel and the histogram of a measurement pixel during the attack in FIG. 15.
  • FIG. 18 is a diagram showing the positions of the phase difference pixels on one horizontal axis of the pixel array section and the counter values when interference light is received.
  • FIG. 19 is a timing chart showing an example of control by the comprehensive control unit.
  • FIG. 20 is a flowchart showing an example of control by the comprehensive control unit.
  • FIG. 21A is a schematic diagram illustrating a simplified cross section of a SPAD.
  • FIG. 21B is a schematic diagram showing a cross section of a SPAD provided with an antireflection section.
  • FIG. 22 is a schematic diagram in which the position of the diffusion layer is changed depending on the position of the on-chip lens or the aperture.
  • FIG. 23 is a schematic diagram showing a configuration example of the pixel array section.
  • FIG. 24 is a schematic diagram showing a configuration example of the pixel array section according to Modification 4 of the first embodiment.
  • FIG. 25 is a diagram schematically showing an example of changing the amount of light according to a distance value obtained by the phase difference method.
  • FIG. 26 is a flowchart showing an example of control by the comprehensive control unit according to the second embodiment.
  • FIG. 27 is a block diagram showing a configuration example of the control unit according to the third embodiment.
  • FIG. 28 is a diagram schematically showing a processing example of a unit according to the third embodiment.
  • FIG. 29 is a diagram showing histograms corresponding to the outputs of the phase difference pixels generated by the histogram generation unit.
  • FIG. 30 is a diagram illustrating phase difference detection according to the fourth embodiment.
  • A diagram showing the relationship between the focal positions of a pair of phase difference pixels and the output values.
  • A diagram showing the relationship between the focal position of a phase difference pixel and the output value when interference light is received.
  • A timing chart showing an example of control by the comprehensive control unit according to the fourth embodiment.
  • A flowchart showing an example of control by the comprehensive control unit according to the fourth embodiment.
  • A flowchart showing an example of control by the comprehensive control unit according to the fifth embodiment.
  • A block diagram schematically showing a configuration example of a ranging system according to the sixth embodiment.
  • A block diagram schematically showing a configuration example of a ranging system according to the seventh embodiment.
  • A block diagram schematically showing a configuration example of a ranging system according to a comparative example.
  • A schematic cross-sectional diagram of SPADs formed in a pixel array section.
  • A diagram showing an example of a planar arrangement of phase difference pixels and measurement pixels formed in a pixel array section.
  • A block diagram showing an example of a schematic configuration of a vehicle control system.
  • An explanatory diagram showing an example of installation positions of an outside-vehicle information detection section and an imaging section.
  • FIG. 1 is a block diagram schematically showing an example of a schematic configuration of a ranging system 1 to which the present technology is applied.
  • the distance measuring system 1 according to the first embodiment includes a light receiving device 10, a control device 22, a display device 24, an operating device 26, a lens 30 on the output side, and a lens 40 on the input side. FIG. 1 further illustrates a measurement target 50.
  • the light receiving device 10 includes a substrate 11, a pixel array section 12, a circuit section 13, and an infrared pulsed laser 14.
  • the control unit 20 includes, for example, a CPU, and controls the light receiving device 10.
  • the control device 22 is a device that controls the light receiving device 10 according to the operation of the operating device 26.
  • the display device 24 is, for example, a monitor, and displays the image generated by the light receiving device 10.
  • the operating device 26 includes a keyboard, a mouse, etc., and inputs an operating signal from an operator to the control device 22.
  • the ranging system 1 irradiates the target object 50 with light emitted by the infrared pulsed laser 14 via the lens 30.
  • the system measures the distance to the object 50 using the light reflected by the object 50 and incident on the pixel array section 12 via the lens 40. Note that details of the control section 20 will be described later.
  • the pixel array section 12 is mounted on the substrate 11 and includes a plurality of measurement pixels 110i, a plurality of phase difference pixels 110z, and a plurality of reference pixels 110r.
  • Each pixel of the pixel array section 12 is formed as a semiconductor element using, for example, a single photon avalanche photodiode (SPAD).
  • the circuit section 13 is a circuit that controls the pixel array section 12 and performs signal processing.
  • the measurement pixel 110i is a pixel used for ToF (Time of Flight), which measures the time it takes for the measurement light irradiated toward the distance measurement object to be reflected by the distance measurement object and return.
  • the phase difference pixel 110z pupil-divides the incident light from the object 50 and detects an image plane phase difference.
  • the phase difference pixel 110z is a pixel used for PDAF (Phase Detection Auto Focus).
  • the phase difference pixel 110z formed using SPAD may be referred to as a PDAF pixel.
  • the reference pixel 110r is used to measure the emission timing of the infrared pulsed laser 14.
  • measurement using ToF may be referred to as the ToF method
  • measurement using the phase difference pixel 110z may be referred to as a phase difference method.
  • the infrared pulsed laser 14 is mounted on the substrate 11.
  • the infrared pulsed laser 14 is formed using, for example, a VCSEL (Vertical Cavity Surface Emitting Laser) light source. Furthermore, by disposing a diffractive optical element (DOE) 14a on the irradiation side of the infrared pulsed laser 14, it is possible to irradiate the object 50 with a dot pattern of spot light arranged in a matrix of, for example, 100 points.
  • the wavelength of the measurement light emitted by the infrared pulsed laser 14 is, for example, 850 [nm]. Measurement light is irradiated in synchronization with a light emission control signal input from the control unit 20.
  • the reference pixel 110r within the pixel array section 12 receives pulsed light emitted by the infrared pulsed laser 14.
  • pulsed light emitted by the infrared pulsed laser 14 is guided to the reference pixel 110r via a light guide member.
  • the pulsed light emitted through the lens 30 enters each of the plurality of measurement pixels 110i through the lens 40 at different timings.
  • the plurality of phase difference pixels 110z each receive the pulsed light emitted through the lens 30 through the lens 40.
  • the plurality of phase difference pixels 110z receive pulsed light through left and right or upper and lower pairs of apertures, as described later. Thereby, the pair of phase difference pixels 110z can detect an image plane phase difference according to the distance to the object 50.
  • the paired phase difference pixels 110z can measure the distance to the object 50, without using the plurality of reference pixels 110r, at a timing different from the measurement using the ToF method or at a time overlapping with the measurement using the ToF method.
  • FIG. 2 is a diagram showing a configuration example of a plurality of units 11u arranged in the pixel array section 12.
  • a plurality of units 11u are arranged in a matrix.
  • a reference pixel unit 166u in which the reference pixels 110r are arranged is configured at an end of the pixel array section 12.
  • the unit 11u has a measurement pixel 110i made up of SPADs 331 to 334, and phase difference pixels 110z made up of SPADs 335 and 336.
  • a pixel with an aperture WL in the right half of the incident area of the SPAD 335 is defined as a phase difference pixel 110zL
  • a pixel with an aperture WR in the left half of the incident area of the SPAD 336 is defined as a phase difference pixel 110zR.
  • FIG. 3 is a circuit diagram showing an example of the configuration of the measurement pixel 110i.
  • This measurement pixel 110i includes a quench/detection circuit 310, selection transistors 321 to 324, and SPADs 331 to 334.
  • Quench/detection circuit 310 includes a resistor 311 and an inverter 312.
  • pMOS (p-channel Metal Oxide Semiconductor)
  • logic signals 309-1 to 309-4 are input from the control unit 20 to the gates of the corresponding selection transistors 321 to 324.
  • the SPADs (single photon avalanche photodiodes) 331 to 334 photoelectrically convert incident photons and avalanche-multiply the generated carriers to produce a current.
  • a negative bias VRLD is applied to the anodes of the SPADs 331 to 334.
  • the cathode of SPAD 331 is connected to the drain of selection transistor 321, and the cathode of SPAD 332 is connected to the drain of selection transistor 322. Further, the cathode of the SPAD 333 is connected to the drain of the selection transistor 323, and the cathode of the SPAD 334 is connected to the drain of the selection transistor 324.
  • the sources of the selection transistors 321 to 324 are commonly connected to a common node 319. Furthermore, a selection signal XSEL1 from the control section 20 is input to the gates of the selection transistors 321 to 324 via selection lines 309-1 to 309-4. By varying the timing of the selection signal XSEL1, it is also possible to drive the SPADs 331 to 334 as individual pixels. When the selection signal XSEL1 becomes high level and the selection transistors 321 to 324 are conductive, the potentials VK_1 to VK_4 of the SPADs 331 to 334 are conducted to the common node 319.
  • a resistor 311 is inserted between a node of a predetermined power supply voltage VDD and a common node 319.
  • the inverter 312 generates a pulse signal TOUT based on the potential of the common node 319 and supplies it to a TDC (Time-to-Digital Converter) 170. Further, the resistor 311 lowers the potential of the common node 319 that exceeds a predetermined potential due to avalanche multiplication, thereby suppressing avalanche multiplication.
  • the TDC 170 corresponds to the first conversion unit.
  • the TDC 170 converts the light reception timing into a digital value based on the pulse signal TOUT.
  • the circuit section 13 includes a clock circuit.
  • the TDC 170 uses the information of the clock circuit to convert the time difference between the reference time and the input time of the pulse signal TOUT into a digital value, using the measurement start time as the reference time.
  • This TDC 170 supplies digital values to the control section 20.
  • the TDC 170 can be configured for each unit 11u. Note that in the following description, the circuit configuration except for SPAD 331 to SPAD 340 is configured within the circuit section 13.
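  • to make the conversion above concrete, the following is a minimal sketch of the TDC behavior in Python; the resolution TDC_LSB_NS and the function name are illustrative assumptions, not values from this disclosure.

```python
# Minimal model of the TDC described above (assumed resolution).
TDC_LSB_NS = 0.25  # assumed width of one TDC code step, in nanoseconds

def tdc_convert(t_ref_ns: float, t_tout_ns: float) -> int:
    """Convert the delay between the reference time (measurement start)
    and the arrival of the pulse signal TOUT into a digital value."""
    delay_ns = t_tout_ns - t_ref_ns
    if delay_ns < 0:
        raise ValueError("TOUT arrived before the reference time")
    return int(delay_ns / TDC_LSB_NS)  # digital value proportional to the delay

# Example: a TOUT pulse arriving 12.6 ns after measurement start yields code 50.
print(tdc_convert(0.0, 12.6))  # -> 50
```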
  • FIG. 4 is a circuit diagram showing a configuration example of the phase difference pixel 110zL arranged on the left side of FIG. 3.
  • This phase difference pixel 110zL includes a quench/detection circuit 310, a selection transistor 325, and a SPAD 335. Furthermore, the logic signal 309-5 is input to the gate of the corresponding selection transistor 325.
  • a negative bias VRLD is applied to the anode of the SPAD 335.
  • the cathode of SPAD 335 is connected to the drain of select transistor 325.
  • the source of selection transistor 325 is connected to node 319.
  • the selection signal XSEL2 from the control section 20 is input to the gate of the selection transistor 325 via the selection line 309-5.
  • Inverter 312 generates a pulse signal COUT based on the potential of node 319 and supplies it to counter 172. Note that the counter 172 according to this embodiment corresponds to the second conversion section.
  • the counter 172 counts the number of photons incident on the SPAD 335 based on the pulse signal COUT.
  • This counter 172 counts the number of pulses of the pulse signal COUT as a value corresponding to the number of photons, and supplies the counted value to the control unit 20. In this manner, when the selection signal XSEL2 is at a high level and the selection transistor 325 is conductive, the potential of the SPAD 335 is conducted to the node 319. Further, the counter 172 can be configured for each unit 11u.
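  • as a minimal sketch, the counting behavior described above can be modeled as follows; the class and method names are illustrative assumptions.

```python
class PhotonCounter:
    """Minimal model of the counter 172: each pulse of the pulse signal
    COUT increments a value corresponding to the number of photons."""

    def __init__(self) -> None:
        self.value = 0

    def on_cout_pulse(self) -> None:
        # Called once per COUT pulse while the selection signal is high.
        self.value += 1

    def read_and_reset(self) -> int:
        # Supply the counted value to the control unit, then clear it.
        value, self.value = self.value, 0
        return value
```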
  • FIG. 5 is a circuit diagram showing a configuration example of the phase difference pixel 110zR arranged on the right side of FIG. 3.
  • This phase difference pixel 110zR has the same configuration as the phase difference pixel 110zL. That is, the phase difference pixel 110zR includes a quench/detection circuit 310, a selection transistor 326, and a SPAD 336. Furthermore, the logic signal 309-6 is input to the gate of the corresponding selection transistor 326.
  • a negative bias VRLD is applied to the anode of the SPAD 336.
  • the cathode of SPAD 336 is connected to the drain of select transistor 326.
  • the source of selection transistor 326 is connected to node 319.
  • the selection signal XSEL3 from the control section 20 is input to the gate of the selection transistor 326 via the selection line 309-6.
  • Inverter 312 generates a pulse signal COUT based on the potential of node 319 and supplies it to counter 172.
  • the counter 172 counts the number of photons incident on the SPAD 336 based on the pulse signal COUT.
  • This counter 172 counts the number of pulses of the pulse signal COUT as a value corresponding to the number of photons, and supplies the counted value to the control unit 20. In this manner, when the selection signal XSEL3 is at a high level and the selection transistor 326 is conductive, the potential of the SPAD 336 is conducted to the node 319.
  • FIG. 6 is a diagram showing an example of the circuit configuration of the reference pixel unit 166u.
  • the reference pixel 110r also has the same configuration as the measurement pixel 110i. That is, this reference pixel 110r includes a quench/detection circuit 310, selection transistors 327 to 330, and SPADs 337 to 340.
  • Quench/detection circuit 310 includes a resistor 311 and an inverter 312. Furthermore, logic signals 309-7 to 309-10 are input to the gates of corresponding selection transistors 327 to 330.
  • the sources of the selection transistors 327 to 330 are commonly connected to the common node 319. Furthermore, a selection signal XSEL0 from the control section 20 is input to the gates of the selection transistors 327 to 330 via selection lines 309-7 to 309-10. When the selection signal XSEL0 is at a high level and the selection transistors 327 to 330 are conductive, the potentials VK_1 to VK_4 of the SPADs 337 to 340 are conducted to the common node 319.
  • FIG. 7 is a schematic cross-sectional view of SPADs 331, 332, 335, and 336 formed in the pixel array section 12.
  • the SPADs 331 and 332 correspond to two of the four pixels constituting the measurement pixel 110i, and the SPADs 335 and 336 correspond to the phase difference pixel 110zL and the phase difference pixel 110zR, respectively (see FIG. 2).
  • the SPAD 331 and the SPAD 332 have the same configuration. Further, the SPADs 333 and 334 also have the same configuration as the SPADs 331 and 332. That is, the four pixels forming the measurement pixel 110i have the same configuration.
  • the phase difference pixel 110zL and the phase difference pixel 110zR are different from the measurement pixel 110i in that they have apertures WL and WR, respectively.
  • the apertures WL and WR are constituted by light shielding members and pupil-divide the incident light beams L1 and L2 from the object 50, respectively.
  • the SPADs 337 to 340 also have the same configuration as the SPADs 331 and 332.
  • the SPAD 331 includes an on-chip lens 1110, an N well 1112, a diffusion layer 1114, a metal wiring 1116, a metal pad 1118, and an inter-pixel isolation section 1120. Note that detailed description of the wiring layer is omitted. Note that the SPADs 331 to 340 can have a general configuration and are not limited to the configuration shown in FIG. 7.
  • the on-chip lens 1110 focuses the light incident through the lens 40 into the N-well 1112.
  • the N-well 1112 is formed by controlling the impurity concentration of the sensor substrate to be n-type, and forms an electric field that transfers electrons generated by photoelectric conversion in the SPAD to the avalanche multiplication region.
  • the on-chip lens 1110 may be made of, for example, a high refractive index material.
  • amorphous silicon, SiN, etc. can be used as the high refractive index material.
  • this improves the sensitivity to infrared light, improves the count rate of the phase difference pixel 110zL, the phase difference pixel 110zR, and the measurement pixel 110i, and improves the light collection efficiency of the phase difference pixels 110zL and 110zR.
  • the diffusion layer 1114 is composed of a P-type diffusion layer, an N-type diffusion layer, a hole accumulation layer, a pinning layer, and a high concentration P-type diffusion layer.
  • an avalanche multiplication region is formed by a depletion layer formed in a region where a P-type diffusion layer and an N-type diffusion layer are connected.
  • the metal wiring 1116 is formed wider than the diffusion layer 1114 so as to cover, for example, the avalanche multiplication region.
  • the metal pad 1118, formed of metal (Cu), is used to electrically and mechanically bond to the metal pads formed in the logic-side wiring layer.
  • the inter-pixel isolation section 1120 insulates and isolates each SPAD by a double structure of a metal film and an insulating film formed between adjacent SPADs.
  • FIG. 8 is a diagram showing an example of the arrangement of the TDC 170 and the counter 172 arranged on the substrate 11.
  • the TDC 170 includes a TDC 170a arranged at the left end of the pixel array section 12 and a TDC 170b arranged at the right end of the pixel array section 12.
  • the counter 172 includes a counter 172a placed at the left end of the pixel array section 12 and a counter 172b placed at the right end of the pixel array section 12.
  • the circuit section 13 includes, for example, two TDCs 170 and two counters 172.
  • the TDC 170a is wired to a plurality of units 11u in the left half of the pixel array section 12, and the TDC 170b is wired to a plurality of units 11u in the right half of the pixel array section 12.
  • the counter 172a is wired to a plurality of units 11u in the left half of the pixel array section 12, and the counter 172b is wired to a plurality of units 11u in the right half of the pixel array section 12. This allows the wiring length to be shortened.
  • FIG. 9 is a block diagram showing an example of the configuration of the control section 20.
  • the control section 20 includes a light emission control section 200, a drive control section 202, a first distance measurement section 204, a second distance measurement section 206, and a comprehensive control section 208.
  • the first distance measurement section 204 includes a histogram generation section 204a and a processing section 204b.
  • the light emission control unit 200 controls the irradiation timing of the pulsed light of the dot pattern formed via the diffractive optical element 14a by supplying a light emission control signal to the infrared pulsed laser 14.
  • the frequency of the light emission control signal is, for example, 20 [MHz].
  • the frequency of the light emission control signal is not limited to 20 [MHz], and may be 5 [MHz] or the like.
  • the light emission control signal is not limited to a rectangular wave as long as it is a periodic signal.
  • the light emission control signal may be a sine wave.
  • pulsed light is irradiated several times at 20 [MHz] before measurement using the ToF method. Note that the irradiation period and cycle are merely examples and are not limited thereto.
  • FIG. 10 is a diagram schematically showing a state in which the dot pattern pulsed light reaches the light receiving surface of the pixel array section 12 as return light from the target object 50. As shown in FIG. 10, the dot pattern pulsed light is dispersed, for example, for each unit 11u and received as spot light 11s.
  • the drive control unit 202 generates the selection signals XSEL0 to XSEL3 for the selection transistors 321 to 330 (see FIGS. 3 to 6), the logic signals 309-1 to 309-10, and the like, and controls the drive of each unit 11u and the reference pixel unit 166u.
  • the first distance measurement unit 204 includes a histogram generation unit 204a and a processing unit 204b.
  • the histogram generation unit 204a generates a histogram based on the digital values obtained by the TDC 170.
  • the processing unit 204b performs various processes based on the histogram generated by the histogram generation unit 204a. For example, the processing unit 204b can perform FIR (Finite Impulse Response) filter processing, echo determination, depth value (distance value) calculation processing, peak detection processing, and the like.
  • the distance image (depth image) generated by the processing unit 204b is output via the interface.
  • FIG. 11 is a diagram schematically showing a processing example of the TDC 170 of the unit 11u.
  • SPADs 331-334 are connected to TDC 170
  • SPADs 335 and 336 are each connected to counter 172.
  • the horizontal axis of the TDC histogram shows time, and the vertical axis shows frequency.
  • An example is schematically shown in which photons ph1 to ph7 are incident on each of SPADs 331 to 334 in time series.
  • the unit 11u generates a pulse signal TOUT according to the photons ph1 to ph7.
  • the TDC 170 generates a digital value proportional to the difference between the reference time t0 and the input timing of the pulse signal TOUT generated in time series due to the photons ph1 to ph7, and supplies it to the histogram generation unit 204a.
  • the histogram generation unit 204a adds, for example, 1 to the frequency of the time interval corresponding to the digital value.
  • the frequency of time intervals corresponding to digital values according to the distance to the object 50 increases.
  • photons from the infrared pulsed laser 14 are incident on the SPADs 337 to 340 of the unit 166u in time series.
  • the unit 166u generates a pulse signal TOUT in response to the photons.
  • the TDC 170 generates a digital value proportional to the difference between the input timing of the pulse signal TOUT generated in time series due to photons and the reference time t0, and supplies it to the histogram generation unit 204a.
  • the frequency of time intervals corresponding to digital values according to the irradiation timing of the pulsed light increases.
  • the processing unit 204b generates, for example, the time corresponding to the maximum frequency of the histogram generated by the histogram generation unit 204a for the reference pixel 110r of the unit 166u as time t1.
  • the time t1 has a value approximately equal to the irradiation start time of the pulsed light.
  • the processing unit 204b generates, for example, a time t2n corresponding to the maximum frequency of the histogram generated by the histogram generation unit 204a for each of the plurality of measurement pixels 110i, where n indicates each unit 11u. Thereby, the processing unit 204b generates the distance Dn to the object 50 detected by each unit 11u as Dn = (t2n - t1) × c / 2, where c is the speed of light.
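  • the histogram accumulation and the distance calculation above can be sketched as follows; this is an illustrative Python model rather than the disclosed implementation, and the bin count and time resolution are assumed values.

```python
import numpy as np

C_M_PER_S = 299_792_458.0  # speed of light

def build_histogram(tdc_codes, num_bins):
    """Each TDC digital value adds 1 to the frequency of its time bin."""
    hist = np.zeros(num_bins, dtype=np.int64)
    for code in tdc_codes:
        if 0 <= code < num_bins:
            hist[code] += 1
    return hist

def tof_distance_m(ref_codes, meas_codes, num_bins=1024, lsb_ns=0.25):
    """Dn = (t2n - t1) * c / 2, where t1 and t2n are the peak times of the
    reference-pixel histogram and the measurement-pixel histogram."""
    t1_ns = np.argmax(build_histogram(ref_codes, num_bins)) * lsb_ns
    t2n_ns = np.argmax(build_histogram(meas_codes, num_bins)) * lsb_ns
    return (t2n_ns - t1_ns) * 1e-9 * C_M_PER_S / 2.0
```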
  • FIG. 12 is a diagram schematically showing a visible image 110a and a ranging image 110b.
  • Each density point Cp of the distance measurement image 110b has a density value corresponding to the distance Dn calculated by the processing unit 204b.
  • the processing unit 204b can generate a two-dimensional distance value distribution based on the pulse signal TOUT measured by each unit 11u as the distance measurement image (depth image) 110b. For example, this distance measurement image 110b is displayed on the display device 24 via the control device 22.
  • FIG. 13 is a diagram illustrating phase difference detection according to the embodiment of the present disclosure.
  • Diagram A in the figure shows the relationship between the surface positions of the object 50, the lens 40, and the pixel array section 12, and the optical path of the incident light.
  • light passing through the left and right sides of the lens 40 is represented by 301 and 302, respectively.
  • the left, center, and right diagrams of A show, respectively, a case where the focal position is on (the imaging surface of) the pixel array section 12 (the focused state), a case where it is on the side opposite to the pixel array section 12 (a so-called rear focus state), and a case where it is on the near side of the pixel array section 12 (a so-called front focus state).
  • the optical images 303 and 304 have shifted shapes. This image shift represents the phase difference.
  • the optical images 303 and 304 become images shifted to the left and right, respectively.
  • the optical images 303 and 304 become images shifted in opposite directions.
  • FIG. 14 is a diagram showing the positions and counter values of the phase difference pixels 110zL and 110zR on one horizontal axis of the pixel array section 12.
  • the horizontal axis corresponds to the positions of the phase difference pixel 110zL and the phase difference pixel 110zR.
  • the vertical axis is the count value of the counter 172. That is, the signal value L20 corresponds to the number of photons based on the pulse signal COUT of the phase difference pixel 110zL. On the other hand, the signal value R20 corresponds to the number of photons based on the pulse signal COUT of the phase difference pixel 110zR.
  • the phase difference corresponds to the shift amount that minimizes the sum of the difference values between the signal value L20 and the signal value R20 when the signal value L20 or the signal value R20 is shifted in the horizontal direction.
  • the second distance measuring section 206 calculates the shift amount based on the pulse signal COUT of each unit 11u. Then, the second distance measuring unit 206 calculates the distance to the target object 50 based on the shift amount. Since the shift amount corresponds to the distance to the object 50, the distance can be measured, although with lower accuracy than the TOF method. In this way, in this embodiment, by using the SPADs 335 and 336, it is possible to measure the distance to the object 50 using amounts of light at the photon level.
  • because the second distance measuring unit 206 calculates the shift amount that minimizes the sum of the difference values between the signal value L20 and the signal value R20, the influence of interfering light is suppressed more than in the TOF method.
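  • a minimal sketch of this shift search follows, assuming the left and right counter values have been gathered into per-column arrays; the edge handling and the linear shift-to-distance gain are illustrative assumptions, not values from this disclosure.

```python
import numpy as np

def phase_shift(l_counts, r_counts, max_shift):
    """Return the shift of the R profile that minimizes the sum of absolute
    differences (SAD) against the L profile; this is the phase difference."""
    l = np.asarray(l_counts, dtype=np.int64)
    r = np.asarray(r_counts, dtype=np.int64)
    valid = slice(max_shift, len(l) - max_shift)  # drop wrapped-around edges
    best_shift, best_sad = 0, None
    for s in range(-max_shift, max_shift + 1):
        sad = int(np.abs(l[valid] - np.roll(r, s)[valid]).sum())
        if best_sad is None or sad < best_sad:
            best_shift, best_sad = s, sad
    return best_shift

def shift_to_distance_m(shift, gain_m_per_step=0.05):
    """Map the shift amount to a coarse distance; a linear gain obtained
    from calibration is assumed here purely for illustration."""
    return shift * gain_m_per_step
```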
  • FIG. 15 is a diagram schematically showing an example of interfering light.
  • Figure A schematically shows a state in which no attack is being made,
  • Figure B schematically shows a state in which an attack by intentional interference light is being received.
  • the attack circuit 1000 emits, toward the object 50, pulsed light having a wavelength band equivalent to that of the measurement pulsed light.
  • FIG. 16 is a diagram showing an example of the histogram of the reference pixel 110r and the histogram of the measurement pixel 110i when not attacked in FIG. 15.
  • Figure A shows a histogram generated by the histogram generation unit 204a based on the output signal of the reference pixel 110r
  • Figure B shows a histogram generated by the histogram generation unit 204a based on the output signal of the measurement pixel 110i.
  • the time difference between the peak time of the histogram of the reference pixel and the peak time of the histogram of the measurement pixel corresponds to the true distance R to the object 50.
  • FIG. 17 is a diagram showing a histogram of the reference pixel 110r and a histogram of the measurement pixel 110i during the attack in FIG. 15.
  • Figure A shows a histogram generated by the histogram generation unit 204a based on the output signal of the reference pixel 110r
  • Figure B shows a histogram generated by the histogram generation unit 204a based on the output signal of the measurement pixel 110i.
  • the histogram of the measurement pixel includes the pulse signal TOUT caused by the interfering light, and the peak of the histogram of the measurement pixel is shifted.
  • the time difference between the peak time of the histogram of the reference pixel and the peak time of the histogram of the measurement pixel corresponds to the distance Rf to the false object 50f.
  • in the TOF method, since the incident timing of photons is converted into a digital value, an erroneous signal is generated depending on the incident timing of the pulsed light of the attack circuit 1000, and the position of the object 50 may be incorrectly measured as, for example, the position of the false object 50f.
  • the measurement accuracy with the TOF method is higher than the measurement accuracy with the phase difference method, but it is more susceptible to the influence of attack light or environmental light.
  • FIG. 18 is a diagram showing the positions and counter values of the phase difference pixels 110zL and 110zR on the horizontal axis of the pixel array section 12 when receiving interference light.
  • the horizontal axis corresponds to the positions of the phase difference pixel 110zL and the phase difference pixel 110zR.
  • the vertical axis is the count value of the counter 172. That is, the signal value L20a corresponds to the number of photons based on the pulse signal COUT of the phase difference pixel 110zL. On the other hand, the signal value R20a corresponds to the number of photons based on the pulse signal COUT of the phase difference pixel 110zR.
  • the phase difference corresponds to a shift amount that minimizes the sum of the difference values between the signal value L20a and the signal value R20a when the signal value L20a or the signal value R20a is shifted in the horizontal direction.
  • the counter values of the phase difference pixel 110zL and the phase difference pixel 110zR merely increase by an offset corresponding to the count of photons caused by the interference light. Therefore, even if interference light is received, the influence on the shift amount is suppressed.
  • the signal value L20a and the signal value R20a increase by the same amount. Therefore, the influence on the shift amount is suppressed, and the influence of the pulsed light of the attack circuit 1000 and the ambient light is suppressed.
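  • this offset immunity can be stated compactly: if the interference adds a roughly uniform count c to both profiles, every candidate shift s has an unchanged cost, so the minimizing shift (the detected phase difference) is unchanged:

$$\mathrm{SAD}(s) = \sum_i \left| (L_i + c) - (R_{i+s} + c) \right| = \sum_i \left| L_i - R_{i+s} \right|$$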
  • FIG. 19 is a timing chart showing an example of control by the comprehensive control unit 208.
  • the horizontal axis indicates time.
  • the vertical axis in Figure A indicates the light emission control signal generated by the light emission control unit 200, the selection signal XSEL2, the selection signal XSEL3, the selection signal XSEL0, and the selection signal XSEL1. A high level of these signals indicates a driving state.
  • Figure B schematically shows photons incident on the pixel array section 12. That is, photons caused by attack light or environmental light and photons caused by the second measurement light are schematically shown.
  • FIG. 20 is a flowchart showing an example of control by the comprehensive control unit 208.
  • a control example of the comprehensive control unit 208 will be described with reference to FIG. 19.
  • the comprehensive control unit 208 sets the light emission control signal to a first high level signal for measuring the phase difference signal, and causes the infrared pulsed laser 14 to emit measurement pulsed light (step S100).
  • the comprehensive control unit 208 sets the selection signal XSEL2 and the selection signal XSEL3 to a high level, so that the phase difference pixel 110zL and the phase difference pixel 110zR are driven to measure a phase difference signal (step S102).
  • the range of the first measurement period in which the selection signal XSEL2 and the selection signal XSEL3 are set to high level can be set according to a predetermined measurement distance range to the target object 50.
  • the second distance measurement unit 206 calculates the shift amount based on the pulse signal COUT of the phase difference pixels 110zL and 110zR for each unit 11u, and generates each first distance to the target object 50 (step S104).
  • distance values based on the phase difference method are generated while suppressing the effects of attack light, environmental light, and the like.
  • the comprehensive control unit 208 controls the period during which the second light emission control signal is set at high level and the period during which the selection signal XSEL0 is set at high level, based on the first distance generated by the second distance measuring unit 206. As a result, a second measurement pulse is emitted (step S106).
  • the comprehensive control unit 208 sets the selection signal XSEL0 that drives the reference pixel 110r to a high level in accordance with the timing at which the second measurement pulse is emitted, and measures the signal of the reference light (step S108).
  • the comprehensive control unit 208 sets the selection signal XSEL1 that drives the measurement pixel 110i to a high level in accordance with the timing at which the second measurement pulse is emitted, and measures the measurement light signal (step S110).
  • the comprehensive control unit 208 sets the measurement time range of photons reflected and returned from the object 50 to the second measurement period corresponding to the first distance value based on the phase difference method.
  • the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to the digital value, every time a digital value is input while the selection signal XSEL0 is at a high level. Then, a histogram is generated according to the output of the reference pixel 110r.
  • the processing unit 204b generates, for example, a time corresponding to the maximum frequency of the histogram generated for the reference pixel 110r by the histogram generation unit 204a as time t1.
  • the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to the digital value every time a digital value is input while the selection signal XSEL1 is at a high level, generating a histogram corresponding to each of the plurality of measurement pixels 110i. The processing unit 204b then generates, for example, a time t2n corresponding to the maximum frequency of the histogram generated by the histogram generation unit 204a for each of the plurality of measurement pixels 110i. Thereby, the processing unit 204b generates the distance Dn to the object 50 detected by each unit 11u as Dn = (t2n - t1) × c / 2, where c is the speed of light (step S112).
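  • the sequence of steps S100 to S112 can be summarized in the following sketch, which reuses the helper functions from the earlier sketches; the system object and its method names are hypothetical stand-ins for the drive control described above, and the gating margin is an assumed value.

```python
def measure_distance(system):
    """Illustrative flow of FIG. 20: coarse phase-difference ranging first,
    then gated ToF ranging inside the derived second measurement period."""
    # S100/S102: first emission; drive the phase difference pixels (XSEL2/XSEL3).
    system.emit_pulse()
    l_counts, r_counts = system.read_pdaf_counters()

    # S104: coarse first distance from the phase difference.
    coarse_m = shift_to_distance_m(phase_shift(l_counts, r_counts, max_shift=8))

    # S106: place the second measurement period around the expected
    # round-trip time; the +/- 5 ns margin is an assumed value.
    t_expect_ns = 2.0 * coarse_m / C_M_PER_S * 1e9
    window_ns = (t_expect_ns - 5.0, t_expect_ns + 5.0)

    # S108/S110: second emission; drive the reference (XSEL0) and measurement
    # (XSEL1) pixels, keeping only TDC codes inside the window.
    system.emit_pulse()
    ref_codes = system.read_tdc_codes("reference", window_ns)
    meas_codes = system.read_tdc_codes("measurement", window_ns)

    # S112: precise distance from the histogram peaks.
    return tof_distance_m(ref_codes, meas_codes)
```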
  • the comprehensive control unit 208 sets the second measurement period of photons reflected and returned from the object 50 in correspondence with the first distance value based on the phase difference method.
  • photons caused by attack light or environmental light also enter the pixel array section 12 during the second measurement period in which the measurement pixel 110i is driven.
  • the attack light is, for example, incident at high density outside the second measurement period, while photons due to the environmental light are incident randomly.
  • the density of photons caused by the measurement light incident during the second measurement period increases around the time corresponding to the position of the target object 50. Therefore, the peak of the histogram generated by the histogram generation unit 204a is dominated by the influence of photons caused by the measurement light.
  • the measurement accuracy of the TOF method is higher than that of the phase method when there is no attack light, so even when there is attack light, it is possible to obtain a measured distance value with higher measurement accuracy than the phase method. becomes possible.
  • the comprehensive control unit 208 corresponds the second measurement period of photons reflected and returned from the target object 50 to the first distance value based on the phase difference method. I decided to set it up.
  • the second measurement period is set by the first distance value that suppresses the influence of attack light or environmental light, so the influence of attack light or environmental light outside the second measurement period in the TOF method is suppressed, Deterioration in distance measurement by the TOF method in the presence of attack light is suppressed.
  • the distance measuring system 1 according to Modification 1 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that antireflection sections (moth-eye) 1122 are provided in the SPADs 331, 332, 335, and 336 formed in the pixel array section 12. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 21A is a simplified schematic diagram of a cross section of the SPAD. As shown in FIG. 21A, the N-well 1112 absorbs almost all visible light. On the other hand, about half of the infrared light is reflected. Therefore, in the SPAD according to the present embodiment, an antireflection portion (moth eye) 1122 is provided.
  • FIG. 21B is a schematic diagram showing a cross section of SPADs 331, 332, 335, and 336 provided with an antireflection portion (moth eye) 1122.
  • SPADs 331, 332, 335, and 336 according to Modification 1 of the first embodiment have an antireflection structure with minute protrusions, a so-called moth-eye structure, on the surface (plate surface) on the side where light enters.
  • the antireflection section 1122 not only prevents reflection, but also increases the effective optical path length by diffraction. In this way, an antireflection portion (moth eye) 1122, which is an uneven structure portion arranged at a predetermined pitch on the surface of the photoelectric conversion element side, is formed.
  • the SPADs 333 and 334 also have the same configuration as the SPADs 331 and 332.
  • the light entering the SPADs 331, 332, 335, and 336 is made to travel back and forth within the N-well 1112 by the antireflection part (moth eye) 1122, so the sensitivity can be improved.
  • FIG. 22 is a schematic diagram showing a cross section of SPADs 331, 332, 335, and 336 in which the position of the diffusion layer 1114 is changed depending on the position of the on-chip lens 1110 and the aperture WL or WR.
  • in the SPAD 335, by arranging the diffusion layer 1114 on the side without the aperture WL, it is possible to further increase the sensitivity.
  • in the SPAD 336, by arranging the diffusion layer 1114 on the side without the aperture WR, it is possible to further increase the sensitivity.
  • the focal position of each of the SPADs 331 and 332 varies depending on the arrangement position on the pixel array section 12 and the relationship with the optical axis of the lens 40. Therefore, in the SPADs 331 and 332 according to this modification, the diffusion layer 1114 is arranged in accordance with the focal position of the on-chip lens 1110; that is, the diffusion layer 1114 is aligned with the main optical axis of the on-chip lens 1110. This has the effect of so-called pupil correction and can increase the sensitivity of the SPADs 331 and 332. Note that the SPADs 333 and 334 also have the same configuration as the SPADs 331 and 332.
  • the distance measuring system 1 according to Modification 3 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that the pixel array section 12 and the circuit section 13 are stacked in two layers.
  • differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 23 is a schematic diagram showing an example of the configuration of the pixel array section 12.
  • the pixel array section 12 is configured on the first substrate 11a, and the inverters 312 for the pixels 110r, 110i, 110zR, and 110zL, the TDCs 170a and 170b, and the counters 172a and 172b are arranged on the second substrate 11b.
  • the upper surface area of the light receiving device 10 can be reduced, and the light receiving device 10 can be further miniaturized.
  • the distance measurement system 1 according to Modification 4 of the first embodiment differs from the distance measurement system 1 according to the first embodiment in that the pixel array section 12 and the circuit section 13 are stacked in three layers.
  • differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 24 is a schematic diagram showing a configuration example of the pixel array section 12 according to Modification 4 of the first embodiment.
  • the pixel array section 12 is configured on the first substrate 11c, the inverters 312 for the pixels 110r, 110i, 110zR, and 110zL are configured on the second substrate 11d, and the TDC 170 and the counter 172 are arranged on the third substrate 11e.
  • a TDC 170 corresponding to each unit 11u of the pixel array section 12 is arranged directly below each unit 11u.
  • a counter 172 corresponding to each unit 11u of the pixel array section 12 is arranged directly below each unit 11u.
  • the distance measurement system 1 according to the second embodiment differs from the ranging system 1 according to the first embodiment in that the light intensity of the measurement light used for TOF measurement can further be changed based on the first distance value generated by the second distance measurement unit 206 using the phase difference method. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 25 is a diagram schematically showing an example of changing the amount of light according to the distance value using the phase difference method.
  • Diagram A in FIG. 25 is an example of a short distance
  • diagram B is an example of a long distance, which is farther than diagram A.
  • FIG. 26 is a flowchart showing an example of control by the comprehensive control unit 208 according to the second embodiment. This flowchart differs from the flowchart of the first embodiment shown in FIG. 20 in that the light amount of the infrared pulsed laser 14 is set in step S200.
  • the comprehensive control unit 208 sets the light intensity of the infrared pulsed laser 14 according to the first distance value generated by the second distance measurement unit 206 using the phase difference method.
  • the comprehensive control unit 208 stores, for example, a table that associates distance values with light amounts, and sets the light amount with reference to the table.
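  • a minimal sketch of such a table lookup follows; the distance bounds and power codes are invented calibration values used purely for illustration.

```python
import bisect

# Assumed calibration table: upper distance bound [m] -> laser power code.
DISTANCE_BOUNDS_M = [0.5, 1.0, 2.0, 4.0]
POWER_CODES = [1, 2, 4, 8, 15]  # the last entry covers distances beyond 4 m

def light_amount_for(distance_m: float) -> int:
    """Select the emission light amount from the coarse distance value
    generated by the phase difference method."""
    return POWER_CODES[bisect.bisect_right(DISTANCE_BOUNDS_M, distance_m)]

print(light_amount_for(1.5))  # -> 4 (a mid-range power for a 1.5 m target)
```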
  • in this way, in the present embodiment, the comprehensive control unit 208 sets the second measurement period for photons reflected and returned from the object 50 and the light intensity of the infrared pulsed laser 14 in accordance with the first distance value obtained by the phase difference method.
  • the second measurement period is set using the first distance value, in which the influence of attack light or environmental light is suppressed, and distance-dependent fluctuations in the amount of photons reflected from the object 50 and returned can be suppressed, so deterioration in the measurement accuracy of the TOF method is further suppressed.
  • whereas the phase difference pixels 110zL and 110zR in the ranging system 1 of the first embodiment generate their output values as count values by the counter 172, the phase difference pixels 110zL and 110zR in the ranging system 1 according to the third embodiment differ in that they output their signals to the TDC 170, and the count values are generated using the TDC 170. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 27 is a block diagram showing a configuration example of the control section 20 according to the third embodiment.
  • the control unit 20 according to the third embodiment further includes a third distance measurement unit 210.
  • the third distance measuring section 210 includes a histogram generating section 210a and a processing section 210b. Details of the processing by the third distance measuring section 210 will be described later.
  • FIG. 28 is a diagram schematically showing a processing example of the unit 11u according to the third embodiment.
  • the SPADs 331 to 334 are connected to the TDC 170
  • the SPADs 335 and 336 according to the third embodiment differ from the SPADs 335 and 336 according to the first embodiment in that they are each connected to the TDC 170.
  • phase difference pixel 110zL having the SPAD 335 and the phase difference pixel 110zR having the SPAD 336 of the unit 11u generates a pulse signal TOUT according to these photons ph8 to ph10.
• The TDC 170 corresponding to the SPAD 335 generates digital values proportional to the difference between the reference time t0 and the input timing of the pulse signals TOUT generated in time series by the photons ph8 to ph9, and supplies them to the histogram generation section 210a. Similarly, the TDC 170 corresponding to the SPAD 336 generates digital values proportional to the difference between the reference time t0 and the input timing of the pulse signals TOUT generated in time series by the photons ph10 to ph11, and supplies them to the histogram generation section 210a.
• Each time a digital value is input, the histogram generation unit 210a adds, for example, 1 to the frequency of the time bin corresponding to that digital value. As a result, the frequency of the time bin corresponding to the distance to the object 50 increases.
• FIG. 29 is a diagram showing the histograms generated by the histogram generation unit 210a, corresponding to the outputs of the phase difference pixels 110zL and 110zR. Diagram A is the histogram corresponding to the phase difference pixel 110zL, and diagram B is the histogram corresponding to the phase difference pixel 110zR.
• The processing unit 210b generates, as a counter value, an integrated value obtained by integrating each histogram generated by the histogram generation unit 210a. This integrated value is proportional to the number of photons that reached each of the phase difference pixels 110zL and 110zR. In other words, the processing unit 210b generates a counter value by integrating the number of pulse signals TOUT registered via the TDC 170.
• The processing unit 210b then calculates the shift amount based on the counter values corresponding to the phase difference pixels 110zL and 110zR arranged in a column, and calculates the distance to the object 50 based on the shift amount. Since the shift amount corresponds to the distance to the object 50, the distance can be measured with lower accuracy than the TOF method but with the influence of attack light and environmental light suppressed.
• As described above, in the third embodiment, the SPADs 335 and 336 are connected to the TDC 170, and the third distance measuring section 210 generates a counter value by integrating the number of pulse signals TOUT registered via the TDC 170 and calculates the distance value from it. This makes it possible to configure the ranging system 1 using only the TDC 170, without the counter 172. A sketch of this flow follows.
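• The flow from TDC digital values to a counter value can be summarized in the short Python sketch below. Only the two operations named above (per-bin accumulation in the histogram generation section 210a and integration in the processing section 210b) come from the text; the function names and the normalized-difference shift estimate are assumptions for illustration.

```python
from collections import Counter

def build_histogram(tdc_values):
    """Histogram generation section 210a: add 1 to the frequency of the
    time bin corresponding to each TDC digital value (time since t0)."""
    hist = Counter()
    for value in tdc_values:
        hist[value] += 1
    return hist

def counter_value(hist):
    """Processing section 210b: integrating the histogram gives a value
    proportional to the number of pulse signals TOUT, i.e. of photons."""
    return sum(hist.values())

def shift_amount(count_left, count_right):
    """Hypothetical shift estimate from the counter values of the phase
    difference pixels 110zL and 110zR; the disclosure does not specify
    the formula, so a simple normalized difference is used here."""
    total = count_left + count_right
    return (count_left - count_right) / total if total else 0.0
```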
• The distance measurement system 1 according to the first embodiment measures the distance using the measurement pixel 110i after measuring the distance using the phase difference pixels 110zL and 110zR, whereas the distance measurement system 1 according to the fourth embodiment performs the measurement with the phase difference pixels 110zL and 110zR and the measurement with the measurement pixel 110i in parallel. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 30 is a diagram illustrating phase difference detection according to the fourth embodiment.
• Diagram A in the figure shows the relationship between the surface positions of the object 50, the lens 40, and the pixel array section 12, and the optical path of the incident light. Light passing through the left and right sides of the lens 40 is represented by 301 and 302, respectively. For convenience, the lights 301 and 302 passing through the ends of the lens 40 are described. Due to the on-chip lens 1110 (see FIG. 7), light 301 actually passes through the right side of the on-chip lens 1110 and light 302 through its left side; therefore, the position of the shielding part WR is schematically shown on the side opposite to that in FIG. 7.
• The − (minus) position in the figure indicates a so-called rear focus state, where the focal position is on the side opposite to (the imaging surface of) the pixel array section 12. The 0 position in the figure indicates a focused state. The + (plus) position in the figure indicates a so-called front focus state, in which the focal position is on the (imaging surface of the) pixel array section 12 side.
  • FIG. 31 is a diagram showing the relationship between the focal position and the output value of the paired phase difference pixels 110zL and 110zR.
• In FIG. 31, the 0 position indicates the focused state, the − position the rear focus state, and the + position the front focus state. The output value of the phase difference pixel 110zL is denoted L20a, and the output value of the phase difference pixel 110zR is denoted R20a. That is, the output value L20a is the count value of the counter 172 corresponding to the phase difference pixel 110zL, and similarly, the output value R20a is the count value of the counter 172 corresponding to the phase difference pixel 110zR. As shown in FIG. 31, the values of L20a and R20a change depending on the degree of the rear focus state or the front focus state.
• The second distance measurement unit 206 takes the output value of the phase difference pixel 110zL of each unit 11u as L20a and the output value of the phase difference pixel 110zR as R20a, and calculates a difference value D20 based on the output values L20a and R20a. Then, the second distance measuring section 206 calculates the distance to the target object 50 for each unit 11u based on the difference value D20. Since the difference value D20 corresponds to the distance to the object 50, it is possible to measure the distance, although with lower accuracy than the TOF method. In this manner, in this embodiment, by using the SPADs 335 and 336, it is possible to measure the distance to the object 50 for each unit 11u from photon-level amounts of light.
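• As a concrete illustration, the sketch below derives the difference value D20 from the two count values and converts it to a distance through a calibration function. The conversion is an assumption: the disclosure only says that D20 corresponds to the distance, so `calibration` stands for a device-specific mapping (for example, a fitted curve) that is not given in the text.

```python
def difference_value(l20a: int, r20a: int) -> int:
    """Difference value D20 between the count values of the paired
    phase difference pixels 110zL (L20a) and 110zR (R20a)."""
    return l20a - r20a

def distance_from_d20(d20: int, calibration) -> float:
    """Hypothetical conversion of D20 to a distance for one unit 11u;
    `calibration` maps a difference value to meters and would be
    determined per device, e.g. at manufacturing time (an assumption)."""
    return calibration(d20)

# Example with a made-up linear calibration:
distance_m = distance_from_d20(difference_value(480, 320), lambda d: 0.01 * d)
```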
  • FIG. 32 is a diagram showing the relationship between the focal position of the phase difference pixels 110zL and 110zR and the output value when receiving interference light.
• In FIG. 32, as in FIG. 31, the 0 position indicates the focused state, the − position the rear focus state, and the + position the front focus state. The output value of the phase difference pixel 110zL is again denoted L20a and that of the phase difference pixel 110zR is denoted R20a; that is, they are the count values of the counters 172 corresponding to the respective phase difference pixels.
• As shown in FIG. 32, the counter values of the phase difference pixel 110zL and the phase difference pixel 110zR merely increase by an equal offset corresponding to the number of photons caused by the interference light. Therefore, even if interference light is received, its influence on the difference value D20 is suppressed. In this way, in the phase difference method using the phase difference pixels 110zL and 110zR, even if the object 50 is irradiated with the pulsed light of the attack circuit 1000, the signal values L20a and R20a increase by the same amount, so the influence of the pulsed light of the attack circuit 1000, environmental light, and the like is suppressed.
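• This robustness can be checked numerically: adding the same photon count to both pixels leaves D20 unchanged. A minimal self-contained check (the count values are made up for illustration):

```python
def difference_value(l20a: int, r20a: int) -> int:
    # D20 between the paired phase difference pixel count values
    return l20a - r20a

l20a, r20a = 480, 320   # hypothetical counts from the measurement light alone
offset = 150            # equal extra counts caused by attack/environmental light

# A common offset on both phase difference pixels cancels out of D20.
assert difference_value(l20a + offset, r20a + offset) == difference_value(l20a, r20a)
```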
  • FIG. 33 is a timing chart showing an example of control by the comprehensive control unit 208 according to the fourth embodiment.
• In FIG. 33, the horizontal axis indicates time. The vertical axis of diagram A indicates the light emission control signal generated by the light emission control unit 200, the selection signal XSEL2, the selection signal XSEL3, the selection signal XSEL0, and the selection signal XSEL1; a high level of these signals indicates a driving state. Diagram B schematically shows photons incident on the pixel array section 12, that is, photons caused by attack light or environmental light and photons caused by the second measurement light.
• FIG. 34 is a flowchart showing an example of control by the comprehensive control unit 208 according to the fourth embodiment. Below, this control example is described with reference also to FIG. 33.
• First, the comprehensive control unit 208 sets the light emission control signal to high level and causes the infrared light pulse laser 14 to emit the measurement pulsed light (step S300).
• Next, the comprehensive control unit 208 sets the selection signal XSEL2 and the selection signal XSEL3 to high level to drive the phase difference pixel 110zL and the phase difference pixel 110zR, and the phase difference signal is measured for each unit 11u (step S302).
  • the range in which the selection signal XSEL2 and the selection signal XSEL3 are set to high level can be set according to a predetermined measurement distance range to the target object 50.
  • the second distance measuring section 206 calculates a difference value D20 for each unit 11u based on the pulse signal COUT of each unit 11u, and generates a first distance value for each unit 11u to the target object 50 (step S304). As described above, the distance value is generated while suppressing the influence of attack light and the like.
• In parallel, the comprehensive control unit 208 sets the selection signal XSEL0, which drives the reference pixel 110r, to high level in accordance with the timing at which the measurement pulse is emitted, and measures the signal of the reference light (step S306).
• Every time a digital value is input while the selection signal XSEL0 is at high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time bin corresponding to the digital value. In this way, a histogram is generated according to the output of the reference pixel 110r. The processing unit 204b then generates, as time t1, the time corresponding to the maximum frequency of the histogram generated for the reference pixel 110r by the histogram generation unit 204a.
• Similarly, the comprehensive control unit 208 sets the selection signal XSEL1, which drives each measurement pixel 110i, to high level according to the timing at which the measurement pulse is emitted, and measures the signal of the measurement light (step S308).
  • the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to the digital value, every time a digital value is input while the selection signal XSEL1 is at a high level. Then, a histogram corresponding to each of the plurality of measurement pixels 110i is generated.
• The processing unit 204b generates, as time t2n, the time corresponding to the maximum frequency of the histogram generated by the histogram generation unit 204a for each of the plurality of measurement pixels 110i. The processing unit 204b thereby generates the distance Dn to the object 50 detected by each unit 11u as Dn = (t2n − t1) × (speed of light) / 2 (step S310). A sketch of this computation follows.
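• Steps S306 to S310 reduce to finding the peak bins of two histograms and applying Dn = (t2n − t1) × c / 2. A minimal sketch, assuming histograms are represented as dicts mapping a time bin in seconds to a frequency:

```python
C = 299_792_458.0  # speed of light in m/s

def peak_time(hist):
    """Time bin with the maximum frequency: t1 for the reference pixel
    110r, t2n for a measurement pixel 110i."""
    return max(hist, key=hist.get)

def tof_distance(reference_hist, measurement_hist):
    """Distance Dn = (t2n - t1) * c / 2 for one unit 11u."""
    t1 = peak_time(reference_hist)
    t2n = peak_time(measurement_hist)
    return (t2n - t1) * C / 2.0
```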
• The distance measurement values of the phase difference pixels 110zL and 110zR have lower measurement accuracy than that of the measurement pixel 110i, but the influence of attack light and environmental light on them is suppressed. On the other hand, the distance measurement value of the measurement pixel 110i has higher measurement accuracy than the distance measurement values of the phase difference pixels 110zL and 110zR, but tends to be easily influenced by attack light and environmental light.
• Therefore, if the distance measurement value of the measurement pixel 110i is within a predetermined range of the distance measurement value of the phase difference pixels 110zL and 110zR, the comprehensive control unit 208 selects the distance measurement value of the measurement pixel 110i. If the distance measurement value of the measurement pixel 110i is not within the predetermined range of the distance measurement value of the phase difference pixels 110zL and 110zR, the distance measurement value of the phase difference pixels 110zL and 110zR is selected (step S312). For example, the predetermined range is set to 90% or more and 110% or less of the distance measurement value of the phase difference pixels 110zL and 110zR.
• As described above, in the fourth embodiment, distance measurement by the phase difference pixels 110zL and 110zR of each unit 11u and distance measurement by the measurement pixel 110i are performed in parallel, and the result of the measurement pixel 110i is checked against the first distance value of each unit 11u. When interfering light or the like is received, the first distance value of the phase difference pixels 110zL and 110zR is selected, and when no interfering light or the like is received, the distance value of the measurement pixel 110i is selected, as in the sketch below.
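• The selection in step S312 can be written as a simple range check. A minimal sketch assuming the 90%-110% window stated above; the variable names are illustrative:

```python
def select_distance(tof_value: float, pd_value: float) -> float:
    """Step S312: prefer the high-accuracy TOF value only when it agrees
    with the interference-resistant phase-difference value within 90%-110%."""
    if 0.9 * pd_value <= tof_value <= 1.1 * pd_value:
        return tof_value   # no interfering light suspected
    return pd_value        # interference suspected: fall back to phase difference
```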
• FIG. 35 is a flowchart showing an example of control by the comprehensive control unit 208 according to the fifth embodiment. As shown in FIG. 35, the comprehensive control unit 208 performs (step S300) to (step S304) in the same manner as in FIG. 34.
• Next, the comprehensive control unit 208 sets the selection signal XSEL0 for the period in which the second light emission control signal is at high level, and sets the period in which the selection signal XSEL1 of each unit 11u is at high level according to the first distance value of that unit 11u; in this way, a second measurement period is set (step S400).
• Then, the comprehensive control unit 208 sets the selection signal XSEL0, which drives the reference pixel 110r, to high level in accordance with the timing at which the second measurement pulse is emitted, and measures the signal of the reference light (step S404).
• Every time a digital value is input while the selection signal XSEL0 is at high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time bin corresponding to the digital value, so that a histogram is generated according to the output of the reference pixel 110r. The processing unit 204b then generates, as time t1, the time corresponding to the maximum frequency of the histogram generated for the reference pixel 110r by the histogram generation unit 204a.
• Next, the comprehensive control unit 208 sets the selection signal XSEL1, which drives each measurement pixel 110i, to high level in accordance with the second measurement period of that measurement pixel 110i relative to the timing at which the second measurement pulse is emitted, and the signal of the measurement light is measured (step S406).
• In other words, the comprehensive control unit 208 sets the measurement time range for photons reflected and returned from the target object 50, for each unit 11u, to the second measurement period corresponding to the first distance value of that unit 11u obtained by the phase difference method.
• Every time a digital value is input while the selection signal XSEL1 is at high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time bin corresponding to the digital value, so that a histogram is generated for each unit 11u. The processing unit 204b generates, as time t1, the time corresponding to the maximum frequency of the histogram generated for the reference pixel 110r by the histogram generation unit 204a.
• The processing unit 204b generates, as time t2n, the time corresponding to the maximum frequency of the histogram generated by the histogram generation unit 204a for each of the plurality of measurement pixels 110i. The processing unit 204b thereby generates the distance Dn to the object 50 detected by each unit 11u as Dn = (t2n − t1) × (speed of light) / 2 (step S408).
• As described above, in the fifth embodiment, the comprehensive control unit 208 sets the measurement period for photons reflected and returned from the object 50 individually for each unit 11u, according to the first distance value of that unit 11u measured by the phase difference method. Since the second measurement period of each unit 11u is set based on the first distance value of that unit 11u, in which the influence of attack light and environmental light is suppressed, attack light or environmental light arriving outside the second measurement period is excluded in the TOF method, and a decrease in the per-unit measurement accuracy of the TOF method is suppressed. A sketch of this gating follows.
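• The per-unit gating can be illustrated as a time window centered on the round-trip time implied by each unit's first distance value. The window margin is an assumed parameter, not a value from the disclosure:

```python
C = 299_792_458.0  # speed of light in m/s

def second_measurement_period(first_distance_m: float, margin_s: float = 20e-9):
    """(start, end) of the second measurement period for one unit 11u,
    centered on the expected round-trip time 2*d/c; the +/- margin is
    a hypothetical parameter."""
    round_trip_s = 2.0 * first_distance_m / C
    return (max(0.0, round_trip_s - margin_s), round_trip_s + margin_s)

def accept_photon(arrival_s: float, window) -> bool:
    """Only photons arriving inside the window update the histogram,
    excluding attack or environmental light outside the window."""
    start, end = window
    return start <= arrival_s <= end
```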
  • the distance measurement system 1 according to the sixth embodiment differs from the distance measurement system 1 according to the first embodiment in that it further includes an infrared pulsed laser 16 for the phase difference pixels 110zL and 110zR. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 36 is a block diagram schematically showing an example of a schematic configuration of a ranging system 1 according to the sixth embodiment.
• As shown in FIG. 36, the ranging system 1 according to the sixth embodiment differs from the ranging system 1 according to the first embodiment in that it further includes an infrared pulsed laser 16 and a diffractive optical element (DOE) 16a for the phase difference pixels 110zL and 110zR.
  • the infrared pulsed laser 16 is mounted on the substrate 11.
• The infrared light pulse laser 16 is formed using, for example, a VCSEL (Vertical Cavity Surface Emitting Laser) light source. Furthermore, by arranging the diffractive optical element 16a (DOE) on the irradiation side of the infrared light pulse laser 16, it is possible to irradiate the object 50 with a dot pattern of spot light arranged in a matrix of, for example, 100 points.
• As described above, in the sixth embodiment, the infrared pulsed laser 16 dedicated to the phase difference pixels 110zL and 110zR is further provided. This allows measurement with the phase difference pixels 110zL and 110zR and measurement with the measurement pixel 110i to be performed using independent light sources.
  • the distance measurement system 1 according to the seventh embodiment differs from the distance measurement system 1 according to the first embodiment in that the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i are configured as different chips. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 37A is a block diagram schematically showing a schematic configuration example of the ranging system 1 according to the seventh embodiment.
  • the ranging system 1 according to the seventh embodiment includes phase difference pixels 110zL and 110zR, a reference pixel 110r, and a measurement pixel 110i as different chips.
  • FIG. 37B is a block diagram schematically showing an example of a schematic configuration of a distance measuring system 1 according to a comparative example.
• In the comparative example, the chip of the phase difference pixels 110zL and 110zR is arranged between the chip of the reference pixel 110r and the infrared light pulse laser 14. Therefore, in the comparative example, stray light from the infrared pulsed laser 14 tends to enter the chip of the phase difference pixels 110zL and 110zR.
• In contrast, in the seventh embodiment, the infrared pulsed laser 14 is disposed between the chip of the reference pixel 110r and the chip of the phase difference pixels 110zL and 110zR, so that stray light is suppressed.
  • the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i are configured as different chips. This makes it easy to differentiate the circuit configurations of the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i, further improving the degree of freedom in design.
• The ranging system 1 according to the eighth embodiment differs from the ranging system 1 according to the first embodiment in that the phase difference pixels 110zL and 110zR and the reference pixel 110r are each composed of one SPAD. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • FIG. 38 is a diagram showing a configuration example of a plurality of phase difference pixels 110zL, 110zR and a plurality of measurement pixels 110i arranged in the pixel array section 12 according to the eighth embodiment.
• As shown in FIG. 38, each of the plurality of phase difference pixels 110zL and 110zR and the plurality of measurement pixels 110i is composed of one SPAD.
  • Apertures WL and WR are configured in the middle between the phase difference pixels 110zL and 110zR.
• The distance measurement system 1 according to the ninth embodiment differs from the distance measurement system 1 according to the first embodiment in that the phase difference pixels 110zL and 110zR are configured on one chip and the measurement pixel 110i on another chip. Below, differences from the ranging system 1 of the first embodiment will be explained.
  • the semiconductor element chip 110zch includes phase difference pixels 110zL and 110zR, and the semiconductor element chip 110ich includes a measurement pixel 110i. Although each chip is schematically shown as a circle, the pixels within each chip are arranged in a rectangular matrix.
  • FIG. 39 is a diagram showing an example of an inverted L-shaped structure in which the chips 110zch and 110ich are arranged vertically and the infrared light pulse laser 14 is brought closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed.
  • FIG. 40 is a diagram showing an example of a diagonal L-shaped structure in which the chips 110zch and 110ich are arranged diagonally and the infrared light pulse laser 14 is brought closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed.
  • FIG. 41 is a diagram showing an example of an L-shaped structure in which the chips 110zch and 110ich are arranged horizontally and the infrared light pulse laser 14 is brought closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed. Note that the inverted L-shaped structure and the diagonal L-shaped structure according to this embodiment also correspond to the L-shaped structure.
• As described above, in the ninth embodiment, the chip 110zch on which the phase difference pixels 110zL and 110zR are arranged and the chip 110ich on which the measurement pixel 110i is arranged are configured independently, and the infrared light pulse laser 14 is arranged close to the chip 110ich. This suppresses stray light from the infrared light pulse laser 14 into the phase difference pixels 110zL and 110zR.
• The distance measuring system 1 according to the tenth embodiment differs from the distance measuring systems 1 of the first to ninth embodiments in that the phase difference pixels 110zL and 110zR are configured with a CIS (CMOS Image Sensor), and distance measurement can be performed using visible light. Below, differences from the ranging system 1 of the first embodiment will be explained.
• In the first to ninth embodiments, measurement with the phase difference pixels 110zL and 110zR is performed by irradiation with the infrared pulsed laser 14 or the infrared pulsed laser 16. In contrast, in the tenth embodiment, since the phase difference pixels 110zL and 110zR are configured with a CIS (CMOS Image Sensor), measurement using the phase difference pixels 110zL and 110zR becomes possible without irradiation with pulsed laser light.
  • a visible light source can be placed in place of the infrared pulsed laser 16. This allows measurement using the phase difference pixels 110zL and 110zR even in environments with low light intensity, such as at night.
  • FIG. 42 is a schematic cross-sectional diagram of the SPADs 3310, 3320, CIS 3350, and 3360 formed in the pixel array section 12.
  • the SPADs 3310, 3320, CIS 3350, and 3360 correspond to two of the four pixels forming the phase difference pixel 110zL, the phase difference pixel 110zR, and the measurement pixel 110i, respectively (see FIG. 2).
  • the SPADs 3310 and 3320 differ from the SPADs 331 and 332 shown in FIG. 7 in that they each include a red filter 1122R and a blue filter 1122B. This suppresses visible light from entering the N-well 1112.
  • the CIS 3350 and 3360 are configured with a green filter 1122G, and the photoelectric conversion unit 1112a is configured with a CIS (CMOS Image Sensor).
• The distance measuring system 1 according to the eleventh embodiment differs from the distance measuring systems 1 according to the first to tenth embodiments in that the phase difference pixels 110zL and 110zR are configured with elliptical or circular on-chip lenses. Below, differences from the distance measuring systems 1 of the first to tenth embodiments will be explained.
• In the first to tenth embodiments, the phase difference pixels 110zL and 110zR are constituted using the apertures WL and WR, but in the ranging system 1 according to the eleventh embodiment, the phase difference pixels 110zL and 110zR are constituted using an elliptical or circular on-chip lens.
  • FIG. 43 is a diagram showing an example of the planar arrangement of the phase difference pixels 110zL, 110zR and the measurement pixel 110i formed in the pixel array section 12.
  • Each pixel is composed of one SPAD.
  • An elliptical on-chip lens Lz10 is arranged in the phase difference pixels 110zL and 110zR.
  • FIG. 44 is a simplified cross-sectional schematic diagram of the phase difference pixels 110zL and 110zR.
  • the elliptical on-chip lens Lz10 allows pupil division of the phase difference pixels 110zL and 110zR. Since no aperture is used, sensitivity can be further improved.
  • FIG. 45 is a diagram showing an example of the planar arrangement of the phase difference pixels 110zL, 110zR and the measurement pixel 110i formed in the pixel array section 12.
  • Each pixel is composed of one SPAD.
  • a circular on-chip lens Lz20 is arranged in the phase difference pixels 110zL and 110zR and the two measurement pixels 110i. Thereby, by integrating the outputs of each SPAD in which the on-chip lens Lz20 is arranged, it is possible to configure one pixel.
• In this way, by using the outputs of the phase difference pixels 110zL and 110zR under the shared lens, phase difference pixels 110zL and 110zR capable of pupil division can be configured. Since no aperture is used, sensitivity can be further improved. A sketch follows.
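• A small sketch of forming one logical pixel by integrating the outputs of the SPADs sharing one circular on-chip lens Lz20, and of obtaining a pupil-divided left/right pair from them. The 2x2 layout and the left/right grouping are assumptions for illustration:

```python
def pupil_divided_pair(counts_2x2):
    """counts_2x2: [[top_left, top_right], [bottom_left, bottom_right]]
    SPAD count values under one circular on-chip lens Lz20. Summing each
    half yields the 110zL / 110zR signals used for pupil division."""
    left = counts_2x2[0][0] + counts_2x2[1][0]
    right = counts_2x2[0][1] + counts_2x2[1][1]
    return left, right

# Example: integrating all four outputs instead gives one combined pixel value.
left, right = pupil_divided_pair([[120, 95], [118, 90]])
```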
  • the technology according to the present disclosure can be applied to various products.
• The technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as a car, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility vehicle, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
  • FIG. 46 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a mobile object control system to which the technology according to the present disclosure can be applied.
  • Vehicle control system 7000 includes multiple electronic control units connected via communication network 7010.
  • the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside vehicle information detection unit 7400, an inside vehicle information detection unit 7500, and an integrated control unit 7600.
• The communication network 7010 connecting these plural control units may be an in-vehicle communication network compliant with an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
• Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage unit that stores the programs executed by the microcomputer and the parameters used in various calculations, and a drive circuit that drives the various devices to be controlled. Each control unit also includes a network I/F for communicating with other control units via the communication network 7010, and a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication. In FIG. 46, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, an audio image output section 7670, an in-vehicle network I/F 7680, and a storage unit 7690 are illustrated as the functional configuration of the integrated control unit 7600.
  • the other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.
  • the drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs.
• The drive system control unit 7100 functions as a control device for a driving force generation device, such as an internal combustion engine or a drive motor, that generates the driving force of the vehicle, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
  • the drive system control unit 7100 may have a function as a control device such as ABS (Antilock Brake System) or ESC (Electronic Stability Control).
  • a vehicle state detection section 7110 is connected to the drive system control unit 7100.
• The vehicle state detection section 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the operation amount of the accelerator pedal, the operation amount of the brake pedal, the steering angle of the steering wheel, the engine rotational speed, the wheel rotational speed, and the like.
  • the drive system control unit 7100 performs arithmetic processing using signals input from the vehicle state detection section 7110, and controls the internal combustion engine, the drive motor, the electric power steering device, the brake device, and the like.
  • the body system control unit 7200 controls the operations of various devices installed in the vehicle body according to various programs.
  • the body system control unit 7200 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a turn signal, or a fog lamp.
  • radio waves transmitted from a portable device that replaces a key or signals from various switches may be input to the body control unit 7200.
  • the body system control unit 7200 receives input of these radio waves or signals, and controls the door lock device, power window device, lamp, etc. of the vehicle.
  • the battery control unit 7300 controls the secondary battery 7310, which is a power supply source for the drive motor, according to various programs. For example, information such as battery temperature, battery output voltage, or remaining battery capacity is input to the battery control unit 7300 from a battery device including a secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and controls the temperature adjustment of the secondary battery 7310 or the cooling device provided in the battery device.
  • the external information detection unit 7400 detects information external to the vehicle in which the vehicle control system 7000 is mounted.
• An imaging section 7410 and a vehicle exterior information detection section 7420 are connected to the vehicle exterior information detection unit 7400.
  • the imaging unit 7410 includes at least one of a ToF (Time of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras.
  • the vehicle external information detection unit 7420 includes, for example, an environmental sensor for detecting the current weather or weather, or a sensor for detecting other vehicles, obstacles, pedestrians, etc. around the vehicle equipped with the vehicle control system 7000. At least one of the surrounding information detection sensors is included.
  • the environmental sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunlight sensor that detects the degree of sunlight, and a snow sensor that detects snowfall.
  • the surrounding information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device.
  • the imaging section 7410 and the vehicle external information detection section 7420 may be provided as independent sensors or devices, or may be provided as a device in which a plurality of sensors or devices are integrated.
  • FIG. 47 shows an example of the installation positions of the imaging section 7410 and the vehicle external information detection section 7420.
  • the imaging units 7910, 7912, 7914, 7916, and 7918 are provided, for example, at at least one of the front nose, side mirrors, rear bumper, back door, and upper part of the windshield inside the vehicle 7900.
  • An imaging unit 7910 provided in the front nose and an imaging unit 7918 provided above the windshield inside the vehicle mainly acquire images in front of the vehicle 7900.
  • Imaging units 7912 and 7914 provided in the side mirrors mainly capture images of the sides of the vehicle 7900.
  • An imaging unit 7916 provided in the rear bumper or back door mainly acquires images of the rear of the vehicle 7900.
  • the imaging unit 7918 provided above the windshield inside the vehicle is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 47 shows an example of the imaging range of each of the imaging units 7910, 7912, 7914, and 7916.
• Imaging range a indicates the imaging range of the imaging unit 7910 provided on the front nose, imaging ranges b and c indicate the imaging ranges of the imaging units 7912 and 7914 provided on the side mirrors, respectively, and imaging range d indicates the imaging range of the imaging unit 7916 provided in the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 7910, 7912, 7914, and 7916, an overhead image of the vehicle 7900 viewed from above can be obtained.
  • the external information detection units 7920, 7922, 7924, 7926, 7928, and 7930 provided at the front, rear, sides, corners, and the upper part of the windshield inside the vehicle 7900 may be, for example, ultrasonic sensors or radar devices.
  • External information detection units 7920, 7926, and 7930 provided on the front nose, rear bumper, back door, and upper part of the windshield inside the vehicle 7900 may be, for example, LIDAR devices.
  • These external information detection units 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.
  • the vehicle exterior information detection unit 7400 causes the imaging unit 7410 to capture an image of the exterior of the vehicle, and receives the captured image data. Further, the vehicle exterior information detection unit 7400 receives detection information from the vehicle exterior information detection section 7420 to which it is connected.
• When the vehicle exterior information detection section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 causes it to transmit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves.
• The vehicle exterior information detection unit 7400 may perform object detection processing for a person, a car, an obstacle, a sign, characters on the road surface, or the like, or distance detection processing, based on the received information.
  • the external information detection unit 7400 may perform environment recognition processing to recognize rain, fog, road surface conditions, etc. based on the received information.
  • the vehicle exterior information detection unit 7400 may calculate the distance to the object outside the vehicle based on the received information.
  • the outside-vehicle information detection unit 7400 may perform image recognition processing or distance detection processing for recognizing people, cars, obstacles, signs, characters on the road, etc., based on the received image data.
• The outside-vehicle information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may synthesize image data captured by different imaging units 7410 to generate an overhead image or a panoramic image.
  • the outside-vehicle information detection unit 7400 may perform viewpoint conversion processing using image data captured by different imaging units 7410.
  • the in-vehicle information detection unit 7500 detects in-vehicle information.
  • a driver condition detection section 7510 that detects the condition of the driver is connected to the in-vehicle information detection unit 7500.
  • the driver state detection unit 7510 may include a camera that images the driver, a biosensor that detects biometric information of the driver, a microphone that collects audio inside the vehicle, or the like.
  • the biosensor is provided, for example, on a seat surface or a steering wheel, and detects biometric information of a passenger sitting on a seat or a driver holding a steering wheel.
• The in-vehicle information detection unit 7500 may calculate the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection section 7510, or may determine whether the driver is dozing off.
  • the in-vehicle information detection unit 7500 may perform processing such as noise canceling processing on the collected audio signal.
  • the integrated control unit 7600 controls overall operations within the vehicle control system 7000 according to various programs.
  • An input section 7800 is connected to the integrated control unit 7600.
  • the input unit 7800 is realized by, for example, a device such as a touch panel, a button, a microphone, a switch, or a lever that can be inputted by the passenger.
• Data obtained by voice recognition of voice input through the microphone may be input to the integrated control unit 7600.
• The input unit 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device, such as a mobile phone or a PDA (Personal Digital Assistant), that supports operation of the vehicle control system 7000.
  • the input unit 7800 may be, for example, a camera, in which case the passenger can input information using gestures. Alternatively, data obtained by detecting the movement of a wearable device worn by a passenger may be input. Further, the input section 7800 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the input section 7800 described above and outputs it to the integrated control unit 7600. By operating this input unit 7800, a passenger or the like inputs various data to the vehicle control system 7000 and instructs processing operations.
  • the storage unit 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer, and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, etc. Furthermore, the storage unit 7690 may be realized by a magnetic storage device such as a HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
  • the general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in the external environment 7750.
• The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark).
• The general-purpose communication I/F 7620 may connect, for example, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network) via a base station or an access point. Furthermore, the general-purpose communication I/F 7620 may connect to a terminal located near the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
  • the dedicated communication I/F 7630 is a communication I/F that supports communication protocols developed for use in vehicles.
• The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol.
• The dedicated communication I/F 7630 typically carries out V2X communication, a concept that includes one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
• The positioning section 7640 receives, for example, a GNSS signal from a GNSS (Global Navigation Satellite System) satellite (for example, a GPS signal from a GPS (Global Positioning System) satellite), performs positioning, and generates position information including the latitude, longitude, and altitude of the vehicle.
  • the positioning unit 7640 may specify the current location by exchanging signals with a wireless access point, or may acquire location information from a terminal such as a mobile phone, PHS, or smartphone that has a positioning function.
  • the beacon receiving unit 7650 receives, for example, radio waves or electromagnetic waves transmitted from a wireless station installed on the road, and obtains information such as the current location, traffic jams, road closures, or required travel time. Note that the function of the beacon receiving unit 7650 may be included in the dedicated communication I/F 7630 described above.
  • the in-vehicle device I/F 7660 is a communication interface that mediates connections between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle.
  • the in-vehicle device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB).
• The in-vehicle device I/F 7660 may also establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link).
  • the in-vehicle device 7760 may include, for example, at least one of a mobile device or wearable device owned by a passenger, or an information device carried into or attached to the vehicle.
• The in-vehicle device 7760 may also include a navigation device that searches for a route to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
  • the in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010.
  • the in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
• The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate a control target value for the driving force generation device, the steering mechanism, or the braking device based on the acquired information about the inside and outside of the vehicle, and output a control command to the drive system control unit 7100.
• For example, the microcomputer 7610 may perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation of the vehicle, following driving based on the following distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
• The microcomputer 7610 may also perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on acquired information about the surroundings of the vehicle.
• The microcomputer 7610 may generate three-dimensional distance information between the vehicle and surrounding objects such as structures and people based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may create local map information including surrounding information of the current position of the vehicle. The microcomputer 7610 may also predict dangers such as a vehicle collision, the approach of a pedestrian, or entry into a closed road based on the acquired information, and generate a warning signal.
  • the warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
  • the audio and image output unit 7670 transmits an output signal of at least one of audio and images to an output device that can visually or audibly notify information to the occupants of the vehicle or to the outside of the vehicle.
  • an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as output devices.
  • Display unit 7720 may include, for example, at least one of an on-board display and a head-up display.
  • the display section 7720 may have an AR (Augmented Reality) display function.
  • the output device may be other devices other than these devices, such as headphones, a wearable device such as a glasses-type display worn by the passenger, a projector, or a lamp.
• When the output device is a display device, the display device visually displays the results obtained from the various processes performed by the microcomputer 7610, and the information received from other control units, in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal consisting of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
• Note that at least two control units connected via the communication network 7010 may be integrated as one control unit.
  • each control unit may be composed of a plurality of control units.
  • vehicle control system 7000 may include another control unit not shown.
• Some or all of the functions performed by one of the control units may be provided to another control unit. That is, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed by any of the control units.
• Similarly, a sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.
  • a computer program for realizing each function of the ranging system 1 according to the present embodiment described using FIG. 1 can be implemented in any control unit or the like. It is also possible to provide a computer-readable recording medium in which such a computer program is stored.
  • the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
  • the above computer program may be distributed, for example, via a network, without using a recording medium.
  • the light receiving device 10 of the ranging system 1 according to the present embodiment described using FIG. 1 can be applied to the imaging unit 7410 of the application example shown in FIG.
  • the control section 20 can be applied to the external information detection unit 7400 shown in FIG. 46.
• At least some of the components of the ranging system 1 described using FIG. 1 may be realized in a module for the integrated control unit 7600 shown in FIG. 46.
  • the ranging system 1 described using FIG. 1 may be realized by a plurality of control units of the vehicle control system 7000 shown in FIG. 46.
  • a pixel array unit having a measurement pixel used to measure a distance to a target object, and a plurality of paired phase difference pixels that divide incident light from the target object into pupils and detect a phase difference; a first distance measuring unit that generates a first distance value to the target object based on information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time; a second distance measuring unit that generates a second distance value to the object based on information corresponding to the number of photons incident on each of the plurality of phase difference pixels;
  • a light receiving device comprising:
• The light receiving device according to (1), wherein the measurement pixel has a photoelectric conversion unit that performs photoelectric conversion according to incident photons, and the phase difference pixel has a photoelectric conversion unit that performs photoelectric conversion according to incident photons.
• The light receiving device further comprising a control unit that sets the drive period of the measurement pixel according to a first distance based on the output signals of the plurality of phase difference pixels.
• The light receiving device according to (1), wherein the first distance measuring unit generates the first distance value according to the value of a histogram having appearance frequency information of the difference value between the timing at which the measurement pixel receives a photon and a predetermined time, and the second distance measuring unit generates the second distance value according to a phase difference using signal values corresponding to the number of photons incident on each of the plurality of phase difference pixels.
• The light receiving device according to (5), wherein the measurement pixel has a plurality of the photoelectric conversion units, the measurement pixel and the plurality of paired phase difference pixels constitute a unit, the light receiving device further comprises a first conversion section corresponding to the unit, and the first conversion section generates a reception signal having information regarding the difference between the timing at which the measurement pixel receives a photon and a predetermined time.
• (10) The light receiving device according to (6), further comprising a second conversion section corresponding to the unit, wherein the second conversion section generates a second reception signal having information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
• The light receiving device according to (10), wherein the second conversion section generates a third reception signal having information regarding the difference between the timing at which the phase difference pixel receives a photon and a predetermined time, and generates the second reception signal according to the number of the third reception signals.
• The light receiving device, wherein a first chip including the measurement pixel, a second chip including the paired phase difference pixels, and a first infrared light pulse laser are arranged in an L shape ( ).
• A light receiving device comprising: a pixel array section composed of a plurality of pixels; and a control unit that controls the pixel array section, wherein the pixel array section includes a plurality of paired phase difference pixels that divide incident light from a target object into pupils and detect a phase difference, and a measurement pixel used to measure the distance to the target object, and the control unit sets the driving period of the measurement pixel according to a second distance based on the output signals of the plurality of phase difference pixels.
• The light receiving device according to (15), wherein the phase difference pixel has a photoelectric conversion unit that receives visible light and performs photoelectric conversion, and the measurement pixel has a photoelectric conversion unit that performs photoelectric conversion according to incident photons.
• The light receiving device according to (2), wherein the photoelectric conversion section has an on-chip lens and a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and in the phase difference pixel, the photoelectric conversion section has an aperture in the incident area and the diffusion layer is arranged according to the position of the aperture.
• The light receiving device according to (2), wherein two phase difference pixels among the plurality of paired phase difference pixels include an elliptical on-chip lens arranged over the photoelectric conversion sections of the two phase difference pixels.
• The light receiving device, comprising a circular on-chip lens arranged over the photoelectric conversion sections of two phase difference pixels among the plurality of paired phase difference pixels and the photoelectric conversion sections of two measurement pixels.
• The light receiving device, wherein the photoelectric conversion section of the phase difference pixel receives light through a color filter that transmits visible light.
• A distance measuring system comprising: a plurality of paired phase difference pixels that divide incident light from a target object into pupils and detect a phase difference; and a lens that condenses the incident light from the target object.
• 1: Ranging system, 10: Light receiving device, 12: Pixel array section, 14: Infrared light pulse laser, 16: Infrared light pulse laser, 20: Control section, 24: Display device, 40: Lens, 110i: Measurement pixel, 110r: Reference pixel, 110z, 110zL, 110zR: Phase difference pixel, 331 to 340: SPAD, 1110: On-chip lens, 3350, 3360: CIS, Lz10, Lz20: On-chip lens, WL, WR: Aperture

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Measurement Of Optical Distance (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

[Problem] To provide a light-receiving device, a control method, and a distance measuring system which make it possible to suppress the effects of incident light other than measurement light. [Solution] The present disclosure provides a light-receiving device comprising: a pixel array unit that has a measurement pixel which is used for measuring the distance to a target and a plurality of phase difference pixels which form a pair for performing pupil division of incident light from the target and detecting a phase difference; a first distance measurement unit that generates a first distance value to the target on the basis of information which relates to the difference between the timing of reception of a photon by the measurement pixel and a prescribed time; and a second distance measurement unit that generates a second distance value to the target on the basis of information which corresponds to the number of photons incident on each of the plurality of phase difference pixels.

Description

Light receiving device, control method, and ranging system
 The present disclosure relates to a light receiving device, a control method, and a ranging system.
 There are light receiving devices that use, as a light-receiving element, an element that generates a signal in response to the reception of a photon. In a distance measuring device equipped with this type of light receiving device, ToF (Time of Flight) is adopted as the method of measuring the distance to a distance measurement target (subject): the time from when measurement light is emitted from a light source toward the target until the light reflected by the target returns is measured.
 However, there is a risk that the ranging system will perform erroneous measurements in response to incident light originating from sources other than the measurement light.
Japanese Patent Application Publication No. 2021-156688
 Therefore, the present disclosure provides a light receiving device, a control method, and a ranging system that can suppress the influence of incident light other than the measurement light.
 In order to solve the above problems, according to the present disclosure, there is provided a light receiving device comprising:
 a pixel array section having a measurement pixel used to measure the distance to a target object, and a plurality of paired phase difference pixels that pupil-divide incident light from the target object and detect a phase difference;
 a first distance measurement section that generates a first distance value to the target object based on information regarding the difference between the timing at which the measurement pixel receives a photon and a predetermined time; and
 a second distance measurement section that generates a second distance value to the target object based on information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
 The measurement pixel may have a photoelectric conversion section that performs photoelectric conversion according to incident photons, and the phase difference pixel may likewise have a photoelectric conversion section that performs photoelectric conversion according to incident photons.
 The photoelectric conversion section may be a single photon avalanche photodiode (SPAD).
 The light receiving device may further include a control section that sets the drive period of the measurement pixel according to a second distance based on the output signals of the plurality of phase difference pixels.
 The first distance measurement section may generate the first distance value according to the values of a histogram holding appearance frequency information of the difference value between the timing at which the measurement pixel receives a photon and a predetermined time, and the second distance measurement section may generate the second distance value according to a phase difference obtained using signal values corresponding to the number of photons incident on each of the plurality of phase difference pixels.
 The measurement pixel may have a plurality of the photoelectric conversion sections; the measurement pixel and the plurality of paired phase difference pixels may constitute a unit; a first conversion section corresponding to the unit may further be provided; and the first conversion section may generate a reception signal having information regarding the difference between the timing at which the measurement pixel receives a photon and a predetermined time.
 A first infrared light pulse laser may further be provided, and the control section may control the amount of laser light emitted by the first infrared light pulse laser according to the second distance.
 A second infrared light pulse laser may further be provided, and the plurality of phase difference pixels may generate signals according to the laser light emitted by the second infrared light pulse laser.
 The pixel array section and the first conversion section may be stacked, and the first conversion section corresponding to the unit may be arranged directly below the unit.
 A second conversion section corresponding to the unit may further be provided, and the second conversion section may generate a second reception signal having information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
 The second conversion section may generate third reception signals each having information regarding the difference between the timing at which the phase difference pixel receives a photon and a predetermined time, and may generate the second reception signal according to the number of third reception signals.
 The measurement pixel and the plurality of paired phase difference pixels may generate signals in response to photons received during the same time period.
 The measurement pixel and the plurality of paired phase difference pixels may be formed on chips of different semiconductor elements.
 A first infrared light pulse laser may further be provided, and the first chip on which the measurement pixel is formed, the second chip on which the plurality of paired phase difference pixels are formed, and the first infrared light pulse laser may be arranged in an L-shape.
 A pixel array section composed of a plurality of pixels and a control section that controls the pixel array section may be provided, wherein the pixel array section has:
 a plurality of paired phase difference pixels that pupil-divide incident light from a target object and detect a phase difference; and
 a measurement pixel used to measure the distance to the target object,
 and the control section may set the drive period of the measurement pixel according to a second distance based on the output signals of the plurality of phase difference pixels.
 The phase difference pixel may have a photoelectric conversion section that receives visible light and performs photoelectric conversion, and the measurement pixel may have a photoelectric conversion section that performs photoelectric conversion according to incident photons.
 The photoelectric conversion section may further have an antireflection section on the incident side.
 The photoelectric conversion section may have, on the incident side, an on-chip lens formed of a high refractive index material.
 The photoelectric conversion section may have an on-chip lens and a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and the diffusion layer may be arranged according to the position of the main optical axis of the on-chip lens.
 In the phase difference pixel, the photoelectric conversion section may have an aperture in the incident region and a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and the diffusion layer may be arranged according to the position of the aperture.
 Two phase difference pixels among the plurality of paired phase difference pixels may have an elliptical on-chip lens arranged over the photoelectric conversion sections of the two phase difference pixels.
 A circular on-chip lens may be arranged over the photoelectric conversion sections of two phase difference pixels among the plurality of paired phase difference pixels and the photoelectric conversion sections of two measurement pixels.
 The photoelectric conversion section of the phase difference pixel may receive light through a color filter that transmits visible light, and the photoelectric conversion section of the measurement pixel may receive light through a color filter that transmits infrared light.
 In order to solve the above problems, according to the present disclosure, there is provided a control method for a pixel array section having:
 a plurality of paired phase difference pixels that pupil-divide incident light from a target object and detect a phase difference; and
 a measurement pixel used to measure the distance to the target object,
 the method comprising setting the drive period of the measurement pixel according to a second distance based on the output signals of the plurality of phase difference pixels.
 In order to solve the above problems, according to the present disclosure, there is provided a ranging system comprising:
 a light receiving device; and
 a lens that condenses the incident light from the target object.
FIG. 1 is a block diagram schematically showing a schematic configuration example of a ranging system.
FIG. 2 is a diagram showing a configuration example of a plurality of units arranged in a pixel array section.
FIG. 3 is a circuit diagram showing a configuration example of a measurement pixel.
FIG. 4 is a circuit diagram showing a configuration example of the phase difference pixel arranged on the left side of FIG. 3.
FIG. 5 is a circuit diagram showing a configuration example of the phase difference pixel arranged on the right side of FIG. 3.
FIG. 6 is a diagram showing an example of the circuit configuration of a reference pixel unit.
FIG. 7 is a schematic cross-sectional view of SPADs formed in a pixel array section.
FIG. 8 is a diagram showing an arrangement example of TDCs and counters arranged on a substrate.
FIG. 9 is a block diagram showing a configuration example of a control section.
FIG. 10 is a diagram schematically showing a state in which dot-pattern pulsed light arrives as return light.
FIG. 11 is a diagram schematically showing a processing example of the TDC of a unit.
FIG. 12 is a diagram schematically showing a visible image and a ranging image.
FIG. 13 is a diagram illustrating phase difference detection.
FIG. 14 is a diagram showing the positions of phase difference pixels along one horizontal axis of the pixel array section and the counter values.
FIG. 15 is a diagram schematically showing an example of interference light.
FIG. 16 is a diagram showing example histograms of the reference pixel and the measurement pixel when not under attack in FIG. 15.
FIG. 17 is a diagram showing the histograms of the reference pixel and the measurement pixel during the attack in FIG. 15.
FIG. 18 is a diagram showing the positions of phase difference pixels along one horizontal axis of the pixel array section and the counter values when receiving interference light.
FIG. 19 is a timing chart showing a control example of the comprehensive control section.
FIG. 20 is a flowchart showing a control example of the comprehensive control section.
FIG. 21 is a schematic diagram showing a simplified cross section of a SPAD.
FIG. 22 is a schematic diagram showing a cross section of a SPAD provided with an antireflection section.
FIG. 23 is a schematic diagram in which the position of the diffusion layer is changed according to the position of either the on-chip lens or the aperture.
FIG. 24 is a schematic diagram showing a configuration example of a pixel array section.
FIG. 25 is a schematic diagram showing a configuration example of a pixel array section according to Modification 4 of the first embodiment.
FIG. 26 is a diagram schematically showing an example of changing the amount of light according to a distance value obtained by the phase difference method.
FIG. 27 is a flowchart showing a control example of the comprehensive control section according to the second embodiment.
FIG. 28 is a block diagram showing a configuration example of a control section according to the third embodiment.
FIG. 29 is a diagram schematically showing a processing example of a unit according to the third embodiment.
FIG. 30 is a diagram showing histograms, generated by the histogram generation section, corresponding to the outputs of the respective phase difference pixels.
FIG. 31 is a diagram illustrating phase difference detection according to the fourth embodiment.
FIG. 32 is a diagram showing the relationship between the focal positions of paired phase difference pixels and output values.
FIG. 33 is a diagram showing the relationship between the focal positions of phase difference pixels and output values when receiving interference light.
FIG. 34 is a timing chart showing a control example of the comprehensive control section according to the fourth embodiment.
FIG. 35 is a flowchart showing a control example of the comprehensive control section according to the fourth embodiment.
FIG. 36 is a flowchart showing a control example of the comprehensive control section according to the fifth embodiment.
FIG. 37 is a block diagram schematically showing a schematic configuration example of a ranging system according to the sixth embodiment.
FIG. 38 is a block diagram schematically showing a schematic configuration example of a ranging system according to the seventh embodiment.
FIG. 39 is a block diagram schematically showing a schematic configuration example of a ranging system according to a comparative example.
FIG. 40 is a diagram showing a configuration example of a plurality of phase difference pixels and a plurality of measurement pixels arranged in a pixel array section.
FIG. 41 is a diagram showing an example of an inverted L-shaped structure in which the infrared light pulse laser is brought closer to one chip.
FIG. 42 is a diagram showing an example of a diagonal L-shaped structure.
FIG. 43 is a diagram showing an example of an L-shaped structure.
FIG. 44 is a schematic cross-sectional view of SPADs formed in a pixel array section.
FIG. 45 is a diagram showing an example of a planar arrangement of phase difference pixels and measurement pixels.
FIG. 46 is a simplified schematic cross-sectional view of a phase difference pixel.
FIG. 47 is a diagram showing an example of a planar arrangement of phase difference pixels and measurement pixels formed in a pixel array section.
FIG. 48 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 49 is an explanatory diagram showing an example of installation positions of an outside-vehicle information detection section and an imaging section.
 Hereinafter, embodiments of a light receiving device, a control method, and a ranging system will be described with reference to the drawings. Although the description below focuses on the main components of the light receiving device, the control method, and the ranging system, these may have components and functions that are not illustrated or described. The following description does not exclude such components and functions.
(First embodiment)
<Configuration example of ranging system>
 FIG. 1 is a block diagram schematically showing a schematic configuration example of a ranging system 1 to which the present technology is applied. As shown in FIG. 1, the ranging system 1 according to the first embodiment includes a light receiving device 10, a control device 22, a display device 24, an operating device 26, a lens 30 on the emission side, and a lens 40 on the incident side. FIG. 1 also illustrates a measurement target object 50.
 The light receiving device 10 includes a substrate 11, a pixel array section 12, a circuit section 13, and an infrared light pulse laser 14. The control section 20 includes, for example, a CPU and controls the light receiving device 10.
 The control device 22 is a device that controls the light receiving device 10 in accordance with operations on the operating device 26. The display device 24 is, for example, a monitor, and displays images generated by the light receiving device 10. The operating device 26 includes a keyboard, a mouse, and the like, and inputs operator operation signals to the control device 22.
 The ranging system 1 irradiates the target object 50, via the lens 30, with light emitted by the infrared light pulse laser 14, and measures the distance to the target object 50 using the light that is reflected by the target object 50 and enters the pixel array section 12 via the lens 40. Details of the control section 20 will be described later.
 The pixel array section 12 is mounted on the substrate 11 and has a plurality of measurement pixels 110i, a plurality of phase difference pixels 110z, and a plurality of reference pixels 110r. Each pixel of the pixel array section 12 is formed as, for example, a semiconductor element using, for example, a single photon avalanche photodiode (SPAD). The circuit section 13 is a circuit that controls the pixel array section 12 and performs signal processing.
 The measurement pixel 110i is a pixel used for ToF (Time of Flight) measurement, in which the time it takes for the measurement light irradiated toward the distance measurement target to be reflected by the target and return is measured. The phase difference pixel 110z, on the other hand, pupil-divides the incident light from the target object 50 and detects an image plane phase difference. The phase difference pixel 110z is a pixel used for PDAF (Phase Detection Auto Focus). In this embodiment, the phase difference pixel 110z formed using a SPAD may be referred to as a PDAF pixel. The reference pixel 110r is used to measure the emission timing of the infrared light pulse laser 14. Furthermore, in this embodiment, measurement using ToF may be referred to as the ToF method, and measurement using the phase difference pixels 110z may be referred to as the phase difference method.
 The infrared light pulse laser 14 is mounted on the substrate 11 and is formed using, for example, a VCSEL (Vertical Cavity Surface Emitting LASER) light source. By disposing a diffractive optical element (DOE) 14a on the irradiation side of the infrared light pulse laser 14, a dot pattern of spot lights arranged in a matrix of, for example, 100 points can be projected onto the target object 50.
 The wavelength of the measurement light emitted by the infrared light pulse laser 14 is, for example, 850 nm. The measurement light is emitted in synchronization with a light emission control signal input from the control section 20.
 As shown in FIG. 1, in the ToF method, the pulsed light emitted by the infrared light pulse laser 14 is received by the reference pixels 110r within the pixel array section 12. For example, the pulsed light emitted by the infrared light pulse laser 14 is guided to the reference pixels 110r via a light guide member. Meanwhile, the pulsed light emitted through the lens 30 enters each of the plurality of measurement pixels 110i through the lens 40 at a different timing.
 Each of the plurality of phase difference pixels 110z receives the pulsed light emitted through the lens 30 via the lens 40. As described later, the plurality of phase difference pixels 110z receive the pulsed light through apertures paired on the left and right or top and bottom. This allows the paired phase difference pixels 110z to detect an image plane phase difference corresponding to the distance to the target object 50. The paired phase difference pixels 110z can therefore measure the distance to the target object 50 without using the plurality of reference pixels 110r, at a timing different from the ToF measurement or during a time period that overlaps with the ToF measurement.
<Configuration example of unit>
 FIG. 2 is a diagram showing a configuration example of the plurality of units 11u arranged in the pixel array section 12. In the pixel array section 12, the plurality of units 11u are arranged in a matrix. A reference pixel unit 166u in which the reference pixels 110r are arranged is formed at an end of the pixel array section 12. Each unit 11u has a measurement pixel 110i made up of SPADs 331 to 334, and phase difference pixels 110z made up of SPADs 335 and 336, respectively. As described later, the pixel having an aperture WL over the right half of the incident region of the SPAD 335 is referred to as the phase difference pixel 110zL, and the pixel having an aperture WR over the left half of the incident region of the SPAD 336 is referred to as the phase difference pixel 110zR.
 FIG. 3 is a circuit diagram showing a configuration example of the measurement pixel 110i. The measurement pixel 110i includes a quench/detection circuit 310, selection transistors 321 to 324, and the SPADs 331 to 334. The quench/detection circuit 310 includes a resistor 311 and an inverter 312. In the wiring example of FIG. 3, pMOS (p-channel Metal Oxide Semiconductor) transistors, for example, are used as the selection transistors 321 to 324. Logic signals 309-1 to 309-4 are input from the control section 20 to the gates of the corresponding selection transistors 321 to 324.
 The SPADs (Single Photon Avalanche photodiodes) 331 to 334 photoelectrically convert photons, avalanche-multiply the resulting carriers, and generate a current. A negative bias VRLD is applied to the anodes of the SPADs 331 to 334. The cathode of the SPAD 331 is connected to the drain of the selection transistor 321, and the cathode of the SPAD 332 is connected to the drain of the selection transistor 322. Similarly, the cathode of the SPAD 333 is connected to the drain of the selection transistor 323, and the cathode of the SPAD 334 is connected to the drain of the selection transistor 324.
 The sources of the selection transistors 321 to 324 are commonly connected to a common node 319. A selection signal XSEL1 from the control section 20 is input to the gates of the selection transistors 321 to 324 via selection lines 309-1 to 309-4. By varying the timing of the selection signal XSEL1, the SPADs 331 to 334 can also be driven as individual pixels. When the selection signal XSEL1 is at a high level and the selection transistors 321 to 324 are conductive, the potentials VK_1 to VK_4 of the SPADs 331 to 334 are conducted to the common node 319.
 The resistor 311 is inserted between a node of a predetermined power supply voltage VDD and the common node 319. The inverter 312 generates a pulse signal TOUT based on the potential of the common node 319 and supplies it to a TDC (Time-to-Digital Converter) 170. The resistor 311 also lowers the potential of the common node 319 when it exceeds a predetermined potential due to avalanche multiplication, thereby suppressing the avalanche multiplication. The TDC 170 according to this embodiment corresponds to the first conversion section.
 The TDC 170 converts the light reception timing into a digital value based on the pulse signal TOUT. For example, the circuit section 13 has a clock circuit. Using the information of the clock circuit, the TDC 170 converts the time difference between a reference time point, taken as the start of measurement, and the input time point of the pulse signal TOUT into a digital value. The TDC 170 supplies this digital value to the control section 20. The TDC 170 can be configured for each unit 11u. In the following description, the circuit configurations other than the SPADs 331 to 340 are formed within the circuit section 13.
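 As a rough illustration of this time-to-digital conversion, the following is a minimal Python sketch (not from the patent; the function name, 1 ns bin width, and example timestamps are assumptions for illustration):

```python
def tdc_digitize(event_time_ns: float, reference_time_ns: float,
                 bin_width_ns: float = 1.0) -> int:
    """Convert a photon-detection timestamp into a TDC digital value.

    The digital value is proportional to the delay between the
    reference time point (measurement start) and the TOUT pulse.
    """
    delay_ns = event_time_ns - reference_time_ns
    return int(delay_ns // bin_width_ns)

# Example: a TOUT pulse arriving 13.4 ns after measurement start
# falls into TDC bin 13 (with a hypothetical 1 ns resolution).
print(tdc_digitize(13.4, 0.0))  # -> 13
```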
 FIG. 4 is a circuit diagram showing a configuration example of the phase difference pixel 110zL arranged on the left side of FIG. 3. The phase difference pixel 110zL includes a quench/detection circuit 310, a selection transistor 325, and the SPAD 335. A logic signal 309-5 is input to the gate of the corresponding selection transistor 325.
 A negative bias VRLD is applied to the anode of the SPAD 335. The cathode of the SPAD 335 is connected to the drain of the selection transistor 325, and the source of the selection transistor 325 is connected to the node 319. A selection signal XSEL2 from the control section 20 is input to the gate of the selection transistor 325 via a selection line 309-5. The inverter 312 generates a pulse signal COUT based on the potential of the node 319 and supplies it to a counter 172. The counter 172 according to this embodiment corresponds to the second conversion section.
 The counter 172 counts the number of photons incident on the SPAD 335 based on the pulse signal COUT. The counter 172 counts the number of pulses of the pulse signal COUT as a value corresponding to the number of photons, and supplies the count value to the control section 20. When the selection signal XSEL2 is at a high level and the selection transistor 325 is conductive, the potential of the SPAD 335 is conducted to the node 319. The counter 172 can be configured for each unit 11u.
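 The counting operation can be pictured with the following minimal sketch (an illustration under assumed names; real counters increment on COUT edges in hardware rather than over recorded timestamps):

```python
def count_photons(cout_edge_times_ns, window_start_ns, window_end_ns) -> int:
    """Count COUT rising edges that fall inside the pixel drive window.

    Each edge corresponds to one detected photon; the counter value is
    simply the number of edges while the selection signal is high.
    """
    return sum(window_start_ns <= t < window_end_ns
               for t in cout_edge_times_ns)

# Example: three of the four edges fall inside a 0-100 ns window.
print(count_photons([5.0, 42.3, 97.1, 130.0], 0.0, 100.0))  # -> 3
```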
 FIG. 5 is a circuit diagram showing a configuration example of the phase difference pixel 110zR arranged on the right side of FIG. 3. The phase difference pixel 110zR has the same configuration as the phase difference pixel 110zL. That is, the phase difference pixel 110zR includes a quench/detection circuit 310, a selection transistor 326, and the SPAD 336. A logic signal 309-6 is input to the gate of the corresponding selection transistor 326.
 A negative bias VRLD is applied to the anode of the SPAD 336. The cathode of the SPAD 336 is connected to the drain of the selection transistor 326, and the source of the selection transistor 326 is connected to the node 319. A selection signal XSEL3 from the control section 20 is input to the gate of the selection transistor 326 via a selection line 309-6. The inverter 312 generates a pulse signal COUT based on the potential of the node 319 and supplies it to the counter 172. The counter 172 counts the number of photons incident on the SPAD 336 based on the pulse signal COUT, counting the number of pulses of the pulse signal COUT as a value corresponding to the number of photons, and supplies the count value to the control section 20. When the selection signal XSEL3 is at a high level and the selection transistor 326 is conductive, the potential of the SPAD 336 is conducted to the node 319.
 FIG. 6 is a diagram showing an example of the circuit configuration of the reference pixel unit 166u. The reference pixel 110r also has the same configuration as the measurement pixel 110i. That is, the reference pixel 110r includes a quench/detection circuit 310, selection transistors 327 to 330, and the SPADs 337 to 340. The quench/detection circuit 310 includes a resistor 311 and an inverter 312. Logic signals 309-7 to 309-10 are input to the gates of the corresponding selection transistors 327 to 330.
 The sources of the selection transistors 327 to 330 are commonly connected to the common node 319. A selection signal XSEL0 from the control section 20 is input to the gates of the selection transistors 327 to 330 via selection lines 309-7 to 309-10. When the selection signal XSEL0 is at a high level and the selection transistors 327 to 330 are conductive, the potentials VK_1 to VK_4 of the SPADs 337 to 340 are conducted to the common node 319.
 FIG. 7 is a schematic cross-sectional view of the SPADs 331, 332, 335, and 336 formed in the pixel array section 12. The SPADs 335 and 336 correspond to the phase difference pixel 110zL and the phase difference pixel 110zR, respectively, and the SPADs 331 and 332 correspond to two of the four pixels constituting the measurement pixel 110i (see FIG. 2).
 The SPAD 331 and the SPAD 332 have the same configuration, and the SPADs 333 and 334 also have the same configuration as the SPADs 331 and 332. That is, the four pixels constituting the measurement pixel 110i have the same configuration. The phase difference pixel 110zL and the phase difference pixel 110zR differ from the measurement pixel 110i in that they have apertures WL and WR, respectively. The apertures WL and WR are formed of light-shielding members and pupil-divide the incident lights L1 and L2 from the target object 50, respectively. The SPADs 337 to 340 also have the same configuration as the SPADs 331 and 332.
 As shown in FIG. 7, the SPAD 331 has an on-chip lens 1110, an N-well 1112, a diffusion layer 1114, a metal wiring 1116, a metal pad 1118, and an inter-pixel isolation section 1120. A detailed description of the wiring layer is omitted. A general configuration can be used for the SPADs 331 to 340, which are not limited to the configuration shown in FIG. 7.
 The on-chip lens 1110 condenses the light incident through the lens 40 into the N-well 1112. The N-well 1112 is formed by controlling the impurity concentration of the sensor substrate to n-type, and forms an electric field that transfers electrons generated by photoelectric conversion in the SPAD to the avalanche multiplication region. The on-chip lens 1110 may be formed of, for example, a high refractive index material such as amorphous silicon or SiN. When it is formed of a high refractive index material, the sensitivity to infrared light is improved, the count rates of the phase difference pixel 110zL, the phase difference pixel 110zR, and the measurement pixel 110i are improved, and the light condensing efficiency of the phase difference pixels 110zL and 110zR is improved.
 The diffusion layer 1114 is composed of a P-type diffusion layer, an N-type diffusion layer, a hole accumulation layer, a pinning layer, and a high-concentration P-type diffusion layer. For example, in this SPAD, the avalanche multiplication region is formed by a depletion layer formed in the region where the P-type diffusion layer and the N-type diffusion layer join.
 The metal wiring 1116 is formed wider than the diffusion layer 1114 so as to cover, for example, the avalanche multiplication region. The metal pad 1118 is used for electrical and mechanical bonding with a metal pad formed in the logic-side wiring layer, via the metal (Cu) forming each pad. The inter-pixel isolation section 1120 insulates and isolates adjacent SPADs from each other with a double structure of a metal film and an insulating film formed between them.
<Example of arrangement on substrate>
 FIG. 8 is a diagram showing an arrangement example of the TDCs 170 and the counters 172 arranged on the substrate 11. The TDC 170 includes a TDC 170a arranged at the left end of the pixel array section 12 and a TDC 170b arranged at the right end of the pixel array section 12. Similarly, the counter 172 includes a counter 172a arranged at the left end of the pixel array section 12 and a counter 172b arranged at the right end of the pixel array section 12. The circuit section 13 thus has, for example, two TDCs 170 and two counters 172.
 The TDC 170a is wired to the plurality of units 11u in the left half of the pixel array section 12, and the TDC 170b is wired to the plurality of units 11u in the right half of the pixel array section 12. Similarly, the counter 172a is wired to the plurality of units 11u in the left half of the pixel array section 12, and the counter 172b is wired to the plurality of units 11u in the right half of the pixel array section 12. This makes it possible to shorten the wiring.
<Configuration example of control section>
 FIG. 9 is a block diagram showing a configuration example of the control section 20. As shown in FIG. 9, the control section 20 has a light emission control section 200, a drive control section 202, a first distance measurement section 204, a second distance measurement section 206, and a comprehensive control section 208. The first distance measurement section 204 has a histogram generation section 204a and a processing section 204b.
 The light emission control section 200 supplies a light emission control signal to the infrared light pulse laser 14, thereby controlling the irradiation timing of the dot-pattern pulsed light formed via the diffractive optical element 14a. In the ToF method, the frequency of the light emission control signal is, for example, 20 MHz. The frequency of the light emission control signal is not limited to 20 MHz and may be, for example, 5 MHz. The light emission control signal is not limited to a rectangular wave as long as it is a periodic signal; for example, it may be a sine wave. Meanwhile, in measurement using the plurality of phase difference pixels 110z, pulsed light is emitted several times at, for example, 20 MHz before the ToF measurement. The irradiation frequency and period are merely examples and are not limited to these.
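 For reference (a derived figure, not stated in the source): a 20 MHz emission frequency corresponds to a period of T = 50 ns between pulses, so the unambiguous ToF round-trip range is c x T / 2, approximately 7.5 m; lowering the frequency to 5 MHz (T = 200 ns) extends this to roughly 30 m.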
 FIG. 10 is a diagram schematically showing a state in which the dot-pattern pulsed light reaches the light receiving surface of the pixel array section 12 as return light from the target object 50. As shown in FIG. 10, the dot-pattern pulsed light is dispersed, for example, over the units 11u and received as spot lights 11s.
 The drive control section 202 generates the selection signals XSEL0 to XSEL3 for the selection transistors 321 to 330 (see FIGS. 3 to 6), the logic signals 309-1 to 309-10, and the like, and controls the driving of each unit 11u and the reference pixel unit 166u.
 The first distance measurement section 204 has a histogram generation section 204a and a processing section 204b. The histogram generation section 204a generates histograms based on the digital values obtained by the TDC 170. The processing section 204b performs various kinds of processing based on the histograms generated by the histogram generation section 204a. For example, the processing section 204b can perform FIR (Finite Impulse Response) filter processing, echo determination, depth value (distance value) calculation processing, peak detection processing, and the like. The distance image (depth image) generated by the processing section 204b is output via an interface.
 FIG. 11 is a diagram schematically showing a processing example of the TDC 170 of a unit 11u. As described above, the SPADs 331 to 334 are connected to the TDC 170, and the SPADs 335 and 336 are each connected to the counter 172. The horizontal axis of the TDC histogram indicates time, and the vertical axis indicates frequency.
 The figure schematically shows an example in which photons ph1 to ph7 are incident on the SPADs 331 to 334 in time series. The unit 11u generates pulse signals TOUT according to these photons ph1 to ph7. The TDC 170 then generates digital values proportional to the difference between the reference time t0 and the input timings of the pulse signals TOUT generated in time series by the photons ph1 to ph7, and supplies them to the histogram generation section 204a.
 Each time a digital value is input, the histogram generation section 204a adds, for example, 1 to the frequency of the time bin corresponding to that digital value. As the pulsed light is repeatedly irradiated onto the target object 50, the frequency of the time bin corresponding to the digital value according to the distance to the target object 50 increases.
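 This accumulation can be pictured with the following minimal Python sketch (an illustration under assumed names and numbers, not the patent's implementation):

```python
from collections import Counter

def accumulate_histogram(tdc_values, histogram=None):
    """Add one TDC digital value per detected photon to the histogram.

    Repeating this over many laser pulses makes the bin matching the
    object distance stand out against uncorrelated ambient counts.
    """
    histogram = histogram if histogram is not None else Counter()
    for v in tdc_values:
        histogram[v] += 1
    return histogram

hist = accumulate_histogram([13, 13, 14, 2, 13, 27, 13])
print(hist.most_common(1))  # -> [(13, 4)]: the peak bin
```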
 Similarly, photons p are introduced in time series from the infrared light pulse laser 14 into the SPADs 337 to 340 of the reference pixel unit 166u. The unit 166u generates pulse signals TOUT according to these photons. The TDC 170 then generates digital values proportional to the difference between the reference time t0 and the input timings of the pulse signals TOUT generated in time series by the photons, and supplies them to the histogram generation section 204a. As the pulsed light is repeatedly emitted, the frequency of the time bin corresponding to the digital value according to the irradiation timing of the pulsed light increases.
 The processing section 204b generates, as time t1, the time corresponding to the maximum frequency of the histogram that the histogram generation section 204a generated for the reference pixels 110r of the unit 166u. The time t1 has a value approximately equal to the irradiation start time of the pulsed light.
 Meanwhile, the processing section 204b generates, as time t2n, the time corresponding to the maximum frequency of the histogram that the histogram generation section 204a generated for each of the plurality of measurement pixels 110i, where n denotes each unit 11u. The processing section 204b thereby generates the distance Dn to the target object 50 detected by each unit 11u as Dn = (t2n - t1) x (speed of light) / 2.
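 As a concrete illustration of this formula (a sketch with assumed peak times, not values from the patent):

```python
C_M_PER_S = 299_792_458.0  # speed of light

def tof_distance_m(t2n_ns: float, t1_ns: float) -> float:
    """Distance from the histogram peak times: Dn = (t2n - t1) * c / 2.

    t1 comes from the reference-pixel histogram (emission start),
    t2n from the measurement-pixel histogram of unit n.
    """
    return (t2n_ns - t1_ns) * 1e-9 * C_M_PER_S / 2.0

# Example: peaks separated by 20 ns correspond to roughly 3 m.
print(round(tof_distance_m(25.0, 5.0), 3))  # -> 2.998
```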
 FIG. 12 is a diagram schematically showing a visible image 110a and a ranging image 110b. Each density point Cp of the ranging image 110b has a density value corresponding to the distance Dn calculated by the processing section 204b. In this way, the processing section 204b can generate a two-dimensional distribution of distance values based on the pulse signals TOUT measured by the units 11u as the ranging image (depth image) 110b. This ranging image 110b is displayed, for example, on the display device 24 via the control device 22.
 FIG. 13 is a diagram illustrating phase difference detection according to the embodiment of the present disclosure. A of the figure shows the relationship between the positions of the target object 50, the lens 40, and the surface of the pixel array section 12, and the optical paths of the incident light. In the figure, the light passing through the left and right sides of the lens 40 is denoted by 301 and 302, respectively; for convenience, only the light passing through the edges of the lens 40 is drawn. The left, center, and right diagrams of A in the figure represent, respectively, the case where the image is formed on (the imaging surface of) the pixel array section 12 (the in-focus state), the case where it is formed on the side opposite to (the imaging surface of) the pixel array section 12 (the so-called back-focus state), and the case where it is formed on the pixel array section 12 side (the so-called front-focus state).
 In contrast, in the center and left diagrams of B in the figure, the optical images 303 and 304 have mutually shifted shapes, and this image shift represents the phase difference. In the back-focus state of the center diagram of B, the optical images 303 and 304 are shifted to the left and right, respectively. In the front-focus state of the left diagram of B, the optical images 303 and 304 are shifted in the opposite directions.
 FIG. 14 is a diagram showing the positions of the phase difference pixels 110zL and 110zR along one horizontal axis of the pixel array section 12 and the counter values. The horizontal axis corresponds to the positions of the phase difference pixels 110zL and 110zR, and the vertical axis is the count value of the counter 172. That is, the signal value L20 corresponds to the number of photons based on the pulse signals COUT of the phase difference pixels 110zL, while the signal value R20 corresponds to the number of photons based on the pulse signals COUT of the phase difference pixels 110zR.
 The phase difference corresponds to the shift amount at which the sum of the difference values between the signal value L20 and the signal value R20 becomes minimum when the signal value L20 or the signal value R20 is shifted in the horizontal direction.
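 A minimal Python sketch of this minimum-sum-of-differences search (the array names, SAD criterion, and minimum-overlap guard are illustrative assumptions, not the patent's algorithm):

```python
def phase_shift(left_counts, right_counts, max_shift=4):
    """Find the horizontal shift of the right profile that minimizes the
    mean absolute difference between the left and right count profiles."""
    best_shift, best_cost = 0, float("inf")
    n = len(left_counts)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left_counts[i], right_counts[i + s])
                 for i in range(n) if 0 <= i + s < n]
        if len(pairs) < n // 2:  # require enough overlap to compare
            continue
        cost = sum(abs(l - r) for l, r in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

left = [0, 1, 4, 9, 4, 1, 0, 0]    # counts from the 110zL pixels
right = [0, 0, 0, 1, 4, 9, 4, 1]   # same profile shifted right by 2
print(phase_shift(left, right))     # -> 2
```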
 As shown in FIG. 9 again, the second distance measurement section 206 calculates the shift amount based on the pulse signals COUT of each unit 11u, and calculates the distance to the target object 50 based on the shift amount. Since the shift amount corresponds to the distance to the target object 50, the distance can be measured, albeit with lower accuracy than the ToF method. In this way, in this embodiment, using the SPADs 335 and 336 makes it possible to measure the distance to the target object 50 using photon-level amounts of light.
 Since the second distance measurement section 206 calculates the shift amount at which the sum of the difference values between the signal value L20 and the signal value R20 becomes minimum, the influence of noisy ambient light and of intentional interference light on the second distance measurement section 206 is suppressed more than in the ToF method.
 FIG. 15 is a diagram schematically showing an example of interference light. A of the figure schematically shows the state under no attack, and B of the figure schematically shows an attack with intentional interference light. The attack circuit 1000 emits, toward the target object 50, pulsed light in the same wavelength band as the measurement pulsed light.
 FIG. 16 is a diagram showing example histograms of the reference pixel 110r and the measurement pixel 110i under no attack in FIG. 15. A of the figure shows the histogram that the histogram generation section 204a generated based on the output signals of the reference pixel 110r, and B of the figure shows the histogram generated based on the output signals of the measurement pixel 110i. The time difference between the peak of the reference pixel histogram and the peak of the measurement pixel histogram corresponds to the true distance R to the target object 50.
 FIG. 17 is a diagram showing the histograms of the reference pixel 110r and the measurement pixel 110i during the attack in FIG. 15. A of the figure shows the histogram that the histogram generation section 204a generated based on the output signals of the reference pixel 110r, and B of the figure shows the histogram generated based on the output signals of the measurement pixel 110i. The measurement pixel histogram includes pulse signals caused by the interference light, so the peak of the measurement pixel histogram is shifted. The time difference between the peak of the reference pixel histogram and the peak of the measurement pixel histogram then corresponds to the distance Rf to a false target object 50f.
 In this way, because the ToF method converts photon incidence timings into digital values, erroneous signals are generated at the incidence timings of the pulsed light from the attack circuit 1000, and the position of the target object 50 is erroneously measured as, for example, the position of the target object 50f. As can be seen from this, the measurement accuracy of the ToF method is higher than that of the phase difference method, but the ToF method is more susceptible to attack light or ambient light.
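 This vulnerability can be pictured numerically; the sketch below (illustrative counts only, not measured data) shows how a strong spurious burst displaces the histogram argmax on which the ToF method relies:

```python
from collections import Counter

# True return photons cluster in bin 13; an attacker injects a
# stronger burst of pulses clustering in bin 27.
hist = Counter({13: 40, 14: 8, 12: 6})
print(max(hist, key=hist.get))      # -> 13 (true peak, no attack)

hist.update({27: 90, 26: 10})       # interference pulses added
print(max(hist, key=hist.get))      # -> 27 (false peak under attack)
```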
 FIG. 18 is a diagram showing the positions of the phase difference pixels 110zL and 110zR along one horizontal axis of the pixel array section 12 and the counter values when receiving interference light. The horizontal axis corresponds to the positions of the phase difference pixels 110zL and 110zR, and the vertical axis is the count value of the counter 172. That is, the signal value L20a corresponds to the number of photons based on the pulse signals COUT of the phase difference pixels 110zL, while the signal value R20a corresponds to the number of photons based on the pulse signals COUT of the phase difference pixels 110zR. The phase difference corresponds to the shift amount at which the sum of the difference values between the signal value L20a and the signal value R20a becomes minimum when the signal value L20a or the signal value R20a is shifted in the horizontal direction.
In this way, even when interfering light is received, the count values of the phase difference pixel 110zL and the phase difference pixel 110zR merely increase by an offset corresponding to the photons caused by the interfering light. Therefore, the influence on the shift amount is suppressed even when interfering light is received. Thus, in the phase difference method using the phase difference pixels 110zL and 110zR, even if the object 50 is irradiated with the pulsed light of the attack circuit 1000, the signal value L20a and the signal value R20a increase by the same amount, so the influence on the shift amount is suppressed, and the influence of the pulsed light of the attack circuit 1000 and of the environmental light is suppressed.
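As a rough illustration of this shift search and its robustness to a uniform offset, the following Python sketch estimates the phase difference by brute-force minimization of the sum of absolute differences; the function name, profile data, and search range are illustrative assumptions and are not part of the disclosure.

```python
import numpy as np

def estimate_shift(l_counts: np.ndarray, r_counts: np.ndarray, max_shift: int) -> int:
    """Return the horizontal shift (in pixels) that minimizes the sum of
    absolute differences (SAD) between the L and R phase-difference profiles."""
    best_shift, best_sad = 0, float("inf")
    valid = slice(max_shift, len(l_counts) - max_shift)  # ignore wrap-around edges
    for s in range(-max_shift, max_shift + 1):
        r_aligned = np.roll(r_counts, -s)  # undo a hypothetical shift of s pixels
        sad = np.abs(l_counts[valid] - r_aligned[valid]).sum()
        if sad < best_sad:
            best_sad, best_shift = sad, s
    return best_shift

# A uniform offset from interfering light adds equally to both profiles and
# cancels in the differences, so the estimated shift is unchanged:
l = np.array([0, 1, 4, 9, 4, 1, 0, 0, 0, 0], dtype=float)
r = np.roll(l, 2)                                   # true shift of 2 pixels
print(estimate_shift(l, r, max_shift=3))            # -> 2
print(estimate_shift(l + 30, r + 30, max_shift=3))  # -> 2 despite the offset
```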
<Control example of the control device>
FIG. 19 is a timing chart showing a control example of the comprehensive control unit 208. The horizontal axis indicates time. The vertical axis of figure A shows the light emission control signal generated by the light emission control unit 200 and the selection signals XSEL2, XSEL3, XSEL0, and XSEL1 generated by the drive control unit 202 under the control of the comprehensive control unit 208. For these signals, a high level indicates the driving state.
Figure B schematically shows photons incident on the pixel array section 12, that is, photons caused by attack light or environmental light and photons caused by the second measurement light.
FIG. 20 is a flowchart showing a control example of the comprehensive control unit 208. A control example of the comprehensive control unit 208 will be described with reference to FIG. 19.
As shown in FIG. 19, the comprehensive control unit 208 sets the light emission control signal to the first high-level signal for measuring the phase difference signal, and causes the infrared pulsed laser 14 to emit measurement pulsed light (step S100).
Subsequently, in accordance with the timing at which the first measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL2 and the selection signal XSEL3, which drive the phase difference pixel 110zL and the phase difference pixel 110zR, to the high level, drives the phase difference pixels 110zL and 110zR, and measures the phase difference signal (step S102). The range of the first measurement period in which the selection signals XSEL2 and XSEL3 are set to the high level can be set according to a predetermined measurement distance range to the object 50.
Next, the second distance measurement unit 206 calculates the shift amount based on the pulse signals COUT of the phase difference pixels 110zL and 110zR of each unit 11u, and generates each first distance to the object 50 (step S104). As described above, the distance value obtained by the phase difference method is generated while the influence of attack light, environmental light, and the like is suppressed.
Next, based on the first distance generated by the second distance measurement unit 206, the comprehensive control unit 208 controls the period during which the second light emission control signal is set to the high level and the period during which the selection signal XSEL0 is set to the high level. As a result, the second measurement pulse is emitted (step S106).
Then, in accordance with the timing at which the second measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL0, which drives the reference pixel 110r, to the high level and measures the signal of the reference light (step S108). Similarly, in accordance with the timing at which the second measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL1, which drives the measurement pixel 110i, to the high level and measures the signal of the measurement light (step S110). In this manner, the comprehensive control unit 208 sets the measurement time range of photons reflected back from the object 50 to the second measurement period corresponding to the first distance value obtained by the phase difference method.
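A minimal sketch of how such a gated second measurement period might be derived from the coarse phase-difference distance is shown below; the margin, the function name, and the use of seconds as the unit are assumptions for illustration only.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def second_measurement_window(coarse_distance_m: float, margin_m: float = 0.5):
    """Return (start, end) of the second measurement period, in seconds after
    the emission of the second measurement pulse, centered on the round-trip
    time implied by the coarse (phase-difference) distance value."""
    round_trip = 2.0 * coarse_distance_m / SPEED_OF_LIGHT
    half_width = 2.0 * margin_m / SPEED_OF_LIGHT
    start = max(0.0, round_trip - half_width)
    return start, round_trip + half_width

# E.g. a coarse distance of 15 m gates the TOF measurement to roughly
# 97-103 ns after emission; photons outside this window are ignored.
print(second_measurement_window(15.0))
```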
Next, every time a digital value is input during the period in which the selection signal XSEL0 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram corresponding to the output of the reference pixel 110r. The processing unit 204b generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for the reference pixel 110r as time t1.
Similarly, every time a digital value is input during the period in which the selection signal XSEL1 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram corresponding to each of the plurality of measurement pixels 110i. The processing unit 204b then generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for each of the plurality of measurement pixels 110i as time t2n. The processing unit 204b thereby generates the distance Dn to the object 50 detected by each unit 11u as (t2n - t1) × speed of light / 2 (step S112).
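The histogram accumulation and the distance computation Dn = (t2n - t1) × c / 2 can be illustrated as follows, assuming a fixed TDC bin width of 100 ps; the bin width, array sizes, and function names are illustrative assumptions.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s
BIN_WIDTH_S = 100e-12           # assumed TDC time-bin width (100 ps)

def accumulate(histogram: np.ndarray, tdc_codes: list[int]) -> None:
    """Add 1 to the frequency of the time interval for each TDC digital value."""
    for code in tdc_codes:
        histogram[code] += 1

def peak_time(histogram: np.ndarray) -> float:
    """Time corresponding to the maximum-frequency bin."""
    return int(np.argmax(histogram)) * BIN_WIDTH_S

# Reference pixel peak (t1) and one measurement pixel peak (t2n) give the
# per-unit distance Dn = (t2n - t1) * c / 2.
ref_hist = np.zeros(1024, dtype=int)
meas_hist = np.zeros(1024, dtype=int)
accumulate(ref_hist, [10, 10, 11, 10])          # emission timing
accumulate(meas_hist, [676, 677, 677, 677, 3])  # return + one stray photon
t1, t2n = peak_time(ref_hist), peak_time(meas_hist)
print((t2n - t1) * SPEED_OF_LIGHT / 2.0)  # ~10 m for a 667-bin difference
```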
As described above, the comprehensive control unit 208 sets the second measurement period of photons reflected back from the object 50 so as to correspond to the first distance value obtained by the phase difference method. As shown in figure B of FIG. 19, photons caused by attack light or environmental light also enter the pixel array section 12 during the second measurement period in which the measurement pixel 110i is driven. However, the attack light incident during the measurement is, for example, incident at high density outside the second measurement period, and photons due to the environmental light are incident randomly.
On the other hand, the density of the photons caused by the measurement light incident during the second measurement period increases around the time corresponding to the position of the object 50. Therefore, the influence of the photons caused by the measurement light is dominant in the peak of the histogram generated by the histogram generation unit 204a. By setting the second measurement period corresponding to the first distance value obtained by the phase difference method in this way, a decrease in the measurement accuracy of the TOF method can be suppressed. As can be seen from the above, since the measurement accuracy of the TOF method in the absence of attack light and the like is higher than that of the phase difference method, a measured distance value with higher measurement accuracy than that of the phase difference method can be obtained even in the presence of attack light.
As described above, in the distance measuring system 1 according to the present embodiment, the comprehensive control unit 208 sets the second measurement period of photons reflected back from the object 50 so as to correspond to the first distance value obtained by the phase difference method. As a result, the second measurement period is set by the first distance value in which the influence of attack light or environmental light is suppressed, so the influence of attack light or environmental light outside the second measurement period in the TOF method is suppressed, and degradation of the distance measurement by the TOF method in the presence of attack light and the like is suppressed.
(Modification 1 of the first embodiment)
The distance measuring system 1 according to Modification 1 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that an antireflection section (moth-eye) 1122 is provided in the SPADs 331, 332, 335, and 336 formed in the pixel array section 12. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 21A is a simplified schematic diagram of a cross section of a SPAD. As shown in FIG. 21A, visible light is almost entirely absorbed in the N-well 1112, whereas about half of the infrared light is reflected. Therefore, the SPAD according to the present embodiment is provided with the antireflection section (moth-eye) 1122.
FIG. 21B is a schematic diagram showing a cross section of the SPADs 331, 332, 335, and 336 provided with the antireflection section (moth-eye) 1122. As shown in FIG. 21B, in the SPADs 331, 332, 335, and 336 according to Modification 1 of the first embodiment, the surface (plate surface) on the light incident side has an antireflection structure formed of fine protrusions, a so-called moth-eye structure. In addition to preventing reflection, the antireflection section 1122 lengthens the effective optical path length by diffraction. In this way, the antireflection section (moth-eye) 1122 is configured as an uneven structure arranged at a predetermined pitch on the surface on the photoelectric conversion element side. Note that the SPADs 333 and 334 have the same configuration as the SPADs 331 and 332.
As described above, according to the present embodiment, the light that has entered the SPADs 331, 332, 335, and 336 is made to travel back and forth within the N-well 1112 by the antireflection section (moth-eye) 1122, so that the sensitivity of the SPADs 331, 332, 335, and 336 can be improved.
(Modification 2 of the first embodiment)
The distance measuring system 1 according to Modification 2 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that, in the SPADs 331, 332, 335, and 336 formed in the pixel array section 12, the position of the diffusion layer 1114 is changed according to the position of the on-chip lens 1110 or of the apertures WL and WR. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 22 is a schematic diagram showing a cross section of the SPADs 331, 332, 335, and 336 in which the position of the diffusion layer 1114 is changed according to the position of the on-chip lens 1110 or of the apertures WL and WR. As shown in FIG. 22, in the SPAD 335, the sensitivity can be further increased by arranging the diffusion layer 1114 on the side without the aperture WL. Similarly, in the SPAD 336, the sensitivity can be further increased by arranging the diffusion layer 1114 on the side without the aperture WR.
The focal position of each of the SPADs 331 and 332 varies depending on its arrangement position on the pixel array section 12 and its relationship with the optical axis of the lens 40. For this reason, in the SPADs 331 and 332 according to this modification, the diffusion layer 1114 is arranged in accordance with the focal position of the on-chip lens 1110. That is, the diffusion layer 1114 is arranged in alignment with the principal optical axis of the on-chip lens 1110. This provides the effect of so-called pupil correction, and the sensitivity of the SPADs 331 and 332 can be increased. Note that the SPADs 333 and 334 have the same configuration as the SPADs 331 and 332.
(Modification 3 of the first embodiment)
The distance measuring system 1 according to Modification 3 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that the circuit section 13 formed with the pixel array section 12 is stacked in two layers. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 23 is a schematic diagram showing a configuration example of the pixel array section 12. As shown in FIG. 23, the pixel array section 12 is configured on the first substrate 11a, and the inverters 312 of the pixels 110r, 110i, 110zR, and 110zL, the TDCs 170a and 170b, and the counters 172a and 172b are arranged on the second substrate 11b. This makes it possible to reduce the top surface area of the light receiving device 10 and to further miniaturize the light receiving device 10.
(Modification 4 of the first embodiment)
The distance measuring system 1 according to Modification 4 of the first embodiment differs from the distance measuring system 1 according to the first embodiment in that the circuit section 13 formed with the pixel array section 12 is stacked in three layers. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 24 is a schematic diagram showing a configuration example of the pixel array section 12 according to Modification 4 of the first embodiment. As shown in FIG. 24, the pixel array section 12 is configured on the first substrate 11c, the inverters 312 of the pixels 110r, 110i, 110zR, and 110zL are configured on the second substrate 11d, and the TDC 170 and the counter 172 are arranged on the third substrate 11e. Further, the TDC 170 corresponding to each unit 11u of the pixel array section 12 is arranged directly below that unit 11u. Similarly, the counter 172 corresponding to each unit 11u of the pixel array section 12 is arranged directly below that unit 11u. This makes it possible to further shorten the wiring distance of the light receiving device 10 and to further reduce the top surface area, so that the light receiving device 10 can be further miniaturized.
(Second embodiment)
The distance measuring system 1 according to the second embodiment differs from the distance measuring system 1 according to the first embodiment in that the light amount of the measurement light used for the TOF measurement can be further changed based on the first distance value generated by the second distance measurement unit 206 using the phase difference method. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 25 is a diagram schematically showing an example of changing the light amount according to the distance value obtained by the phase difference method. Figure A of FIG. 25 is an example of a short distance, and figure B is an example of a long distance farther than that of figure A.
FIG. 26 is a flowchart showing a control example of the comprehensive control unit 208 according to the second embodiment. It differs from the flowchart showing the control example of the comprehensive control unit 208 shown in FIG. 20 in that the light amount of the infrared pulsed laser 14 is set in step S200.
That is, in step S200, the comprehensive control unit 208 sets the light amount of the infrared pulsed laser 14 according to the first distance value generated by the second distance measurement unit 206 using the phase difference method. The comprehensive control unit 208 stores, for example, a table that associates distance values with light amounts, and sets the light amount with reference to that table.
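A minimal sketch of such a distance-to-light-amount table lookup follows; the table entries and the threshold policy are invented for illustration and are not values from the disclosure.

```python
# (distance upper bound in meters, relative laser power) - illustrative values
POWER_TABLE = [(5.0, 0.2), (15.0, 0.5), (30.0, 0.8), (float("inf"), 1.0)]

def laser_power_for(first_distance_m: float) -> float:
    """Pick the laser power of the first table row whose distance bound
    is not exceeded by the coarse phase-difference distance value."""
    for bound, power in POWER_TABLE:
        if first_distance_m <= bound:
            return power
    return 1.0  # unreachable because of the sentinel row above

print(laser_power_for(12.3))  # -> 0.5
```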
As described above, according to the present embodiment, the comprehensive control unit 208 sets the second measurement period of photons reflected back from the object 50 and the light amount of the pulsed laser 14 so as to correspond to the first distance value obtained by the phase difference method. As a result, the second measurement period is set by the first distance value in which the influence of attack light or environmental light is suppressed, and distance-dependent fluctuations in the amount of photons reflected back from the object 50 can be suppressed, so a decrease in the measurement accuracy of the TOF method is further suppressed.
(Third embodiment)
Whereas the phase difference pixels 110zL and 110zR of the distance measuring system 1 according to the first embodiment generated their output values as count values with the counter 172, the phase difference pixels 110zL and 110zR of the distance measuring system 1 according to the third embodiment differ in that they output their output values to the TDCs 170, and the count values are generated using the TDCs 170. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 27 is a block diagram showing a configuration example of the control unit 20 according to the third embodiment. As shown in FIG. 27, the control unit 20 according to the third embodiment further includes a third distance measurement unit 210. The third distance measurement unit 210 includes a histogram generation unit 210a and a processing unit 210b. Details of the processing of the third distance measurement unit 210 will be described later.
FIG. 28 is a diagram schematically showing a processing example of the unit 11u according to the third embodiment. As described above, the SPADs 331 to 334 (see FIG. 3) are connected to the TDC 170, and the SPADs 335 and 336 according to the third embodiment differ from the SPADs 335 and 336 according to the first embodiment in that they are each connected to a TDC 170.
An example is schematically shown in which photons ph8 to ph11 are incident on the SPADs 335 and 336 in time series. The phase difference pixel 110zL having the SPAD 335 and the phase difference pixel 110zR having the SPAD 336 of the unit 11u each generate pulse signals TOUT according to these photons ph8 to ph11. The TDC 170 corresponding to the SPAD 335 then generates digital values proportional to the differences between the input timings of the pulse signals TOUT generated in time series due to the photons ph8 to ph9 and the reference time t0, and supplies them to the histogram generation unit 210a. Similarly, the TDC 170 corresponding to the SPAD 336 generates digital values proportional to the differences between the input timings of the pulse signals TOUT generated in time series due to the photons ph10 to ph11 and the reference time t0, and supplies them to the histogram generation unit 210a.
Each time a digital value is input, the histogram generation unit 210a adds, for example, 1 to the frequency of the time interval corresponding to that digital value. As the object 50 is repeatedly irradiated with the pulsed light, the frequency of the time interval corresponding to the digital value according to the distance to the object 50 increases.
FIG. 29 is a diagram showing the histograms generated by the histogram generation unit 210a corresponding to the outputs of the phase difference pixels 110zL and 110zR. Figure A is the histogram corresponding to the phase difference pixel 110zL, and figure B is the histogram corresponding to the phase difference pixel 110zR.
The processing unit 210b generates, for example, an integrated value obtained by integrating the histogram generated by the histogram generation unit 210a as the counter value. That is, the integrated value is proportional to the number of photons that have reached each of the phase difference pixels 110zL and 110zR. In other words, the processing unit 210b integrates the number of pulse signals TOUT obtained through the TDC 170 and generates it as the counter value.
As shown in FIG. 14, this processing unit 210b calculates the shift amount based on the counter values corresponding to the phase difference pixels 110zL and 110zR arranged in columns. The processing unit 210b then calculates the distance to the object 50 based on the shift amount. Since the shift amount corresponds to the distance to the object 50, the accuracy is lower than that of the TOF method, but the distance can be measured while the influence of attack light and environmental light is suppressed.
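The integration of a TDC-derived histogram into a counter value can be sketched as follows; the histogram contents are illustrative, and the point is only that the integrated value equals the number of TOUT pulses, i.e. the photon count the counter 172 would otherwise produce.

```python
import numpy as np

def counter_value_from_histogram(hist: np.ndarray) -> int:
    """Integrate a TDC-derived histogram; since each TOUT pulse added 1 to
    exactly one bin, the sum equals the photon count for that pixel."""
    return int(hist.sum())

# Two per-pixel histograms stand in for the TDC outputs of 110zL / 110zR.
hist_L = np.array([0, 2, 5, 9, 5, 2, 0])
hist_R = np.array([0, 1, 4, 8, 6, 3, 1])
print(counter_value_from_histogram(hist_L), counter_value_from_histogram(hist_R))
```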
As described above, according to the present embodiment, the SPADs 335 and 336 are each connected to a TDC 170, and the third distance measurement unit 210 generates the counter values by integrating the number of pulse signals TOUT obtained through the TDCs 170 and calculates the distance value. As a result, in addition to effects equivalent to those of the distance measuring systems 1 of the first and second embodiments, an equivalent processing effect can be obtained with the TDCs 170 without using the counters 172. Therefore, the distance measuring system 1 can be configured with the TDCs 170 alone.
(Fourth embodiment)
Whereas the distance measuring system 1 of the first embodiment performed the distance measurement by the measurement pixel 110i after the distance measurement by the phase difference pixels 110zL and 110zR, the distance measuring system 1 according to the fourth embodiment differs from the distance measuring system 1 according to the first embodiment in that the distance measurements by the phase difference pixels 110zL and 110zR and by the measurement pixel 110i of each unit 11u are performed in parallel. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 30 is a diagram illustrating phase difference detection according to the fourth embodiment. Figure A shows the relationship between the surface positions of the object 50, the lens 40, and the pixel array section 12 and the optical paths of the incident light. In the figure, the light passing through the left side and the right side of the lens 40 is denoted by 301 and 302, respectively. For convenience, only the light 301 and 302 passing through the edges of the lens 40 is shown. Because of the on-chip lens 1110 (see FIG. 7), the light 301 actually passes through the right side of the on-chip lens 1110, and the light 302 actually passes through the left side of the on-chip lens 1110. For this reason, the position of the shielding section WR is schematically shown on the side opposite to that in FIG. 7.
The - (minus) position in the figure indicates a so-called back-focus state in which the focal position is on the side opposite to (the imaging surface of) the pixel array section 12. The 0 position in the figure indicates the in-focus state. Furthermore, the + (plus) position in the figure indicates a so-called front-focus state in which the focal position is on the (imaging surface of the) pixel array section 12 side.
FIG. 31 is a diagram showing the relationship between the focal position of the paired phase difference pixels 110zL and 110zR and their output values. On the horizontal axis, the in-focus position is 0, the - position is the back-focus state, and the + position is the front-focus state.
The output value of the phase difference pixel 110zL is denoted by L20a, and the output value of the phase difference pixel 110zR is denoted by R20a. That is, the output value L20a is the count value of the counter 172 corresponding to the phase difference pixel 110zL. Similarly, the output value R20a is the count value of the counter 172 corresponding to the phase difference pixel 110zR. As shown in FIG. 31, the values of L20a and R20a change depending on the degree of the back-focus state and of the front-focus state.
As shown again in FIG. 9, the second distance measurement unit 206 takes the output value of the phase difference pixel 110zL of each unit 11u as L20a and the output value of the phase difference pixel 110zR as R20a, and calculates the difference value D20 based on the output values L20a and R20a. The second distance measurement unit 206 then calculates the distance to the object 50 for each unit 11u based on the difference value D20. Since the difference value D20 corresponds to the distance to the object 50, the distance can be measured, although with lower accuracy than the TOF method. Thus, in the present embodiment, by using the SPADs 335 and 336, the distance to the object 50 can be measured for each unit 11u using light amounts at the photon level.
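A rough sketch of this per-unit computation is given below; the subtraction shows why a common-mode offset cancels, while the linear calibration from D20 to distance is invented purely for illustration (an actual device would use its own calibration).

```python
def difference_value(l20a: int, r20a: int) -> int:
    """Difference value D20 between the paired phase-difference outputs.
    A common-mode offset from interfering light cancels in the subtraction."""
    return l20a - r20a

def distance_from_d20(d20: int, gain_m_per_count: float = 0.01,
                      offset_m: float = 10.0) -> float:
    """Map D20 to a distance via an assumed linear calibration."""
    return offset_m + gain_m_per_count * d20

# Same D20 with and without a +50-count interference offset on both pixels:
print(difference_value(120, 80), difference_value(170, 130))  # -> 40 40
print(distance_from_d20(40))  # -> 10.4 (m), unaffected by the offset
```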
FIG. 32 is a diagram showing the relationship between the focal position of the phase difference pixels 110zL and 110zR and their output values when interfering light is received. On the horizontal axis, the in-focus position is 0, the - position is the back-focus state, and the + position is the front-focus state.
Here again, the output value of the phase difference pixel 110zL is denoted by L20a, and the output value of the phase difference pixel 110zR is denoted by R20a. That is, the output value L20a is the count value of the counter 172 corresponding to the phase difference pixel 110zL. Similarly, the output value R20a is the count value of the counter 172 corresponding to the phase difference pixel 110zR.
On the other hand, even when interfering light is received, the counter values of the phase difference pixel 110zL and the phase difference pixel 110zR merely increase by an offset due to the number of photons caused by the interfering light. Therefore, the influence on the difference value D20 is suppressed even when interfering light is received. Thus, in the phase difference method using the phase difference pixels 110zL and 110zR, even if the object 50 is irradiated with the pulsed light of the attack circuit 1000, the signal value L20a and the signal value R20a increase by the same amount, so the influence of the pulsed light of the attack circuit 1000, the environmental light, and the like is suppressed.
FIG. 33 is a timing chart showing a control example of the comprehensive control unit 208 according to the fourth embodiment. The horizontal axis indicates time. The vertical axis of figure A shows the light emission control signal generated by the light emission control unit 200 and the selection signals XSEL2, XSEL3, XSEL0, and XSEL1 generated by the drive control unit 202 under the control of the comprehensive control unit 208. For these signals, a high level indicates the driving state.
Figure B schematically shows photons incident on the pixel array section 12, that is, photons caused by attack light or environmental light and photons caused by the second measurement light.
FIG. 34 is a flowchart showing a control example of the comprehensive control unit 208 according to the fourth embodiment. A control example of the comprehensive control unit 208 according to the fourth embodiment will be described with reference to FIG. 33.
As shown in FIG. 33, the comprehensive control unit 208 sets the light emission control signal to the high-level signal and causes the infrared pulsed laser 14 to emit measurement pulsed light (step S300).
(Driving on the phase difference pixel side)
Subsequently, in accordance with the timing at which the measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL2 and the selection signal XSEL3, which drive the phase difference pixel 110zL and the phase difference pixel 110zR, to the high level, drives the phase difference pixels 110zL and 110zR, and measures the phase difference signal for each unit 11u (step S302). The range in which the selection signals XSEL2 and XSEL3 are set to the high level can be set according to a predetermined measurement distance range to the object 50.
Next, the second distance measurement unit 206 calculates the difference value D20 for each unit 11u based on the pulse signals COUT of each unit 11u, and generates the first distance value to the object 50 for each unit 11u (step S304). As described above, the distance values are generated while the influence of attack light and the like is suppressed.
(Driving on the measurement pixel side)
In accordance with the timing at which the measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL0, which drives the reference pixel 110r, to the high level and measures the signal of the reference light (step S306). Subsequently, every time a digital value is input during the period in which the selection signal XSEL0 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram corresponding to the output of the reference pixel 110r. The processing unit 204b generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for the reference pixel 110r as time t1.
Similarly, in accordance with the timing at which the measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL1, which drives each measurement pixel 110i, to the high level and measures the signal of the measurement light (step S308). Subsequently, every time a digital value is input during the period in which the selection signal XSEL1 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram corresponding to each of the plurality of measurement pixels 110i. The processing unit 204b then generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for each of the plurality of measurement pixels 110i as time t2n. The processing unit 204b thereby generates the distance Dn to the object 50 detected by each unit 11u as (t2n - t1) × speed of light / 2 (step S310).
As described above, the distance measurement values of the phase difference pixels 110zL and 110zR have lower measurement accuracy than those of the measurement pixel 110i, but the influence of attack light and environmental light is suppressed. On the other hand, the distance measurement values of the measurement pixel 110i have higher measurement accuracy than those of the phase difference pixels 110zL and 110zR, but tend to be more susceptible to the influence of attack light and environmental light.
Therefore, if the distance measurement value of the measurement pixel 110i of each unit 11u is within a predetermined range of the distance measurement value of the phase difference pixels 110zL and 110zR, the comprehensive control unit 208 selects the distance measurement value of the measurement pixel 110i. On the other hand, if the distance measurement value of the measurement pixel 110i is not within the predetermined range of the distance measurement value of the phase difference pixels 110zL and 110zR, the comprehensive control unit 208 selects the distance measurement value of the phase difference pixels 110zL and 110zR (step S312). For example, the predetermined range is set to a range from 90 percent to 110 percent of the distance measurement value of the phase difference pixels 110zL and 110zR.
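This per-unit selection rule can be sketched as follows, using the 90 to 110 percent window quoted above; the function and variable names are illustrative.

```python
def select_distance(tof_m: float, phase_diff_m: float,
                    lo: float = 0.90, hi: float = 1.10) -> float:
    """Prefer the high-accuracy TOF value when it is consistent with the
    interference-robust phase-difference value; otherwise fall back."""
    if lo * phase_diff_m <= tof_m <= hi * phase_diff_m:
        return tof_m      # consistent: keep the more precise TOF value
    return phase_diff_m   # inconsistent (e.g. attack light): fall back

print(select_distance(10.2, 10.0))  # -> 10.2 (within 90-110 % of 10.0)
print(select_distance(17.5, 10.0))  # -> 10.0 (TOF value rejected)
```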
As described above, according to the present embodiment, the distance measurements by the phase difference pixels 110zL and 110zR and by the measurement pixel 110i of each unit 11u are performed in parallel, and depending on how the second distance value compares with the first distance value of each unit 11u, the first distance value of the phase difference pixels 110zL and 110zR is selected when interfering light or the like is received, and the second distance value of the measurement pixel 110i is selected when no interfering light or the like is received. This makes it possible to shorten the measurement time; in addition, when interfering light or the like is received, the influence of the interfering light is suppressed by selecting the distance measurement values of the phase difference pixels 110zL and 110zR, and when no interfering light or the like is received, a more accurate distance measurement value can be generated by selecting the distance measurement value of the measurement pixel 110i.
(Fifth embodiment)
Whereas the distance measuring system 1 according to the fourth embodiment performed the distance measurements by the phase difference pixels 110zL and 110zR and by the measurement pixel 110i of each unit 11u in parallel, the distance measuring system 1 according to the fifth embodiment differs from the distance measuring system 1 according to the fourth embodiment in that the distance measurement by the measurement pixel 110i of each unit 11u is performed after the distance measurement by the phase difference pixels 110zL and 110zR of each unit 11u. The differences from the distance measuring system 1 of the fourth embodiment will be described below.
FIG. 35 is a flowchart showing a control example of the comprehensive control unit 208 according to the fifth embodiment. As shown in FIG. 35, the comprehensive control unit 208 performs steps S300 to S304 in the same manner as in FIG. 34.
Subsequently, the comprehensive control unit 208 sets the period during which the second light emission control signal is set to the high level and the selection signal XSEL0, and sets, for each unit 11u, the second measurement period during which the selection signal XSEL1 of that unit 11u is set to the high level according to the first distance value of that unit 11u (step S400).
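A self-contained sketch of this per-unit gating follows, assuming coarse distances in meters and an illustrative 0.5 m margin; all names and values are assumptions for illustration.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def gate_for(coarse_distance_m: float, margin_m: float = 0.5):
    """Per-unit XSEL1 gate: (start, end) in seconds after pulse emission."""
    rt = 2.0 * coarse_distance_m / SPEED_OF_LIGHT
    hw = 2.0 * margin_m / SPEED_OF_LIGHT
    return max(0.0, rt - hw), rt + hw

# Hypothetical per-unit coarse distances from the phase-difference step (m):
first_distance_by_unit = {0: 4.8, 1: 5.1, 2: 22.0}
gates = {u: gate_for(d) for u, d in first_distance_by_unit.items()}
for u, (t_start, t_end) in gates.items():
    print(f"unit {u}: XSEL1 high from {t_start * 1e9:.1f} to {t_end * 1e9:.1f} ns")
```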
Next, in accordance with the timing at which the second measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL0, which drives the reference pixel 110r, to the high level and measures the signal of the reference light (step S404). Subsequently, every time a digital value is input during the period in which the selection signal XSEL0 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram corresponding to the output of the reference pixel 110r. The processing unit 204b generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for the reference pixel 110r as time t1.
Next, in accordance with the timing at which the second measurement pulse is emitted, the comprehensive control unit 208 sets the selection signal XSEL1, which drives each measurement pixel 110i, to the high level according to the second measurement period of that measurement pixel 110i, and measures the signal of the measurement light (step S406). In this manner, the comprehensive control unit 208 sets the measurement time range of photons reflected back from the object 50, for each unit 11u, to the second measurement period corresponding to the first distance value of that unit 11u obtained by the phase difference method.
Next, every time a digital value is input during the period in which the selection signal XSEL0 is at the high level, the histogram generation unit 204a of the first distance measurement unit 204 adds, for example, 1 to the frequency of the time interval corresponding to that digital value, and generates a histogram for each unit 11u according to the output of the reference pixel 110r. The processing unit 204b generates, for example, the time corresponding to the maximum frequency of each histogram that the histogram generation unit 204a generated for the reference pixel 110r as time t1.
The processing unit 204b then generates, for example, the time corresponding to the maximum frequency of the histogram that the histogram generation unit 204a generated for each of the plurality of measurement pixels 110i as time t2n. The processing unit 204b thereby generates the distance Dn to the object 50 detected by each unit 11u as (t2n - t1) × speed of light / 2 (step S408).
As described above, in the distance measuring system 1 according to the present embodiment, the comprehensive control unit 208 sets the measurement period of photons reflected back from the object 50 for each unit 11u so as to correspond to the first distance value of that unit 11u measured by the phase difference method. As a result, the second measurement period of each unit 11u is set by the first distance value of that unit 11u in which the influence of attack light or environmental light is suppressed, so the influence of attack light or environmental light outside the second measurement period in the TOF method is suppressed, and a decrease in the measurement accuracy of the TOF method for each unit 11u is suppressed.
(Sixth embodiment)
The distance measuring system 1 according to the sixth embodiment differs from the distance measuring system 1 of the first embodiment in that it further includes an infrared pulsed laser 16 for the phase difference pixels 110zL and 110zR. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 36 is a block diagram schematically showing a schematic configuration example of the distance measuring system 1 according to the sixth embodiment. As shown in FIG. 36, the distance measuring system 1 according to the sixth embodiment differs from the distance measuring system 1 according to the first embodiment in that it further includes the infrared pulsed laser 16 for the phase difference pixels 110zL and 110zR and a diffractive optical element (DOE) 16a.
The infrared pulsed laser 16 is mounted on the substrate 11. The infrared pulsed laser 16 is formed using, for example, a VCSEL (Vertical Cavity Surface Emitting LASER) light source. Further, by arranging the diffractive optical element (DOE) 16a on the irradiation side of the infrared pulsed laser 16, the object 50 can be irradiated with a dot pattern of spot light arranged in a matrix of, for example, 100 points.
As described above, according to the present embodiment, the infrared pulsed laser 16 for the phase difference pixels 110zL and 110zR is further provided. This makes it possible to perform the measurement for the phase difference pixels 110zL and 110zR and the measurement for the measurement pixel 110i with independent light sources.
(Seventh embodiment)
The distance measuring system 1 according to the seventh embodiment differs from the distance measuring system 1 of the first embodiment in that the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i are configured as different chips. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 37A is a block diagram schematically showing a schematic configuration example of the distance measuring system 1 according to the seventh embodiment. As shown in FIG. 37A, in the distance measuring system 1 according to the seventh embodiment, the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i are configured as different chips.
FIG. 37B is a block diagram schematically showing a schematic configuration example of a distance measuring system 1 according to a comparative example. As shown in FIG. 37B, in the distance measuring system 1 according to the comparative example, the chip of the phase difference pixels 110zL and 110zR is arranged between the chip of the reference pixel 110r and the infrared pulsed laser 14. For this reason, in the comparative example, stray light from the infrared pulsed laser 14 easily enters the chip of the phase difference pixels 110zL and 110zR. In contrast, in the pixel array section 12 according to the present embodiment, the infrared pulsed laser 14 is arranged between the chip of the reference pixel 110r and the chip of the phase difference pixels 110zL and 110zR, so stray light is suppressed.
As described above, according to the present embodiment, the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i are configured as different chips. This makes it easy to give the phase difference pixels 110zL and 110zR, the reference pixel 110r, and the measurement pixel 110i different circuit configurations, further improving the degree of design freedom.
(Eighth embodiment)
The distance measuring system 1 according to the eighth embodiment differs from the distance measuring system 1 of the first embodiment in that each of the phase difference pixels 110zL and 110zR and the reference pixel 110r is configured with a single SPAD. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIG. 38 is a diagram showing a configuration example of the plurality of phase difference pixels 110zL and 110zR and the plurality of measurement pixels 110i arranged in the pixel array section 12 according to the eighth embodiment. As shown in FIG. 38, each of the plurality of phase difference pixels 110zL and 110zR and the plurality of measurement pixels 110i is configured with a single SPAD. Apertures WL and WR are formed in the middle portions of the phase difference pixels 110zL and 110zR.
This makes it possible to further increase the resolution of the measurement pixels 110i.
(Ninth embodiment)
The distance measuring system 1 according to the ninth embodiment differs from the distance measuring system 1 of the first embodiment in that the phase difference pixels 110zL and 110zR are configured on one chip and the measurement pixel 110i is configured on another chip. The differences from the distance measuring system 1 of the first embodiment will be described below.
FIGS. 39 to 41 are diagrams schematically showing the positional relationship between the chips 110zch and 110ich arranged in the pixel array section 12 and the infrared pulsed laser 14. The phase difference pixels 110zL and 110zR are configured on the semiconductor chip 110zch, and the measurement pixel 110i is arranged on the semiconductor chip 110ich. Although each chip is schematically shown as a circle, the pixels within each chip are arranged in a rectangular matrix.
FIG. 39 is a diagram showing an example of an inverted-L structure in which the chip 110zch and the chip 110ich are arranged vertically and the infrared pulsed laser 14 is placed closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed.
FIG. 40 is a diagram showing an example of a diagonal-L structure in which the chip 110zch and the chip 110ich are arranged diagonally and the infrared pulsed laser 14 is placed closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed.
FIG. 41 is a diagram showing an example of an L structure in which the chip 110zch and the chip 110ich are arranged horizontally and the infrared pulsed laser 14 is placed closer to the chip 110ich. In such a configuration, stray light from the infrared pulsed laser 14 to the phase difference pixels 110zL and 110zR is suppressed. Note that the inverted-L structure and the diagonal-L structure according to the present embodiment also correspond to L structures.
 以上説明したように、本実施形態によれば、位相差画素110zL、110zRが配置されるチップ110zchと、測定画素110iが配置されるチップ110ichを独立に構成し、外光パルスレーザ14をチップ110ichにより近づけたL字構造をとることとした。これにより、位相差画素110zL、110zRへの赤外光パルスレーザ14からの迷光が抑制される。 As described above, according to the present embodiment, the chip 110zch where the phase difference pixels 110zL and 110zR are arranged and the chip 110ich where the measurement pixel 110i is arranged are independently configured, and the external light pulse laser 14 is arranged on the chip 110ich. We decided to take a closer L-shaped structure. This suppresses stray light from the infrared light pulse laser 14 to the phase difference pixels 110zL and 110zR.
(Tenth Embodiment)
The distance measuring system 1 according to the tenth embodiment differs from those of the first to ninth embodiments in that the phase difference pixels 110zL and 110zR are configured as CIS (CMOS Image Sensor) pixels, enabling distance measurement with visible light. The differences from the distance measuring system 1 of the first embodiment are described below.
In the distance measuring systems 1 of the first to ninth embodiments, measurement using the phase difference pixels 110zL and 110zR was performed by irradiation with the infrared light pulse laser 14 or the infrared light pulse laser 16. By configuring the phase difference pixels 110zL and 110zR as CIS pixels, measurement using them becomes possible without irradiation with pulsed laser light. Furthermore, in the distance measuring system 1 according to the sixth embodiment (see FIG. 36), a visible light source can be arranged in place of the infrared light pulse laser 16. This enables measurement using the phase difference pixels 110zL and 110zR even in low-light environments such as at night.
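Purely as an illustration of the pupil-division principle used by such a pixel pair (this sketch is not taken from the disclosure; the function names, signal profiles, and calibration constant are all hypothetical), the following Python snippet estimates the lateral disparity between the signal profiles of a left (110zL) and right (110zR) pixel row by brute-force correlation and maps it to a coarse distance. A real system would use a calibrated disparity-to-distance model of its optics.

```python
import numpy as np

def estimate_disparity(left: np.ndarray, right: np.ndarray, max_shift: int = 8) -> int:
    """Return the integer shift (in pixels) that best aligns the left and
    right pupil-divided profiles, found by maximizing their correlation."""
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = float(np.sum(np.roll(left, s) * right))  # correlation at shift s
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Hypothetical signal profiles from one row of 110zL / 110zR pixels;
# in practice these would be read out from the pixel array section 12.
left = np.array([0, 1, 4, 9, 4, 1, 0, 0, 0, 0], dtype=float)
right = np.array([0, 0, 0, 1, 4, 9, 4, 1, 0, 0], dtype=float)

disparity = estimate_disparity(left, right)  # 2 pixels in this toy example
METERS_PER_PIXEL = 0.5  # hypothetical calibration constant of the optics
print(f"disparity = {disparity} px, coarse distance = {abs(disparity) * METERS_PER_PIXEL} m")
```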
FIG. 42 is a schematic cross-sectional view of the SPADs 3310 and 3320 and the CISs 3350 and 3360 formed in the pixel array section 12. The SPADs 3310 and 3320 and the CISs 3350 and 3360 correspond to the phase difference pixel 110zL, the phase difference pixel 110zR, and two of the four pixels constituting the measurement pixel 110i (see FIG. 2).
The SPADs 3310 and 3320 differ from the SPADs 331 and 332 shown in FIG. 7 in that a red filter 1122R and a blue filter 1122B are formed on them, respectively. This suppresses the entry of visible light into the N-well 1112.
In contrast, a green filter 1122G is formed on the CISs 3350 and 3360, and their photoelectric conversion section 1112a is configured as a CIS (CMOS Image Sensor). In this way, the phase difference pixels 110zL and 110zR can measure distance using visible light, and their sensitivity in distance measurement can be further improved.
(Eleventh Embodiment)
The distance measuring system 1 according to the eleventh embodiment differs from those of the first to tenth embodiments in that the phase difference pixels 110zL and 110zR are configured with elliptical or circular on-chip lenses. The differences from the distance measuring systems 1 of the first to tenth embodiments are described below.
In the distance measuring systems 1 of the first to tenth embodiments, the phase difference pixels 110zL and 110zR were formed by the apertures WL and WR. In the distance measuring system 1 according to the eleventh embodiment, the phase difference pixels 110zL and 110zR are instead formed by an elliptical or circular on-chip lens.
FIG. 43 is a diagram showing an example of the planar arrangement of the phase difference pixels 110zL and 110zR and the measurement pixels 110i formed in the pixel array section 12. Each pixel is composed of a single SPAD. An elliptical on-chip lens Lz10 is arranged over the phase difference pixels 110zL and 110zR.
FIG. 44 is a simplified schematic cross-sectional view of the phase difference pixels 110zL and 110zR. As shown in FIGS. 43 and 44, the elliptical on-chip lens Lz10 enables pupil division of the phase difference pixels 110zL and 110zR. Since no aperture is used, the sensitivity can be further improved.
FIG. 45 is a diagram showing another example of the planar arrangement of the phase difference pixels 110zL and 110zR and the measurement pixels 110i formed in the pixel array section 12. Each pixel is composed of a single SPAD. A circular on-chip lens Lz20 is arranged over the phase difference pixels 110zL and 110zR and two measurement pixels 110i. By integrating the outputs of the SPADs under the on-chip lens Lz20, the four pixels can be treated as a single pixel.
At the same time, by using the individual outputs of the phase difference pixels 110zL and 110zR, they can operate as a pupil-dividing pair. Since no aperture is used, the sensitivity can be further improved.
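As a minimal sketch of the dual readout just described (illustrative only; the class and field names are not from this disclosure), the four SPAD photon counts under one circular lens Lz20 can be summed to act as a single intensity pixel, while the left and right counts alone provide the phase-difference signal:

```python
from dataclasses import dataclass

@dataclass
class LensUnit:
    """Photon counts of the four SPADs sharing one circular on-chip lens:
    a left phase pixel (110zL), a right phase pixel (110zR), and two
    measurement pixels (110i)."""
    count_zL: int
    count_zR: int
    count_i1: int
    count_i2: int

    def intensity(self) -> int:
        # Integrated output: the four SPADs act as one large pixel.
        return self.count_zL + self.count_zR + self.count_i1 + self.count_i2

    def phase_signal(self) -> int:
        # Left/right imbalance used for pupil-division phase detection;
        # its sign indicates the defocus direction.
        return self.count_zL - self.count_zR

unit = LensUnit(count_zL=120, count_zR=80, count_i1=95, count_i2=101)
print(unit.intensity(), unit.phase_signal())  # 396 40
```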
<<Application Example>>
The technology according to the present disclosure can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, a robot, a construction machine, or an agricultural machine (tractor).
FIG. 46 is a block diagram showing a schematic configuration example of a vehicle control system 7000, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected via a communication network 7010. In the example shown in FIG. 46, the vehicle control system 7000 includes a drive system control unit 7100, a body system control unit 7200, a battery control unit 7300, a vehicle exterior information detection unit 7400, a vehicle interior information detection unit 7500, and an integrated control unit 7600. The communication network 7010 connecting these control units may be an in-vehicle communication network compliant with an arbitrary standard such as CAN (Controller Area Network), LIN (Local Interconnect Network), LAN (Local Area Network), or FlexRay (registered trademark).
Each control unit includes a microcomputer that performs arithmetic processing according to various programs, a storage section that stores the programs executed by the microcomputer or the parameters used in various calculations, and a drive circuit that drives the devices to be controlled. Each control unit includes a network I/F for communicating with other control units via the communication network 7010, as well as a communication I/F for communicating with devices or sensors inside and outside the vehicle by wired or wireless communication. In FIG. 46, a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, an audio/image output section 7670, an in-vehicle network I/F 7680, and a storage section 7690 are illustrated as the functional configuration of the integrated control unit 7600. The other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.
The drive system control unit 7100 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 7100 functions as a control device for a driving force generating device such as an internal combustion engine or a drive motor that generates the driving force of the vehicle, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle. The drive system control unit 7100 may also function as a control device such as an ABS (Antilock Brake System) or an ESC (Electronic Stability Control).
A vehicle state detection section 7110 is connected to the drive system control unit 7100. The vehicle state detection section 7110 includes, for example, at least one of a gyro sensor that detects the angular velocity of the axial rotational motion of the vehicle body, an acceleration sensor that detects the acceleration of the vehicle, and sensors for detecting the amount of operation of the accelerator pedal, the amount of operation of the brake pedal, the steering angle of the steering wheel, the engine speed, the rotational speed of the wheels, and the like. The drive system control unit 7100 performs arithmetic processing using signals input from the vehicle state detection section 7110, and controls the internal combustion engine, the drive motor, the electric power steering device, the brake device, and the like.
The body system control unit 7200 controls the operation of various devices installed in the vehicle body according to various programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, or fog lamps. In this case, radio waves transmitted from a portable device substituting for a key, or signals from various switches, may be input to the body system control unit 7200. The body system control unit 7200 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
The battery control unit 7300 controls a secondary battery 7310, which is the power supply source of the drive motor, according to various programs. For example, information such as the battery temperature, the battery output voltage, or the remaining battery capacity is input to the battery control unit 7300 from a battery device including the secondary battery 7310. The battery control unit 7300 performs arithmetic processing using these signals, and controls the temperature regulation of the secondary battery 7310 or a cooling device provided in the battery device.
The vehicle exterior information detection unit 7400 detects information outside the vehicle on which the vehicle control system 7000 is mounted. For example, at least one of an imaging section 7410 and a vehicle exterior information detection section 7420 is connected to the vehicle exterior information detection unit 7400. The imaging section 7410 includes at least one of a ToF (Time of Flight) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The vehicle exterior information detection section 7420 includes, for example, at least one of an environment sensor for detecting the current weather or meteorological conditions and a surrounding information detection sensor for detecting other vehicles, obstacles, pedestrians, and the like around the vehicle on which the vehicle control system 7000 is mounted.
The environment sensor may be, for example, at least one of a raindrop sensor that detects rainy weather, a fog sensor that detects fog, a sunshine sensor that detects the degree of sunlight, and a snow sensor that detects snowfall. The surrounding information detection sensor may be at least one of an ultrasonic sensor, a radar device, and a LIDAR (Light Detection and Ranging, Laser Imaging Detection and Ranging) device. The imaging section 7410 and the vehicle exterior information detection section 7420 may each be provided as an independent sensor or device, or may be provided as a device in which a plurality of sensors or devices are integrated.
Here, FIG. 47 shows an example of the installation positions of the imaging section 7410 and the vehicle exterior information detection section 7420. Imaging sections 7910, 7912, 7914, 7916, and 7918 are provided, for example, at at least one of the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the vehicle 7900. The imaging section 7910 provided on the front nose and the imaging section 7918 provided at the upper part of the windshield inside the vehicle mainly acquire images ahead of the vehicle 7900. The imaging sections 7912 and 7914 provided on the side mirrors mainly acquire images of the sides of the vehicle 7900. The imaging section 7916 provided on the rear bumper or the back door mainly acquires images behind the vehicle 7900. The imaging section 7918 provided at the upper part of the windshield inside the vehicle is mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
Note that FIG. 47 shows an example of the imaging ranges of the imaging sections 7910, 7912, 7914, and 7916. The imaging range a indicates the imaging range of the imaging section 7910 provided on the front nose, the imaging ranges b and c indicate the imaging ranges of the imaging sections 7912 and 7914 provided on the side mirrors, respectively, and the imaging range d indicates the imaging range of the imaging section 7916 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging sections 7910, 7912, 7914, and 7916, an overhead image of the vehicle 7900 viewed from above can be obtained.
The vehicle exterior information detection sections 7920, 7922, 7924, 7926, 7928, and 7930 provided at the front, rear, sides, and corners of the vehicle 7900 and at the upper part of the windshield inside the vehicle may be, for example, ultrasonic sensors or radar devices. The vehicle exterior information detection sections 7920, 7926, and 7930 provided on the front nose, the rear bumper, and the back door of the vehicle 7900 and at the upper part of the windshield inside the vehicle may be, for example, LIDAR devices. These vehicle exterior information detection sections 7920 to 7930 are mainly used to detect preceding vehicles, pedestrians, obstacles, and the like.
Returning to FIG. 46, the description continues. The vehicle exterior information detection unit 7400 causes the imaging section 7410 to capture an image of the outside of the vehicle and receives the captured image data. The vehicle exterior information detection unit 7400 also receives detection information from the connected vehicle exterior information detection section 7420. When the vehicle exterior information detection section 7420 is an ultrasonic sensor, a radar device, or a LIDAR device, the vehicle exterior information detection unit 7400 causes it to transmit ultrasonic waves, electromagnetic waves, or the like, and receives information on the received reflected waves. Based on the received information, the vehicle exterior information detection unit 7400 may perform object detection processing or distance detection processing for persons, vehicles, obstacles, signs, characters on the road surface, and the like. Based on the received information, the vehicle exterior information detection unit 7400 may also perform environment recognition processing for recognizing rainfall, fog, road surface conditions, and the like, and may calculate the distance to an object outside the vehicle.
The vehicle exterior information detection unit 7400 may also perform image recognition processing or distance detection processing for recognizing persons, vehicles, obstacles, signs, characters on the road surface, and the like based on the received image data. The vehicle exterior information detection unit 7400 may perform processing such as distortion correction or alignment on the received image data, and may synthesize image data captured by different imaging sections 7410 to generate an overhead image or a panoramic image. The vehicle exterior information detection unit 7400 may also perform viewpoint conversion processing using image data captured by different imaging sections 7410.
The vehicle interior information detection unit 7500 detects information inside the vehicle. For example, a driver state detection section 7510 that detects the state of the driver is connected to the vehicle interior information detection unit 7500. The driver state detection section 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sound inside the vehicle, or the like. The biosensor is provided, for example, on the seat surface or the steering wheel, and detects biological information of a passenger sitting on the seat or the driver gripping the steering wheel. The vehicle interior information detection unit 7500 may calculate the degree of fatigue or concentration of the driver based on the detection information input from the driver state detection section 7510, or may determine whether the driver is dozing off. The vehicle interior information detection unit 7500 may also perform processing such as noise canceling on the collected audio signal.
The integrated control unit 7600 controls the overall operation of the vehicle control system 7000 according to various programs. An input section 7800 is connected to the integrated control unit 7600. The input section 7800 is realized by a device that can be operated by a passenger, such as a touch panel, buttons, a microphone, switches, or levers. Data obtained by voice recognition of sound input through the microphone may be input to the integrated control unit 7600. The input section 7800 may be, for example, a remote control device using infrared rays or other radio waves, or an externally connected device such as a mobile phone or a PDA (Personal Digital Assistant) compatible with the operation of the vehicle control system 7000. The input section 7800 may also be, for example, a camera, in which case a passenger can input information by gestures. Alternatively, data obtained by detecting the movement of a wearable device worn by a passenger may be input. Furthermore, the input section 7800 may include, for example, an input control circuit that generates an input signal based on information input by a passenger or the like using the input section 7800 and outputs it to the integrated control unit 7600. By operating the input section 7800, a passenger or the like inputs various data to the vehicle control system 7000 and instructs it to perform processing operations.
The storage section 7690 may include a ROM (Read Only Memory) that stores various programs executed by the microcomputer, and a RAM (Random Access Memory) that stores various parameters, calculation results, sensor values, and the like. The storage section 7690 may be realized by a magnetic storage device such as an HDD (Hard Disc Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.
The general-purpose communication I/F 7620 is a general-purpose communication I/F that mediates communication with various devices existing in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as GSM (registered trademark) (Global System of Mobile communications), WiMAX (registered trademark), LTE (registered trademark) (Long Term Evolution), or LTE-A (LTE-Advanced), or another wireless communication protocol such as wireless LAN (also referred to as Wi-Fi (registered trademark)) or Bluetooth (registered trademark). The general-purpose communication I/F 7620 may connect, for example, to a device (for example, an application server or a control server) existing on an external network (for example, the Internet, a cloud network, or an operator-specific network) via a base station or an access point. The general-purpose communication I/F 7620 may also connect to a terminal existing near the vehicle (for example, a terminal of a driver, a pedestrian, or a store, or an MTC (Machine Type Communication) terminal) using, for example, P2P (Peer To Peer) technology.
The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol designed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such as WAVE (Wireless Access in Vehicle Environment), which is a combination of the lower-layer IEEE 802.11p and the upper-layer IEEE 1609, DSRC (Dedicated Short Range Communications), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication, a concept including one or more of vehicle-to-vehicle communication, vehicle-to-infrastructure communication, vehicle-to-home communication, and vehicle-to-pedestrian communication.
The positioning section 7640 receives, for example, GNSS signals from GNSS (Global Navigation Satellite System) satellites (for example, GPS signals from GPS (Global Positioning System) satellites), executes positioning, and generates position information including the latitude, longitude, and altitude of the vehicle. Note that the positioning section 7640 may identify the current position by exchanging signals with a wireless access point, or may acquire position information from a terminal having a positioning function, such as a mobile phone, a PHS, or a smartphone.
The beacon receiving section 7650 receives, for example, radio waves or electromagnetic waves transmitted from wireless stations installed on roads, and acquires information such as the current position, traffic congestion, road closures, or required travel time. Note that the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.
The in-vehicle device I/F 7660 is a communication interface that mediates connections between the microcomputer 7610 and various in-vehicle devices 7760 present in the vehicle. The in-vehicle device I/F 7660 may establish a wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), NFC (Near Field Communication), or WUSB (Wireless USB). The in-vehicle device I/F 7660 may also establish a wired connection such as USB (Universal Serial Bus), HDMI (registered trademark) (High-Definition Multimedia Interface), or MHL (Mobile High-definition Link) via a connection terminal (and, if necessary, a cable) not shown. The in-vehicle devices 7760 may include, for example, at least one of a mobile device or wearable device owned by a passenger, and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a route to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.
The in-vehicle network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The in-vehicle network I/F 7680 transmits and receives signals and the like in accordance with a predetermined protocol supported by the communication network 7010.
The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 according to various programs based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680. For example, the microcomputer 7610 may calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the acquired information inside and outside the vehicle, and output control commands to the drive system control unit 7100. For example, the microcomputer 7610 may perform cooperative control aimed at realizing ADAS (Advanced Driver Assistance System) functions, including vehicle collision avoidance or impact mitigation, following driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, and vehicle lane departure warning. The microcomputer 7610 may also perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the acquired information about the surroundings of the vehicle.
The microcomputer 7610 may generate three-dimensional distance information between the vehicle and objects such as surrounding structures and persons based on information acquired via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the in-vehicle network I/F 7680, and may create local map information including surrounding information on the current position of the vehicle. The microcomputer 7610 may also predict dangers such as a vehicle collision, the approach of a pedestrian or the like, or entry into a closed road based on the acquired information, and generate a warning signal. The warning signal may be, for example, a signal for generating a warning sound or lighting a warning lamp.
The audio/image output section 7670 transmits an output signal of at least one of audio and images to an output device capable of visually or audibly notifying information to a passenger of the vehicle or to the outside of the vehicle. In the example of FIG. 46, an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as output devices. The display section 7720 may include, for example, at least one of an on-board display and a head-up display. The display section 7720 may have an AR (Augmented Reality) display function. The output device may be a device other than these, such as headphones, a wearable device such as a glasses-type display worn by a passenger, a projector, or a lamp. When the output device is a display device, the display device visually displays the results obtained by the various processes performed by the microcomputer 7610 or the information received from other control units in various formats such as text, images, tables, and graphs. When the output device is an audio output device, the audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal and outputs it audibly.
Note that in the example shown in FIG. 46, at least two control units connected via the communication network 7010 may be integrated into one control unit. Alternatively, each control unit may be composed of a plurality of control units. Furthermore, the vehicle control system 7000 may include another control unit not shown. In the above description, some or all of the functions performed by any of the control units may be given to another control unit. In other words, as long as information is transmitted and received via the communication network 7010, predetermined arithmetic processing may be performed by any of the control units. Similarly, a sensor or device connected to one of the control units may be connected to another control unit, and a plurality of control units may transmit and receive detection information to and from each other via the communication network 7010.
Note that a computer program for realizing each function of the distance measuring system 1 according to the present embodiment described with reference to FIG. 1 can be implemented in any of the control units or the like. A computer-readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disc, a magneto-optical disc, a flash memory, or the like. The computer program may also be distributed, for example, via a network without using a recording medium.
In the vehicle control system 7000 described above, the light receiving device 10 of the distance measuring system 1 according to the present embodiment described with reference to FIG. 1 can be applied to the imaging section 7410 of the application example shown in FIG. 46. For example, the control section 20 can be applied to the vehicle exterior information detection unit 7400 shown in FIG. 46.
At least some of the components of the distance measuring system 1 described with reference to FIG. 1 may be realized in a module for the integrated control unit 7600 shown in FIG. 46 (for example, an integrated circuit module composed of a single die). Alternatively, the distance measuring system 1 described with reference to FIG. 1 may be realized by a plurality of control units of the vehicle control system 7000 shown in FIG. 46.
Note that the present technology can also have the following configurations.
(1)
A light receiving device comprising:
a pixel array section having a measurement pixel used to measure a distance to a target object, and a plurality of paired phase difference pixels that pupil-divide incident light from the target object to detect a phase difference;
a first distance measuring section that generates a first distance value to the target object based on information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time point; and
a second distance measuring section that generates a second distance value to the target object based on information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
(2)
The light receiving device according to (1), wherein
the measurement pixel has a photoelectric conversion section that performs photoelectric conversion according to incident photons, and
the phase difference pixel has a photoelectric conversion section that performs photoelectric conversion according to incident photons.
(3)
The light receiving device according to (2), wherein the photoelectric conversion section is a single photon avalanche photodiode (SPAD).
(4)
The light receiving device according to (3), further comprising a control section that sets a drive period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
(5)
The light receiving device according to (1), wherein
the first distance measuring section generates the first distance value according to values of a histogram having appearance frequency information of difference values between timings at which the measurement pixel receives photons and a predetermined time point, and
the second distance measuring section generates the second distance value according to a phase difference obtained using signal values corresponding to the numbers of photons incident on the plurality of phase difference pixels.
(6)
The light receiving device according to (5), wherein
the measurement pixel has a plurality of the photoelectric conversion sections,
the measurement pixel and the plurality of paired phase difference pixels constitute a unit,
the light receiving device further comprises a first conversion section corresponding to the unit, and
the first conversion section generates a reception signal having information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time point.
(7)
The light receiving device according to (3), further comprising a first infrared light pulse laser,
wherein the control section controls the amount of laser light emitted by the first infrared light pulse laser according to the second distance.
(8)
The light receiving device according to (7), further comprising a second infrared light pulse laser,
wherein the plurality of phase difference pixels generate signals according to the laser light emitted by the second infrared light pulse laser.
(9)
The light receiving device according to (6), wherein the pixel array section and the first conversion section are stacked, and the first conversion section corresponding to the unit is arranged directly below the unit.
(10)
The light receiving device according to (6), further comprising a second conversion section corresponding to the unit,
wherein the second conversion section generates a second reception signal having information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
(11)
The light receiving device according to (10), wherein the second conversion section generates third reception signals having information regarding differences between timings at which the phase difference pixels receive photons and a predetermined time point, and generates the second reception signal according to the number of the third reception signals.
(12)
The light receiving device according to (11), wherein the measurement pixel and the plurality of paired phase difference pixels generate signals according to photons received during the same time period.
(13)
The light receiving device according to (2), wherein the measurement pixel and the plurality of paired phase difference pixels are formed on chips of different semiconductor elements.
(14)
The light receiving device according to (13), further comprising a first infrared light pulse laser,
wherein a first chip on which the measurement pixel is formed, a second chip on which the plurality of paired phase difference pixels are formed, and the first infrared light pulse laser are arranged in an L shape.
(15)
A light receiving device comprising:
a pixel array section composed of a plurality of pixels; and
a control section that controls the pixel array section,
wherein the pixel array section has
a plurality of paired phase difference pixels that pupil-divide incident light from a target object to detect a phase difference, and
a measurement pixel used to measure a distance to the target object, and
the control section sets a drive period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
(16)
The light receiving device according to (15), wherein
the phase difference pixel has a photoelectric conversion section that receives visible light and performs photoelectric conversion, and
the measurement pixel has a photoelectric conversion section that performs photoelectric conversion according to incident photons.
(17)
The light receiving device according to (2), wherein the photoelectric conversion section further has an antireflection section on its incident side.
(18)
The light receiving device according to (2), wherein the photoelectric conversion section has an on-chip lens formed of a high refractive index material on its incident side.
(19)
The light receiving device according to (2), wherein the photoelectric conversion section has
an on-chip lens, and
a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and
the diffusion layer is arranged according to the position of the main optical axis of the on-chip lens.
(20)
The light receiving device according to (2), wherein, in the phase difference pixel, the photoelectric conversion section has
an aperture in its incident region, and
a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and
the diffusion layer is arranged according to the position of the aperture.
(21)
The light receiving device according to (2), wherein two phase difference pixels among the plurality of paired phase difference pixels have an elliptical on-chip lens arranged over the photoelectric conversion sections of the two phase difference pixels.
(22)
The light receiving device according to (2), having a circular on-chip lens arranged over the photoelectric conversion sections of two phase difference pixels among the plurality of paired phase difference pixels and the photoelectric conversion sections of two of the measurement pixels.
(23)
The light receiving device according to (15), wherein
the photoelectric conversion section of the phase difference pixel receives light through a color filter that transmits visible light, and
the photoelectric conversion section of the measurement pixel receives light through a color filter that transmits infrared light.
(24)
A method for controlling a pixel array section having a plurality of paired phase difference pixels that pupil-divide incident light from a target object to detect a phase difference, and a measurement pixel used to measure a distance to the target object, the method comprising:
setting a drive period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
(25)
A distance measuring system comprising:
the light receiving device according to (1); and
a lens that condenses incident light from the target object.
Aspects of the present disclosure are not limited to the individual embodiments described above and include various modifications that those skilled in the art can conceive of, and the effects of the present disclosure are not limited to the contents described above. That is, various additions, changes, and partial deletions are possible without departing from the conceptual idea and spirit of the present disclosure derived from the contents defined in the claims and their equivalents.
1: distance measuring system, 10: light receiving device, 12: pixel array section, 14: infrared light pulse laser, 16: infrared light pulse laser, 20: control section, 24: display device, 40: lens, 110i: measurement pixel, 110r: reference pixel, 110z, 110zL, 110zR: phase difference pixel, 331 to 340: SPAD, 1110: on-chip lens, 3350, 3360: CIS, Lz10, Lz20: on-chip lens, WL, WR: aperture.

Claims (25)

1. A light receiving device comprising:
a pixel array section having a measurement pixel used to measure a distance to a target object, and a plurality of paired phase difference pixels that pupil-divide incident light from the target object to detect a phase difference;
a first distance measuring section that generates a first distance value to the target object based on information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time point; and
a second distance measuring section that generates a second distance value to the target object based on information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
2. The light receiving device according to claim 1, wherein
the measurement pixel has a photoelectric conversion section that performs photoelectric conversion according to incident photons, and
the phase difference pixel has a photoelectric conversion section that performs photoelectric conversion according to incident photons.
3. The light receiving device according to claim 2, wherein the photoelectric conversion section is a single photon avalanche photodiode (SPAD).
4. The light receiving device according to claim 3, further comprising a control section that sets a drive period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
5. The light receiving device according to claim 1, wherein
the first distance measuring section generates the first distance value according to values of a histogram having appearance frequency information of difference values between timings at which the measurement pixel receives photons and a predetermined time point, and
the second distance measuring section generates the second distance value according to a phase difference obtained using signal values corresponding to the numbers of photons incident on the plurality of phase difference pixels.
6. The light receiving device according to claim 2, wherein
the measurement pixel has a plurality of the photoelectric conversion sections,
the measurement pixel and the plurality of paired phase difference pixels constitute a unit,
the light receiving device further comprises a first conversion section corresponding to the unit, and
the first conversion section generates a reception signal having information regarding a difference between a timing at which the measurement pixel receives a photon and a predetermined time point.
  7.  第1赤外光パルスレーザを更に備え、
     前記制御部は、前記第2距離に応じて前記第1赤外光パルスレーザが照射するレーザ光の光量を制御する、請求項4に記載の受光装置。
    further comprising a first infrared pulsed laser,
    The light receiving device according to claim 4, wherein the control unit controls the amount of laser light emitted by the first infrared pulse laser according to the second distance.
  8.  第2赤外光パルスレーザを更に備え、
     前記複数の位相差画素は、前記第2赤外光パルスレーザの照射するレーザ光に応じて信号を生成する、請求項7に記載の受光装置。
    further comprising a second infrared light pulse laser;
    The light receiving device according to claim 7, wherein the plurality of phase difference pixels generate a signal according to the laser light irradiated by the second infrared light pulse laser.
  9.  前記画素アレイ部と、前記第1変換部は積層化されており、前記ユニットに対応する前記第1変換部は、前記ユニットの直下に配置される、請求項6に記載の受光装置。 The light receiving device according to claim 6, wherein the pixel array section and the first conversion section are stacked, and the first conversion section corresponding to the unit is arranged directly below the unit.
  10.  前記ユニットに対応する第2変換部を更に備え、
     前記第2変換部は、前記複数の位相差画素それぞれに入射する光子の数に対応する情報を有する第2受信信号を生成する、請求項6に記載の受光装置。
    further comprising a second conversion section corresponding to the unit,
    The light receiving device according to claim 6, wherein the second conversion unit generates a second reception signal having information corresponding to the number of photons incident on each of the plurality of phase difference pixels.
  11.  前記第2変換部は、前記位相差画素が光子を受光するタイミングと所定時点との差分に関する情報を有する第3受信信号を生成し、第3受信信号の数に応じて前記第2受信信号を生成する、請求項10に記載の受光装置。 The second conversion unit generates a third reception signal having information regarding a difference between the timing at which the phase difference pixel receives a photon and a predetermined time, and converts the second reception signal into a third reception signal according to the number of third reception signals. The light receiving device according to claim 10.
  12.  前記測定画素と、前記対となる複数の位相差画素とは、同じ時間帯に受光した光子に応じて、信号を生成する、請求項11に記載の受光装置。 The light receiving device according to claim 11, wherein the measurement pixel and the plurality of paired phase difference pixels generate a signal in response to photons received during the same time period.
  13.  The light-receiving device according to claim 2, wherein the measurement pixel and the plurality of paired phase difference pixels are formed on chips of different semiconductor elements.
  14.  The light-receiving device according to claim 13, further comprising a first infrared pulsed laser,
     wherein a first chip on which the measurement pixel is formed, a second chip on which the plurality of paired phase difference pixels are formed, and the first infrared pulsed laser are arranged in an L-shape.
  15.  A light-receiving device comprising:
     a pixel array unit composed of a plurality of pixels; and
     a control unit that controls the pixel array unit,
     wherein the pixel array unit has
     a plurality of paired phase difference pixels that divide incident light from a target object into pupils and detect a phase difference, and
     a measurement pixel used to measure a distance to the target object, and
     the control unit sets a driving period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
  16.  The light-receiving device according to claim 15, wherein
     the phase difference pixel has a photoelectric conversion unit that receives visible light and performs photoelectric conversion, and
     the measurement pixel has a photoelectric conversion unit that performs photoelectric conversion according to incident photons.
  17.  The light-receiving device according to claim 2, wherein the photoelectric conversion unit further has an antireflection portion on its incident side.
  18.  The light-receiving device according to claim 2, wherein the photoelectric conversion unit has, on its incident side, an on-chip lens formed of a high-refractive-index material.
  19.  The light-receiving device according to claim 2, wherein the photoelectric conversion unit has
     an on-chip lens, and
     a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and
     the diffusion layer is arranged according to the position of the main optical axis of the on-chip lens.
  20.  The light-receiving device according to claim 2, wherein, in the phase difference pixel, the photoelectric conversion unit has
     an aperture for the incident region, and
     a diffusion layer having at least an avalanche multiplication region that multiplies carriers generated by the photoelectric conversion, and
     the diffusion layer is arranged according to the position of the aperture.
  21.  The light-receiving device according to claim 2, wherein two phase difference pixels among the plurality of paired phase difference pixels have an elliptical on-chip lens arranged over the photoelectric conversion units of the two phase difference pixels.
  22.  The light-receiving device according to claim 2, comprising a circular on-chip lens arranged over the photoelectric conversion units of two phase difference pixels among the plurality of paired phase difference pixels and over the photoelectric conversion units of two of the measurement pixels.
  23.  The light-receiving device according to claim 15, wherein
     the photoelectric conversion unit of the phase difference pixel receives light through a color filter that transmits visible light, and
     the photoelectric conversion unit of the measurement pixel receives light through a color filter that transmits infrared light.
  24.  A method for controlling a pixel array unit that has a plurality of paired phase difference pixels that divide incident light from a target object into pupils and detect a phase difference, and a measurement pixel used to measure a distance to the target object, the method comprising
     setting a driving period of the measurement pixel according to a second distance based on output signals of the plurality of phase difference pixels.
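    Setting the driving period from the second distance means the measurement pixel is active only around the expected round-trip time, which is how incident light other than the measurement light can be suppressed. A minimal sketch of that gate computation (margin_m is an assumed parameter, not a value from the patent):

```python
def driving_period_from_second_distance(d2_m, margin_m=0.5,
                                        c=299_792_458.0):
    """Open the measurement-pixel gate only around the round-trip time
    implied by the coarse (second) distance estimate."""
    t_start = 2.0 * max(d2_m - margin_m, 0.0) / c
    t_end = 2.0 * (d2_m + margin_m) / c
    return t_start, t_end  # gate window in seconds after pulse emission
```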
  25.  A distance measuring system comprising:
     the light-receiving device according to claim 1; and
     a lens that condenses incident light from the target object.
PCT/JP2023/017181 2022-05-12 2023-05-02 Light-receiving device, control method, and distance measuring system WO2023219045A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022078931A JP2023167618A (en) 2022-05-12 2022-05-12 Light reception device, control method, and range finding system
JP2022-078931 2022-05-12

Publications (1)

Publication Number Publication Date
WO2023219045A1 true WO2023219045A1 (en) 2023-11-16

Family

ID=88730455

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/017181 WO2023219045A1 (en) 2022-05-12 2023-05-02 Light-receiving device, control method, and distance measuring system

Country Status (2)

Country Link
JP (1) JP2023167618A (en)
WO (1) WO2023219045A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000346941A (en) * 1999-06-08 2000-12-15 Mitsubishi Electric Corp Distance measuring device
US20110304842A1 (en) * 2010-06-15 2011-12-15 Ming-Tsan Kao Time of flight system capable of increasing measurement accuracy, saving power and/or increasing motion detection rate and method thereof
US20160124089A1 (en) * 2014-10-31 2016-05-05 Cedes Safety & Automation Ag Absolute distance measurement for time-of-flight sensors
WO2019065291A1 (en) * 2017-09-28 2019-04-04 ソニーセミコンダクタソリューションズ株式会社 Image capturing element and image capturing device
WO2021106529A1 (en) * 2019-11-29 2021-06-03 富士フイルム株式会社 Information processing device, information processing method, and program
JP2021148643A (en) * 2020-03-19 2021-09-27 株式会社リコー Method for calculating distance correction information, distance measuring device, moving body, and stereo camera device

Also Published As

Publication number Publication date
JP2023167618A (en) 2023-11-24

Similar Documents

Publication Publication Date Title
US20230251357A1 (en) Light reception device and distance measurement device
US11743604B2 (en) Imaging device and image processing system
CN112513678B (en) Photodetector and distance measuring device
WO2022091607A1 (en) Light receiving device and distance measurement device
WO2021111766A1 (en) Light-receiving device, method for controlling light-receiving device, and ranging device
JP2021128084A (en) Ranging device and ranging method
WO2021124762A1 (en) Light receiving device, method for controlling light receiving device, and distance measuring device
EP3904826A1 (en) Distance measuring device and distance measuring method
JP7511562B2 (en) Light receiving device, control method for light receiving device, and distance measuring device
WO2020153182A1 (en) Light detection device, method for driving light detection device, and ranging device
WO2023219045A1 (en) Light-receiving device, control method, and distance measuring system
WO2021161858A1 (en) Rangefinder and rangefinding method
WO2021053958A1 (en) Light reception device, distance measurement device, and distance measurement device control method
WO2024162109A1 (en) Light detecting device, and ranging system
WO2021161857A1 (en) Distance measurement device and distance measurement method
WO2024095625A1 (en) Rangefinder and rangefinding method
WO2023171176A1 (en) Light-receiving element and electronic device
WO2024057471A1 (en) Photoelectric conversion element, solid-state imaging element, and ranging system
WO2023162734A1 (en) Distance measurement device
WO2023190277A1 (en) Light detection device
WO2024195531A1 (en) Optical detecting device, and ranging system
WO2023067755A1 (en) Light detection device, imaging device, and distance measurement device
WO2024070673A1 (en) Solid-state imaging device, electronic device, and program
WO2023190278A1 (en) Light detection device
WO2023162651A1 (en) Light-receiving element and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23803522

Country of ref document: EP

Kind code of ref document: A1