WO2022176498A1 - Distance measuring sensor and distance measuring device - Google Patents

Distance measuring sensor and distance measuring device

Info

Publication number
WO2022176498A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
light emission
period
luminance
unit
Prior art date
Application number
PCT/JP2022/002120
Other languages
English (en)
Japanese (ja)
Inventor
Yuki Kikuchi (菊池 勇輝)
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2022176498A1

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/32Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
    • G01S17/36Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal

Definitions

  • The present disclosure relates to ranging sensors and ranging devices.
  • Conventionally, distance measuring sensors that use an image sensor or the like as the light receiving sensor have been used.
  • A distance measuring sensor using such an imaging element employs measurement by the ToF (Time of Flight) method.
  • The ToF method irradiates an object with light, receives the reflected light returned by the object, and measures the time the light takes to travel to and from the object, thereby detecting the distance to the object.
  • Two forms of the ToF method are used: direct ToF and indirect ToF.
  • The direct ToF method directly measures the time from irradiating the object with light to receiving the reflected light. Although the time can be measured simply, this method is susceptible to ambient light and has large errors in short-distance measurements.
  • The indirect ToF method emits light modulated into a repetitive waveform, such as a sine wave or pulse train, toward the object, and calculates the round-trip time of the light from the phase difference between the detected reflected light and the emitted light. It has the advantages of being able to measure relatively short distances and of removing the influence of ambient light in the course of the calculation.
  • The round-trip distance corresponding to the period T of the repeated waveform of the emitted light defines the distance measurement range, and measuring a round-trip distance that exceeds the period T causes an error. Lengthening the period of the emitted light widens the distance measurement range; however, it also lowers the sensitivity.
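  • The tradeoff above can be made concrete with a short calculation. The following Python sketch is illustrative only (the 100 ns period is an assumed value, not taken from this disclosure); it computes the unambiguous range and the one-way distance covered by one degree of phase lag, for a period T and for a period 4T:

        C = 299_792_458.0  # speed of light [m/s]

        def unambiguous_range(period_s: float) -> float:
            """Maximum one-way distance measurable without phase wrap-around."""
            return C * period_s / 2.0

        def distance_per_degree(period_s: float) -> float:
            """One-way distance corresponding to 1 degree of phase lag."""
            return unambiguous_range(period_s) / 360.0

        T = 100e-9  # assumed 100 ns modulation period
        print(unambiguous_range(T))        # ~15.0 m range
        print(distance_per_degree(T))      # ~0.042 m per degree: high sensitivity
        print(unambiguous_range(4 * T))    # ~60.0 m range with period 4T...
        print(distance_per_degree(4 * T))  # ...but ~0.167 m per degree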
  • To address this, a distance measuring device has been proposed that performs distance measurement by combining measurements using emitted light of different periods (see, for example, Patent Document 1).
  • In this distance measuring device, two distance measurements are performed with different periods for the light irradiating the object and for the exposure periods of the light receiving elements, and the distance to the object is calculated based on these measurement results.
  • The present disclosure proposes a ranging sensor and a ranging device that shorten the ranging process.
  • A distance measuring sensor of the present disclosure includes a light emission control section, a light receiving section, and a distance measuring section.
  • The light emission control unit causes the light emitting unit to emit light in the form of a pulse train that repeats a light emission period, during which the luminance changes, and a non-light emission period.
  • The light receiving unit receives the reflected light of the light reflected by the object.
  • The distance measuring unit measures the distance to the object based on the time from the emission of the light to the reception of the reflected light.
  • FIG. 1 is a diagram showing a configuration example of a distance measuring device according to the first embodiment of the present disclosure.
  • FIG. 2 is a diagram showing a configuration example of a distance measuring device according to an embodiment of the present disclosure.
  • FIG. 3 is a diagram showing a configuration example of an imaging element according to an embodiment of the present disclosure.
  • FIG. 4 is a diagram showing a configuration example of a pixel according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram showing an example of generation of a light reception signal by charge distribution according to an embodiment of the present disclosure.
  • FIG. 6 is a diagram showing an example of generation of a light reception signal according to a comparative example of the present disclosure.
  • FIGS. 7A and 7B are diagrams showing an example of lag phase detection according to an embodiment of the present disclosure.
  • FIG. 8 is a diagram showing an example of the relationship between the distance measurement range and sensitivity according to an embodiment of the present disclosure.
  • FIG. 9 is a diagram showing an example of generation of emitted light and light reception signals according to the first embodiment of the present disclosure.
  • FIGS. 10A to 10C are diagrams showing an example of lag phase detection according to the first embodiment of the present disclosure.
  • FIGS. 11A to 11C, 12A to 12C, and 13A to 13C are diagrams showing other examples of lag phase detection according to the first embodiment of the present disclosure.
  • FIG. 14 is a diagram showing an example of distance measurement processing according to the first embodiment of the present disclosure.
  • FIG. 15 is a diagram showing another example of distance measurement processing according to the first embodiment of the present disclosure.
  • FIG. 16 is a diagram showing an example of object tracking processing according to the first embodiment of the present disclosure.
  • FIG. 17 is a diagram showing an example of parameter setting processing according to the first embodiment of the present disclosure.
  • FIG. 18 is a diagram showing an example of generation of emitted light and light reception signals according to the second embodiment of the present disclosure.
  • FIGS. 19A to 19C are diagrams showing configuration examples of a light emitting unit according to an embodiment of the present disclosure.
  • FIG. 20 is a diagram showing a configuration example of a pixel according to an embodiment of the present disclosure.
  • FIG. 21 is a diagram showing a configuration example of a distance measuring device according to a modification of the embodiment of the present disclosure.
  • FIG. 1 is a diagram showing a configuration example of a distance measuring device according to the first embodiment of the present disclosure.
  • This figure is a block diagram showing a configuration example of the distance measuring device 1.
  • The distance measuring device 1 measures the distance to an object.
  • The distance measuring device 1 includes a sensor section 200, a light emitting section 60, and a CPU 50.
  • The figure shows an example of measuring the distance to the object 2.
  • The light emitting unit 60 emits the emitted light 3 toward the object 2.
  • The emitted light 3 is reflected by the object 2 to produce the reflected light 4.
  • The sensor section 200 receives this reflected light 4.
  • The sensor unit 200 measures the time from the emission of the emitted light 3 by the light emitting unit 60 to the reception of the reflected light 4, and calculates the distance to the object 2 from this time.
  • The light emitting unit 60 irradiates the object with the light used for distance measurement.
  • The light emitting unit 60 in the figure includes a light emitting element 62 and a light emitting element driving unit 61.
  • The light emitting element 62 is an element that emits light. The light emitting element 62 can be configured by, for example, a laser diode.
  • The light emitting element driving section 61 drives the light emitting element 62.
  • The light emitting element driving section 61 supplies the drive current required for the light emitting element 62 to emit light.
  • The light emitting element driving section 61 drives the light emitting element 62 under the control of the sensor section 200.
  • The sensor unit 200 and the light emitting element driving unit 61 can be connected by, for example, LVDS (Low Voltage Differential Signaling) signal lines.
  • The sensor section 200 includes an imaging element 210, a logic section 220, a pulse generator 230, a pixel driving section 240, an LVDS transmission section 250, and a PLL 260.
  • The imaging element 210 receives the reflected light 4 from the object 2. As will be described later, the imaging element 210 is provided with a plurality of pixels, each having a photoelectric conversion element that photoelectrically converts incident light. The reflected light 4 and the like are received by these pixels. The imaging element 210 generates a light reception signal based on the received reflected light 4 and outputs it to the logic section 220. The imaging element 210 can also capture an image of a subject including the object 2.
  • The logic section 220 controls the sensor section 200 as a whole.
  • The logic unit 220 controls a pulse generator 230 (described later) to generate drive signals for the imaging element 210 and the light emitting unit 60.
  • The logic unit 220 further calculates the distance to the object 2 based on the light reception signal output from the imaging element 210.
  • The pulse generator 230 generates pulse signals for driving the light emitting section 60 and the imaging element 210.
  • The pixel driving section 240 generates drive signals for the pixels of the imaging element 210 based on drive pulse signals from the pulse generator 230.
  • The pixel driving signal generated by the pixel driving section 240 is output to the imaging element 210.
  • The LVDS transmission unit 250 converts the drive pulse signal from the pulse generator 230 into an LVDS signal and transmits it.
  • The drive pulse signal transmitted by the LVDS transmission section 250 is received by an LVDS reception section built into the light emitting element driving section 61.
  • The PLL 260 generates a clock signal synchronized with an externally input clock signal and supplies it to the pulse generator 230.
  • The CPU (Central Processing Unit) 50 controls the sensor section 200.
  • Note that the sensor unit 200 and the light emitting element driving unit 61 can also be connected by signal lines other than LVDS.
  • FIG. 2 is a diagram showing a configuration example of a distance measuring device according to an embodiment of the present disclosure. This figure shows a configuration example in which the distance measuring device 1 is divided into functional blocks.
  • The distance measuring sensor 300 is composed of the sensor section 200 and the CPU 50 shown in FIG. 1.
  • This ranging sensor 300 includes a light emission control section 320, a light receiving section 310, and a ranging section 330.
  • The light emission control section 320 controls the emitted light of the light emitting section 60.
  • The ranging sensor 300 in the figure performs ranging by the indirect ToF method.
  • In the indirect ToF method, light modulated into a waveform of constant cycle, such as a sine wave or pulse train, that is, light whose luminance changes in a constant cycle, is emitted toward the object.
  • The light emission control unit 320 controls the light emitting unit 60 to emit light that changes in such a constant cycle.
  • Specifically, the light emission control section 320 controls the light emitting section 60 to output light in the form of a pulse train that repeats a light emission period and a non-light emission period of substantially equal length.
  • The light emission control unit 320 further controls the light emitting unit 60 to emit light whose luminance changes during the light emission period.
  • The light receiving section 310 receives the reflected light 4, which is the emitted light 3 reflected by the object 2.
  • The light receiving section 310 receives light in synchronization with the emitted light 3 from the light emitting section 60. Specifically, when a pulse train of light that repeats the above-described light emission period and non-light emission period is emitted, light is received in a period synchronized with the light emission period. For example, the light receiving section 310 receives light at the same timing as the emission period of the emitted light 3. In this case, the amount of reflected light 4 received by the light receiving section 310 decreases as the distance to the object 2 increases. Thereby, the phase difference of the reflected light 4 with respect to the emitted light 3 can be detected.
  • The light receiving section 310 can also receive light in synchronization with the emission period of the emitted light 3 while shifting the phase. For example, the light receiving section 310 can receive light in periods delayed by 0, 90, 180, and 270 degrees with respect to the light emission period of the emitted light 3, and generate four light reception signals. These light reception signals are output to the distance measuring section 330.
  • The distance measuring unit 330 calculates the time until the light is received from the ratio of the received light amounts of the reflected light 4, and measures the distance to the object 2. Further, the distance measuring unit 330 controls the light emission control unit 320 to adjust the period of the above-described pulse train and the luminance of the emitted light 3. The distance measuring unit 330 controls the cycle, light emission period, and emission timing of the emitted light 3, detects the phase delay of the reflected light 4 with respect to the emitted light 3 based on the light reception signals output from the light receiving unit 310, and thereby measures the distance to the object 2.
  • FIG. 3 is a diagram showing a configuration example of an imaging element according to an embodiment of the present disclosure. This figure is a block diagram showing a configuration example of the imaging element 210.
  • The imaging element 210 is a semiconductor device that generates image data of a subject.
  • The imaging element 210 also detects the reflected light (reflected light 4) from a subject (the object 2) and generates a light reception signal.
  • The imaging element 210 includes a pixel array section 10, a vertical driving section 20, a column signal processing section 30, and a control section 40.
  • The pixel array section 10 is configured by arranging a plurality of pixels 100.
  • The pixel array section 10 in the figure represents an example in which a plurality of pixels 100 are arranged in a two-dimensional matrix.
  • The pixel 100 includes a photoelectric conversion unit that photoelectrically converts incident light, and generates an image signal of the subject based on the incident light.
  • A photodiode, for example, can be used for this photoelectric conversion unit.
  • Signal lines 11 and 12 are wired to each pixel 100.
  • The pixels 100 generate image signals under the control of control signals transmitted by the signal lines 11, and output the generated image signals via the signal lines 12.
  • The signal line 11 is arranged for each row of the two-dimensional matrix and is commonly wired to the plurality of pixels 100 arranged in one row.
  • The signal line 12 is arranged for each column of the two-dimensional matrix and is commonly wired to the plurality of pixels 100 arranged in one column.
  • The vertical driving section 20 generates the control signals for the pixels 100 described above.
  • The vertical driving section 20 in the figure generates a control signal for each row of the two-dimensional matrix of the pixel array section 10 and sequentially outputs the control signal via the signal line 11.
  • The column signal processing section 30 processes the image signals generated by the pixels 100.
  • The column signal processing section 30 in the figure simultaneously processes the image signals from the plurality of pixels 100 arranged in one row of the pixel array section 10 and transmitted through the signal lines 12.
  • As this processing, for example, analog-to-digital conversion for converting the analog image signals generated by the pixels 100 into digital image signals, and correlated double sampling (CDS) for removing offset errors in the image signals, can be performed.
  • The processed image signals are output to circuits or the like outside the imaging element 210.
  • The control section 40 controls the vertical driving section 20 and the column signal processing section 30.
  • The control section 40 in the figure outputs control signals through signal lines 41 and 42 to control the vertical driving section 20 and the column signal processing section 30.
  • FIG. 4 is a diagram showing a configuration example of a pixel according to an embodiment of the present disclosure. This figure is a circuit diagram showing a configuration example of the pixel 100.
  • The pixel 100 in the figure includes photoelectric conversion units 101 and 102, voltage application units 103 and 104, charge holding units 105 and 106, and MOS transistors 111 to 118.
  • The signal lines 11 and 12 are wired to the pixel 100.
  • The signal lines 11 in the figure include a signal line Vb1, a signal line Vb2, a signal line RST, a signal line TRG, and a signal line SEL.
  • The signal lines 12 include a signal line Vo1 and a signal line Vo2.
  • The pixel 100 is also wired with a power supply line Vdd.
  • The power supply line Vdd is a wiring that supplies power to the pixel 100.
  • The photoelectric conversion unit 101 has its anode grounded and its cathode connected to the sources of the MOS transistors 111 and 112.
  • The drain of the MOS transistor 112 is connected to the gate of the MOS transistor 113 and one end of the charge holding unit 105. The other end of the charge holding unit 105 is grounded.
  • The drain of the MOS transistor 111 and the drain of the MOS transistor 113 are connected to the power supply line Vdd.
  • The source of the MOS transistor 113 is connected to the drain of the MOS transistor 114, and the source of the MOS transistor 114 is connected to the signal line Vo1.
  • The anode of the photoelectric conversion unit 102 is grounded, and its cathode is connected to the sources of the MOS transistors 115 and 116.
  • The drain of the MOS transistor 116 is connected to the gate of the MOS transistor 117 and one end of the charge holding unit 106. The other end of the charge holding unit 106 is grounded.
  • The drain of the MOS transistor 117 and the drain of the MOS transistor 115 are connected to the power supply line Vdd.
  • The source of the MOS transistor 117 is connected to the drain of the MOS transistor 118, and the source of the MOS transistor 118 is connected to the signal line Vo2.
  • The signal line Vb1 and the signal line Vb2 are connected to the voltage application units 103 and 104, respectively.
  • The signal line RST is connected to the gates of the MOS transistors 111 and 115.
  • The signal line TRG is connected to the gates of the MOS transistors 112 and 116.
  • The signal line SEL is connected to the gates of the MOS transistors 114 and 118.
  • The photoelectric conversion units 101 and 102 are elements that photoelectrically convert incident light.
  • The photoelectric conversion units 101 and 102 can be composed of photodiodes formed on a semiconductor substrate 120, which will be described later with reference to FIG. 20.
  • The photoelectric conversion units 101 and 102 are composed of a single semiconductor element having two cathode regions and a common anode region.
  • The voltage application units 103 and 104 apply voltages to the semiconductor substrate in the vicinity of the cathode regions of the photoelectric conversion units 101 and 102.
  • The voltage application units 103 and 104 are arranged near the cathode regions of the photoelectric conversion units 101 and 102, respectively.
  • By applying different voltages to the voltage application units 103 and 104, an electric field can be formed in the semiconductor substrate near the photoelectric conversion units 101 and 102. The charges generated by photoelectric conversion can be distributed to the photoelectric conversion units 101 and 102 by this electric field.
  • When a voltage is applied to the voltage application unit 103, the charges (electrons) generated by photoelectric conversion are distributed to the photoelectric conversion unit 101.
  • Similarly, when a voltage is applied to the voltage application unit 104, the charges (electrons) generated by photoelectric conversion are distributed to the photoelectric conversion unit 102.
  • The charge holding units 105 and 106 hold charges.
  • The charge holding units 105 and 106 hold the charges generated by photoelectric conversion in the photoelectric conversion units 101 and 102, respectively.
  • The charge holding units 105 and 106 can be configured by floating diffusion (FD) regions, which are semiconductor regions formed in the semiconductor substrate 120.
  • The MOS transistors 112 and 116 transfer the charges generated by photoelectric conversion in the photoelectric conversion units to the charge holding units.
  • The MOS transistor 112 transfers the charges generated by photoelectric conversion in the photoelectric conversion unit 101 to the charge holding unit 105.
  • The MOS transistor 116 transfers the charges generated by photoelectric conversion in the photoelectric conversion unit 102 to the charge holding unit 106. This transfer is performed by establishing conduction between the photoelectric conversion unit and the charge holding unit. Control signals for the MOS transistors 112 and 116 are transmitted through the signal line TRG.
  • The MOS transistors 111 and 115 reset the photoelectric conversion units and the charge holding units.
  • The MOS transistor 111 resets the photoelectric conversion unit 101 and the charge holding unit 105.
  • The MOS transistor 115 resets the photoelectric conversion unit 102 and the charge holding unit 106. This reset is performed by establishing conduction between the charge holding unit and the power supply line Vdd to discharge the charge in the charge holding unit. Control signals for the MOS transistors 111 and 115 are transmitted through the signal line RST.
  • The MOS transistors 113 and 117 generate image signals based on the charges held in the charge holding units.
  • The MOS transistor 113 generates an image signal based on the charges held in the charge holding unit 105.
  • The MOS transistor 117 generates an image signal based on the charges held in the charge holding unit 106.
  • The MOS transistor 114 outputs the image signal generated by the MOS transistor 113 to the signal line Vo1.
  • The MOS transistor 118 outputs the image signal generated by the MOS transistor 117 to the signal line Vo2. Control signals for the MOS transistors 114 and 118 are transmitted through the signal line SEL.
  • An image signal based on the photoelectric conversion of the photoelectric conversion unit 101 can be generated by the MOS transistors 111 to 114 and the charge holding unit 105. Likewise, an image signal based on the photoelectric conversion of the photoelectric conversion unit 102 can be generated by the MOS transistors 115 to 118 and the charge holding unit 106. These image signals are used for imaging a subject.
  • When generating light reception signals, voltages are applied to the voltage application units 103 and 104 to distribute the charges generated by the photoelectric conversion of the photoelectric conversion units 101 and 102, and the distributed charges are held in the charge holding units 105 and 106, respectively.
  • FIG. 5 is a diagram showing an example of generation of a light reception signal by charge distribution according to an embodiment of the present disclosure. This figure is a timing chart showing an example of generation of a light reception signal in the pixel 100. In the figure, "emitted light" and "reflected light" represent the luminance waveforms of the emitted light 3 and reflected light 4 described in FIG. 1. The dashed lines in these waveforms represent the zero level of luminance.
  • Vb1 and Vb2 represent the voltages that are transmitted by the signal line Vb1 and the signal line Vb2 and applied to the voltage application units 103 and 104, respectively.
  • SEL represents the control signal on the signal line SEL.
  • These signals are represented by binarized waveforms, where a value of "1" represents a state in which a high voltage is applied, and a value of "0" represents 0 V.
  • Vo1 and Vo2 represent the signals on the signal lines Vo1 and Vo2, respectively.
  • The accumulation period in the figure is a period for accumulating the charges generated by the photoelectric conversion units 101 and 102 in the charge holding units 105 and 106; the light reception signal generation period is a period in which a light reception signal is generated based on the accumulated charges.
  • The light emitting unit 60 emits the emitted light 3 as a pulse train in which the light emission period and the non-light emission period are repeated with a cycle T. The reflected light 4 is then incident on the light receiving section 310.
  • This reflected light 4 has a waveform whose phase is shifted by φ from that of the emitted light 3.
  • This φ is the phase difference corresponding to the time from the emission of the emitted light 3 to the reception of the reflected light 4.
  • During the accumulation period, voltages are applied to the voltage application units 103 and 104 through the signal lines Vb1 and Vb2 in synchronization with the emitted light 3.
  • Specifically, voltages are alternately applied to the voltage application units 103 and 104, and the charges are distributed to the photoelectric conversion units 101 and 102.
  • The distributed charges are accumulated in the charge holding units 105 and 106, respectively.
  • This accumulation period can be, for example, 4 ms.
  • In the light reception signal generation period, a signal is output to the signal line SEL and applied to the gates of the MOS transistors 114 and 118.
  • As a result, the MOS transistors 114 and 118 become conductive, and light reception signals corresponding to the charges accumulated and held in the charge holding units 105 and 106 are output.
  • A light reception signal A corresponding to the charge held in the charge holding unit 105 and a light reception signal B corresponding to the charge held in the charge holding unit 106 are output to the signal lines Vo1 and Vo2, respectively. In this way, the charges are distributed, and the light reception signals A and B are generated based on the distributed charges.
  • The light reception signal corresponding to the charge holding unit 105 and the light reception signal corresponding to the charge holding unit 106 are hereinafter referred to as the light reception signal of tap A and the light reception signal of tap B, respectively.
  • Such timing control between the emitted light 3 and the light reception by the light receiving section 310 is performed by the distance measuring section 330 described with reference to FIG. 2.
  • The distance measuring section 330 creates distance measurement data for each pixel 100 from the light reception signals of tap A and tap B generated by the plurality of pixels 100.
  • An image is generated from the plurality of distance measurement data of the pixels 100.
  • This image represents the shape, in the depth direction, of a subject including the object. Such an image is called a depth map.
  • The three-dimensional shape of the subject can be detected from this depth map.
  • FIG. 6 is a diagram showing an example of generation of a light reception signal according to a comparative example of the present disclosure. This figure is a timing chart showing an example of generation of the light reception signal in the indirect ToF method for the pixel 100.
  • The distance measuring unit 330 distributes the charges from the reflected light 4 at four different phases and controls the generation of the light reception signals.
  • The emitted light 3 in the figure has a cycle T with substantially equal light emission and non-light emission periods.
  • The emitted light 3 in the figure is a rectangular wave with constant luminance during the emission period. This rectangular wave is hereinafter referred to as pulsed light 401.
  • Emitted light composed of the pulsed light 401 is hereinafter referred to as single-luminance emitted light.
  • Reflected light 4 including pulsed light 402 of constant luminance is incident on the light receiving section 310 in accordance with the emitted light 3.
  • In synchronization with the emitted light 3, the light reception signals A and B are generated by distributing charges at four phases of 0, 90, 180, and 270 degrees.
  • "0 degrees", "90 degrees", "180 degrees", and "270 degrees" in the figure represent the waveforms when charges are distributed at the four delay phases of 0 degrees, 90 degrees, 180 degrees, and 270 degrees, respectively.
  • In each of these waveform pairs, the upper waveform represents tap A and the lower waveform represents tap B.
  • The hatched areas at "0 degrees", "90 degrees", "180 degrees", and "270 degrees" in the figure represent the charge accumulation in the charge holding units (charge holding units 105 and 106) of the respective taps.
  • The case of a phase delay of 0 degrees will be described as an example.
  • At tap A, charge is accumulated in the charge holding unit 105 during the period in which the pulsed light 402 of the reflected light 4 overlaps the portion of the guide signal having a value of "1".
  • A0 in the figure represents the accumulated charge A0 in the charge holding unit 105 on the tap A side.
  • Similarly, at tap B, charge is accumulated in the charge holding unit 106 during the period in which the pulsed light 402 of the reflected light 4 overlaps the portion of the guide signal having a value of "1".
  • B0 in the figure represents the accumulated charge B0 in the charge holding unit 106 on the tap B side.
  • Such charge accumulation at tap A and tap B is repeated throughout the accumulation period.
  • A light reception signal A0 at tap A and a light reception signal B0 at tap B at 0 degrees are generated based on these accumulated charges A0 and B0. Similarly, a light reception signal A90 at tap A and a light reception signal B90 at tap B at 90 degrees, a light reception signal A180 at tap A and a light reception signal B180 at tap B at 180 degrees, and a light reception signal A270 at tap A and a light reception signal B270 at tap B at 270 degrees are generated. The phase difference φ between the emitted light 3 and the reflected light 4 is calculated from these eight signals; a small simulation sketch follows below.
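  • The charge distribution described above amounts to correlating the delayed reflected waveform with complementary guide signals. The following Python sketch simulates the accumulation at tap A and tap B over one period (the sampling grid and function names are assumptions for illustration, not part of this disclosure):

        def tap_signals(waveform: list[float], delay: int, guide_phase: int):
            """waveform: one period of emitted luminance samples;
            delay: reflected-light delay in samples;
            guide_phase: delay phase of the guide signal in samples.
            Returns the (tap A, tap B) charges accumulated over one period."""
            n = len(waveform)
            reflected = [waveform[(k - delay) % n] for k in range(n)]
            # Tap A integrates while the guide signal is "1" (first half of the
            # shifted period); tap B integrates during the complementary half.
            tap_a = sum(reflected[k] for k in range(n)
                        if (k - guide_phase) % n < n // 2)
            tap_b = sum(reflected[k] for k in range(n)
                        if (k - guide_phase) % n >= n // 2)
            return tap_a, tap_b

        # Single-luminance emitted light: emission during the first half period.
        pulse = [1.0] * 50 + [0.0] * 50
        print(tap_signals(pulse, delay=10, guide_phase=0))   # (A0, B0)
        print(tap_signals(pulse, delay=10, guide_phase=25))  # (A90, B90); 25/100 = 90 degrees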
  • Signal0 to Signal3 are calculated from the above tap signals as follows:
  • Signal0 = A0 − B0
  • Signal1 = A90 − B90
  • Signal2 = A180 − B180
  • Signal3 = A270 − B270
  • I = (Signal0 − Signal2)/2
  • Q = (Signal1 − Signal3)/2
  • I is a signal corresponding to the component of the reflected light 4 that is in phase with the emitted light 3.
  • Q is a signal corresponding to the component of the reflected light 4 orthogonal to the emitted light 3.
  • The distance d to the object 2 can be calculated by the following equation:
  • d = c × T × arctan(Q/I)/(4π)   ... Formula (1), where c represents the speed of light.
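  • This calculation can be restated compactly in Python. The sketch below is minimal; math.atan2 is used instead of arctan(Q/I) so that all four quadrants of the lag phase are resolved, which Formula (1) leaves implicit:

        import math

        C = 299_792_458.0  # speed of light [m/s]

        def distance_from_taps(A0, B0, A90, B90, A180, B180, A270, B270, T):
            signal0 = A0 - B0
            signal1 = A90 - B90
            signal2 = A180 - B180
            signal3 = A270 - B270
            i = (signal0 - signal2) / 2.0             # in-phase component I
            q = (signal1 - signal3) / 2.0             # quadrature component Q
            phi = math.atan2(q, i) % (2.0 * math.pi)  # lag phase in [0, 2*pi)
            return C * T * phi / (4.0 * math.pi)      # Formula (1)

    The eight tap values produced by the simulation sketch above can be fed directly into this function.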
  • FIGS. 7A and 7B are diagrams showing an example of lag phase detection according to an embodiment of the present disclosure.
  • FIG. 7A is a diagram showing the relationship between I and Q described above and the lag phase φ.
  • The solid line graph in FIG. 7A represents I, and the dotted line graph represents Q.
  • I and Q have triangular waveforms, and Q lags I by 90 degrees.
  • FIG. 7B shows a graph in which I and Q correspond to the x-axis and y-axis, respectively.
  • The positive x-axis of FIG. 7B represents the direction of the emitted light 3, and the dotted arrow represents the reflected light.
  • The reflected light 4 has a phase lag of φ with respect to the emitted light 3.
  • As described above, the distance measurement range in the indirect ToF method is limited by the period T of the emitted light 3.
  • If the period T of the emitted light 3 is lengthened, the sensitivity of the distance measurement is lowered. This situation will be described with reference to FIG. 8.
  • FIG. 8 is a diagram showing an example of the relationship between the distance measurement range and sensitivity according to an embodiment of the present disclosure.
  • The horizontal axis in the figure represents the actual distance to the object 2, and the vertical axis represents the measured phase difference.
  • The solid line graph 421 in the figure represents the case of a relatively short period (T).
  • The dashed line graph 422 in the figure represents the case of a relatively long period (4T).
  • The measured value returns to 0 when the actual distance reaches the distance corresponding to the period of the emitted light 3, that is, the distance for which the round trip of the light takes one period.
  • Graph 421, with its relatively short period, has a large change (slope) of the measured value with respect to the actual distance, and therefore high sensitivity.
  • However, graph 421 has a narrow measurement range.
  • Conversely, the relatively long-period graph 422 has low sensitivity but a wide measurement range. By combining distance measurement using the short-period emitted light 3 with distance measurement using the long-period emitted light 3, the distance to a relatively distant object 2 can be calculated while maintaining high accuracy. However, in this case, two distance measurements are required; a sketch of one such combination follows below.
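  • This disclosure does not detail how the two measurements are combined; one common scheme, shown below as an illustrative sketch (not necessarily the method of Patent Document 1), uses the coarse long-period result to select the wrap count of the fine short-period result:

        C = 299_792_458.0  # speed of light [m/s]

        def combine(d_fine: float, d_coarse: float, T: float) -> float:
            """d_fine: short-period (T) result, ambiguous modulo C*T/2;
            d_coarse: long-period (4T) result, unambiguous but low sensitivity."""
            r = C * T / 2.0                     # unambiguous range of period T
            k = round((d_coarse - d_fine) / r)  # number of wrap-arounds
            return d_fine + k * r               # fine distance, disambiguated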
  • FIG. 9 is a diagram showing an example of generation of emitted light and light reception signals according to the first embodiment of the present disclosure. Like FIG. 6, this figure is a timing chart showing an example of generation of the light reception signal in the indirect ToF method for the pixel 100.
  • The emitted light 3 in the figure differs from the emitted light 3 in FIG. 6 in that the luminance changes during the light emission period.
  • The emitted light 3 in the figure has a period T with substantially equal light emission and non-light emission periods.
  • The light emission period of the emitted light 3 in the figure includes a high-luminance light emission period 433 and a low-luminance light emission period 434.
  • The emitted light 3 in the figure represents an example in which the high-luminance light emission period 433 is arranged at the center of the light emission period. The high-luminance light emission period 433 can also be regarded as consisting of high-luminance pulsed light 431, and the low-luminance light emission period 434 as consisting of low-luminance pulsed light 432.
  • Such emitted light 3 is referred to as two-level luminance emitted light; a waveform sketch follows below.
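  • One period of this waveform can be sketched in Python as follows (the 1.0:0.25 luminance ratio matches the example of FIG. 10A described later; the sample count and the duty of the high-luminance period are assumptions for illustration):

        def two_level_period(n: int = 100, high: float = 1.0, low: float = 0.25,
                             high_fraction: float = 0.5) -> list[float]:
            """One period: the first half is the light emission period with the
            high-luminance part centered; the second half is non-emission."""
            emit = n // 2                   # emission and non-emission are equal
            h = int(emit * high_fraction)   # samples at high luminance
            pad = (emit - h) // 2           # low-luminance samples on each side
            emission = [low] * pad + [high] * h + [low] * (emit - h - pad)
            return emission + [0.0] * (n - emit)

    Feeding this list to the tap_signals sketch above reproduces the tap waveforms of FIG. 9.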
  • Reflected light (1) and reflected light (2) in the figure represent examples in which the distances to the object 2 differ.
  • For reflected light (1) and reflected light (2), the cases of phase delays of 0 degrees and 90 degrees are shown, respectively.
  • In the case of reflected light (1), the high-luminance light emission period of the reflected light 4, which corresponds to the high-luminance light emission period 433 of the emitted light 3, does not overlap the timing at which the guide signal is switched.
  • In this case, the sensitivity is relatively low, as described later.
  • In the case of reflected light (2), the high-luminance light emission period of the reflected light 4 corresponding to the high-luminance light emission period 433 of the emitted light 3 overlaps the time at which the guide signal is switched.
  • That is, the high-luminance light emission period of the reflected light 4 corresponding to the high-luminance light emission period 433 overlaps the switching timing of the guide signal, so the high-luminance light emission period of the reflected light 4 is divided and distributed between tap A and tap B.
  • As a result, the change in the light reception signals of taps A and B with respect to a change in the delay phase of the reflected light 4 becomes large, and the sensitivity increases.
  • FIG. 10A is a diagram showing the relationship between the high-luminance light emission period 433 and the low-luminance light emission period 434 of the emitted light 3 described above.
  • The emitted light 3 in the figure represents an example in which the luminance in the high-luminance light emission period 433 and the luminance in the low-luminance light emission period 434 are set to a ratio of 1.0:0.25.
  • FIG. 10B, like FIG. 7B, is a graph plotting the points corresponding to I and Q, with I and Q on the x-axis and y-axis, respectively.
  • The graph in the figure has a regular dodecagon shape.
  • The regions with wide point spacing are the regions where the change with respect to the phase delay is large; these become the high-sensitivity regions 459.
  • The high-sensitivity regions 459 appear in the vicinity of phase lags of 0 degrees, 90 degrees, 180 degrees, and 270 degrees. These are the regions of phase delay in which the high-luminance light emission period of the reflected light 4 overlaps the switching timing of the guide signal.
  • FIG. 10C is a diagram showing the relationship between the distance to the object 2 and the detected lag phase.
  • The regions where the slope of the graph is large correspond to the high-sensitivity regions 459.
  • In the figure, a high-sensitivity region 459 lies in the vicinity of a phase delay of 90 degrees.
  • FIGS. 11A, 11B, and 11C are diagrams showing other examples of lag phase detection according to the first embodiment of the present disclosure.
  • FIGS. 11A, 11B, and 11C show examples in which the high-luminance light emission period 433 is arranged at the beginning of the light emission period.
  • In this case, the high-sensitivity regions 459 appear in the vicinity of phase delays of 45 degrees, 135 degrees, 225 degrees, and 315 degrees. These are the phase-delay regions in which the high-luminance light emission period of the reflected light 4 overlaps the guide signal switching timing.
  • FIG. 11C is a diagram showing the relationship between the distance to the object 2 and the detected lag phase.
  • The high-sensitivity region 459 in the figure lies in the vicinity of a phase delay of 45 degrees.
  • FIGS. 12A, 12B, and 12C are diagrams showing other examples of lag phase detection according to the first embodiment of the present disclosure.
  • FIGS. 12A, 12B, and 12C show examples in which the high-luminance light emission period 433 is arranged at the end of the light emission period.
  • In this case as well, the high-sensitivity regions 459 appear in the vicinity of phase delays of 45 degrees, 135 degrees, 225 degrees, and 315 degrees.
  • FIG. 12C is a diagram showing the relationship between the distance to the object 2 and the detected lag phase. In this figure as well, the high-sensitivity region 459 lies in the vicinity of a phase delay of 45 degrees.
  • FIGS. 13A, 13B, and 13C are diagrams showing other examples of lag phase detection according to the first embodiment of the present disclosure.
  • FIGS. 13A, 13B, and 13C show examples in which the luminance difference between the high-luminance light emission period 433 and the low-luminance light emission period 434 of the emitted light 3 is changed with respect to FIG. 11A.
  • For the emitted light 3 in FIG. 13A, an example is shown in which the luminance in the high-luminance light emission period 433 and the luminance in the low-luminance light emission period 434 are set to a ratio of 1.0:0.4.
  • The emitted light 3 of FIG. 13A can reduce the difference in sensitivity between the high-sensitivity region 459 and the other regions compared to the cases of FIGS. 11B and 11C.
  • In this way, the position of the high-luminance light emission period 433 can be adjusted according to the position of the object 2, thereby changing the position of the high-sensitivity region 459. This makes it possible to detect the distance to an object 2 at a relatively distant position with high sensitivity. Furthermore, distance measurement remains possible even in regions outside the high-sensitivity region 459, so the distance can be measured even when the object moves in the depth direction. In addition, by adjusting the difference in luminance between the high-luminance light emission period 433 and the low-luminance light emission period 434, the change in sensitivity between the high-sensitivity region 459 and the other regions can be adjusted.
  • The adjustment of the position of the high-luminance light emission period 433 also includes adjusting the length of the high-luminance light emission period 433 relative to the light emission period. That is, the position of the high-sensitivity region 459 can also be changed by adjusting the ratio of the high-luminance light emission period 433 to the low-luminance light emission period 434. For example, the length of the high-luminance light emission period 433 can be adjusted according to the shape of the object.
  • In the examples above, the position of the high-luminance light emission period 433 in the emitted light 3 was adjusted, but it is also possible to set a temporal offset in the guide signal used for light reception. In this case, the position of the high-sensitivity region 459 can be adjusted by adjusting this offset, as sketched below.
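  • As a rough illustration of this offset adjustment (an illustrative construction, not a formula given in this disclosure): compute the expected lag phase of a target object and shift the guide signal so that this lag lands in a high-sensitivity region, for example the 90-degree region of FIG. 10C:

        C = 299_792_458.0  # speed of light [m/s]

        def guide_offset_seconds(target_distance: float, T: float,
                                 sensitive_phase_deg: float = 90.0) -> float:
            """Temporal offset for the guide signal that places the object's
            lag phase at sensitive_phase_deg (assumed high-sensitivity phase)."""
            lag = 2.0 * target_distance / C             # round-trip time [s]
            lag_phase = (lag / T) * 360.0 % 360.0       # lag phase [deg]
            shift_deg = (lag_phase - sensitive_phase_deg) % 360.0
            return shift_deg / 360.0 * T                # offset [s]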
  • FIG. 14 is a diagram showing an example of distance measurement processing according to the first embodiment of the present disclosure. This figure is a flowchart showing the procedure of distance measurement in the distance measuring device 1. In the processing of the figure, normal distance measurement is performed first. When an operation designating an object is received from the user based on the result, the processing switches to distance measurement in which the high-sensitivity region is aligned with the designated region.
  • First, the period of the emitted light is set according to the distance measurement range (step S101).
  • Next, single-luminance emitted light with the set period T is emitted (step S102).
  • Next, distance measurement is performed (step S103).
  • Next, a depth map is generated (step S104).
  • Next, it is determined whether or not an object has been designated (step S105). For example, the user can refer to the depth map to designate the object. As a result of this determination, if no object is designated (step S105, No), the processing from step S103 is executed again.
  • On the other hand, if an object is designated (step S105, Yes), the shape of the object is recognized (step S106). Next, the parameters of the two-level luminance emitted light are set (step S107). Next, the two-level luminance emitted light is emitted (step S108). Next, distance measurement is performed (step S109). Next, a depth map is generated (step S110). After that, the processing returns to step S105.
  • The depth map generated in step S110 can be used by application software. A control-flow sketch of these steps follows below.
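  • The control flow of steps S101 to S110 can be sketched as follows (the sensor and user objects and their method names are placeholders, not APIs defined in this disclosure):

        def ranging_loop(sensor, user):
            T = sensor.set_period_for_range()             # S101
            sensor.emit_single_luminance(T)               # S102
            obj = None
            while True:
                sensor.measure()                          # S103 / S109
                depth_map = sensor.generate_depth_map()   # S104 / S110
                if obj is None:
                    obj = user.designated_object(depth_map)   # S105
                    if obj is None:
                        continue                          # S105 No: back to S103
                    sensor.recognize_shape(obj)           # S106
                    sensor.set_two_level_parameters(obj)  # S107
                    sensor.emit_two_level_luminance(T)    # S108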
  • FIG. 15 is a diagram showing another example of distance measurement processing according to the first embodiment of the present disclosure. Like FIG. 14, this figure is a flowchart showing the procedure of distance measurement in the distance measuring device 1. It differs from the processing in FIG. 14 in that the object 2 is tracked automatically.
  • First, the period of the emitted light is set according to the distance measurement range (step S121).
  • For example, the distance measurement range can be set to 7 m.
  • Next, single-luminance emitted light with the set period T is emitted (step S122).
  • Next, distance measurement is performed (step S123).
  • Next, a depth map is generated (step S124).
  • Next, it is determined whether or not an object has been detected within the distance measurement range (step S125). If no object is detected within the distance measurement range (step S125, No), the processing from step S123 is executed again.
  • On the other hand, if an object is detected within the distance measurement range (step S125, Yes), the object is compared with the human characteristics specified by the application (step S126). As a result of the determination, if the object is not determined to be a person (step S127, No), the processing from step S123 is executed again. On the other hand, if the object is determined to be a person (step S127, Yes), the processing proceeds to object tracking S130.
  • Note that the processing in FIG. 15 is not limited to this example. For example, it is possible to detect and track objects other than humans, such as animals.
  • FIG. 16 is a diagram showing an example of object tracking processing according to the first embodiment of the present disclosure. This figure shows the processing of object tracking S130 in FIG. 15.
  • First, the luminance difference and the duty of the high-luminance light emission period are set according to the object (step S131). This can be done by setting, as initial values, a luminance difference and a duty for which the high-sensitivity region 459 is 10%, taken from a table stored in advance.
  • Next, the position of the high-luminance pulse is set (step S132). This can be done, for example, by setting the position of the high-luminance pulsed light 431 or the position of the high-luminance light emission period 433 described in FIG. 9. It can also be done by adjusting the temporal offset of the guide signal during light reception.
  • Next, the two-level luminance emitted light is emitted (step S133).
  • Next, distance measurement is performed (step S134).
  • Next, a depth map is generated (step S135).
  • Next, the result is compared with a past image of the object (step S136).
  • Next, it is determined whether or not the object has moved in the depth direction (step S137). If the object has moved in the depth direction (step S137, Yes), the processing returns to step S132.
  • If the object has not moved in the depth direction (step S137, No), it is determined whether or not the shape of the object has changed (step S138). If the shape of the object has not changed (step S138, No), the processing returns to step S134. If the shape of the object has changed (step S138, Yes), the processing returns to step S131.
  • By these procedures, sketched below, the high-sensitivity region 459 can be kept at the position of the object.
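  • The tracking loop of steps S131 to S138 can be sketched as follows (method names are placeholders, not APIs defined in this disclosure):

        def track_object(sensor, obj):
            sensor.set_luminance_difference_and_duty(obj)       # S131
            while True:
                sensor.set_high_luminance_pulse_position(obj)   # S132
                sensor.emit_two_level_luminance()               # S133
                while True:
                    sensor.measure()                            # S134
                    depth_map = sensor.generate_depth_map()     # S135
                    moved, reshaped = sensor.compare_with_past(obj, depth_map)  # S136
                    if moved:                                   # S137 Yes: re-aim the pulse
                        break                                   # back to S132
                    if reshaped:                                # S138 Yes: reset parameters
                        sensor.set_luminance_difference_and_duty(obj)  # S131
                        break                                   # then S132 again
                    # S138 No: continue measuring from S134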
  • FIG. 17 is a diagram showing an example of parameter setting processing according to the first embodiment of the present disclosure.
  • First, the image is decomposed into blocks for each distance (step S141). This is a process performed on an image containing the object.
  • Next, the distance measurement information of the block containing the object is acquired (step S142).
  • Next, the parameters of the two-level luminance emitted light that place the vicinity of the object in the high-sensitivity region are calculated (step S143).
  • In this way, by performing distance measurement using emitted light whose luminance changes during the light emission period, the distance measuring device 1 can place a high-sensitivity region within the distance measurement range. This improves the distance measurement sensitivity for an object at a relatively long distance, and makes it possible to simplify the distance measurement process for such an object.
  • The distance measuring device 1 of the first embodiment described above performs light reception with lag phases spaced every 90 degrees.
  • The distance measuring device 1 of the second embodiment of the present disclosure differs from the first embodiment described above in that the delay phases are changed.
  • FIG. 18 is a diagram showing an example of generation of emitted light and light reception signals according to the second embodiment of the present disclosure. Like FIG. 9, this figure is a timing chart showing an example of generation of the light reception signal in the indirect ToF method for the pixel 100.
  • The light reception processing in the figure differs from that in FIG. 9 in that light reception is performed with the delay phase stepped every 60 degrees.
  • "60 degrees" and "120 degrees" in the figure represent the waveforms when charges are distributed at delay phases of 60 degrees and 120 degrees, respectively.
  • The rest of the configuration of the distance measuring device 1 is the same as that of the distance measuring device 1 according to the first embodiment of the present disclosure, so its description is omitted.
  • In this way, the distance measuring device 1 can perform distance measurement processing by light reception processing delayed every 60 degrees.
  • As a result, the region that becomes the high-sensitivity region 459 can be widened. A generic phase estimator for such equally spaced receptions is sketched below.
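  • For equally spaced lag phases, the phase calculation generalizes to the standard estimator below (an illustrative sketch; this disclosure does not spell out the formula for the 60-degree case). With differential tap signals S_k taken at phase offsets of k*360/N degrees, the lag phase is the argument of the fundamental DFT bin:

        import math

        def lag_phase(samples: list[float]) -> float:
            """samples[k]: differential tap signal at a delay of k*360/N degrees."""
            n = len(samples)
            i = sum(s * math.cos(2.0 * math.pi * k / n)
                    for k, s in enumerate(samples))    # in-phase sum
            q = sum(s * math.sin(2.0 * math.pi * k / n)
                    for k, s in enumerate(samples))    # quadrature sum
            return math.atan2(q, i) % (2.0 * math.pi)  # lag phase [rad]

        # With 60-degree steps, N = 6: delays of 0, 60, 120, 180, 240, 300 degrees.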
  • The light emitting unit 60 in FIG. 19A includes light emitting element driving units 61 and 63 and light emitting elements 62 and 64.
  • The light emitting element 64 is an element that emits light with higher luminance than the light emitting element 62.
  • A light emission pulse signal corresponding to the low-luminance light emission period 434 is input to the light emitting element driving unit 61, which drives the light emitting element 62.
  • A light emission timing signal corresponding to the high-luminance light emission period 433 is input to the light emitting element driving unit 63, which drives the light emitting element 64.
  • In this way, emitted light 3 with different emission luminances can be output.
  • In the light emitting unit 60 in FIG. 19B, a light emission luminance signal is input to the light emitting element driving unit 61 in addition to the light emission timing signal.
  • The light emitting element driving unit 61 in the figure controls the current of the laser diode constituting the light emitting element 62 in accordance with the light emission luminance signal. As a result, emitted light 3 with different emission luminances can be output.
  • In the light emitting unit 60 in FIG. 19C as well, the light emission timing signal and the light emission luminance signal are input to the light emitting element driving unit 61.
  • As the light emission luminance signal, signals having different luminances and pulse widths are used and alternately input, as shown in the figure.
  • As a result, the light emitting element 62 can be driven to alternately emit light of different luminances. Even when such light of different luminances is emitted from the light emitting unit 60, the accumulation of charge corresponding to the light in the imaging element 210 yields an effect equivalent to that of the emitted light in FIG. 9.
  • In this configuration, the waveform of the light emission luminance signal can be simplified compared with the light emitting element driving unit 61 of FIG. 19B, and high-luminance and low-luminance light emission driving can be performed easily even at high drive frequencies. Note that the light emission timing signal and the like in the figure are shown to explain the operation of the light emitting element driving unit 61 in the figure.
  • FIG. 20 is a diagram showing a configuration example of a pixel according to the embodiment of the present disclosure. This figure is a cross-sectional view showing a configuration example of the pixel 100 in the image sensor 210 . A pixel 100 shown in FIG.
  • The semiconductor substrate 120 is a semiconductor substrate on which elements such as the photoelectric conversion units 101 and 102 are arranged.
  • The semiconductor substrate 120 in the figure is configured as a p-type well region. By arranging n-type semiconductor regions in this well region, photodiodes corresponding to the photoelectric conversion units 101 and 102 can be formed. The p-type well region corresponds to a common anode region, and the n-type semiconductor regions 121 and 122 correspond to the cathode regions of the photoelectric conversion units 101 and 102, respectively.
  • A p-type semiconductor region 123 is arranged near the n-type semiconductor region 121, and a p-type semiconductor region 124 is arranged near the n-type semiconductor region 122.
  • These regions constitute the voltage application sections 103 and 104. By applying different voltages to the voltage application sections 103 and 104, the charge 501 can be distributed.
  • The figure shows an example in which 1.5 V and 0 V are applied to the voltage application sections 103 and 104, respectively.
  • In this case, an electric field is formed in the semiconductor substrate 120 in the direction from the voltage application section 103 to the voltage application section 104. Since the charges 501 in the figure are electrons, they move in the direction opposite to this electric field. As shown in the figure, the charge 501 moves toward the voltage application section 103 and reaches the n-type semiconductor region 121. In this way, the charge 501 can be distributed between the photoelectric conversion units 101 and 102; a toy model follows below.
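A toy model of this charge distribution, offered only as an illustration: photo-generated electrons are collected mostly by the side whose voltage application section sits at the higher potential, with a demodulation-contrast parameter modeling the fraction collected by the other side. The function name and the 0.9 contrast value are hypothetical.

```python
def distribute_charge(photo_electrons, v_section_103=1.5, v_section_104=0.0,
                      contrast=0.9):
    """Split photo-generated electrons between the two sides: electrons
    drift against the electric field, i.e. toward the voltage application
    section at the higher potential; contrast < 1 models the fraction
    that is collected by the other side anyway."""
    sign = 1.0 if v_section_103 > v_section_104 else -1.0
    to_side_101 = photo_electrons * 0.5 * (1.0 + contrast * sign)
    return to_side_101, photo_electrons - to_side_101

# With 1.5 V and 0 V as in the figure, most charge reaches region 121:
print(distribute_charge(1000.0))  # (950.0, 50.0)
```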
  • Note that the configuration of the pixel 100 is not limited to this example.
  • For example, a pixel 100 configured such that the MOS transistors 112 and 116 are commonly connected to a single photoelectric conversion unit (the photoelectric conversion unit 101) can also be applied.
  • In this case, the MOS transistors 112 and 116 transfer the charge alternately, so the distribution can still be performed; a minimal sketch follows below.
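A minimal sketch of this alternating-transfer variant, under the assumption that whatever charge is generated while a given transfer gate is open ends up in that gate's floating diffusion (the function and signal names are invented for illustration):

```python
import numpy as np

def alternate_transfer(photocurrent, gate_112_open):
    """Accumulate charge from one photoelectric conversion unit into two
    floating diffusions by alternately opening the transfer gates."""
    photocurrent = np.asarray(photocurrent, dtype=float)
    gate_112_open = np.asarray(gate_112_open, dtype=bool)
    fd_a = photocurrent[gate_112_open].sum()   # transferred via transistor 112
    fd_b = photocurrent[~gate_112_open].sum()  # transferred via transistor 116
    return fd_a, fd_b

# Example: square-wave return light exactly in phase with gate 112.
t = np.arange(200)
light = ((t % 100) < 50).astype(float)
print(alternate_transfer(light, (t % 100) < 50))  # all charge lands in fd_a
```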
  • FIG. 21 is a diagram showing a configuration example of a distance measuring device according to a modification of the embodiment of the present disclosure. This figure is a cross-sectional view showing a configuration example of the distance measuring device 1. The sensor unit 200 in this figure differs from the sensor unit 200 in FIG. 1 in that it includes the CPU 50.
  • Note that the present technology can also take the following configurations.
  • (1) A distance measuring sensor comprising: a light emission control unit that causes a light emitting unit to emit a pulse train of light that repeats a light emission period, during which the luminance changes, and a non-light emission period; a light receiving unit that receives reflected light of the light reflected by an object; and a distance measuring unit that measures the distance to the object based on the time from emission of the light to reception of the reflected light.
  • (2) The distance measuring sensor according to (1), wherein the light emission control unit causes the light emitting unit to emit the light having the light emission period including a high luminance light emission period and a low luminance light emission period.
  • The distance measuring sensor according to (2), wherein the light emission control unit causes light to be emitted with luminances corresponding to the high luminance light emission period and the low luminance light emission period in different light emission periods.
  • The distance measuring sensor according to (8), wherein the light receiving unit generates a plurality of light reception signals based on light receiving periods with different phases, and the distance measuring unit measures the distance to the object based on the plurality of light reception signals.
  • (10) A distance measuring device comprising: a light emitting unit; a light emission control unit that causes the light emitting unit to emit a pulse train of light that repeats a light emission period, during which the luminance changes, and a non-light emission period; a light receiving unit that receives reflected light of the light reflected by an object; and a distance measuring unit that measures the distance to the object based on the time from emission of the light to reception of the reflected light.
  • Reference signs: 1 distance measuring device; 60 light emitting unit; 61, 63 light emitting element driving unit; 62, 64 light emitting element; 100 pixel; 200 sensor unit; 210 imaging element; 300 distance measuring sensor; 310 light receiving unit; 320 light emission control unit; 330 distance measuring unit; 433 high luminance light emission period; 434 low luminance light emission period

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The present invention makes it possible to reduce the distance measurement processing in a distance measuring sensor and a distance measuring device. This distance measuring sensor comprises a light emission control unit, a light receiving unit, and a distance measuring unit. The light emission control unit of the distance measuring sensor causes a light emitting unit to emit a pulse train of light that repeats a light emission period, during which the luminance varies, and a non-light emission period. The light receiving unit of the distance measuring sensor receives the reflected light resulting from the light being reflected by an object. The distance measuring unit of the distance measuring sensor measures the distance to the object based on the time between the emission of the light and the reception of the reflected light.
PCT/JP2022/002120 2021-02-16 2022-01-21 Capteur de télémétrie et dispositif de télémétrie WO2022176498A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-022683 2021-02-16
JP2021022683A JP2022124821A (ja) 2021-02-16 2021-02-16 測距センサ及び測距装置

Publications (1)

Publication Number Publication Date
WO2022176498A1 true WO2022176498A1 (fr) 2022-08-25

Family

ID=82930781

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/002120 WO2022176498A1 (fr) 2021-02-16 2022-01-21 Capteur de télémétrie et dispositif de télémétrie

Country Status (2)

Country Link
JP (1) JP2022124821A (fr)
WO (1) WO2022176498A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130107000A1 (en) * 2011-10-27 2013-05-02 Microvision, Inc. Scanning Laser Time of Flight 3D Imaging
US20180175586A1 (en) * 2016-12-16 2018-06-21 STMicroelectronics (Alps) SAS Sinusoidal optical emission method, and corresponding circuit
JP2019203741A (ja) * 2018-05-22 2019-11-28 スタンレー電気株式会社 Tof方式測距装置
WO2020184028A1 (fr) * 2019-03-11 2020-09-17 ソニーセミコンダクタソリューションズ株式会社 Dispositif de mesure de distance

Also Published As

Publication number Publication date
JP2022124821A (ja) 2022-08-26

Similar Documents

Publication Publication Date Title
US10686994B2 (en) Imaging device, and solid-state imaging element used for same
US10545239B2 (en) Distance-measuring imaging device and solid-state imaging device
US7362419B2 (en) Range image sensor
US7586077B2 (en) Reference pixel array with varying sensitivities for time of flight (TOF) sensor
KR102136850B1 (ko) 깊이 센서, 및 이의 동작 방법
CN108780151B (zh) 飞行时间距离测量装置及用于检测多路径误差的方法
US20140313376A1 (en) Processing of time-of-flight signals
WO2021103428A1 (fr) Système et procédé de mesure de profondeur
JP5744511B2 (ja) センサー、その動作方法、及びセンサーを含むデータ処理システム
JP2013076645A (ja) 距離画像生成装置および距離画像生成方法
JP5180501B2 (ja) 測距装置及び測距方法
US11698447B2 (en) Beam steering aware pixel clustering of segmented sensor area and implementing averaging algorithms for pixel processing
WO2017138032A1 (fr) Dispositif de mesure de distance par temps de vol et procédé de détection d'erreur multivoie
JP2020153799A (ja) 測距装置および測距方法
JP2006084430A (ja) 距離画像センサ
US11523099B2 (en) Measurement device
WO2022176498A1 (fr) Capteur de télémétrie et dispositif de télémétrie
JP2006084429A (ja) 距離画像センサ
JP2002195807A (ja) 光学式変位測定装置及びその投光光量補正方法
CN112540385A (zh) 光检测装置以及电子装置
WO2022158603A1 (fr) Dispositif de capture d'image de distance et procédé de capture d'image de distance
WO2022259640A1 (fr) Capteur de mesure de distance, dispositif de mesure de distance et procédé de mesure de distance
WO2022224580A1 (fr) Dispositif de mesure de distance et système de mesure de distance
US20230296738A1 (en) Distance measurement device, distance measurement method, and phase detection device
CN113518894A (zh) 光测距装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22755822

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22755822

Country of ref document: EP

Kind code of ref document: A1