WO2021051479A1 - Interpolation-based time-of-flight measurement method and system


Publication number
WO2021051479A1
Authority
WO
WIPO (PCT)
Prior art keywords: time, flight, light source, interpolation, histogram
Prior art date
Application number
PCT/CN2019/113710
Other languages: English (en), Chinese (zh)
Inventors: 何燃, 朱亮, 王瑞, 闫敏
Original Assignee
深圳奥锐达科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥锐达科技有限公司 filed Critical 深圳奥锐达科技有限公司
Publication of WO2021051479A1 publication Critical patent/WO2021051479A1/fr

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/08: Systems determining position data of a target for measuring distance only
    • G01S17/10: Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to group G01S17/00
    • G01S7/4804: Auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • G01S7/483: Details of pulse systems
    • G01S7/484: Transmitters
    • G01S7/486: Receivers
    • G01S7/4865: Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/487: Extracting wanted echo signals, e.g. pulse detection
    • G01S7/4876: Extracting wanted echo signals by removing unwanted signals
    • G01S7/51: Display arrangements

Definitions

  • This application relates to the field of computer technology, and in particular to a method and system for measuring flight time based on interpolation.
  • The time-of-flight (TOF) method calculates the distance of an object by measuring the flight time of a light beam in space. Due to its high accuracy and large measurement range, it is widely used in consumer electronics, unmanned aerial vehicles, AR/VR, and other fields.
  • Distance measurement systems based on the time-of-flight principle, such as time-of-flight depth cameras, lidars, and other systems often include a light source emitting end and a receiving end.
  • The light source emits a light beam into the target space to provide illumination, and the receiving end receives the light beam reflected by the target. The distance of the object is calculated from the time required for the beam to be emitted, reflected, and received.
  • Lidar based on the time-of-flight method is mainly divided into mechanical and non-mechanical types.
  • The mechanical type uses a rotating base to achieve 360-degree distance measurement with a large field of view.
  • Its advantage is a large measurement range, but it suffers from high power consumption and limited resolution.
  • Non-mechanical area-array lidar can solve the problems of mechanical lidar to a certain extent. It transmits a surface beam covering a certain field of view into space at once and receives it with an area-array receiver, so its resolution and frame rate are improved; and because no rotating parts are needed, it is easier to assemble. Nevertheless, area-array lidar still faces some challenges.
  • Dynamic measurement also places higher requirements on the frame rate and measurement accuracy.
  • Improvements in resolution, frame rate, and accuracy often depend on the circuit scale of the receiving end and on improvements to the modulation and demodulation method.
  • However, increasing the circuit scale increases power consumption and cost and degrades the signal-to-noise ratio; in addition, it increases the amount of on-chip storage required, which poses serious challenges for mass production. Current modulation and demodulation methods also struggle to achieve high precision and low power consumption at the same time.
  • the purpose of the present application is to provide a time-of-flight measurement method and measurement system based on interpolation, so as to solve at least one of the above-mentioned background art problems.
  • An embodiment of the present application provides an interpolation-based time-of-flight measurement method, which includes the following steps: S1, obtain the first flight time of the first combined pixel corresponding to the first light source; S2, obtain by interpolation the second flight time of the second super pixel corresponding to the second light source; S3, according to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram; S4, use the histogram to calculate the third flight time.
  • the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
  • the interpolation includes one-dimensional interpolation or two-dimensional interpolation
  • the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  • Step S2 further includes calculating the difference between the flight-time values of the multiple spots used for the pixel to be interpolated; the interpolation is performed only when the difference is less than a certain threshold.
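A minimal sketch of this threshold test, assuming linear interpolation between two neighbouring spots (the function name, weighting, and threshold value are illustrative, not from the patent):

```python
def interpolate_tof(t_left, t_right, frac=0.5, threshold=2.0):
    """Linearly interpolate a coarse time-of-flight for a pixel lying between
    two measured spots, but only when the neighbours agree to within
    `threshold` (e.g. across a depth edge the interpolated value would be
    meaningless). Returns None when interpolation is rejected."""
    if abs(t_left - t_right) >= threshold:
        return None  # depth discontinuity: fall back to a full measurement
    return (1.0 - frac) * t_left + frac * t_right

# Pixel midway between two spots with similar flight times: interpolated.
print(interpolate_tof(10.0, 11.0))
# Neighbours disagree strongly: interpolation rejected.
print(interpolate_tof(10.0, 25.0))
```

The rejection branch is what the threshold in step S2 provides: near object boundaries the pixel is measured directly instead of being interpolated.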
  • the embodiment of the present application also provides a time-of-flight measurement system based on interpolation, including:
  • An emitter configured to emit a pulsed light beam, which includes a first light source and a second light source;
  • a collector configured to collect photons in the pulsed beam reflected by an object and form a photon signal, which includes a plurality of pixels;
  • The processing circuit is connected to the transmitter and the collector and performs the following steps to calculate the flight time: S1, obtain the first flight time of the first combined pixel corresponding to the first light source; S2, obtain by interpolation the second flight time of the second super pixel corresponding to the second light source; S3, according to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram; S4, use the histogram to calculate the third flight time.
  • the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
  • the interpolation includes one-dimensional interpolation or two-dimensional interpolation
  • the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  • Step S2 further includes calculating the difference between the flight-time values of the multiple spots used for the pixel to be interpolated; the interpolation is performed only when the difference is less than a certain threshold.
  • The embodiment of the present application provides an interpolation-based time-of-flight measurement method, including the following steps: S1, obtain the first flight time of the first combined pixel corresponding to the first light source; S2, obtain by interpolation the second flight time of the second super pixel corresponding to the second light source; S3, according to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram; S4, use the histogram to calculate the third flight time.
  • A coarse flight-time value is provided directly for most pixels through interpolation, so these pixels can perform fine histogram drawing based on the coarse value to calculate a high-precision fine flight time. Because the coarse histogram drawing step is skipped for these pixels, the calculation time is greatly reduced, thereby increasing the frame rate.
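The interpolation step S2 can be sketched as follows; the grid values and function name are illustrative, not from the patent:

```python
def interpolate_second_group(first_tof):
    """Sketch of step S2: the coarse flight time of each second-group spot,
    assumed here to lie midway between two horizontally adjacent first-group
    spots, is obtained by 1-D linear interpolation instead of drawing a
    coarse histogram for it."""
    return [[0.5 * (row[j] + row[j + 1]) for j in range(len(row) - 1)]
            for row in first_tof]

# S1: coarse flight times (ns) measured at the first group of spots.
first_tof = [[10.0, 10.2, 10.4, 10.6],
             [11.0, 11.2, 11.4, 11.6]]

# S2: interpolated coarse flight times for the second group of spots.
second_tof = interpolate_second_group(first_tof)

# S3/S4 (not shown): each value in second_tof seeds a narrow fine-resolution
# histogram around that flight time to compute the final precise flight time.
print(second_tof)
```

Only the first group of spots pays the cost of a full coarse measurement; every second-group value comes from arithmetic on already-measured neighbours.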
  • Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of a light source according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a histogram according to an embodiment of the present application.
  • Fig. 6 is a time-of-flight measurement method with dynamic histogram drawing according to an embodiment of the present application.
  • Fig. 7 is a time-of-flight measurement method according to an embodiment of the present application.
  • Fig. 8 is an interpolation-based time-of-flight measurement method according to an embodiment of the present application.
  • The term "connection" may refer to a fixed (mechanical) connection or to an electrical connection.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present application, “multiple” means two or more than two, unless otherwise specifically defined.
  • the present application provides an interpolation-based time-of-flight measurement method and measurement system.
  • the following describes an embodiment of the distance measurement system first.
  • a distance measurement system which has stronger resistance to ambient light and higher resolution.
  • Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of the present application.
  • the distance measurement system 10 includes a transmitter 11, a collector 12, and a processing circuit 13.
  • The transmitter 11 provides an emitted light beam 30 to the target space to illuminate an object 20 in the space. At least part of the emitted light beam 30 is reflected by the object 20 to form a reflected light beam 40, and at least part of the light signals (photons) of the reflected light beam 40 are collected by the collector 12.
  • The processing circuit 13 is connected to the transmitter 11 and the collector 12 respectively and synchronizes their trigger signals to calculate the time required for the light beam to be emitted by the transmitter 11 and received by the collector 12, that is, the flight time t between the emitted light beam 30 and the reflected light beam 40. The distance D of the corresponding point on the object can then be calculated by the following formula: D = c · t / 2,
  • where c is the speed of light.
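As a quick numeric check of the round-trip relation D = c · t / 2, the sketch below converts a flight time into a distance (values are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_tof(t_seconds):
    """One-way distance from a round-trip time of flight: D = c * t / 2."""
    return C * t_seconds / 2.0

# A 1 microsecond round trip corresponds to roughly 150 m:
print(distance_from_tof(1e-6))  # ≈ 149.9 m
```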
  • the transmitter 11 includes a light source 111 and an optical element 112.
  • The light source 111 can be a light emitting diode (LED), an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL), etc., or an array light source composed of multiple light sources.
  • In one embodiment, the array light source 111 consists of a plurality of VCSEL light sources generated on a single semiconductor substrate, forming a VCSEL array light source chip.
  • the light beam emitted by the light source 111 may be visible light, infrared light, ultraviolet light, or the like.
  • the light source 111 emits a light beam outward under the control of the processing circuit 13.
  • The light source 111 emits a pulsed light beam at a certain frequency (pulse period) under the control of the processing circuit 13, which can be used in direct time-of-flight (direct TOF) measurement. The frequency is set according to the measurement distance; for example, it can be set to 1 MHz-100 MHz for measurement distances of several meters to several hundred meters. It is understandable that the light source 111 may be controlled to emit the related light beams by a part of the processing circuit 13 or by a sub-circuit independent of the processing circuit 13, such as a pulse signal generator.
  • The optical element 112 receives the pulsed beam from the light source 111 and optically modulates it, for example by diffraction, refraction, or reflection, and then emits the modulated beam into space, for example as a focused beam, a flood beam, a structured light beam, etc.
  • the optical element 112 may be one or more combinations of lenses, diffractive optical elements, masks, mirrors, MEMS galvanometers, and the like.
  • the processing circuit 13 can be an independent dedicated circuit, such as a dedicated SOC chip, FPGA chip, ASIC chip, etc., or a general-purpose processor.
  • the processor in the terminal can be used as at least a part of the processing circuit 13.
  • the collector 12 includes a pixel unit 121 and an imaging lens unit 122.
  • the imaging lens unit 122 receives and guides at least part of the modulated light beam reflected by the object to the pixel unit 121.
  • the pixel unit 121 is composed of a single photon avalanche photodiode (SPAD), or an array pixel unit composed of multiple SPAD pixels.
  • the array size of the array pixel unit represents the resolution of the depth camera, such as 320 ⁇ 240 etc.
  • A SPAD can respond to an incident single photon, realizing single-photon detection. Because of its high sensitivity and fast response speed, it enables long-distance, high-precision measurement.
  • A SPAD can count single photons, for example using time-correlated single photon counting (TCSPC), to realize the collection of weak light signals and the calculation of flight time.
  • The pixel unit 121 is also connected to a readout circuit composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and other devices (not shown in the figure).
  • These circuits can be integrated with the pixels, or they can be part of the processing circuit 13. For ease of description, they will be collectively regarded as part of the processing circuit 13.
  • the distance measurement system 10 may also include a color camera, an infrared camera, an IMU, and other devices.
  • the combination of these devices can achieve richer functions, such as 3D texture modeling, infrared face recognition, SLAM and other functions.
  • The transmitter 11 and the collector 12 can also be arranged in a coaxial form, that is, the two share an optical path by means of optical devices with both reflection and transmission functions, such as a half mirror.
  • A single photon incident on a SPAD pixel causes an avalanche,
  • and the SPAD outputs an avalanche signal to the TDC circuit;
  • the TDC circuit measures the time interval between the photon's emission from the transmitter 11 and the avalanche.
  • The time intervals are accumulated by the time-correlated single photon counting (TCSPC) circuit into histogram statistics to recover the waveform of the entire pulse signal; the time corresponding to the waveform can then be determined, the flight time determined from that time to achieve accurate flight time detection, and finally the distance information of the object calculated from the flight time.
  • Suppose the maximum measurement range of the distance measurement system is Dmax.
  • The corresponding maximum flight time is t1 = 2·Dmax/c, where c is the speed of light. Generally, the pulse period Δt is required to be no less than t1 to avoid signal aliasing.
  • If each frame comprises n photon counting measurements, the time to complete a single frame measurement (the frame period) will not be less than n·t1.
  • For example, if the maximum measurement range is 150 m, then t1 is about 1 μs;
  • with n = 100,000 measurements per frame, the frame period will not be less than 100 ms,
  • and the frame rate will be less than 10 fps. It can be seen that in the TCSPC method the maximum measurement range limits the pulse period, which in turn limits the frame rate of distance measurement.
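The timing constraint above can be reproduced numerically. The sketch below assumes n = 100,000 measurements per frame, the value implied by the 150 m / 100 ms example rather than stated explicitly:

```python
C = 299_792_458.0  # speed of light, m/s

def max_flight_time(d_max_m):
    """t1 = 2 * Dmax / c: round-trip time at the maximum range."""
    return 2.0 * d_max_m / C

def min_frame_period(d_max_m, n_measurements):
    """A frame of n photon-counting measurements takes at least n * t1."""
    return n_measurements * max_flight_time(d_max_m)

t1 = max_flight_time(150.0)               # about 1 us for a 150 m range
period = min_frame_period(150.0, 100_000)  # about 0.1 s
print(t1, period, 1.0 / period)            # frame rate comes out below 10 fps
```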
  • Fig. 2 is a schematic diagram of a light source according to an embodiment of the present application.
  • the light source 111 is composed of a plurality of sub-light sources arranged on a single substrate (or multiple substrates), and the sub-light sources are arranged on the substrate in a certain pattern.
  • the substrate may be a semiconductor substrate, a metal substrate, etc.
  • the sub-light source may be a light emitting diode, an edge-emitting laser emitter, a vertical cavity surface laser emitter (VCSEL), etc.
  • In one embodiment, the light source 111 is composed of a plurality of VCSEL sub-light sources arranged on the semiconductor substrate, forming a VCSEL array light source chip.
  • the sub-light source is used to emit light beams of any desired wavelength, such as visible light, infrared light, and ultraviolet light.
  • the light source 111 emits light under the modulation drive of the driving circuit (which may be part of the processing circuit 13), such as continuous wave modulation, pulse modulation, and the like.
  • the light source 111 can also emit light in groups or as a whole under the control of the driving circuit.
  • the light source 111 includes a first sub-light source array 201, a second sub-light source array 202, etc., and the first sub-light source array 201 emits light under the control of the first driving circuit.
  • the second sub-light source array 202 emits light under the control of the second driving circuit.
  • the arrangement of the sub-light sources can be a one-dimensional arrangement or a two-dimensional arrangement, and can be a regular arrangement or an irregular arrangement.
  • Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the present application.
  • the pixel unit includes a pixel array 31 and a readout circuit 32.
  • The pixel array 31 is a two-dimensional array composed of a plurality of pixels 310, and the readout circuit 32 is composed of a TDC circuit 321, a histogram circuit 322, etc. The pixel array is used to collect at least part of the light beam reflected by the object and generate the corresponding photon signals.
  • The readout circuit 32 is used to process the photon signals to draw a histogram reflecting the pulse waveform emitted by the light source in the transmitter; further, the flight time can be calculated based on the histogram, and finally the result is output.
  • the readout circuit 32 may be composed of a single TDC circuit and histogram circuit, or may be an array readout circuit composed of multiple TDC circuit units and histogram circuit units.
  • The imaging lens unit 122 in the collector 12 guides the spot beam to the corresponding pixels.
  • The size of a single spot is set to correspond to multiple pixels (the correspondence here can be understood as imaging; the collector generally includes an imaging lens).
  • the pixel area composed of the corresponding multiple pixels is called "combined pixel" in this application.
  • The size of the combined pixel can be set according to actual needs and includes at least one pixel; for example, it can be 3×3 or 4×4. Generally, the light spot is round, oval, etc., and the combined pixel size should be set to be the same as, or slightly smaller than, the light spot size; however, considering the different magnifications caused by the distance of the measured object, the combined pixel size needs to be set with comprehensive consideration.
  • the pixel unit 31 includes an array composed of 14 ⁇ 18 pixels as an example for description.
  • According to the arrangement of the transmitter 11 and the collector 12, the measurement system 10 can be divided into coaxial and off-axis configurations. In the coaxial case, the light beam emitted by the transmitter 11 is received by the collector 12 after being reflected by the measured object,
  • and the position of the combined pixel is not affected by the distance of the measured object. In the off-axis case, due to parallax, when the measured object is at different distances the position of the light spot on the pixel unit also changes, usually along the baseline direction (the baseline is the line between the transmitter 11 and the collector 12; the horizontal direction is used to represent the baseline direction in this application). Therefore, when the distance of the measured object is unknown, the position of the combined pixel is uncertain.
  • To receive the reflected spot beam, this application sets a pixel area (here called a "super pixel") composed of a number of pixels exceeding the number of pixels in the combined pixel.
  • The size of a super pixel should exceed that of a combined pixel by at least one pixel.
  • In one embodiment, the size of the super pixel is the same as the combined pixel along the direction perpendicular to the baseline, and is larger than the combined pixel along the baseline direction.
  • the number of super pixels is generally the same as the number of spot beams collected by the collector 12 in a single measurement, which is 4 ⁇ 3 in FIG. 3.
  • The super pixel is set so that at the lower limit of the measurement range (close range) the spot falls on one side of the super pixel (left or right, depending on the relative positions of the transmitter 11 and the collector 12), and at the upper limit of the measurement range (far range) the spot falls on the other side of the super pixel.
  • the super pixels are set to a size of 2x6.
  • the corresponding super pixels are 361, 371, and 381, respectively.
  • The spots 363, 373, and 383 are the spot beams reflected by far, middle-distance, and close objects respectively; the corresponding combined pixels fall on the left, middle, and right sides of the super pixels.
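The parallax-induced spot shift that motivates the super pixel's extra width along the baseline can be estimated with the standard disparity relation shift ≈ f·b/(p·D). All parameter values below are illustrative assumptions, not taken from the patent:

```python
def spot_shift_pixels(focal_mm, baseline_mm, pixel_pitch_um, distance_m):
    """Approximate lateral shift (in pixels) of a spot on the sensor caused
    by transmitter/collector parallax: shift = f * b / (p * D)."""
    f = focal_mm * 1e-3        # focal length, m
    b = baseline_mm * 1e-3     # baseline, m
    p = pixel_pitch_um * 1e-6  # pixel pitch, m
    return f * b / (p * distance_m)

# With a 2 mm lens, 10 mm baseline and 10 um pixels (assumed values), the
# spot moves by several pixels between a 0.5 m and a 10 m target, which is
# why the super pixel is made wider than the combined pixel along the
# baseline direction:
near = spot_shift_pixels(2.0, 10.0, 10.0, 0.5)
far = spot_shift_pixels(2.0, 10.0, 10.0, 10.0)
print(near, far, near - far)
```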
  • In one embodiment, the pixels in a combined pixel share one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the combined pixel,
  • and the TDC circuit unit can calculate the flight time corresponding to the photon signals. This scheme is better suited to the coaxial case; in the off-axis case the combined pixel position changes with the distance of the measured object.
  • In Fig. 3, this corresponds to a TDC circuit array composed of 4×3 TDC circuit units.
  • In another embodiment, the pixels in a super pixel share one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the super pixel,
  • and the TDC circuit unit can calculate the flight time corresponding to the photon signals. Since the super pixel can accommodate the pixel shift caused by off-axis parallax, super-pixel TDC sharing can be applied to the off-axis case.
  • a TDC circuit array composed of 4 ⁇ 3 TDC circuit units will be included. Sharing the TDC circuit can effectively reduce the number of TDC circuits, thereby reducing the size and power consumption of the readout circuit.
  • The number of spots that can be collected is much smaller than the number of pixels; in other words, the resolution of the collected effective depth data is much smaller than the pixel resolution.
  • the pixel resolution in Figure 3 is 14 ⁇ 18, and the spot distribution is 4 ⁇ 3, that is, the effective depth data resolution of a single frame measurement is 4 ⁇ 3.
  • To improve the resolution, multi-frame measurement can be used.
  • The spots emitted by the transmitter 11 are shifted between the frames of the multi-frame measurement, producing a scanning effect,
  • so the spots received by the collector 12 also shift between the frames.
  • For example, the spots corresponding to two adjacent frames of measurement in Fig. 3 are 343 and 353 respectively, which improves the resolution.
  • The "shift" of the spots can be achieved by group control of the sub-light sources on the light source 111; that is, in the measurement of two or more adjacent frames, adjacent groups of sub-light sources are turned on in sequence, one group per frame.
  • The super pixels corresponding to the spots at different positions also need to be shifted accordingly.
  • The super pixel corresponding to spot 343 is 341, and the super pixel corresponding to spot 353 is 351.
  • Super pixel 351 is laterally shifted relative to super pixel 341, and there is a partial pixel overlap between super pixel 341 and super pixel 351.
  • Thus the super pixels measured in the multiple frames overlap each other.
  • In this case, the pixel area connected to a single TDC circuit unit includes the area composed of all the super pixels that shift during the multi-frame measurement, and there is overlap between the pixel areas corresponding to two adjacent TDC circuit units.
  • the pixel area 391 shares a TDC circuit unit, and the pixel area 391 includes 6 superpixels corresponding to 6 frames of measurement when the 6 groups of sub-light sources are turned on in sequence.
  • adjacent pixel regions 392 share a TDC circuit unit, and there is a partial overlap between the two pixel regions 391 and 392, which results in some pixels connected to two TDC circuit units.
  • the processing circuit 13 will gate the corresponding pixels so that the acquired photon signals can be measured by a single TDC circuit unit, so as to avoid crosstalk and errors.
  • In one embodiment, the number of TDC circuit units is the same as the number of spots collected by the collector 12 during a single frame measurement; in Fig. 3 the number of spots is 4×3.
  • Each shared TDC circuit unit is connected to 4×10 pixels, and there is an overlap of 4×4 pixels between the pixel regions connected to adjacent TDC circuit units.
  • During measurement, the TDC circuit receives the photon signals from the pixels in the super pixel area connected to it, calculates the time interval between each signal and the start clock signal (i.e., the flight time), and converts the time interval into a temperature code or binary code saved in the histogram circuit.
  • After multiple measurements, the histogram circuit can draw a histogram reflecting the pulse waveform; based on the histogram, the pulse flight time can be accurately obtained.
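A minimal sketch of this histogram readout with the highest-peak method; it simplifies the TDC and memory matrix to a plain list of bins, and the time codes are illustrative:

```python
def tof_from_histogram(time_codes, n_bins, bin_width_ns):
    """Accumulate TDC time codes (ns) into a histogram and estimate the
    flight time as the centre of the tallest bin (highest-peak method)."""
    counts = [0] * n_bins
    for code in time_codes:
        b = int(code // bin_width_ns)
        if 0 <= b < n_bins:
            counts[b] += 1  # one photon count in the matching time unit
    peak = max(range(n_bins), key=lambda i: counts[i])
    return (peak + 0.5) * bin_width_ns

# Signal photons cluster near 42 ns amid sparse background counts:
codes = [42.1, 41.8, 42.3, 42.0, 41.9, 7.0, 63.0, 88.0, 15.0]
print(tof_from_histogram(codes, n_bins=100, bin_width_ns=1.0))  # 42.5
```

Background photons land in scattered bins, so after enough measurements the signal bin dominates and the peak position recovers the pulse flight time.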
  • The larger the measurement range, the wider the time interval the TDC circuit must be able to measure; the higher the accuracy requirement, the higher the time resolution the TDC circuit must provide. Both a wider time interval and a higher time resolution require
  • a larger-scale TDC circuit that outputs a binary code with more digits. As the number of binary code digits increases, the storage capacity required of the histogram circuit's memory grows; the larger the memory capacity, the higher the cost and the greater the difficulty of monolithic integration for mass production. For this reason, the present application provides a readout circuit solution with an adjustable histogram circuit.
  • Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of the present application.
  • the readout circuit includes a TDC circuit 41 and a histogram circuit 42.
  • The TDC circuit 41 measures the time interval of the photon signal and converts it into a time code (binary code, temperature code, etc.); the histogram circuit 42 then, based on this time code, increments the corresponding internal time unit (that is, the storage unit used to save time information), for example by adding 1. After multiple measurements, the photon counts in all time units can be compiled and the time histogram drawn.
  • In the histogram, the abscissa is the time unit ΔT and the ordinate is the photon count value stored in the corresponding storage unit. Based on the histogram, the highest-peak method can be used to determine the position of the pulse waveform and obtain the corresponding flight time t.
  • the histogram circuit 42 includes an address decoder 421, a memory matrix 422, a read/write circuit 424, and a histogram drawing circuit 425.
  • the TDC circuit inputs the acquired time code (binary code, temperature code, etc.) that reflects the time interval to the address decoder 421, and the address decoder 421 converts it into address information, which will be stored in the storage matrix 422 in.
  • the storage matrix 422 includes a plurality of storage units 423, that is, time units. Each storage unit 423 is pre-configured with a certain address (or address interval).
  • When the time code address received by the address decoder 421 is the same as the address of a certain storage unit, or falls within that storage unit's address interval, the read/write circuit 424 performs a +1 operation on that storage unit, completing one photon count. After multiple measurements, the value in each storage unit reflects the number of photons received during its time interval. After a single frame measurement (i.e., multiple measurements), the data of all memory cells in the memory matrix 422 are read out to the histogram drawing circuit 425 for histogram drawing.
  • a control signal is applied to the histogram circuit 42 through the processing circuit to dynamically set the address (address interval) of each storage unit 423, thereby realizing dynamic control of the histogram time resolution ΔT and/or the time interval width T. For example, with the number of storage units 423 unchanged, setting the address interval corresponding to each storage unit 423 to a larger time interval, that is, increasing the width ΔT of the time unit, enlarges the total time interval that the storage matrix can cover, so the total time interval of the histogram becomes larger.
  • a histogram with such a larger time interval is called a coarse histogram. Conversely, the address interval corresponding to each storage unit 423 can be set to a smaller time interval;
  • the total time interval that the storage matrix can cover is then reduced, but the time resolution of the histogram increases.
  • such a histogram is called a fine histogram.
  • Fig. 6 shows a time-of-flight measurement method with dynamic histogram drawing according to an embodiment of the present application. It includes the following steps:
  • Step 1: Draw a coarse histogram in coarse-precision time units. That is, the address or address interval corresponding to each time unit in the storage matrix 422 is configured by applying a control signal, i.e., T and ΔT are set; in this step ΔT is configured as a larger time interval ΔT1.
  • the time interval ΔT1 should be set in consideration of the measurement range and the number of histogram storage units; that is, the flight time corresponding to the measurement range is distributed over all the histogram storage units, either equally or unequally, so that together the storage units cover the whole measurement range.
  • the flight time value obtained from each measurement is matched to its time unit, which is incremented by 1; finally the drawing of the coarse histogram is completed.
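  With equal allocation, the coarse unit width ΔT1 follows directly from the measurement range and the number of storage units. A sketch under that assumption (names are illustrative):

```python
C = 3.0e8  # speed of light, m/s

def coarse_bin_width(max_range_m, num_units):
    """Distribute the round-trip flight time covering the whole
    measurement range equally over all histogram storage units."""
    t_max = 2.0 * max_range_m / C  # flight time at the far end of the range
    return t_max / num_units       # coarse time unit width, i.e. ΔT1
```

  For a 15 m measurement range and 100 storage units, this gives ΔT1 = 1 ns.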
  • Step 2: Use the coarse histogram to calculate the coarse flight time value t1. Based on the coarse histogram, the highest-peak method can be used to find the pulse waveform position, and the corresponding flight time is read as the coarse flight time value t1.
  • the accuracy, or minimum resolution, of this flight time value is the time interval ΔT1 of the time unit.
  • in some embodiments, the measurement range may be divided into several intervals, each corresponding to its own flight time interval; the time unit width ΔT used within each time interval T may be the same or different.
  • when drawing a coarse histogram, each time interval can be drawn one by one. Since the distance of the measured object is unknown, the time interval into which the corresponding flight time will fall is also unknown, so in a given time interval the pulse waveform may not be detected, that is, the coarse flight time value cannot be calculated.
  • when the waveform position cannot be found in the coarse histogram of step 2, the method returns to step 1 and the next coarse histogram is drawn, until the pulse waveform is found in a coarse histogram.
  • a limit on the number of cycles can be set: for example, when the number of coarse histogram drawings exceeds a certain threshold (such as 3), it is considered that the target is not detected this time, or that the target is located at infinity, and the measurement ends.
  • Step 3: According to the obtained coarse flight time value, draw a fine histogram in fine time units. Since a coarse value of the flight time is now known, one more round of measurement can be performed and the corresponding histogram drawn. The histogram circuit is controlled by the control signal so that the address or address interval corresponding to each time unit in the storage matrix 422 is configured as a smaller time interval ΔT2.
  • generally, when setting ΔT2, it only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units.
  • the measurement range interval can be centered on the coarse flight time value with a certain margin T' added on both sides, for example [t1 - T', t1 + T']; the smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution.
  • in some embodiments, the ratio of the margin T' to the time interval of the coarse histogram may be set in the range of 1%-25%. A new round of multiple measurements is then performed; the flight time value obtained each time is matched to its time unit, which is incremented by 1, completing the drawing of the fine histogram.
  • Step 4: Use the fine histogram to calculate the fine flight time t2. Based on the fine histogram, the highest-peak method can be used to find the waveform position, and the corresponding flight time is read as the fine flight time value t2.
  • the above dynamic coarse-fine histogram measurement method is essentially a process of first performing coarse positioning over a larger measurement range and then performing a fine measurement based on the positioning result. Understandably, this coarse-fine scheme can also be extended to three or more measurement steps. For example, in some embodiments, a first measurement at a first time resolution yields a first flight time; based on the first flight time, a second measurement at a second time resolution yields a second flight time; finally, based on the second flight time, a third measurement at a third time resolution yields a third flight time. The precision increases with each step, so a higher-precision measurement is ultimately achieved.
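  Steps 1-4 above can be condensed into a two-pass software sketch. This is illustrative only: `measure` is a hypothetical stand-in for one TDC readout, and the margin T' is taken here as a fraction of the coarse histogram's total interval (one reading of the 1%-25% ratio mentioned above):

```python
import numpy as np

def coarse_fine_tof(measure, t_range, num_bins, margin_ratio=0.05, rounds=1000):
    """Coarse pass over [0, t_range), then a fine pass over a narrow
    window around the coarse peak, reusing the same number of storage
    units so the fine pass has a much smaller unit width."""

    def one_pass(lo, hi):
        dt = (hi - lo) / num_bins                 # time unit width for this pass
        hist = np.zeros(num_bins, dtype=np.int64)
        for _ in range(rounds):                   # multiple measurements
            idx = int((measure() - lo) / dt)
            if 0 <= idx < num_bins:
                hist[idx] += 1                    # +1 on the matched time unit
        return lo + (int(np.argmax(hist)) + 0.5) * dt, dt

    t1, dt1 = one_pass(0.0, t_range)              # coarse flight time, width dt1
    margin = margin_ratio * t_range               # margin T' on both sides
    t2, dt2 = one_pass(t1 - margin, t1 + margin)  # fine flight time, width dt2
    return t1, t2
```

  Because both passes reuse the same storage matrix, the fine pass gains resolution from shrinking the covered interval rather than from adding storage units.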
  • the specified time interval during which pixels are activated generally includes the time interval T over which the histogram is drawn. For example, if the time interval of the histogram is [3 ns, 10 ns], the time interval during which the pixel is activated can be set to [2.5 ns, 10.5 ns].
  • Fig. 7 shows a time-of-flight measurement method according to another embodiment of the present application, described below with reference to Fig. 3. The method includes the following steps:
  • Step 1: Receive the superpixel TDC output signal and draw a coarse histogram in coarse-precision time units. Since the distance of the object is unknown before the measurement, the position of the spot, and therefore of the combined pixel, cannot be determined; depending on the distance of the object, the combined pixel may fall at different positions within the superpixel. Therefore, in this step each pixel in the superpixel is first enabled into an activated state to receive photons, the photon signal output by the superpixel's shared TDC is received, and the histogram is drawn. The histogram adopts the dynamic adjustment scheme shown in Fig. 6; in this step the coarse histogram is drawn using coarse-precision time units.
  • Step 2: Use the coarse histogram to calculate the coarse flight time value t1. Based on the coarse histogram, use the highest-peak method to find the waveform position, and read the corresponding flight time as the coarse flight time value t1; the accuracy, or minimum resolution, of this value is the time interval ΔT1 of the time unit.
  • in some embodiments, the measurement range may be divided into several intervals, each corresponding to its own flight time interval; the time unit width ΔT used within each time interval T may be the same or different.
  • when drawing a coarse histogram, each time interval can be drawn one by one. Since the distance of the measured object is unknown, it is also unknown into which time interval the corresponding flight time will fall, so the pulse waveform may not be detected in a given time interval when drawing the coarse histogram.
  • when the waveform position cannot be found in the coarse histogram of step 2, the method returns to step 1 and the next coarse histogram is drawn, until the pulse waveform is found in a coarse histogram.
  • a limit on the number of cycles can be set: for example, when the number of coarse histogram drawings exceeds a certain threshold (such as 3), it is considered that the target is not detected this time, or that the target is located at infinity, and the measurement ends.
  • Step 3: According to the obtained coarse time-of-flight value, locate the combined pixel and draw a fine histogram in fine time units. Since the coarse time-of-flight value is now known, the position of the combined pixel can be located based on that value and the parallax. It is usually necessary to save in the system, in advance, the relationship between the combined pixel position and the coarse time-of-flight value; after the coarse time-of-flight value is obtained, the position of the combined pixel is located directly from this relationship. Then, based on that position, only the combined pixel is activated, and a fine histogram is drawn in fine time units.
  • the histogram circuit is controlled by the control signal so that the address or address interval corresponding to each time unit in the storage matrix 422 is configured as a smaller time interval ΔT2.
  • generally, when setting ΔT2, it only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units.
  • the measurement range interval can be centered on the coarse flight time value with a certain margin T' added on both sides, for example [t1 - T', t1 + T']; the smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution.
  • in some embodiments, the ratio of the margin T' to the time interval of the coarse histogram may be set in the range of 1%-25%. A new round of multiple measurements is then performed; the flight time value obtained each time is matched to its time unit, which is incremented by 1, completing the drawing of the fine histogram.
  • Step 4: Use the fine histogram to calculate the fine flight time t2. Based on the fine histogram, the highest-peak method can be used to find the waveform position, and the corresponding flight time is read as the fine flight time value t2.
  • the above dynamic coarse-fine histogram measurement method is essentially a process of first performing coarse positioning over a larger measurement range and then performing a fine measurement based on the positioning result. Understandably, this coarse-fine scheme can also be extended to three or more measurement steps. For example, in some embodiments, a first measurement at a first time resolution yields a first flight time; based on the first flight time, a second measurement at a second time resolution yields a second flight time; finally, based on the second flight time, a third measurement at a third time resolution yields a third flight time. The precision increases with each step, so a higher-precision measurement is ultimately achieved.
  • the specified time interval during which pixels are activated generally includes the time interval T over which the histogram is drawn. For example, if the time interval of the histogram is [3 ns, 10 ns], the time interval during which the pixel is activated can be set to [2.5 ns, 10.5 ns].
  • the embodiments in Figs. 2 and 3 introduce examples of improving resolution through multi-frame measurement. It can be understood that when multi-frame measurement is performed, the depth data of each frame can be measured using the dynamic histogram adjustment scheme shown in Fig. 6 or Fig. 7. For example, when the first sub-light-source array 201 is turned on, the dynamic coarse-fine histogram is drawn to obtain the first frame of the depth image; when the second sub-light-source array 202 is turned on, the dynamic coarse-fine histogram is drawn to obtain the second frame of the depth image; the first and second depth frames are then combined into a higher-resolution depth image. In some embodiments, three or more frames of depth images can be collected and merged into a higher-resolution depth image.
  • an embodiment of the present application, shown in Fig. 8, provides an interpolation-based time-of-flight measurement method. The method includes the following steps:
  • Step 1: Obtain the first flight time of the first combined pixel corresponding to the first light source.
  • first, the first light source in the emitter 11 is turned on to emit the spot beam corresponding to the first light source; the spot beam falls on a combined pixel of the pixel unit 31 in the collector 12, such as the 4x3 combined pixel shown in Fig. 3.
  • the processing circuit can then obtain the first flight time of the combined pixel.
  • when calculating the first flight time, the coarse-fine dynamic adjustment scheme of the embodiment shown in Fig. 6 or Fig. 7, or any other suitable scheme, can be used.
  • Step 2: Obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation.
  • when the second light source is turned on, a spot beam adjacent to the one corresponding to the first light source is emitted, and this spot beam also falls on a combined pixel of the collector 12.
  • in Fig. 3, the dotted circle marks a spot 353.
  • the spot 353 and the spot 343 are spatially staggered because the positions of the first and second light sources are staggered, and therefore their corresponding pixels are also staggered. Generally, when two points in space are relatively close, their distances will not differ too much.
  • therefore, the flight time value of the combined pixel corresponding to the spot 343 obtained in step 1 can be used as the second flight time value (coarse flight time) of the superpixel 351 corresponding to the spot 353, after which the fine flight time calculation is performed.
  • in some embodiments, the second time-of-flight value of the superpixel of the spot 353 can be estimated using the combined pixels corresponding to multiple first light sources around the spot 353, for example by interpolating the time-of-flight values of the combined pixels to its left and right.
  • the interpolation may be one-dimensional or two-dimensional, and the interpolation method may be at least one of linear interpolation, spline interpolation, polynomial interpolation, and the like.
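  A one-dimensional linear interpolation of the left and right neighbours' flight times, as mentioned above, might look like this (function and parameter names are illustrative, not from the patent):

```python
def interpolate_second_tof(t_left, t_right, frac=0.5):
    """Estimate the coarse flight time of an intermediate spot from the
    measured flight times of its left and right neighbour spots.
    `frac` is the spot's relative position between the two neighbours
    (0.5 when it lies midway between them)."""
    return (1.0 - frac) * t_left + frac * t_right
```

  Two-dimensional, spline, or polynomial interpolation over more neighbouring spots follows the same idea with more sample points.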
  • Step 3: According to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram. After the second flight time is obtained by interpolation, the position of the spot within the superpixel, that is, the position of the combined pixel, can be located based on that flight time and the parallax; then, based on the position of the combined pixel, only the combined pixel is activated, and a histogram is drawn in fine time units.
  • Step 4: Use the histogram to calculate the third flight time. Based on the histogram, the highest-peak method can be used to find the waveform position, and the corresponding flight time is read as the third (fine) flight time value t2; the accuracy, or minimum resolution, of this value is the time interval ΔT2 of the time unit.
  • in the flight time measurement method of the above steps, the coarse-fine histogram drawing method, which requires at least 2 frames of flight time measurement to reach high accuracy, is needed for the flight time calculation of only a few spots.
  • for most spots, the flight time can be calculated by using the flight time values of known spots as the coarse flight time value of the coarse histogram; based on that coarse value, only a single fine histogram drawing is required.
  • this can greatly improve efficiency. For example, if the light sources are divided into 6 groups, only the first group needs to perform the coarse-fine measurement when it is turned on; each of the subsequent 5 groups needs only a single fine measurement when its flight time is measured.
  • in some embodiments, when the difference between the flight time values of the combined pixels corresponding to the multiple spots used for interpolation is greater than a certain threshold, it is considered that there is a jump in the surface depth of the object between those spots; the spots between them then retain the coarse-fine histogram drawing measurement scheme, and the interpolation calculation is performed only when the difference is less than the threshold.
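  That depth-jump guard can be sketched as follows (a hypothetical helper; the threshold value is application-dependent and not specified by the source):

```python
def second_tof_or_fallback(t_neighbors, jump_threshold):
    """Return an interpolated (mean) coarse flight time when the
    neighbouring spots' flight times agree; return None when their
    spread exceeds the threshold, signalling a depth jump so the spot
    should keep the full coarse-fine histogram measurement instead."""
    if max(t_neighbors) - min(t_neighbors) > jump_threshold:
        return None  # depth discontinuity: fall back to coarse-fine
    return sum(t_neighbors) / len(t_neighbors)
```

  A caller would interpolate only when the returned value is not None, and otherwise schedule the spot for a coarse-then-fine measurement.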
  • in some embodiments, the first flight time of the first combined pixel may itself be a coarse flight time; that is, when calculating the first flight time of the first combined pixel, only a single coarse histogram drawing is required, and the interpolation is then performed based on the coarse flight time obtained from that coarse histogram drawing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

An interpolation-based time-of-flight measurement method is provided. The method comprises: S1, acquiring a first flight time of a first combined pixel corresponding to a first light source (801); S2, obtaining, through interpolation calculation, a second flight time of a second superpixel corresponding to a second light source (802); S3, locating, according to the second flight time, a second combined pixel corresponding to the second light source, and drawing a histogram (803); and S4, calculating a third flight time by means of the histogram (804). Coarse time-of-flight values are supplied directly to most pixels by means of interpolation, so that fine histogram drawing can be performed directly for those pixels based on the coarse time-of-flight values to calculate a high-precision fine time-of-flight value; and because the coarse histogram drawing step is omitted, the computation time can be greatly reduced, thereby increasing the frame rate.
PCT/CN2019/113710 2019-09-19 2019-10-28 Procédé et système de mesure de temps de vol fondée sur une interpolation WO2021051479A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910889455.6A CN110596725B (zh) 2019-09-19 2019-09-19 基于插值的飞行时间测量方法及测量系统
CN201910889455.6 2019-09-19

Publications (1)

Publication Number Publication Date
WO2021051479A1 true WO2021051479A1 (fr) 2021-03-25

Family

ID=68861628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113710 WO2021051479A1 (fr) 2019-09-19 2019-10-28 Procédé et système de mesure de temps de vol fondée sur une interpolation

Country Status (2)

Country Link
CN (1) CN110596725B (fr)
WO (1) WO2021051479A1 (fr)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11885915B2 (en) * 2020-03-30 2024-01-30 Stmicroelectronics (Research & Development) Limited Time to digital converter
CN111487639B (zh) * 2020-04-20 2024-05-03 深圳奥锐达科技有限公司 一种激光测距装置及方法
CN114402225A (zh) * 2020-06-03 2022-04-26 深圳市大疆创新科技有限公司 测距方法、测距装置和可移动平台
CN113848538A (zh) * 2020-06-25 2021-12-28 深圳奥锐达科技有限公司 一种色散光谱激光雷达系统及测量方法
CN114355384B (zh) * 2020-07-07 2024-01-02 柳州阜民科技有限公司 飞行时间tof系统和电子设备
CN111856433B (zh) * 2020-07-25 2022-10-04 深圳奥锐达科技有限公司 一种距离测量系统及测量方法
CN112100449B (zh) * 2020-08-24 2024-02-02 深圳市力合微电子股份有限公司 实现动态大范围和高精度定位的d-ToF测距优化存储方法
WO2022109826A1 (fr) * 2020-11-25 2022-06-02 深圳市速腾聚创科技有限公司 Procédé et appareil de mesure de distance, dispositif électronique et support de stockage
CN112731425B (zh) * 2020-11-29 2024-05-03 奥比中光科技集团股份有限公司 一种处理直方图的方法、距离测量系统及距离测量设备
CN112558096B (zh) * 2020-12-11 2021-10-26 深圳市灵明光子科技有限公司 一种基于共享内存的测距方法、系统以及存储介质
CN113514842A (zh) * 2021-03-08 2021-10-19 奥诚信息科技(上海)有限公司 一种距离测量方法、系统及装置
CN115144864A (zh) * 2021-03-31 2022-10-04 上海禾赛科技有限公司 存储方法、数据处理方法、激光雷达和计算机可读存储介质
CN113484870B (zh) * 2021-07-20 2024-05-14 Oppo广东移动通信有限公司 测距方法与装置、终端及非易失性计算机可读存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105911536A (zh) * 2016-06-12 2016-08-31 中国科学院上海技术物理研究所 一种具备实时门控功能的多通道光子计数激光雷达接收机
CN107462898A (zh) * 2017-08-08 2017-12-12 中国科学院西安光学精密机械研究所 基于单光子阵列的选通型漫反射绕角成像系统与方法
US20180329064A1 (en) * 2017-05-09 2018-11-15 Stmicroelectronics (Grenoble 2) Sas Method and apparatus for mapping column illumination to column detection in a time of flight (tof) system
CN109343070A (zh) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 时间飞行深度相机
CN109725326A (zh) * 2017-10-30 2019-05-07 豪威科技股份有限公司 飞行时间相机
CN109870704A (zh) * 2019-01-23 2019-06-11 深圳奥比中光科技有限公司 Tof相机及其测量方法
CN110073244A (zh) * 2016-12-12 2019-07-30 森斯尔科技有限公司 用于确定光子的飞行时间的直方图读出方法和电路
CN110235024A (zh) * 2017-01-25 2019-09-13 苹果公司 具有调制灵敏度的spad检测器

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1499851A4 (fr) * 2002-04-15 2008-11-19 Toolz Ltd Dispositif de mesure de distance
CN101680953B (zh) * 2007-05-16 2014-08-13 皇家飞利浦电子股份有限公司 虚拟pet探测器和用于pet的准像素化读出方案
US8587771B2 (en) * 2010-07-16 2013-11-19 Microsoft Corporation Method and system for multi-phase dynamic calibration of three-dimensional (3D) sensors in a time-of-flight system
WO2012014077A2 (fr) * 2010-07-29 2012-02-02 Waikatolink Limited Appareil et procédé de mesure des caractéristiques de distance et/ou d'intensité d'objets
KR102191139B1 (ko) * 2013-08-19 2020-12-15 바스프 에스이 광학 검출기
DE102014100696B3 (de) * 2014-01-22 2014-12-31 Sick Ag Entfernungsmessender Sensor und Verfahren zur Erfassung und Abstandsbestimmung von Objekten
JP6118948B2 (ja) * 2014-03-03 2017-04-19 コンソーシアム・ピー・インコーポレーテッドConsortium P Incorporated 除外区域を用いたリアルタイム位置検出
GB201413564D0 (en) * 2014-07-31 2014-09-17 Stmicroelectronics Res & Dev Time of flight determination
CN107015234B (zh) * 2017-05-19 2019-08-09 中国科学院国家天文台长春人造卫星观测站 嵌入式卫星激光测距控制系统
DE102017113675B4 (de) * 2017-06-21 2021-11-18 Sick Ag Optoelektronischer Sensor und Verfahren zur Messung der Entfernung zu einem Objekt
EP3428683B1 (fr) * 2017-07-11 2019-08-28 Sick Ag Capteur optoélectronique et procédé de mesure de distance
EP3460508A1 (fr) * 2017-09-22 2019-03-27 ams AG Corps semi-conducteur et procédé pour les mesures de temps de vol
US10996323B2 (en) * 2018-02-22 2021-05-04 Stmicroelectronics (Research & Development) Limited Time-of-flight imaging device, system and method
CN209167538U (zh) * 2018-11-21 2019-07-26 深圳奥比中光科技有限公司 时间飞行深度相机
CN110111239B (zh) * 2019-04-28 2022-12-20 叠境数字科技(上海)有限公司 一种基于tof相机软分割的人像头部背景虚化方法

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105911536A (zh) * 2016-06-12 2016-08-31 中国科学院上海技术物理研究所 一种具备实时门控功能的多通道光子计数激光雷达接收机
CN110073244A (zh) * 2016-12-12 2019-07-30 森斯尔科技有限公司 用于确定光子的飞行时间的直方图读出方法和电路
CN110235024A (zh) * 2017-01-25 2019-09-13 苹果公司 具有调制灵敏度的spad检测器
US20180329064A1 (en) * 2017-05-09 2018-11-15 Stmicroelectronics (Grenoble 2) Sas Method and apparatus for mapping column illumination to column detection in a time of flight (tof) system
CN107462898A (zh) * 2017-08-08 2017-12-12 中国科学院西安光学精密机械研究所 基于单光子阵列的选通型漫反射绕角成像系统与方法
CN109725326A (zh) * 2017-10-30 2019-05-07 豪威科技股份有限公司 飞行时间相机
CN109343070A (zh) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 时间飞行深度相机
CN109870704A (zh) * 2019-01-23 2019-06-11 深圳奥比中光科技有限公司 Tof相机及其测量方法

Also Published As

Publication number Publication date
CN110596725B (zh) 2022-03-04
CN110596725A (zh) 2019-12-20

Similar Documents

Publication Publication Date Title
WO2021051477A1 (fr) Système et procédé de mesure de distance par temps de vol comportant un histogramme réglable
WO2021051478A1 (fr) Système et procédé de mesure de distance basé sur le temps de vol pour circuit tdc à double partage
WO2021051479A1 (fr) Procédé et système de mesure de temps de vol fondée sur une interpolation
WO2021051481A1 (fr) Procédé de mesure de distance par temps de vol par traçage d'un histogramme dynamique et système de mesure associé
WO2021051480A1 (fr) Procédé de mesure de distance de temps de vol basé sur un dessin d'histogramme dynamique et système de mesure
WO2021072802A1 (fr) Système et procédé de mesure de distance
CN111108407B (zh) 半导体主体和用于飞行时间测量的方法
CN111025317B (zh) 一种可调的深度测量装置及测量方法
WO2021248892A1 (fr) Système de mesure de distance et procédé de mesure
CN101449181B (zh) 测距方法和用于确定目标的空间维度的测距仪
CN110221272B (zh) 时间飞行深度相机及抗干扰的距离测量方法
CN110221274B (zh) 时间飞行深度相机及多频调制解调的距离测量方法
KR20190055238A (ko) 물체까지의 거리를 결정하기 위한 시스템 및 방법
CN110221273B (zh) 时间飞行深度相机及单频调制解调的距离测量方法
CN112731425B (zh) 一种处理直方图的方法、距离测量系统及距离测量设备
CN110780312B (zh) 一种可调距离测量系统及方法
US20220043129A1 (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
CN111965658B (zh) 一种距离测量系统、方法及计算机可读存储介质
WO2021035694A1 (fr) Système et procédé de mesure de distance de temps de vol à base d'un codage temporel
CN212135134U (zh) 基于时间飞行的3d成像装置
US11709271B2 (en) Time of flight sensing system and image sensor used therein
CN111796295A (zh) 一种采集器、采集器的制造方法及距离测量系统
CN211148917U (zh) 一种距离测量系统
WO2022241942A1 (fr) Caméra de profondeur et procédé de calcul de profondeur
US20230019246A1 (en) Time-of-flight imaging circuitry, time-of-flight imaging system, and time-of-flight imaging method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945878

Country of ref document: EP

Kind code of ref document: A1