WO2021051479A1 - Interpolation-based time of flight measurement method and system - Google Patents


Info

Publication number
WO2021051479A1
WO2021051479A1 (PCT/CN2019/113710; CN 2019113710 W)
Authority
WO
WIPO (PCT)
Prior art keywords
time
flight
light source
interpolation
histogram
Prior art date
Application number
PCT/CN2019/113710
Other languages
French (fr)
Chinese (zh)
Inventor
何燃
朱亮
王瑞
闫敏
Original Assignee
深圳奥锐达科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳奥锐达科技有限公司
Publication of WO2021051479A1 publication Critical patent/WO2021051479A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4804Auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/484Transmitters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/4865Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • G01S7/487Extracting wanted echo signals, e.g. pulse detection
    • G01S7/4876Extracting wanted echo signals, e.g. pulse detection by removing unwanted signals
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/51Display arrangements

Definitions

  • This application relates to the field of computer technology, and in particular to a method and system for measuring flight time based on interpolation.
  • the Time of Flight (TOF) method calculates the distance of an object by measuring the flight time of a light beam in space. Owing to its high accuracy and large measurement range, it is widely used in consumer electronics, unmanned aerial vehicles, AR/VR, and other fields.
  • Distance measurement systems based on the time-of-flight principle, such as time-of-flight depth cameras and lidars, often include a light-source emitting end and a receiving end.
  • the light source emits a light beam into the target space to provide illumination, and the receiving end receives the light beam reflected by the target. The distance to the object is calculated from the time the beam takes to be emitted, reflected, and received.
  • lidar based on the time-of-flight method is mainly divided into mechanical and non-mechanical types.
  • the mechanical type uses a rotating base to achieve 360-degree distance measurement with a large field of view.
  • its advantage is a large measurement range, but it suffers from high power consumption and low resolution.
  • among non-mechanical types, area array lidar can solve the problems of mechanical lidar to a certain extent: it transmits a surface beam covering a certain field of view into space in one shot and receives it through an area array receiver, so its resolution and frame rate are improved, and because no rotating parts are needed, it is easier to install. Nevertheless, area array lidar still faces some challenges.
  • dynamic measurement also places higher demands on the frame rate and measurement accuracy.
  • improvements in resolution, frame rate, and accuracy often depend on enlarging the circuit scale of the receiving end and improving the modulation and demodulation method.
  • however, increasing the circuit scale raises power consumption and cost and degrades the signal-to-noise ratio; it also increases the amount of on-chip storage, which poses serious challenges to mass production. Current modulation and demodulation methods likewise struggle to meet high-precision, low-power requirements.
  • the purpose of the present application is to provide a time-of-flight measurement method and measurement system based on interpolation, so as to solve at least one of the above-mentioned background art problems.
  • an embodiment of the present application provides an interpolation-based time-of-flight measurement method, which includes the following steps: S1, obtain the first time of flight of the first combined pixel corresponding to the first light source; S2, obtain the second time of flight of the second super pixel corresponding to the second light source through interpolation; S3, locate the second combined pixel corresponding to the second light source according to the second time of flight and draw a histogram; S4, calculate the third time of flight using the histogram.
  • the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
  • the interpolation includes one-dimensional interpolation or two-dimensional interpolation
  • the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  • the step S2 further includes computing the difference between the flight-time values of the multiple spots used to interpolate a pixel; the interpolation is performed only when the difference is below a set threshold.
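The threshold-gated interpolation of step S2 can be sketched in Python. The function name, weights, and threshold value below are illustrative assumptions, not values from the patent:

```python
def interpolate_tof(spot_tofs, weights, threshold):
    """Interpolate a pixel's time of flight from neighboring spot
    measurements (hypothetical helper; names and values are illustrative).

    spot_tofs : flight times (seconds) of the spots around the pixel
    weights   : interpolation weights, summing to 1 (linear case)
    threshold : maximum allowed spread between spot flight times; if the
                spots disagree by more (e.g. at a depth edge), skip
                interpolation and return None
    """
    if max(spot_tofs) - min(spot_tofs) > threshold:
        return None
    return sum(w * t for w, t in zip(weights, spot_tofs))

# midway between two consistent spots at 100 ns and 102 ns
tof = interpolate_tof([100e-9, 102e-9], [0.5, 0.5], threshold=5e-9)
# across a depth discontinuity, the 40 ns spread exceeds the threshold
edge = interpolate_tof([100e-9, 140e-9], [0.5, 0.5], threshold=5e-9)
```

The threshold check prevents interpolating across depth discontinuities, where a blended flight time would correspond to no real surface.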
  • the embodiment of the present application also provides a time-of-flight measurement system based on interpolation, including:
  • An emitter configured to emit a pulsed light beam, which includes a first light source and a second light source;
  • a collector configured to collect photons in the pulsed beam reflected by an object and form a photon signal, which includes a plurality of pixels;
  • the processing circuit is connected to the transmitter and the collector, and performs the following steps to calculate the flight time: S1, obtain the first time of flight of the first combined pixel corresponding to the first light source; S2, obtain the second time of flight of the second super pixel corresponding to the second light source through interpolation; S3, locate the second combined pixel corresponding to the second light source according to the second time of flight and draw a histogram; S4, calculate the third time of flight using the histogram.
  • the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
  • the interpolation includes one-dimensional interpolation or two-dimensional interpolation
  • the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  • step S2 further includes computing the difference between the flight-time values of the multiple spots used to interpolate a pixel; the interpolation is performed only when the difference is below a set threshold.
  • the embodiment of the present application provides an interpolation-based time-of-flight measurement method, including the following steps: S1, obtain the first time of flight of the first combined pixel corresponding to the first light source; S2, obtain the second time of flight of the second super pixel corresponding to the second light source through interpolation; S3, locate the second combined pixel corresponding to the second light source according to the second time of flight and draw a histogram; S4, calculate the third time of flight using the histogram.
  • a coarse flight-time value is provided directly for most pixels through interpolation, so these pixels can proceed straight to fine histogram drawing based on the coarse value and calculate a high-precision fine flight-time value. Because the coarse histogram drawing step is skipped for them, the calculation time is greatly reduced, thereby increasing the frame rate.
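A minimal sketch of steps S1–S4 as a pipeline, with a stand-in for the fine-histogram refinement (the function names, neighbor layout, and the toy refinement callback are all assumptions for illustration):

```python
def interpolation_tof_pipeline(coarse_tofs, neighbors, fine_refine):
    """S1: coarse_tofs maps each first-light-source combined pixel to
    its measured first time of flight.
    S2: for each second-light-source pixel, interpolate a second time
    of flight from its two neighboring coarse values.
    S3/S4: hand that coarse estimate to fine_refine, which positions a
    narrow fine histogram around it and returns the third (refined)
    time of flight."""
    out = {}
    for pix, (a, b) in neighbors.items():
        t2 = 0.5 * (coarse_tofs[a] + coarse_tofs[b])  # S2: 1-D linear interpolation
        out[pix] = fine_refine(pix, t2)               # S3 + S4
    return out

# toy refinement: pretend the fine histogram lands 1 ns above the guess
refined = interpolation_tof_pipeline(
    {0: 100e-9, 1: 104e-9},
    {"p": (0, 1)},
    lambda pix, t: t + 1e-9,
)
```

The point of the design shows up in the loop: only the spots lit by the first light source ever pay for a coarse histogram; every other pixel starts from an interpolated guess.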
  • Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of the present application.
  • Fig. 2 is a schematic diagram of a light source according to an embodiment of the present application.
  • Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the present application.
  • Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of the present application.
  • Fig. 5 is a schematic diagram of a histogram according to an embodiment of the present application.
  • Fig. 6 is a time-of-flight measurement method with dynamic histogram drawing according to an embodiment of the present application.
  • Fig. 7 is a time-of-flight measurement method according to an embodiment of the present application.
  • Fig. 8 is an interpolation-based time-of-flight measurement method according to an embodiment of the present application.
  • the term "connection" can refer to fixing or to circuit connection.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features. In the description of the embodiments of the present application, “multiple” means two or more than two, unless otherwise specifically defined.
  • the present application provides an interpolation-based time-of-flight measurement method and measurement system.
  • the following describes an embodiment of the distance measurement system first.
  • a distance measurement system which has stronger resistance to ambient light and higher resolution.
  • Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of the present application.
  • the distance measurement system 10 includes a transmitter 11, a collector 12, and a processing circuit 13.
  • the transmitter 11 provides an emitted light beam 30 to the target space to illuminate an object 20 in the space. At least part of the emitted light beam 30 is reflected by the object 20 to form a reflected light beam 40, and at least part of the light signals (photons) of the reflected light beam 40 are collected by the collector 12.
  • the processing circuit 13 is connected to the transmitter 11 and the collector 12 respectively, and synchronizes their trigger signals so as to calculate the time required for the beam to travel from the transmitter 11 until it is received by the collector 12, that is, the flight time t between the emitted light beam 30 and the reflected light beam 40. Further, the distance D of the corresponding point on the object can be calculated by the following formula: D = c·t/2,
  • where c is the speed of light.
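In code, the formula D = c·t/2 (the factor 2 accounts for the round trip) is simply:

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(t_seconds):
    """D = c * t / 2: the beam travels to the object and back,
    so only half the round-trip flight time maps to distance."""
    return C * t_seconds / 2.0

d = distance_from_tof(1e-6)  # a 1 microsecond round trip, roughly 150 m
```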
  • the transmitter 11 includes a light source 111 and an optical element 112.
  • the light source 111 can be a light source such as a light emitting diode (LED), an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL), etc., or an array light source composed of multiple light sources.
  • in one embodiment, the array light source 111 is a VCSEL array light source chip formed by generating a plurality of VCSEL light sources on a single semiconductor substrate.
  • the light beam emitted by the light source 111 may be visible light, infrared light, ultraviolet light, or the like.
  • the light source 111 emits a light beam outward under the control of the processing circuit 13.
  • in one embodiment, the light source 111 emits a pulsed light beam at a certain frequency (pulse period) under the control of the processing circuit 13, which can be used in direct time-of-flight (direct TOF) measurement. The frequency is set according to the measurement distance; for example, it can be set to 1 MHz-100 MHz for measurement distances of several meters to several hundred meters. It is understandable that the light source 111 may be controlled to emit the related beams by a part of the processing circuit 13 or by a sub-circuit independent of the processing circuit 13, such as a pulse signal generator.
  • the optical element 112 receives the pulsed beam from the light source 111, optically modulates the pulsed beam, such as by diffraction, refraction, or reflection, and then emits the modulated beam, such as a focused beam, a flood beam, or a structured light beam, into space.
  • the optical element 112 may be one or more combinations of lenses, diffractive optical elements, masks, mirrors, MEMS galvanometers, and the like.
  • the processing circuit 13 can be an independent dedicated circuit, such as a dedicated SOC chip, FPGA chip, ASIC chip, etc., or a general-purpose processor.
  • the processor in the terminal can be used as at least a part of the processing circuit 13.
  • the collector 12 includes a pixel unit 121 and an imaging lens unit 122.
  • the imaging lens unit 122 receives and guides at least part of the modulated light beam reflected by the object to the pixel unit 121.
  • the pixel unit 121 is composed of a single photon avalanche photodiode (SPAD), or an array pixel unit composed of multiple SPAD pixels.
  • the array size of the array pixel unit represents the resolution of the depth camera, such as 320×240.
  • SPAD can respond to the incident single photon to realize the detection of single photon. Because of its high sensitivity and fast response speed, it can realize long-distance and high-precision measurement.
  • SPAD can count single photons, for example using time-correlated single photon counting (TCSPC) to collect weak light signals and calculate the flight time.
  • the pixel unit 121 is also connected to a readout circuit (not shown in the figure) composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and other devices.
  • these circuits may be integrated with the pixels, or they may be part of the processing circuit 13. For ease of description, they will be collectively regarded as part of the processing circuit 13.
  • the distance measurement system 10 may also include a color camera, an infrared camera, an IMU, and other devices.
  • the combination of these devices can achieve richer functions, such as 3D texture modeling, infrared face recognition, SLAM and other functions.
  • the transmitter 11 and the collector 12 can also be arranged in a coaxial form, that is, the two are realized by optical devices with reflection and transmission functions, such as a half mirror.
  • a single photon incident on the SPAD pixel will cause an avalanche
  • the SPAD will output an avalanche signal to the TDC circuit
  • the TDC circuit will detect the time interval between the photon emission from the emitter 11 and the avalanche.
  • the time intervals are counted by the time-correlated single photon counting (TCSPC) circuit to build histogram statistics that recover the waveform of the entire pulse signal. The time corresponding to the waveform can then be determined, the flight time derived from that time to achieve accurate flight-time detection, and finally the distance information of the object calculated from the flight time.
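The TCSPC accumulation and peak readout just described can be sketched as follows; the integer-picosecond codes and the toy photon arrivals are assumptions for illustration (a real TDC outputs discrete codes of some fixed resolution):

```python
def tcspc_histogram(intervals_ps, bin_ps, n_bins):
    """Accumulate TDC time intervals (integer picoseconds) into a
    photon-count histogram, one +1 per detected photon."""
    hist = [0] * n_bins
    for t in intervals_ps:
        b = t // bin_ps
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def tof_from_histogram(hist, bin_ps):
    """Highest-peak method: take the center of the fullest bin as the
    recovered flight time."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    return (peak + 0.5) * bin_ps

# signal photons cluster near 100 ns; two background counts elsewhere
arrivals = [100_000, 101_000, 100_500, 3_000, 250_000, 100_200]  # ps
hist = tcspc_histogram(arrivals, bin_ps=2_000, n_bins=200)
tof_ps = tof_from_histogram(hist, 2_000)
```

Over many pulse periods the signal bin grows linearly with the number of measurements while background counts spread across all bins, which is why the peak emerges even from weak returns.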
  • assuming the maximum measurement range of the distance measurement system is Dmax, the corresponding maximum flight time is t1 = 2·Dmax/c, where c is the speed of light. Generally, the pulse period Δt ≥ t1 is required to avoid signal aliasing.
  • the time to complete a single frame measurement (the frame period) will not be less than n·t1, that is, the period of each frame measurement includes n photon counting measurements.
  • for example, if the maximum measurement range is 150 m, t1 is about 1 μs; with n = 100,000 photon counting measurements per frame, the frame period will not be less than 100 ms and the frame rate will be less than 10 fps. It can be seen that in the TCSPC method the maximum measurement range limits the pulse period, which in turn limits the frame rate of distance measurement.
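The range/frame-rate trade-off works out numerically as below; n = 100,000 measurements per frame is assumed here to reproduce the quoted 100 ms / 10 fps figures:

```python
C = 299_792_458.0  # speed of light, m/s

def max_tof(d_max_m):
    """Round-trip flight time t1 = 2 * Dmax / c at the range limit."""
    return 2.0 * d_max_m / C

def max_frame_rate(d_max_m, n_measurements):
    """Each frame needs n pulse periods of at least t1, so the frame
    period is at least n * t1 and the frame rate at most its inverse."""
    return 1.0 / (n_measurements * max_tof(d_max_m))

t1 = max_tof(150.0)                   # about 1 microsecond for 150 m
fps = max_frame_rate(150.0, 100_000)  # just under 10 fps
```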
  • Fig. 2 is a schematic diagram of a light source according to an embodiment of the present application.
  • the light source 111 is composed of a plurality of sub-light sources arranged on a single substrate (or multiple substrates), and the sub-light sources are arranged on the substrate in a certain pattern.
  • the substrate may be a semiconductor substrate, a metal substrate, etc.
  • the sub-light source may be a light emitting diode, an edge-emitting laser emitter, a vertical cavity surface emitting laser (VCSEL), etc.
  • in one embodiment, the light source 111 is a VCSEL array chip composed of a plurality of VCSEL sub-light sources arranged on a semiconductor substrate.
  • the sub-light source is used to emit light beams of any desired wavelength, such as visible light, infrared light, and ultraviolet light.
  • the light source 111 emits light under the modulation drive of the driving circuit (which may be part of the processing circuit 13), such as continuous wave modulation, pulse modulation, and the like.
  • the light source 111 can also emit light in groups or as a whole under the control of the driving circuit.
  • the light source 111 includes a first sub-light source array 201, a second sub-light source array 202, etc., and the first sub-light source array 201 emits light under the control of the first driving circuit.
  • the second sub-light source array 202 emits light under the control of the second driving circuit.
  • the arrangement of the sub-light sources can be a one-dimensional arrangement or a two-dimensional arrangement, and can be a regular arrangement or an irregular arrangement.
  • Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the present application.
  • the pixel unit includes a pixel array 31 and a readout circuit 32.
  • the pixel array 31 is a two-dimensional array composed of a plurality of pixels 310, and the readout circuit 32 is composed of a TDC circuit 321, a histogram circuit 322, and so on. The pixel array is used to collect at least part of the light beam reflected by the object and generate the corresponding photon signals.
  • the readout circuit 32 is used to process the photon signals to draw a histogram reflecting the pulse waveform emitted by the light source in the transmitter; further, the flight time can be calculated from the histogram and the result finally output.
  • the readout circuit 32 may be composed of a single TDC circuit and histogram circuit, or may be an array readout circuit composed of multiple TDC circuit units and histogram circuit units.
  • the imaging lens unit 122 in the collector 12 guides the spot beams to the corresponding pixels.
  • the size of a single spot is set to correspond to multiple pixels (the correspondence here can be understood as imaging, the collector generally including an imaging lens).
  • the pixel area composed of the corresponding multiple pixels is called "combined pixel" in this application.
  • the size of the combined pixel can be set according to actual needs, and contains at least one pixel; for example, it can be 3×3 or 4×4. Generally the light spot is round or oval, and the combined pixel size should be set equal to, or slightly smaller than, the spot size; however, since the magnification varies with the distance of the measured object, the combined pixel size must be set with this taken into account.
  • the pixel unit 31 includes an array composed of 14×18 pixels as an example for description.
  • depending on the arrangement of the transmitter 11 and the collector 12, the measurement system 10 can be divided into coaxial and off-axis configurations. In the coaxial case, the light beam emitted by the transmitter 11 is collected by the collector 12 after being reflected by the measured object, and the position of the combined pixel is not affected by the distance of the measured object.
  • in the off-axis case, due to parallax, the position of the light spot on the pixel unit changes with the distance of the measured object, usually along the baseline (the line between the transmitter 11 and the collector 12; the horizontal direction represents the baseline direction in this application). Therefore, when the distance of the measured object is unknown, the pixel position of the spot is uncertain.
  • to cope with this, this application sets a pixel area (here called a "super pixel") composed of more pixels than the combined pixel to receive the reflected spot beam.
  • the size of a super pixel should exceed that of a combined pixel by at least one pixel.
  • in one embodiment, the super pixel has the same size as the combined pixel in the direction perpendicular to the baseline, and is larger than the combined pixel along the baseline direction.
  • the number of super pixels is generally the same as the number of spot beams collected by the collector 12 in a single measurement, which is 4×3 in FIG. 3.
  • the super pixel is set so that at the lower limit of the measurement range (close range) the spot falls on one side of the super pixel (left or right, depending on the relative positions of the emitter 11 and the collector 12), and at the upper limit of the measurement range (long range) the spot falls on the other side of the super pixel.
  • in FIG. 3, the super pixels are set to a size of 2×6, and the super pixels corresponding to the spots are 361, 371, and 381, respectively.
  • spots 363, 373, and 383 are the spot beams reflected by far, middle, and near objects respectively, and the corresponding combined pixels fall on the left, middle, and right sides of their super pixels respectively.
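How much wider the super pixel must be along the baseline follows from standard triangulation. The focal length, baseline, pixel pitch, and range values below are illustrative assumptions, not figures from the patent:

```python
import math

def spot_shift_px(f_mm, baseline_mm, depth_m, pitch_um):
    """Parallax shift of the spot along the baseline, in pixels:
    d = f * b / (Z * pitch), the standard triangulation relation."""
    return (f_mm * 1e-3) * (baseline_mm * 1e-3) / (depth_m * pitch_um * 1e-6)

def superpixel_width_px(combined_px, d_min_m, d_max_m,
                        f_mm, baseline_mm, pitch_um):
    """Widen the combined pixel along the baseline so the spot stays
    inside the super pixel over the whole measurement range."""
    span = (spot_shift_px(f_mm, baseline_mm, d_min_m, pitch_um)
            - spot_shift_px(f_mm, baseline_mm, d_max_m, pitch_um))
    return combined_px + math.ceil(span)

# e.g. a 2-pixel-wide combined pixel, 0.3-30 m range,
# f = 2 mm, baseline = 10 mm, 10 um pixel pitch
w = superpixel_width_px(2, 0.3, 30.0, 2.0, 10.0, 10.0)
```

The shift is largest at close range and nearly zero at long range, which is why the patent anchors the near-range spot at one edge of the super pixel and the far-range spot at the other.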
  • in one embodiment, the pixels in a combined pixel share one TDC circuit unit, that is, one TDC circuit unit is connected to each pixel in the combined pixel.
  • the TDC circuit unit can thus calculate the flight time corresponding to any photon signal from the combined pixel. This arrangement is better suited to the coaxial case; in the off-axis case the combined pixel position changes with the distance of the measured object.
  • in this case, a TDC circuit array composed of 4×3 TDC circuit units is included.
  • in another embodiment, the pixels in a super pixel share a TDC circuit unit, that is, one TDC circuit unit is connected to each pixel in the super pixel.
  • the TDC circuit unit can thus calculate the flight time corresponding to any photon signal from the super pixel. Since the super pixel accommodates the pixel shift caused by off-axis parallax, super-pixel TDC sharing can be applied to the off-axis case.
  • in this case, a TDC circuit array composed of 4×3 TDC circuit units is included. Sharing the TDC circuit can effectively reduce the number of TDC circuits, thereby reducing the size and power consumption of the readout circuit.
  • since each spot covers multiple pixels, the number of spots that can be collected is much smaller than the number of pixels; in other words, the effective resolution of the collected depth data is much smaller than the pixel resolution.
  • for example, the pixel resolution in FIG. 3 is 14×18 while the spot distribution is 4×3, so the effective depth data resolution of a single frame measurement is 4×3.
  • to improve the resolution, multi-frame measurement can be used.
  • the spots emitted by the transmitter 11 are offset between the frames of the multi-frame measurement, producing a scanning effect.
  • correspondingly, the spots received by the collector 12 also shift across the frames of the measurement.
  • for example, the spots corresponding to two adjacent frames of measurement in FIG. 3 are 343 and 353 respectively; in this way the resolution can be improved.
  • the "offset" of the spots can be achieved by group control of the sub-light sources on the light source 111, that is, in the measurement of two or more adjacent frames, adjacent sub-light sources are turned on in sequence, for example the first group of sub-light sources in the first frame, the second group in the second frame, and so on.
  • the super pixels corresponding to the spots at different positions also need to be offset accordingly when they are set.
  • for example, the super pixel corresponding to spot 343 is 341, and the super pixel corresponding to spot 353 is 351.
  • super pixel 351 is laterally shifted relative to super pixel 341, and there is a partial pixel overlap between super pixel 341 and super pixel 351.
  • thus the super pixels measured in successive frames overlap each other.
  • the pixel area connected to a single TDC circuit unit therefore includes all the super pixels that are offset across the multi-frame measurement, and the pixel areas corresponding to two adjacent TDC circuit units overlap.
  • the pixel area 391 shares a TDC circuit unit, and the pixel area 391 includes 6 superpixels corresponding to 6 frames of measurement when the 6 groups of sub-light sources are turned on in sequence.
  • the adjacent pixel region 392 shares another TDC circuit unit, and the two pixel regions 391 and 392 partially overlap, which results in some pixels being connected to two TDC circuit units.
  • the processing circuit 13 will gate the corresponding pixels so that the acquired photon signals can be measured by a single TDC circuit unit, so as to avoid crosstalk and errors.
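The gating described above, where a pixel wired to two TDC units must be granted to exactly one of them per frame, can be sketched like this (the region layout and the first-come grant policy are illustrative assumptions):

```python
def gate_pixels(tdc_regions, active_superpixel):
    """Enable, on each TDC, only the pixels of the currently active
    super pixel, granting each overlapping pixel to a single TDC so
    every photon signal is counted exactly once (no crosstalk).

    tdc_regions       : {tdc_id: set of pixel coords wired to that TDC}
    active_superpixel : set of pixel coords lit in the current frame
    returns           : {tdc_id: set of pixels enabled on that TDC}
    """
    claimed = set()
    enabled = {}
    for tdc_id in sorted(tdc_regions):
        mine = (tdc_regions[tdc_id] & active_superpixel) - claimed
        enabled[tdc_id] = mine
        claimed |= mine
    return enabled

# two TDC regions overlapping on pixel (0, 2)
regions = {0: {(0, 0), (0, 1), (0, 2)}, 1: {(0, 2), (0, 3), (0, 4)}}
enabled = gate_pixels(regions, {(0, 1), (0, 2), (0, 3)})
```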
  • in one embodiment, the number of TDC circuits is the same as the number of spots collected by the collector 12 during a single frame measurement, which is 4×3 in FIG. 3.
  • each shared TDC circuit is connected to 4×10 pixels, and the pixel regions connected by adjacent TDC circuit units overlap by 4×4 pixels.
  • during measurement, the TDC circuit receives the photon signals from the pixels in the super pixel area connected to it, calculates the time interval between each signal and the starting clock signal (i.e., the flight time), converts the interval into a temperature code or binary code, and saves it in the histogram circuit.
  • after multiple measurements, the histogram circuit can draw a histogram that reflects the pulse waveform; based on the histogram, the flight time of the pulse can be accurately obtained.
  • the larger the measurement range, the wider the time interval the TDC circuit must be able to measure; the higher the accuracy requirement, the finer the time resolution the TDC circuit must provide. Both a wider time interval and a finer time resolution require a larger TDC circuit scale, outputting a binary code with more digits.
  • the increase in the number of binary code digits raises the storage capacity required of the histogram circuit's memory; the larger the memory, the higher the cost and the greater the difficulty of monolithic mass production. For this reason, the present application provides a readout circuit solution with an adjustable histogram circuit.
  • Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of the present application.
  • the readout circuit includes a TDC circuit 41 and a histogram circuit 42.
  • the TDC circuit 41 measures the time interval of each photon signal and converts it into a time code (binary code, temperature code, etc.); based on this time code, the histogram circuit 42 increments the corresponding internal time unit (i.e., the storage unit used to save time information), for example by adding 1. After multiple measurements, the photon counts in all time units can be read out and the time histogram drawn.
  • in the histogram, the abscissa is the time unit ΔT and the ordinate is the photon count value stored in the corresponding storage unit. Based on the histogram, the highest-peak method can be used to determine the position of the pulse waveform and obtain the corresponding flight time t.
  • the histogram circuit 42 includes an address decoder 421, a memory matrix 422, a read/write circuit 424, and a histogram drawing circuit 425.
  • the TDC circuit inputs the acquired time code (binary code, temperature code, etc.) reflecting the time interval to the address decoder 421, and the address decoder 421 converts it into address information used to address the storage matrix 422.
  • the storage matrix 422 includes a plurality of storage units 423, that is, time units. Each storage unit 423 is pre-configured with a certain address (or address interval).
  • when the time code address received by the address decoder 421 matches the address of a certain storage unit, or falls within that unit's address interval, the read/write circuit 424 performs a +1 operation on the corresponding storage unit, completing one photon count. After multiple measurements, the data in each storage unit reflects the number of photons received during its time interval. After a single frame measurement (multiple measurements), the data of all memory cells in the memory matrix 422 are read out to the histogram drawing circuit 425 for histogram drawing.
  • A control signal is applied to the histogram circuit 42 through the processing circuit to dynamically set the address (address interval) of each storage unit 423, thereby realizing dynamic control of the histogram time resolution ΔT and/or the total time interval width T. For example, with the number of storage units 423 unchanged, setting the address interval corresponding to each storage unit 423 to a larger time interval, that is, increasing the time unit width ΔT, enlarges the total time interval that the storage matrix can cover, so the total time interval of the histogram becomes larger.
  • A histogram with a larger time interval is called a coarse histogram. Conversely, the address interval corresponding to each storage unit 423 can be set to a smaller time interval: the total time interval that the storage matrix can cover is reduced, but the time resolution of the histogram increases. Such a histogram is called a fine histogram.
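With a fixed number of storage units N, the covered interval and the resolution trade off as T = N·ΔT, which is the coarse/fine distinction above in one line. A short numeric illustration (the unit count and bin widths are assumptions):

```python
# With N storage units, total covered interval T = N * bin_width:
# widening the time unit gives a coarse histogram over a large range,
# narrowing it gives a fine histogram over a small range.
N = 1024          # storage units in the matrix (assumed)

coarse_dt = 1.0   # ns per time unit -> coarse histogram
fine_dt = 0.05    # ns per time unit -> fine histogram, 20x resolution

print(N * coarse_dt)  # coarse total interval: 1024.0 ns
print(N * fine_dt)    # fine total interval:   51.2 ns
```

The fine pass therefore only works once a coarse pass (or an interpolated estimate) has told the system roughly where inside the large interval to place the 51.2 ns window.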
  • Fig. 6 shows a time-of-flight measurement method based on dynamic histogram drawing according to an embodiment of the present application. It includes the following steps:
  • Step 1: Draw a coarse histogram in coarse-precision time units. The address or address interval corresponding to each time unit in the storage matrix 422 is configured by applying a control signal, that is, T and ΔT are set; in this step, ΔT is configured as a larger time interval ΔT1.
  • The time interval ΔT1 should be set in consideration of the measurement range and the number of histogram storage units, that is, the flight time corresponding to the measurement range is allocated across all the histogram storage units, for example by equal or non-equal distribution, so that the storage units together cover the measurement range.
  • The flight time value obtained from each measurement is matched and the corresponding time unit is incremented by 1; finally, the drawing of the coarse histogram is completed.
  • Step 2: Use the coarse histogram to calculate the coarse flight time value t1. Based on the coarse histogram, the maximum-peak method can be used to find the pulse waveform position, and the corresponding flight time is read as the coarse flight time value t1. The accuracy, or minimum resolution, of this value is the time interval ΔT1 of the time unit.
  • In some embodiments, the measurement range may be divided into several intervals, each corresponding to a respective flight time interval; the time interval ΔT of each time interval T may be the same or different.
  • When drawing a coarse histogram, the time intervals can be drawn one by one. Since the distance of the measured object is unknown, it is also unknown which time interval the corresponding flight time will fall into, so the pulse waveform may not be detected in a given time interval, that is, the coarse flight time value cannot be calculated. When the waveform position cannot be found based on the coarse histogram in Step 2, the next coarse histogram is drawn, until the pulse waveform is found in a coarse histogram.
  • A cycle limit can also be set: for example, when the number of coarse histogram drawings exceeds a certain threshold (such as 3 times), it is considered that the target is not detected this time, or that the target is located at infinity, and the measurement ends.
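The retry logic of Steps 1–2 (redraw the coarse histogram until a pulse is found, give up after a few attempts) can be sketched as follows; the `measure_coarse` callback and the threshold of 3 attempts are assumptions taken from the example in the text:

```python
def find_coarse_tof(measure_coarse, max_attempts=3):
    """Redraw the coarse histogram until a pulse peak is found; after
    max_attempts failures, report the target as not detected (treated
    as located at infinity)."""
    for _ in range(max_attempts):
        tof = measure_coarse()  # returns a coarse TOF, or None if no peak
        if tof is not None:
            return tof
    return None                 # target not detected / at infinity

# Example: the first two coarse histograms contain no pulse, the third does.
results = iter([None, None, 7.5])
print(find_coarse_tof(lambda: next(results)))  # -> 7.5
```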
  • Step 3: According to the obtained coarse flight time value, draw a fine histogram in fine time units. Since the coarse flight time value is already known, one more round of measurement can be performed and the corresponding histogram drawn. The histogram circuit is controlled by the control signal so that the address or address interval corresponding to each time unit in the storage matrix 422 is configured as a smaller time interval ΔT2. Generally, when setting ΔT2, it only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units.
  • The measurement range interval can be set as the coarse flight time value with a certain margin added on both sides; for example, it can be set to [t1-T', t1+T'], where the smaller T' is, the smaller the time interval ΔT2 and the higher the resolution. In some embodiments, the ratio of the margin T' to the time interval of the coarse histogram may be set in the range of 1%-25%. Then a new round of multiple measurements is performed; the flight time value obtained each time is matched and the corresponding time unit is incremented by 1, completing the drawing of the fine histogram.
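The construction of the fine-measurement window can be sketched as follows. The margin ratio, the number of bins, and the reading of "time interval of the coarse histogram" as the coarse bin width ΔT1 are all assumptions for the example:

```python
def fine_window(t1, coarse_dt, margin_ratio=0.25, num_bins=1024):
    """Center a fine histogram window on the coarse TOF t1.
    The margin T' is taken as a fraction of the coarse time unit width."""
    t_margin = margin_ratio * coarse_dt        # T'
    start, end = t1 - t_margin, t1 + t_margin  # window [t1 - T', t1 + T']
    fine_dt = (end - start) / num_bins         # new, smaller bin width dT2
    return start, end, fine_dt

start, end, dt2 = fine_window(t1=12.5, coarse_dt=1.0)
print((start, end))  # (12.25, 12.75)
print(dt2)           # ~0.00049 ns per fine bin
```

The same storage matrix thus redistributes its bins over a window two margins wide, which is where the resolution gain comes from.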
  • Step 4: Use the fine histogram to calculate the fine flight time t2. Based on the fine histogram, the maximum-peak method can be used to find the waveform position, and the corresponding flight time is read as the fine flight time value t2.
  • The above measurement method with dynamic coarse-fine histogram adjustment is essentially a process of first performing coarse positioning over a larger measurement range, and then performing fine measurement based on the positioning result. It is understood that this coarse-fine method can also be extended to three or more measurement steps. For example, in some embodiments, a first measurement at a first time resolution yields a first flight time; based on the first flight time, a measurement at a second time resolution yields a second flight time; and finally, based on the second flight time, a measurement at a third time resolution yields a third flight time. The accuracy increases at each step, finally achieving a higher-precision measurement.
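The multi-step coarse-to-fine refinement (two, three, or more resolutions) can be sketched as a loop. The `measure_at` callback, the specific resolutions, and the window margin of two bins are assumptions for the illustration:

```python
def refine_tof(measure_at, resolutions, initial_window):
    """Repeatedly re-measure with finer time resolution, each round
    centering the next window on the previous flight-time estimate."""
    window = initial_window
    tof = None
    for dt in resolutions:            # e.g. coarse -> medium -> fine
        tof = measure_at(window, dt)  # draw a histogram at resolution dt
        half = 2 * dt                 # assumed margin: a couple of bins
        window = (tof - half, tof + half)
    return tof

# Fake measurement that "snaps" the true TOF to the nearest bin center.
def fake_measure(window, dt):
    true_tof = 12.3456
    lo = window[0]
    bin_idx = int((true_tof - lo) / dt)
    return lo + (bin_idx + 0.5) * dt

tof = refine_tof(fake_measure, [1.0, 0.1, 0.01], (0.0, 100.0))
print(round(tof, 3))  # close to the true 12.3456
```

Each round's error is bounded by half a bin of that round's resolution, so precision increases step by step, as the text describes.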
  • The specified time interval generally includes the time interval T drawn by the histogram. For example, if the time interval of the histogram is [3 ns, 10 ns], the time interval during which the pixel is activated can be set to [2.5 ns, 10.5 ns].
  • Fig. 7 shows a time-of-flight measurement method according to another embodiment of the present application, described below with reference to Fig. 3. The method includes the following steps:
  • Step 1: Receive the superpixel TDC output signal and draw a coarse histogram in coarse-precision time units. Since the distance of the object is unknown before measurement, the position of the spot, and hence the position of the combined pixel, cannot be determined; the combined pixel may fall at different positions within the superpixel depending on the distance of the object. Therefore, in this step, each pixel in the superpixel is first enabled (activated) to receive photons, the photon signal output by the superpixel's shared TDC is received, and the histogram is drawn. The histogram adopts the dynamic adjustment scheme shown in Fig. 6; in this step, the coarse histogram is drawn using coarse-precision time units.
  • Step 2: Use the coarse histogram to calculate the coarse flight time value t1. Based on the coarse histogram, the maximum-peak method is used to find the waveform position, and the corresponding flight time is read as the coarse flight time value t1. The accuracy, or minimum resolution, of this value is the time interval ΔT1 of the time unit.
  • In some embodiments, the measurement range may be divided into several intervals, each corresponding to a respective flight time interval; the time interval ΔT of each time interval T may be the same or different.
  • When drawing a coarse histogram, the time intervals can be drawn one by one. Since the distance of the measured object is unknown, it is also unknown which time interval the corresponding flight time will fall into, so the pulse waveform may not be detected in a given time interval when drawing the coarse histogram. When the waveform position cannot be found based on the coarse histogram in Step 2, the method returns to Step 1 to draw the next coarse histogram, until the pulse waveform is found in a coarse histogram. A cycle limit can also be set: for example, when the number of coarse histogram drawings exceeds a certain threshold (such as 3 times), it is considered that the target is not detected this time, or that the target is located at infinity, and the measurement ends.
  • Step 3: According to the obtained coarse time-of-flight value, locate the combined pixel and draw a fine histogram in fine time units. Since the coarse time-of-flight value is now known, the position of the combined pixel can be located based on this value and the parallax. It is usually necessary to save in the system in advance the relationship between the combined pixel position and the coarse time-of-flight value; after the coarse value is obtained, the combined pixel position can be located directly according to this relationship. Then, based on the combined pixel position, only the combined pixel is activated, and a fine histogram is drawn in fine time units.
  • The histogram circuit is controlled by the control signal so that the address or address interval corresponding to each time unit in the storage matrix 422 is configured as a smaller time interval ΔT2. Generally, when setting ΔT2, it only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units.
  • The measurement range interval can be set as the coarse flight time value with a certain margin added on both sides; for example, it can be set to [t1-T', t1+T'], where the smaller T' is, the smaller the time interval ΔT2 and the higher the resolution. In some embodiments, the ratio of the margin T' to the time interval of the coarse histogram may be set in the range of 1% to 25%. Then a new round of multiple measurements is performed; the flight time value obtained each time is matched and the corresponding time unit is incremented by 1, completing the drawing of the fine histogram.
  • Step 4: Use the fine histogram to calculate the fine flight time t2. Based on the fine histogram, the maximum-peak method can be used to find the waveform position, and the corresponding flight time is read as the fine flight time value t2.
  • The above measurement method with dynamic coarse-fine histogram adjustment is essentially a process of first performing coarse positioning over a larger measurement range, and then performing fine measurement based on the positioning result. It is understood that this coarse-fine method can also be extended to three or more measurement steps. For example, in some embodiments, a first measurement at a first time resolution yields a first flight time; based on the first flight time, a measurement at a second time resolution yields a second flight time; and finally, based on the second flight time, a measurement at a third time resolution yields a third flight time. The accuracy increases at each step, finally achieving a higher-precision measurement.
  • The specified time interval generally includes the time interval T drawn by the histogram. For example, if the time interval of the histogram is [3 ns, 10 ns], the time interval during which the pixel is activated can be set to [2.5 ns, 10.5 ns].
  • The embodiments in Figures 2 and 3 introduce examples of improving resolution through multi-frame measurement. It is understood that when multi-frame measurement is performed, the depth data of each frame can be measured using the dynamic histogram adjustment scheme shown in Fig. 6 or Fig. 7. For example, when the first sub-light-source array 201 is turned on, the dynamic coarse-fine histogram is drawn to obtain the first frame of the depth image; when the second sub-light-source array 202 is turned on, the dynamic coarse-fine histogram is drawn to obtain the second frame; the first and second frames are then combined into a higher-resolution depth image. In some embodiments, three or more frames of depth images can also be collected and merged into a higher-resolution depth image.
  • This application provides an interpolation-based time-of-flight measurement method according to an embodiment of the application, as shown in Fig. 8. The method includes the following steps:
  • Step 1: Obtain the first flight time of the first combined pixel corresponding to the first light source.
  • The first light source in the emitter 11 is turned on to emit the spot beam corresponding to the first light source; the spot beam falls on the combined pixel of the pixel unit 31 in the collector 12, such as the 4×3 pixel region shown in Fig. 3, and the processing circuit can then obtain the first flight time of the combined pixel.
  • For this calculation, the coarse-fine dynamic adjustment scheme of the embodiment shown in Fig. 6 or Fig. 7, or any other scheme, can be used.
  • Step 2: Obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation.
  • When the second light source is turned on, a spot beam adjacent to that of the first light source is emitted, and this spot beam also falls on a combined pixel of the collector 12; in Fig. 3, the dotted circle marks such a spot 353.
  • The spot 353 and the spot 343 are spatially staggered because the positions of the first and second light sources are staggered, and therefore their corresponding pixels are also staggered. Generally, when two points in space are relatively close, their distances do not differ greatly.
  • Therefore, the flight time value of the combined pixel corresponding to the spot 343, obtained in Step 1, can be used as the second flight time value (coarse flight time) of the superpixel 351 corresponding to the spot 353, after which the fine flight time calculation is performed.
  • Alternatively, the second time-of-flight value of the superpixel of the spot 353 can be estimated using the combined pixels corresponding to multiple first light sources around the spot 353, for example by interpolating the time-of-flight values of the combined pixels on its left and right.
  • The interpolation may be one-dimensional or two-dimensional, and the interpolation method may be at least one of linear interpolation, spline interpolation, polynomial interpolation, and the like.
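Estimating a second spot's coarse TOF from neighboring first-light-source spots can be sketched as one-dimensional linear interpolation; the positions and TOF values below are assumptions for the example:

```python
def interpolate_tof(left, right, x):
    """1-D linear interpolation between two known-spot TOF values.
    left, right: (position, tof) of the neighboring combined pixels;
    x: position of the spot whose coarse TOF is being estimated."""
    (x0, t0), (x1, t1) = left, right
    w = (x - x0) / (x1 - x0)
    return (1 - w) * t0 + w * t1

# Spot 353 lies midway between two first-light-source spots.
coarse = interpolate_tof((10.0, 8.0), (14.0, 9.0), x=12.0)
print(coarse)  # 8.5 ns: used directly as the coarse TOF for the fine pass
```

Spline or polynomial interpolation, as the text allows, would simply replace the weighting rule with a higher-order fit over more neighbors.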
  • Step 3: According to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram. After the second flight time is obtained by interpolation, the position of the spot within the superpixel, that is, the position of the combined pixel, can be located based on this flight time and the parallax; then, based on the combined pixel position, only the combined pixel is activated, and a histogram is drawn in fine time units.
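The TOF-to-pixel-position localization mentioned in this step can be sketched as a table lookup, consistent with the earlier remark that the relationship is saved in the system in advance. The table contents and the column-index representation are assumptions for the illustration:

```python
# Assumed precomputed relation between coarse TOF ranges (ns) and the
# start column of the combined pixel within the superpixel: near objects
# are shifted most by parallax, far objects sit near the nominal position.
TOF_TO_COLUMN = [
    (0.0, 5.0, 0),
    (5.0, 20.0, 1),
    (20.0, 80.0, 2),
]

def locate_combined_pixel(coarse_tof):
    """Map a coarse TOF to the column of the combined pixel to activate."""
    for lo, hi, col in TOF_TO_COLUMN:
        if lo <= coarse_tof < hi:
            return col
    return TOF_TO_COLUMN[-1][2]  # beyond the table: farthest position

print(locate_combined_pixel(12.0))  # -> column 1
```

Only the pixels in the returned position are then activated for the fine histogram, which is what suppresses ambient-light counts from the rest of the superpixel.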
  • Step 4: Use the histogram to calculate the third flight time. Based on the histogram, the maximum-peak method can be used to find the waveform position, and the corresponding flight time is read as the third (fine) flight time value t2. The accuracy, or minimum resolution, of this value is the time interval ΔT2 of the time unit.
  • In the above measurement method, only a few spots need the coarse-fine histogram drawing method (which requires at least 2 frames of flight time measurement to obtain high accuracy) for their flight time calculation. For most spots, the flight time value of known spots can be used as the coarse flight time value of the coarse histogram, and based on this coarse value only a single fine histogram drawing is required, which can greatly improve efficiency. For example, if the light sources are divided into 6 groups, only the first group needs to perform the coarse-fine measurement when turned on; each of the subsequent 5 groups only needs to perform a single fine measurement for its flight time when turned on.
  • When the difference between the flight time values of the combined pixels corresponding to the multiple spots to be interpolated is greater than a certain threshold, there is a jump in the surface depth of the object between the two spots; in that case, the spots between them still use the coarse-fine histogram measurement scheme, and the interpolation calculation is performed only when the difference is less than the threshold.
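The depth-jump guard described above, interpolate only when the neighboring TOF values agree within a threshold, can be sketched as follows; the threshold value and the midpoint interpolation are assumptions:

```python
def coarse_tof_or_none(t_left, t_right, threshold=0.5):
    """Interpolate only across smooth surfaces: if the neighboring
    spots' TOF values differ by more than the threshold, there is a
    depth jump, so fall back to the full coarse-fine measurement."""
    if abs(t_left - t_right) > threshold:
        return None                   # caller runs coarse + fine instead
    return (t_left + t_right) / 2.0   # midpoint interpolation

print(coarse_tof_or_none(8.0, 8.2))   # 8.1: smooth surface, interpolate
print(coarse_tof_or_none(8.0, 15.0))  # None: depth jump between spots
```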
  • In some embodiments, the first time of flight of the first combined pixel may itself be a coarse time of flight; that is, when calculating the first time of flight of the first combined pixel, only a single coarse histogram drawing is required, and interpolation is then performed based on the coarse flight time obtained from that coarse histogram.

Abstract

An interpolation-based time of flight measurement method. The method comprises the following steps: S1, acquiring a first time of flight of a first combined pixel corresponding to a first light source (801); S2, obtaining, through interpolation calculation, a second time of flight of a second superpixel corresponding to a second light source (802); S3, locating, according to the second time of flight, a second combined pixel corresponding to the second light source, and drawing a histogram (803); and S4, calculating a third time of flight by means of the histogram (804). Coarse time-of-flight values are directly supplied to most pixels by means of interpolation, so that these pixels can directly perform fine histogram drawing on the basis of the coarse values to calculate high-precision fine time-of-flight values; since the coarse histogram drawing step is omitted, the calculation time can be greatly reduced, thereby increasing the frame rate.

Description

Interpolation-based time-of-flight measurement method and measurement system
This application claims priority to a Chinese patent application filed with the Chinese Patent Office on September 19, 2019, with application number 201910889455.6 and invention title "Interpolation-based time-of-flight measurement method and measurement system", the entire content of which is incorporated by reference in this application.
Technical field
This application relates to the field of computer technology, and in particular to an interpolation-based time-of-flight measurement method and measurement system.
Background
The time-of-flight (TOF) method calculates the distance of an object by measuring the flight time of a light beam in space. Due to its high accuracy and large measurement range, it is widely used in consumer electronics, autonomous driving, AR/VR, and other fields.
Distance measurement systems based on the time-of-flight principle, such as time-of-flight depth cameras and lidar, often include a light source emitting end and a receiving end. The light source emits a beam into the target space to provide illumination, the receiving end receives the beam reflected back by the target, and the system calculates the distance of the object from the time required for the beam to travel from emission to reception.
At present, lidar based on the time-of-flight method is mainly of two types, mechanical and non-mechanical. The mechanical type uses a rotating base to achieve 360-degree, large-field-of-view distance measurement; its advantage is a large measurement range, but it suffers from high power consumption and low resolution and frame rate. Among non-mechanical types, area-array lidar can solve the problems of mechanical lidar to a certain extent: it emits a surface beam covering a certain field of view into space at once and receives it with an area-array receiver, so its resolution and frame rate are both improved; in addition, since no rotating parts are needed, it is easier to install. Nevertheless, area-array lidar still faces some challenges.
The higher the resolution of area-array lidar, the more comprehensive the effective information; in addition, dynamic measurement places high requirements on frame rate and measurement accuracy. However, improvements in resolution, frame rate, and accuracy often depend on the circuit scale of the receiving end and on improvements in the modulation/demodulation method. Increasing the circuit scale increases power consumption and cost and degrades the signal-to-noise ratio; it also increases the amount of on-chip storage, which poses serious challenges for mass production. Current modulation/demodulation methods also have difficulty meeting requirements such as high precision and low power consumption.
Summary of the invention
The purpose of this application is to provide an interpolation-based time-of-flight measurement method and measurement system, so as to solve at least one of the problems described in the background above.
To achieve the above objective, an embodiment of this application provides an interpolation-based time-of-flight measurement method, including the following steps:
S1. Acquire the first flight time of the first combined pixel corresponding to the first light source;
S2. Obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation;
S3. According to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram;
S4. Calculate the third flight time using the histogram.
In some embodiments, the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
In some embodiments, the interpolation includes one-dimensional or two-dimensional interpolation, and the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
In some embodiments, when the histogram is drawn, only the pixels within the second combined pixel are activated.
In some embodiments, step S2 further includes calculating the difference between the flight time values of the multiple spot combined pixels to be interpolated, and the interpolation calculation is performed only when the difference is less than a certain threshold.
An embodiment of this application also provides an interpolation-based time-of-flight measurement system, including:
an emitter configured to emit a pulsed light beam, including a first light source and a second light source;
a collector, including a plurality of pixels, configured to collect photons in the pulsed beam reflected back by an object and form a photon signal;
a processing circuit, connected to the emitter and the collector, configured to perform the following steps to calculate the flight time: S1, acquire the first flight time of the first combined pixel corresponding to the first light source; S2, obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation; S3, according to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram; S4, calculate the third flight time using the histogram.
In some embodiments, the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
In some embodiments, the interpolation includes one-dimensional or two-dimensional interpolation, and the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
In some embodiments, when the histogram is drawn, only the pixels within the second combined pixel are activated.
In some embodiments, step S2 further includes calculating the difference between the flight time values of the multiple spot combined pixels to be interpolated, and the interpolation calculation is performed only when the difference is less than a certain threshold.
An embodiment of this application provides an interpolation-based time-of-flight measurement method, including the following steps: S1, acquire the first flight time of the first combined pixel corresponding to the first light source; S2, obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation; S3, according to the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram; S4, calculate the third flight time using the histogram. By means of interpolation, coarse flight time values are directly provided for the vast majority of pixels, so that these pixels can directly perform fine histogram drawing based on the coarse values to calculate high-precision fine flight time values; since the coarse histogram drawing step is omitted, the calculation time can be greatly reduced, thereby increasing the frame rate.
Description of the drawings
In order to explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of this application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of this application.
Fig. 2 is a schematic diagram of a light source according to an embodiment of this application.
Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of this application.
Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of this application.
Fig. 5 is a schematic diagram of a histogram according to an embodiment of this application.
Fig. 6 shows a time-of-flight measurement method based on dynamic histogram drawing according to an embodiment of this application.
Fig. 7 shows a time-of-flight measurement method according to an embodiment of this application.
Fig. 8 shows an interpolation-based time-of-flight measurement method according to an embodiment of this application.
Detailed description
In order to make the technical problems, technical solutions, and beneficial effects to be solved by the embodiments of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it.
It should be noted that when an element is described as being "fixed to" or "disposed on" another element, it may be directly on the other element or indirectly on it. When an element is described as being "connected to" another element, it may be directly or indirectly connected to the other element. In addition, the connection may serve for fixing or for electrical-circuit connection.
It should be understood that orientation or position terms such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" are based on the orientation or positional relationship shown in the drawings, are only for the convenience of describing the embodiments of this application and simplifying the description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting this application.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of this application, "multiple" means two or more, unless specifically defined otherwise.
本申请提供一种基于插值的飞行时间测量方法及测量系统,为方便理解,以下先对距离测量系统实施例进行描述。The present application provides an interpolation-based time-of-flight measurement method and measurement system. For ease of understanding, the following describes an embodiment of the distance measurement system first.
As an embodiment of the present application, a distance measurement system is provided that offers stronger resistance to ambient light and higher resolution.
Fig. 1 is a schematic diagram of a time-of-flight distance measurement system according to an embodiment of the present application. The distance measurement system 10 includes a transmitter 11, a collector 12, and a processing circuit 13. The transmitter 11 projects an emitted beam 30 into the target space to illuminate an object 20 in that space; at least part of the emitted beam 30 is reflected by the object 20 to form a reflected beam 40, and at least part of the optical signal (photons) of the reflected beam 40 is collected by the collector 12. The processing circuit 13 is connected to both the transmitter 11 and the collector 12 and synchronizes their trigger signals to calculate the time required for the beam to travel from the transmitter 11 to the collector 12, i.e., the flight time t between the emitted beam 30 and the reflected beam 40. The distance D of the corresponding point on the object can then be calculated by the following formula:
D = c·t/2     (1)
where c is the speed of light.
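Equation (1) can be sketched as a one-line helper (a minimal illustration; the function name and the use of SI units are our own choices, not from the application):

```python
# Speed of light in vacuum, m/s
C = 299_792_458.0

def distance_from_tof(t_seconds: float) -> float:
    """Equation (1): D = c * t / 2 -- halved because t covers the round trip."""
    return C * t_seconds / 2.0

# A 1 microsecond round-trip flight time corresponds to roughly 150 m
d = distance_from_tof(1e-6)
```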
The transmitter 11 includes a light source 111 and an optical element 112. The light source 111 may be a light-emitting diode (LED), an edge-emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL), or the like, or an array light source composed of multiple light sources; optionally, the array light source 111 is a VCSEL array chip formed by generating multiple VCSEL light sources on a single semiconductor substrate. The beam emitted by the light source 111 may be visible, infrared, or ultraviolet light. The light source 111 emits beams under the control of the processing circuit 13; for example, in some embodiments the light source 111 emits pulsed beams at a certain frequency (pulse period) under this control, which can be used in direct time-of-flight (direct TOF) measurement. The frequency is set according to the measurement distance; for example, it can be set to 1 MHz-100 MHz for measurement distances from several meters to several hundred meters. It is understood that the light source 111 may be controlled to emit the relevant beams by a part of the processing circuit 13 or by a sub-circuit existing independently of the processing circuit 13, such as a pulse signal generator.
The optical element 112 receives the pulsed beam from the light source 111, optically modulates it (e.g., by diffraction, refraction, or reflection), and then emits the modulated beam, such as a focused beam, a flood beam, or a structured-light beam, into space. The optical element 112 may be one of, or a combination of, lenses, diffractive optical elements, masks, mirrors, MEMS galvanometers, and the like.
The processing circuit 13 may be an independent dedicated circuit, such as a dedicated SoC chip, FPGA chip, or ASIC chip, or may include a general-purpose processor; for example, when the depth camera is integrated into a smart terminal such as a mobile phone, television, or computer, the processor in the terminal can serve as at least part of the processing circuit 13.
The collector 12 includes a pixel unit 121 and an imaging lens unit 122; the imaging lens unit 122 receives at least part of the modulated beam reflected by the object and guides it onto the pixel unit 121. In some embodiments, the pixel unit 121 is composed of a single-photon avalanche diode (SPAD), or is an array pixel unit composed of multiple SPAD pixels; the array size represents the resolution of the depth camera, e.g., 320×240. A SPAD can respond to a single incident photon and thus detect single photons; owing to its high sensitivity and fast response, it enables long-range, high-precision measurement. Compared with image sensors such as CCD/CMOS that operate on the principle of light integration, a SPAD can count single photons, for example using time-correlated single-photon counting (TCSPC), to collect weak light signals and calculate the flight time. Typically, a readout circuit (not shown) composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and similar devices is also connected to the pixel unit 121. These circuits may be integrated with the pixels or may be part of the processing circuit 13; for ease of description they are collectively regarded as the processing circuit 13.
In some embodiments, the distance measurement system 10 may further include a color camera, an infrared camera, an IMU, and other devices; combining these devices enables richer functions, such as 3D texture modeling, infrared face recognition, and SLAM.
In some embodiments, the transmitter 11 and the collector 12 may also be arranged coaxially, i.e., implemented via an optical device with both reflection and transmission functions, such as a half mirror.
In a direct time-of-flight distance measurement system using SPADs, a single photon incident on a SPAD pixel triggers an avalanche; the SPAD outputs an avalanche signal to the TDC circuit, which measures the time interval from the emission of the photon by the transmitter 11 to the avalanche. After multiple measurements, the time intervals are accumulated into a histogram by a time-correlated single-photon counting (TCSPC) circuit to recover the waveform of the entire pulse signal; the time corresponding to this waveform can then be determined, and from it the flight time, achieving accurate time-of-flight detection, after which the distance information of the object is calculated from the flight time. Assume the pulse period of the emitted pulsed beam is Δt and the maximum measurement range of the distance measurement system is Dmax; the corresponding maximum flight time is

t1 = 2·Dmax/c

Generally, Δt ≥ t1 is required to avoid signal aliasing, where c is the speed of light. If the number of repeated measurements required by TCSPC is n, the time for a single-frame measurement (frame period) will be no less than n·t1, i.e., each frame period contains n photon-counting measurements. For example, if the maximum measurement range is 150 m, the corresponding pulse period is Δt = 1 μs; with n = 100000, the frame period will be no less than 100 ms and the frame rate below 10 fps. It can be seen that in the TCSPC method the maximum measurement range constrains the pulse period, which in turn limits the frame rate of distance measurement.
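The frame-period arithmetic in the paragraph above can be checked with a short sketch (the function names are illustrative; c is approximated as 3×10⁸ m/s, which is what makes the 150 m example come out to exactly 1 μs):

```python
C = 3.0e8  # speed of light, m/s (round figure used in the 150 m example)

def max_flight_time(d_max_m: float) -> float:
    """Maximum round-trip flight time t1 = 2 * Dmax / c."""
    return 2.0 * d_max_m / C

def min_frame_period(d_max_m: float, n: int) -> float:
    """A frame of n photon-counting measurements takes at least n * t1."""
    return n * max_flight_time(d_max_m)

t1 = max_flight_time(150.0)               # 1e-6 s, so pulse period Δt >= 1 us
frame = min_frame_period(150.0, 100_000)  # 0.1 s, so frame rate <= 10 fps
```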
Fig. 2 is a schematic diagram of a light source according to an embodiment of the present application. The light source 111 is composed of multiple sub-light sources arranged in a certain pattern on a single substrate (or on multiple substrates). The substrate may be a semiconductor substrate, a metal substrate, or the like; the sub-light sources may be light-emitting diodes, edge-emitting lasers, vertical-cavity surface-emitting lasers (VCSELs), etc. Optionally, the light source 111 is a VCSEL array chip composed of multiple VCSEL sub-light sources arranged on a semiconductor substrate. The sub-light sources emit beams of any required wavelength, such as visible, infrared, or ultraviolet light. The light source 111 emits light under the modulation of a driving circuit (which may be part of the processing circuit 13), e.g., continuous-wave modulation or pulse modulation. The light source 111 may also emit light in groups or as a whole under the control of the driving circuit; for example, the light source 111 includes a first sub-light-source array 201, a second sub-light-source array 202, and so on, where the first sub-light-source array 201 emits under the control of a first driving circuit and the second sub-light-source array 202 under the control of a second driving circuit. The sub-light sources may be arranged one-dimensionally or two-dimensionally, regularly or irregularly. For ease of analysis, only one example is shown schematically in Fig. 2: the light source 111 is a regular 8×9 array of sub-light sources divided into 4×3 = 12 groups, each group distinguished in the figure by a different symbol; that is, the light source 111 is composed of 12 regularly arranged 3×2 sub-light-source arrays.
Fig. 3 is a schematic diagram of the pixel unit in the collector according to an embodiment of the present application. The pixel unit includes a pixel array 31 and a readout circuit 32, where the pixel array 31 is a two-dimensional array composed of multiple pixels 310 and the readout circuit 32 is composed of a TDC circuit 321, a histogram circuit 322, and the like. The pixel array collects at least part of the beam reflected by the object and generates corresponding photon signals; the readout circuit 32 processes the photon signals to draw a histogram reflecting the pulse waveform emitted by the light source in the transmitter, may further calculate the flight time from the histogram, and finally outputs the result. The readout circuit 32 may consist of a single TDC circuit and histogram circuit, or may be an array readout circuit composed of multiple TDC circuit units and histogram circuit units.
In some embodiments, when the transmitter 11 emits a spot beam toward the measured object, the imaging lens unit 122 in the collector 12 guides the reflected spot beam onto the corresponding pixels. Typically, to receive as much of the reflected beam's optical signal as possible, a single spot is sized to correspond to multiple pixels (correspondence here can be understood as imaging; the collector generally includes an imaging lens). For example, in Fig. 3 a single spot corresponds to 2×2 = 4 pixels, i.e., photons reflected from the spot beam are received with a certain probability by the corresponding 4 pixels. For ease of description, the pixel region composed of these corresponding pixels is called a "combined pixel" in this application; the size of the combined pixel can be set as actually needed and contains at least one pixel, e.g., 3×3 or 4×4. Typically, the spot is circular, elliptical, or similar in shape, and the combined pixel should be set comparable to, or slightly smaller than, the spot size; however, because the magnification varies with the distance of the measured object, the size of the combined pixel requires comprehensive consideration when it is set.
In the embodiment shown in Fig. 3, the pixel unit 31 is described using an array of 14×18 pixels as an example. Generally, depending on how the transmitter 11 and the collector 12 are arranged, the measurement system 10 can be classified as coaxial or off-axis. In the coaxial case, the beam emitted by the transmitter 11 and reflected by the measured object is collected by the corresponding combined pixel in the collector 12, and the position of the combined pixel is unaffected by the object's distance. In the off-axis case, however, parallax causes the position of the spot on the pixel unit to change with the object's distance, typically shifting along the baseline (the line between the transmitter 11 and the collector 12; in this application the baseline direction is uniformly represented as horizontal). Therefore, when the distance of the measured object is unknown, the position of the combined pixel is uncertain. To solve this problem, the present application configures a pixel region composed of more pixels than the combined pixel contains (referred to here as a "super pixel") to receive the reflected spot beam. When setting the size of the super pixel, both the measurement range of the system 10 and the length of the baseline must be considered, so that the combined pixels corresponding to spots reflected by objects at all distances within the measurement range fall inside the super-pixel region; that is, the super pixel should be at least as large as one combined pixel. Typically, the super pixel has the same size as the combined pixel in the direction perpendicular to the baseline and is larger along the baseline direction. The number of super pixels is generally equal to the number of spot beams collected by the collector 12 in a single measurement, which is 4×3 in Fig. 3.
In some embodiments, the super pixel is configured such that at the lower limit of the measurement range (close range) the spot falls on one side of the super pixel (left or right, depending on the relative positions of the transmitter 11 and the collector 12), and at the upper limit of the measurement range (far range) the spot falls on the other side. In the embodiment shown in Fig. 3, the super pixels are set to a size of 2×6; for example, spots 363, 373, and 383 correspond to super pixels 361, 371, and 381, respectively, where spots 363, 373, and 383 are spot beams reflected by far, middle, and near objects, and the corresponding combined pixels fall on the left, middle, and right sides of their super pixels, respectively.
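The baseline-direction travel that the super pixel must absorb can be estimated with a simple triangulation sketch. Note that the pinhole disparity formula shift = f·b/(p·D) and every parameter value below are illustrative assumptions of ours, not figures from the application:

```python
def disparity_px(distance_m: float, focal_m: float, baseline_m: float,
                 pitch_m: float) -> float:
    """Spot shift on the sensor, in pixels, for an object at distance_m
    (pinhole model: shift = focal * baseline / (pixel_pitch * distance))."""
    return focal_m * baseline_m / (pitch_m * distance_m)

def superpixel_width_px(spot_px: float, d_min: float, d_max: float,
                        focal_m: float, baseline_m: float,
                        pitch_m: float) -> float:
    """A super pixel must span the spot itself plus the spot's travel
    between its near-range and far-range positions along the baseline."""
    travel = (disparity_px(d_min, focal_m, baseline_m, pitch_m)
              - disparity_px(d_max, focal_m, baseline_m, pitch_m))
    return spot_px + travel

# Hypothetical optics: f = 2 mm, pitch = 10 um, baseline = 5 cm, range 0.5-5 m
w = superpixel_width_px(2.0, 0.5, 5.0, 2e-3, 0.05, 10e-6)  # baseline width, px
```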
In some embodiments, a combined pixel shares one TDC circuit unit, i.e., one TDC circuit unit is connected to every pixel in the combined pixel; when any pixel in the combined pixel receives a photon and generates a photon signal, the TDC circuit unit can calculate the flight time corresponding to that photon signal. This arrangement is more suitable for the coaxial case, because in the off-axis case the combined-pixel position changes with the distance of the measured object. The embodiment shown in Fig. 3 would include a TDC circuit array composed of 4×3 TDC circuit units.
In some embodiments, a super pixel shares one TDC circuit unit, i.e., one TDC circuit unit is connected to every pixel in the super pixel; when any pixel in the super pixel receives a photon and generates a photon signal, the TDC circuit unit can calculate the flight time corresponding to that photon signal. Since a super pixel can accommodate the combined-pixel shift caused by off-axis parallax, super-pixel TDC sharing is applicable to the off-axis case. The embodiment shown in Fig. 3 would include a TDC circuit array composed of 4×3 TDC circuit units. Sharing TDC circuits effectively reduces their number, thereby reducing the size and power consumption of the readout circuit.
For the off-axis case, more pixels must be provided to form a super pixel, so within a single measurement (or single exposure) the number of spots that can be collected is far smaller than the number of pixels; in other words, the resolution of the collected effective depth data (flight-time values) is far lower than the pixel resolution. For example, in Fig. 3 the pixel resolution is 14×18 while the spot distribution is 4×3, i.e., the effective depth-data resolution of a single-frame measurement is 4×3.
To improve the resolution of the measured depth data, multi-frame measurement can be used: the spots emitted by the transmitter 11 are "shifted" between frames, producing a scanning effect, and the spots received by the collector 12 shift accordingly across frames; for example, in Fig. 3 the spots corresponding to two adjacent measurement frames are 343 and 353, which improves the resolution. In some embodiments, the spot "shift" can be achieved by group control of the sub-light sources of the light source 111, i.e., adjacent sub-light sources are turned on in sequence over two or more adjacent measurement frames: the first sub-light-source array 201 is turned on for the first frame, the second sub-light-source array 202 for the second frame, and so on. Both horizontal and vertical group control can be used, thereby improving the effective depth-data resolution in both dimensions.
For the spot "shift" of multi-frame measurement, the super pixels corresponding to spots at different positions must likewise be shifted when configured. As shown in Fig. 3, the super pixel corresponding to spot 343 is 341 and the super pixel corresponding to spot 353 is 351; super pixel 351 is laterally shifted relative to super pixel 341, and the two partially overlap. Because super pixels of different measurement frames may overlap in this way, to ensure that the TDC circuit can accurately perform photon-counting flight-time measurement on the corresponding super pixel in every frame, this application provides a dual-sharing TDC circuit scheme.
In some embodiments, the pixel region connected to a single TDC circuit unit comprises all the super pixels shifted across the multi-frame measurement, and the pixel regions corresponding to two adjacent TDC circuit units overlap. Specifically, in the embodiment shown in Fig. 3, pixel region 391 shares one TDC circuit unit and contains the 6 super pixels corresponding to the 6 measurement frames in which the 6 groups of sub-light sources are turned on in sequence. Similarly, the adjacent pixel region 392 shares one TDC circuit unit, and the two pixel regions 391 and 392 partially overlap, which means some pixels are connected to two TDC circuit units. During a single-frame measurement, the processing circuit 13 gates the pixels corresponding to the projected spots so that the photon signals they acquire are measured by only a single TDC circuit unit, avoiding crosstalk and errors. In some embodiments, the number of TDC circuits equals the number of spots collected by the collector 12 in a single-frame measurement, 4×3 in Fig. 3; each shared TDC circuit is connected to 4×10 pixels, and the pixel regions connected to adjacent TDC circuit units overlap by 4×4 pixels.
The adjustable histogram circuit scheme is described below. Within a single-frame measurement period, the TDC circuit receives photon signals from the pixels in the super-pixel region connected to it, calculates the time interval between each signal and the start clock signal (i.e., the flight time), converts the interval into a temperature code or binary code, and stores it in the histogram circuit. After multiple measurements, the histogram circuit can draw a histogram reflecting the pulse waveform, from which the flight time of the pulse can be accurately obtained. Generally, the larger the measurement range, the wider the time interval the TDC circuit must be able to measure; and the higher the required precision, the higher the required TDC time resolution. Either a wider time interval or a higher time resolution requires a larger-scale TDC circuit outputting a binary code with more bits, and as the number of binary-code bits grows, so does the storage requirement on the histogram circuit's memory. The larger the memory capacity, the higher the cost and the harder the mass production of a monolithic integration. For this reason, the present application provides a readout circuit scheme with an adjustable histogram circuit.
Fig. 4 is a schematic diagram of a readout circuit according to an embodiment of the present application. The readout circuit includes a TDC circuit 41 and a histogram circuit 42. The TDC circuit 41 acquires the time interval of the photon signal and converts it into a time code (binary code, temperature code, etc.); the histogram circuit 42 then counts, e.g., by adding 1, in the corresponding internal time unit (i.e., the storage unit used to save time information) on the basis of that time code. After multiple measurements, the photon counts in all time units can be tallied and a time histogram drawn. The drawn histogram is shown in Fig. 5, where ΔT is the width of a time unit, T1 and T2 are the start and end moments of histogram drawing, [T1, T2] is the histogram's time interval, and T = T2 - T1 is the total time width. The ordinate of each time unit ΔT is the photon count value stored in the corresponding storage unit; based on this histogram, methods such as the highest-peak method can determine the position of the pulse waveform and yield the corresponding flight time t.
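The accumulation and highest-peak readout just described can be sketched as follows (a toy software model with illustrative names; in the system described above this work is done by the TDC and histogram hardware, not by Python):

```python
def build_histogram(arrival_times, t_start, t_end, n_bins):
    """Accumulate photon time stamps into n_bins time units of width dT
    over [t_start, t_end); each hit adds 1 to the matching storage unit."""
    dt = (t_end - t_start) / n_bins
    counts = [0] * n_bins
    for t in arrival_times:
        if t_start <= t < t_end:
            counts[int((t - t_start) / dt)] += 1
    return counts, dt

def peak_flight_time(counts, t_start, dt):
    """Highest-peak method: the time unit with the most counts locates
    the pulse waveform; return the centre of that unit."""
    k = max(range(len(counts)), key=counts.__getitem__)
    return t_start + (k + 0.5) * dt

# Synthetic example: four signal photons near t = 5.2 plus two background hits
counts, dt = build_histogram([5.21, 5.22, 5.23, 5.24, 1.0, 7.0], 0.0, 10.0, 100)
tof = peak_flight_time(counts, 0.0, dt)  # centre of the fullest bin
```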
In some embodiments, the histogram circuit 42 includes an address decoder 421, a storage matrix 422, a read/write circuit 424, and a histogram drawing circuit 425. The TDC circuit inputs the acquired time code reflecting the time interval (binary code, temperature code, etc.) to the address decoder 421, which converts it into address information used to address the storage matrix 422. Specifically, the storage matrix 422 includes multiple storage units 423, i.e., time units, each pre-configured with a certain address (or address interval); when the time-code address received by the address decoder 421 matches a storage unit's address or falls within its address interval, the read/write circuit 424 performs a +1 operation on that storage unit, completing one photon count. After multiple measurements, the data in each storage unit reflects the number of photons received in that time interval. After a single-frame measurement (multiple measurements), the data of all storage units in the storage matrix 422 are read out to the histogram drawing circuit 425 for histogram drawing.
To minimize the storage capacity of the storage matrix, the number of storage units 423 must in practice be reduced. To this end, in this application the processing circuit applies a control signal to the histogram circuit 42 to dynamically set the address (address interval) of each storage unit 423, thereby achieving dynamic control of the histogram time resolution ΔT and/or the time-interval width T. For example, with the number of storage units 423 unchanged, setting the address interval of each storage unit 423 to a larger time interval, i.e., increasing the time-unit width ΔT, enlarges the total time interval the storage matrix can cover, and thus the histogram's total time interval; for ease of description, a histogram with a larger time interval is called a coarse histogram. Conversely, setting the address interval of each storage unit 423 to a smaller time interval reduces the total time interval the storage matrix can cover but improves the stored time resolution, and thus the histogram's time resolution; relative to the coarse histogram, a histogram with a smaller time interval is called a fine histogram.
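With a fixed number of storage units, the coarse/fine trade-off reduces to choosing the span that the units' address intervals jointly cover, as this schematic sketch shows (N = 1000 and the nanosecond time spans are arbitrary examples of ours, not values from the application):

```python
def configure_bins(t_start, t_span, n_cells):
    """Map n_cells storage units onto [t_start, t_start + t_span]:
    the unit width dT = t_span / n_cells sets the time resolution."""
    dt = t_span / n_cells
    edges = [t_start + i * dt for i in range(n_cells + 1)]
    return dt, edges

N = 1000  # fixed memory budget: number of storage units

# Coarse: the N cells cover the whole 1 us range, giving 1 ns resolution...
dt_coarse, _ = configure_bins(0.0, 1e-6, N)
# ...fine: the same N cells re-aimed at a 10 ns window around a coarse
# estimate, giving 10 ps resolution over a much narrower interval.
dt_fine, _ = configure_bins(500e-9, 10e-9, N)
```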
In this application, large-range, high-precision time-of-flight measurement is achieved by dynamically performing coarse-to-fine adjustment of the histogram during the time-of-flight measurement process.
Fig. 6 shows a time-of-flight measurement method with dynamic histogram drawing according to an embodiment of the present application, which includes the following steps:
Step 1: draw a coarse histogram with coarse-precision time units. A control signal is applied to configure the address or address interval corresponding to each time unit in the storage matrix 422, i.e., to set T and ΔT; in this step ΔT is configured as a larger time interval ΔT1. Generally, the histogram time interval T must be set with the measurement range in mind, and the time interval ΔT1 must be set considering both the measurement range and the number of histogram storage units, i.e., the flight time corresponding to the measurement range is allocated over all the histogram storage units, either evenly or unevenly, so that together the storage units cover the measurement range. Then, over multiple measurements, the flight-time value obtained in each measurement is matched to the corresponding time unit and a +1 operation is performed on it, finally completing the drawing of the coarse histogram.
Step 2: Compute the coarse flight time value t1 from the coarse histogram. Using the maximum-peak method or a similar method, the pulse waveform position can be found in the coarse histogram and the corresponding flight time read out as the coarse flight time value t1; the precision, or minimum resolution, of this flight time value is the time-unit interval ΔT1.
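The maximum-peak readout in step 2 can be sketched as a simple argmax over the bins (one of several peak-finding methods the text allows); the returned value's resolution is the bin width ΔT1.

```python
# Sketch of the maximum-peak method: the coarse ToF is the centre of
# the tallest bin; values below are illustrative.
def peak_tof(counts, start, dt):
    peak = max(range(len(counts)), key=lambda i: counts[i])
    return start + (peak + 0.5) * dt   # read out the bin centre

counts = [1, 0, 2, 9, 3, 1, 0, 0]      # pulse peak in bin 3
t1 = peak_tof(counts, start=0.0, dt=10.0)
```

Reading the bin centre keeps the quantization error within half a bin width.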
When the measurement range is large and the number of storage units is limited, ΔT1 will be relatively large, and when the photon count is high the pulse photons can be submerged in the background light, making the pulse waveform undetectable. Therefore, in some embodiments the measurement range may be divided into several intervals, each corresponding to its own flight time interval; the time interval ΔT of each time interval T may be the same or different. When drawing the coarse histogram, each time interval can be drawn one by one. Since the distance of the measured object is unknown, it is also unknown which time interval its flight time will fall into, so the pulse waveform may not be detected when drawing the coarse histogram for a given time interval, i.e., the coarse flight time value cannot be computed. In this case, for example when no waveform position can be found from the coarse histogram in step 2, the method returns to step 1 to draw the next coarse histogram, until the pulse waveform is found in a coarse histogram. Of course, the pulse waveform may never be found, owing to errors or because the object is too far away. To avoid endless cyclic detection, a limit can be set on the number of cycles: for example, when the number of coarse histogram drawings exceeds a certain threshold (e.g., 3), it is concluded that no target was detected this time, or that the target is at infinity, and the current measurement ends.
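The bounded retry described above can be sketched as follows; `draw_histogram` and `detect_peak` are hypothetical stand-ins for the real drawing and peak-detection routines, and the toy data are illustrative.

```python
# Sketch: redraw the coarse histogram over successive time intervals
# until a pulse peak is found, giving up after max_tries attempts.
def find_coarse_tof(draw_histogram, detect_peak, max_tries=3):
    for attempt in range(max_tries):
        counts = draw_histogram(attempt)
        t1 = detect_peak(counts)
        if t1 is not None:
            return t1
    return None   # no target detected / target at infinity

# Toy stand-ins: a clear peak appears only in the second interval.
hists = [[1, 1, 1, 1], [0, 9, 0, 0], [1, 1, 1, 1]]
detect = lambda c: (c.index(max(c)) if max(c) > 5 else None)
result = find_coarse_tof(lambda i: hists[i], detect)
missed = find_coarse_tof(lambda i: [1, 1, 1, 1], detect)
```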
Step 3: Based on the obtained coarse flight time value, draw a fine histogram using fine time units. Since a coarse value of the flight time is now known, a further round of multiple measurements can be performed and the corresponding histogram drawn; the histogram circuit, under the control signal, configures the address or address interval of each time unit in its storage matrix 422 to a smaller time interval ΔT2. In general, the time interval ΔT2 only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units. This measurement range interval can be centered on the coarse flight time value with a certain margin added on each side, for example [t1 - T', t1 + T']; the smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution. For example, in some embodiments T' = 5%T can be used, so that the sum of the time intervals of all time units is only 10% of the time interval covered by the coarse histogram. In other embodiments, the ratio of the margin to the coarse histogram time interval may be set in the range of 1%-25%. A new round of multiple measurements is then performed; each obtained flight time value is matched to the corresponding time unit, whose count is incremented by 1, completing the fine histogram.
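The fine-window arithmetic of step 3 can be checked with a short sketch (symbol names follow the text; the concrete numbers are illustrative assumptions):

```python
# Sketch: centre the fine histogram on the coarse ToF t1 with margin
# T' on each side, then split the span over the same storage units.
def fine_window(t1, T, n_units, margin_frac=0.05):
    T_prime = margin_frac * T
    start, end = t1 - T_prime, t1 + T_prime
    dt2 = (end - start) / n_units     # fine resolution
    return start, end, dt2

start, end, dt2 = fine_window(t1=35.0, T=80.0, n_units=8)
span_ratio = (end - start) / 80.0     # fine span vs. coarse interval
```

With T' = 5%T the fine span is 10% of the coarse interval, matching the 10x resolution gain stated in the text.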
Step 4: Compute the fine flight time t2 from the fine histogram. Using the maximum-peak method or a similar method, the waveform position can be found in the fine histogram and the corresponding flight time read out as the fine flight time value t2; the precision, or minimum resolution, of this value is the time-unit interval ΔT2. Taking the setting T' = 5%T from step 3 as an example, the fine flight time is 10 times more precise than the coarse flight time (the minimum resolution is improved by a factor of 10).
The above measurement method with dynamic coarse-fine histogram adjustment is essentially a process of first performing coarse localization over a larger measurement range and then performing a fine measurement based on the localization result. It will be appreciated that this coarse-fine adjustment can also be extended to three or more measurement steps. For example, in some embodiments a first measurement at a first time resolution yields a first flight time; a second measurement at a second time resolution, based on the first flight time, yields a second flight time; and finally a third measurement at a third time resolution, based on the second flight time, yields a third flight time. The precision increases with each step, ultimately enabling higher-precision measurement.
In some embodiments, since the histogram only counts flight time values falling within its time interval T, each pixel in the collector 12 of the measurement system can be activated (enabled) only within a specified time interval, thereby reducing power consumption; this specified time interval should generally contain the histogram's time interval T. For example, when the histogram's time interval is [3 ns, 10 ns], the interval during which the pixels are activated can be set to [2.5 ns, 10.5 ns].
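The activation-window example can be reproduced directly; the 0.5 ns guard on each side is taken from the text's example, not a mandated value.

```python
# Sketch: widen the histogram interval [t_lo, t_hi] by a small guard
# so pixels are enabled slightly before and after the counted window.
def activation_window(t_lo, t_hi, guard=0.5):
    return t_lo - guard, t_hi + guard

win = activation_window(3.0, 10.0)   # histogram interval [3, 10] ns
```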
It will be appreciated that the above measurement method is applicable not only to coaxial distance measurement systems but also to off-axis measurement systems. In particular, for an off-axis measurement system including the collector shown in Fig. 3, the dynamic histogram adjustment scheme can further be used for super-pixel localization, which not only improves precision but also reduces power consumption. Fig. 7 shows a time-of-flight measurement method according to another embodiment of the present application, described below with reference to Fig. 3. The method includes the following steps:
Step 1: Receive the super-pixel TDC output signal and draw a coarse histogram using coarse-precision time units. Since the distance of the object is unknown before the measurement, the position of the spot, and hence the position of the combined pixel, cannot be determined; depending on the object's distance, the combined pixel may fall at different positions within the super pixel. Therefore, in this step, every pixel in the super pixel is first enabled so that all are active to receive photons, the photon signal output by the super pixel's shared TDC is received, and the histogram is then drawn. The histogram uses the dynamic adjustment scheme shown in Fig. 6; in this step a coarse histogram is drawn using coarse-precision time units.
Step 2: Compute the coarse flight time value t1 from the coarse histogram. Using the maximum-peak method or a similar method, the waveform position can be found in the coarse histogram and the corresponding flight time read out as the coarse flight time value t1; the precision, or minimum resolution, of this value is the time-unit interval ΔT1.
When the measurement range is large and the number of storage units is limited, ΔT1 will be relatively large, and when the photon count is high the pulse photons can be submerged in the background light, making the pulse waveform undetectable. Therefore, in some embodiments the measurement range may be divided into several intervals, each corresponding to its own flight time interval; the time interval ΔT of each time interval T may be the same or different. When drawing the coarse histogram, each time interval can be drawn one by one. Since the distance of the measured object is unknown, it is also unknown which time interval its flight time will fall into, so the pulse waveform may not be detected when drawing the coarse histogram for a given time interval. In this case, for example when no waveform position can be found from the coarse histogram in step 2, the method returns to step 1 to draw the next coarse histogram, until the pulse waveform is found in a coarse histogram. Of course, the pulse waveform may never be found, owing to errors or because the object is too far away. To avoid endless cyclic detection, a limit can be set on the number of cycles: for example, when the number of coarse histogram drawings exceeds a certain threshold (e.g., 3), it is concluded that no target was detected this time, or that the target is at infinity, and the current measurement ends.
Step 3: Based on the obtained coarse flight time value, locate the combined pixel and draw a fine histogram using fine time units. Since the coarse flight time value is now known, the position of the combined pixel can be located from this value together with the parallax; the relationship between combined-pixel position and coarse flight time value usually needs to be stored in the system in advance, so that once the coarse flight time value is obtained the combined pixel's position can be located directly from this relationship. Then, based on the combined pixel's position, only the combined pixel is activated while a fine histogram is drawn using fine time units. Since a coarse value of the flight time is known, a further round of multiple measurements can be performed and the corresponding histogram drawn; the histogram circuit, under the control signal, configures the address or address interval of each time unit in its storage matrix 422 to a smaller time interval ΔT2. In general, the time interval ΔT2 only needs to correspond to a smaller measurement range interval that contains the true flight time value, together with the number of histogram storage units. This measurement range interval can be centered on the coarse flight time value with a certain margin added on each side, for example [t1 - T', t1 + T']; the smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution. For example, in some embodiments T' = 5%T can be used, so that the sum of the time intervals of all time units is only 10% of the time interval covered by the coarse histogram. In other embodiments, the ratio of the margin to the coarse histogram time interval may be set in the range of 1%-25%. A new round of multiple measurements is then performed; each obtained flight time value is matched to the corresponding time unit, whose count is incremented by 1, completing the fine histogram.
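The parallax-based lookup in step 3 might be sketched as follows; the mapping table and its values are purely hypothetical toy data standing in for the pre-stored relationship between combined-pixel position and coarse flight time.

```python
# Sketch: map a coarse ToF to the combined pixel's offset within the
# super pixel, then only that combined pixel need be activated.
def locate_combined_pixel(t1, tof_to_offset):
    """Return the pre-stored offset whose ToF bound covers t1."""
    for tof_bound, offset in tof_to_offset:
        if t1 <= tof_bound:
            return offset
    return tof_to_offset[-1][1]

# Toy mapping: nearer objects (smaller ToF) give larger disparity.
table = [(20.0, 3), (40.0, 2), (80.0, 1)]
offset = locate_combined_pixel(35.0, table)
```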
Step 4: Compute the fine flight time t2 from the fine histogram. Using the maximum-peak method or a similar method, the waveform position can be found in the fine histogram and the corresponding flight time read out as the fine flight time value t2; the precision, or minimum resolution, of this value is the time-unit interval ΔT2. Taking the setting T' = 5%T from step 3 as an example, the fine flight time is 10 times more precise than the coarse flight time (the minimum resolution is improved by a factor of 10).
The above measurement method with dynamic coarse-fine histogram adjustment is essentially a process of first performing coarse localization over a larger measurement range and then performing a fine measurement based on the localization result. It will be appreciated that this coarse-fine adjustment can also be extended to three or more measurement steps. For example, in some embodiments a first measurement at a first time resolution yields a first flight time; a second measurement at a second time resolution, based on the first flight time, yields a second flight time; and finally a third measurement at a third time resolution, based on the second flight time, yields a third flight time. The precision increases with each step, ultimately enabling higher-precision measurement.
In some embodiments, since the histogram only counts flight time values falling within its time interval T, each pixel in the collector 12 of the measurement system can be activated (enabled) only within a specified time interval, thereby reducing power consumption; this specified time interval should generally contain the histogram's time interval T. For example, when the histogram's time interval is [3 ns, 10 ns], the interval during which the pixels are activated can be set to [2.5 ns, 10.5 ns].
The interpolation-based time-of-flight measurement method is described below. The embodiments of Figs. 2 and 3 introduced examples of improving resolution through multi-frame measurement. It will be appreciated that when multi-frame measurement is performed, the depth data of each frame can be measured using the dynamic histogram adjustment scheme shown in Fig. 6 or Fig. 7. For example, when the first sub-light-source array 201 is turned on, dynamic coarse-fine histogram drawing yields a first frame of depth image; when the second sub-light-source array 202 is turned on, dynamic coarse-fine histogram drawing yields a second frame of depth image; fusing the first and second frames yields a higher-resolution depth image. In some embodiments, three or more frames of depth images can also be collected and fused into a higher-resolution depth image.
However, if coarse-fine dynamic adjustment were required for every frame of depth image acquisition, the acquisition time of each high-resolution fused depth image would be relatively long and the overall frame rate low. To raise the frame rate as much as possible, the present application provides an interpolation-based time-of-flight measurement method according to an embodiment of the application, shown in Fig. 8, which includes the following steps:
Step 1: Obtain the first flight time of the first combined pixel corresponding to the first light source. In this step, the first light source in the emitter 11 is turned on to emit the spot beams corresponding to the first light source; these spot beams fall on combined pixels of the pixel unit 31 in the collector 12. Taking the spot represented by the 4x3 solid circle in Fig. 3 as an example, the processing circuit can then obtain the first flight time of this combined pixel; for instance, the coarse-fine dynamic adjustment scheme of the embodiment shown in Fig. 6 or Fig. 7, or any other scheme, can be used to obtain the fine flight time (the first flight time) of the combined pixel.
Step 2: Obtain the second flight time of the second super pixel corresponding to the second light source by interpolation. When the second light source is turned on, it emits spot beams adjacent to those corresponding to the first light source; these spot beams likewise fall on combined pixels of the collector 12. For ease of illustration, only one such spot 353 is drawn with a dotted circle in Fig. 3. Spot 353 and spot 343 are spatially offset because the positions of the first and second light sources are offset, so their corresponding combined pixels are also offset. In general, when two points in space are close to each other, their distances do not differ greatly. Therefore, in some embodiments the flight time value of the combined pixel corresponding to spot 343 obtained in step 1 can be taken as the second flight time value (a coarse flight time) of the super pixel 351 corresponding to spot 353, with the fine flight time computed subsequently. In some embodiments, the second flight time value of spot 353's super pixel can be estimated using the combined pixels corresponding to several surrounding first-light-source spots, for example by interpolating the flight time values of the left and right combined pixels. The interpolation may be one-dimensional or two-dimensional, and the interpolation method may be at least one of linear interpolation, spline interpolation, polynomial interpolation, and similar methods.
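The neighbour interpolation of step 2 can be sketched with the simplest one-dimensional (linear) case; spline or polynomial interpolation would equally fit the text, and the numbers are illustrative assumptions.

```python
# Sketch: estimate the second spot's coarse ToF from its left and
# right first-light-source neighbours by linear interpolation.
def interp_tof(t_left, t_right, frac=0.5):
    """frac = relative position of the new spot between its neighbours."""
    return t_left + frac * (t_right - t_left)

t2_estimate = interp_tof(t_left=34.0, t_right=36.0)   # ns, midway
```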
Step 3: Based on the second flight time, locate the second combined pixel corresponding to the second light source and draw a histogram. Once the second flight time has been obtained by interpolation, the position of the spot within the super pixel, i.e., the position of the combined pixel, can be located from this flight time together with the parallax; then, based on the combined pixel's position, only the combined pixel is activated while a histogram is drawn using fine time units.
Step 4: Compute the third flight time from the histogram. Using the maximum-peak method or a similar method, the waveform position can be found in the histogram and the corresponding flight time read out as the third (fine) flight time value t2; the precision, or minimum resolution, of this value is the time-unit interval ΔT2.
Compared with the methods described with reference to Fig. 6 or Fig. 7, the time-of-flight measurement method of the above steps requires the coarse-fine histogram drawing approach (which needs at least two rounds of flight time measurement to reach a high-precision value) for only a few spots; for most spots, the flight time values of known spots can be interpolated to serve as the coarse flight time, and based on that coarse value only a single fine histogram drawing is needed, which greatly improves efficiency. For example, if the light sources are divided into 6 groups, a coarse-fine measurement is needed only when the first group is turned on; each of the remaining 5 groups requires only a single fine measurement when its flight times are measured after being turned on.
In some embodiments, it is considered that the surface of the measured object often contains jumps, i.e., regions where the distance varies sharply; interpolation can hardly yield an accurate flight time value there, so a fine measurement based on the interpolation result would introduce errors. A check can therefore be performed before the interpolation in step 2: for example, when the flight time values of the combined pixels corresponding to the spots to be interpolated (e.g., the left and right spots) differ by more than a certain threshold, this indicates a jump in the object's surface depth between these two spots, and the spots between them retain the coarse-fine histogram measurement scheme; the interpolation calculation is performed only when the difference is below the threshold.
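The depth-jump guard can be sketched as a pre-check before interpolation; the threshold and the numeric values are illustrative assumptions, not values from the patent.

```python
# Sketch: interpolate only when the neighbour ToF values are close;
# otherwise fall back to a full coarse-fine measurement for the spot.
def estimate_or_fallback(t_left, t_right, threshold):
    if abs(t_left - t_right) < threshold:
        return ("interpolated", (t_left + t_right) / 2.0)
    return ("coarse_fine", None)   # surface depth jump: re-measure fully

smooth = estimate_or_fallback(34.0, 36.0, threshold=5.0)
jump = estimate_or_fallback(34.0, 60.0, threshold=5.0)
```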
In some embodiments, the first flight time of the first combined pixel may itself be a coarse flight time; that is, only a single coarse histogram drawing is needed when demodulating the first flight time of the first combined pixel, and the interpolation is then performed on the coarse flight time obtained from that coarse histogram.
It will be appreciated that when the distance measurement system of the present application is embedded in a device or hardware, corresponding structural or component changes may be made to suit the requirements; since such variants still employ, in essence, the distance measurement system of the present application, they should be regarded as falling within the scope of protection of the present application. The foregoing is a further detailed description of the present application in combination with specific/preferred embodiments, and the specific implementation of the application should not be deemed limited to these descriptions. For those of ordinary skill in the art to which this application belongs, several substitutions or modifications of the described embodiments may be made without departing from the concept of the application, and such substitutions or modifications should all be regarded as falling within the scope of protection of this application.
In the description of this specification, reference to terms such as "one embodiment", "some embodiments", "preferred embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.
In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples. Although the embodiments of the present application and their advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made herein without departing from the scope defined by the appended claims. Furthermore, the scope of the present application is not intended to be limited to the specific embodiments of the processes, machines, manufacture, compositions of matter, means, methods, and steps described in the specification. A person of ordinary skill in the art will readily appreciate that presently existing or later-developed processes, machines, manufacture, compositions of matter, means, methods, or steps that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include such processes, machines, manufacture, compositions of matter, means, methods, or steps within their scope.

Claims (10)

  1. An interpolation-based time-of-flight measurement method, characterized by comprising the following steps:
    S1. obtaining a first flight time of a first combined pixel corresponding to a first light source;
    S2. obtaining, by interpolation, a second flight time of a second super pixel corresponding to a second light source;
    S3. locating, according to the second flight time, a second combined pixel corresponding to the second light source and drawing a histogram;
    S4. computing a third flight time using the histogram.
  2. The time-of-flight distance measurement method according to claim 1, characterized in that the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be controlled independently in groups.
  3. The time-of-flight distance measurement method according to claim 1, characterized in that the interpolation comprises one-dimensional interpolation or two-dimensional interpolation, and the interpolation method comprises at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  4. The time-of-flight distance measurement method according to claim 1, characterized in that when the histogram is drawn, only the pixels within the second combined pixel are activated.
  5. The time-of-flight distance measurement method according to claim 1, characterized in that step S2 further comprises computing the difference between the flight time values of the combined pixels of the plurality of spots to be interpolated, and performing the interpolation calculation only when the difference is less than a certain threshold.
  6. 一种基于插值的飞行时间测量系统,其特征在于,包括:A time-of-flight measurement system based on interpolation, which is characterized in that it comprises:
    发射器,经配置以发射脉冲光束,其包括第一光源以及第二光源;An emitter configured to emit a pulsed light beam, which includes a first light source and a second light source;
    采集器,经配置以采集被物体反射回的所述脉冲光束中的光子并形成光子信号,其包含多个像素;A collector configured to collect photons in the pulsed beam reflected by an object and form a photon signal, which includes a plurality of pixels;
    处理电路,与所述发射器以及所述采集器连接,用于执行以下步骤以计算飞行时间:The processing circuit is connected to the transmitter and the collector, and is used to perform the following steps to calculate the flight time:
    S1、获取与第一光源对应的第一合像素的第一飞行时间;S1. Acquire the first flight time of the first combined pixel corresponding to the first light source;
    S2、通过插值计算得到与第二光源对应的第二超像素的第二飞行时间;S2. Obtain the second flight time of the second superpixel corresponding to the second light source through interpolation calculation;
    S3、根据所述第二飞行时间,定位与所述第二光源对应的第二合像素并绘制直方图;S3. According to the second flight time, locate a second composite pixel corresponding to the second light source and draw a histogram;
    S4、利用所述直方图计算第三飞行时间。S4. Calculate the third flight time by using the histogram.
  7. 如权利要求6所述的飞行时间距离测量系统,其特征在于:所述第一光源与所述第二光源被设置在同一个光源阵列上,所述第一光源与所述第二光源可以被分组独立控制。The time-of-flight distance measurement system according to claim 6, wherein the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be Group independent control.
  8. 如权利要求6所述的飞行时间距离测量系统,其特征在于:所述插值包括一维插值或者二维插值,所述插值方法包含线性插值、样条插值、多项式插值中的至少一种。The time-of-flight distance measurement system according to claim 6, wherein the interpolation includes one-dimensional interpolation or two-dimensional interpolation, and the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
  9. 如权利要求6所述的飞行时间距离测量系统,其特征在于:所述直方图在绘制时,仅第二合像素内的像素被激活。7. The time-of-flight distance measurement system according to claim 6, wherein when the histogram is drawn, only the pixels in the second sum pixel are activated.
  10. 如权利要求6所述的飞行时间距离测量系统,其特征在于:所步骤S2还包括对将要插值的多个斑点合像素的飞行时间值进行差值计算,当差值小于某一阈值时,才执行所述插值计算。The time-of-flight distance measurement system according to claim 6, characterized in that: step S2 further comprises calculating the difference of the time-of-flight values of the multiple spots and pixels to be interpolated, and only when the difference is less than a certain threshold Perform the interpolation calculation.
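The interpolate-then-histogram idea of claims 6–10 can be illustrated with a minimal sketch. This is not the patented implementation: the neighbouring-spot values, the difference threshold, the histogram window, and the bin width below are all illustrative assumptions, and the 1-D linear interpolation stands in for whichever method of claim 8 is used.

```python
import numpy as np


def interpolate_second_tof(tof_left, tof_right, threshold):
    """Step S2 with the gating of claims 5/10: interpolate the second spot's
    time of flight from two measured neighbours, but only when the neighbours
    differ by less than a threshold (i.e. they likely lie on one surface)."""
    if abs(tof_left - tof_right) >= threshold:
        return None  # neighbours disagree; fall back to a full search
    return 0.5 * (tof_left + tof_right)  # 1-D linear interpolation (claim 8)


def refine_tof_with_histogram(arrival_times, tof_estimate, window, bin_width):
    """Steps S3-S4: draw a histogram only inside a window centred on the
    interpolated estimate, then take the centre of the peak bin as the
    refined (third) time of flight."""
    lo, hi = tof_estimate - window, tof_estimate + window
    in_window = arrival_times[(arrival_times >= lo) & (arrival_times < hi)]
    bins = np.arange(lo, hi + bin_width, bin_width)
    counts, edges = np.histogram(in_window, bins=bins)
    peak = int(np.argmax(counts))
    return 0.5 * (edges[peak] + edges[peak + 1])  # centre of the peak bin
```

With simulated photon arrivals (a Gaussian return pulse plus uniform ambient noise), interpolating between neighbour spots at 19 ns and 21 ns gives a 20 ns estimate, and the windowed histogram then recovers the true return near 20.3 ns to within one bin. The benefit claimed is that the histogram only needs to cover the small window around the interpolated estimate rather than the full unambiguous range.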
PCT/CN2019/113710 2019-09-19 2019-10-28 Interpolation-based time of flight measurement method and system WO2021051479A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910889455.6 2019-09-19
CN201910889455.6A CN110596725B (en) 2019-09-19 2019-09-19 Time-of-flight measurement method and system based on interpolation

Publications (1)

Publication Number Publication Date
WO2021051479A1 true WO2021051479A1 (en) 2021-03-25

Family

ID=68861628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113710 WO2021051479A1 (en) 2019-09-19 2019-10-28 Interpolation-based time of flight measurement method and system

Country Status (2)

Country Link
CN (1) CN110596725B (en)
WO (1) WO2021051479A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11885915B2 (en) 2020-03-30 2024-01-30 Stmicroelectronics (Research & Development) Limited Time to digital converter
CN111487639B (en) * 2020-04-20 2024-05-03 深圳奥锐达科技有限公司 Laser ranging device and method
WO2021243612A1 (en) * 2020-06-03 2021-12-09 深圳市大疆创新科技有限公司 Distance measurement method, distance measurement apparatus, and movable platform
CN113848538A (en) * 2020-06-25 2021-12-28 深圳奥锐达科技有限公司 Dispersion spectrum laser radar system and measurement method
CN114355384B (en) * 2020-07-07 2024-01-02 柳州阜民科技有限公司 Time-of-flight TOF system and electronic device
CN111856433B (en) * 2020-07-25 2022-10-04 深圳奥锐达科技有限公司 Distance measuring system and measuring method
CN112100449B (en) * 2020-08-24 2024-02-02 深圳市力合微电子股份有限公司 d-ToF distance measurement optimizing storage method for realizing dynamic large-range and high-precision positioning
WO2022109826A1 (en) * 2020-11-25 2022-06-02 深圳市速腾聚创科技有限公司 Distance measurement method and apparatus, electronic device, and storage medium
CN112731425B (en) * 2020-11-29 2024-05-03 奥比中光科技集团股份有限公司 Histogram processing method, distance measurement system and distance measurement equipment
CN112558096B (en) * 2020-12-11 2021-10-26 深圳市灵明光子科技有限公司 Distance measurement method, system and storage medium based on shared memory
CN113514842A (en) * 2021-03-08 2021-10-19 奥诚信息科技(上海)有限公司 Distance measuring method, system and device
CN115144864A (en) * 2021-03-31 2022-10-04 上海禾赛科技有限公司 Storage method, data processing method, laser radar, and computer-readable storage medium
WO2023000756A1 (en) * 2021-07-20 2023-01-26 Oppo广东移动通信有限公司 Ranging method and apparatus, terminal, and non-volatile computer-readable storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN105911536A (en) * 2016-06-12 2016-08-31 中国科学院上海技术物理研究所 Multichannel photon counting laser radar receiver possessing real-time door control function
CN107462898A (en) * 2017-08-08 2017-12-12 中国科学院西安光学精密机械研究所 Based on the gate type diffusing reflection of monochromatic light subarray around angle imaging system and method
US20180329064A1 (en) * 2017-05-09 2018-11-15 Stmicroelectronics (Grenoble 2) Sas Method and apparatus for mapping column illumination to column detection in a time of flight (tof) system
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera
CN109725326A (en) * 2017-10-30 2019-05-07 豪威科技股份有限公司 Time-of-flight camera
CN109870704A (en) * 2019-01-23 2019-06-11 深圳奥比中光科技有限公司 TOF camera and its measurement method
CN110073244A (en) * 2016-12-12 2019-07-30 森斯尔科技有限公司 For determining the histogram reading method and circuit of the flight time of photon
CN110235024A (en) * 2017-01-25 2019-09-13 苹果公司 SPAD detector with modulation sensitivity

Family Cites Families (15)

Publication number Priority date Publication date Assignee Title
EP1499851A4 (en) * 2002-04-15 2008-11-19 Toolz Ltd Distance measurement device
RU2473099C2 (en) * 2007-05-16 2013-01-20 Конинклейке Филипс Электроникс Н.В. Virtual pet detector and quasi-pixelated reading circuit for pet
US8587771B2 (en) * 2010-07-16 2013-11-19 Microsoft Corporation Method and system for multi-phase dynamic calibration of three-dimensional (3D) sensors in a time-of-flight system
WO2012014077A2 (en) * 2010-07-29 2012-02-02 Waikatolink Limited Apparatus and method for measuring the distance and/or intensity characteristics of objects
US9557856B2 (en) * 2013-08-19 2017-01-31 Basf Se Optical detector
DE102014100696B3 (en) * 2014-01-22 2014-12-31 Sick Ag Distance measuring sensor and method for detection and distance determination of objects
EP2999974B1 (en) * 2014-03-03 2019-02-13 Consortium P, Inc. Real-time location detection using exclusion zones
GB201413564D0 (en) * 2014-07-31 2014-09-17 Stmicroelectronics Res & Dev Time of flight determination
CN107015234B (en) * 2017-05-19 2019-08-09 中国科学院国家天文台长春人造卫星观测站 Embedded satellite laser ranging control system
DE102017113675B4 (en) * 2017-06-21 2021-11-18 Sick Ag Photoelectric sensor and method for measuring the distance to an object
EP3428683B1 (en) * 2017-07-11 2019-08-28 Sick Ag Optoelectronic sensor and method for measuring a distance
EP3460508A1 (en) * 2017-09-22 2019-03-27 ams AG Semiconductor body and method for a time-of-flight measurement
US10996323B2 (en) * 2018-02-22 2021-05-04 Stmicroelectronics (Research & Development) Limited Time-of-flight imaging device, system and method
CN209167538U (en) * 2018-11-21 2019-07-26 深圳奥比中光科技有限公司 Time flight depth camera
CN110111239B (en) * 2019-04-28 2022-12-20 叠境数字科技(上海)有限公司 Human image head background blurring method based on tof camera soft segmentation


Also Published As

Publication number Publication date
CN110596725B (en) 2022-03-04
CN110596725A (en) 2019-12-20

Similar Documents

Publication number Publication date Title
WO2021051477A1 (en) Time of flight distance measurement system and method with adjustable histogram
WO2021051478A1 (en) Time-of-flight-based distance measurement system and method for dual-shared tdc circuit
WO2021051479A1 (en) Interpolation-based time of flight measurement method and system
WO2021051481A1 (en) Dynamic histogram drawing time-of-flight distance measurement method and measurement system
WO2021051480A1 (en) Dynamic histogram drawing-based time of flight distance measurement method and measurement system
WO2021072802A1 (en) Distance measurement system and method
CN111108407B (en) Semiconductor body and method for time-of-flight measurement
CN111025317B (en) Adjustable depth measuring device and measuring method
WO2021248892A1 (en) Distance measurement system and measurement method
CN101449181B (en) Distance measuring method and distance measuring instrument for detecting the spatial dimension of a target
CN110221272B (en) Time flight depth camera and anti-interference distance measurement method
CN110221274B (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
KR20190055238A (en) System and method for determining distance to an object
CN112731425B (en) Histogram processing method, distance measurement system and distance measurement equipment
CN110221273B (en) Time flight depth camera and distance measuring method of single-frequency modulation and demodulation
CN110780312B (en) Adjustable distance measuring system and method
US20220043129A1 (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
CN111965658B (en) Distance measurement system, method and computer readable storage medium
CN111427230A (en) Imaging method based on time flight and 3D imaging device
WO2021035694A1 (en) System and method for time-coding-based time-of-flight distance measurement
CN212135134U (en) 3D imaging device based on time flight
WO2022241942A1 (en) Depth camera and depth calculation method
US11709271B2 (en) Time of flight sensing system and image sensor used therein
CN211148917U (en) Distance measuring system
US20230019246A1 (en) Time-of-flight imaging circuitry, time-of-flight imaging system, and time-of-flight imaging method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945878

Country of ref document: EP

Kind code of ref document: A1