CN110596725B - Time-of-flight measurement method and system based on interpolation

Info

Publication number
CN110596725B
CN110596725B CN201910889455.6A CN201910889455A CN110596725B CN 110596725 B CN110596725 B CN 110596725B CN 201910889455 A CN201910889455 A CN 201910889455A CN 110596725 B CN110596725 B CN 110596725B
Authority
CN
China
Prior art keywords
time
flight
interpolation
light source
histogram
Prior art date
Legal status
Active
Application number
CN201910889455.6A
Other languages
Chinese (zh)
Other versions
CN110596725A (en)
Inventor
何燃 (He Ran)
朱亮 (Zhu Liang)
王瑞 (Wang Rui)
闫敏 (Yan Min)
Current Assignee
Shenzhen Oradar Technology Co Ltd
Original Assignee
Shenzhen Oradar Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Oradar Technology Co Ltd filed Critical Shenzhen Oradar Technology Co Ltd
Priority to CN201910889455.6A
Priority to PCT/CN2019/113710 (WO2021051479A1)
Publication of CN110596725A
Application granted
Publication of CN110596725B

Classifications

    • G01S17/10 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems; determining position data of a target for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S7/4804 Auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • G01S7/484 Details of pulse systems; Transmitters
    • G01S7/4865 Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/4876 Extracting wanted echo signals, e.g. pulse detection, by removing unwanted signals
    • G01S7/51 Display arrangements

Abstract

The invention discloses an interpolation-based time-of-flight measurement method, which comprises the following steps: S1, acquiring a first flight time of a first combined pixel corresponding to a first light source; S2, calculating by interpolation a second flight time of a second super pixel corresponding to a second light source; S3, locating a second combined pixel corresponding to the second light source and plotting a histogram according to the second flight time; and S4, calculating a third flight time by using the histogram. According to the invention, coarse time-of-flight values are provided directly to most of the pixels by interpolation, so that those pixels can proceed directly to fine histogram plotting based on the coarse values and calculate high-precision fine time-of-flight values.

Description

Time-of-flight measurement method and system based on interpolation
Technical Field
The invention relates to the technical field of computers, and in particular to an interpolation-based time-of-flight measurement method and system.
Background
The time-of-flight (TOF) method calculates the distance of an object by measuring the flight time of a light beam in space; thanks to advantages such as high precision and a large measurement range, it is widely applied in fields such as consumer electronics, autonomous driving and AR/VR.
Distance measurement systems based on the time-of-flight principle, such as time-of-flight depth cameras and lidar, usually include a transmitting end with a light source and a receiving end: the light source emits a beam into the target space to provide illumination, the receiving end receives the beam reflected back by the target, and the system calculates the distance to the object from the time the beam takes to make the round trip.
At present, lidar based on the time-of-flight method falls mainly into two types, mechanical and non-mechanical. The mechanical type achieves distance measurement over a large 360-degree field of view by means of a rotating base; it offers a large measurement range but suffers from high power consumption, low resolution and a low frame rate. Among non-mechanical types, area-array lidar can alleviate these problems to some extent: it transmits, in a single shot, an area beam covering a certain field of view and receives it with an area-array receiver, improving resolution and frame rate, and it is also easier to assemble since no rotating parts are needed. Nevertheless, area-array lidar still faces some challenges.
The higher the resolution of the area-array lidar, the more complete the acquired information; dynamic measurement further imposes high requirements on frame rate and measurement precision. However, improvements in resolution, frame rate and precision usually depend on enlarging the circuit scale and on the modulation and demodulation scheme of the receiving end. Enlarging the circuit scale raises power consumption and cost and degrades the signal-to-noise ratio; it also increases the required on-chip memory capacity, which poses serious challenges for mass production. Current modulation and demodulation schemes, meanwhile, struggle to meet the requirements of high precision and low power consumption at the same time.
The above background disclosure is provided only to assist in understanding the inventive concept and technical solutions of the present invention. It does not necessarily belong to the prior art of the present patent application and, in the absence of clear evidence that the above content had been disclosed at the filing date of the present patent application, it should not be used to evaluate the novelty and inventive step of the present application.
Disclosure of Invention
The present invention is directed to a method and system for measuring time-of-flight based on interpolation, so as to solve at least one of the problems described in the background art.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
An interpolation-based time-of-flight measurement method, comprising the steps of:
S1, acquiring a first flight time of a first combined pixel corresponding to a first light source;
S2, calculating, by interpolation using the first flight time, a second flight time of a second super pixel corresponding to a second light source, the second flight time serving as the coarse time-of-flight value that would otherwise be obtained from a coarse histogram;
S3, positioning a second combined pixel corresponding to the second light source and drawing a histogram according to the second flight time;
and S4, calculating a third flight time by using the histogram.
In some embodiments, the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
In some embodiments, the interpolation includes one-dimensional interpolation or two-dimensional interpolation, and the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
In some embodiments, only pixels within the second combined pixel are activated when the histogram is rendered.
In some embodiments, the step S2 further includes calculating the difference between the time-of-flight values of the plurality of spots used for interpolation, the interpolation being performed only when the difference is less than a certain threshold.
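By way of illustration only, a minimal Python sketch of this guard condition follows; the function name and the 1 ns threshold are hypothetical assumptions, not part of the original disclosure:

```python
# Minimal sketch of the threshold guard described above; the function name
# and the 1 ns threshold are illustrative assumptions, not from the patent.

def guarded_interpolation(tof_a, tof_b, threshold_s=1e-9):
    """Linearly interpolate two neighbouring time-of-flight values, but only
    when they differ by less than threshold_s (e.g. on a smooth surface);
    return None otherwise, signalling a fallback to a full coarse measurement."""
    if abs(tof_a - tof_b) >= threshold_s:
        return None
    return 0.5 * (tof_a + tof_b)

print(guarded_interpolation(10.0e-9, 10.4e-9))  # 1.02e-08 s: interpolated
print(guarded_interpolation(10.0e-9, 25.0e-9))  # None: depth discontinuity
```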
The other technical scheme of the invention is as follows:
an interpolation-based time-of-flight measurement system, comprising:
a transmitter comprising a first light source and a second light source and configured to emit pulsed light beams;
a collector comprising a plurality of pixels and configured to collect photons of the pulsed light beams reflected back by an object and form photon signals;
processing circuitry, coupled to the transmitter and the collector, for performing the following steps to calculate a time of flight: S1, acquiring a first flight time of a first combined pixel corresponding to the first light source; S2, calculating, by interpolation using the first flight time, a second flight time of a second super pixel corresponding to a second light source, the second flight time serving as the coarse time-of-flight value that would otherwise be obtained from a coarse histogram; S3, positioning a second combined pixel corresponding to the second light source and drawing a histogram according to the second flight time; and S4, calculating a third flight time by using the histogram.
In some embodiments, the first light source and the second light source are arranged on the same light source array, and the first light source and the second light source can be independently controlled in groups.
In some embodiments, the interpolation includes one-dimensional interpolation or two-dimensional interpolation, and the interpolation method includes at least one of linear interpolation, spline interpolation, and polynomial interpolation.
In some embodiments, only pixels within the second combined pixel are activated when the histogram is rendered.
In some embodiments, step S2 further includes calculating the difference between the time-of-flight values of the plurality of spots used for interpolation, the interpolation being performed when the difference is less than a certain threshold.
The technical scheme of the invention has the beneficial effects that:
according to the invention, the coarse flight time values are directly provided for most of the pixels in an interpolation mode, so that the pixels can directly perform fine histogram drawing based on the coarse flight time values to calculate the high-precision fine flight time values.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic view of a time-of-flight distance measurement system according to one embodiment of the present invention.
FIG. 2 is a schematic view of a light source according to one embodiment of the invention.
Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the invention.
FIG. 4 is a schematic diagram of a readout circuit according to one embodiment of the invention.
Fig. 5 is a histogram diagram in accordance with one embodiment of the present invention.
Fig. 6 illustrates a time-of-flight measurement method with dynamic histogram plotting according to one embodiment of the invention.
FIG. 7 is a time-of-flight measurement method according to yet another embodiment of the invention.
FIG. 8 is a method of interpolation-based time-of-flight measurement according to one embodiment of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the drawings, are used only for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation; they are not to be construed as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
The invention provides an interpolation-based time-of-flight measurement method and system; for ease of understanding, embodiments of a distance measurement system are described first.
As an embodiment of the present invention, a distance measuring system is provided, which has a stronger resistance to ambient light and a higher resolution.
FIG. 1 is a schematic view of a time-of-flight distance measurement system according to one embodiment of the present invention. The distance measurement system 10 includes an emitter 11, a collector 12 and a processing circuit 13. The emitter 11 provides an emission beam 30 to the target space to illuminate an object 20 in the space; at least part of the emission beam 30 is reflected by the object 20 to form a reflected beam 40, and at least part of the light signal (photons) of the reflected beam 40 is collected by the collector 12. The processing circuit 13 is connected to the emitter 11 and the collector 12 respectively and synchronizes their trigger signals, so as to calculate the time required from the emission of the beam by the emitter 11 to its reception by the collector 12, i.e. the flight time t between the emission beam 30 and the reflected beam 40; further, the distance D of the corresponding point on the object can be calculated by the following formula:
D=c·t/2 (1)
where c is the speed of light.
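As a minimal illustrative sketch of equation (1) in Python (names are assumptions, not from the patent):

```python
# Minimal sketch of equation (1): distance from the measured flight time.
C = 299_792_458.0  # speed of light in m/s

def distance_m(t_flight_s: float) -> float:
    """D = c * t / 2, since the beam travels to the object and back."""
    return C * t_flight_s / 2.0

print(distance_m(10e-9))  # a 10 ns round trip corresponds to ~1.5 m
```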
The emitter 11 includes a light source 111 and an optical element 112. The light source 111 may be a light emitting diode (LED), an edge emitting laser (EEL), a vertical-cavity surface-emitting laser (VCSEL) or the like, or an array light source composed of a plurality of such sources; preferably, the array light source 111 is a VCSEL array chip formed by generating a plurality of VCSEL light sources on a single semiconductor substrate. The light beam emitted by the light source 111 may be visible, infrared, ultraviolet, etc. The light source 111 emits light beams outwards under the control of the processing circuit 13; for example, in one embodiment the light source 111 emits pulsed light beams at a certain frequency (pulse period) under the control of the processing circuit 13, which can be used in direct time-of-flight (direct TOF) measurement. The frequency is set according to the measurement distance; for example, it can be set to 1 MHz-100 MHz for measurement distances of several metres to several hundred metres. It will be appreciated that the light source 111 may be controlled to emit the light beam by a part of the processing circuit 13 or by a sub-circuit existing independently of the processing circuit 13, such as a pulse signal generator.
The optical element 112 receives the pulsed light beam from the light source 111, optically modulates the pulsed light beam, such as by diffraction, refraction, reflection, etc., and then emits the modulated light beam, such as a focused light beam, a flood light beam, a structured light beam, etc., into the space. The optical elements 112 may be in the form of one or more combinations of lenses, diffractive optical elements, masks, mirrors, MEMS mirrors, and the like.
The processing circuit 13 may be a stand-alone dedicated circuit, such as a dedicated SOC chip, an FPGA chip, an ASIC chip, etc., or may comprise a general-purpose processor, such as when the depth camera is integrated into a smart terminal, such as a mobile phone, a television, a computer, etc., where the processor in the terminal may be at least a part of the processing circuit 13.
Collector 12 includes a pixel unit 121 and an imaging lens unit 122; the imaging lens unit 122 receives at least part of the modulated light beam reflected back by the object and directs it onto the pixel unit 121. In one embodiment, the pixel unit 121 consists of a single-photon avalanche diode (SPAD), or is an array pixel unit composed of a plurality of SPAD pixels; the array size of the array pixel unit represents the resolution of the depth camera, such as 320 × 240. A SPAD can respond to an incident single photon and thereby detect single photons, and its high sensitivity and fast response enable long-range, high-precision measurement. Compared with image sensors based on light integration, such as CCD/CMOS sensors, a SPAD can count single photons, for example using time-correlated single-photon counting (TCSPC) to collect weak light signals and calculate the flight time. Generally, a readout circuit (not shown in the figure) composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC), and the like is also included in connection with the pixel unit 121. These circuits can be integrated with the pixels and can also be regarded as part of the processing circuit 13; for convenience of description, they will be referred to collectively as the processing circuit 13.
In some embodiments, the distance measurement system 10 may further include a color camera, an infrared camera, an IMU, etc., and a combination thereof may implement more rich functions, such as 3D texture modeling, infrared face recognition, SLAM, etc.
In some embodiments, emitter 11 and collector 12 may be arranged coaxially, i.e. they are implemented by an optical device with reflection and transmission functions, such as a half-mirror.
In a direct time-of-flight distance measurement system using SPADs, a single photon incident on a SPAD pixel triggers an avalanche; the SPAD outputs an avalanche signal to the TDC circuit, and the TDC circuit measures the time interval from the emission of the pulse by the emitter 11 to the avalanche. After multiple measurements, the time intervals are accumulated into a histogram by a time-correlated single-photon counting (TCSPC) circuit to recover the waveform of the whole pulse signal; the time corresponding to the waveform can then be determined, giving the flight time and thus an accurate flight-time detection, from which the distance information of the object is finally calculated. Assuming the pulse period of the emitted beam is Δt, the maximum measurement range Dmax of the distance measurement system corresponds to a maximum flight time t1 = 2·Dmax/c, where c is the speed of light. It is generally required that Δt ≥ t1 to avoid signal aliasing. If TCSPC requires n measurements, the time for a single frame measurement (frame period) will be no less than n·t1, i.e. each frame period contains n photon-counting measurements. For example, with a maximum measurement range of 150 m, the corresponding pulse period Δt is 1 μs; with n = 100000, the frame period will be no less than 100 ms and the frame rate below 10 fps. It follows that in the TCSPC approach the maximum measurement range limits the pulse period, which in turn limits the frame rate of distance measurement.
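The frame-rate bound can be checked numerically; the following sketch simply restates t1 = 2·Dmax/c and the frame-period bound n·t1 with the values from the example above (function name is an assumption):

```python
# Numerical check of the frame-rate bound above (values from the example).
C = 299_792_458.0  # m/s

def max_frame_rate(d_max_m: float, n: int) -> float:
    """Frame rate upper bound when each of the n TCSPC measurements must
    wait at least t1 = 2 * Dmax / c to avoid signal aliasing."""
    t1 = 2.0 * d_max_m / C        # ~1 us for Dmax = 150 m
    return 1.0 / (n * t1)         # frame period is at least n * t1

print(max_frame_rate(150.0, 100_000))  # ~10 fps, matching the text
```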
FIG. 2 is a schematic view of a light source according to one embodiment of the invention. The light source 111 is composed of a plurality of sub-light sources disposed on a single substrate (or multiple substrates), the sub-light sources being arranged in a pattern on the substrate. The substrate may be a semiconductor substrate, a metal substrate, etc., and the sub-light sources may be light emitting diodes, edge emitting lasers, vertical-cavity surface-emitting lasers (VCSELs), etc.; preferably, the light source 111 is an arrayed VCSEL chip composed of a plurality of VCSEL sub-light sources disposed on a semiconductor substrate. The sub-light sources emit light beams of any desired wavelength, such as visible, infrared or ultraviolet light. The light source 111 emits light under the modulation drive of a driving circuit (which may be part of the processing circuit 13), such as continuous-wave modulation or pulse modulation. The sub-light sources may also emit in groups or as a whole under the control of the driving circuit; for example, the light source 111 includes a first sub-light-source array 201, a second sub-light-source array 202, and so on, the first sub-light-source array 201 emitting under the control of a first driving circuit and the second sub-light-source array 202 under the control of a second driving circuit. The arrangement of the sub-light sources may be one-dimensional or two-dimensional, regular or irregular. For convenience of analysis, fig. 2 schematically shows only an example in which the light source 111 is an 8 × 9 regular array of sub-light sources divided into 4 × 3 = 12 groups, each group distinguished by a different symbol in the drawing; that is, the light source 111 is composed of 12 regularly arranged 3 × 2 sub-light-source arrays.
Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the invention. The pixel unit comprises a pixel array 31 and a readout circuit 32. The pixel array 31 is a two-dimensional array composed of a plurality of pixels 310, and the readout circuit 32 is composed of a TDC circuit 321, a histogram circuit 322 and the like. The pixel array collects at least part of the light beam reflected by the object and generates corresponding photon signals; the readout circuit 32 processes the photon signals to plot a histogram reflecting the waveform of the pulses emitted by the light source in the emitter, from which the flight time can be calculated, and finally outputs the result. The readout circuit 32 may be a single TDC circuit and histogram circuit, or an array readout circuit comprising a plurality of TDC circuit units and histogram circuit units.
In one embodiment, when the emitter 11 emits a spot beam towards the object to be measured, the imaging lens unit 122 in the collector 12 directs the reflected spot beam onto corresponding pixels (correspondence here is to be understood as imaging). Generally, in order to receive as much of the reflected optical signal as possible, the size of a single spot is set to cover a plurality of pixels; for example, in fig. 3 a single spot corresponds to 2 × 2 = 4 pixels, i.e. photons reflected back from the spot beam are received, each with a certain probability, by the corresponding 4 pixels. For convenience of description, the pixel area formed by the corresponding plurality of pixels is referred to as a "combined pixel"; the size of the combined pixel may be set according to actual needs and comprises at least one pixel, such as 3 × 3 or 4 × 4. In general the light spot is circular or elliptical, and the combined pixel should be set equal to or slightly smaller than the spot size; however, since the magnification differs with the distance of the object to be measured, the size of the combined pixel needs to be considered comprehensively when it is set.
In the embodiment shown in fig. 3, the pixel unit 31 is exemplified as an array of 14 × 18 pixels. Generally, depending on how the emitter 11 and the collector 12 are arranged, the measurement system 10 can be divided into coaxial and off-axis configurations. In the coaxial case, the light beam emitted by the emitter 11 is reflected by the object and collected by the corresponding combined pixel in the collector 12, and the position of the combined pixel is not influenced by the distance of the object. In the off-axis case, however, parallax exists: when the distances of the measured objects differ, the position of the light spot on the pixel unit also changes, generally shifting along the direction of the baseline (the line between the emitter 11 and the collector 12; in the present invention the baseline direction is uniformly represented as horizontal). Therefore, when the distance of the object is unknown, the position of the combined pixel is uncertain. To solve this problem, the present invention sets a pixel area (referred to herein as a "super pixel") composed of more pixels than the combined pixel for receiving the reflected spot beam. The size of the super pixel must take into account both the measurement range of the system 10 and the length of the baseline, so that the combined pixels corresponding to spots reflected by objects at any distance within the measurement range all fall inside the super-pixel area; i.e. the super pixel should exceed at least one combined pixel in size. In general, the super pixel has the same size as the combined pixel in the direction perpendicular to the baseline and a larger size along the baseline. The number of super pixels is typically the same as the number of spot beams acquired in a single measurement by collector 12, which is 4 × 3 in fig. 3.
In one embodiment, the super pixel is arranged so that at the lower end of the measurement range, i.e. near, the spot falls on one side of the super pixel (left or right, depending on the relative positions of emitter 11 and collector 12), while at the upper limit of the measurement range, i.e. far, the spot falls on the other side. In the embodiment shown in fig. 3, the super pixel is set to 2 × 6; for example, for spots 363, 373 and 383 the corresponding super pixels are 361, 371 and 381, where spots 363, 373 and 383 are the spot beams reflected back by far, middle and near objects respectively, and the corresponding combined pixels fall on the left, middle and right sides of the super pixel respectively.
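To make the parallax behaviour concrete, the following sketch assumes a simple pinhole model with an illustrative focal length and baseline (neither value is from the patent); it shows how the spot image shifts along the baseline as the object distance changes, which is why the super pixel must span several combined-pixel positions:

```python
# Hedged pinhole-model sketch of off-axis parallax; F_PIX and BASELINE_M
# are assumed illustrative values, not parameters disclosed in the patent.
F_PIX = 2000.0     # focal length expressed in pixel pitches (assumption)
BASELINE_M = 0.05  # emitter-collector baseline in metres (assumption)

def spot_shift_px(distance_m: float) -> float:
    """Horizontal spot displacement in pixels: disparity = f * b / D."""
    return F_PIX * BASELINE_M / distance_m

# Near objects land on one side of the super pixel, far objects on the other.
for d in (1.0, 10.0, 100.0):
    print(f"D = {d:6.1f} m -> shift = {spot_shift_px(d):6.2f} px")
```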
In one embodiment, the pixels of a combined pixel share one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the combined pixel; when any pixel in the combined pixel receives a photon and generates a photon signal, the TDC circuit unit can calculate the flight time corresponding to that photon signal. This suits the coaxial case, where the position of the combined pixel does not vary with the distance of the object to be measured. In the embodiment shown in fig. 3, a TDC circuit array of 4 × 3 TDC circuit units would then be included.
In one embodiment, the pixels of a super pixel share one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the super pixel; when any pixel in the super pixel receives a photon and generates a photon signal, the TDC circuit unit can calculate the flight time corresponding to that photon signal. Since a super pixel can accommodate the shift of the combined pixel caused by off-axis parallax, super-pixel TDC sharing can be applied to the off-axis case. In the embodiment shown in fig. 3, a TDC circuit array of 4 × 3 TDC circuit units would be included. Sharing the TDC circuit effectively reduces the number of TDC circuits, thereby reducing the size and power consumption of the readout circuit.
For the off-axis case, more pixels are needed to constitute the super pixel, and the number of spots that can be collected in a single measurement (or single exposure) is much smaller than the number of pixels; in other words, the resolution of the collected effective depth data (time-of-flight values) is much smaller than the pixel resolution. For example, in fig. 3 the pixel resolution is 14 × 18 while the spots are distributed 4 × 3, i.e. the effective depth-data resolution of a single frame measurement is 4 × 3.
To improve the resolution of the measured depth data, the spots transmitted by the emitter 11 can be offset from frame to frame over a multi-frame measurement, producing a scanning effect; the spots received by the collector 12 are offset accordingly, e.g. spots 343 and 353 in fig. 3 correspond to two adjacent frames of measurement, thereby improving the resolution. In one embodiment, the offset of the spots can be realized by controlling the sub-light sources on the light source 111 in groups, i.e. in two or more adjacent frames of measurement, adjacent groups of sub-light sources are turned on in sequence: for example, the first sub-light-source array 201 is turned on in the first frame, the second sub-light-source array 202 in the second frame, and so on. Since the sub-light sources can be grouped both horizontally and vertically, the resolution of the effective depth data can be improved in two dimensions.
Because the spots are offset across the multi-frame measurement, the super pixels corresponding to spots at different positions also need to be offset when they are set. As shown in fig. 3, the super pixel corresponding to spot 343 is 341 and the super pixel corresponding to spot 353 is 351; super pixel 351 is laterally offset relative to super pixel 341, and the two partially overlap. For the case where the super pixels of different frames overlap one another, in order to ensure that the TDC circuit can accurately perform photon-counting time-of-flight measurement for the super pixel of each frame, the present application provides a scheme in which the TDC circuit is multiply shared.
In one embodiment, the pixel region connected to a single TDC circuit unit includes all the super pixels offset across the multi-frame measurement, and the pixel regions of two adjacent TDC circuit units overlap. Specifically, in the embodiment shown in fig. 3, the pixel region 391 shares one TDC circuit unit and includes the 6 super pixels corresponding to the 6 frames of measurement obtained when 6 groups of sub-light sources are turned on in sequence. Similarly, the adjacent pixel region 392 shares one TDC circuit unit, and the two pixel regions 391 and 392 partially overlap, so that some pixels are connected to two TDC circuit units. During a single frame measurement, according to the projected spots, the processing circuit 13 gates the corresponding pixels so that the photon signals acquired by each pixel are measured by a single TDC circuit unit, thereby avoiding crosstalk and errors. In one embodiment, the number of TDC circuits is the same as the number of spots acquired by collector 12 during a single frame measurement, 4 × 3 in fig. 3; each shared TDC circuit is connected to 4 × 10 pixels, and the pixel regions connected to adjacent TDC circuit units overlap by 4 × 4 pixels.
In a single frame measurement period, the TDC circuit receives photon signals from the pixels in the super-pixel region connected to it, calculates the time interval (i.e. flight time) between each signal and an initial clock signal, converts it into a thermometer code or binary code, and stores it in the histogram circuit. Generally, the larger the measurement range, the wider the time interval the TDC circuit must measure; likewise, the higher the precision requirement, the higher the required time resolution of the TDC circuit. Whether the time interval widens or the time resolution rises, the TDC circuit needs a larger scale to output a binary code with more bits, and the increase in bit count raises the required storage capacity of the histogram circuit's memory. The larger the memory capacity, the higher the cost and the greater the difficulty of monolithic mass production. To this end, the invention provides a readout circuit scheme with an adjustable histogram circuit.
FIG. 4 is a schematic diagram of a readout circuit according to one embodiment of the invention. The readout circuit includes a TDC circuit 41 and a histogram circuit 42. The TDC circuit 41 acquires the time interval of a photon signal and converts it into a time code (binary code, thermometer code, etc.); the histogram circuit 42 then increments the corresponding time unit (i.e. the storage unit storing that time information) by 1 based on the time code. After multiple measurements, the photon counts in all time units can be tallied and a time histogram plotted. A plotted histogram is shown in fig. 5, where ΔT is the width of a time unit, T1 and T2 are the start and end times of histogram plotting, [T1, T2] is the time interval of the histogram, and T = T2 - T1 is the total time width; the ordinate of each time unit ΔT is the photon count stored in the corresponding storage unit. Based on the histogram, the position of the pulse waveform can be determined by a method such as the maximum peak method, giving the corresponding flight time t.
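As a software analogue of the maximum peak readout just described (synthetic counts and a hypothetical function name, for illustration only):

```python
# Illustrative sketch of reading a flight time out of a plotted histogram
# with the maximum peak method; the counts below are synthetic.

def tof_from_histogram(counts, t1, dt):
    """Return the flight time at the centre of the bin with the largest
    photon count; t1 is the histogram start time, dt the time-unit width."""
    peak = max(range(len(counts)), key=lambda i: counts[i])
    return t1 + (peak + 0.5) * dt  # resolution is limited to dt

counts = [3, 4, 2, 5, 31, 58, 27, 6, 3, 4]  # pulse centred near bin 5
print(tof_from_histogram(counts, t1=0.0, dt=1e-9))  # 5.5e-09 s
```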
In one embodiment, histogram circuit 42 includes an address decoder 421, a memory matrix 422, read/write circuits 424 and a histogram plotting circuit 425. The TDC circuit inputs the acquired time code (binary code, thermometer code, etc.) reflecting the time interval to the address decoder 421, which converts the time code into address information for the memory matrix 422. Specifically, the memory matrix 422 includes a plurality of memory cells 423, i.e. time units, each pre-configured with a certain address (or address interval). When the time-code address received by the address decoder 421 matches the address of a memory cell, or lies within its address interval, the read/write circuit 424 performs a +1 operation on that cell, completing one photon count; after multiple measurements, the data in each memory cell reflects the number of photons received in its time interval. After a single frame of measurement (multiple measurements), the data of all memory cells in the memory matrix 422 are read out to the histogram plotting circuit 425 for histogram plotting.
To keep the storage capacity of the memory matrix as small as possible, the number of memory cells 423 must in practice be reduced. To this end, the processing circuit applies a control signal to the histogram circuit 42 to dynamically set the addresses (address intervals) of the memory cells 423, thereby dynamically controlling the histogram time resolution ΔT and/or the time-interval width T. For example, with the number of storage units 423 unchanged, setting the address intervals of the storage units 423 to larger time intervals, i.e. increasing the time-unit width ΔT, enlarges the time interval the whole memory matrix can store and hence the total time interval of the histogram; for convenience of description, a histogram with such a larger time interval is referred to as a coarse histogram. Conversely, setting the address intervals of the storage units 423 to smaller time intervals reduces the time interval the memory matrix can store but increases the stored time resolution and hence the time resolution of the histogram; relative to a coarse histogram, a histogram with such a smaller time interval is referred to as a fine histogram.
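A software analogue may help picture this adjustable behaviour. In the sketch below (a hypothetical class, not the circuit itself), a fixed number of storage units covers either a wide interval with coarse units or a narrow interval with fine units, mirroring the coarse and fine histograms described above:

```python
# Software analogue of the adjustable histogram circuit; class and method
# names are hypothetical. A fixed memory size serves both coarse and fine
# plotting; only the address interval assigned to each unit changes.

class AdjustableHistogram:
    def __init__(self, n_units: int):
        self.n_units = n_units            # fixed "memory matrix" size
        self.configure(0.0, 1e-6)

    def configure(self, t_start: float, t_stop: float) -> None:
        """Reassign every storage unit's address interval; the unit width
        dt = (t_stop - t_start) / n_units shrinks as the window narrows,
        raising time resolution without adding memory."""
        self.t_start, self.t_stop = t_start, t_stop
        self.dt = (t_stop - t_start) / self.n_units
        self.counts = [0] * self.n_units  # photon counts per time unit

    def record(self, t: float) -> None:
        """+1 on the unit whose address interval contains time code t."""
        if self.t_start <= t < self.t_stop:
            self.counts[int((t - self.t_start) / self.dt)] += 1

h = AdjustableHistogram(n_units=100)
h.configure(0.0, 1e-6)       # coarse: dt = 10 ns over the full range
h.configure(4.5e-7, 5.5e-7)  # fine: dt = 1 ns around a coarse estimate
```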
In the invention, large-range, high-precision time-of-flight measurement is achieved by dynamically adjusting the histogram between coarse and fine settings during the measurement process.
Fig. 6 illustrates a time-of-flight measurement method with dynamic histogram plotting according to one embodiment of the invention. The method comprises the following steps:
step one, drawing a coarse histogram in coarse precision time units. That is, the address or address interval corresponding to each time unit in the memory matrix 422 is configured by applying the control signal, i.e. T and Δ T are set, and Δ T is configured as a larger time interval Δ T in this step1. In general, the histogram time interval T is set by considering the measurement range, the time interval Δ T1During setting, the measurement range and the number of histogram storage units should be considered, that is, the flight time corresponding to the measurement range is allocated to all the number of histogram storage units, such as equal allocation or non-equal allocation, so that all the storage units can cover the measurement range. And after multiple measurements, matching the flight time value obtained by each measurement to perform an operation of adding 1 on the corresponding time unit, and finally completing the drawing of the rough square chart.
Step two, calculate a coarse flight-time value t1 using the coarse histogram. Based on the coarse histogram, the pulse waveform position can be found using e.g. the maximum peak method, and the corresponding flight time is read out as the coarse flight-time value t1, whose precision or minimum resolution is the time-unit interval ΔT1.
When the measurement range is large and the number of storage units is limited, ΔT1 becomes large, and when the number of background photons is large the pulse photons may be submerged in the background light so that the pulse waveform cannot be detected. Therefore, in some embodiments the measurement range may be divided into several intervals, each corresponding to a respective flight-time interval; the time-unit width ΔT of each time interval T may be the same or different. When plotting the coarse histogram, the time intervals can be plotted one by one; since the distance of the measured object is unknown, it is also unknown into which time interval the corresponding flight time falls, so the pulse waveform may not be detected when plotting the coarse histogram over a given time interval, i.e. the coarse time-of-flight value cannot be calculated. In that case, for example when no waveform position can be found from the coarse histogram in step two, the process returns to step one to plot the next coarse histogram, until the pulse waveform is found. Of course, the pulse waveform may never be found owing to an error or an excessive object distance; to avoid endless cyclic detection, a maximum number of cycles may be set, e.g. when the number of coarse-histogram plots exceeds a certain threshold (e.g. 3), it is determined that no target was detected this time (the target may also be considered to be at infinity), and the measurement ends.
Step three, plot a fine histogram with fine time units according to the obtained coarse time-of-flight value. Since a coarse time-of-flight value is now known, another round of multiple measurements can be performed and a corresponding histogram plotted; the control signal now configures the address or address interval of each time unit in the memory matrix 422 to a smaller time interval ΔT2. In general, when setting ΔT2 only a small measurement-range interval containing the true flight-time value needs to be mapped onto the histogram storage units; this interval can be centred on the coarse time-of-flight value with a certain margin on both sides, e.g. set to [t1 - T', t1 + T']. The smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution; for instance, in one embodiment T' may be set to 5% T, whereby the sum of the time intervals of all time units is only 10% of the time interval of the coarse histogram. In other embodiments, the ratio of the margin to the coarse-histogram time interval may be set in the range 1%-25%. A new round of multiple measurements is then performed, and each measured flight-time value increments its matching time unit by 1, completing the fine histogram.
Step four, calculate the fine flight time t2 using the fine histogram. Based on the fine histogram, the waveform position can be found using e.g. the maximum peak method, and the corresponding flight time is read out as the fine flight-time value t2, whose precision or minimum resolution is the time-unit interval ΔT2. If T' is set to 5% T in step three, the fine time of flight improves precision tenfold (the minimum resolution is improved by a factor of 10) compared with the coarse time of flight.
This measurement method with dynamic coarse-fine adjustment of the histogram is a process of coarse positioning over a larger measurement range followed by fine measurement based on the positioning result. It will be appreciated that the coarse-fine adjustment described above may also be extended to three or more measurements; for example, in one embodiment, a first measurement at a first time resolution yields a first flight time, a second measurement at a second time resolution based on the first flight time yields a second flight time, and a third measurement at a third time resolution based on the second flight time yields a third flight time. The precision improves at each step, finally achieving a higher-precision measurement.
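Putting steps one to four together, a rough end-to-end simulation could look as follows; it reuses the AdjustableHistogram sketch above, and the photon model and all numbers are illustrative assumptions rather than disclosed parameters:

```python
# End-to-end sketch of the coarse-fine procedure; reuses AdjustableHistogram
# from the earlier sketch. The photon model and numbers are assumptions.
import random

def measure(true_tof, n_shots, hist):
    """One round of TCSPC measurements followed by maximum-peak readout."""
    for _ in range(n_shots):
        if random.random() < 0.7:                   # pulse return w/ jitter
            hist.record(random.gauss(true_tof, 0.5e-9))
        else:                                       # uniform background
            hist.record(random.uniform(hist.t_start, hist.t_stop))
    peak = max(range(hist.n_units), key=lambda i: hist.counts[i])
    return hist.t_start + (peak + 0.5) * hist.dt

true_tof = 6.37e-7                         # unknown to the system
h = AdjustableHistogram(n_units=100)

h.configure(0.0, 1e-6)                     # step 1: coarse, dt = 10 ns
t1 = measure(true_tof, 2000, h)            # step 2: coarse value t1

T_margin = 0.05 * 1e-6                     # T' = 5% of T
h.configure(t1 - T_margin, t1 + T_margin)  # step 3: fine, dt = 1 ns
t2 = measure(true_tof, 2000, h)            # step 4: fine value t2
print(t1, t2)                              # t2 resolves ~10x finer than t1
```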
In one embodiment, because histogram plotting counts only time-of-flight values that fall within its time interval T, individual pixels in the collector 12 of the measurement system may be activated (enabled) only for a specified time interval that encompasses the histogram's time interval T, thereby reducing power consumption. For example, when the time interval of the histogram is [3 ns, 10 ns], the time interval in which the pixel is activated may be set to [2.5 ns, 10.5 ns].
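A one-line illustration of this gating rule (the 0.5 ns guard band is taken from the example above; the function name is an assumption):

```python
# Sketch of the pixel activation window around the histogram interval.
def activation_window(t_lo, t_hi, guard=0.5e-9):
    """Enable pixels slightly before T1 and slightly after T2."""
    return (t_lo - guard, t_hi + guard)

print(activation_window(3e-9, 10e-9))  # (2.5e-09, 1.05e-08), as in the text
```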
It will be appreciated that the above described measurement method is applicable not only to coaxial distance measurement systems, but also to off-axis measurement systems. It should be particularly noted that, for the off-axis measurement system including the collector shown in fig. 3, the histogram dynamic adjustment scheme may be further used to perform super-pixel positioning, which not only improves the accuracy but also reduces the power consumption. FIG. 7 is a time-of-flight measurement method according to yet another embodiment of the invention. As will be described below with reference to fig. 3, the method includes the following steps:
step one, receiving a super-pixel TDC output signal, and drawing a coarse histogram by using a coarse precision time unit. Since the distance of the object is not clear before the measurement, the position of the spot, that is, the position of the combined pixel cannot be specified, and the combined pixel may fall into a different position of the super pixel depending on the distance of the object. In this step, therefore, each of the super pixels is first enabled to be in an active state to receive photons and receive photon signals from the shared TDC of that super pixel, followed by histogram rendering. The histogram is a dynamically adjusted histogram scheme as shown in fig. 6, and a coarse histogram is drawn in this step using coarse-precision time units.
Step two, calculate a coarse flight-time value t1 using the coarse histogram. Based on the coarse histogram, the waveform position can be found using e.g. the maximum peak method, and the corresponding flight time is read out as the coarse flight-time value t1, whose precision or minimum resolution is the time-unit interval ΔT1.
When the measurement range is large and the number of storage units is limited, ΔT1 becomes large, and when the number of background photons is large the pulse photons may be submerged in the background light so that the pulse waveform cannot be detected. Therefore, in some embodiments the measurement range may be divided into several intervals, each corresponding to a respective flight-time interval; the time-unit width ΔT of each time interval T may be the same or different. When plotting the coarse histogram, the time intervals can be plotted one by one; since the distance of the measured object is unknown, it is also unknown into which time interval the corresponding flight time falls, so the pulse waveform may not be detected when plotting the coarse histogram over a given time interval. In such a case, for example when no waveform position can be found from the coarse histogram in step two, step one is repeated to plot the next coarse histogram until the pulse waveform is found. Of course, the pulse waveform may never be found owing to an error or an excessive object distance; to avoid endless cyclic detection, a number of cycles may be set, e.g. when the number of coarse-histogram plots exceeds a certain threshold (e.g. 3), it is determined that no target was detected this time (the target may also be considered to be at infinity), and the measurement ends.
Step three, locate the combined pixel and plot a fine histogram with fine time units according to the obtained coarse flight-time value. Since the coarse time-of-flight value is now known, the position of the combined pixel can be located from the coarse time-of-flight value and the parallax; generally, the relationship between combined-pixel position and coarse time-of-flight value needs to be stored in the system in advance, so that once the coarse value is obtained the position can be looked up directly. Based on that position, only the combined pixel is then activated while a fine histogram is plotted with fine time units. Since the coarse value of the flight time is known, a round of multiple measurements can be performed and a corresponding histogram plotted; under the control signal, the address or address interval of each time unit in the memory matrix 422 of the histogram circuit is configured to a smaller time interval ΔT2. In general, when setting ΔT2 only a small measurement-range interval containing the true flight-time value needs to be mapped onto the histogram storage units; this interval can be centred on the coarse time-of-flight value with a certain margin on both sides, e.g. set to [t1 - T', t1 + T']. The smaller T' is set, the smaller the time interval ΔT2 and the higher the resolution; for instance, in one embodiment T' may be set to 5% T, whereby the sum of the time intervals of all time units is only 10% of the time interval of the coarse histogram. In other embodiments, the ratio of the margin to the coarse-histogram time interval may be set in the range 1%-25%. A new round of multiple measurements is then performed, and each measured flight-time value increments its matching time unit by 1, completing the fine histogram.
Step four, calculate the fine flight time t2 using the fine histogram. Based on the fine histogram, the waveform position can be found using e.g. the maximum peak method, and the corresponding flight time is read out as the fine flight-time value t2, whose precision or minimum resolution is the time-unit interval ΔT2. If T' is set to 5% T in step three, the fine time of flight improves precision tenfold (the minimum resolution is improved by a factor of 10) compared with the coarse time of flight.
This measurement method with dynamic coarse-fine adjustment of the histogram is a process of coarse positioning over a larger measurement range followed by fine measurement based on the positioning result. It will be appreciated that the coarse-fine adjustment described above may also be extended to three or more measurements; for example, in one embodiment, a first measurement at a first time resolution yields a first flight time, a second measurement at a second time resolution based on the first flight time yields a second flight time, and a third measurement at a third time resolution based on the second flight time yields a third flight time. The precision improves at each step, finally achieving a higher-precision measurement.
In one embodiment, because histogram plotting counts only time-of-flight values that fall within its time interval T, individual pixels in the collector 12 of the measurement system may be activated (enabled) only for a specified time interval that encompasses the histogram's time interval T, thereby reducing power consumption. For example, when the time interval of the histogram is [3 ns, 10 ns], the time interval in which the pixel is activated may be set to [2.5 ns, 10.5 ns].
An interpolation-based time-of-flight measurement method is described below. The embodiments of fig. 2 and fig. 3 described improving resolution through multi-frame measurement; it will be understood that in multi-frame measurement the dynamic histogram adjustment scheme shown in fig. 6 or fig. 7 may be used for each frame of depth-data measurement. For example, when the first sub-light-source array 201 is turned on, dynamic coarse-fine histogram plotting yields a first frame depth image; when the second sub-light-source array 202 is turned on, dynamic coarse-fine histogram plotting yields a second frame depth image; fusing the first and second frame depth images then gives a depth image of higher resolution. In some embodiments, 3 or more frames of depth images may also be acquired and fused into a higher-resolution depth image.
However, if coarse-fine dynamic adjustment has to be performed during the acquisition of every frame, each high-resolution fused depth image takes a relatively long time to acquire, and the overall frame rate is low. To increase the frame rate as much as possible, an embodiment of the present invention, shown in fig. 8, provides an interpolation-based time-of-flight measurement method comprising the following steps:
Step one, acquiring the first time of flight of a first combined pixel corresponding to a first light source. In this step, the first light source in the emitter 11 is turned on and emits the spot beam corresponding to that light source, which falls onto a combined pixel of the pixel unit 31 in the collector 12; the processing circuit then obtains the first time of flight of that combined pixel. For example, taking as an example a spot represented by a solid circle on a 4 × 3 combined pixel in fig. 3, the coarse-fine dynamic adjustment scheme of the embodiment of fig. 6 or fig. 7, or any other scheme, may be used to obtain the fine time of flight (the first time of flight) of the combined pixel.
Step two, calculating by interpolation the second time of flight of a second super pixel corresponding to a second light source. When the second light source is turned on, it emits a spot beam adjacent to that of the first light source, which likewise falls onto a combined pixel of the collector 12. For ease of illustration only one spot 353 is drawn, as a dashed circle in fig. 3; because the positions of the first and second light sources are staggered, spot 353 and spot 343 are spatially staggered, and so are their corresponding combined pixels. Generally, when two points in space are close together, their distances do not differ much. Therefore, in one embodiment, the time-of-flight value of the pixel corresponding to spot 343, obtained in step one, can be used as the second time-of-flight value (the coarse time of flight) of the super pixel 351 corresponding to spot 353, after which the fine time-of-flight calculation is performed. In one embodiment, the second time-of-flight value of the super pixel of spot 353 can be estimated from the pixels corresponding to several surrounding first-light-source spots, for example by interpolating the time-of-flight values of the spots to its left and right. The interpolation may be one-dimensional or two-dimensional, and the interpolation method may be at least one of linear interpolation, spline interpolation, polynomial interpolation, and other interpolation methods.
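A minimal sketch of the interpolation step; the one-dimensional case averages the two neighbouring first-light-source spots, and the two-dimensional case averages the surrounding spots (the function names are illustrative; spline or polynomial interpolation would replace the averaging):

```python
def interp_coarse_tof_1d(left_tof, right_tof):
    """Linear interpolation at the midpoint: estimate the coarse time of
    flight of a spot from its left and right measured neighbours."""
    return 0.5 * (left_tof + right_tof)

def interp_coarse_tof_2d(neighbour_tofs):
    """Two-dimensional variant: average the measured spots surrounding
    the spot to be estimated (e.g. left, right, up, down)."""
    return sum(neighbour_tofs) / len(neighbour_tofs)
```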
Step three, locating the second combined pixel corresponding to the second light source according to the second time of flight, and drawing a histogram. After the second time of flight is obtained by interpolation, the position of the spot within the super pixel, i.e. the position of the combined pixel, can be located based on that time of flight and the parallax; then, based on this position, only the combined pixel is activated while the histogram is drawn in fine time units.
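How the position might be looked up from the time of flight and the parallax can be sketched with a simple triangulation model d = B·f/z; the baseline, focal length and zero-disparity column below are illustrative parameters, not taken from the patent:

```python
C_MM_PER_NS = 149.9  # one-way mm per ns of round-trip time (c / 2)

def locate_spot_column(tof_ns, baseline_mm, focal_px, zero_disp_col):
    """Locate the combined pixel's column from time of flight and
    parallax under the assumed model d = B * f / z."""
    z_mm = tof_ns * C_MM_PER_NS                  # time of flight -> distance
    disparity_px = baseline_mm * focal_px / z_mm
    return zero_disp_col - round(disparity_px)   # nearer objects shift more
```

In practice this relationship would be precomputed and stored, as noted for the coarse-fine scheme above.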
Step four, calculating the third time of flight using the histogram. Based on the histogram, the waveform position is found by the maximum-peak method or similar methods, and the corresponding time of flight is read out as the third (fine) time-of-flight value; the precision or minimum resolution of this value is the time interval ΔT2 of one time unit.
Compared with the methods described in fig. 6 and fig. 7, the time-of-flight measurement method of the above steps has the advantage that only a few spots require the coarse-fine histogram drawing mode, which needs at least 2 frames of time-of-flight measurement to reach a high-precision value. For most spots, the time-of-flight values of known spots can be interpolated to serve as the coarse time-of-flight value of the coarse histogram, after which only a single fine histogram needs to be drawn, so efficiency can be greatly improved. For example, if the light sources are divided into 6 groups, only the first group needs a coarse-fine measurement when turned on; each of the following 5 groups needs only a single fine measurement (roughly 7 rounds of histogram drawing in total instead of 12).
In some embodiments, the surface of the measured object often contains jumps, i.e. large differences in distance, in which case interpolation cannot yield an accurate time-of-flight value, so performing the fine measurement on the interpolated result would introduce errors. Therefore, a check can be made before the interpolation in step two: when the difference between the time-of-flight values of the combined pixels corresponding to the spots to be used for interpolation (for example, the spots to the left and right) exceeds a certain threshold, the depth of the object surface is taken to jump between those two spots, and the spots between them keep the coarse-fine histogram drawing scheme; the interpolation calculation is performed only when the difference is smaller than the threshold.
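The jump check can be sketched as a guard around the interpolation (the names and the fallback convention are illustrative):

```python
def coarse_tof_or_none(left_tof, right_tof, jump_threshold):
    """Interpolate only when the neighbouring spots roughly agree; a large
    difference suggests a depth jump, so the caller falls back to the
    full coarse-fine measurement for the spot in between."""
    if abs(left_tof - right_tof) > jump_threshold:
        return None
    return 0.5 * (left_tof + right_tof)
```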
In some embodiments, the first time of flight of the first combined pixel may itself be a coarse time of flight; that is, only a single coarse histogram needs to be drawn when demodulating the first time of flight of the first combined pixel, and the coarse time of flight obtained from that histogram is then used for the interpolation.
It is understood that when the distance measuring system of the present invention is embedded in a device or other hardware, corresponding structural or component changes may be made to adapt it as needed; such variants still employ the essence of the distance measuring system of the present invention and should therefore be considered within its scope. The foregoing is a further detailed description of the invention in connection with specific/preferred embodiments, and the practice of the invention is not to be considered limited to these descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. One of ordinary skill in the art will readily appreciate that the above-disclosed, presently existing or later to be developed, processes, machines, manufacture, compositions of matter, means, methods, or steps, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (10)

1. An interpolation-based time-of-flight measurement method, characterized by comprising the following steps:
S1, acquiring a first time of flight of a first combined pixel corresponding to a first light source;
S2, interpolating with the first time of flight to calculate a second time of flight, which serves as the coarse time-of-flight value of a coarse histogram; the second time of flight is the time of flight of a second super pixel corresponding to a second light source;
S3, positioning a second combined pixel corresponding to the second light source and drawing a histogram according to the second time of flight;
S4, calculating a third time of flight by using the histogram.
2. The interpolation-based time-of-flight measurement method of claim 1, wherein: the first light source and the second light source are arranged in the same light source array, and the first light source and the second light source can be controlled independently in groups.
3. The interpolation-based time-of-flight measurement method of claim 1, wherein: the interpolation comprises one-dimensional interpolation or two-dimensional interpolation, and the interpolation method comprises at least one of linear interpolation, spline interpolation and polynomial interpolation.
4. The interpolation-based time-of-flight measurement method of claim 1, wherein: when the histogram is drawn, only the pixels within the second combined pixel are activated.
5. The interpolation-based time-of-flight measurement method of claim 1, wherein: the step S2 further includes calculating the difference between the time-of-flight values of the plurality of spot pixels to be used for the interpolation, the interpolation calculation being performed only when the difference is smaller than a certain threshold.
6. An interpolation-based time-of-flight measurement system, comprising:
a transmitter, comprising a first light source and a second light source, configured to emit pulsed light beams;
a collector, comprising a plurality of pixels, configured to collect photons of the pulsed light beams reflected back by an object and to form photon signals;
processing circuitry, coupled to the transmitter and the collector, for performing the following steps to calculate a time of flight:
S1, acquiring a first time of flight of a first combined pixel corresponding to the first light source;
S2, interpolating with the first time of flight to calculate a second time of flight, which serves as the coarse time-of-flight value of a coarse histogram; the second time of flight is the time of flight of a second super pixel corresponding to the second light source;
S3, positioning a second combined pixel corresponding to the second light source and drawing a histogram according to the second time of flight;
S4, calculating a third time of flight by using the histogram.
7. The interpolation-based time-of-flight measurement system of claim 6, wherein: the first light source and the second light source are arranged in the same light source array, and the first light source and the second light source can be controlled independently in groups.
8. The interpolation-based time-of-flight measurement system of claim 6, wherein: the interpolation comprises one-dimensional interpolation or two-dimensional interpolation, and the interpolation method comprises at least one of linear interpolation, spline interpolation and polynomial interpolation.
9. The interpolation-based time-of-flight measurement system of claim 6, wherein: when the histogram is drawn, only the pixels within the second combined pixel are activated.
10. The interpolation-based time-of-flight measurement system of claim 6, wherein: step S2 further includes calculating the difference between the time-of-flight values of the plurality of spot pixels to be used for the interpolation, the interpolation being performed only when the difference is less than a threshold.
CN201910889455.6A 2019-09-19 2019-09-19 Time-of-flight measurement method and system based on interpolation Active CN110596725B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910889455.6A CN110596725B (en) 2019-09-19 2019-09-19 Time-of-flight measurement method and system based on interpolation
PCT/CN2019/113710 WO2021051479A1 (en) 2019-09-19 2019-10-28 Interpolation-based time of flight measurement method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910889455.6A CN110596725B (en) 2019-09-19 2019-09-19 Time-of-flight measurement method and system based on interpolation

Publications (2)

Publication Number Publication Date
CN110596725A CN110596725A (en) 2019-12-20
CN110596725B (en) 2022-03-04

Family

ID=68861628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910889455.6A Active CN110596725B (en) 2019-09-19 2019-09-19 Time-of-flight measurement method and system based on interpolation

Country Status (2)

Country Link
CN (1) CN110596725B (en)
WO (1) WO2021051479A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11885915B2 (en) 2020-03-30 2024-01-30 Stmicroelectronics (Research & Development) Limited Time to digital converter
CN111487639A (en) * 2020-04-20 2020-08-04 深圳奥锐达科技有限公司 Laser ranging device and method
WO2021243612A1 (en) * 2020-06-03 2021-12-09 深圳市大疆创新科技有限公司 Distance measurement method, distance measurement apparatus, and movable platform
CN113848538A (en) * 2020-06-25 2021-12-28 深圳奥锐达科技有限公司 Dispersion spectrum laser radar system and measurement method
CN114355384B (en) * 2020-07-07 2024-01-02 柳州阜民科技有限公司 Time-of-flight TOF system and electronic device
CN111856433B (en) * 2020-07-25 2022-10-04 深圳奥锐达科技有限公司 Distance measuring system and measuring method
CN112100449B (en) * 2020-08-24 2024-02-02 深圳市力合微电子股份有限公司 d-ToF distance measurement optimizing storage method for realizing dynamic large-range and high-precision positioning
WO2022109826A1 (en) * 2020-11-25 2022-06-02 深圳市速腾聚创科技有限公司 Distance measurement method and apparatus, electronic device, and storage medium
CN112731425A (en) * 2020-11-29 2021-04-30 奥比中光科技集团股份有限公司 Histogram processing method, distance measuring system and distance measuring equipment
CN112558096B (en) * 2020-12-11 2021-10-26 深圳市灵明光子科技有限公司 Distance measurement method, system and storage medium based on shared memory
CN113514842A (en) * 2021-03-08 2021-10-19 奥诚信息科技(上海)有限公司 Distance measuring method, system and device
CN115144864A (en) * 2021-03-31 2022-10-04 上海禾赛科技有限公司 Storage method, data processing method, laser radar, and computer-readable storage medium
CN113484870A (en) * 2021-07-20 2021-10-08 Oppo广东移动通信有限公司 Ranging method and apparatus, terminal, and non-volatile computer-readable storage medium


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8587771B2 (en) * 2010-07-16 2013-11-19 Microsoft Corporation Method and system for multi-phase dynamic calibration of three-dimensional (3D) sensors in a time-of-flight system
DE102014100696B3 (en) * 2014-01-22 2014-12-31 Sick Ag Distance measuring sensor and method for detection and distance determination of objects
US9443363B2 (en) * 2014-03-03 2016-09-13 Consortium P, Inc. Real-time location detection using exclusion zones
GB201413564D0 (en) * 2014-07-31 2014-09-17 Stmicroelectronics Res & Dev Time of flight determination
CN105911536B (en) * 2016-06-12 2018-10-19 中国科学院上海技术物理研究所 A kind of multi-channel photon counting laser radar receiver having real-time gate control function
US10416293B2 (en) * 2016-12-12 2019-09-17 Sensl Technologies Ltd. Histogram readout method and circuit for determining the time of flight of a photon
US20180329064A1 (en) * 2017-05-09 2018-11-15 Stmicroelectronics (Grenoble 2) Sas Method and apparatus for mapping column illumination to column detection in a time of flight (tof) system
CN107015234B (en) * 2017-05-19 2019-08-09 中国科学院国家天文台长春人造卫星观测站 Embedded satellite laser ranging control system
CN107462898B (en) * 2017-08-08 2019-06-28 中国科学院西安光学精密机械研究所 Based on the gate type diffusing reflection of monochromatic light subarray around angle imaging system and method
US10681295B2 (en) * 2017-10-30 2020-06-09 Omnivision Technologies, Inc. Time of flight camera with photon correlation successive approximation
US10996323B2 (en) * 2018-02-22 2021-05-04 Stmicroelectronics (Research & Development) Limited Time-of-flight imaging device, system and method

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1499851A2 (en) * 2002-04-15 2005-01-26 Toolz, Ltd. Distance measurement device
CN104076383A (en) * 2007-05-16 2014-10-01 皇家飞利浦电子股份有限公司 Virtual pet detector and quasi-pixelated readout scheme for PET
CN103261912A (en) * 2010-07-29 2013-08-21 威凯托陵科有限公司 Apparatus and method for measuring the distance and/or intensity characteristics of objects
CN105637320A (en) * 2013-08-19 2016-06-01 巴斯夫欧洲公司 Optical detector
CN110235024A (en) * 2017-01-25 2019-09-13 苹果公司 SPAD detector with modulation sensitivity
CN109100702A (en) * 2017-06-21 2018-12-28 西克股份公司 For measuring the photoelectric sensor and method of the distance of object
CN109239694A (en) * 2017-07-11 2019-01-18 布鲁诺凯斯勒基金会 For measuring the photoelectric sensor and method of distance
EP3460508A1 (en) * 2017-09-22 2019-03-27 ams AG Semiconductor body and method for a time-of-flight measurement
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera
CN209167538U (en) * 2018-11-21 2019-07-26 深圳奥比中光科技有限公司 Time flight depth camera
CN109870704A (en) * 2019-01-23 2019-06-11 深圳奥比中光科技有限公司 TOF camera and its measurement method
CN110111239A (en) * 2019-04-28 2019-08-09 叠境数字科技(上海)有限公司 A kind of portrait head background-blurring method based on the soft segmentation of tof camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"脉冲式TOF车载激光雷达接收芯片核心技术研究";谢刚;《中国优秀硕士学位论文全文数据库工程科技||辑》;20190715;第C035-127页 *

Also Published As

Publication number Publication date
CN110596725A (en) 2019-12-20
WO2021051479A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
CN110596722B (en) System and method for measuring flight time distance with adjustable histogram
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
CN110596725B (en) Time-of-flight measurement method and system based on interpolation
CN110596724B (en) Method and system for measuring flight time distance during dynamic histogram drawing
CN110596723B (en) Dynamic histogram drawing flight time distance measuring method and measuring system
CN111025317B (en) Adjustable depth measuring device and measuring method
CN111830530B (en) Distance measuring method, system and computer readable storage medium
CN110687541A (en) Distance measuring system and method
CN101449181B (en) Distance measuring method and distance measuring instrument for detecting the spatial dimension of a target
CN111856433B (en) Distance measuring system and measuring method
CN111045029B (en) Fused depth measuring device and measuring method
CN110780312B (en) Adjustable distance measuring system and method
CN110221273B (en) Time flight depth camera and distance measuring method of single-frequency modulation and demodulation
CN110221272B (en) Time flight depth camera and anti-interference distance measurement method
CN111025321B (en) Variable-focus depth measuring device and measuring method
CN111965658B (en) Distance measurement system, method and computer readable storage medium
CN111766596A (en) Distance measuring method, system and computer readable storage medium
CN112198519A (en) Distance measuring system and method
CN111427230A (en) Imaging method based on time flight and 3D imaging device
CN112731425A (en) Histogram processing method, distance measuring system and distance measuring equipment
CN112346075B (en) Collector and light spot position tracking method
CN212135134U (en) 3D imaging device based on time flight
CN111796295A (en) Collector, manufacturing method of collector and distance measuring system
CN213091889U (en) Distance measuring system
CN111965659A (en) Distance measuring system, method and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant