CN110596724B - Method and system for measuring time-of-flight distance with dynamic histogram plotting - Google Patents

Method and system for measuring time-of-flight distance with dynamic histogram plotting

Info

Publication number
CN110596724B
CN110596724B (application CN201910889452.2A)
Authority
CN
China
Prior art keywords
time
histogram
flight
pixel
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910889452.2A
Other languages
Chinese (zh)
Other versions
CN110596724A (en)
Inventor
何燃
朱亮
王瑞
闫敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Oradar Technology Co Ltd
Original Assignee
Shenzhen Oradar Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Oradar Technology Co Ltd filed Critical Shenzhen Oradar Technology Co Ltd
Priority to CN201910889452.2A priority Critical patent/CN110596724B/en
Priority to PCT/CN2019/113712 priority patent/WO2021051480A1/en
Publication of CN110596724A publication Critical patent/CN110596724A/en
Application granted granted Critical
Publication of CN110596724B publication Critical patent/CN110596724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/10 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems; systems determining position data of a target for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S7/4804 Details of systems according to group G01S17/00; auxiliary means for detecting or identifying lidar signals or the like, e.g. laser illuminators
    • G01S7/484 Details of pulse systems; transmitters
    • G01S7/4865 Details of pulse systems; receivers; time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/4876 Details of pulse systems; receivers; extracting wanted echo signals, e.g. pulse detection, by removing unwanted signals
    • G01S7/51 Display arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a method for measuring time-of-flight distance by dynamic histogram plotting, which comprises the following steps: S1, receiving the TDC output signal of a super pixel and plotting a first histogram in first-precision time units; S2, calculating a first time of flight from the first histogram; S3, locating the combined pixel according to the first time of flight and plotting a second histogram in second-precision time units, wherein the combined pixel is composed of at least one pixel and the super pixel comprises at least one combined pixel; S4, calculating a second time of flight from the second histogram. By dynamically adjusting the histogram between coarse and fine configurations in the time-of-flight distance measurement system, the invention achieves time-of-flight measurement over a large range with high precision.

Description

Method and system for measuring time-of-flight distance with dynamic histogram plotting
Technical Field
The invention relates to the technical field of computers, and in particular to a method and system for measuring time-of-flight distance by dynamic histogram plotting.
Background
The time-of-flight (TOF) method calculates the distance of an object by measuring the time a light beam takes to travel through space, and is widely used in consumer electronics, autonomous driving, AR/VR and other fields thanks to its high precision and large measurement range.
Distance measurement systems based on the time-of-flight principle, such as time-of-flight depth cameras and lidar, typically include a transmitting end with a light source and a receiving end. The light source emits a beam into the target space to provide illumination, the receiving end collects the beam reflected back by the target, and the system calculates the distance to the object from the time the beam takes to travel to the target and back.
At present, lidar based on the time-of-flight method falls mainly into two types, mechanical and non-mechanical. Mechanical lidar achieves 360-degree wide-field ranging through a rotating base and offers a large measurement range, but suffers from high power consumption, low resolution and low frame rate. Non-mechanical area-array lidar can alleviate these problems to some extent: it projects an area beam with a certain field of view into space in a single shot and receives it with an area-array receiver, which improves resolution and frame rate; in addition, it is easier to assemble because no rotating parts are required. Nevertheless, area-array lidar still faces some challenges.
The higher the resolution of area-array lidar, the more complete the effective information; dynamic measurement further raises the requirements on frame rate and measurement precision. However, improving resolution, frame rate and precision usually relies on enlarging the receiving-end circuitry and improving the modulation and demodulation scheme. Enlarging the circuitry increases power consumption and cost and affects the signal-to-noise ratio; it also increases the required on-chip memory capacity, which poses serious challenges for mass production. In addition, current modulation and demodulation schemes struggle to deliver high precision and low power consumption at the same time.
The above background is provided only to assist in understanding the inventive concept and technical solutions of the present invention; it does not necessarily belong to the prior art of the present patent application, and, absent clear evidence that the above content was disclosed before the filing date of the present application, it should not be used to evaluate the novelty and inventive step of the present application.
Disclosure of Invention
The present invention is directed to a method and system for measuring time-of-flight distance by dynamic histogram plotting, so as to solve at least one of the problems described in the background.
In order to achieve the above purpose, the technical solution of the embodiment of the present invention is realized as follows:
A time-of-flight distance measuring method with dynamic histogram plotting comprises the following steps:
S1, receiving the TDC output signal of a super pixel, and plotting a first histogram in first-precision time units;
S2, calculating a first time of flight from the first histogram;
S3, locating the combined pixel according to the first time of flight, and plotting a second histogram in second-precision time units, wherein the combined pixel is composed of at least one pixel and the super pixel comprises at least one combined pixel;
S4, calculating a second time of flight from the second histogram.
In some embodiments, the addresses of the first-precision and second-precision time units are preconfigured such that the time interval of the first histogram is larger than that of the second histogram, or, equivalently, such that the time resolution of the first histogram is coarser than that of the second histogram.
In some embodiments, the pulse-waveform position is located by a maximum-peak method based on the first or second histogram, and the first or second time of flight is calculated accordingly.
In some embodiments, only the pixels within the combined pixel are activated when the second histogram is plotted.
In some embodiments, when the first time of flight cannot be calculated from the first histogram in step S2, the method returns to step S1 and continues plotting the first histogram until the first time of flight can be calculated.
In some embodiments, the time interval of the first histogram is determined by the measurement range and the number of time units of the histogram, while the time interval of the second histogram is determined from the first time of flight: it is centred on the first time of flight with a certain margin added on both sides, the margin being set to 1%-25% of the time interval of the first histogram.
In some embodiments, the relationship between the position of the combined pixel and the first time of flight is stored in advance, so that the combined pixel can be located from this relationship once the first time of flight has been acquired.
In some embodiments, when the first or second histogram is plotted, the corresponding pixel is activated to acquire photons only within a time range that contains the time interval of the first or second histogram.
Another technical solution of the invention is as follows:
A time-of-flight distance measurement system with dynamic histogram plotting, comprising: a transmitter configured to emit a pulsed light beam; a collector configured to collect photons of the pulsed light beam reflected back by an object and form photon signals; and a processing circuit, connected to the transmitter and the collector, configured to perform the method of any one of claims 1-10 so as to plot the histograms and calculate the time of flight from them.
The technical scheme of the invention has the beneficial effects that:
according to the invention, the flight time measurement with large range and high precision is realized by dynamically performing coarse-fine adjustment on the histogram in the flight time distance measurement system, and the problems of high cost and large mass production difficulty of monolithic integration caused by large memory capacity of a histogram circuit in the prior art are solved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic view of a time-of-flight distance measurement system according to one embodiment of the present invention.
FIG. 2 is a schematic view of a light source according to one embodiment of the invention.
Fig. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the invention.
FIG. 4 is a schematic diagram of a readout circuit according to one embodiment of the invention.
Fig. 5 is a histogram diagram in accordance with one embodiment of the present invention.
FIG. 6 illustrates a time-of-flight measurement method with dynamic histogram plotting according to one embodiment of the invention.
FIG. 7 is a time-of-flight measurement method according to yet another embodiment of the invention.
FIG. 8 is a method of interpolation-based time-of-flight measurement according to one embodiment of the invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly understood, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly connected to the other element or be indirectly connected to the other element. The connection may be for fixation or for circuit connection.
It is to be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for convenience in describing the embodiments of the present invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed in a particular orientation, and be in any way limiting of the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present invention, "a plurality" means two or more unless specifically limited otherwise.
The invention provides a method and system for measuring time-of-flight distance by dynamic histogram plotting. For ease of understanding, an embodiment of the measurement system is described first.
As an embodiment of the present invention, a distance measuring system is provided, which has a stronger resistance to ambient light and a higher resolution.
FIG. 1 is a schematic diagram of a time-of-flight distance measurement system according to one embodiment of the present invention. The distance measurement system 10 includes an emitter 11, a collector 12 and a processing circuit 13. The emitter 11 provides an emission beam 30 to the target space to illuminate an object 20 in the space; at least part of the emission beam 30 is reflected by the object 20 to form a reflected beam 40, and at least part of the light signal (photons) of the reflected beam 40 is collected by the collector 12. The processing circuit 13 is connected to both the emitter 11 and the collector 12 and synchronizes their trigger signals, so as to calculate the time required for the beam emitted by the emitter 11 to be received by the collector 12, i.e. the flight time t between the emission beam 30 and the reflected beam 40; the distance D of the corresponding point on the object can then be calculated by the following formula:
D=c·t/2 (1)
where c is the speed of light.
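As a quick numerical illustration of formula (1), the short Python sketch below converts a measured round-trip time of flight into a distance; the 100 ns example value is purely illustrative and not taken from the patent.

```python
C = 299_792_458.0  # speed of light c in m/s

def distance_from_tof(t_flight_s: float) -> float:
    """Distance D = c * t / 2 for a round-trip time of flight t (formula (1))."""
    return C * t_flight_s / 2.0

# e.g. a 100 ns round trip corresponds to roughly 15 m
print(distance_from_tof(100e-9))  # ~14.99
```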
The emitter 11 includes a light source 111 and an optical element 112. The light source 111 may be a light emitting diode (LED), an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL), or the like, or an array light source composed of a plurality of such light sources; preferably, the light source 111 is a VCSEL array light source chip formed by fabricating a plurality of VCSEL light sources on a single semiconductor substrate. The light beam emitted by the light source 111 may be visible, infrared or ultraviolet light. The light source 111 emits light beams under the control of the processing circuit 13; for example, in one embodiment the light source 111 emits a pulsed beam at a certain frequency (pulse period) under the control of the processing circuit 13, which can be used in direct time-of-flight (direct TOF) measurement. The frequency is set according to the measurement distance, for example 1 MHz to 100 MHz for measurement distances from several meters to several hundred meters. It will be appreciated that the light source 111 may be controlled to emit the relevant beam by a part of the processing circuit 13 or by a sub-circuit existing independently of the processing circuit 13, such as a pulse signal generator.
The optical element 112 receives the pulsed light beam from the light source 111, optically modulates the pulsed light beam, such as by diffraction, refraction, reflection, etc., and then emits the modulated light beam, such as a focused light beam, a flood light beam, a structured light beam, etc., into the space. The optical elements 112 may be in the form of one or more combinations of lenses, diffractive optical elements, masks, mirrors, MEMS mirrors, and the like.
The processing circuit 13 may be a stand-alone dedicated circuit, such as a dedicated SoC chip, an FPGA chip or an ASIC chip, or it may include a general-purpose processor; for example, when the depth camera is integrated into a smart terminal such as a mobile phone, television or computer, the processor in the terminal can serve as at least part of the processing circuit 13.
The collector 12 includes a pixel unit 121 and an imaging lens unit 122; the imaging lens unit 122 receives at least part of the modulated light beam reflected back by the object and directs it onto the pixel unit 121. In one embodiment, the pixel unit 121 is composed of a single-photon avalanche diode (SPAD), or is an array pixel unit composed of a plurality of SPAD pixels, the array size of which represents the resolution of the depth camera, e.g. 320 × 240. A SPAD can respond to a single incident photon and thus detect single photons, and its high sensitivity and fast response enable long-range, high-precision measurement. Compared with image sensors based on light integration, such as CCD/CMOS sensors, a SPAD can count single photons, for example using time-correlated single photon counting (TCSPC) to collect weak light signals and calculate the time of flight. Generally, a readout circuit (not shown in the figure) composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC) and the like is also connected to the pixel unit 121. These circuits can be integrated with the pixels and can also be regarded as part of the processing circuit 13; for convenience of description, they will be collectively referred to as the processing circuit 13.
In some embodiments, the distance measurement system 10 may further include a color camera, an infrared camera, an IMU, etc., and a combination thereof may implement more abundant functions, such as 3D texture modeling, infrared face recognition, SLAM, etc.
In some embodiments, emitter 11 and collector 12 may be arranged coaxially, i.e. they are realized by an optical device with reflection and transmission functions, such as a half-mirror.
In a direct time-of-flight distance measurement system using SPADs, a single photon incident on a SPAD pixel triggers an avalanche; the SPAD outputs an avalanche signal to the TDC circuit, which measures the time interval from the emission of the photon by the emitter 11 to the avalanche it causes. After many measurements, these time intervals are accumulated into a histogram by a time-correlated single photon counting (TCSPC) circuit, which recovers the waveform of the whole pulse signal; the time corresponding to the waveform can then be determined, the time of flight determined from it, and finally the distance information of the object calculated from the time of flight. Assume the pulse period of the emitted beam is Δt, the maximum measurement range of the distance measurement system is Dmax, and the corresponding maximum time of flight is t1 = 2·Dmax/c, where c is the speed of light. To avoid signal aliasing it is generally required that Δt ≥ t1. If TCSPC requires n measurements, the time needed to perform a single-frame measurement (the frame period) is no less than n·t1, i.e. each frame period contains n photon-counting measurements. For example, if the maximum measurement range is 150 m, the corresponding pulse period Δt is 1 µs, and with n = 100000 the frame period is no less than 100 ms and the frame rate no higher than 10 fps. It follows that in the TCSPC approach the maximum measurement range constrains the pulse period and thereby limits the frame rate of the distance measurement.
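The arithmetic in the preceding paragraph can be reproduced in a few lines of Python; this is only a worked restatement of the 150 m / n = 100000 example above, not part of the patented method.

```python
C = 299_792_458.0      # speed of light in m/s

d_max = 150.0          # maximum measurement range in metres (example from the text)
n = 100_000            # TCSPC photon-counting measurements per frame

t1 = 2.0 * d_max / C               # maximum time of flight, ~1 us
pulse_period = t1                  # the pulse period must satisfy delta_t >= t1
frame_period = n * pulse_period    # single-frame measurement time >= n * t1
frame_rate = 1.0 / frame_period    # upper bound on the frame rate

print(f"t1 ~ {t1 * 1e6:.2f} us, frame period >= {frame_period * 1e3:.0f} ms, "
      f"frame rate <= {frame_rate:.1f} fps")
```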
FIG. 2 is a schematic view of a light source according to one embodiment of the invention. The light source 111 is composed of a plurality of sub-light sources arranged in a pattern on a single substrate (or on multiple substrates). The substrate may be a semiconductor substrate, a metal substrate or the like, and the sub-light sources may be light emitting diodes, edge emitting lasers, vertical cavity surface emitting lasers (VCSELs), etc.; preferably, the light source 111 is an array VCSEL chip composed of a plurality of VCSEL sub-light sources arranged on a semiconductor substrate. The sub-light sources emit light beams of any desired wavelength, such as visible, infrared or ultraviolet light. The light source 111 emits light under the modulation drive of a driving circuit (which may be part of the processing circuit 13), such as continuous-wave modulation or pulse modulation. The sub-light sources may also emit light in groups or all together under the control of the driving circuit; for example, the light source 111 includes a first sub-light-source array 201, a second sub-light-source array 202, and so on, with the first sub-light-source array 201 emitting under the control of a first driving circuit and the second sub-light-source array 202 under the control of a second driving circuit. The arrangement of the sub-light sources may be one-dimensional or two-dimensional, regular or irregular. For ease of analysis, FIG. 2 only schematically shows an example in which the light source 111 is an 8 × 9 regular array of sub-light sources divided into 4 × 3 = 12 groups, each group distinguished by a different symbol in the drawing; that is, the light source 111 is composed of 12 regularly arranged 3 × 2 sub-light-source arrays.
FIG. 3 is a schematic diagram of a pixel unit in a collector according to an embodiment of the invention. The pixel unit comprises a pixel array 31 and a readout circuit 32: the pixel array 31 is a two-dimensional array composed of a plurality of pixels 310, and the readout circuit 32 is composed of a TDC circuit 321, a histogram circuit 322 and the like. The pixel array collects at least part of the light beam reflected by the object and generates corresponding photon signals; the readout circuit 32 processes the photon signals to plot a histogram reflecting the waveform of the pulses emitted by the light source in the emitter, from which the time of flight can be calculated and the result finally output. The readout circuit 32 may be a single TDC circuit and histogram circuit, or an array readout circuit composed of a plurality of TDC circuit units and histogram circuit units.
In one embodiment, when the emitter 11 emits a spot beam towards the object to be measured, the imaging lens unit 122 in the collector 12 directs the reflected spot beam onto the corresponding pixels. Generally, to receive as much of the optical signal of the reflected beam as possible, the size of a single spot is set to correspond to a plurality of pixels (correspondence is understood here as imaging; the imaging lens unit 122 generally includes an imaging lens); for example, a single spot corresponds to 2 × 2 = 4 pixels in FIG. 3, i.e. photons reflected back by the spot beam are received by the corresponding 4 pixels with a certain probability. For convenience of description, the pixel area formed by the corresponding plurality of pixels is referred to as a "combined pixel"; the size of the combined pixel may be set according to actual needs and includes at least one pixel, such as 3 × 3 or 4 × 4. In general, the spot is circular, elliptical or the like, and the size of the combined pixel should be set equal to or slightly smaller than the size of the spot; however, because the magnification differs with the distance of the object to be measured, the size of the combined pixel needs to be considered comprehensively when it is set.
In the embodiment shown in FIG. 3, the pixel unit 31 is exemplified as an array of 14 × 18 pixels. Generally, the arrangement of the emitter 11 and the collector 12 in the measurement system 10 can be divided into coaxial and off-axis modes. In the coaxial case, the beam emitted by the emitter 11 is reflected by the object and collected by the corresponding combined pixel in the collector 12, and the position of the combined pixel is not influenced by the distance of the object. In the off-axis case, however, parallax causes the position of the spot on the pixel unit to change with the distance of the object, generally shifting along the direction of the baseline (the line between the emitter 11 and the collector 12; in the present invention the baseline direction is uniformly represented by the horizontal direction). Therefore, when the distance of the object is unknown, the position of the combined pixel is uncertain. To solve this problem, the present invention sets up a pixel area composed of more pixels than the combined pixel (referred to herein as a "super pixel") for receiving the reflected spot beam. The size of the super pixel must take into account both the measurement range of the system 10 and the length of the baseline, so that the combined pixels corresponding to spots reflected back by objects at any distance within the measurement range all fall inside the super-pixel area; i.e. the size of the super pixel should cover at least one combined pixel. In general, the super pixel has the same size as the combined pixel in the direction perpendicular to the baseline and is larger than the combined pixel along the baseline. The number of super pixels is typically the same as the number of spot beams acquired in a single measurement by the collector 12, which is 4 × 3 in FIG. 3.
In one embodiment, the super-pixel is arranged to: when at the lower end of the measurement range, i.e. near, the spot falls to one side of the superpixel (left or right, depending on the relative positions of emitter 11 and collector 12); the spot falls on the other side of the superpixel when at the upper limit of the measurement range, i.e. at distance. In the embodiment shown in fig. 3, the superpixels are set to a size of 2 × 6, for example, for spots 363, 373, 383, the corresponding superpixels are 361, 371, and 381, respectively, where spots 363, 373, and 383 are the spot beams reflected back by the far, middle, and near objects, respectively, and the corresponding combined pixels fall on the left, middle, and right sides of the superpixel, respectively.
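The drift of the combined pixel along the baseline can be pictured with a simple pinhole-camera sketch. Nothing below comes from the patent: the focal length, baseline and pixel pitch are made-up parameters, and the relation shift ≈ f·b/D is the ordinary triangulation approximation, used here only to show why a super pixel must span several combined-pixel positions.

```python
def spot_shift_pixels(distance_m: float,
                      focal_mm: float = 4.0,      # assumed lens focal length
                      baseline_mm: float = 20.0,  # assumed emitter-collector baseline
                      pitch_um: float = 10.0) -> float:
    """Approximate lateral shift of the imaged spot (in pixels) due to parallax."""
    disparity_mm = focal_mm * baseline_mm / (distance_m * 1000.0)
    return disparity_mm / (pitch_um / 1000.0)

for d in (0.3, 1.0, 5.0, 50.0):
    print(f"object at {d:5.1f} m -> spot shift ~ {spot_shift_pixels(d):5.1f} px")
```

With these assumed numbers the spot moves by tens of pixels at the near end of the range but by only a fraction of a pixel at the far end, which is exactly the spread the super pixel has to absorb.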
In one embodiment, the combined pixel shares one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the combined pixel, and when any pixel of the combined pixel receives a photon and generates a photon signal, that TDC circuit unit can calculate the corresponding time of flight. This applies to the coaxial case, in which the position of the combined pixel does not vary with the distance of the object to be measured. In the embodiment shown in FIG. 3, a TDC circuit array of 4 × 3 TDC circuit units will be included.
In one embodiment, the super pixel shares one TDC circuit unit, that is, one TDC circuit unit is connected to every pixel in the super pixel, and when any pixel of the super pixel receives a photon and generates a photon signal, that TDC circuit unit can calculate the corresponding time of flight. Since the super pixel accommodates the combined-pixel offset caused by off-axis parallax, super-pixel sharing of the TDC can be applied to the off-axis case. In the embodiment shown in FIG. 3, a TDC circuit array of 4 × 3 TDC circuit units is included.
Sharing the TDC circuit can effectively reduce the number of TDC circuits, thereby reducing the size and power consumption of the readout circuit.
In the off-axis case, more pixels are needed to form each super pixel, and the number of spots that can be collected in a single measurement (single exposure) is much smaller than the number of pixels; in other words, the resolution of the collected effective depth data (time-of-flight values) is much lower than the pixel resolution. For example, in FIG. 3 the pixel resolution is 14 × 18 while the spots are distributed 4 × 3, i.e. the effective depth-data resolution of a single-frame measurement is 4 × 3.
To improve the resolution of the measured depth data, the spots emitted by the emitter 11 can be shifted from frame to frame in a multi-frame measurement, producing a scanning effect; the spots received by the collector 12 shift correspondingly between frames, for example spots 343 and 353 in FIG. 3 correspond to two adjacent frames of measurement, thereby improving the resolution. In one embodiment, the shifting of the spots can be realized by controlling the sub-light sources on the light source 111 in groups, i.e. in two or more adjacent frames the adjacent sub-light-source groups are turned on in turn, for example the first sub-light-source array 201 in the first frame, the second sub-light-source array 202 in the second frame, and so on. By grouping the sub-light sources in both the horizontal and the vertical direction, the resolution of the effective depth data can be improved in two dimensions.
Because the spots are shifted between frames in the multi-frame measurement, the super pixels corresponding to spots at different positions also need to be shifted when they are configured. As shown in FIG. 3, the super pixel corresponding to spot 343 is 341 and the super pixel corresponding to spot 353 is 351; super pixel 351 is laterally shifted relative to super pixel 341, and the two partially overlap. For the case where the super pixels of different frames overlap one another, and in order to ensure that the TDC circuit can accurately perform photon-counting time-of-flight measurement for the super pixel of each frame, the present application provides a scheme in which the TDC circuit is doubly shared.
In one embodiment, the pixel region connected to a single TDC circuit unit includes all the shifted super pixels of the multi-frame measurement, and the pixel regions corresponding to two adjacent TDC circuit units overlap. Specifically, in the embodiment shown in FIG. 3, the pixel region 391 shares one TDC circuit unit and includes the 6 super pixels corresponding to the 6 frames measured when the 6 groups of sub-light sources are turned on in turn. Similarly, the adjacent pixel region 392 shares one TDC circuit unit, and the two pixel regions 391 and 392 partially overlap, so that some pixels are connected to two TDC circuit units. During a single-frame measurement the processing circuit 13 gates the appropriate pixels according to the projected spots, so that the photon signals acquired by those pixels are measured by a single TDC circuit unit, thereby avoiding crosstalk and errors. In one embodiment, the number of TDC circuits is the same as the number of spots collected by the collector 12 during a single-frame measurement, 4 × 3 in FIG. 3; each shared TDC circuit is connected to 4 × 10 pixels, and the pixel regions connected to adjacent TDC circuit units overlap by 4 × 4 pixels.
In a single-frame measurement period, the TDC circuit receives photon signals from the pixels in the super-pixel region connected to it, calculates the time interval (i.e. the flight time) between each signal and the initial clock signal, converts it into a thermometer code or binary code, and stores it in the histogram circuit. Generally, the larger the measurement range, the wider the time interval the TDC circuit has to measure; in addition, the higher the required precision, the higher the time resolution the TDC circuit must provide. Whether the time interval becomes wider or the time resolution higher, the TDC circuit must use a larger-scale circuit to output a binary code with more bits, and the increase in the number of bits raises the required storage capacity of the histogram-circuit memory. The larger the memory capacity, the higher the cost and the greater the difficulty of mass-producing a monolithic integration. To this end, the invention provides a readout-circuit scheme with an adjustable histogram circuit.
FIG. 4 is a schematic diagram of a readout circuit according to one embodiment of the invention. The readout circuit includes a TDC circuit 41 and a histogram circuit 42. The TDC circuit 41 measures the time interval of each photon signal and converts it into a time code (a binary code, thermometer code or the like); the histogram circuit 42 then increments the count in the corresponding time unit (i.e. the storage unit holding the time information), for example by adding 1. After many measurements, the photon counts in all time units are accumulated and a time histogram can be plotted. The plotted histogram is shown in FIG. 5, where ΔT is the width of a time unit, t1 and t2 are the start and end times of histogram plotting, [t1, t2] is the time interval of the histogram, and t2 - t1 is its total time width; the ordinate of each time unit ΔT is the photon count stored in the corresponding storage unit. Based on the histogram, the position of the pulse waveform can be determined, for example by a maximum-peak method, and the corresponding time of flight t obtained.
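A minimal numerical sketch of the histogram statistics of FIG. 5 and the maximum-peak readout is given below. It is not the patent's circuit: NumPy stands in for the memory matrix, the simulated time stamps are invented, and only the binning-plus-argmax logic is meant to mirror the description.

```python
import numpy as np

def build_histogram(timestamps_ns, t1_ns, t2_ns, n_units):
    """Bin photon time stamps into n_units equal time units over [t1, t2]."""
    edges = np.linspace(t1_ns, t2_ns, n_units + 1)
    counts, _ = np.histogram(timestamps_ns, bins=edges)
    return counts, edges

def tof_from_histogram(counts, edges):
    """Maximum-peak method: return the centre of the most populated time unit."""
    k = int(np.argmax(counts))
    return 0.5 * (edges[k] + edges[k + 1])

# usage: simulated echo around 42 ns on top of uniform background counts
rng = np.random.default_rng(0)
stamps = np.concatenate([rng.normal(42.0, 0.4, 2000),    # pulse photons
                         rng.uniform(0.0, 100.0, 5000)]) # background photons
counts, edges = build_histogram(stamps, 0.0, 100.0, 100)
print(f"estimated time of flight ~ {tof_from_histogram(counts, edges):.1f} ns")
```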
In one embodiment, the histogram circuit 42 includes an address decoder 421, a memory matrix 422, a read/write circuit 424 and a histogram plotting circuit 425. The TDC circuit feeds the acquired time code (binary code, thermometer code, etc.) reflecting the time interval to the address decoder 421, which converts it into address information used to address the memory matrix 422. Specifically, the memory matrix 422 includes a plurality of memory cells 423, i.e. time units, each configured in advance with a certain address (or address interval). When the time-code address received by the address decoder 421 matches the address of a memory cell, or falls within its address interval, the read/write circuit 424 performs a +1 operation on that cell, completing one photon count; after multiple measurements, the data in each memory cell reflect the number of photons received in the corresponding time interval. After a single-frame measurement (multiple measurements), the data of all memory cells in the memory matrix 422 are read out to the histogram plotting circuit 425 for histogram plotting.
To keep the storage capacity of the memory matrix as small as possible, the number of memory cells 423 actually has to be limited. For this purpose, the processing circuit applies a control signal to the histogram circuit 42 to dynamically set the addresses (address intervals) of the memory cells 423, thereby dynamically controlling the histogram time resolution ΔT and/or the time-interval width T. For example, with the number of storage units 423 unchanged, configuring each storage unit 423 with a larger address interval, i.e. increasing the time-unit width ΔT, enlarges the time interval that the whole memory matrix can store and thus the total time interval of the histogram; for convenience of description, a histogram with such a larger time interval is referred to as a coarse histogram. Conversely, configuring each storage unit 423 with a smaller address interval reduces the time interval that the memory matrix can store but increases the stored time resolution, i.e. the time resolution of the histogram; relative to the coarse histogram, a histogram with such a smaller time interval is referred to as a fine histogram.
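The coarse/fine reconfiguration can be summarised in a few lines: the number of storage units is fixed and only the address mapping (i.e. the covered interval) changes, so the same memory yields either a wide, low-resolution histogram or a narrow, high-resolution one. The unit count and the nanosecond values below are illustrative assumptions, not figures from the patent.

```python
N_UNITS = 1024  # assumed fixed number of storage (time) units in the histogram circuit

def time_unit_width(t_start_ns: float, t_end_ns: float, n_units: int = N_UNITS) -> float:
    """Width delta_T of one time unit when [t_start, t_end] is spread over n_units."""
    return (t_end_ns - t_start_ns) / n_units

dt_coarse = time_unit_width(0.0, 1000.0)                  # whole 1 us range: ~0.98 ns per unit
dt_fine = time_unit_width(420.0 - 50.0, 420.0 + 50.0)     # window around a coarse estimate: ~0.10 ns per unit
print(f"coarse unit ~ {dt_coarse:.2f} ns, fine unit ~ {dt_fine:.2f} ns")
```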
In the invention, large-range, high-precision time-of-flight measurement is achieved by dynamically adjusting the histogram between coarse and fine configurations during the time-of-flight measurement.
FIG. 6 shows a time-of-flight measurement method with dynamic histogram plotting according to one embodiment of the invention. The method comprises the following steps:
step one, drawing a coarse histogram in coarse precision time units. That is, the address or address interval corresponding to each time unit in the memory matrix 422 is configured by applying the control signal, i.e. T and Δ T are set, and Δ T is configured as a larger time interval Δ T in this step 1 . In general, the histogram time interval T is set by considering the measurement range, the time interval Δ T 1 During setting, the measurement range and the number of histogram storage units should be considered, that is, the flight time corresponding to the measurement range is distributed to all the number of histogram storage units, such as average distribution or non-average distributionEtc., so that all memory cells can cover the measurement range. And after multiple measurements, matching the flight time value obtained by each measurement to perform an operation of adding 1 on the corresponding time unit, and finally completing the drawing of the rough square chart.
Step two: calculate a coarse time-of-flight value t1 from the coarse histogram. Based on the coarse histogram, the position of the pulse waveform can be found, for example by a maximum-peak method, and the corresponding time of flight is read out as the coarse time-of-flight value t1; the precision, or minimum resolution, of this value is the time-unit width ΔT1.
When the measurement range is large and the number of storage units is limited, ΔT1 becomes large, and when the number of background photons is large the pulse photons may be submerged in the background light, so that the pulse waveform cannot be detected. Therefore, in some embodiments, the measurement range may be divided into several intervals, each corresponding to a respective time-of-flight interval; the time-unit width ΔT of each interval T may be the same or different. When plotting the coarse histogram, the intervals can be processed one by one. Since the distance of the measured object is unknown, it is also unknown which interval its time of flight falls into, so it may happen that no pulse waveform is detected when the coarse histogram is plotted for a particular interval, i.e. no coarse time-of-flight value can be calculated. It may also happen that no pulse waveform is ever found because of an error or because the object is too far away; to avoid endless cyclic detection, a maximum number of cycles may be set, for example when the number of coarse-histogram plots exceeds a certain threshold (such as 3), the target is considered not detected this time (or considered to be located at infinity) and the measurement is ended.
Step three: plot a fine histogram in fine time units according to the obtained coarse time-of-flight value. Since a coarse time-of-flight value is now known, a new round of multiple measurements can be made and a corresponding histogram plotted; after the histogram circuit is reconfigured by the control signal, the address or address interval of each time unit in the memory matrix 422 is configured to a smaller time interval ΔT2. In general, ΔT2 only needs to cover a small measurement-range interval that contains the true time-of-flight value, distributed over the number of histogram storage units; this interval can be centred on the coarse time-of-flight value with a certain margin added on both sides, for example it can be set to [t1 - T', t1 + T'], where the smaller T' is set, the smaller ΔT2 and the higher the resolution. In one embodiment, T' may be set to 5% of T, so that the sum of the time intervals of all time units is only 10% of the time interval of the coarse histogram. In other embodiments, the ratio of the margin to the coarse-histogram time interval may be set in the range of 1%-25%. A new round of multiple measurements is then carried out, and the fine histogram is completed by matching each measured time-of-flight value to its time unit and incrementing that unit by 1.
Step four: calculate the fine time of flight t2 from the fine histogram. Based on the fine histogram, the waveform position can be found, for example by a maximum-peak method, and the corresponding time of flight is read out as the fine time-of-flight value t2; the precision, or minimum resolution, of this value is the time-unit width ΔT2. If T' is set to 5% of T in step three, the fine time of flight is 10 times more precise than the coarse time of flight (the minimum resolution is improved by a factor of 10).
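Putting steps one to four together, a hedged end-to-end sketch might look as follows. `measure_timestamps(lo, hi)` is a hypothetical callback standing in for one round of multi-shot TCSPC acquisition gated to the window [lo, hi]; the unit count and the 5% margin follow the example in step three, and the peak readout is the maximum-peak method.

```python
import numpy as np

def coarse_fine_tof(measure_timestamps, t_range_ns: float,
                    n_units: int = 1024, margin_frac: float = 0.05) -> float:
    """Coarse histogram over the full range, then a fine histogram over
    [t_coarse - T', t_coarse + T'] with T' = margin_frac * t_range_ns."""

    def peak_tof(stamps, lo, hi):
        edges = np.linspace(lo, hi, n_units + 1)
        counts, _ = np.histogram(stamps, bins=edges)
        k = int(np.argmax(counts))
        return 0.5 * (edges[k] + edges[k + 1])

    # steps one and two: coarse pass over the whole measurement range
    t_coarse = peak_tof(measure_timestamps(0.0, t_range_ns), 0.0, t_range_ns)

    # steps three and four: fine pass over the narrow window around t_coarse
    margin = margin_frac * t_range_ns
    lo, hi = max(0.0, t_coarse - margin), t_coarse + margin
    return peak_tof(measure_timestamps(lo, hi), lo, hi)
```

With margin_frac = 0.05 the fine pass covers only 10% of the coarse interval, so with the same number of storage units the minimum resolution improves by a factor of 10, matching step four.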
This measurement method with dynamic coarse-fine adjustment of the histogram first performs coarse positioning over a larger measurement range and then performs a fine measurement based on the positioning result. It will be appreciated that the coarse-fine scheme can also be extended to three or more passes: for example, in one embodiment a first time of flight is obtained at a first time resolution, a second time of flight is then obtained at a second time resolution based on the first, and finally a third time of flight is obtained at a third time resolution based on the second. The precision of the three passes increases in turn, so that an even higher-precision measurement can ultimately be achieved.
In one embodiment, because histogram plotting only counts time-of-flight values that fall within its time interval T, the individual pixels in the collector 12 of the measurement system may be activated (enabled) only during a specified time interval that contains the histogram time interval T, thereby reducing power consumption. For example, when the time interval of the histogram is [3 ns, 10 ns], the time interval in which the pixel is activated may be set to [2.5 ns, 10.5 ns].
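A trivial sketch of this gating: the pixel-enable window is just the histogram interval padded by a small guard time. The 0.5 ns guard reproduces the [3 ns, 10 ns] to [2.5 ns, 10.5 ns] example and is not a prescribed value.

```python
def activation_window(hist_start_ns: float, hist_end_ns: float,
                      guard_ns: float = 0.5) -> tuple[float, float]:
    """Pixel-enable interval containing the histogram interval plus a guard time."""
    return hist_start_ns - guard_ns, hist_end_ns + guard_ns

print(activation_window(3.0, 10.0))  # (2.5, 10.5)
```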
It will be appreciated that the above measurement method applies not only to coaxial distance measurement systems but also to off-axis measurement systems. It should be noted in particular that, for an off-axis measurement system containing the collector shown in FIG. 3, the dynamic histogram adjustment scheme can additionally be used to locate the combined pixel within the super pixel, which both improves precision and reduces power consumption. FIG. 7 shows a time-of-flight measurement method according to a further embodiment of the invention. With reference to FIG. 3, the method comprises the following steps:
step one, receiving a super-pixel TDC output signal, and drawing a coarse histogram by using a coarse precision time unit. Since the distance of the object is not clear before the measurement, the position of the spot, that is, the position of the combined pixel cannot be specified, and the combined pixel may fall into a different position of the super pixel depending on the distance of the object. In this step, therefore, each of the super pixels is first enabled to be in an active state to receive photons and receive photon signals from the shared TDC of that super pixel, followed by histogram rendering. The histogram is a dynamically adjusted histogram scheme as shown in fig. 6, and a coarse histogram is drawn in this step using coarse-precision time units.
Step two: calculate a coarse time-of-flight value t1 from the coarse histogram. Based on the coarse histogram, the waveform position can be found, for example by a maximum-peak method, and the corresponding time of flight is read out as the coarse time-of-flight value t1; the precision, or minimum resolution, of this value is the time-unit width ΔT1.
When the measurement range is large and the number of storage units is limited, ΔT1 becomes large, and when the number of background photons is large the pulse photons may be submerged in the background light, so that the pulse waveform cannot be detected. Therefore, in some embodiments, the measurement range may be divided into several intervals, each corresponding to a respective time-of-flight interval; the time-unit width ΔT of each interval T may be the same or different. When plotting the coarse histogram, the intervals can be processed one by one. Since the distance of the object is unknown, it is also unknown which interval its time of flight falls into, so no pulse waveform may be detected when the coarse histogram is plotted for a particular interval. In such a case, for example when the waveform position cannot be found from the coarse histogram in step two, the next coarse histogram is plotted again in step one until a pulse waveform is found. It may also happen that no pulse waveform is ever found because of an error or because the object is too far away; to avoid endless cyclic detection, a maximum number of cycles may be set, for example when the number of coarse-histogram plots exceeds a certain threshold (such as 3), the target is considered not detected this time (or considered to be located at infinity) and the measurement is ended.
Step three: locate the combined pixel and plot a fine histogram in fine time units according to the obtained coarse time-of-flight value. Since the coarse time-of-flight value is now known, the position of the combined pixel can be located from the coarse time of flight and the parallax; in general, the relationship between the combined-pixel position and the coarse time-of-flight value needs to be stored in the system in advance, so that the combined-pixel position can be located directly from this relationship once the coarse time-of-flight value has been obtained. Only the combined pixel is then activated, based on its position, while the fine histogram is plotted in fine time units. Since the coarse time-of-flight value is known, a new round of multiple measurements can be performed and a corresponding histogram plotted; the addresses or address intervals of the time units in the memory matrix 422 of the histogram circuit, after reconfiguration by the control signal, are set to a smaller time interval ΔT2. In general, ΔT2 only needs to cover a small measurement-range interval that contains the true time-of-flight value, distributed over the number of histogram storage units; this interval can be centred on the coarse time-of-flight value with a certain margin added on both sides, for example it can be set to [t1 - T', t1 + T'], where the smaller T' is set, the smaller ΔT2 and the higher the resolution. In one embodiment, T' may be set to 5% of T, so that the sum of the time intervals of all time units is only 10% of the coarse-histogram time interval. In other embodiments, the ratio of the margin to the coarse-histogram time interval may be set in the range of 1%-25%. A new round of multiple measurements is then carried out, and the fine histogram is completed by matching each measured time-of-flight value to its time unit and incrementing that unit by 1.
Step four: calculate the fine time of flight t2 from the fine histogram. Based on the fine histogram, the waveform position can be found, for example by a maximum-peak method, and the corresponding time of flight is read out as the fine time-of-flight value t2; the precision, or minimum resolution, of this value is the time-unit width ΔT2. If T' is set to 5% of T in step three, the fine time of flight is 10 times more precise than the coarse time of flight (the minimum resolution is improved by a factor of 10).
This measurement method with dynamic coarse-fine adjustment of the histogram first performs coarse positioning over a larger measurement range and then performs a fine measurement based on the positioning result. It will be appreciated that the coarse-fine scheme can also be extended to three or more passes: for example, in one embodiment a first time of flight is obtained at a first time resolution, a second time of flight is then obtained at a second time resolution based on the first, and finally a third time of flight is obtained at a third time resolution based on the second. The precision of the three passes increases in turn, so that an even higher-precision measurement can ultimately be achieved.
In one embodiment, because histogram plotting only counts time-of-flight values that fall within its time interval T, the individual pixels in the collector 12 of the measurement system may be activated (enabled) only during a specified time interval that contains the histogram time interval T, thereby reducing power consumption. For example, when the time interval of the histogram is [3 ns, 10 ns], the time interval in which the pixel is activated may be set to [2.5 ns, 10.5 ns].
An interpolation-based time-of-flight measurement method is described below. The embodiments of FIG. 2 and FIG. 3 described increasing the resolution through multi-frame measurement; it will be appreciated that in multi-frame measurement the dynamic histogram adjustment scheme shown in FIG. 6 or FIG. 7 can be used for each frame of depth-data measurement. For example, when the first sub-light-source array 201 is turned on, dynamic coarse-fine histogram plotting is performed to obtain a first depth-image frame; when the second sub-light-source array 202 is turned on, dynamic coarse-fine histogram plotting is performed to obtain a second depth-image frame; the first and second frames are then fused into a higher-resolution depth image. In some embodiments, 3 or more depth-image frames can also be acquired and fused into a higher-resolution depth image.
However, if dynamic coarse-fine adjustment has to be performed for every frame of depth-image acquisition, the acquisition time of each high-resolution fused depth image becomes relatively long and the overall frame rate is low. To increase the frame rate as much as possible, the invention provides an interpolation-based time-of-flight measurement method, shown in FIG. 8 according to one embodiment of the invention, which comprises the following steps:
Step one, acquiring a first time-of-flight of a first combined pixel corresponding to a first light source. In this step, the first light source in the emitter 11 is turned on to emit its corresponding speckle beam, which falls onto a combined pixel of the pixel unit 31 in the collector 12, for example the 4 × 3 spot represented by solid circles in fig. 3. The processing circuit then obtains a first time-of-flight of this combined pixel; for example, the coarse-fine dynamic adjustment scheme of the embodiment shown in fig. 6 or fig. 7, or any other scheme, may be used to obtain a fine time-of-flight (the first time-of-flight) of the combined pixel.
Step two, calculating by interpolation a second time-of-flight of a second super pixel corresponding to a second light source. When the second light source is turned on, it emits a speckle beam adjacent to that of the first light source, and this beam also falls onto a combined pixel of the collector 12; for convenience of illustration, only one spot 353 is drawn as a dashed circle in fig. 3. Because the positions of the first and second light sources are staggered, the spot 353 and the spot 343 are spatially staggered, and so are their corresponding combined pixels. Generally, when two spatial points are close to each other, their distances do not differ much. Therefore, in one embodiment, the time-of-flight value obtained in step one for the pixel corresponding to the spot 343 can be used as the second time-of-flight value (a coarse time-of-flight) of the super pixel 351 corresponding to the spot 353, after which the fine time-of-flight calculation is performed. In one embodiment, the estimate of the second time-of-flight value of the super pixel of the spot 353 can be obtained from multiple pixels corresponding to multiple surrounding first light sources, for example by interpolating the time-of-flight values of the pixels to its left and right. The interpolation may be one-dimensional or two-dimensional, and the interpolation method may be at least one of linear interpolation, spline interpolation, polynomial interpolation, and other interpolation methods.
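A minimal sketch of the interpolation step, assuming the simplest one-dimensional linear case with a left and a right neighboring spot (the function name, the 1-D arrangement, and the equal weighting are assumptions for illustration):

```python
def estimate_coarse_tof(tof_left, tof_right, weight=0.5):
    """Linearly interpolate a coarse time-of-flight for a spot lying between
    two neighboring spots whose times of flight are already known."""
    return (1.0 - weight) * tof_left + weight * tof_right
```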
Step three, locating a second combined pixel corresponding to the second light source according to the second time-of-flight and drawing a histogram. After the second time-of-flight is obtained by interpolation, the position of the spot within the super pixel, i.e. the position of the combined pixel, can be located based on this time-of-flight and the parallax; then only that combined pixel is activated based on its position, while the histogram is drawn in fine time units.
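The patent does not spell out the relationship between time-of-flight and spot position, but for a triangulation-style emitter/collector layout it can be sketched roughly as follows (purely illustrative; baseline, focal_px, the sign convention, and the pinhole approximation are all assumptions, not taken from the patent):

```python
C = 3.0e8  # speed of light, m/s

def locate_spot_column(tof, baseline, focal_px, zero_disparity_col):
    """Estimate the pixel column of the spot from a time-of-flight value.

    With an emitter/collector separated by `baseline` meters and a focal
    length of `focal_px` pixels, the spot shifts by roughly
    disparity = focal_px * baseline / depth, where depth = c * tof / 2.
    """
    depth = C * tof / 2.0
    disparity = focal_px * baseline / depth
    return zero_disparity_col - disparity
```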
Step four, calculating a third time-of-flight using the histogram. Based on the histogram, the waveform position is found by the maximum peak method or other methods, and the corresponding time-of-flight is read out as the third (fine) time-of-flight value t2; the precision, or minimum resolution, of this time-of-flight value is the time interval ΔT2 of one fine time unit.
Compared with the method described in fig. 6 or fig. 7, the time-of-flight measurement method of the above steps has the advantage that only a small fraction of the spots require the coarse-fine histogram drawing mode, which needs at least 2 rounds of time-of-flight measurement to obtain a high-precision time-of-flight value; for most spots, the time-of-flight values of known spots can be interpolated to serve as the coarse time-of-flight of the coarse histogram, and only a single fine histogram needs to be drawn based on that coarse value, so the efficiency can be greatly improved. For example, if the light sources are divided into 6 groups, only the first group needs a coarse-fine measurement when it is turned on, and each of the subsequent 5 groups needs only a single fine measurement when its time-of-flight is measured after being turned on.
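To make the saving concrete (a rough count under the assumption that a coarse round and a fine round take comparable time): with 6 light-source groups, running the coarse-fine scheme for every group would take 6 × 2 = 12 measurement rounds, whereas the interpolation scheme takes 2 rounds for the first group plus 1 round for each of the remaining 5 groups, i.e. 2 + 5 = 7 rounds, roughly a 40% reduction.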
In some embodiments, the surface of the measured object often has jumps, i.e. large differences in distance, in which case it is difficult to obtain an accurate time-of-flight value by interpolation, so performing a fine measurement based on the interpolated result may cause errors. Therefore, a check can be made before the interpolation in step two: when the difference between the time-of-flight values of the combined pixels used for interpolation (for example, the left and right spots) is greater than a certain threshold, it is considered that there is a jump in the depth of the object surface between those two spots, and the spots between them still use the coarse-fine histogram drawing scheme; the interpolation calculation is performed only when the difference is smaller than the threshold.
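A sketch of this guard, assuming the same simple midpoint interpolation as above (the threshold value is application-dependent and purely illustrative):

```python
def coarse_tof_or_none(tof_left, tof_right, jump_threshold=1.0e-9):
    """Interpolate a coarse time-of-flight from two neighboring spots, or
    return None when they differ by more than the jump threshold, in which
    case the spot falls back to the full coarse-fine histogram scheme."""
    if abs(tof_left - tof_right) > jump_threshold:
        return None                        # depth jump detected: do not interpolate
    return 0.5 * (tof_left + tof_right)    # simple linear (midpoint) interpolation
```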
In some embodiments, the first time-of-flight of the first combined pixel may itself be a coarse time-of-flight; that is, only a single coarse histogram needs to be drawn when performing the first time-of-flight calculation for the first combined pixel, and the coarse time-of-flight obtained from that coarse histogram is then used for the interpolation.
It is understood that when the distance measuring system of the present invention is embedded in a device or other hardware, corresponding structural or component changes may be made to adapt it as needed; such variants in essence still employ the distance measuring system of the present invention and should therefore be regarded as falling within its scope. The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. One of ordinary skill in the art will readily appreciate that the above-disclosed, presently existing or later to be developed, processes, machines, manufacture, compositions of matter, means, methods, or steps, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims (9)

1. A method for measuring the time-of-flight distance by drawing a dynamic histogram is characterized by comprising the following steps:
s1, enabling each pixel in the super-pixels to be in an activated state to receive photons, receiving photon signals output by the shared TDC of the super-pixels, and drawing a first histogram in a first precision time unit, wherein the super-pixels are set to be positioned on one side of the super-pixels at the lower limit of a measuring range and positioned on the other side of the super-pixels at the upper limit of the measuring range;
s2, calculating a first flight time by using the first histogram;
s3, according to the first flight time, positioning a combined pixel in the super pixels, only activating the pixels in the combined pixel, and drawing a second histogram by a second precision time unit; the combined pixel is composed of at least one pixel, the size of the combined pixel is equal to or slightly smaller than that of the light spot, and the super pixel comprises at least one combined pixel;
s4, calculating a second flight time by using the second histogram; the first time of flight is a coarse time of flight and the second time of flight is a fine time of flight; when the first histogram or the second histogram is plotted, the corresponding pixel is activated only in the time range of the time interval containing the first histogram or the second histogram when a photon is collected.
2. The time-of-flight distance measurement method according to claim 1, wherein: the addresses of the first precision time unit and the second precision time unit are pre-configured such that the time interval of the first histogram is greater than the time interval of the second histogram, or such that the time resolution of the first histogram is lower than the time resolution of the second histogram.
3. The time-of-flight distance measurement method according to claim 1, wherein: the pulse waveform position is located by using a maximum peak method based on the first histogram or the second histogram, and the first time of flight or the second time of flight is calculated accordingly.
4. The time-of-flight distance measurement method according to claim 1, wherein: when the first time of flight cannot be calculated using the first histogram in step S2, the process returns to step S1 to continue drawing the first histogram until the first time of flight is calculated.
5. The time-of-flight distance measurement method according to claim 1, wherein: the time interval of the first histogram is determined according to the measurement range and the number of time units of the first histogram; the time interval of the second histogram is determined from the first time of flight.
6. The time-of-flight distance measurement method according to claim 5, wherein: the time interval of the second histogram is determined by centering it on the first time of flight and adding a certain margin on each side.
7. The time-of-flight distance measurement method according to claim 6, wherein: the margin is set to 1%-25% of the time interval of the first histogram.
8. The time-of-flight distance measurement method according to claim 1, wherein: the relationship between the position of the combined pixel and the first time of flight is saved in advance, so that the combined pixel is located according to the relationship after the first time of flight is obtained.
9. A dynamic histogram drawing time-of-flight distance measurement system, comprising:
a transmitter configured to transmit a pulsed light beam;
a collector configured to collect photons in the pulsed light beam reflected back by an object and form a photon signal;
processing circuitry, coupled to the transmitter and the collector, for performing the time-of-flight distance measurement method of any one of claims 1-8.
CN201910889452.2A 2019-09-19 2019-09-19 Method and system for measuring flight time distance during dynamic histogram drawing Active CN110596724B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910889452.2A CN110596724B (en) 2019-09-19 2019-09-19 Method and system for measuring flight time distance during dynamic histogram drawing
PCT/CN2019/113712 WO2021051480A1 (en) 2019-09-19 2019-10-28 Dynamic histogram drawing-based time of flight distance measurement method and measurement system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910889452.2A CN110596724B (en) 2019-09-19 2019-09-19 Method and system for measuring flight time distance during dynamic histogram drawing

Publications (2)

Publication Number Publication Date
CN110596724A CN110596724A (en) 2019-12-20
CN110596724B true CN110596724B (en) 2022-07-29

Family

ID=68861643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910889452.2A Active CN110596724B (en) 2019-09-19 2019-09-19 Method and system for measuring flight time distance during dynamic histogram drawing

Country Status (2)

Country Link
CN (1) CN110596724B (en)
WO (1) WO2021051480A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021243612A1 (en) * 2020-06-03 2021-12-09 深圳市大疆创新科技有限公司 Distance measurement method, distance measurement apparatus, and movable platform
CN111812661A (en) * 2020-06-22 2020-10-23 深圳奥锐达科技有限公司 Distance measuring method and system
CN111856433B (en) * 2020-07-25 2022-10-04 深圳奥锐达科技有限公司 Distance measuring system and measuring method
CN112100449B (en) * 2020-08-24 2024-02-02 深圳市力合微电子股份有限公司 d-ToF distance measurement optimizing storage method for realizing dynamic large-range and high-precision positioning
CN112114324B (en) * 2020-08-24 2024-03-08 奥诚信息科技(上海)有限公司 Distance measurement method, device, terminal equipment and storage medium
CN112255635A (en) * 2020-09-03 2021-01-22 奥诚信息科技(上海)有限公司 Distance measuring method, system and equipment
CN112764048B (en) * 2020-12-30 2022-03-18 深圳市灵明光子科技有限公司 Addressing and ranging method and ranging system based on flight time
CN112817001B (en) * 2021-01-28 2023-12-01 深圳奥锐达科技有限公司 Time-of-flight ranging method, system and equipment
CN113514842A (en) * 2021-03-08 2021-10-19 奥诚信息科技(上海)有限公司 Distance measuring method, system and device
CN117741682A (en) * 2024-02-19 2024-03-22 荣耀终端有限公司 Distance detection method, distance measurement system, electronic device, and readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018122560A1 (en) * 2016-12-30 2018-07-05 The University Court Of The University Of Edinburgh Photon sensor apparatus
WO2018181013A1 (en) * 2017-03-29 2018-10-04 株式会社デンソー Light detector
CN109725326A (en) * 2017-10-30 2019-05-07 豪威科技股份有限公司 Time-of-flight camera

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8587771B2 (en) * 2010-07-16 2013-11-19 Microsoft Corporation Method and system for multi-phase dynamic calibration of three-dimensional (3D) sensors in a time-of-flight system
GB201413564D0 (en) * 2014-07-31 2014-09-17 Stmicroelectronics Res & Dev Time of flight determination
US10620300B2 (en) * 2015-08-20 2020-04-14 Apple Inc. SPAD array with gated histogram construction
CN108431626B (en) * 2015-12-20 2022-06-17 苹果公司 Light detection and ranging sensor
US10416293B2 (en) * 2016-12-12 2019-09-17 Sensl Technologies Ltd. Histogram readout method and circuit for determining the time of flight of a photon
CN110226184B (en) * 2016-12-27 2023-07-14 杰拉德·迪尔克·施密茨 System and method for machine perception
US10801886B2 (en) * 2017-01-25 2020-10-13 Apple Inc. SPAD detector having modulated sensitivity
JP6665873B2 (en) * 2017-03-29 2020-03-13 株式会社デンソー Photo detector
DE102017113675B4 (en) * 2017-06-21 2021-11-18 Sick Ag Photoelectric sensor and method for measuring the distance to an object
EP3428683B1 (en) * 2017-07-11 2019-08-28 Sick Ag Optoelectronic sensor and method for measuring a distance
CN107807364B (en) * 2017-09-22 2019-11-15 中国科学院西安光学精密机械研究所 A kind of three-dimensional imaging Photo Counting System and its dynamic biasing control method
DE102018203534A1 (en) * 2018-03-08 2019-09-12 Ibeo Automotive Systems GmbH Receiver arrangement for receiving light pulses, LiDAR module and method for receiving light pulses
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera

Also Published As

Publication number Publication date
CN110596724A (en) 2019-12-20
WO2021051480A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
CN110596722B (en) System and method for measuring flight time distance with adjustable histogram
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
CN110596725B (en) Time-of-flight measurement method and system based on interpolation
CN110596724B (en) Method and system for measuring flight time distance during dynamic histogram drawing
CN110596723B (en) Dynamic histogram drawing flight time distance measuring method and measuring system
CN111025317B (en) Adjustable depth measuring device and measuring method
CN110687541A (en) Distance measuring system and method
CN111830530B (en) Distance measuring method, system and computer readable storage medium
CN101449181B (en) Distance measuring method and distance measuring instrument for detecting the spatial dimension of a target
CN110609293A (en) Distance detection system and method based on flight time
CN111856433B (en) Distance measuring system and measuring method
CN110780312B (en) Adjustable distance measuring system and method
CN111123289B (en) Depth measuring device and measuring method
CN112198519B (en) Distance measurement system and method
CN112731425B (en) Histogram processing method, distance measurement system and distance measurement equipment
CN111766596A (en) Distance measuring method, system and computer readable storage medium
CN111045029A (en) Fused depth measuring device and measuring method
CN111965658B (en) Distance measurement system, method and computer readable storage medium
CN111427230A (en) Imaging method based on time flight and 3D imaging device
CN111812661A (en) Distance measuring method and system
CN211148917U (en) Distance measuring system
CN112346075B (en) Collector and light spot position tracking method
CN212135134U (en) 3D imaging device based on time flight
CN111796295A (en) Collector, manufacturing method of collector and distance measuring system
CN213091889U (en) Distance measuring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant