WO2021244011A1 - A distance measurement method, system, and computer-readable storage medium


Info

Publication number
WO2021244011A1
Authority
WO
WIPO (PCT)
Prior art keywords: target area, detection efficiency, pixel array, pixel, photons
Application number
PCT/CN2020/138372
Other languages
English (en)
French (fr)
Inventor
李国花
何燃
王瑞
朱亮
闫敏
Original Assignee
深圳奥锐达科技有限公司
Application filed by 深圳奥锐达科技有限公司
Publication of WO2021244011A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/10 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems; systems determining position data of a target, for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S7/4814 Details of systems according to group G01S17/00; constructional features, e.g. arrangements of optical elements, of transmitters alone
    • G01S7/4865 Details of pulse systems; receivers; time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/4866 Time delay measurement by fitting a model or function to the received signal

Definitions

  • the present invention relates to the technical field of distance measurement, in particular to a distance measurement method, system and computer readable storage medium.
  • the time of flight principle can be used to measure the distance of the target to obtain a depth image containing the depth value of the target.
  • the distance measurement system based on the time of flight principle has been widely used in consumer electronics, unmanned aerial vehicles, AR/VR and other fields.
  • a distance measurement system based on the time-of-flight principle usually includes a transmitter and a collector: the transmitter emits a pulsed beam to illuminate the target field of view, the collector collects the reflected beam, and the time the beam takes from emission to reflection and reception is used to calculate the distance of the object.
  • the collector in a current distance measurement system based on the time-of-flight principle includes a pixel array, in particular a pixel array composed of single-photon avalanche photodiodes (SPADs).
  • a SPAD must wait for a dead time (deadtime) after detecting a photon before it can detect the next one; as a result, of the multiple photons arriving within one dead time, only one is registered.
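  • A minimal sketch of this dead-time behaviour, using assumed photon arrival times and an assumed 50 ns dead time (values chosen only for illustration, not taken from the patent):

```python
def detected_photons(arrival_times_ns, dead_time_ns=50.0):
    """Return the arrival times a single SPAD actually registers, given that
    it is blind for dead_time_ns after each detection."""
    detected = []
    blind_until = float("-inf")
    for t in sorted(arrival_times_ns):
        if t >= blind_until:
            detected.append(t)
            blind_until = t + dead_time_ns
    return detected

# Five photons arrive within one dead time -> only the first one is counted.
print(detected_photons([10.0, 12.0, 15.0, 30.0, 55.0]))  # [10.0]
```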
  • the present invention provides a distance measurement method, system and computer readable storage medium.
  • a distance measurement method including the following steps: S1: controlling the transmitter to emit pulsed beams towards the target area; S2: adjusting the pixel array of the collector to have at least two different detection efficiencies, receiving, with the at least two different detection efficiencies respectively, the photon signals formed by photons in the light beam reflected by the target area, and obtaining depth images of the target area according to the photon signals; S3: fusing the depth images of the target area to obtain a fused depth image of the target area.
  • the pixel array of the collector is controlled to have at least one detection efficiency capable of collecting the photon signals formed by photons in the light beams reflected by all targets to be measured in the target area. The pixel array of the collector is adjusted to have a first detection efficiency and a second detection efficiency, and the photon signals formed by photons in the light beam reflected by the target area are received with the first detection efficiency and the second detection efficiency, respectively.
  • in one implementation, the pixel array of the collector is adjusted to have a first detection efficiency; the pixel array receives a first photon signal formed by photons in the light beam reflected by the target area, and a first depth image of the target area is obtained according to the first photon signal. The pixel array of the collector is then adjusted to have a second detection efficiency; the pixel array receives photons in the light beam reflected by the target area to form a second photon signal and a third photon signal, and a second depth image of the target area is obtained according to the second photon signal and the third photon signal. The second detection efficiency is greater than the first detection efficiency.
  • in another implementation, the pixel array of the collector is adjusted to have the first detection efficiency; the pixel array receives a fourth photon signal and a fifth photon signal formed by photons in the light beam reflected by the target area, and a fourth depth image and a fifth depth image of the target area are obtained according to the fourth photon signal and the fifth photon signal, respectively. The pixel array of the collector is then adjusted to have the second detection efficiency; the pixel array receives photons in the light beam reflected by the target area to form a sixth photon signal, and a sixth depth image of the target area is obtained according to the sixth photon signal. In this implementation the first detection efficiency is greater than the second detection efficiency.
  • fusing the depth images of the target area to obtain the fused depth image of the target area includes: selecting, according to the distance of the target to be measured, the depth value of the target to be measured from the depth image corresponding to the higher or lower of the at least two different detection efficiencies.
  • the present invention also provides a distance measurement system, including: a transmitter for emitting pulsed light beams to a target area; a collector including a pixel array with at least two different detection efficiencies, for receiving, with the at least two different detection efficiencies, the photon signals formed by photons in the light beam reflected by the target area and obtaining depth images of the target area according to the photon signals, the pixel array being composed of single-photon avalanche photodiodes;
  • and a control and processing circuit connected respectively to the transmitter and the collector, for implementing the control of any of the methods described above.
  • the pixel array has at least one detection efficiency for collecting photon signals formed by photons in the light beams reflected by all targets to be measured in the target area.
  • the pixel array has a first detection efficiency and a second detection efficiency. Either the first detection efficiency is greater than the second detection efficiency and the first detection efficiency is used to collect the photon signals formed by photons in the light beams reflected by all targets to be measured in the target area; or the second detection efficiency is greater than the first detection efficiency and the second detection efficiency is used to collect those photon signals.
  • the present invention further provides a computer-readable storage medium, the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps of any of the above methods are realized.
  • the beneficial effects of the present invention are: a distance measurement method, system and computer-readable storage medium are provided in which the pixel array of the collector is adjusted to have at least two different detection efficiencies to obtain corresponding depth images; the depth value of the target to be measured is then selected, according to the distance of the target, from the depth image corresponding to the higher or lower of the at least two detection efficiencies, and the selections are fused into a fused depth image, eliminating the pile_up phenomenon of received-waveform distortion.
  • Fig. 1 is a schematic diagram of a distance measurement system in an embodiment of the present invention.
  • Figure 2(a) is a schematic structural diagram of a transmitter in an embodiment of the present invention.
  • Figure 2(b) is a schematic structural diagram of a collector in an embodiment of the present invention.
  • Fig. 3 is a schematic diagram of a first distance measurement method in an embodiment of the present invention.
  • Fig. 4 is a schematic diagram of the first distance measurement system in an embodiment of the present invention.
  • Fig. 5 is a schematic diagram of a pixel unit in a collector in an embodiment of the present invention.
  • Fig. 6 is a schematic diagram of a second distance measurement method in an embodiment of the present invention.
  • Fig. 7 is a schematic diagram of a second distance measuring system in an embodiment of the present invention.
  • Fig. 8 is a schematic diagram of a third distance measurement system in an embodiment of the present invention.
  • Fig. 9 is a schematic diagram of a third distance measurement method in an embodiment of the present invention.
  • Fig. 10 is a schematic diagram of a pixel unit in another collector in an embodiment of the present invention.
  • Fig. 11 is a schematic structural diagram of another collector in an embodiment of the present invention.
  • Fig. 12 is a schematic diagram of a manufacturing method of a collector in an embodiment of the present invention.
  • the term “connection” may refer to a fixed connection or a circuit connection.
  • first and second are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features. Therefore, the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
  • “plurality” means two or more, unless otherwise specifically defined.
  • Avalanche photodiodes refer to photosensitive elements used in laser communications. After applying a reverse bias to the P-N junction of a photodiode made of silicon or germanium, the incident light will be absorbed by the P-N junction to form a photocurrent. Increasing the reverse bias voltage will produce an “avalanche” (that is, the photocurrent surges exponentially), so this type of diode is called an “avalanche photodiode.”
  • FIG. 1 is a schematic diagram of a distance measurement system according to an embodiment of the present invention.
  • the distance measurement system 10 includes a transmitter 11, a collector 12, and a control and processing circuit 13.
  • the transmitter 11 is used to emit a light beam 30 to the target area 20.
  • the light beam is emitted into the target area space to illuminate the target object in the space. At least part of the emitted light beam 30 is reflected by the target area 20 to form a reflected light beam 40.
  • at least part of the reflected light beam 40 is received by the collector 12. The control and processing circuit 13 is connected to the transmitter 11 and the collector 12 respectively and synchronizes their trigger signals to calculate the time required by the beam from emission to reception, that is, the flight time t between the emitted light beam 30 and the reflected light beam 40; the distance D of the corresponding point on the target object can then be calculated by the formula D = c·t/2, where c is the speed of light.
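  • A minimal sketch of this direct time-of-flight relation (the one-microsecond example value is illustrative only):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_tof(t_flight_s: float) -> float:
    """Distance to the target from the round-trip flight time: D = c * t / 2."""
    return C * t_flight_s / 2.0

# A measured round-trip time of 1 microsecond corresponds to roughly 150 m.
print(distance_from_tof(1e-6))  # ≈ 149.9 m
```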
  • the transmitter 11 includes a light source 111, a transmitting optical element 112, a driver 113, and the like.
  • the light source 111 can be a light emitting diode (LED), a laser diode (LD), an edge emitting laser (EEL), a vertical cavity surface emitting laser (VCSEL), etc., or a one-dimensional or two-dimensional light source array composed of multiple light sources,
  • the light source array is a VCSEL array light source chip formed by generating multiple VCSEL light sources on a single semiconductor substrate, and the arrangement of the light sources in the light source array may be regular or irregular.
  • the light beam emitted by the light source 111 may be visible light, infrared light, ultraviolet light, or the like.
  • the light source 111 emits a light beam to the outside under the control of the driver 113.
  • the light source 111 emits a pulsed light beam at a certain frequency (pulse period) under the control of the driver 113, which can be used in direct time-of-flight (Direct TOF) measurement, and the frequency is set according to the measurement distance.
  • a part of the control and processing circuit 13 or a sub-circuit independent of the control and processing circuit 13 can also be used to control the light source 111 to emit light beams.
  • the emitting optical element 112 receives the light beam emitted from the light source 111, shapes it and projects it to the target area.
  • the transmitting optical element 112 receives the pulsed beam from the light source 111, optically modulates the pulsed beam, for example by diffraction, refraction or reflection, and then emits the modulated beam into space, for example as a focused beam, a flood beam, a structured light beam, etc.
  • the transmitting optical element 112 may be one or more combinations of lenses, liquid crystal elements, diffractive optical elements, microlens arrays, metasurface optical elements, masks, mirrors, MEMS galvanometers, and the like.
  • the collector 12 includes a pixel unit 121, a filter unit 122, and a receiving optical element 123.
  • the receiving optical element 123 is used to receive at least part of the light beam reflected by the target and guide it to the pixel unit 121.
  • the filter unit 122 is used to filter out background light or stray light.
  • the pixel unit 121 includes a two-dimensional pixel array composed of a plurality of pixels.
  • the pixel unit 121 is composed of a single-photon avalanche photodiode (SPAD).
  • the SPAD can respond to a single incident photon and output a signal indicating the arrival time of the photon at each SPAD; the time-correlated single photon counting (TCSPC) method is used to collect the weak light signal and calculate the flight time.
  • the control and processing circuit 13 synchronizes the trigger signals of the transmitter 11 and the collector 12, processes the photon signal of the pixel collection beam, and calculates the distance information of the target to be measured based on the flight time of the reflected beam.
  • the SPAD outputs a photon signal in response to a single incident photon
  • the control and processing circuit 13 receives the photon signal and performs signal processing to obtain the flight time of the light beam.
  • the control and processing circuit 13 counts the collected photons into consecutive time bins; these time bins together form a statistical histogram that reproduces the time series of the reflected light beam, and peak matching and filter detection are used to identify the flight time of the reflected light beam from emission to reception.
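  • A minimal sketch of this TCSPC histogramming, using synthetic photon timestamps, an assumed 1 ns bin width and a simple peak search (none of these values are specified by the patent):

```python
import numpy as np

def build_histogram(timestamps_ns, bin_width_ns=1.0, window_ns=200.0):
    """Accumulate photon arrival times (relative to the laser pulse) into time bins."""
    n_bins = int(window_ns / bin_width_ns)
    return np.histogram(timestamps_ns, bins=n_bins, range=(0.0, window_ns))

def estimate_tof_ns(hist, edges):
    """Take the centre of the most-populated bin as the round-trip flight time."""
    peak = int(np.argmax(hist))
    return 0.5 * (edges[peak] + edges[peak + 1])

# Synthetic data: uniform ambient counts plus signal photons clustered near 66 ns.
rng = np.random.default_rng(0)
ambient = rng.uniform(0.0, 200.0, size=2000)
signal = rng.normal(66.0, 1.5, size=500)
hist, edges = build_histogram(np.concatenate([ambient, signal]))
print(estimate_tof_ns(hist, edges))  # ≈ 66 ns, i.e. D ≈ c * 66e-9 / 2 ≈ 9.9 m
```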
  • the control and processing circuit 13 includes a readout circuit composed of one or more of a signal amplifier, a time-to-digital converter (TDC), an analog-to-digital converter (ADC) and other devices (not shown in the figure). These circuits can be integrated with the pixels or form part of the control and processing circuit 13; for ease of description they are collectively regarded as part of the control and processing circuit 13. It can be understood that the control and processing circuit 13 may be an independent dedicated circuit, such as a dedicated SOC chip, an FPGA chip or an ASIC chip, and may also include a general-purpose processing circuit.
  • the distance measurement system 10 further includes a memory for storing a pulse encoding program, and the encoding program is used to control the excitation time, emission frequency, etc. of the light beam emitted by the light source 111.
  • the distance measurement system 10 may also include a color camera, an infrared camera, an IMU, and other devices.
  • the combination of these devices can achieve richer functions, such as 3D texture modeling, infrared face recognition, SLAM and other functions.
  • the transmitter 11 and the collector 12 can also be arranged coaxially, the coaxial arrangement being realized by optical devices with both reflection and transmission functions, such as a half mirror.
  • the emitter 11 includes a light source array 21 composed of multiple light sources, and the multiple light sources are arranged in a certain pattern on a single substrate.
  • the substrate may be a semiconductor substrate, a metal substrate, etc.
  • the light source may be a light emitting diode, an edge emitting laser, a vertical cavity surface emitting laser (VCSEL), etc.; preferably, the light source array 21 is a VCSEL array chip composed of a plurality of VCSEL light sources arranged on a semiconductor substrate.
  • the light source array 21 emits light under the modulation drive of the driving circuit (which may be a part of the control and processing circuit 13), and may also emit light in groups or as a whole under the control of the driving circuit.
  • the pixel unit 121 includes a pixel array 22 and a readout circuit 23.
  • the pixel array 22 includes a two-dimensional array composed of a plurality of pixels for collecting at least part of the light beam reflected by the object and generating corresponding photon signals.
  • the readout circuit 23 is used to process the photon signals to calculate the flight time.
  • the readout circuit 23 includes a TDC circuit 231 and a histogram circuit 232 for drawing a histogram reflecting the pulse waveform emitted by the light source in the transmitter; the flight time can furthermore be calculated based on the histogram, and finally the result is output.
  • the readout circuit 23 may be composed of a single TDC circuit and a histogram circuit, or may be an array readout circuit composed of a plurality of TDC circuit units and histogram circuit units.
  • the pixel array 22 is a pixel array composed of multiple SPADs.
  • the emitter 11 emits a spot beam to the object under test
  • the receiving optical element 123 in the collector 12 will guide the spot beam to the corresponding pixel.
  • the size of a single spot is usually set to correspond to multiple pixels (the correspondence here can be understood as imaging, and the receiving optical element 123 generally includes an imaging lens).
  • each light source in the light source array 21 is configured to be paired with each combined pixel in the pixel array 22, that is, the projected field of view of each light source corresponds to the collection field of view of the corresponding combined pixel in a one-to-one correspondence.
  • the light beam emitted by the light source 211 is reflected by the object and the resulting spot beam is guided by the receiving optical element 123 to the combined pixel 221; the light beam emitted by the light source 212 is reflected by the object and guided by the receiving optical element 123 to the combined pixel 224; and the light beam emitted by the light source 213 is reflected by the object and the resulting spot beam is guided by the receiving optical element 123 to the combined pixel 225.
  • depending on how the transmitter 11 and the collector 12 are arranged, the distance measurement system can be co-axial or off-axis.
  • in the co-axial case, the beam emitted by the transmitter 11 and reflected by the measured object is collected by the corresponding combined pixel in the collector 12, and the position of that combined pixel is not affected by the distance of the measured object; in the off-axis case, because of parallax, the position of the light spot on the pixel unit changes with the distance of the measured object, generally shifting along the baseline direction (the line between the transmitter 11 and the collector 12; in the present invention the baseline direction is uniformly indicated by a horizontal line).
  • several combined pixels together constitute a pixel area, referred to herein as a “super pixel”, for receiving the reflected spot light beam.
  • one super pixel 222 includes three combined pixels.
  • to cover the parallax-induced spot shift, the size of a super pixel should exceed that of at least one combined pixel.
  • in one embodiment, the super pixel has the same size as the combined pixel in the direction perpendicular to the baseline, and is larger than the combined pixel along the baseline direction.
  • the number of superpixels is generally the same as the number of spot beams collected by the collector 12 in a single measurement.
  • the histogram circuit 232 draws a received waveform reflecting the pulse waveform emitted by the light source in the transmitter.
  • the received waveform is basically similar in shape to the transmitted pulse waveform, and the received waveform represents the number of photons in the reflected pulse incident on the pixel array.
  • the photons received by the pixel array include ambient photons and signal photons. The ambient photons continue to exist in the time bin of the histogram, while the signal photons only appear in the time bin corresponding to the target position and form a pulse peak.
  • since the SPAD array enters the dead time after receiving photons and temporarily stops detecting them, when the target to be measured is close to the SPAD array, or when the target to be measured has high reflectivity, the earlier photons in the reflected beam reach the SPAD array first and saturate multiple SPADs; the photons arriving later are then less likely to be collected, which shifts the pulse peak position earlier.
  • similarly, when a large number of ambient photons are incident on the SPAD array and saturate multiple SPADs, the probability of the signal photons being collected is reduced, distorting the received waveform that is formed.
  • using the peak of the distorted received waveform to determine the TOF value is therefore not accurate.
  • the distortion of the received waveform generated above is collectively referred to as the pile_up phenomenon, and some embodiments will be used to describe how to solve this problem and improve the accuracy of the distance measurement system.
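  • A minimal simulation sketch of this pile_up effect, with assumed photon rates, pulse width, dead time and SPAD count (all values are illustrative, not from the patent); with a strong near or high-reflectivity return, the earliest signal photons dominate the first-detection histogram and the peak shifts ahead of the true flight time:

```python
import numpy as np

rng = np.random.default_rng(1)
N_SPADS, N_CYCLES, TRUE_TOF_NS = 16, 2000, 40.0
BINS = np.arange(0.0, 100.0, 1.0)  # 1 ns bins; dead time assumed longer than the cycle

def first_detection_histogram(mean_signal_photons):
    counts = np.zeros(len(BINS) - 1)
    for _ in range(N_CYCLES):
        for _ in range(N_SPADS):
            # ambient photons arrive uniformly; signal photons cluster around the true TOF
            ambient = rng.uniform(0.0, 100.0, size=rng.poisson(0.3))
            signal = rng.normal(TRUE_TOF_NS, 3.0, size=rng.poisson(mean_signal_photons))
            arrivals = np.sort(np.concatenate([ambient, signal]))
            if arrivals.size:  # only the first photon per cycle is registered
                counts += np.histogram(arrivals[:1], bins=BINS)[0]
    return counts

for rate in (0.2, 10.0):  # weak return vs. strong (close / highly reflective) return
    peak = int(np.argmax(first_detection_histogram(rate)))
    print(f"mean signal photons {rate}: peak near {BINS[peak]:.0f} ns (true TOF {TRUE_TOF_NS} ns)")
```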
  • FIG. 3 is a flowchart of the distance measurement method according to the first embodiment of the present invention.
  • the distance measurement method is executed by the control and processing circuit 13 in the distance measurement system, and the specific method steps are as follows:
  • the transmitter 11 includes a light source array 21, which emits a pulsed beam of a spot pattern toward a target area, and forms a reflected beam after being reflected by an object in the target area.
  • the pixel array of the collector is controlled to have at least two different detection efficiencies; the photon signals formed by photons in the light beam reflected by the target area are received with the at least two different detection efficiencies respectively, and depth images of the target area are obtained according to the respective photon signals;
  • the control and processing circuit 13 adjusts the reverse bias voltage applied to each pixel in the pixel array 22 to change the detection efficiency (PDE) of the pixel array.
  • PDE refers to the ratio of the number of effective photons detected per unit time to the total number of incident photons.
  • the PDE of each pixel is closely related to the reverse bias voltage applied to it: the higher the reverse bias voltage, the longer the avalanche lasts and the more significantly the PDE increases; when the reverse bias voltage applied to the pixel is lower, the PDE also decreases, and when the reverse bias voltage falls below the breakdown voltage the avalanche is quenched and the pixel no longer responds to photons. The bias voltage cannot, however, be increased indefinitely: if it is set too high, the dark count rate may increase significantly. In practical applications the value of the reverse bias voltage therefore needs to be set reasonably according to the system requirements.
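  • A minimal sketch of translating a target PDE into a reverse bias setting; the breakdown voltage, upper bias limit, saturation PDE and curve shape below are illustrative assumptions for a hypothetical calibration, not values given in the patent:

```python
import math

V_BREAKDOWN = 20.0   # V, assumed SPAD breakdown voltage
V_MAX = 26.0         # V, assumed upper limit keeping the dark count rate acceptable

def pde_from_bias(v_bias: float) -> float:
    """Rough monotone model: PDE rises with excess bias and saturates near 40 %."""
    excess = max(0.0, v_bias - V_BREAKDOWN)
    return 0.40 * (1.0 - math.exp(-excess / 2.0))

def bias_for_pde(target_pde: float) -> float:
    """Invert the calibration curve, clamped to the usable bias range."""
    target_pde = min(target_pde, 0.399)
    v = V_BREAKDOWN - 2.0 * math.log(1.0 - target_pde / 0.40)
    return min(max(v, V_BREAKDOWN), V_MAX)

# e.g. a "low PDE" frame at ~5 % and a "high PDE" frame at ~30 %
print(bias_for_pde(0.05), bias_for_pde(0.30))  # ≈ 20.27 V and ≈ 22.77 V
```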
  • with different detection efficiencies, photon signals are obtained from objects to be measured within different distance ranges; different detection efficiencies correspond to different ranging ranges, reflectivities and ambient light levels of the system. That is, a low detection efficiency covers a shorter ranging range and is used to deal with short distances, high reflectivity and strong ambient light, while a high detection efficiency covers a longer ranging range and is used for long distances, low reflectivity and weak ambient light.
  • the number of detection efficiencies can be set according to specific conditions.
  • the pixel array of the control collector has at least one detection efficiency for collecting photon signals formed by photons in the light beams reflected by all the targets to be measured in the target area.
  • multiple detection efficiencies may be set according to the distance of the target to be detected in the target area, and the difference between the multiple detection efficiencies may be equal or unequal.
  • in one embodiment, the pixel array of the collector is controlled to have a first detection efficiency and a second detection efficiency, and the photon signals formed by photons in the light beams reflected by the objects to be measured in the target area are received with the first detection efficiency and the second detection efficiency, respectively.
  • the control and processing circuit 13 regulates the pixel array 22 to have the first detection efficiency (the reverse bias voltage applied to the pixel is lower at this time), that is, the pixel array 22 has a lower PDE.
  • to complete the first frame of depth image acquisition of the target field of view: the photons in the light beam reflected by a first target closer to the collector 12 are received by the pixels in the pixel array 22 to form a first photon signal; or, the photons in the light beam reflected by a first target with higher reflectivity are received by the pixels in the pixel array 22 to form the first photon signal.
  • the control and processing circuit 13 calculates the first flight time according to the first photon signal to obtain the first depth image of the target area, and the pixels of the first depth image have the first TOF value.
  • the pixel array in the collector is then controlled to have the second detection efficiency.
  • the pixel array receives the photons in the light beam reflected by the target area to form a second photon signal and a third photon signal, and the second depth image of the target area is obtained according to the second photon signal and the third photon signal.
  • specifically, the control and processing circuit 13 adjusts the pixel array 22 to have a second detection efficiency (the reverse bias voltage applied to the pixels is higher at this time), where the second detection efficiency is greater than the first detection efficiency; the pixel array 22 thus has a higher PDE and completes the second frame of depth image acquisition of the target area.
  • the photons in the light beam reflected by a second target farther from the collector 12 can be received by the pixels in the pixel array 22 to form a second photon signal; or, the photons in the light beam reflected by a second target with lower reflectivity can be received by the pixels in the pixel array 22 to form the second photon signal.
  • the control and processing circuit 13 calculates the second flight time according to the second photon signal to form the second depth image of the target area, in which some pixels have the second TOF value.
  • the second detection efficiency is able to detect targets at the farthest distance of the system.
  • for example, if the maximum detection distance of the system is 150 m, at the second detection efficiency the photons reflected back from a target located at 150 m can be received to form a photon signal, whereas at the first detection efficiency only the photons reflected back from a target at 20 m can be received.
  • meanwhile, the control and processing circuit 13 can calculate, based on the third photon signal, a third flight time carrying the first target's distance information, giving a third TOF value on some pixels of the second depth image; however, because of the pile_up phenomenon, the third TOF value at a given pixel is smaller than the first TOF value (the accurate TOF value). The accurate depth image of the target area is therefore determined in the next step.
  • in another implementation, the control and processing circuit 13 first adjusts the pixel array 22 to have the first detection efficiency (the reverse bias voltage applied to the pixels is higher at this time), that is, the pixel array 22 has a higher PDE.
  • the pixel array 22 receives a fourth photon signal and a fifth photon signal formed by photons in the light beam reflected by the target area, and a fourth depth image and a fifth depth image of the target area are obtained according to the fourth photon signal and the fifth photon signal, respectively; the pixel array is then adjusted to have the second detection efficiency (the reverse bias voltage applied to the pixels is lower at this time), that is, the pixel array 22 has a lower PDE.
  • the pixel array receives photons in the light beam reflected by the target area to form a sixth photon signal, and a sixth depth image of the target area is obtained according to the sixth photon signal; the details are not repeated here.
  • the depth value of the target to be measured is selected, according to the distance of the target, from the depth image corresponding to the higher or lower of the at least two different detection efficiencies.
  • the control and processing circuit 13 assigns the first TOF value of each pixel in the first depth image to the corresponding pixel in the second depth image to replace the third TOF value at that pixel,
  • so that a third depth image is formed in which the TOF value of every pixel is the accurate flight time.
  • the pixels mentioned here mainly refer to pixels with effective TOF values.
  • the processing for the fourth depth image, the fifth depth image, and the sixth depth image is similar.
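  • A minimal sketch of this fusion step, assuming the depth images are stored as arrays of TOF values with NaN marking pixels that have no valid TOF (the array layout and the NaN convention are illustrative, not from the patent):

```python
import numpy as np

def fuse_depth(tof_low_pde: np.ndarray, tof_high_pde: np.ndarray) -> np.ndarray:
    """Per pixel, prefer the TOF measured at low detection efficiency (near or highly
    reflective targets, unaffected by pile_up); otherwise keep the high-PDE TOF."""
    fused = tof_high_pde.copy()
    valid_low = ~np.isnan(tof_low_pde)
    fused[valid_low] = tof_low_pde[valid_low]
    return fused

# A near target appears only in the low-PDE frame; a far target only in the high-PDE frame.
low = np.array([[13.0, np.nan], [np.nan, np.nan]])    # ns, first depth image
high = np.array([[11.5, np.nan], [np.nan, 660.0]])    # ns, second depth image (peak at [0, 0] shifted by pile_up)
print(fuse_depth(low, high))   # [[ 13.  nan] [ nan 660.]]
```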
  • based on the above method, the present invention also provides a distance measurement system for implementing it.
  • FIG. 4 is a schematic diagram of the distance measurement system according to the first embodiment of the present invention.
  • in this system, the pixel array of the collector is adjusted to have at least two different detection efficiencies, the photon signals formed by photons in the light beam reflected by the target area are received with the at least two different detection efficiencies respectively, depth images of the target area are obtained according to the photon signals, and the depth images are merged into one frame of depth image, which effectively corrects the measurement error caused by pile_up.
  • in other words, corresponding depth images are obtained by adjusting the pixel array of the collector to have at least two different detection efficiencies; the depth value of the target to be measured is then selected, according to its distance, from the depth image corresponding to the higher or lower detection efficiency, and the selections are fused into a fused depth image, eliminating the pile_up phenomenon of received-waveform distortion.
  • FIG. 5 is a schematic diagram of the pixel unit in the collector of the second embodiment of the present invention.
  • the pixel unit includes a pixel array 41 and a readout circuit 44.
  • the pixel array 41 includes a two-dimensional array composed of a plurality of pixels for collecting at least part of the light beam reflected by the object and generating corresponding photon signals.
  • the readout circuit 44 is used to process the photon signals to calculate the flight time.
  • the readout circuit 44 includes a TDC circuit 441 and a histogram circuit 442 for drawing a histogram reflecting the pulse waveform emitted by the light source in the transmitter; the flight time can furthermore be calculated based on the histogram, and finally the result is output.
  • the readout circuit 44 may be composed of a single TDC circuit and a histogram circuit, or may be an array readout circuit composed of a plurality of TDC circuit units and histogram circuit units.
  • the pixel array 41 is a pixel array composed of a plurality of single-photon avalanche photodiodes (SPAD), where the pixel array 41 includes a reference pixel array 42 and an imaging pixel array 43.
  • the reference pixel array 42 includes at least one reference pixel 421.
  • the reference pixel array 42 is configured as a column of reference pixels arranged along the peripheral edge of the imaging pixel array 43. In other embodiments, the reference pixel array 42 may be arranged in at least one column or one row; or, the reference pixels may be located at any given position around the imaging pixel array 43.
  • the configuration of the imaging pixel array 43 is shown in the description of the pixel array in FIG. 2(b), and the description will not be repeated here.
  • the control and processing circuit 13 controls the transmitter 11 to emit pulsed beams toward the target area, and controls the pixels in the collector to turn on to receive the photons in the reflected beam.
  • the beam reflected back by the target area is guided by the receiving optical element 123 and imaged onto the imaging pixel array 43; the imaging pixels in the imaging pixel array 43 collect photons in the reflected light beam to form a photon signal, and the control and processing circuit 13 calculates the flight time of the reflected light beam from emission to reception according to the photon signal.
  • the flight time calculated for the reflected light beam may, however, contain errors.
  • the reference pixel array 42 is configured to count the number of reference photons received within a certain period of time, and the PDE of the imaging pixels in the imaging pixel array 43 during the next frame acquisition is adjusted according to the number of reference photons.
  • the control and processing circuit 13 adjusts the reverse bias voltage applied to the imaging pixels in the imaging pixel array 43 to change the detection efficiency (PDE) of the imaging pixel array.
  • the reference photons received by the reference pixel array 42 within a predetermined time include ambient photons, and may also include signal photons in a partially reflected beam.
  • the number of reference photons is used to characterize the product of the ambient light intensity and the target reflectivity, and the number of reference photons is inversely proportional to the PDE of the imaging pixels. The detection efficiency of the imaging pixel array is adjusted according to the number of reference photons received by the reference pixel array 42 within the predetermined time, and the collector is controlled to receive photons at the adjusted detection efficiency until the photons of the pulsed beam reflected back by the target area and received by the imaging pixel array form a second photon signal that meets the predetermined requirement.
  • the predetermined requirement mentioned here may be to meet a predetermined accuracy, etc., and the number of adjustments is at least once.
  • the detection efficiency of the imaging pixel array is controlled to be lower or higher than the first detection efficiency, the adjustment being made according to the inverse proportional relationship between the number of reference photons and the PDE of the imaging pixels.
  • in one embodiment, a threshold for the number of reference photons received by the reference pixel array 42 within a certain period of time is preset; for example, the period is set to 10 us.
  • the control and processing circuit 13 controls the imaging pixel array to receive the photons in the reflected beam with the first detection efficiency (lower PDE) and at the same time controls the reference pixel array 42 to receive reference photons. If the ambient light is weak and/or the target reflectivity is low, the number of reference photons received within 10 us will be less than the threshold, and the control and processing circuit 13 adjusts the imaging pixel array 43 to receive the photons in the reflected beam with the second detection efficiency (higher PDE) when the next frame is collected; if the number of reference photons is greater than or equal to the threshold, the imaging pixel array keeps the first detection efficiency when the next frame is collected.
  • alternatively, the correspondence between the number of reference photons received by the reference pixel array 42 within the predetermined time and the PDE of the imaging pixels can be predefined; the control and processing circuit 13 then uses the number of reference photons received by the reference pixel array 42 in the current frame, together with the predefined correspondence, to determine the PDE of the imaging pixel array 43 for the next frame, achieving real-time control.
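  • A minimal sketch of this per-frame PDE control, in both the threshold and the lookup-table variants; the window length, threshold, count ranges and PDE levels below are assumptions for illustration, not values specified in the patent:

```python
# Threshold variant: few reference photons (weak ambient light / low reflectivity)
# -> raise the PDE; many reference photons -> keep the lower PDE to avoid pile_up.
REF_WINDOW_US = 10.0
REF_THRESHOLD = 200             # reference photons per 10 us window (assumed)
PDE_LOW, PDE_HIGH = 0.05, 0.30  # assumed first / second detection efficiencies

def next_frame_pde(reference_photon_count: int) -> float:
    return PDE_HIGH if reference_photon_count < REF_THRESHOLD else PDE_LOW

# Lookup-table variant: a predefined mapping from count ranges to PDE values.
PDE_TABLE = [(0, 100, 0.30), (100, 300, 0.15), (300, float("inf"), 0.05)]

def next_frame_pde_from_table(reference_photon_count: int) -> float:
    for low, high, pde in PDE_TABLE:
        if low <= reference_photon_count < high:
            return pde
    return PDE_LOW

print(next_frame_pde(80), next_frame_pde_from_table(80))  # 0.3 0.3
```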
  • the distance measurement system in practical applications usually encounters many uncontrollable factors.
  • for example, in a LiDAR system used in autonomous driving, the environment or the target may change during continuous measurement; adjusting the PDE of the imaging pixels in real time can effectively resolve the ranging errors caused by these changes and improve the accuracy of the system.
  • this method does not need to reduce the frame rate during the measurement process.
  • a distance measurement method is also proposed, which includes the following steps:
  • T1 Control the transmitter to emit pulsed beams
  • T2 Control the collector to have the first detection efficiency and receive photons at the first detection efficiency;
  • the pixel array of the collector includes a reference pixel array and an imaging pixel array;
  • the reference pixel array includes at least one reference pixel for receiving reference photons;
  • the imaging pixel array includes at least one imaging pixel for receiving photons in the pulsed light beam reflected back by the target area to form a first photon signal;
  • T3 Adjust the detection efficiency of the imaging pixel array to the second detection efficiency according to the number of reference photons received by the reference pixel array within a predetermined time, and control the collector to receive photons at the second detection efficiency until the photons of the pulsed beam reflected back by the target area and received by the imaging pixel array form a second photon signal that meets a predetermined requirement;
  • whether the detection efficiency of the imaging pixel array is adjusted to be lower or higher than the first detection efficiency is determined with the number of reference photons received by the reference pixel array serving as a reference for the imaging condition of the target area.
  • T4 Calculate the flight time of the pulsed light beam from emission to reception according to the second photon signal.
  • in one embodiment, a threshold for the number of reference photons received within a certain period of time is set, and the detection efficiency of the pixel array is regulated according to the number of reference photons: if the number of reference photons is greater than or equal to the threshold, the control and processing circuit keeps the imaging pixel array at the first detection efficiency for the next frame acquisition; if the number of reference photons is less than the threshold, the control and processing circuit adjusts the imaging pixel array to have the second detection efficiency when the next frame is collected, where the second detection efficiency is greater than the first detection efficiency.
  • the corresponding relationship table between the number of reference photons and the detection efficiency of the imaging pixels is stored in advance, and the corresponding relationship table is queried according to the number of reference photons to regulate the detection efficiency of the imaging pixel array in the next frame.
  • FIG. 7 is a schematic diagram of a distance measurement system according to the second embodiment of the present invention.
  • the detection efficiency of the imaging pixel is adjusted according to the number of reference photons (ambient photons) received by the reference pixel, and the pile_up phenomenon of received waveform distortion is eliminated without reducing the measurement frame rate.
  • the number of times of adjusting the detection efficiency of the imaging pixel is reduced, and the complexity of the adjustment is reduced.
  • the accuracy of the adjustment is improved.
  • the distance measurement system 60 includes a transmitter 11, a collector 12, a camera 14 and a control and processing circuit 13.
  • the transmitter 11 is used to emit a light beam 30 to the target area 20.
  • the light beam is emitted into the target area space to illuminate the target object in the space.
  • At least part of the emitted light beam 30 is reflected by the target area 20 to form a reflected light beam 40.
  • at least part of the reflected light beam 40 is received by the collector 12, and the control and processing circuit 13 is connected respectively to the transmitter 11 and the collector 12 to synchronize their trigger signals and calculate the time required by the light beam from emission to reception.
  • control and processing circuit 13 is connected to the camera 14.
  • the camera 14 is used to collect a grayscale image of the target area, where the gray value of a pixel in the grayscale image represents the total light intensity of the light beam 50 reflected by the target and of the ambient light.
  • the control and processing circuit 13 adjusts the detection efficiency (PDE) of the corresponding pixel in the pixel array in the collector 12 according to the gray value of the pixel in the gray image.
  • the camera 14 includes a first pixel unit 141 for collecting a grayscale image of a target area, and the first pixel unit 141 includes a first pixel array (not shown) composed of a plurality of first pixels, wherein the grayscale image The pixel points in and the first pixels in the first pixel unit 141 have a one-to-one correspondence.
  • the camera 14 may be a grayscale camera, an RGB camera, etc., preferably a grayscale camera.
  • the collector 12 includes a second pixel unit 121. In one embodiment, the structure of the second pixel unit 121 is as shown in FIG.
  • the pixel array 22 is denoted as a second pixel array, and the second pixel array includes a two-dimensional array composed of a plurality of second pixels, and preferably the second pixel is a SPAD pixel.
  • the camera 14 and the collector 12 are configured to have the same acquisition field of view, so that at least one first pixel is paired with at least one second pixel (in this embodiment, the second pixel may be a combined pixel or a super pixel).
  • the control and processing circuit 13 determines the light intensity of the reflected beam according to the gray value of each pixel in the gray image.
  • the gray value ranges from 0 to 255 and is divided into 256 levels; the larger the gray value, the greater the light intensity of the reflected beam. It is understandable that the light intensity of the light beam reflected by a first target closer to the collector is greater than that of the light beam reflected by a second target farther from the collector; likewise, a first target with higher reflectivity reflects a beam of greater light intensity than a second target with lower reflectivity; and under stronger ambient light, the reflected ambient light correspondingly increases the gray value of the pixels in the grayscale image.
  • the control and processing circuit 13 adjusts the PDE corresponding to the second pixel in the second pixel array according to the gray value of the pixel in the gray image.
  • the control and processing circuit regulates the detection efficiency of the second pixel by changing the reverse bias voltage applied to the second pixel in the second pixel array.
  • the control and processing circuit 13 regulates the PDE of each second pixel in the second pixel array.
  • the second pixel array thus no longer has a single uniform PDE but multiple PDEs, so that when the target area contains different targets to be measured, the accuracy of the measurement is effectively improved.
  • a correspondence relationship table between the gray value of the gray image and the value of the detection efficiency of the second pixel is stored in advance.
  • the control and processing circuit 13 queries the relation table with the gray value of each pixel in the gray image to determine the corresponding PDE of the second pixel, and adjusts the reverse bias voltage applied to the second pixel to change its PDE for the next frame of acquisition.
  • the correspondence table between gray values and PDE values can be obtained through calibration.
  • the gray value of the gray image is divided into at least two steps in order in advance, and the detection efficiency of the second pixel corresponding to each step is configured.
  • the gray value is divided into steps in the order from small to large (or large to small) in advance, and each step is configured to have a corresponding PDE.
  • for example, the gray value range of the first step is 0-85, that of the second step is 86-171, and that of the third step is 172-255; the PDE of the second pixel is correspondingly set to the first PDE (higher PDE), the second PDE (middle PDE), and the third PDE (lower PDE), respectively.
  • the control and processing circuit 13 processes the grayscale image according to the gray-value steps and divides the image into a plurality of first closed-loop areas, where the gray values of all pixels within the same closed-loop area belong to the same step. Further, according to the coordinates of the pixels on the boundary of each first closed-loop area, the corresponding second closed-loop area in the second pixel array is determined, and the detection efficiency of all the second pixels in that second closed-loop area is adjusted according to the detection efficiency corresponding to the step. For example, if the gray values in a first closed-loop area belong to the first step, all the second pixels in the corresponding second closed-loop area are regulated to have the first PDE. This hierarchical, region-based adjustment shortens the adjustment time. It is understandable that the above regulation method is only one embodiment of the present invention and does not specifically limit its content.
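  • A minimal sketch of mapping gray-value steps to per-pixel detection efficiencies as described above; the three PDE levels are illustrative assumptions (the patent only requires a higher PDE for darker regions and a lower PDE for brighter ones):

```python
import numpy as np

STEP_EDGES = np.array([0, 86, 172, 256])   # steps 0-85, 86-171, 172-255
STEP_PDES = np.array([0.30, 0.15, 0.05])   # darker region -> higher PDE (assumed values)

def pde_map_from_gray(gray_image: np.ndarray) -> np.ndarray:
    """Assign a PDE to every second pixel from the gray value of the paired first pixel."""
    steps = np.digitize(gray_image, STEP_EDGES[1:-1])  # 0, 1, or 2 per pixel
    return STEP_PDES[steps]

gray = np.array([[12, 200], [90, 255]], dtype=np.uint8)
print(pde_map_from_gray(gray))
# [[0.3  0.05]
#  [0.15 0.05]]
```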
  • a distance measurement method is also provided, which includes the following steps:
  • P1 Control the transmitter to emit pulsed beams
  • P2 Control the first pixel array of the grayscale image acquisition unit to collect the grayscale image of the target area, and at the same time control the second pixel array of the collector to have the first detection efficiency and receive, with the first detection efficiency, the first photon signal formed by the photons in the pulsed beam reflected from the target area;
  • P3 Adjust the detection efficiency of the corresponding second pixels in the second pixel array according to the gray values of the pixels in the grayscale image, until the photons in the pulsed beam reflected from the target area and received by the second pixel array form a second photon signal that meets the predetermined requirement;
  • P4 Calculate the flight time of the pulsed beam from emission to reception according to the second photon signal.
  • the predetermined requirement is that the pixel array can receive enough photon signals to form a receiving waveform; or receive photon signals that meet a certain signal-to-noise ratio.
  • the detection efficiency of the second pixel array is adjusted to be lower than or higher than the first detection efficiency.
  • the distance measurement method of this embodiment adopts the distance measurement system of the aforementioned third embodiment for distance measurement, and its technical solution is the same as the aforementioned distance measurement system, so it will not be repeated here.
  • the distortion of the received waveform is eliminated without reducing the frame rate in the measurement process.
  • the pile_up phenomenon is eliminated by adjusting the detection efficiency of the second pixel of the collector according to the gray value of the gray image.
  • the hierarchical, region-based setting of the detection efficiency shortens the adjustment time.
  • Fig. 10 is a schematic diagram of a pixel unit in a collector according to a fourth embodiment of the present invention.
  • the pixel unit includes a pixel array 61 and a readout circuit 64.
  • the pixel array 61 includes a two-dimensional array composed of a plurality of pixels for collecting at least part of the light beam reflected by the object and generating corresponding photon signals.
  • the readout circuit 64 uses To process the photon signal to calculate the flight time.
  • the readout circuit 64 includes a TDC circuit 641 and a histogram circuit 642 for drawing a histogram reflecting the pulse waveform emitted by the light source in the transmitter; the flight time can further be calculated according to the histogram, and finally the result is output.
  • the readout circuit 64 may be composed of a single TDC circuit and a histogram circuit, or may be an array readout circuit composed of a plurality of TDC circuit units and histogram circuit units.
  • the pixel array 61 is a pixel array composed of a plurality of SPADs.
  • the receiving optical element 123 in the collector 12 will guide the spot beam to the corresponding pixel.
  • the size of a single spot is usually set to correspond to multiple pixels (the correspondence here can be understood as imaging, and the receiving optical element 123 generally includes an imaging lens); the pixel area composed of the corresponding multiple pixels is called a “combined pixel”, and the size of the combined pixel needs to be set with all relevant factors considered.
  • the super pixel 611 is configured to include a first combined pixel 621 and a second combined pixel 622, and the super pixel 611 is connected to a TDC circuit and a histogram circuit, where the collection field of view of the super pixel matches the projected field of view of the corresponding light source.
  • the light source corresponding to the super pixel 611 emits a pulsed beam toward the corresponding area. If a first target in this area is located closer to the collector, the spot beam reflected by the first target is incident on the first combined pixel 621; if a second target in this area is located at a greater distance from the collector, the spot beam reflected by the second target (indicated by the dotted circle) is incident on the second combined pixel 622.
  • an attenuation sheet 62 is provided on the first combined pixel 621, so that the light beam reflected by the first target in the target area first hits the attenuation sheet 62; the light intensity of the reflected beam is reduced after passing through the attenuation sheet 62 before it is incident on the first combined pixel 621, reducing the number of photons collected by the first combined pixel 621.
  • the attenuation coefficient of the attenuation sheet can be determined according to the distance measurement range of the distance measurement system and the photon signal that is to be formed.
  • the attenuation sheet not only deals with strong ambient light but also weakens the strong reflected light produced by a close target; this matters because the pile_up problem is mainly caused by the strong reflected light when the target is located at close range, while high reflectivity and strong ambient light are only auxiliary factors rather than the dominant ones.
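  • A back-of-the-envelope sketch of choosing such an attenuation factor; the photon rate, dead time and saturation margin are assumed parameters (not given in the patent), and the returned power is assumed to scale as 1/d²:

```python
# Assumed system parameters, for illustration only.
RATE_AT_1M = 5.0e9             # detected photon rate (counts/s) from the target at 1 m, no attenuation
DEAD_TIME_S = 50e-9            # assumed SPAD dead time
MAX_COUNTS_PER_DEADTIME = 0.1  # keep the expected detections per dead time well below 1

def required_attenuation(d_min_m: float) -> float:
    """Transmission of the attenuation sheet on the near-range combined pixel so that
    the expected detections per dead time at the closest distance stay small."""
    rate_at_dmin = RATE_AT_1M / d_min_m ** 2        # 1/d^2 scaling of the returned power
    allowed_rate = MAX_COUNTS_PER_DEADTIME / DEAD_TIME_S
    return min(1.0, allowed_rate / rate_at_dmin)

print(required_attenuation(0.2))  # ≈ 1.6e-5 transmission needed for a target at 0.2 m
print(required_attenuation(5.0))  # ≈ 0.01 for a target at 5 m
```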
  • the number of pixels included in the first combined pixel 621 and the second combined pixel 622 may be different. In an embodiment, the number of pixels included in the first combined pixel 621 and the second combined pixel 622 may also be the same.
  • the number of combined pixels in a superpixel is not limited to two.
  • it may also include a third combined pixel, which is used to collect the pulsed beam reflected by a target at an intermediate distance.
  • attenuation sheets can be provided on the combined pixels that collect the close-range returns to reduce the pile_up effect.
  • in this way, the PDE of the pixel array can be set to a higher value, which improves the measurement accuracy for distant targets while reducing the pile_up effect caused by close targets.
  • Fig. 11 is a schematic diagram of a collector according to a fifth embodiment of the present invention.
  • the collector 70 includes a receiving optical element 71, a filtering unit 72, a beam expanding optical element 73 and a pixel unit 74.
  • ordinarily, the receiving optical element 71 in the collector 70 guides the spot beam to the corresponding pixel, and the pixel unit 74 is usually set on the focal plane of the receiving optical element 71.
  • the beam expanding optical element 73 is provided in the collector 70 to reduce the pile_up phenomenon caused by the stronger beam reflected back by a close-range first target.
  • the receiving optical element 71 receives the first spot beam reflected from the target, and the first spot beam is matched with one pixel 741 of the pixel unit (which in the present invention may be a combined pixel or a super pixel).
  • after passing through the filter unit 72, the beam is expanded by the beam expanding optical element 73 to form a second spot beam that is uniformly diffused and has a larger spot diameter, and it is incident on multiple pixels 741 of the pixel unit 74.
  • each pixel 741 is used to receive a part of the optical signal in the second spot beam.
  • the filter unit 72 is mainly used to filter out background light or stray light.
  • the pixel unit 74 includes a two-dimensional pixel array composed of a plurality of pixels 741.
  • the pixel unit 74 includes a pixel array composed of single-photon avalanche photodiodes (SPADs).
  • each SPAD can respond to an incident single photon and output a signal indicating the corresponding arrival time of the received photon at that SPAD.
  • the pixel unit 74 also includes a microlens array, and each microlens 742 in the microlens array is matched with a pixel 741 and is used to converge part of the optical signal in the second spot beam onto the corresponding pixel 741.
  • the receiving optical element 71 includes a first lens having a first focal length
  • the beam expanding optical element 73 includes a second lens having a second focal length, wherein the second focal length is greater than the first focal length.
  • the beam expanding optical element 73 is a beam expander lens for forming a second spot beam with a uniform intensity distribution and a larger spot diameter.
  • the readout circuit 75 includes a TDC circuit array and a histogram circuit 752 for drawing a histogram reflecting the pulse waveform emitted by the light source in the transmitter; furthermore, the flight time can also be calculated from the histogram, and the result is finally output.
  • the TDC circuit array includes a plurality of TDC circuits 751; each pixel 741 in the pixel unit 74 is configured to be connected with a TDC circuit 751 for receiving the photon signal, calculating its time interval and converting the time interval into a time code; the multiple TDC circuits therefore operate simultaneously on the photons of the second spot beam collected by the pixels, and the time codes output by the TDC circuit array are processed by the histogram circuit 752 to draw a histogram reflecting the pulse waveform emitted by the light source in the transmitter; further, the flight time of the first spot beam from emission to reception can be calculated from the histogram, and the result is finally output.
  • when the transmitter and the collector are configured as a coaxial distance measurement system, the pixel 741 is configured as a combined pixel (set as described above), and each combined pixel is configured to be connected to a TDC circuit.
  • when the transmitter and the collector are configured as an off-axis distance measurement system, the pixel 741 is configured as a super pixel (set as described above), and each super pixel is configured to be connected to a TDC circuit.
  • the first spot beam is expanded by the beam expanding optical element to form a second spot beam with a larger diameter and uniform light intensity, which is incident on a plurality of pixels.
  • this matters in particular when the first spot beam is reflected back by a first target that is closer to the collector.
  • the beam expansion provides a buffered reception time for the pixels to collect photons: even if the front photons in the reflected beam reach the pixel array sooner, effective photons can still be collected because multiple pixels collect at the same time, so an accurate pulse peak is obtained in the histogram and the correct distance value is calculated; a toy model of this effect follows.
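A toy model of the buffering effect described above, under the simplifying assumption that each SPAD registers at most one photon per pulse because of its dead time; the photon count, pixel count and detection probability are arbitrary illustrative values.

```python
# Toy model of why spreading the spot over several SPADs helps: each SPAD can
# register at most one photon per pulse (dead time), so a strong early burst
# saturates a single pixel but is shared across many pixels after expansion.
import random

def detections(photons_in_burst, num_pixels, pde):
    """Count SPADs that fire when the burst is spread uniformly over num_pixels."""
    fired = set()
    for _ in range(photons_in_burst):
        pixel = random.randrange(num_pixels)
        if pixel not in fired and random.random() < pde:
            fired.add(pixel)          # each SPAD detects at most one photon per pulse
    return len(fired)

random.seed(0)
print(detections(200, 1, 0.25))   # single pixel: at most 1 detection per pulse
print(detections(200, 16, 0.25))  # expanded spot: many pixels contribute counts
```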
  • a method for manufacturing a collector which includes the following steps:
  • a receiving optical element is provided, and the receiving optical element is used to receive the first spot beam reflected by the target; the first spot beam is matched with one pixel of the pixel unit;
  • a beam expanding optical element is provided, and the beam expanding optical element is used to receive the first spot beam and form a second spot beam that is uniformly diffused and has a larger spot diameter;
  • a pixel unit is provided, and the pixel unit includes a two-dimensional pixel array composed of a plurality of pixels for receiving the second spot light beam, the second spot light beam being matched with the plurality of pixels.
  • the pixels are combined pixels, and each combined pixel includes at least two SPADs; or, the pixels are super pixels.
  • the method further includes the following step: providing a microlens array, the microlens array includes a plurality of microlenses, and each microlens is used to converge part of the light signal to a corresponding pixel.
  • the receiving optical element includes a first lens having a first focal length
  • the beam expanding optical element includes a second lens having a second focal length; wherein the second focal length is greater than the first focal length
  • An embodiment of the present application also provides a control device, including a processor and a storage medium for storing a computer program; wherein the processor, when executing the computer program, performs at least the method described above.
  • An embodiment of the present application also provides a storage medium for storing a computer program, and the computer program, when executed, performs at least the method described above.
  • An embodiment of the present application further provides a processor, which executes a computer program so as to perform at least the method described above.
  • the storage medium may be implemented by any type of volatile or non-volatile storage device, or a combination thereof.
  • the non-volatile memory can be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a compact disc read-only memory (CD-ROM); the magnetic surface memory can be a disk memory or a tape memory.
  • the volatile memory can be a random access memory (RAM), which is used as an external cache; by way of example and not limitation, many forms of RAM are available, such as static random access memory (SRAM), synchronous static random access memory (SSRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDRSDRAM), enhanced synchronous dynamic random access memory (ESDRAM), SyncLink dynamic random access memory (SLDRAM), and direct Rambus random access memory (DRRAM).
  • the storage media described in the embodiments of the present invention are intended to include, but are not limited to, these and any other suitable types of storage.
  • the disclosed system and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the units is only a logical function division, and there can be other divisions in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed can be indirect coupling or communication connection through some interfaces, devices or units, and can be electrical, mechanical, or in other forms.
  • the units described above as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they can be located in one place or distributed over multiple network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the functional units in the embodiments of the present invention can all be integrated into one processing unit, or each unit can serve separately as a unit, or two or more units can be integrated into one unit; the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • a person of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be implemented by a program instructing relevant hardware.
  • the foregoing program can be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the foregoing method embodiments; and the foregoing storage medium includes: removable storage devices, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
  • if the aforementioned integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.
  • based on this understanding, the computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the various embodiments of the present invention.
  • the aforementioned storage media include: removable storage devices, ROM, RAM, magnetic disks, optical disks, and other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A distance measurement method and system, and a computer-readable storage medium. The method comprises: controlling a transmitter (11) to emit a pulsed beam (30) toward a target area (20); regulating a pixel array (22) of a collector (12) to have at least two different detection efficiencies, receiving, at the at least two different detection efficiencies respectively, photon signals formed by photons in the beam (40) reflected by the target area (20), and obtaining depth images of the target area (20) from the photon signals respectively; and fusing the depth images of the target area to obtain a fused depth image of the target area (20). By regulating the pixel array (22) of the collector (12) to have at least two different detection efficiencies, corresponding depth images are obtained; then, according to how near or far the target under test is, the depth value of the target under test is selected from the depth image corresponding to the higher or lower detection efficiency, and the selected values are fused into a fused depth image, eliminating the pile_up phenomenon.

Description

一种距离测量方法、系统及计算机可读存储介质 技术领域
本发明涉及测距技术领域,尤其涉及一种距离测量方法、系统及计算机可读存储介质。
背景技术
利用飞行时间原理(TOF,Time of Flight)可以对目标进行距离测量以获取包含目标的深度值的深度图像,而基于飞行时间原理的距离测量系统已被广泛应用于消费电子、无人架驶、AR/VR等领域。基于飞行时间原理的距离测量系统通常包括发射器和采集器,利用发射器发射脉冲光束照射目标视场并利用采集器采集反射光束,计算光束由发射到反射接收所需要的时间来计算物体的距离。
目前基于飞行时间原理的距离测量系统中发射器包括像素阵列,特别是包括单光子雪崩光电二极管(SPAD)的像素阵列,当发射光束中的一个光子入射到SPAD时,即可触发雪崩事件输出信号用于记录光子到达SPAD的时间,基于此计算光束从发射到接收所需要的时间。但是由于SPAD接收一个光子后需要等待一个死区时间(deadtime)再接收下一个光子,这样对于死区时间内的多个光子最多只能接收一个光子。对于距离较近的物体、高反射率的物体或者强环境光的情况下,大量的光子在更早的时间入射到SPAD中使其饱和,而使SPAD不能检测随后入射的光子,导致在直方图电路中绘制的脉冲波形异常,无法确定光脉冲的接收时间,从而难以确定目标物体的距离。
以上背景技术内容的公开仅用于辅助理解本发明的构思及技术方案,其并不必然属于本专利申请的现有技术,在没有明确的证据表明上述内容在本专利申请的申请日已经公开的情况下,上述背景技术不应当用于评价本申请的新颖性和创造性。
发明内容
本发明为了解决现有的问题,提供一种距离测量方法、系统及计算机可读存储介质。
为了解决上述问题,本发明采用的技术方案如下所述:
一种距离测量方法,包括如下步骤:S1:控制发射器朝向目标区域发射脉冲光束;S2:调控采集器的像素阵列具有至少两个不同的探测效率,分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像;S3:融合所述目标区域的深度图像得到所述目标区域融合的深度图像。
在本发明的一种实施例中,控制采集器的像素阵列具有至少一个探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。调控所述采集器的像素阵列分别具有第一探测效率和第二探测效率,并分别以所述第一探测效率和所述第二探测效率接收所述目标区域反射的光束中的光子形成的光子信号。调控所述采集器的像素阵列具有第一探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成的第一光子信号,根据所述第一光子信号获得所述目标区域的第一深度图像;调控所述采集器的所述像素阵列具有第二探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成第二光子信号和第三光子信号,分别根据所述第二光子信号和第三光子信号获得所述目标区域的第二深度图像;所述第二探测效率大于所述第一探测效率。
在本发明的又一种实施例中,调控所述采集器的像素阵列具有第一探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成的第四光子信号和第五光子信号,分别根据所 述第四光子信号、第五光子信号获得所述目标区域的第四深度图像和第五深度图像;调控所述采集器的所述像素阵列具有第二探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成第六光子信号,根据所述第六光子信号获得所述目标区域的第六深度图像;所述第一探测效率大于所述第二探测效率。
在本发明的再一种实施例中,融合所述目标区域的深度图像得到所述目标区域融合的深度图像包括:依据待测目标的距离的远近选取所述至少两个不同的探测效率中高低效率对应的所述深度图像中的所述待测目标的深度值。
本发明还提供一种距离测量系统,包括:发射器,用于向目标区域发射脉冲光束;采集器,包括具有至少两个不同的探测效率的像素阵列,用于分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像,所述像素阵列是单光子雪崩光电二极管组成的像素阵列;控制和处理电路,分别与所述发射器以及所述采集器连接,用于实现如上任一所述的方法控制。
在本发明的一种实施例中,所述像素阵列具有至少一个探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。所述像素阵列具有第一探测效率和第二探测效率;所述第一探测效率大于所述第二探测效率,所述第一探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号;或,所述第二探测效率大于所述第一探测效率,所述第二探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。
本发明再提供一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序被处理器执行时实现如上任一所述方法的步骤。
本发明的有益效果为:提供一种距离测量方法、系统及计算机可读存储介质,通过调控采集器的像素阵列具有至少两个不同的探测效率获得对应的深度图像,然后依据待测目标的距离的远近选取所述至少两个不同的探测效率中高低效率对应的所述深度图像中的待测目标的深度值融合得到融合的深度图像,消除接收波形失真的pile_up现象。
附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例中一种距离测量系统示意图。
图2(a)是本发明实施例中的一种发射器的结构示意图。
图2(b)是本发明实施例中的一种采集器的结构示意图。
图3是本发明实施例中第一种距离测量方法的示意图。
图4是本发明实施例中第一种距离测量系统示意图。
图5是本发明实施例中一种采集器中像素单元的示意图。
图6是本发明实施例中第二种距离测量方法的示意图。
图7是本发明实施例中第二种距离测量系统的示意图。
图8是本发明实施例中第三种距离测量系统的示意图。
图9是本发明实施例中的第三种距离测量方法的示意图。
图10是本发明实施例中又一种采集器中像素单元的示意图。
图11是本发明实施例中又一种采集器的结构示意图。
图12是本发明实施例中一种采集器的制造方法的示意图。
具体实施方式
为了使本发明实施例所要解决的技术问题、技术方案及有益效果更加清楚明白,以下结合附图及实施例,对本发明进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本发明,并不用于限定本发明。
需要说明的是,当元件被称为“固定于”或“设置于”另一个元件,它可以直接在另一个元件上或者间接在该另一个元件上。当一个元件被称为是“连接于”另一个元件,它可以是直接连接到另一个元件或间接连接至该另一个元件上。另外,连接既可以是用于固定作用也可以是用于电路连通作用。
需要理解的是,术语“长度”、“宽度”、“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”、“内”、“外”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本发明实施例和简化描述,而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本发明的限制。
此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多该特征。在本发明实施例的描述中,“多个”的含义是两个或两个以上,除非另有明确具体的限定。
雪崩光电二极管指的是在激光通信中使用的光敏元件。在以硅或锗为材料制成的光电二极管的P-N结上加上反向偏压后,射入的光被P-N结吸收后会形成光电流。加大反向偏压会产生“雪崩”(即光电流成倍地激增)的现象,因此这种二极管被称为“雪崩光电二极管”。
图1所示为本发明一个实施例的距离测量系统示意图,该距离测量系统10包括发射器11、采集器12以及控制和处理电路13。其中,发射器11用于向目标区域20发射光束30,该光束发射至目标区域空间中以照明空间中的目标物体,至少部分发射光束30经目标区域20反射后形成反射光束40,反射光束40中的至少部分光束被采集器12接收,控制和处理电路13分别与发射器11以及采集器12连接,同步发射器11与采集器12的触发信号以计算光束从发射到接收所需要的时间,即发射光束30与反射光束40之间的飞行时间t,进一步,目标物体上对应点的距离D可由下式计算出:
D=c·t/2       (1)
其中,c为光速。
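As a quick numerical check of formula (1), a minimal Python sketch (the 100 ns round-trip time is only an illustrative value):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(t_seconds: float) -> float:
    """Convert a round-trip time of flight into a one-way distance, D = c*t/2."""
    return C * t_seconds / 2.0

# Example: a return arriving 100 ns after emission corresponds to roughly 15 m.
print(tof_to_distance(100e-9))
```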
发射器11包括光源111、发射光学元件112以及驱动器113等。光源111可以是发光二极管(LED)、激光二极管(LD)、边发射激光器(EEL)、垂直腔面发射激光器(VCSEL)等,也可以是由多个光源组成的一维或二维光源阵列,优选地,光源阵列是在单块半导体基底上生成多个VCSEL光源以形成的VCSEL阵列光源芯片,光源阵列中光源的排列方式可以是规则的也可以是不规则的。光源111所发射的光束可以是可见光、红外光、紫外光等。光源111在驱动器 113的控制下向外发射光束。在一个实施例中,光源111在驱动器113的控制下以一定频率(脉冲周期)向外发射脉冲光束,可以用于直接飞行时间(Direct TOF)测量中,频率根据测量距离进行设定。可以理解的是,还可以利用控制和处理电路13中的一部分或者独立于控制和处理电路13存在的子电路来控制光源111发射光束。
发射光学元件112接收来自光源111发射的光束并整形后投射到目标区域。在一个实施例中,发射光学元件112接收来自光源111的脉冲光束,并将脉冲光束进行光学调制,比如衍射、折射、反射等调制,随后向空间中发射被调制后的光束,比如聚焦光束、泛光光束、结构光光束等。发射光学元件112可以是透镜、液晶元件、衍射光学元件、微透镜阵列、超表面(Metasurface)光学元件、掩膜板、反射镜、MEMS振镜等形式中的一种或多种组合。
采集器12包括像素单元121、过滤单元122和接收光学元件123,接收光学元件123用于接收由目标反射回的至少部分光束并引导到像素单元121上,过滤单元122用于滤除背景光或杂散光。像素单元121包括由多个像素组成的二维像素阵列,在一个实施例中,像素单元121由单光子雪崩光电二极管(SPAD)组成像素阵列,SPAD可以对入射的单个光子进行响应并输出指示所接收光子在每个SPAD处相应到达时间的信号,利用诸如时间相关单光子计数法(TCSPC)实现对微弱光信号的采集以及飞行时间的计算。
控制和处理电路13同步发射器11与采集器12的触发信号,对像素采集光束的光子信号进行处理,并基于反射光束的飞行时间计算出待测目标的距离信息。在一个实施例中,SPAD对入射的单个光子进行响应而输出光子信号,控制和处理电路13接收光子信号并进行信号处理获取光束的飞行时间。具体的,控制和处理电路13计算采集光子的数量形成连续的时间bin,这些时间bin连在一起形成统计直方图用于重现反射光束的时间序列,利用峰值匹配和滤波检测识别出反射光束从发射到接收的飞行时间。在一些实施例中,控制和处理电路13包括信号放大器、时数转换器(TDC)、数模转换器(ADC)等器件中的一种或多种组成的读出电路(图中未示出)。这些电路即可以与像素整合在一起,也可以作为控制和处理电路13的一部分,为便于描述,将统一视作控制和处理电路13的一部分。可以理解的是,控制和处理电路13可以是独立的专用电路,比如专用SOC芯片、FPGA芯片、ASIC芯片等等,也可以包含通用处理电路。
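The paragraph above describes TCSPC-style readout: photon arrival times are accumulated into time bins, the bins form a histogram, and the reflected pulse is located in the histogram to obtain the flight time. The sketch below illustrates that flow under simplifying assumptions (1 ns bins, a 200 ns window, synthetic ambient and signal photons, and a plain arg-max peak search instead of the peak matching and filtering mentioned in the text):

```python
import random

BIN_WIDTH = 1e-9      # assumed 1 ns time bins
NUM_BINS = 200        # assumed 200 ns measurement window

def build_histogram(arrival_times):
    """Accumulate photon arrival times (in seconds) into fixed-width time bins."""
    hist = [0] * NUM_BINS
    for t in arrival_times:
        idx = int(t / BIN_WIDTH)
        if 0 <= idx < NUM_BINS:
            hist[idx] += 1
    return hist

def estimate_tof(hist):
    """Take the peak bin as the reflected-pulse position (simple arg-max peak search)."""
    peak = max(range(NUM_BINS), key=lambda i: hist[i])
    return (peak + 0.5) * BIN_WIDTH

# Illustrative data: uniform ambient photons plus signal photons clustered near 100 ns.
random.seed(1)
ambient = [random.uniform(0, NUM_BINS * BIN_WIDTH) for _ in range(500)]
signal = [random.gauss(100e-9, 1e-9) for _ in range(300)]
tof = estimate_tof(build_histogram(ambient + signal))
print("estimated ToF:", tof, "-> distance:", 3e8 * tof / 2, "m")
```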
在一些实施例中,距离测量系统10还包括存储器,用于存储脉冲编码程序,利用编码程序控制光源111发射光束的激发时间、发射频率等。
在一些实施例中,距离测量系统10还可以包括彩色相机、红外相机、IMU等器件,与这些器件的组合可以实现更加丰富的功能,比如3D纹理建模、红外人脸识别、SLAM等功能。
在一些实施例中,发射器11与采集器12也可以被设置成共轴形式,即二者之间通过具备反射及透射功能的光学器件来实现,比如半透半反镜等。
图2(a)和图2(b)所示为本发明第一个实施例的发射器和采集器的结构示意图。其中,发射器11包括由多个光源组成的光源阵列21,多个光源以一定的图案形式排列在单片基底上形成的。基底可以是半导体基底、金属基底等,光源可以是发光二极管、边发射激光器、垂直腔面发射激光器(VCSEL)等,优选地,光源阵列21是由设置在半导体基底上的多个VCSEL光源组成的阵列VCSEL芯片。光源阵列21在驱动电路(可以是控制和处理电路13的一部分)的调制驱动下进行发光,也可以在驱动电路的控制下分组发光或者整体发光。
像素单元121包括像素阵列22以及读出电路23,其中像素阵列22包括由多个像素组成的二维阵列,用于采集由物体反射回的至少部分光束并生成相应的光子信号,读出电路23用于对光子信号进行处理以计算飞行时间。
在一个实施例中,读出电路23包括TDC电路231和直方图电路232,用于绘制反映发射器中光源所发射脉冲波形的直方图,进一步地,也可以根据直方图计算飞行时间,最后将结果进行输出。其中,读出电路23可以是单个TDC电路及直方图电路组成,也可以由多个TDC电路单元及直方图电路单元组成的阵列读出电路。
在一个实施例中,像素阵列22是由多个SPAD组成的像素阵列,当发射器11向被测物体发射斑点光束时,采集器12中的接收光学元件123会引导该斑点光束至相应的像素上,一般地,为了尽可能多地接收反射光束中的光子信号,通常将单个斑点的大小被设置为对应多个像素(这里的对应可以理解为成像,光学元件112一般包括成像透镜)。比如图2(b)所示,单个斑点对应2×2=4个像素,即该斑点光束反射回的光子会以一定的概率被对应的4个像素接收,一般地,将对应的多个像素组成的像素区域称为“合像素”,合像素的大小在设置时需根据测距系统综合考虑。在一个实施例中,配置光源阵列21中的每个光源与像素阵列22中的每个合像素配对,即每个光源的投射视场与对应合像素的采集视场一一对应。如图2(b)所示,光源211发射的光束被物体反射后由接收光学元件123引导该斑点光束至合像素221上,光源212发射的光束被物体反射后由接收光学元件123引导该斑点光束至合像素224上,光源213发射的光束被物体反射后由接收光学元件123引导该斑点光束至合像素225上。
一般地,发射器11和采集器12之间根据设置方式的不同距离测量系统可以分成共轴和离轴。对于共轴情形,发射器11发出的光束经过被测物体反射后将由采集器12中对应的合像素采集,合像素的位置不会因为被测物体的远近有影响;但对于离轴情形,由于视差的存在,当被测物体远近不同时,光斑落在像素单元上的位置也会发生变化,一般地会沿着基线(发射器11与采集器12之间的连线,在本发明中统一用横线来表示基线方向)方向发生偏移,当被测物体的距离未知时合像素的位置是不确定,为了解决这一问题,需要采用超像素技术,即设置超过合像素对应数量的多个像素组成像素区域,这里称为“超像素”用于接收反射回的斑点光束,如图2(b)所示的实施例中一个超像素222包括三个合像素。超像素的大小在设置时,需要同时考虑距离测量系统10的测量范围以及基线的长度,使得在测量范围内不同距离上物体反射回的斑点所对应的合像素均会落入在超像素区域内,即超像素的大小应超过至少一个合像素。一般地,超像素的尺寸沿与基线垂直方向与合像素相同,沿基线方向则大于合像素。超像素的数量一般与采集器12单次测量所采集到的斑点光束的数量相同。
直方图电路232中绘制出反映发射器中光源所发射脉冲波形的接收波形,通常接收波形与发射的脉冲波形在形状上基本相似,接收波形表示入射到像素阵列中的反射脉冲中的光子数量。像素阵列接收的光子包括环境光子和信号光子,其中环境光子在直方图的时间bin上持续存在,而信号光子只在目标位置对应的时间bin内出现形成脉冲峰值。但是,由于SPAD阵列在接收光子后进入死区时间而不再检测光子,当待测目标离SPAD阵列距离较近时,或者待测目标具有高反射率时,反射光束中前部的光子更快的入射到SPAD阵列中使多个SPAD饱和,而后续入射的光子被SPAD采集到的概率降低,导致脉冲峰值位置提前。或者,在强环境光条件下,大量的环境光子入射到SPAD阵列中使多个SPAD饱和,而后信号光子被SPAD采集到 的概率降低,导致形成的接收波形失真,使用失真的接收波形的波峰确定的TOF值不准确。以上产生的接收波形失真的情况统称为pile_up现象,下面将通过一些实施例描述如果解决这一问题,提高距离测量系统的准确性。
第一实施例
如图3所示,为本发明第一个实施例的距离测量方法的流程图。通过距离测量系统中的控制和处理电路13执行该距离测量方法,具体的方法步骤如下:
S1、控制发射器朝向目标区域发射脉冲光束。
其中,发射器11包括光源阵列21,朝向目标区域发射斑点图案的脉冲光束,经由目标区域中的物体反射后形成反射光束。
S2、调控采集器的像素阵列具有至少两个不同的探测效率,分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像;
控制和处理电路13通过调控像素阵列22中每个像素上施加的反向偏置电压改变像素阵列的探测效率(PDE)。其中,PDE是指单位时间内探测到有效光子数与入射光子总数的比率,每个像素的PDE与施加在像素上的反向偏置电压密切相关,施加在像素上的反向偏置电压越高,雪崩持续时间越长,则PDE明显提高,当施加在像素上的反向偏置电压越低,PDE也降低,当反向偏置电压低于击穿电压,会导致雪崩猝灭,此时像素不再接收光子。但偏置电压并非是可以无限提高的,当偏置电压设置过高时,可能引起暗计数率明显提高,因此,需要在实际应用中根据系统需求合理设置反向偏置电压的数值。
在本发明中,通过控制采集器的像素阵列具有至少两个不同的探测效率分别获取不同距离范围内的待测物体反射的光束中的光子形成的光子信号,不同的探测效率对应不同的测距系统的测距范围、反射率等,即采用低的探测效率的测距范围较小,用于处理近距、高反射率、强环境光;采用高的探测效率的测距范围较远,处理远距、低反射率、低环境光。实际上,探测效率的数量可根据具体情况进行设置。控制采集器的像素阵列具有至少一个探测效率用于采集目标区域中所有待测目标反射的光束中的光子形成的光子信号。
在本发明的一种实施例中,可以根据目标区域中待测目标的距离远近设置多个探测效率,多个探测效率之间的差值可以相等也可以不相等。
在本发明的一种实施例中,控制采集器的像素阵列分别具有第一探测效率和第二探测效率,并分别以所第一探测效率和第二探测效率接收目标区域中待测物体反射的光束中的光子形成的光子信号。
具体的,当第一探测效率低于第二探测效率时,控制和处理电路13调控像素阵列22具有第一探测效率(此时施加在像素上的反向偏置电压较低),即像素阵列22具有较低的PDE。此时完成对目标视场的第一帧深度图像采集。则距离采集器12更近的第一目标反射的光束中的光子被像素阵列22中的像素接收形成第一光子信号;或者,反射率更高的第一目标反射的光束中的光子被像素阵列22中的像素接收形成第一光子信号。即使在较强环境光中,由于此时像素具有较低的PDE,能够减少环境光子的影响而接收到有效的反射光束中的信号光子形成第一光子信号。控制和处理电路13根据第一光子信号计算得到第一飞行时间进而获得目标区域的第一深度图像,第一深度图像的像素点上具有第一TOF值。
可以理解的是,通过降低像素阵列22的PDE,有效的解决了pile_up的问题,提高了测量近距目标的准确度,但是降低了像素阵列的PDE,相应的也减小了测距系统的测距范围,对于距离采集器更远的第二目标或者是反射率更低的第二目标,像素阵列22很难采集到足够数量的有效光子,则无法生成具有足够信噪比的第二光子信号,不能计算出表征第二目标距离信息的第二飞行时间。因此采用下一步骤确定第二目标的第二飞行时间。
然后,调控采集器中的像素阵列具有第二探测效率,像素阵列接收经目标区域反射的光束中的光子形成第二光子信号和第三光子信号,根据第二光子信号和第三光子信号获得目标区域的第二深度图像。
控制和处理电路13调控像素阵列22具有第二探测效率(此时施加在像素上的反向偏置电压较高,其中,第二探测效率大于第一探测效率,此时,像素阵列22具有较高的PDE,完成对目标区域的第二帧深度图像采集。此时距离采集器12更远的第二目标反射的光束中的光子能够被像素阵列22中的像素接收形成第二光子信号;或者,反射率更低的第二目标反射的光束中的光子能够被像素阵列22中的像素接收形成第二光子信号。控制和处理电路13根据第二光子信号计算得到第二飞行时间形成目标区域的第二深度图像,第二深度图像中的部分像素点上具有第二TOF值。
在一种实施例中,第二探测效率能够探测到位于系统最远距离处的目标,在本发明的一种实施例中,距离探测系统的最大探测距离为150m,第二探测效率可以接收到目标位于150m处时反射回的光子形成光子信号;而第一探测效率只可以接收位于20m处的目标反射回的光子。
此外,当像素阵列具有较高的PDE时,同样可以接收到第一目标反射的光束中的光子生成第三光子信号,控制和处理电路13根据第三光子信号可以计算出表征第一目标距离信息的第三飞行时间,导致第二深度图像中部分像素点上具有第三TOF值,但由于pile_up现象的存在,导致相同像素点上第三TOF值要小于第一TOF值(准确TOF值)。因此,在下一步中确定目标区域的准确深度图像。
可以理解的是,当第一探测效率大于所述第二探测效率时,同样适用于本发明。即控制和处理电路13调控像素阵列22具有第一探测效率(此时施加在像素上的反向偏置电压较高),即像素阵列22具有较高的PDE。像素阵列22接收经目标区域反射的光束中的光子形成的第四光子信号和第五光子信号,分别根据第四光子信号、第五光子信号获得目标区域的第四深度图像和第五深度图像;然后,调控像素阵列具有第二探测效率(此时施加在像素上的反向偏置电压较低),即像素阵列22具有较低的PDE。像素阵列接收经目标区域反射的光束中的光子形成第六光子信号,根据第六光子信号获得目标区域的第六深度图像。此处不再赘述。
S3、融合所述目标区域的深度图像得到所述目标区域融合的深度图像。
在融合目标区域的深度图像得到目标区域融合的深度图像时,依据待测目标的距离的远近选取所述至少两个不同的探测效率中高低效率对应的所述深度图像中的待测目标的深度值。
具体的,如上所述,控制和处理电路13将第一深度图像中每个像素点上的第一TOF值赋值到第二深度图像中对应像素点上替换该像素点上的第三TOF值,从而形成第三深度图像,在第三深度图像中每个像素点上对应的TOF值即为准确飞行时间。可以理解的是,这里所说的像素主要是指具有有效TOF值的像素。
对于第四深度图像、第五深度图像和第六深度图像的处理是类似的。
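A minimal sketch of the fusion rule described above in step S3: valid TOF values from the depth image acquired at the lower PDE (reliable for near or highly reflective targets) replace the corresponding pixel values of the depth image acquired at the higher PDE, whose near-target values are biased early by pile_up. Representing depth maps as 2-D NumPy arrays with 0 marking pixels that produced no valid TOF is an illustrative convention assumed here, not something specified in the text.

```python
import numpy as np

def fuse_depth(low_pde_depth: np.ndarray, high_pde_depth: np.ndarray) -> np.ndarray:
    """Fuse two depth maps acquired at different detection efficiencies.

    low_pde_depth  : depth image from the lower PDE (reliable for near or highly
                     reflective targets, zero elsewhere).
    high_pde_depth : depth image from the higher PDE (reaches far targets, but
                     near targets suffer pile_up and read too short).
    Pixels with a valid low-PDE value (> 0) overwrite the high-PDE pixels;
    all other pixels keep the high-PDE value.
    """
    fused = high_pde_depth.copy()
    valid_near = low_pde_depth > 0
    fused[valid_near] = low_pde_depth[valid_near]
    return fused

# Illustrative 2x2 example: the near pixel (0, 0) keeps the low-PDE value.
low = np.array([[1.2, 0.0], [0.0, 0.0]])
high = np.array([[0.9, 35.0], [80.0, 120.0]])
print(fuse_depth(low, high))
```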
基于本发明的方法还提供一种距离测量系统用于实现上述方法。
如图4所示,是本发明第一实施例的一种距离测量系统的示意图。
在本实施例中,通过调控采集器的像素阵列具有至少两个不同的探测效率,分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像,将深度图像融合成一帧深度图像,有效的修正了由于pile_up引起的测量误差。
采用如上所述的方法和系统,通过调控采集器的像素阵列具有至少两个不同的探测效率获得对应的深度图像,然后依据待测目标的距离的远近选取所述至少两个不同的探测效率中高低效率对应的所述深度图像中的待测目标的深度值融合得到融合的深度图像,消除接收波形失真的pile_up现象。
第二实施例
如图5所示,为本发明第二实施例的采集器中像素单元的示意图。像素单元包括像素阵列41以及读出电路44,其中像素阵列41包括由多个像素组成的二维阵列,用于采集由物体反射回的至少部分光束并生成相应的光子信号,读出电路41用于对光子信号进行处理以计算飞行时间。
在一个实施例中,读出电路44包括TDC电路441和直方图电路442,用于绘制反映发射器中光源所发射脉冲波形的直方图,进一步地,也可以根据直方图计算飞行时间,最后将结果进行输出。其中,读出电路44可以是单个TDC电路及直方图电路组成,也可以由多个TDC电路单元及直方图电路单元组成的阵列读出电路。
在一个实施例中,像素阵列41是由多个单光子雪崩光电二极管(SPAD)组成的像素阵列,其中,像素阵列41包括参考像素阵列42和成像像素阵列43。参考像素阵列42包括至少一个参考像素421。
如图5所示的实施例,参考像素阵列42被配置为沿成像像素阵列43外围边缘设置的一列参考像素,在其他实施例中,参考像素阵列42可设置至少一列或一行;或,参考像素位于成像像素阵列43周围任意给定的位置。成像像素阵列43的配置如图2(b)所示的像素阵列的描述,在此不再重复叙述。
控制和处理电路13控制发射器11朝向目标区域发射脉冲光束,同时控制采集器中的像素开启以接收反射光束中的光子,经目标区域反射回的反射光束由接收光学元件123引导反射光束成像至成像像素阵列43,成像像素阵列43中的成像像素采集反射光束中的光子形成光子信号,控制和处理电路13根据光子信号计算反射光束从发射到接收的飞行时间。但由于pile_up现象的存在,计算的反射光束可能存在误差,因此,通过配置参考像素阵列42统计在一定时间内接收参考光子数量,根据参考光子数量调控下一帧采集时成像像素阵列43中成像像素的PDE。控制和处理电路13通过调控成像像素阵列43中成像像素上施加的反向偏置电压改变成像像素阵列的探测效率(PDE)。
其中,参考像素阵列42在预定时间内接收的参考光子包括环境光子,也可能包括部分反射光束中的信号光子,参考光子数量用于表征环境光强度与目标反射率的乘积,则参考光子数量与成像像素的PDE成反比例关系。根据参考像素阵列42在预定时间内接收的参考光子的数量调整调控成像像素阵列的探测效率并控制采集器以调控后的探测效率接收光子,直至成像像 素阵列接收经目标区域反射回的脉冲光束中的光子形成第二光子信号满足预定需求。这里所述的预定需求可以是满足预定的精度等,调整次数至少一次。
在本发明的一个实施例中,调控成像像素阵列的探测效率低于或高于第一探测效率,具体根据参考光子数量与成像像素的PDE成反比例关系进行调整。
在一个实施例中,预先设定在一定时间内参考像素阵列42接收参考光子数量的阈值,比如将一定时间设置为10us,在第一帧深度图采集时,控制和处理电路13调控成像像素阵列以第一探测效率(较低的PDE)接收反射光束中的光子,同时控制参考像素阵列42接收参考光子,若此时处于较低环境光和/或目标反射率较低时,在10us内接收的参考光子数量小于阈值,则下一帧采集时控制和处理电路13调控成像像素阵列43以第二探测效率(较高的PDE)接收反射光束中的光子,若参考光子数量大于或等于阈值,则下一帧采集时成像像素阵列仍具有第一探测效率。通过设定参考光子数量的阈值,使调节成像像素的PDE的次数较少,减少调节时系统的复杂度。
在一个实施例中,可预定义参考像素阵列42在预定时间内接收的参考光子数量与成像像素的PDE的对应关系,控制与处理电路13根据当前帧参考像素阵列42接收的参考光子数量结合预定义的对应关系即可确定下一帧成像像素阵列43的PDE,可以实现实时调控。在实际应用中的距离测量系统,通常遇到许多不可控的因素,例如用于自动驾驶中的LiDAR系统,在连续测量过程中可能出现环境改变或者目标改变的情况,通过实时调控成像像素的PDE也可以有效的解决由于这些情况出现时引起的测距误差,提升系统的准确性,而且,通过这种方法不需要减小测量过程中的帧率。
如图6所示,基于第二实施例的说明,还提出了一种距离测量方法,包括如下步骤:
T1:控制发射器发射脉冲光束;
T2:控制采集器具有第一探测效率,并以所述第一探测效率接收光子;所述采集器的像素阵列包括参考像素阵列和成像像素阵列;所述参考像素阵列包括至少一个参考像素,用于接收参考光子;所述成像像素阵列包括至少一个成像像素,用于接收经目标区域反射回的所述脉冲光束中的光子形成第一光子信号;
T3:根据预定时间内所述参考像素阵列接收的所述参考光子数量调控所述成像像素阵列的探测效率为第二探测效率并控制所述采集器以所述第二探测效率接收光子,直至所述成像像素阵列接收经目标区域反射回的所述脉冲光束中的光子形成第二光子信号满足预定需求;
可以理解的是,调控成像像素阵列的探测效率低于或高于第一探测效率,是依据参考像素阵列接收的参考光子数量获得目标区域的成像情况的参考。
T4:根据所述第二光子信号计算所述脉冲光束从发射到接收的飞行时间。
在一个实施例中,设定在一定时间内接收参考光子数量的阈值,根据参考光子数量调控像素阵列的探测效率;若参考光子数量大于或等于阈值,则控制和处理电路调控下一帧采集时成像像素阵列具有第一探测效率;若参考光子数量小于阈值,则控制和处理电路调控下一帧采集时成像像素阵列具有第二探测效率;其中,第二探测效率大于第一探测效率。
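A minimal sketch of the threshold rule in the preceding paragraph: the reference-photon count gathered during the fixed window decides whether the imaging pixels keep the first (lower) detection efficiency or switch to the second (higher) one for the next frame. The two PDE values and the threshold are illustrative assumptions.

```python
LOW_PDE = 0.05       # assumed first (lower) detection efficiency
HIGH_PDE = 0.25      # assumed second (higher) detection efficiency
REF_THRESHOLD = 200  # assumed reference-photon count threshold per window

def next_frame_pde(reference_photon_count: int) -> float:
    """Choose the imaging-pixel PDE for the next frame from the reference count.

    A large reference count indicates strong ambient light and/or a near, highly
    reflective target, so the lower PDE is kept to avoid pile_up; a small count
    indicates a dim return, so the PDE is raised.
    """
    if reference_photon_count >= REF_THRESHOLD:
        return LOW_PDE
    return HIGH_PDE

print(next_frame_pde(350))  # 0.05 -> stay at the lower PDE
print(next_frame_pde(40))   # 0.25 -> raise the PDE for the next frame
```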
在一个实施例中,预先存储参考光子数量和成像像素的探测效率的对应关系表,根据参考光子数量查询对应关系表调控下一帧成像像素阵列的探测效率。如图7所示,是本发明第二实施例的一种距离测量系统的示意图。
采用本发明的距离测量方法及系统,通过根据参考像素接收参考光子数(环境光子)调节成像像素的探测效率,在不减小测量帧率的情况下消除接收波形失真的pile_up现象。
进一步的,通过预先设定在预定时间内参考像素阵列接收的参考光子数量的阈值,使调节成像像素的探测效率的次数较少,减少调节时的复杂度。
再进一步的,通过预定义参考像素阵列在预定时间内接收的参考光子数量与成像像素的探测效率的对应关系,提升调节的准确性。
第三实施例
如图8所示,为本发明第三实施例的距离测量系统的示意图。距离测量系统60包括发射器11、采集器12、相机14以及控制和处理电路13。其中,发射器11用于向目标区域20发射光束30,该光束发射至目标区域空间中以照明空间中的目标物体,至少部分发射光束30经目标区域20反射后形成反射光束40,反射光束40中的至少部分光束被采集器12接收,控制和处理电路13分别与发射器11以及采集器12连接,同步发射器11与采集器12的触发信号以计算光束从发射到接收所需要的时间。另一方面,控制和处理电路13与相机14连接,相机14用于采集目标区域的灰度图像,其中灰度图像中像素点的灰度值表示经目标反射的光束50和环境光的总光强度。控制与处理电路13根据灰度图像中的像素点的灰度值调控采集器12中像素阵列中对应像素的探测效率(PDE)。
具体的,相机14包括第一像素单元141,用于采集目标区域的灰度图像,第一像素单元141包括由多个第一像素组成的第一像素阵列(未图示),其中灰度图像中的像素点与第一像素单元141中的第一像素一一对应。相机14可以是灰度相机、RGB相机等,优选地是灰度相机。采集器12包括第二像素单元121,在一个实施例中,第二像素单元121的结构如图2(b)所述,包括像素阵列22以及读出电路23,为便于在此实施例中的描述,将像素阵列22记为第二像素阵列,第二像素阵列包括由多个第二像素组成的二维阵列,优选地第二像素是SPAD像素。配置相机14与采集器12具有相同的采集视场,使至少一个第一像素与至少一个第二像素(在本实施例中,第二像素可以是合像素也可以是超像素)配对。
控制和处理电路13根据灰度图像中每个像素点的灰度值确定反射光束的光强度,灰度值处于0-255之间共分成256级,灰度值越大对应的反射光束的光强度越大。可以理解的是,距离采集器更近的第一目标反射的光束相比距离采集器更远的第二目标反射的光束的光强度更大;或者,反射率更高的第一目标反射的光束相比反射率更低的第二目标反射的光束的光强度更大;或者受到较强环境光的影响,反射的环境光也会相应增大灰度图像中像素点的灰度值。
为有效降低pile_up现象的影响,控制和处理电路13根据灰度图像中像素点的灰度值调整第二像素阵列中对应第二像素的PDE。控制和处理电路通过改变第二像素阵列中第二像素上施加的反向偏置电压调控第二像素的探测效率。通常,在下一帧深度图采集时,控制和处理电路13调控第二像素阵列中每个第二像素的PDE,此时的第二像素阵列不再具有统一的PDE,对于目标区域中具有多个不同的待测目标时,有效提高了测量的准确性。
在一个实施例中,预先存储灰度图像的灰度值与第二像素的探测效率的数值的对应关系表。控制和处理电路13根据灰度图像中每一像素点的灰度值查询关系表确定与之对应的第二像素的PDE,调控第二像素上施加的反向偏置电压而改变下一帧采集时第二像素的PDE。灰度值与PDE数值的对应关系表可以通过标定得到。
在一个实施例中,预先将灰度图像的灰度值按照顺序分成至少两个梯级,并配置每一个梯级对应的所述第二像素的探测效率。具体的,预先将灰度值按照从小到大(也可以从大到小)的顺序分梯级,配置每一梯级具有对应的PDE。比如可以分为三个梯级,其中第一梯级的灰度值范围为0-85,第二梯级的灰度值范围为86-171,第三梯级的灰度值范围为172-256,对应的第二像素的PDE设置为第一PDE(较高PDE)、第二PDE(中间PDE)、第三PDE(较低PDE)。控制和处理电路13根据灰度值梯级对灰度图像进行处理以将图像分为多个第一闭环区域,同一闭环区域内所有像素点的灰度值属于同一梯级。进一步,根据第一闭环区域边界线上像素点的坐标,确定第二像素阵列中与第一闭环区域对应的第二闭环区域,并根据与梯级对应的探测效率调控第二闭环区域内全部第二像素的探测效率。比如第一闭环区域内的灰度值属于第一梯级,则调控第一闭环区域内的全部第二像素具有第一PDE。通过这种分级设置化区域调节可以提升调控的时间。可以理解的是,以上调控方法只为本发明的一个实施例,不对本发明的内容做具体限制。
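A minimal sketch of the gray-level tiering described in the preceding paragraph: gray values are split into three tiers (0-85, 86-171, and 172 and above, following the example in the text) and each tier is mapped to a detection efficiency for the paired second pixels, with darker regions getting the higher PDE. The three PDE values themselves are illustrative assumptions.

```python
import numpy as np

# Tier boundaries follow the example in the text; the PDE values are assumptions.
PDE_HIGH, PDE_MID, PDE_LOW = 0.25, 0.12, 0.05

def pde_map_from_gray(gray: np.ndarray) -> np.ndarray:
    """Assign a detection efficiency to every second pixel from the paired gray value.

    Dark regions (weak return) get the higher PDE; bright regions (near or highly
    reflective targets, or strong ambient light) get the lower PDE.
    """
    pde = np.full(gray.shape, PDE_MID, dtype=float)
    pde[gray <= 85] = PDE_HIGH
    pde[gray >= 172] = PDE_LOW
    return pde

gray = np.array([[10, 120], [200, 90]], dtype=np.uint8)
print(pde_map_from_gray(gray))
```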
如图9所示,基于第三实施例的说明,还提出了一种距离测量方法,包括如下步骤:
P1:控制发射器发射脉冲光束;
P2:控制灰度图像获取单元的第一像素阵列采集目标区域的灰度图像,同时控制采集器的第二像素阵列具有第一探测效率,以所述第一探测效率接收经所述目标区域反射回的所述脉冲光束中的光子形成的第一光子信号;
P3:根据所述灰度图像中像素点的灰度值调控所述第二像素阵列中对应的所述第二像素的探测效率,直至所述第二像素阵列接收经目标区域反射回的所述脉冲光束中的光子形成第二光子信号满足预定需求;
P4:根据所述第二光子信号计算所述脉冲光束从发射到接收的飞行时间。
可以理解的是,预定需求是像素阵列能够接收足够多的光子信号形成接收波形;或者接收符合一定信噪比的光子信号。
可以理解的是,在本发明的一种实施例中,调控所述第二像素阵列的探测效率低于或高于第一探测效率。
需要说明的是,本实施例的距离测量方法采用前述第三实施例的距离测量系统进行距离测量,其技术方案与前述距离测量系统相同,故在此不再重复赘述。
通过本发明实施例的距离测量方法和系统,通过根据灰度图像的灰度值调节采集器的第二像素的探测效率,在不减小测量过程中的帧率的情况下消除接收波形失真的pile_up现象。
进一步的,通过预先存储灰度图像的灰度值与第二像素的探测效率的数值的对应关系表,对于目标区域中具有多个不同的待测目标时,有效提高了测量的准确性。
再进一步的,通过预先将灰度值按照顺序分成至少两个梯级,并配置与每一个梯级对应的第二像素的探测效率,通过分级设置化区域调节提升调控的时间。
第四实施例
图10所示是本发明第四实施例的采集器中像素单元的示意图。像素单元包括像素阵列61以及读出电路64,其中像素阵列61包括由多个像素组成的二维阵列,用于采集由物体反射回的至少部分光束并生成相应的光子信号,读出电路64用于对光子信号进行处理以计算飞行时间。
在一个实施例中个,读出电路64包括TDC电路641和直方图电路642,用于绘制反映发射器中光源所发射脉冲波形的直方图,进一步地,也可以根据直方图计算飞行时间,最后将结果进行输出。其中,读出电路64可以是单个TDC电路及直方图电路组成,也可以由多个TDC电路单元及直方图电路单元组成的阵列读出电路。
在一个实施例中,像素阵列61是由多个SPAD组成的像素阵列,当发射器11向被测物体发射斑点光束时,采集器12中的接收光学元件123会引导该光斑光束至相应的像素上,一般地,为了尽可能多地接收反射光束中的光子信号,通常将单个斑点的大小被设置为对应多个像素(这里的对应可以理解为成像,光学元件123一般包括成像透镜),比如图10所示单个斑点对应2×2=4个像素,即该斑点光束反射回的光子会以一定的概率被对应的4个像素接收,一般地,将对应的多个像素组成的像素区域成为“合像素”,合像素的大小在设置时需进行综合考虑。
发射器和采集器之间是离轴配置的距离测量系统中,由于视差的存在,当被测物体远近不同时,光斑落在像素单元上的位置也会发生变化,一般地会沿着基线(发射器11与采集器12之间的连线,在本发明中统一用横线来表示基线方向)方向发生偏移,当被测物体的距离未知时合像素的位置是不确定,为了解决这一问题,需要采用超像素技术,即设置超过合像素对应数量的多个像素组成像素区域611、612(这里称为“超像素”用于接收反射回的斑点光束)。
在一个实施例中,如图10所示,超像素611被设置成包括第一合像素621、第二合像素622,超像素611与一个TDC电路和直方图电路连接。其中,超像素的采集视场与对应光源的投射视场相匹配,与超像素611对应的光源朝向对应区域发射脉冲光束时,若在该区域处的第一目标位于距离采集器更近距离时,经第一目标反射的斑点光束(实线圆表示)入射到第一合像素621中;若在该区域处的第二目标位于距离采集器更远距离时,经第二目标反射的斑点光束(虚线圆表示)入射到第二合像素622中。为有效抑制pile_up效应的影响,在第一合像素621上设置衰减片62,使得从目标区域处的第一目标反射的光束首先打到衰减片62上,经过衰减片62后反射光束的光强度降低,再入射到第一合像素621中,减少第一合像素621采集到的光子数量。在本发明的一种实施例中,可以根据距离测量系统的测距范围确定衰减片的衰减系数,第一合像素用于接收目标区域中近距的目标物体反射回的脉冲光束中的光子后形成的光子信号。衰减片不仅仅解决强环境光,主要是减弱近距目标产生的强反射光,这是因为pile_up问题主要是由目标位于近距时反射的强反射光引起的,高反射率、强环境光都只是辅助因素并不是主导因素。
在一个实施例中,第一合像素621和第二合像素622中包含的像素数量可以是不相同的。在一个实施例中,第一合像素621和第二合像素622中包含的像素数量也可以是相同的。
可以理解的是,超像素中的合像素数量不仅限于两个,例如还可以包括第三合像素,用于采集中间距离的目标反射的脉冲光束,通过设置多个合像素分别采集测距范围的子区间内的反射光脉冲,不管设置多少个合像素,都可以在采集近距范围的合像素上设置衰减片以降低pile_up效应。
通过在采集近距目标反射光束的第一合像素上设置衰减片,可以调控像素阵列的PDE为较高的PDE,提升对于远距目标的测量精度同时可以减小近距目标产生的pile_up效应。
第五实施例
图11所示是本发明第五实施例的采集器的示意图。采集器70包括接收光学元件71、过滤单元72、扩束光学元件73和像素单元74。一般地,当发射器11向被测物体发射斑点光束时,采集器70中的接收光学元件71会引导该光斑光束至相应的像素上,则通常将像素单元74设置在接收光学元件71的焦平面上。当待测目标离像素阵列距离较近时,反射光束中前部的光子更快的入射到像素单元中使多个像素饱和,而后续入射的光子被像素采集到的概率降低,导致脉冲峰值位置提前。因此,本实施例中在采集器70中设置扩束光学元件73以减小由近距的第一目标反射回的较强光束引起的pile_up现象。
在一个实施例中,如图11所示,接收光学元件71会接收从目标反射回的第一斑点光束,其中第一斑点光束与所述像素单元上的一个像素741(在本发明中可以是合像素也可以是超像素)相匹配,经过过滤单元72后通过扩束光学元件73后实现扩束,形成光束均匀扩散且光斑直径更大的第二斑点光束,而入射到像素单元74中的多个像素741上,其中每个像素741用于接收第二斑点光束中的部分光信号。过滤单元72主要用于滤除背景光或杂散。像素单元74包括由多个像素741组成的二维像素阵列,在一个实施例中,像素单元74包括由单光子雪崩光电二极管(SPAD)组成的像素阵列,SPAD可以对入射的单个光子进行响应并输出指示所接收光子在每个SPAD处相应到达时间的信号。像素单元74中还包括微透镜阵列,微透镜阵列中的每个微透镜742与像素741相匹配,用于汇聚第二斑点光束中的部分光信号至对应的像素741上。
在一个实施例中,接收光学元件71包括具有第一焦距的第一透镜,扩束光学元件73包括具有第二焦距的第二透镜,其中,第二焦距大于第一焦距。在一个实施例中,扩束光学元件73是扩束镜,用于形成强度均匀分布且光斑直径更大的第二斑点光束。
读出电路75包括TDC电路阵列和直方图电路752,用于绘制反映发射器中光源所发射脉冲波形的直方图,进一步地,也可以根据直方图计算飞行时间,最后将结果进行输出。TDC电路阵列包括多个TDC电路751,像素单元74中的每个像素741被配置为与一个TDC电路751连接用于接收和计算所述光子信号的时间间隔,并将所述时间间隔转化为时间码,则多个TDC电路同时对第二斑点光束中被像素采集的光子进行计算,TDC电路阵列输出的时间码经由直方图电路752处理,绘制出反映发射器中光源所发射脉冲波形的直方图,进一步地,也可以根据直方图计算第一斑点光束从发射到接收的飞行时间,最后将结果进行输出。
在一个实施例中,当发射器和采集器被配置为是共轴情形的距离测量系统时,像素741被配置为是合像素(具体设置如前文所述),每个合像素被配置为连接一个TDC电路。
在一个实施例中,当发射器和采集器被配置为是离轴情形的距离测量系统时,像素741被配置为是超像素(具体设置如前文所述),每个超像素被配置为连接一个TDC电路。
可以理解的是,通过设置扩束光学元件将第一斑点光束扩束后形成直径更大且光强度均匀的第二斑点光束入射到多个像素上,对于第一斑点光束是由距离采集器更近的第一目标反射回来的情形,通过扩束给像素采集光子提供了缓冲接收时间,即使反射光束中前部的光子更快的入射到像素阵列中,由于多个像素同时采集也能够采集到有效的光子从而在直方图中得到准确的脉冲峰值,计算出正确的距离值。
如图12所示,作为本发明另一实施例,还提出了一种采集器的制造方法,包括如下步骤:
提供接收光学元件,所述接收光学元件用于接收由目标反射回的第一斑点光束;所述第一 斑点光束与所述像素单元的一个像素相匹配;
提供扩束光学元件,所述扩束光学元件用于接收所述第一斑点光束并形成光束均匀扩散且光斑直径更大的第二斑点光束;
提供像素单元,所述像素单元包括由多个像素组成的二维像素阵列,用于接收所述第二斑点光束,所述第二斑点光束与多个像素相匹配。
在一些实施例中,像素是合像素,每个合像素包括至少两个SPAD;或,像素是超像素。
在一些实施例中,还包括如下步骤:提供微透镜阵列,该微透镜阵列包括多个微透镜,每个微透镜用于汇聚部分光信号至对应的像素上。
在一些实施例中,接收光学元件包括具有第一焦距的第一透镜,扩束光学元件包括具有第二焦距的第二透镜;其中,第二焦距大于第一焦距。
本申请实施例还提供一种控制装置,包括处理器和用于存储计算机程序的存储介质;其中,处理器用于执行所述计算机程序时至少执行如上所述的方法。
本申请实施例还提供一种存储介质,用于存储计算机程序,该计算机程序被执行时至少执行如上所述的方法。
本申请实施例还提供一种处理器,所述处理器执行计算机程序,至少执行如上所述的方法。
所述存储介质可以由任何类型的易失性或非易失性存储设备、或者它们的组合来实现。其中,非易失性存储器可以是只读存储器(ROM,Read Only Memory)、可编程只读存储器(PROM,Programmable Read-Only Memory)、可擦除可编程只读存储器(EPROM,ErasableProgrammable Read-Only Memory)、电可擦除可编程只读存储器(EEPROM,ElectricallyErasable Programmable Read-Only Memory)、磁性随机存取存储器(FRAM,FerromagneticRandom Access Memory)、快闪存储器(Flash Memory)、磁表面存储器、光盘、或只读光盘(CD-ROM,Compact Disc Read-Only Memory);磁表面存储器可以是磁盘存储器或磁带存储器。易失性存储器可以是随机存取存储器(RAM,Random Access Memory),其用作外部高速缓存。通过示例性但不是限制性说明,许多形式的RAM可用,例如静态随机存取存储器(SRAM,Static Random Access Memory)、同步静态随机存取存储器(SSRAM,SynchronousStatic Random Access Memory)、动态随机存取存储器(DRAM,Dynamic Random AccessMemory)、同步动态随机存取存储器(SDRAM,Synchronous Dynamic Random AccessMemory)、双倍数据速率同步动态随机存取存储器(DDRSDRAM,Double Data RateSynchronous Dynamic Random Access Memory)、增强型同步动态随机存取存储器(ESDRAM,Enhanced Synchronous Dynamic Random Access Memory)、同步连接动态随机存取存储器(SLDRAM,SyncLink Dynamic Random Access Memory)、直接内存总线随机存取存储器(DRRAM,Direct Rambus Random Access Memory)。本发明实施例描述的存储介质旨在包括但不限于这些和任意其它适合类型的存储器。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的,作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上内容是结合具体的优选实施方式对本发明所做的进一步详细说明,不能认定本发明的具体实施只局限于这些说明。对于本发明所属技术领域的技术人员来说,在不脱离本发明构思的前提下,还可以做出若干等同替代或明显变型,而且性能或用途相同,都应当视为属于本发明的保护范围。

Claims (10)

  1. 一种距离测量方法,其特征在于,包括如下步骤:
    S1:控制发射器朝向目标区域发射脉冲光束;
    S2:调控采集器的像素阵列具有至少两个不同的探测效率,分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像;
    S3:融合所述目标区域的深度图像得到所述目标区域融合的深度图像。
  2. 如权利要求1所述的距离测量方法,其特征在于,控制所述采集器的像素阵列具有至少一个所述探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。
  3. 如权利要求2所述的距离测量方法,其特征在于,调控所述采集器的像素阵列分别具有第一探测效率和第二探测效率,并分别以所述第一探测效率和所述第二探测效率接收所述目标区域反射的光束中的光子形成的光子信号。
  4. 如权利要求3所述的距离测量方法,其特征在于,调控所述采集器的像素阵列具有第一探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成第一光子信号,根据所述第一光子信号获得所述目标区域的第一深度图像;
    调控所述采集器的所述像素阵列具有第二探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成第二光子信号和第三光子信号,分别根据所述第二光子信号和第三光子信号获得所述目标区域的第二深度图像;
    所述第二探测效率大于所述第一探测效率。
  5. 如权利要求3所述的距离测量方法,其特征在于,调控所述采集器的像素阵列具有第一探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成的第四光子信号和第五光子信号,分别根据所述第四光子信号、第五光子信号获得所述目标区域的第四深度图像和第五深度图像;
    调控所述采集器的所述像素阵列具有第二探测效率,所述像素阵列接收经所述目标区域反射的光束中的光子形成第六光子信号,根据所述第六光子信号获得所述目标区域的第六深度图像;
    所述第一探测效率大于所述第二探测效率。
  6. 如权利要求1-5任一所述的距离测量方法,其特征在于,融合所述目标区域的深度图像得到所述目标区域融合的深度图像包括:
    依据待测目标距离的远近选取所述至少两个不同的探测效率中高低效率对应的所述深度 图像中的所述待测目标的深度值。
  7. 一种距离测量系统,其特征在于,包括:
    发射器,用于向目标区域发射脉冲光束;
    采集器,包括具有至少两个不同的探测效率的像素阵列,用于分别以所述至少两个不同的探测效率接收所述目标区域反射的光束中的光子形成的光子信号,并分别根据所述光子信号得到所述目标区域的深度图像;
    控制和处理电路,分别与所述发射器以及所述采集器连接,用于实现如权利要求1-6任一所述的方法控制。
  8. 如权利要求7所述的距离测量系统,其特征在于,所述像素阵列具有至少一个探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。
  9. 如权利要求8所述的距离测量系统,其特征在于,所述像素阵列具有第一探测效率和第二探测效率;
    所述第一探测效率大于所述第二探测效率,所述第一探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号;
    或,所述第二探测效率大于所述第一探测效率,所述第二探测效率用于采集所述目标区域中所有待测目标反射的光束中的光子形成的光子信号。
  10. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1-6任一所述方法的步骤。
PCT/CN2020/138372 2020-06-04 2020-12-22 一种距离测量方法、系统及计算机可读存储介质 WO2021244011A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010501328.7 2020-06-04
CN202010501328.7A CN111766596A (zh) 2020-06-04 2020-06-04 一种距离测量方法、系统及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021244011A1 true WO2021244011A1 (zh) 2021-12-09

Family

ID=72720184

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138372 WO2021244011A1 (zh) 2020-06-04 2020-12-22 一种距离测量方法、系统及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN111766596A (zh)
WO (1) WO2021244011A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023123150A1 (zh) * 2021-12-30 2023-07-06 华为技术有限公司 一种控制方法、激光雷达及终端设备
CN117607837A (zh) * 2024-01-09 2024-02-27 苏州识光芯科技术有限公司 传感器阵列、距离测量设备及方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111766596A (zh) * 2020-06-04 2020-10-13 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质
CN113325439B (zh) * 2021-05-17 2023-04-07 奥比中光科技集团股份有限公司 一种深度相机及深度计算方法
CN115883798A (zh) * 2021-09-29 2023-03-31 中强光电股份有限公司 焦距调整方法
CN115980763A (zh) * 2021-10-15 2023-04-18 华为技术有限公司 探测方法及装置
CN116203574B (zh) * 2023-05-04 2023-07-28 天津宜科自动化股份有限公司 一种检测物体距离的数据处理系统
CN117169893B (zh) * 2023-11-02 2024-01-26 崂山国家实验室 激光致声跨空水下目标探测系统及方法

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1716811A (zh) * 1999-12-15 2006-01-04 日本电信电话株式会社 自适应阵列天线收发装置
US20120145911A1 (en) * 2009-09-18 2012-06-14 Hamamatsu Photonics K.K. Radiation detecting device
CN108267777A (zh) * 2018-02-26 2018-07-10 奕瑞新材料科技(太仓)有限公司 面阵列像素探测器及中低能射线源的定向方法
CN108827477A (zh) * 2018-06-27 2018-11-16 中国人民解放军战略支援部队信息工程大学 一种单光子探测器探测效率自动校准装置及方法
CN110007289A (zh) * 2019-03-21 2019-07-12 杭州蓝芯科技有限公司 一种基于飞行时间深度相机的运动伪差减小方法
CN110609293A (zh) * 2019-09-19 2019-12-24 深圳奥锐达科技有限公司 一种基于飞行时间的距离探测系统和方法
CN111766596A (zh) * 2020-06-04 2020-10-13 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质
CN111796296A (zh) * 2020-06-04 2020-10-20 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质
CN111830530A (zh) * 2020-06-04 2020-10-27 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6657396B2 (ja) * 2016-06-02 2020-03-04 シャープ株式会社 光センサ、電子機器
EP3574344A2 (en) * 2017-01-25 2019-12-04 Apple Inc. Spad detector having modulated sensitivity
DE102018109544A1 (de) * 2018-04-20 2019-10-24 Sick Ag Optoelektronischer Sensor und Verfahren zur Abstandsbestimmung
CN109459760B (zh) * 2018-11-13 2020-06-23 西安理工大学 一种激光雷达观测数据处理方法及装置
CN111025318B (zh) * 2019-12-28 2022-05-27 奥比中光科技集团股份有限公司 一种深度测量装置及测量方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1716811A (zh) * 1999-12-15 2006-01-04 日本电信电话株式会社 自适应阵列天线收发装置
US20120145911A1 (en) * 2009-09-18 2012-06-14 Hamamatsu Photonics K.K. Radiation detecting device
CN108267777A (zh) * 2018-02-26 2018-07-10 奕瑞新材料科技(太仓)有限公司 面阵列像素探测器及中低能射线源的定向方法
CN108827477A (zh) * 2018-06-27 2018-11-16 中国人民解放军战略支援部队信息工程大学 一种单光子探测器探测效率自动校准装置及方法
CN110007289A (zh) * 2019-03-21 2019-07-12 杭州蓝芯科技有限公司 一种基于飞行时间深度相机的运动伪差减小方法
CN110609293A (zh) * 2019-09-19 2019-12-24 深圳奥锐达科技有限公司 一种基于飞行时间的距离探测系统和方法
CN111766596A (zh) * 2020-06-04 2020-10-13 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质
CN111796296A (zh) * 2020-06-04 2020-10-20 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质
CN111830530A (zh) * 2020-06-04 2020-10-27 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023123150A1 (zh) * 2021-12-30 2023-07-06 华为技术有限公司 一种控制方法、激光雷达及终端设备
CN117607837A (zh) * 2024-01-09 2024-02-27 苏州识光芯科技术有限公司 传感器阵列、距离测量设备及方法
CN117607837B (zh) * 2024-01-09 2024-04-16 苏州识光芯科技术有限公司 传感器阵列、距离测量设备及方法

Also Published As

Publication number Publication date
CN111766596A (zh) 2020-10-13

Similar Documents

Publication Publication Date Title
WO2021244011A1 (zh) 一种距离测量方法、系统及计算机可读存储介质
CN111830530B (zh) 一种距离测量方法、系统及计算机可读存储介质
CN110596722B (zh) 直方图可调的飞行时间距离测量系统及测量方法
WO2021248892A1 (zh) 一种距离测量系统及测量方法
WO2021072802A1 (zh) 一种距离测量系统及方法
CN110596721B (zh) 双重共享tdc电路的飞行时间距离测量系统及测量方法
CN111722241B (zh) 一种多线扫描距离测量系统、方法及电子设备
CN111796295B (zh) 一种采集器、采集器的制造方法及距离测量系统
KR20210113312A (ko) 펄스형 빔들의 희소 어레이를 사용하는 깊이 감지
WO2022021797A1 (zh) 一种距离测量系统及测量方法
CN108139483A (zh) 用于确定到对象的距离的系统和方法
CN109791207A (zh) 用于确定到对象的距离的系统和方法
CN112731425B (zh) 一种处理直方图的方法、距离测量系统及距离测量设备
CN110780312B (zh) 一种可调距离测量系统及方法
CN111025321B (zh) 一种可变焦的深度测量装置及测量方法
CN111812661A (zh) 一种距离测量方法及系统
WO2022011974A1 (zh) 一种距离测量系统、方法及计算机可读存储介质
WO2020221188A1 (zh) 基于同步ToF离散点云的3D成像装置及电子设备
CN111025319B (zh) 一种深度测量装置及测量方法
CN211148917U (zh) 一种距离测量系统
CN213091889U (zh) 一种距离测量系统
CN111796296A (zh) 一种距离测量方法、系统及计算机可读存储介质
CN111965659B (zh) 一种距离测量系统、方法及计算机可读存储介质
WO2023065589A1 (zh) 一种测距系统及测距方法
CN213903798U (zh) 一种具有双重发光模式的距离测量系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20939251

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20939251

Country of ref document: EP

Kind code of ref document: A1