EP4229438A1 - Multi-detector lidar systems and methods for mitigating range aliasing - Google Patents

Multi-detector lidar systems and methods for mitigating range aliasing

Info

Publication number
EP4229438A1
Authority
EP
European Patent Office
Prior art keywords
light
light detector
time
detector
view
Prior art date
Legal status
Pending
Application number
EP21881130.5A
Other languages
German (de)
French (fr)
Other versions
EP4229438A4 (en)
Inventor
Dane P. BENNINGTON
Ryan T. Davis
Michel H. J. LAVERNE
Current Assignee
LG Innotek Co Ltd
Original Assignee
LG Innotek Co Ltd
Priority date
Filing date
Publication date
Priority claimed from US17/070,414 external-priority patent/US20220113405A1/en
Priority claimed from US17/070,765 external-priority patent/US11822018B2/en
Application filed by LG Innotek Co Ltd
Publication of EP4229438A1
Publication of EP4229438A4

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/003 Bistatic lidar systems; Multistatic lidar systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
    • G01S 7/481 Constructional features, e.g. arrangements of optical elements
    • G01S 7/4816 Constructional features, e.g. arrangements of optical elements of receivers alone
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
    • G01S 7/48 Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
    • G01S 7/483 Details of pulse systems
    • G01S 7/486 Receivers
    • G01S 7/4861 Circuits for detection, sampling, integration or read-out
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 2420/00 Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W 2420/40 Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W 2420/408 Radar; Laser, e.g. lidar
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 1/00 Photometry, e.g. photographic exposure meter
    • G01J 1/42 Photometry, e.g. photographic exposure meter using electric radiation detectors
    • G01J 1/44 Electric circuits
    • G01J 2001/4446 Type of detector
    • G01J 2001/446 Photodiode
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target for measuring distance only
    • G01S 17/10 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S 17/18 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves wherein range gates are used

Definitions

  • LIDAR systems, for example bistatic LIDAR systems, may use a single receiver to detect light emitted by a single emitter
  • the emitter used to emit light into the environment and the receiver used to detect return light reflecting from objects in the environment are physically displaced relative to each other.
  • Such LIDAR configurations may inherently be associated with parallax problems because the light emitted by the emitter and received by the detector may not travel along parallel paths.
  • the emitter and receiver may need to be physically tilted towards each other (as opposed to aligning them at infinity distance).
  • this physical tilting may result in a loss of detection capabilities at long ranges from the LIDAR system.
  • some LIDAR systems may use a combination of a wide field of view and the aforementioned tilted assembly. This wide field of view, however, may result in another set of problems, including increasing the amount of background light detected by the receiver without increasing the amount of light emitted by the emitter, which may significantly decrease the signal to noise ratio of the receiver.
  • Range aliasing may arise when multiple light pulses are emitted by an emitter and are traversing the environment at the same time.
  • the LIDAR system may have difficulty ascertaining which emitted light pulse the detected return light originated from.
  • the emitter emits a first light pulse at a first time and then emits a second light pulse at a second time before return light from the first light pulse is detected by the receiver.
  • both the first light pulse and the second light pulse are traversing the environment simultaneously.
  • the receiver may detect a return light pulse a short amount of time after the second light pulse is emitted.
  • the LIDAR system may have difficulty determining whether the return light is indicative of a short range reflection based on the second light pulse or a long range reflection based on the first light pulse.
  • Cross-talk concerns may arise based on a similar scenario, but instead of detecting return light from a first light pulse emitted by the same emitter, the receiver from the LIDAR system may instead detect a second light pulse that originates from an emitter of another LIDAR system. In this scenario, the LIDAR system may mistake the detected second light pulse from the other LIDAR system as return light originating from the first light pulse. Similar to range aliasing, this may cause the LIDAR system to mistakenly believe that a short range object is reflecting light back towards the LIDAR system.
  • FIG. 1 depicts an example system, in accordance with one or more example embodiments of the disclosure.
  • FIGs. 2A-2B depict an example use case, in accordance with one or more example embodiments of the disclosure.
  • FIG. 3 depicts an example use case, in accordance with one or more example embodiments of the disclosure.
  • FIGs. 4A-4B depict example circuit configurations, in accordance with one or more example embodiments of the disclosure.
  • FIGs. 5A-5B depict example methods, in accordance with one or more example embodiments of the disclosure.
  • FIG. 6 depicts a schematic illustration of an example system architecture, in accordance with one or more example embodiments of the disclosure.
  • the LIDAR detectors may be referred to as “receivers,” “photodetectors,” “photodiodes,” or the like herein. (Additionally, reference may be made herein to a single “photodetector” or “photodiode,” but the LIDAR systems described herein may also similarly include any number of such detectors.) In some instances, the detectors may be photodiodes, which may be diodes that are capable of converting incoming light photons into an electrical signal (for example, an electrical current).
  • the detectors may be implemented in a LIDAR system that may emit light into an environment and may subsequently detect any light returning to the LIDAR system (for example, through the emitted light reflecting from an object in the environment) using the detectors.
  • the LIDAR system may be implemented in a vehicle (for example, autonomous vehicle, semi-autonomous vehicle, or any other type of vehicle), however the LIDAR system may be implemented in other contexts as well.
  • the detectors may also more specifically be Avalanche Photodiodes (APD), which may function in the same manner as a normal photodiode, but may operate with an internal gain as well.
  • an APD that receives the same number of incoming photons as a normal photodiode will produce a much greater resulting electrical signal through an “avalanching” of electrons, which allows the APD to be more sensitive to smaller numbers of incoming photons than a normal photodiode.
  • An APD may also operate in Geiger Mode, which may significantly increase the internal gain of the APD.
  • bistatic LIDAR systems may include emitters and receivers that are physically displaced relative to one another.
  • Such LIDAR configurations may inherently be associated with parallax problems because the light emitted by the emitter and received by the detector may not travel along parallel paths.
  • the emitter and receiver may need to be physically tilted towards each other (as opposed to aligning them in parallel).
  • this physical tilting may result in a loss of detection capabilities at long ranges from the LIDAR system.
  • some systems may use a combination of a wide field of view and the aforementioned tilted assembly.
  • bistatic LIDAR systems may have difficulty determining the source of certain return light based on range aliasing and/or cross talk problems.
  • a LIDAR system may be used that may include multiple photodetectors to detect return light that is based on emitted light from an emitter device (for example, laser diode) of the LIDAR system.
  • the photodetectors may be physically oriented (for example, pointed at different angles) so that their individual fields of view include varying distances from the LIDAR system (for example, as depicted in FIG. 1).
  • a first photodetector may be physically oriented so that its detection field of view encompasses a physical space within a short range from the LIDAR system (for example, a range including a distance 0.1m away from the LIDAR system to 0.195m away from the LIDAR system encompassing a field of view of 2.85 degrees).
  • a second photodetector may be physically oriented so that its detection field of view covers a physical space just beyond the physical space that the first photodetector covers (for example, a range including a distance 0.19m away from the LIDAR system to 4.1m away from the LIDAR system, also with a field of view of 2.85 degrees).
  • a third photodetector may be physically oriented so that its detection field of view covers a physical space just beyond the physical space that the second photodetector covers (for example, a range including a distance 4m away from the LIDAR system to an infinite distance away from the LIDAR system, with a field of view of 2.85 degrees). These example distances and fields of view are provided as arbitrary examples, and any other ranges and/or fields of view may similarly be applicable; the geometry sketch below illustrates how such bands might be tiled.
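  • As a loose illustration of the geometry implied above (not taken from the patent), the following Python sketch computes, for a hypothetical bistatic baseline, the tilt angle and angular extent a detector might need in order to cover a given range band; all numbers and names are illustrative assumptions.

```python
import math

def tilt_for_band(baseline_m: float, r_near_m: float, r_far_m: float):
    """Tilt (deg) that centers a detector on the range band [r_near, r_far],
    plus the angular extent (deg) needed to see the whole band, assuming the
    emitter and detector are separated by baseline_m in a bistatic layout."""
    theta_near = math.degrees(math.atan2(baseline_m, r_near_m))
    theta_far = math.degrees(math.atan2(baseline_m, r_far_m))
    return (theta_near + theta_far) / 2.0, theta_near - theta_far

# Hypothetical bands echoing the example above: 0.1-0.195m, 0.19-4.1m, 4m-"infinity".
for r_near, r_far in [(0.10, 0.195), (0.19, 4.1), (4.0, 1e6)]:
    tilt, extent = tilt_for_band(0.05, r_near, r_far)
    print(f"band {r_near:.2f}-{r_far:.2f} m: tilt {tilt:.2f} deg, extent {extent:.2f} deg")
```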
  • the total number of photodetectors included within the LIDAR system may be based on a number of factors. As a first example, the number may be based on a maximum detection range that is desired to be covered by the photodetectors.
  • the maximum detection range is a smaller distance from the LIDAR system, then a smaller number of photodetectors may be used, and if the maximum detection range is a larger distance from the LIDAR system, then a larger number of photodetectors may be used (it should be noted that while the term “maximum detection range” is used, this distance could theoretically extend out to infinity).
  • the number of photodetectors used may also be based on the size of the fields of view of individual photodetectors being used. A larger number of photodetectors may need to be employed if some or all of the individual photodetectors have a narrower field of view.
  • a smaller number of photodetectors may be employed if some or all of the individual photodetectors have a broader field of view.
  • these are merely examples of factors that may influence the number of photodetectors used in the LIDAR system, and the number may also depend on any number of additional factors.
  • the number of photodetectors used may depend on the amount of overlap in the field of view between the photodetectors. In some instances, the transition between one photodetector’s field of view to another photodetector’s field of view may involve no overlap in the respective fields of view (that is, the end of one photodetector’s field of view may exactly correspond with the beginning of another photodetector’s field of view).
  • the photodetectors may be configured in an array such that they are physically spaced apart from one another at equal distances. However, in some embodiments, the photodetector arrays may also be physically spaced apart in unequal intervals.
  • a logarithmically-based separation may be more beneficial for producing equal resolution down to minimal range. This may be because using a linearly-spaced detector array with the same FoV may result in ranges that are asymmetrical, with a first detector in the array responsible for a smaller amount of physical space and the last photodetector in the array being responsible for a much larger amount of physical space (for example, 4m to infinity).
  • with a different spacing model (for example, the logarithmic spacing described above), individual detector ranges may be optimized to be more equalized. This may ensure, for example, that multiple returns from any one area in the environment do not overload the receiver responsible for that area.
  • more closely-spaced detectors may be used in the far-field to further reduce the FoV as the distance covered increases and the receiver cone becomes very large, thus making the system less susceptible to noise in the far-field (a spacing sketch follows below).
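  • A minimal sketch of such logarithmic spacing (an assumption for illustration, not the patent's prescribed method): np.geomspace places band boundaries so that each detector covers a roughly constant ratio of far range to near range.

```python
import numpy as np

def log_spaced_boundaries(r_min_m: float, r_max_m: float, n_detectors: int):
    """Logarithmically spaced range-band boundaries, so near-field detectors
    cover short bands and far-field detectors cover progressively longer ones."""
    return np.geomspace(r_min_m, r_max_m, n_detectors + 1)

bounds = log_spaced_boundaries(0.1, 200.0, 6)  # hypothetical values
for i, (near, far) in enumerate(zip(bounds[:-1], bounds[1:])):
    print(f"detector {i}: {near:7.2f} m to {far:7.2f} m")
```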
  • the photodetectors may be selectively “turned on” and/or “turned off” (which may similarly be referred to as “activating” or “deactivating” a photodetector). “Turning on” a photodetector may refer to providing a bias voltage to the photodetector that satisfies a threshold voltage level. The bias voltage satisfying the threshold voltage level (for example, being at or above the threshold voltage level) may provide sufficient voltage to the photodetector to allow it to produce a level of output current based on light received by the photodetector.
  • the threshold voltage level being used and the corresponding output current being produced may depend on the type of photodetector being used and the desired mode of operation of the photodetector. For example, if the photodetector is an APD, the threshold voltage level may be set high enough so that the photodetector is capable of avalanching upon receipt of light as described above. Similarly, if it is desired for the APD to operate in Geiger Mode, the threshold voltage level may be set even higher than if the APD were desired to operate outside of the Geiger Mode region of operation. That is, the gain of the photodetector when this higher threshold voltage level is applied may be much larger than if the photodetector were operating as a normal Avalanche Photodiode.
  • the threshold bias voltage may be lower.
  • the threshold voltage level may be set below a threshold voltage level used to allow the APD to avalanche upon receipt of light. That is, the bias voltage applied to the APD may be set low enough to allow the APD to still produce an output current, but only in a linear mode of operation.
  • “turning off” a photodetector may refer to reducing the bias voltage provided to the photodetector to below the threshold voltage level.
  • “turning off” the photodetector may not necessarily mean that the photodetector is not able to detect return light. That is, the photodetector may still be able to detect return light while the bias voltage is below the threshold voltage level, but the output signal produced by the photodetector may be below a noise floor established for a signal processing portion of the LIDAR system.
  • a photodetector may be an Avalanche Photodiode.
  • the APD may still produce an output, but the output may be based on a linear mode of operation and the resulting output current may be much lower than if the APD were to avalanche upon receipt of a same number of photons.
  • the signal processing portion of the LIDAR system may have a noise floor configured to correspond to an output of the APD in linear mode, so that any outputs from the APD when operating with this reduced bias voltage may effectively be disregarded by the LIDAR system.
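  • The gating described above can be modeled very roughly as a gain switch; the toy model below (hypothetical numbers, not the patent's circuit) shows why a sub-threshold detector's output can fall under the noise floor even though the device still responds to light.

```python
def detector_output_a(photocurrent_a: float, bias_v: float, threshold_v: float,
                      avalanche_gain: float = 100.0, linear_gain: float = 1.0) -> float:
    """Toy model of "turning on" a photodetector: above the threshold bias the
    device avalanches (large internal gain); below it, the device still
    responds, but only with linear gain."""
    gain = avalanche_gain if bias_v >= threshold_v else linear_gain
    return photocurrent_a * gain

NOISE_FLOOR_A = 1e-6  # assumed noise floor of the signal processing chain
on = detector_output_a(5e-8, bias_v=180.0, threshold_v=150.0)
off = detector_output_a(5e-8, bias_v=100.0, threshold_v=150.0)
print(on > NOISE_FLOOR_A, off > NOISE_FLOOR_A)  # True False: the "off" output is disregarded
```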
  • selectively turning on and/or turning off the photodetectors may entail only having some of the photodetectors capable of detecting return light at a given time.
  • the timing at which the photodetectors may be turned on and/or turned off may depend on predetermined time intervals.
  • these predetermined time intervals may be based on an amount of time that has elapsed since a given light pulse was emitted from the emitter of the LIDAR system.
  • a first light pulse being emitted from the emitter may trigger a timing sequence.
  • the timing sequence may involve individual photodetectors being turned on when return light, corresponding to the emitted light pulse being reflected from an object, may be expected to be within a field of view of a particular photodetector.
  • a first photodetector may be pointed in a direction such that its field of view may include a range of physical space within a closest distance from the LIDAR system (Rx1 as depicted in FIG. 1 may provide a visual example of this first photodetector’s field of view).
  • This first photodetector may be the first photodetector to be turned on for a first time interval during which return light originating from the first light pulse may be expected to have been reflected from objects within the first field of view.
  • the field of view of this first photodetector may include a range from 0.19m away from the LIDAR system to 4.1m away from the LIDAR system (again, it should be noted that any specific examples of ranges and/or fields of view of any photodetectors described herein may be arbitrary, and any other ranges and/or fields of view may similarly be applicable). That is, if the emitted light were to be emitted from the LIDAR system and then reflect from an object in the environment within this range from the LIDAR system back towards the first photodetector, then the first photodetector pointing in that direction may be turned on and able to detect the return light.
  • a second photodetector may be turned on. Similar to the first time interval, the second time interval may correspond to a period of time during which any return light detected by the second photodetector is expected to have originated from an object in the field of view of the second photodetector. This process may continue with some or all of the remaining photodetectors in the array being turned on after successive time intervals of previous photodetectors in the array have passed. This process may be visualized, for example, in FIG. 2, as described below. Additionally, in some cases, once a time interval associated with a field of view of a particular photodetector has passed, that photodetector may be turned off as well.
  • the photodetector associated with the current time interval may be turned on at any given time. This may provide a number of benefits, such as serving to reduce extraneous data received by the other photodetectors during this time and/or reducing the power consumption of the LIDAR system, to name a few examples. In some embodiments, however, some or all of the photodetectors may remain on through multiple time intervals, or may remain on at all times. This may be beneficial because it may allow as much data from the environment to be captured as possible. Additionally, the time intervals may not necessarily need to be the same length of time. For example, a time interval associated with a given photodetector may depend on the size of the field of view of the photodetector.
  • a photodetector with a narrower field of view may be associated with a shorter time interval than a photodetector with a broader field of view. This situation may arise when photodetectors with varying sizes of field of view are employed. Additionally, in some cases, any other type of time interval may be used to determine when to turn on and/or turn off any of the photodetectors included within the LIDAR system (a timing sketch follows below).
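  • One way these time intervals could be derived (a sketch under the assumption that each interval is simply the round-trip time-of-flight window for the detector's range band; the bands themselves are hypothetical):

```python
C_M_PER_S = 299_792_458.0  # speed of light

def activation_window_s(r_near_m: float, r_far_m: float):
    """Round-trip time-of-flight window, measured from pulse emission, during
    which return light reflected from the band [r_near, r_far] could arrive."""
    return 2.0 * r_near_m / C_M_PER_S, 2.0 * r_far_m / C_M_PER_S

# Each detector would be biased on only inside its own window.
for i, (near, far) in enumerate([(0.1, 0.195), (0.19, 4.1), (4.0, 200.0)]):
    t_on, t_off = activation_window_s(near, far)
    print(f"detector {i}: on at {t_on * 1e9:8.2f} ns, off at {t_off * 1e9:8.2f} ns")
```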
  • the bias voltage provided to an individual photodetector during a given time interval may not necessarily be fixed. That is, the bias voltage that is provided to the photodetector may vary over time based on a certain function (“function” here may refer to a magnitude of bias voltage applied with respect to time; that is, if a plot of the bias voltage applied over time were to be created, the function would be visualized by the plot).
  • the function may not necessarily involve the bias voltage only either being at the threshold voltage level or at a value below the threshold voltage level (that is, if the bias voltage applied were plotted as a function, it may not necessarily look like a step function that rises to the threshold voltage level at the beginning of the time interval and drops below the threshold voltage level at the end of the time interval).
  • the function may instead be associated with a certain degree of change in the bias voltage throughout the time interval.
  • the function may represent a Gaussian function. Using this type of function to dictate the bias voltage being provided, for example, the bias voltage may increase over a first period of time, reach a peak bias voltage, and then may decrease back down to below the threshold voltage level over a second period of time.
  • the peak of the example Gaussian function may be maintained throughout the entire predetermined time interval associated with the particular photodetector.
  • the upward slope of the Gaussian function may begin at the beginning of the time interval and the peak of the Gaussian function may be reached at a certain amount of time after the beginning of the time interval. This may be useful if it is desirable for the photodetector to be at its most sensitive to return light at a particular portion of its field of view.
  • the upward slope of the example Gaussian function may begin during the time interval of a previous photodetector (and likewise the downward slope may extend into the time interval for a successive photodetector’s time interval).
  • the function may be any type of function other than a Gaussian function (for example, the function may even be a step function in some instances). That is, the bias voltage applied to a given photodetector may vary over time in any other number of ways. Additionally, different photodetectors may be associated with different types of functions. Different photodetectors may also be associated with the same type of function, but certain parameters of the function may vary. For example, the peak of a Gaussian function used for one photodetector may be greater than the peak of a Gaussian function used for a second photodetector. The functions used may also vary for different emitted light pulses from the LIDAR system.
  • when the LIDAR system emits a first light pulse, a first type of function may be used, but when the LIDAR system emits a subsequent light pulse, a second type of function may be used.
  • the above examples of how different functions may be used to control the bias voltage applied to a photodetector may merely be exemplary, and any other type(s) of functions may be applied to any combination of photodetectors based on any number of timing considerations.
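  • A minimal sketch of the Gaussian-shaped bias profile described above (the window center, spread, and voltages are illustrative assumptions, not values from the patent):

```python
import math

def gaussian_bias_v(t_s: float, t_peak_s: float, sigma_s: float,
                    v_floor: float, v_peak: float) -> float:
    """Bias voltage as a Gaussian of time: it ramps up from v_floor, peaks at
    v_peak at t_peak, then ramps back down, rather than stepping on and off."""
    return v_floor + (v_peak - v_floor) * math.exp(-((t_s - t_peak_s) ** 2) / (2.0 * sigma_s ** 2))

# Hypothetical window centred at 20 ns with a 5 ns spread, 100 V floor, 180 V peak.
for t_ns in (5, 10, 15, 20, 25, 30, 35):
    print(f"t = {t_ns:2d} ns -> bias = {gaussian_bias_v(t_ns * 1e-9, 20e-9, 5e-9, 100.0, 180.0):6.1f} V")
```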
  • the manner in which photodetectors are turned on and/or off may also be dynamic instead of being based on fixed time intervals that are used for successive light pulse emissions from the emitter.
  • the time intervals used to determine the bias voltage to be provided to different photodetectors may not consistently iterate in the same manner forever, but may rather change over time.
  • the time intervals used for some or all of the photodetectors may change after each successive emitted light pulse by the emitter.
  • the time intervals may change after a given number of emitted light pulses are emitted by the emitter.
  • the time intervals may also change within a period of time during which a single emitted light pulse may currently be traversing the environment.
  • a light pulse may be emitted, a first photodetector may be turned on for a first time interval, and then a second time interval for a second photodetector may be dynamically changed to a different time interval.
  • these time interval changes may be based on data that is received from the environment. That is, a closed loop feedback system may be implemented to vary the time intervals (it should be noted that this closed loop feedback system may similarly be used to dynamically adjust the types of functions used to dictate the bias voltage provided to different photodetectors).
  • the physical orientation of the photodetectors may also be either fixed and/or dynamically configurable. That is, individual photodetectors may include an actuation mechanism that may allow the direction in which a photodetector is pointed to be dynamically adjusted (and consequentially, the fields of view of the photodetectors may be dynamically adjustable).
  • the actuation mechanism may include microelectromechanical systems (MEMS), or any other type of actuation mechanism that may allow a photodetector to adjust the direction in which it points. This dynamic adjustment of the physical orientation of one or more of the photodetectors may be performed for any number of other reasons.
  • a first photodetector may be turned on and data may be captured by that first photodetector.
  • a direction in which a second photodetector is pointing may then be adjusted based on the data captured by the first photodetector.
  • multiple photodetectors may be adjusted to point in the same direction. This may be desirable because this may allow for more data to be captured from a particular portion of the environment than if a single photodetector were used to capture data from that portion of the environment.
  • one photodetector may serve as a failsafe for another photodetector (that is, one photodetector may serve to validate the data received by the other photodetector, or may serve to capture data from the portion of the environment if the other photodetector is unable to do so for a given period of time).
  • This may also be useful if the portion of the environment is determined to be an area of interest and thus it is desirable to obtain as much data from that portion of the environment as possible.
  • the circuitry used to capture the data being produced by individual photodetectors within the photodetector array may involve providing individual analog to digital converters (ADCs) for each photodetector.
  • An analog to digital converter may take an analog signal as an input and produce a corresponding digital output.
  • the current output by a photodetector may be an analog signal, so the analog to digital converter may take this signal as an input and convert it into a digital form that may be used by a signal processing portion of the LIDAR system.
  • a single ADC may be used for multiple of the photodetectors or all of the photodetectors.
  • the outputs of the individual photodetectors may be summed and provided as a single output to the ADC.
  • the summing may be performed using a summer circuit, which may comprise multiple circuits that are capacitively coupled together, or may also be in the form of an op-amp summer.
  • one or more attenuators may also be used to attenuate the outputs of one or more of the photodetectors. For example, if all of the detectors are turned on at all times, the attenuators may be used to attenuate outputs of certain detectors at certain times.
  • the attenuation may be performed to attenuate the outputs of all of the detectors except one detector.
  • the one detector may correspond to a field of view in which it is expected return light from an emitted light pulse would currently be located.
  • the attenuators may thus serve to reduce the amount of detector output noise that is provided to the ADC and signal processing components 410.
  • the attenuators may also be used in other ways as well. For example, all of the outputs of the detectors may be left unattenuated unless it is determined that it is desired to block the output of one or more detectors.
  • either of the aforementioned circuitry embodiments may be employed when some or all of the photodetectors remain turned on at all times, instead of being selectively turned on and off during a single light pulse emission. In some embodiments, however, the circuitry may also be employed when the photodetectors are selectively turned on and/or off as well. For example, in scenarios where a photodetector is selectively turned on as return light based on an emitted light pulse is expected to enter a field of view of the photodetector (that is, only one photodetector is turned on at a time), a single ADC may be employed (see the sketch below).
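  • A sketch of the shared-ADC front end described above (attenuation weights and currents are hypothetical): each detector output is scaled by its attenuator and the results are summed into the one analog signal the single ADC digitizes.

```python
def summed_adc_input_a(detector_currents_a, attenuation_weights):
    """Sum attenuated detector outputs into a single analog input for one ADC;
    a weight of 1.0 passes a detector's output, 0.0 blocks it entirely."""
    return sum(i * w for i, w in zip(detector_currents_a, attenuation_weights))

currents = [2e-7, 8e-6, 3e-7]  # all detectors on (hypothetical outputs)
weights = [0.0, 1.0, 0.0]      # pass only the detector whose band the return should be in
print(summed_adc_input_a(currents, weights))  # 8e-06: the other detectors' noise is blocked
```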
  • Range aliasing may be a phenomenon that may occur when two or more different light pulses emitted by an emitter of the LIDAR system are simultaneously traversing the environment. For example, the emitter of the LIDAR system may emit a light pulse at a first time, and a time at which the light pulse would be expected to return to the LIDAR system from a maximum detection range may elapse. The LIDAR system may then emit a second light pulse.
  • the LIDAR system may incorrectly identify the return light pulse as being a short range return of the second light pulse instead of a long range return from the first light pulse.
  • This may be problematic because light associated with multiple emitted light pulses from the LIDAR system may exist in the environment during any given time interval. This may result in the shot rate (the rate at which subsequent light pulses may be emitted by the emitter) of the LIDAR system being lowered to reduce the likelihood of numerous light pulses traversing the environment at the same time and resulting in this range aliasing problem.
  • a first light pulse may be emitted at a first time.
  • the first light pulse may traverse the environment, and successive individual photodetectors may be turned on and/or off at varying time intervals as the first light pulse traverses further away from the emitter.
  • the last photodetector may be kept turned on as the first light pulse continues to travel beyond the field of view of the last photodetector.
  • a second light pulse may then be emitted by the emitter.
  • a short time after the second light pulse is emitted (for example, when it is within the field of view of a first photodetector with a field of view including a closest distance range to the emitter), the last photodetector may detect return light.
  • the second light pulse would not have traveled far enough to be detected by the last photodetector as return light, so the detection by the last photodetector is more likely associated with the first light pulse. In this manner, it may be more easily ascertained which light pulse a detected light return may be associated with.
  • range aliasing concerns may be mitigated and/or eliminated by the use of multiple photodetectors
  • if the photodetectors are controlled to be turned on and/or off based on predetermined time intervals as described above, then return light that reflects from an object beyond the maximum detection range of the LIDAR system may never be detected (which may eliminate the possibility for range aliasing).
  • the reason for this may be exemplified as follows. In this example, a first light pulse is emitted.
  • the photodetectors then proceed through their sequence of turning on and/or off as described above until the final photodetector (the photodetector with the longest distance field of view from the LIDAR system) is turned off (when any expected return light would originate from beyond the maximum detection range). Then a second light pulse is emitted from the LIDAR system. If there were only one photodetector that was always turned on, then return light from the first light pulse may then return and be detected by the photodetector. However, if there are multiple photodetectors and each is only turned on for a given time interval, then it is more likely that return light that is detected by a given photodetector would have originated from the second light pulse.
  • the use of the multiple photodetectors may still allow for a determination to be made that the detected return light could potentially originate from the first light pulse. That is, if the return light from the first pulse returns and is detected by one of the turned-on photodetectors, but the second light pulse has still not reflected from an object back towards a photodetector, then return light from the second light pulse may be detected by a subsequent photodetector that is turned on.
  • the LIDAR system may thus be able to determine that at least one of the return light detections was based on range aliasing. If this is the case, the LIDAR system may simply disregard both of these two detected returns.
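  • A sketch of the disambiguation logic implied above (timings and bands are hypothetical): a detection is attributed to a pulse only if the range that attribution implies falls inside the band of the detector that fired.

```python
C_M_PER_S = 299_792_458.0

def consistent_with_pulse(t_detect_s: float, t_emit_s: float,
                          band_near_m: float, band_far_m: float) -> bool:
    """True if a detection at t_detect, attributed to the pulse emitted at
    t_emit, implies a reflection range inside the firing detector's band."""
    implied_range_m = C_M_PER_S * (t_detect_s - t_emit_s) / 2.0
    return band_near_m <= implied_range_m <= band_far_m

# The far detector (4-400 m band) fires 10 ns after the second pulse (t = 2 us).
t_pulse1, t_pulse2, t_hit = 0.0, 2e-6, 2e-6 + 10e-9
print(consistent_with_pulse(t_hit, t_pulse2, 4.0, 400.0))  # False: ~1.5 m, too soon for pulse 2
print(consistent_with_pulse(t_hit, t_pulse1, 4.0, 400.0))  # True: ~301 m return of pulse 1
```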
  • mitigating range aliasing as described above may have the added benefit of allowing a larger number of light pulses to be emitted within a time frame than if range aliasing were not mitigated using these systems and methods. This may be because it may not be as concerning to have more light pulses traversing the environment at the same time if it is more likely that the LIDAR system may be able to determine which return light is associated with which emitted light pulse. The ability to emit more light pulses in a given period of time may result in a larger amount of data being able to be collected about the environment at a faster rate.
  • the multiple photodetector bistatic LIDAR systems described herein may also have the further benefit of mitigating or eliminating cross talk between different LIDAR systems.
  • Cross talk may refer to a scenario that arises when an emitter from a first LIDAR system is pointed towards a second LIDAR system. If the first LIDAR system emits a light pulse, that light pulse may then travel towards the second LIDAR system and be detected by a photodetector of the second LIDAR system. Similar to range aliasing concerns, the second LIDAR system may have difficulty in discerning between its own emitted light pulses and a light pulse originating from another LIDAR system if only one photodetector is used. This cross talk scenario may be mitigated or eliminated in a similar manner in which range aliasing may be mitigated or eliminated.
  • the multiple photodetector bistatic LIDAR systems described herein may also have further benefits even beyond mitigating parallax, range aliasing, and/or cross talk concerns.
  • the use of the multiple photodetectors may mitigate a scenario where one particular photodetector may become saturated by a bright light. When this is the case, the photodetector may enter a recovery period during which it may not be able to detect any subsequent light. If only one photodetector is used in a LIDAR system, then the LIDAR system may become blind to return light during this recovery period. However, if multiple photodetectors are used, any of the other photodetectors may be used as backups for the photodetector currently in its recovery period.
  • FIG. 1 may depict a high-level schematic diagram of an example LIDAR system 101 that may implement the multiple detectors as described herein. A more detailed description of an example LIDAR system may be described with respect to FIG. 6 as well.
  • the LIDAR system 101 may include at least one or more emitter devices (for example, emitter device 102a, and/or any number of additional emitter devices) and one or more detector devices (for example, detector device 106a, detector device 106b, detector device 106c, and/or any number of additional detector devices).
  • the LIDAR system 101 may be incorporated onto a vehicle 101 and may be used at least to provide range determinations for the vehicle 101.
  • the vehicle 101 may traverse an environment 108 and may use the LIDAR system 101 to determine the relative distance of various objects (for example, the pedestrian 107a, the stop sign 107b, and/or the second vehicle 107c) in the environment 108 relative to the vehicle 101.
  • the one or more detector devices may be configured such that individual detector devices are physically oriented to point in different directions. Consequentially, different detectors may have different corresponding fields of view in the environment 108.
  • detector device 106c may be associated with field of view 110
  • detector device 106b may be associated with field of view 111
  • detector device 106a may be associated with field of view 112.
  • a field of view of an individual detector device may cover a particular range of distances from the emitter 102a.
  • the field of view 110 of detector device 106c is shown to cover a closest range of distances to the emitter 102a
  • field of view 111 of detector device 106b is shown to cover an intermediate range of distances to the emitter 102a
  • field of view 112 of detector device 106a is shown to cover a furthest range of distances from the emitter 102a.
  • the detector devices may cover a total field of view comprising the field of view 110, the field of view 111, and the field of view 112.
  • although the figure depicts three detector devices with three associated fields of view, the total field of view may similarly be split between any number of detector devices as well.
  • one field of view for one photodetector may begin at an exact location where a prior field of view for another photodetector ends, leaving no field of view blind spot between photodetectors.
  • the different fields of view may allow the different detector devices to detect return light 120 from the environment 108 that is based on the emitted light 105 from the emitter 102a. Due to the varying range of distances the different fields of view cover, each detector device may be configured to detect objects at varying distances from the vehicle 101.
  • the detector device 106c may be configured to detect return light 120 that is reflected from objects a shorter distance away from the vehicle 101 (for example, the vehicle 107c). As is described herein, the detector devices may also be selectively turned on and/or turned off based on time estimates as to when return light 120 would be within the field of view of the individual detector devices.
  • FIGs. 2A-2B depict an example use case 200, in accordance with one or more example embodiments of the disclosure.
  • the use case 200 may exemplify a manner in which the one or more detectors may be selectively turned on and/or turned off (as described above) during various time intervals subsequent to a light pulse being emitted from an emitter.
  • the use case 200 may depict an emitter 202, which may be the same as emitter 102a described with respect to FIG. 1, as well as any other emitter described herein.
  • the use case 200 may also depict one or more detectors, including, for example, a first detector 203, a second detector 204, and a third detector 205, which may be the same as detector device 106a, detector device 106b, and/or detector device 106c, as well as any other detectors described herein.
  • Each of the detectors may have an associated field of view.
  • the first detector 203 may be associated with field of view 206
  • the second detector 204 may be associated with field of view 207
  • the third detector 205 may be associated with field of view 208.
  • the fields of view (for example, field of view 206, field of view 207, and/or field of view 208) may include varying distance ranges from the emitter 202 (or, more broadly speaking, from the LIDAR system).
  • the fields of view may allow for the detectors to detect return light (for example, return light 211, return light 213, and/or return light 215) that is reflected from objects in the environment (for example, a tree 209, a vehicle 214, and/or a house 219).
  • field of view 206 associated with the first detector 203 may cover a distance range that may be closest to the emitter 202 (or the LIDAR system)
  • field of view 207 associated with the second detector 204 may cover a distance range that is intermediate from the emitter 202 (or the LIDAR system)
  • field of view 208 associated with the third detector 205 may cover a distance range that is furthest from the emitter 202 (or the LIDAR system).
  • the field of view 206, field of view 207, and field of view 208 may cover a total field of view of the LIDAR system.
  • an individual detector with its corresponding field of view may cover a portion of the total field of view of the LIDAR system, and all of the fields of view taken together may cover the desired field of view of the LIDAR system.
  • the desired total field of view may correspond to a maximum detection range of the LIDAR system, which may be predefined, or selected based on any number of factors.
  • while use case 200 may only depict three detectors with three corresponding fields of view, any other number of detectors and associated fields of view may be used to cover a detection range of the LIDAR system.
  • the detectors may be the same as the detectors depicted with respect to FIG. 1, as well as any other detectors described herein.
  • the emitter 202 and one or more detectors may be a part of an overall LIDAR system, such as a bistatic LIDAR system. That is, the use case 200 may depict a use case being implemented by the LIDAR system depicted in FIG. 1, for example.
  • Scene 201 may involve the emitter 202 of the LIDAR system emitting a light pulse 210 into the environment 210.
  • Scene 201 may also depict that, subsequent to the light pulse 210 being emitted by the emitter 202, the detector with a field of view 206 that covers a distance range closest to the emitter 202 (the first detector 203) may be turned on.
  • This first detector 203 may be turned on for a given first time interval, ΔT1.
  • the other detectors in the LIDAR system (for example, second detector 204 and third detector 205) may be turned off, which may be represented by their fields of view being depicted as dashed lines.
  • the first time interval, ΔT1, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 206 of the first detector 203.
  • scene 201 may depict a tree 209 that is within the field of view 206, and that reflects return light 211. This return light 211 may then be detected by the first detector 203.
  • the tree 209 (and associated return light 211) may be depicted in dashed lines as an example of what return light in the field of view 206 may look like.
  • the tree 209 may be considered to not actually exist in the environment so that the light pulse 210 may traverse the environment to further distances from the emitter 202.
  • Scene 215 may involve the light pulse 210 continuing to traverse the environment beyond the location of the tree 209 as shown in scene 201.
  • Scene 215 may take place during a second time interval, ΔT2.
  • the first detector 203 may be turned off and the second detector 204 may be turned on. That is, the second detector 204 may now be the only detector that is currently turned on.
  • the second time interval, ΔT2, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 207 of the second detector 204.
  • the second detector 204 may include a field of view 207 including a distance range that begins starting with the end of the distance range covered by the field of view 206 of the first detector 203. In some cases, although not depicted in the figure, there may also be some overlap between the field of view 207 and/or the field of view 206.
  • scene 215 may thus depict a vehicle 214 that is within the field of view 207, and that reflects return light 213. This return light 213 may then be detected by the second detector 204.
  • the vehicle 214 (and associated return light 213) may be depicted in dashed lines as an example of what return light in the field of view 207 may look like.
  • the vehicle 214 may be considered to not actually exist in the environment so that the light pulse 210 may traverse the environment to greater distances as depicted in scene 230 described below.
  • the use case may proceed to scene 230.
  • Scene 230 may involve the light pulse 210 continuing to traverse the environment beyond the location of the vehicle 214 as shown in scene 215.
  • Scene 230 may take place during a third time interval, ΔT3.
  • the second detector 204 may be turned off and the third detector 205 may be turned on. That is, the third detector 205 may now be the only detector that is currently turned on.
  • the third time interval, ΔT3, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 208 of the third detector 205.
  • the third detector 205 may include a field of view 208 including a distance range that begins starting with the end of the distance range covered by the field of view 207 of the second detector 204.
  • scene 230 may depict a home 217 that is within the field of view 208, and that reflects return light 218. This return light 218 may then be detected by the third detector 205.
  • the use case 200 may thus depict a progression of an example of how various detectors may be selectively turned on and/or turned off over time as an emitted light pulse traverses further into the environment 210.
  • this use case 200 should not be taken as limiting, and the detectors may be operated in any other manner as may be described herein.
  • all of the detectors may be turned on at all times (instead of individual detectors being selectively turned on and/or turned off), more than one detector may be turned on at any given time, and/or any number of detectors may be turned on in any other combination for any other length of times.
  • the time intervals during which the various detectors are turned on and/or off may vary.
  • the time interval during which one detector is turned on may be shorter and/or longer than the time interval during which another detector is turned on.
  • the timing may depend on one or more types of functions used to determine the bias voltage to apply to a given photodetector over time.
  • the bias voltage applied to the different photodetectors may be represented as a time-shifted Gaussian function. That is, within a given time interval, the bias voltage applied to the detector 203 may ramp up to a peak bias voltage value, and then ramp back down as the end of the time interval is approached.
  • the bias voltage may then begin to ramp up for the detector 204 using a similar Gaussian function, and so on.
  • although the fields of view (for example, field of view 206, field of view 207, and/or field of view 208) are shown as being fixed in the use case 200, any of these fields of view may also be adjustable. That is, a field of view may be broadened or narrowed, or a direction of a field of view may be altered. The field of view may also be altered in any other manner, such as introducing optical systems that may alter the direction of the field of view. As described herein, the field of view may be altered for any number of reasons, such as to focus multiple detectors towards a similar location within the environment.
  • FIG. 3 depicts an example use case 300, in accordance with one or more example embodiments of the disclosure.
  • the use case 300 may depict one example of how range aliasing concerns may be mitigated and/or eliminated by the multi-detector systems and methods described herein.
  • Use case 300 may depict two parallel timelines.
  • Scenes 301 and 310 may comprise one timeline and scenes 320 and 340 may comprise a second, parallel timeline.
  • Scenes 301 and 310 may be included to depict how range aliasing issues may arise in single detector LIDAR systems, and scenes 320 and 340 may depict how these issues may be ameliorated by using a multi-detector system as described herein.
  • scenes 301 and 310 may depict a LIDAR system that may only include one emitter 302 and one detector 303.
  • the detector 303 may be capable of detecting return light with a field of view 304 that may cover up to a maximum detection distance 305.
  • scene 301 may depict the emitter 302 emitting a first light pulse 306 into the environment.
  • the first light pulse 306 is shown as traversing the environment and eventually moving past the maximum detection distance 305 of the detector 303. That is, the first light pulse 306 in scene 301 may not yet have reflected from an object in the environment as return light and been detected within the field of view 304 of the detector 303. With this being the case, a potential range aliasing problem may arise as depicted in scene 310.
  • the emitter 302 is shown as emitting a second light pulse 307 into the environment.
  • the first light pulse may finally reflect from an object (for example, tree 308) and return towards the field of view 304 of the detector 303 as return light 309.
  • the return light 309 may then be detected by the detector 303 at point 310, which may correspond to a point when the return light 309 first enters the field of view of the detector 303 (which, for exemplification purposes, may take place at a first time).
  • the second light pulse may also be currently within the environment at point 311.
  • the second light pulse 307 may have only traveled a short distance from the emitter 302 by the time the return light 309 from the first light pulse 306 is detected by the detector 303.
  • the back-end signal processing components of the LIDAR system may have difficulty in determining whether the return light that was detected by the detector 303 is a short range detection based on the second light pulse 307, or a long range detection based on the first light pulse 306. This may be because distance determinations based on the emitted light pulses from the emitter 302 may be made based on time of flight (ToF) determinations, for example.
  • the LIDAR system may ascertain when a light pulse is emitted, and may then compare the emission time to a time at which return light is detected by the detector. The resulting difference in time may then be used to determine the distance at which the emitted light was reflected back to the LIDAR system.
  • the LIDAR system may not be able to discern between two light pulses at different distances within the field of view 304 of the single detector 303 since both light pulses could theoretically be the source of the return light being detected by the detector 303.
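  • The ambiguity can be made concrete with the ToF arithmetic (hypothetical timings): both hypotheses below are consistent with the same detection time when only one always-on detector is available.

```python
C_M_PER_S = 299_792_458.0

# A return is seen 100 ns after the second pulse; the pulses were 2 us apart.
t_after_second_s = 100e-9
pulse_spacing_s = 2e-6
short_range_m = C_M_PER_S * t_after_second_s / 2.0                     # ~15 m (pulse-2 hypothesis)
long_range_m = C_M_PER_S * (t_after_second_s + pulse_spacing_s) / 2.0  # ~315 m (pulse-1 hypothesis)
print(f"pulse-2 hypothesis: {short_range_m:6.1f} m; pulse-1 hypothesis: {long_range_m:6.1f} m")
```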
  • scene 320 and scene 340 may depict an example manner in which the multi-detector systems described herein may mitigate or eliminate the range aliasing concerns exemplified in scene 301 and scene 310.
  • Scene 320 and scene 340 may thus depict a multi-detector system that may include an emitter 302 and one or more detectors (for example, first detector 321, second detector 322, and/or third detector 323, which may be the same as the first detector 203, the second detector 204, and/or the third detector 205, as well as any other detectors described herein).
  • a detector may be associated with a field of view covering a particular distance range from the emitter 302.
  • the first detector 321 may be shown as being associated with field of view 324 that may include a distance range closest to the emitter 302.
  • the second detector 322 may be shown as being associated with field of view 325 that may include a distance range beyond the distance range covered by the field of view 324 of the first detector 321.
  • the third detector 323 may be shown as being associated with field of view 326 that may include a distance range beyond the distance range covered by the field of view 325 of the second detector 322.
  • the combination of the field of view 324, field of view 325, and field of view 326 may cover the same maximum detection range 305 from the emitter 302.
  • scene 320 may begin, similar to scene 301, with the emitter 302 emitting a first light pulse 306 into the environment.
  • the first light pulse 306 is shown as traversing the environment and eventually moving past the maximum detection distance 305 of the first detector 321, the second detector 322, and the third detector 323. That is, the first light pulse 306 may not have reflected from an object in the environment and been detected by the first detector 321, the second detector 322, or the third detector 323.
  • the emitter 302 may then emit a second light pulse 307.
  • the difference between scene 310 with the single detector 303 and the scene 340 with the multiple detectors may be that the multiple detectors may be used to provide more data about detected return light than the single detector 303 may be able to.
  • individual detectors may be able to be selectively turned on and/or off as the light pulse traverses the environment.
  • the first detector 321, the second detector 322, and the third detector 323 may be selectively turned on and then off as the first light pulse 306 traverses further away from the emitter 302.
  • the third detector 323 may be kept on even as the first light pulse 306 travels beyond the maximum detection range 305 of the detectors.
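  • purely as an illustration of this scheduling (detector names and range bands below are hypothetical; the far band's 200 m cap stands in for the maximum detection range 305), each detector's distance band may be converted into the window of round-trip times during which it would be turned on:

```python
# Hypothetical activation scheduler: each detector's distance band maps to the
# window of round-trip times during which its return light could arrive.
C_M_PER_S = 299_792_458.0

def activation_window_s(r_min_m: float, r_max_m: float) -> tuple[float, float]:
    """(turn_on, turn_off) offsets, in seconds after pulse emission."""
    return 2.0 * r_min_m / C_M_PER_S, 2.0 * r_max_m / C_M_PER_S

bands = {"detector_321": (0.1, 0.19),
         "detector_322": (0.19, 4.1),
         "detector_323": (4.1, 200.0)}
for name, (r_min, r_max) in bands.items():
    on_s, off_s = activation_window_s(r_min, r_max)
    print(f"{name}: on {on_s * 1e9:8.2f} ns, off {off_s * 1e9:8.2f} ns")
```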
  • scene 340 shows the same second light pulse 307 being emitted from the emitter 302 before the first light pulse 306 reflects from an object and is detected by one of the detectors.
  • the difference here is that the LIDAR system now has two separate detectors monitoring two different distances from the emitter 302. That is, now both the first detector 321 with a known field of view 324 closer to the emitter 302 and the third detector 323 with a known field of view 326 that is further away from the emitter 302 are turned on.
  • third detector 323 may produce an output indicating that it detected return light at the same first time described in scenes 301 and 310.
  • this first time may correspond to a time at which return light originating from the second light pulse 307 may be within the field of view 324 of the first detector 321 (thus, the first detector 321 is shown as being on).
  • the system is then able to discern that the return light detected by the third detector 323 is associated with the first light pulse 306 instead of the second light pulse 307.
  • a LIDAR system may be able to track multiple light pulses traversing the environment simultaneously with reduced concern that range aliasing will cause difficulty in discerning between returns from the multiple emitted light pulses. This may allow the shot rate of the LIDAR system to be doubled (or increased even further), as the attribution sketch below illustrates.
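  • the attribution step may be sketched as follows (a hedged example; the detector band and emission times are hypothetical values, not figures from the disclosure):

```python
# Illustrative attribution check: a detection is matched to whichever in-flight
# pulse would place its reflection inside the reporting detector's range band.
C_M_PER_S = 299_792_458.0

def consistent_pulses(detect_time_s, band_m, emit_times_s):
    """Emit times whose implied reflection distance falls inside the band."""
    r_min, r_max = band_m
    return [t for t in emit_times_s
            if r_min <= C_M_PER_S * (detect_time_s - t) / 2.0 < r_max]

# Far detector (hypothetical 4.1 m to 200 m band) fires 10 ns after a second
# pulse was emitted; only the first pulse is geometrically consistent (~150 m).
print(consistent_pulses(1.00e-6, (4.1, 200.0), [0.0, 0.99e-6]))  # -> [0.0]
```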
  • the use case 300 may depict only one example of how a multi-detector system may be used to mitigate or eliminate range aliasing concerns.
  • range aliasing concerns may be mitigated and/or eliminated by the use of multiple photodetectors. If the photodetectors are controlled to be turned on and/or off based on predetermined time intervals as described above, then return light that reflects from an object beyond the maximum detection range of the LIDAR system may never be detected (which may eliminate the possibility for range aliasing). The reason for this may be exemplified as follows. In this example, a first light pulse is emitted.
  • the photodetectors then proceed through their sequence of turning on and/or off as described above until the final photodetector (the photodetector with the longest distance field of view from the LIDAR system) is turned off (when any expected return light would originate from beyond the maximum detection range). Then a second light pulse is emitted from the LIDAR system. If there were only one photodetector that was always turned on, then return light from the first light pulse may then return and be detected by the photodetector. However, if there are multiple photodetectors and each is only turned on for a given time interval, then it is more likely that return light that is detected by a given photodetector would have originated from the second light pulse.
  • the use of the multiple photodetectors may still allow for a determination to be made that the detected return light could potentially originate from the first light pulse. That is, if the return light from the first pulse returns and is detected by one of the turned-on photodetectors, but the second light pulse has still not reflected from an object back towards a photodetector, then return light from the second light pulse may be detected by a subsequent photodetector that is turned on.
  • the LIDAR system may thus be able to determine that at least one of the return light detections was based on range aliasing. If this is the case, the LIDAR system may simply disregard both of these two detected returns.
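  • a minimal sketch of this discard rule, assuming returns have already been grouped per emitted pulse (the grouping representation is an assumption):

```python
# Sketch of the conservative discard rule described above: if one emitted
# pulse appears to have produced returns in more than one detector window,
# at least one return is aliased, so the system disregards all of them.
def resolve_returns(returns_for_pulse: list) -> list:
    return returns_for_pulse if len(returns_for_pulse) <= 1 else []

print(resolve_returns(["far_detector_hit"]))                       # kept
print(resolve_returns(["near_detector_hit", "far_detector_hit"]))  # -> []
```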
  • FIGS. 4A-4B depict example circuit configurations, in accordance with one or more example embodiments of the disclosure.
  • the circuit configurations depicted in FIGs. 4A-4B may represent a back-end circuit connected to the outputs of the detectors. This back-end circuit may be used, for example, to pre-process the outputs of the detectors for any signal processing components of the LIDAR system (for example, systems that may make computing determinations based on the data received from the detectors).
  • FIG. 4A depicts a first circuit configuration 400.
  • individual detectors may be associated with their own individual analog to digital converters (ADCs) (for example, detector 403 may be associated with analog to digital converter 406, detector 404 may be associated with analog to digital converter 407, and detector 405 may be associated with analog to digital converter 408).
  • an ADC may be used to convert an analog signal to a digital signal. That is, the ADC may be capable of receiving the analog output of a detector as an input and converting that analog signal into a digital signal. This digital signal may then be used by one or more signal processing components 410 of the LIDAR system.
  • a detector may be configured to receive one or more photons as an input and provide current as an output. This current output may be in analog form, and the ADC may convert the analog current into a digital current value for use by the one or more signal processing components 410.
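  • as a toy illustration of this conversion (the full-scale current and bit depth below are assumptions, not values from the disclosure):

```python
# Toy ADC model for the per-detector converters in FIG. 4A: quantize an
# analog current sample to an n-bit code.
def adc_code(current_a: float, full_scale_a: float = 1e-3, bits: int = 12) -> int:
    code = round(current_a / full_scale_a * (2 ** bits - 1))
    return max(0, min(code, 2 ** bits - 1))  # clamp to the converter's range

print(adc_code(0.25e-3))  # a quarter of full scale -> 1024 of 4095
```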
  • FIG. 4B depicts a second circuit configuration 420. While the first circuit configuration 400 shown in FIG. 4A included multiple ADCs (for example, one ADC for each detector), the second circuit configuration 420 may only include one ADC for all of the detectors (or alternatively may include more than one ADC but multiple detectors may share a single ADC instead of each individual detector being associated with its own ADC). That is, the second circuit configuration 420 may include the outputs of the detectors being provided as inputs to a single ADC. The ADC may then provide a digital output to the one or more signal processing components 410 as was the case in the first circuit configuration 400. However, the second circuit configuration 420 may also include one or more additional components between the detectors and the ADC.
  • the second circuit configuration 420 may include a summer subcircuit 422.
  • the summer subcircuit 422 may receive the outputs from the one or more detectors and combine them into a single output.
  • one or more attenuators (for example, one attenuator for each detector output) may also be placed before the summer subcircuit 422.
  • An attenuator may be used to attenuate the output of a detector, which may be useful in a number of scenarios. For example, if all of the detectors are turned on at all times, the attenuators may be used to attenuate outputs of certain detectors at certain times. For example, the attenuation may be performed to attenuate the outputs of all of the detectors except one detector.
  • the one detector may correspond to a field of view in which it is expected return light from an emitted light pulse would currently be located.
  • the attenuators may thus serve to reduce the amount of detector output noise that is provided to the ADC and the signal processing components 410.
  • the attenuators may also be used in other ways as well. For example, all of the outputs of the detectors may be left unattenuated unless it is determined that it is desired to block the output of one or more detectors.
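  • an illustrative model of this attenuate-then-sum front end (the gains and sample values are arbitrary examples):

```python
# Illustrative model of the FIG. 4B front end: per-detector attenuators feed a
# summer, and the single combined signal would then drive one shared ADC.
def summed_output(detector_currents, attenuator_gains):
    """attenuator_gains in [0, 1]: ~1.0 passes a detector, ~0.0 blocks it."""
    return sum(i * g for i, g in zip(detector_currents, attenuator_gains))

# Pass only the detector whose window is currently active to the shared ADC.
print(summed_output([0.12, 0.90, 0.05], [0.0, 1.0, 0.0]))  # -> 0.9
```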
  • FIGs. 5A-5B illustrate example methods 500A and 500B in accordance with one or more example embodiments of the disclosure.
  • the method 500A may include emitting, by a light emitter of a LIDAR system, a first light pulse.
  • Block 504a of the method 500A may include activating a first light detector of the LIDAR system at a first time, the first time corresponding to a time when return light corresponding to the first light pulse would be within a first field of view of the first light detector.
  • Block 506a of the method 500A may include activating a second light detector of the LIDAR system at a second time, the second time corresponding to a time when return light corresponding to the first light pulse would be within a second field of view of the second light detector, wherein the first light detector is configured to include the first field of view, the first field of view being associated with a first range from the light emitter, and wherein the second light detector is configured to include the second field of view, the second field of view being associated with a second range from the light emitter.
  • the method 500B may include emitting, by a light emitter, a first light pulse at a first time.
  • Block 504b of the method 500B may include activating a first light detector and a second light detector, wherein a field of view of the first light detector includes a range closer to the light emitter than the field of view of the second light detector.
  • Block 506b of the method 500B may include emitting, by the light emitter, a second light pulse at a second time.
  • Block 508b of the method 500B may include receiving return light by the second light detector at a third time.
  • Block 510b of the method 500B may include determining, based on the return light being detected by the second light detector, that the return light is based on the first light pulse.
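  • purely as a hedged end-to-end illustration, the blocks of the method 500B may be strung together as below; the lidar object and its methods are hypothetical stand-ins, not an API from the disclosure:

```python
# Hedged sketch of method 500B; `lidar` and its methods are hypothetical
# stand-ins for the blocks described above.
def method_500b(lidar):
    first_emit_time = lidar.emit_pulse()       # first light pulse, first time
    lidar.activate("near_detector")            # near field of view
    lidar.activate("far_detector")             # farther field of view
    second_emit_time = lidar.emit_pulse()      # second light pulse, second time
    detector, third_time = lidar.wait_for_return()  # return light, third time
    # A hit on the far detector at the third time, when the second pulse's
    # return could only be in the near band, is attributed to the first pulse.
    source = first_emit_time if detector == "far_detector" else second_emit_time
    return source, third_time
```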
  • the photodetectors may be selectively “turned on” and/or “turned off” (which may similarly be referred to as “activating” or “deactivating” a photodetector). “Turning on” a photodetector may refer to providing a bias voltage to the photodetector that satisfies a threshold voltage level. The bias voltage satisfying the threshold voltage level (for example, being at or above the threshold voltage level) may provide sufficient voltage to the photodetector to allow it to produce a level of output current based on light received by the photodetector.
  • the output threshold voltage level being used and the corresponding output current being produced may depend on the type of photodetector being used and the desired mode of the operation of the photodetector. For example, if the photodetector is an APD, the threshold voltage level may be set high enough so that the photodetector is capable of avalanching upon receipt of light as described above. Similarly, if it is desired for the APD to operate in Geiger Mode, the threshold voltage level may be set even higher than if the APD were desired to operate outside of the Geiger Mode region of operation. That is, the gain of the photodetector when this higher threshold voltage level is applied may be much larger than if the photodetector were operating as a normal Avalanche Photodiode.
  • if the photodetector is not an APD and operates in a linear mode of operation, the threshold bias voltage may be lower.
  • the threshold voltage level may be set below a threshold voltage level used to allow the APD to avalanche upon receipt of light. That is, the bias voltage applied to the APD may be set low enough to allow the APD to still produce an output current, but only in a linear mode of operation.
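  • to illustrate how a bias voltage may select among these operating modes, the sketch below classifies an applied bias against hypothetical thresholds (real breakdown and avalanche voltages are device-specific and not taken from the disclosure):

```python
# Hypothetical thresholds; actual values depend on the specific APD part.
V_BREAKDOWN = 250.0   # above this, the APD operates in Geiger Mode
V_AVALANCHE = 150.0   # above this (but below breakdown), normal avalanche gain

def apd_mode(bias_v: float) -> str:
    if bias_v >= V_BREAKDOWN:
        return "geiger"      # very high internal gain
    if bias_v >= V_AVALANCHE:
        return "avalanche"   # avalanches upon receipt of light
    return "linear"          # still conducts, but with much lower output

print(apd_mode(260.0), apd_mode(180.0), apd_mode(80.0))
```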
  • “turning off” a photodetector may refer to reducing the bias voltage provided to the photodetector to below the threshold voltage level.
  • “turning off” the photodetector may not necessarily mean that the photodetector is not able to detect return light. That is, the photodetector may still be able to detect return light while the bias voltage is below the threshold voltage level, but the output signal produced by the photodetector may be below a noise floor established for a signal processing portion of the LIDAR system.
  • a photodetector may be an Avalanche Photodiode.
  • if a lower bias voltage is applied, the APD may still produce an output, but the output may be based on a linear mode of operation and the resulting output current may be much lower than if the APD were to avalanche upon receipt of a same number of photons.
  • the signal processing portion of the LIDAR system may have a noise floor configured to correspond to an output of the APD in linear mode, so that any outputs from the APD when operating with this reduced bias voltage may effectively be disregarded by the LIDAR system.
  • selectively turning on and/or turning off the photodetectors may entail only having some of the photodetectors capable of detecting return light at a given time.
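  • a minimal sketch of this noise-floor gating (the floor value is an assumed figure, not from the disclosure):

```python
# Sketch of the noise-floor gating described above.
NOISE_FLOOR_A = 1e-6  # signal-processing noise floor, in amperes (assumed)

def passes_noise_floor(output_current_a: float) -> bool:
    """Linear-mode output from an under-biased APD stays below the floor, so
    the signal processing portion effectively disregards it."""
    return output_current_a > NOISE_FLOOR_A

print(passes_noise_floor(5e-5))  # avalanche-scale output -> True
print(passes_noise_floor(2e-7))  # linear-mode trickle    -> False
```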
  • FIG. 6 illustrates an example LIDAR system 600, in accordance with one or more embodiments of this disclosure.
  • the LIDAR system 600 may be representative of any number of elements described herein, such as the LIDAR system 101 described with respect to FIG. 1, as well as any other LIDAR systems described herein.
  • the LIDAR system 600 may include at least an emitter portion 601, a detector portion 605, and a computing portion 613.
  • the emitter portion 601 may include at least one or more emitter(s) 602 (for simplicity, reference may be made hereinafter to “an emitter,” but multiple emitters could be equally as applicable) and/or one or more optical element(s) 604.
  • An emitter 602 may be a device that is capable of emitting light into the environment. Once the light is in the environment, it may travel towards an object 612. The light may then reflect from the object and return towards the LIDAR system 600 and be detected by the detector portion 605 of the LIDAR system 600 as may be described below.
  • the emitter 602 may be a laser diode as described above.
  • the emitter 602 may be capable of emitting light in a continuous waveform or as a series of pulses.
  • An optical element 604 may be an element that may be used to alter the light emitted from the emitter 602 before it enters the environment.
  • the optical element 604 may be a lens, a collimator, or a waveplate.
  • the lens may be used to focus the emitted light.
  • the collimator may be used to collimate the emitted light. That is, the collimator may be used to reduce the divergence of the emitted light.
  • the waveplate may be used to alter the polarization state of the emitted light. Any number or combination of different types of optical elements 604, including optical elements not listed herein, may be used in the LIDAR system 600.
  • the detector portion 605 may include at least one or more detector(s) 606 (for simplicity, reference may be made hereinafter to “a detector,” but multiple detectors could be equally as applicable) and/or one or more optical elements 608.
  • the detector may be a device that is capable of detecting return light from the environment (for example light that has been emitted by the LIDAR system 600 and reflected by an object 612).
  • the detectors may be photodiodes.
  • the photodiodes may specifically include Avalanche Photodiodes (APDs), which in some instances may operate in Geiger Mode.
  • any other type of detector may be used, such as light emitting diodes (LED), vertical cavity surface emitting lasers (VCSEL), organic light emitting diodes (OLED), polymer light emitting diodes (PLED), light emitting polymers (LEP), liquid crystal displays (LCD), microelectromechanical systems (MEMS), and/or any other device configured to selectively transmit, reflect, and/or emit light to provide the plurality of emitted light beams and/or pulses.
  • the detectors of the array may take various forms.
  • the detectors may take the form of photodiodes, avalanche photodiodes (e.g., Geiger mode and/or linear mode avalanche photodiodes), phototransistors, cameras, active pixel sensors (APS), charge coupled devices (CCD), cryogenic detectors, and/or any other sensor of light configured to receive focused light having wavelengths in the wavelength range of the emitted light.
  • the functionality of the detector 606 in capturing return light from the environment may serve to allow the LIDAR system 600 to ascertain information about the object 612 in the environment. That is, the LIDAR system 600 may be able to determine information such as the distance of the object from the LIDAR system 600 and the shape and/or size of the object 612, among other information.
  • the optical element 608 may be an element that is used to alter the return light traveling towards the detector 606.
  • the optical element 608 may be a lens, a waveplate, or a filter such as a bandpass filter.
  • the lens may be used to focus return light on the detector 606.
  • the waveplate may be used to alter the polarization state of the return light.
  • the filter may be used to only allow certain wavelengths of light to reach the detector (for example a wavelength of light emitted by the emitter 602). Any number or combination of different types of optical elements 608, including optical elements not listed herein, may be used in the LIDAR system 600.
  • the computing portion may include one or more processor(s) 614 and memory 616.
  • the processor 614 may execute instructions that are stored in one or more memory devices (referred to as memory 616).
  • the instructions can be, for instance, instructions for implementing functionality described as being carried out by one or more modules and systems disclosed above or instructions for implementing one or more of the methods disclosed above.
  • the processor(s) 614 can be embodied in, for example, a CPU, multiple CPUs, a GPU, multiple GPUs, a TPU, multiple TPUs, a multi-core processor, a combination thereof, and the like.
  • the processor(s) 614 can be arranged in a single processing device.
  • the processor(s) 614 can be distributed across two or more processing devices (for example multiple CPUs; multiple GPUs; a combination thereof; or the like).
  • a processor can be implemented as a combination of processing circuitry or computing processing units (such as CPUs, GPUs, or a combination of both). Therefore, for the sake of illustration, a processor can refer to a single-core processor; a single processor with software multithread execution capability; a multi-core processor; a multi-core processor with software multithread execution capability; a multi-core processor with hardware multithread technology; a parallel processing (or computing) platform; and parallel computing platforms with distributed shared memory.
  • a processor can refer to an integrated circuit (IC), an ASIC, a digital signal processor (DSP), a FPGA, a PLC, a complex programmable logic device (CPLD), a discrete gate or transistor logic, discrete hardware components, or any combination thereof designed or otherwise configured (for example manufactured) to perform the functions described herein.
  • the processor(s) 614 can access the memory 616 by means of a communication architecture (for example a system bus).
  • the communication architecture may be suitable for the particular arrangement (localized or distributed) and type of the processor(s) 614.
  • the communication architecture 606 can include one or many bus architectures, such as a memory bus or a memory controller; a peripheral bus; an accelerated graphics port; a processor or local bus; a combination thereof; or the like.
  • such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and/or the like.
  • Memory components or memory devices disclosed herein can be embodied in either volatile memory or non-volatile memory or can include both volatile and nonvolatile memory.
  • the memory components or memory devices can be removable or non-removable, and/or internal or external to a computing device or component.
  • Examples of various types of non-transitory storage media can include hard disk drives, zip drives, CD-ROMs, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, flash memory cards or other types of memory cards, cartridges, or any other non-transitory media suitable to retain the desired information and which can be accessed by a computing device.
  • non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
  • the disclosed memory devices or memories of the operational or computational environments described herein are intended to include one or more of these and/or any other suitable types of memory.
  • the memory 616 also can retain data.
  • The computing device 600 also can include mass storage 617 that is accessible by the processor(s) 614 by means of the communication architecture 606.
  • the mass storage 617 can include machine-accessible instructions (for example computer-readable instructions and/or computer-executable instructions).
  • the machine-accessible instructions may be encoded in the mass storage 617 and can be arranged in components that can be built (for example linked and compiled) and retained in computer-executable form in the mass storage 617 or in one or more other machine-accessible non-transitory storage media included in the computing device 600.
  • Such components can embody, or can constitute, one or many of the various modules disclosed herein. Such modules are illustrated as multi-detector control modules 620.
  • the multi-detector control modules 620 may include computer-executable instructions, code, or the like that, responsive to execution by one or more of the processor(s) 614, may perform functions including controlling the one or more detectors as described herein (for example, turning on and/or turning off any of the detectors). Additionally, the functions may include execution of any other methods and/or processes described herein.
  • the LIDAR system 600 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computing device 600 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in data storage, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality.
  • This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as submodules of other modules.
  • blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions.
  • the terms “environment,” “system,” “unit,” “module,” “architecture,” “interface,” “component,” and the like refer to a computer-related entity or an entity related to an operational apparatus with one or more defined functionalities.
  • the terms “environment,” “system,” “module,” “component,” “architecture,” “interface,” and “unit,” can be utilized interchangeably and can be generically referred to as functional elements.
  • Such entities may be either hardware, a combination of hardware and software, software, or software in execution.
  • a module can be embodied in a process running on a processor, a processor, an object, an executable portion of software, a thread of execution, a program, and/or a computing device.
  • both a software application executing on a computing device and the computing device can embody a module.
  • one or more modules may reside within a process and/or thread of execution.
  • a module may be localized on one computing device or distributed between two or more computing devices.
  • a module can execute from various computer-readable non-transitory storage media having various data structures stored thereon. Modules can communicate via local and/or remote processes in accordance, for example, with a signal (either analogic or digital) having one or more data packets (for example data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal).
  • a module can be embodied in or can include an apparatus with a defined functionality provided by mechanical parts operated by electric or electronic circuitry that is controlled by a software application or firmware application executed by a processor.
  • a processor can be internal or external to the apparatus and can execute at least part of the software or firmware application.
  • a module can be embodied in or can include an apparatus that provides defined functionality through electronic components without mechanical parts.
  • the electronic components can include a processor to execute software or firmware that permits or otherwise facilitates, at least in part, the functionality of the electronic components.
  • modules can communicate via local and/or remote processes in accordance, for example, with a signal (either analog or digital) having one or more data packets (for example data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal).
  • modules can communicate or otherwise be coupled via thermal, mechanical, electrical, and/or electromechanical coupling mechanisms (such as conduits, connectors, combinations thereof, or the like).
  • An interface can include input/output (I/O) components as well as associated processors, applications, and/or other programming components.
  • Conditional language such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain implementations could include, while other implementations do not include, certain features, elements, and/or operations. Thus, such conditional language generally is not intended to imply that features, elements, and/or operations are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or operations are included or are to be performed in any particular implementation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Systems, methods, and computer-readable media are disclosed for multi-detector LIDAR systems and methods. An example method may include emitting, by a light emitter of a LIDAR system, a first light pulse. The example method may also include activating a first light detector of the LIDAR system at a first time, the first time corresponding to a time when return light corresponding to the first light pulse would be within a first field of view of the first light detector. The example method may also include activating a second light detector of the LIDAR system at a second time, the second time corresponding to a time when return light corresponding to the first light pulse would be within a second field of view of the second light detector, wherein the first light detector is configured to include the first field of view, the first field of view being associated with a first range from the light emitter, and wherein the second light detector is configured to include the second field of view, the second field of view being associated with a second range from the light emitter.

Description

MULTI-DETECTOR LIDAR SYSTEMS AND METHODS FOR MITIGATING RANGE ALIASING
BACKGROUND
[0001] In some LIDAR systems (for example, bistatic LIDAR systems that use a single receiver to detect light emitted by a single emitter), the emitter used to emit light into the environment and the receiver used to detect return light reflecting from objects in the environment are physically displaced relative to each other. Such LIDAR configurations may inherently be associated with parallax problems because the light emitted by the emitter and received by the detector may not travel along parallel paths. For example, for a LIDAR system designed to operate at very short distances (for example, at a distance of 0.1 meters), the emitter and receiver may need to be physically tilted towards each other (as opposed to aligning them at infinity distance). However, this physical tilting may result in a loss of detection capabilities at long ranges from the LIDAR system. To address this, and to have the capability to handle both short and long-range detections, some LIDAR systems may use a combination of a wide field of view and the aforementioned tilted assembly. This wide field of view, however, may result in another set of problems, including increasing the amount of background light detected by the receiver without increasing the amount of light emitted by the emitter, which may significantly degrade the signal-to-noise ratio of the receiver.
[0002] Additionally, the use of the single receiver for the emitter (or even an array of receivers pointed in a common direction) can lead to other problems as well, such as difficulties in addressing range aliasing and cross-talk concerns. Range aliasing may arise when multiple light pulses are emitted by an emitter and are traversing the environment at the same time. When this is the case, the LIDAR system may have difficulty ascertaining which emitted light pulse the detected return light originated from. To provide an example, the emitter emits a first light pulse at a first time and then emits a second light pulse at a second time before return light from the first light pulse is detected by the receiver. Thus, both the first light pulse and the second light pulse are traversing the environment simultaneously. Subsequently, the receiver may detect a return light pulse a short amount of time after the second light pulse is emitted. However, the LIDAR system may have difficulty determining whether the return light is indicative of a short range reflection based on the second light pulse or a long range reflection based on the first light pulse. Cross-talk concerns may arise based on a similar scenario, but instead of detecting return light from a first light pulse emitted by the same emitter, the receiver from the LIDAR system may instead detect a second light pulse that originates from an emitter of another LIDAR system. In this scenario, the LIDAR system may mistake the detected second light pulse from the other LIDAR system as return light originating from the first light pulse. Similar to range aliasing, this may cause the LIDAR system to mistakenly believe that a short range object is reflecting light back towards the LIDAR system.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] The detailed description is set forth with reference to the accompanying drawings. The drawings are provided for purposes of illustration only and merely depict example embodiments of the disclosure. The drawings are provided to facilitate understanding of the disclosure and shall not be deemed to limit the breadth, scope, or applicability of the disclosure. In the drawings, the left-most digit(s) of a reference numeral may identify the drawing in which the reference numeral first appears. The use of the same reference numerals indicates similar, but not necessarily the same or identical components. However, different reference numerals may be used to identify similar components as well. Various embodiments may utilize elements or components other than those illustrated in the drawings, and some elements and/or components may not be present in various embodiments. The use of singular terminology to describe a component or element may, depending on the context, encompass a plural number of such components or elements and vice versa.
[0004] FIG. 1 depicts an example system, in accordance with one or more example embodiments of the disclosure.
[0005] FIGs. 2A-2B depict an example use case, in accordance with one or more example embodiments of the disclosure.
[0006] FIG. 3 depicts an example use case, in accordance with one or more example embodiments of the disclosure.
[0007] FIGs. 4A-4B depict example circuit configurations, in accordance with one or more example embodiments of the disclosure.
[0008] FIGs. 5A-5B depict example methods, in accordance with one or more example embodiments of the disclosure.
[0009] FIG. 6 depicts a schematic illustration of an example system architecture, in accordance with one or more example embodiments of the disclosure.
DETAILED DESCRIPTION
OVERVIEW
[0010] This disclosure relates to, among other things, multi-detector LIDAR systems and methods (the LIDAR detectors may be referred to as “receivers,” “photodetectors,” “photodiodes,” or the like herein. Additionally, reference may be made herein to a single “photodetector” or “photodiode,” but the LIDAR systems described herein may also similarly include any number of such detectors). In some instances, the detectors may be photodiodes, which may be diodes that are capable of converting incoming light photons into an electrical signal (for example, an electrical current). The detectors may be implemented in a LIDAR system that may emit light into an environment and may subsequently detect any light returning to the LIDAR system (for example, through the emitted light reflecting from an object in the environment) using the detectors. As one example implementation, the LIDAR system may be implemented in a vehicle (for example, an autonomous vehicle, semi-autonomous vehicle, or any other type of vehicle); however, the LIDAR system may be implemented in other contexts as well. The detectors may also more specifically be Avalanche Photodiodes (APD), which may function in the same manner as a normal photodiode, but may operate with an internal gain as well. Consequently, an APD that receives the same number of incoming photons as a normal photodiode will produce a much greater resulting electrical signal through an “avalanching” of electrons, which allows the APD to be more sensitive to smaller numbers of incoming photons than a normal photodiode. An APD may also operate in Geiger Mode, which may significantly increase the internal gain of the APD.
[0011] As aforementioned, bistatic LIDAR systems may include emitters and receivers that are physically displaced relative to one another. Such LIDAR configurations may inherently be associated with parallax problems because the light emitted by the emitter and received by the detector may not travel along parallel paths. For example, for a LIDAR system to detect objects at short distances, the emitter and receiver may need to be physically tilted towards each other (as opposed to aligning them in parallel). However, this physical tilting may result in a loss of detection capabilities at long ranges from the LIDAR system. To address this, and to have the capability to handle both short and long-range detections, some systems may use a combination of a wide field of view and the aforementioned tilted assembly. This wide field of view, however, may result in another set of problems, including increasing the amount of background light detected by the receiver without increasing the amount of light emitted by the emitter, which may significantly degrade the signal-to-noise ratio of the receiver. Additionally, also as mentioned above, bistatic LIDAR systems (or even LIDAR systems in general, including, for example, monostatic LIDAR systems) may have difficulty determining the source of certain return light based on range aliasing and/or cross-talk problems.
[0012] To eliminate or mitigate the above parallax concerns with regards to bistatic LIDAR configurations, a LIDAR system may be used that may include multiple photodetectors to detect return light that is based on emitted light from an emitter device (for example, a laser diode) of the LIDAR system. The photodetectors may be physically oriented (for example, pointed at different angles) so that their individual fields of view include varying distances from the LIDAR system (for example, as depicted in FIG. 1). In such a configuration, a first photodetector may be physically oriented so that its detection field of view encompasses a physical space within a short range from the LIDAR system (for example, a range including a distance 0.1m away from the LIDAR system to 0.195m away from the LIDAR system, encompassing a field of view of 2.85 degrees). A second photodetector may be physically oriented so that its detection field of view covers a physical space just beyond the physical space that the first photodetector covers (for example, a range including a distance 0.19m away from the LIDAR system to 4.1m away from the LIDAR system, also with a field of view of 2.85 degrees). Similarly, a third photodetector may be physically oriented so that its detection field of view covers a physical space just beyond the physical space that the second photodetector covers (for example, a range including a distance 4m away from the LIDAR system to an infinite distance away from the LIDAR system, with a field of view of 2.85 degrees; these example distances and fields of view are provided as arbitrary examples, and any other ranges and/or fields of view may similarly be applicable). The total number of photodetectors included within the LIDAR system may be based on a number of factors. As a first example, the number may be based on a maximum detection range that is desired to be covered by the photodetectors. If the maximum detection range is a smaller distance from the LIDAR system, then a smaller number of photodetectors may be used, and if the maximum detection range is a larger distance from the LIDAR system, then a larger number of photodetectors may be used (it should be noted that while the term “maximum detection range” is used, this distance could theoretically extend out to infinity). As a second example, the number of photodetectors used may also be based on the size of the fields of view of the individual photodetectors being used. A larger number of photodetectors may need to be employed if some or all of the individual photodetectors have a narrower field of view. Likewise, a smaller number of photodetectors may be employed if some or all of the individual photodetectors have a broader field of view. These are just two non-limiting examples of factors that may influence the number of photodetectors used in the LIDAR system, and the number may also depend on any number of additional factors. As a third example, the number of photodetectors used may depend on the amount of overlap in the field of view between the photodetectors. In some instances, the transition from one photodetector’s field of view to another photodetector’s field of view may involve no overlap in the respective fields of view (that is, the end of one photodetector’s field of view may exactly correspond with the beginning of another photodetector’s field of view). In other instances, there may be some overlap between fields of view for different photodetectors.
The overlap may serve as a safeguard to ensure that sufficient photons corresponding to return light may be detected by the photodetectors. The size of the overlap between the fields of view of the photodetectors may vary, and may depend on factors such as a relative size of the return light versus the active area of the photodetectors. Furthermore, in some cases the direction in which individual photodetectors are pointing may be dynamically adjustable as well. In some embodiments, the photodetectors may be configured in an array such that they are physically spaced apart from one another at equal distances. However, in some embodiments, the photodetector arrays may also be physically spaced apart in unequal intervals. For example, a separation that is logarithmically based may be more beneficial for producing equal resolution down to the minimum range. This may be because using a linearly-spaced detector array with the same FoV may result in ranges that are asymmetrical, with a first detector in the array responsible for a smaller amount of physical space and the last photodetector in the array being responsible for a much larger amount of physical space (for example, 4m to infinity). By moving to a different spacing model (for example, logarithmic spacing as described above), individual detector ranges may be optimized to be more equalized. This may ensure, for example, that multiple returns from any one area in the environment do not overload the receiver responsible for that area. Alternatively, more closely-spaced detectors may be used in the far-field to further reduce the FoV as the distance covered increases and the receiver cone becomes very large, thus making the system less susceptible to noise in the far-field.
[0013] In some embodiments, the photodetectors may be selectively “turned on” and/or “turned off” (which may similarly be referred to as “activating” or “deactivating” a photodetector). “Turning on” a photodetector may refer to providing a bias voltage to the photodetector that satisfies a threshold voltage level. The bias voltage satisfying the threshold voltage level (for example, being at or above the threshold voltage level) may provide sufficient voltage to the photodetector to allow it to produce a level of output current based on light received by the photodetector. The output threshold voltage level being used and the corresponding output current being produced may depend on the type of photodetector being used and the desired mode of the operation of the photodetector. For example, if the photodetector is an APD, the threshold voltage level may be set high enough so that the photodetector is capable of avalanching upon receipt of light as described above. Similarly, if it is desired for the APD to operate in Geiger Mode, the threshold voltage level may be set even higher than if the APD were desired to operate outside of the Geiger Mode region of operation. That is, the gain of the photodetector when this higher threshold voltage level is applied may be much larger than if the photodetector were operating as a normal Avalanche Photodiode. Additionally, if the photodetector is not an APD, and operates in a linear mode of operation in which the output current produced based on a similar amount of detected light may be much lower than if the photodetector were an APD, then the threshold bias voltage may be lower. Furthermore, even if the photodetector is an APD, the threshold voltage level may be set below a threshold voltage level used to allow the APD to avalanche upon receipt of light. That is, the bias voltage applied to the APD may be set low enough to allow the APD to still produce an output current, but only in a linear mode of operation.
[0014] Likewise, in some embodiments, “turning off” a photodetector may refer to reducing the bias voltage provided to the photodetector to below the threshold voltage level. In some instances, “turning off” the photodetector may not necessarily mean that the photodetector is not able to detect return light. That is, the photodetector may still be able to detect return light while the bias voltage is below the threshold voltage level, but the output signal produced by the photodetector may be below a noise floor established for a signal processing portion of the LIDAR system. As one non-limiting example, a photodetector may be an Avalanche Photodiode. If a sufficiently high bias voltage is provided to allow the APD to avalanche upon receipt of light, then a large current output may be produced by the APD. However, if a lower bias voltage is applied, then the APD may still produce an output, but the output may be based on a linear mode of operation and the resulting output current may be much lower than if the APD were to avalanche upon receipt of a same number of photons. The signal processing portion of the LIDAR system may have a noise floor configured to correspond to an output of the APD in linear mode, so that any outputs from the APD when operating with this reduced bias voltage may effectively be disregarded by the LIDAR system. Thus, selectively turning on and/or turning off the photodetectors may entail only having some of the photodetectors capable of detecting return light at a given time.
[0015] In some embodiments, the timing at which the photodetectors may be turned on and/or turned off may depend on predetermined time intervals. As a first example, these predetermined time intervals may be based on an amount of time that has elapsed since a given light pulse was emitted from the emitter of the LIDAR system. Continuing this first example, a first light pulse being emitted from the emitter may trigger a timing sequence. The timing sequence may involve individual photodetectors being turned on when return light, corresponding to the emitted light pulse reflecting from an object, may be expected to be within a field of view of a particular photodetector. Still continuing this first example, a first photodetector may be pointed in a direction such that its field of view may include a range of physical space within a closest distance from the LIDAR system (Rx1 as depicted in FIG. 1 may provide a visual example of this first photodetector’s field of view). This first photodetector may be the first photodetector to be turned on for a first time interval during which return light originating from the first light pulse may be expected to have been reflected from objects within the first field of view. As an example, the field of view of this first photodetector may include a range from 0.19m away from the LIDAR system to 4.1m away from the LIDAR system (again, it should be noted that any specific examples of ranges and/or fields of view of any photodetectors described herein may be arbitrary, and any other ranges and/or fields of view may similarly be applicable). That is, if the emitted light were to be emitted from the LIDAR system and then reflect from an object in the environment within this range from the LIDAR system back towards the first photodetector, then the first photodetector pointing in that direction may be turned on and able to detect the return light. Once the first time interval has passed and a second time interval begins, a second photodetector may be turned on. Similar to the first time interval, the second time interval may correspond to a period of time during which any return light detected by the second photodetector is expected to have originated from an object in the field of view of the second photodetector. This process may continue with some or all of the remaining photodetectors in the array being turned on after successive time intervals of previous photodetectors in the array have passed. This process may be visualized, for example, in FIG. 2, as described below. Additionally, in some cases, once a time interval associated with a field of view of a particular photodetector has passed, that photodetector may be turned off as well. That is, only the photodetector associated with the current time interval may be turned on at any given time. This may provide a number of benefits, such as serving to reduce extraneous data received by the other photodetectors during this time and/or reducing the power consumption of the LIDAR system, to name a few examples. In some embodiments, however, some or all of the photodetectors may remain on through multiple time intervals, or may remain on at all times. This may be beneficial because it may allow as much data from the environment to be captured as possible. Additionally, the time intervals may not necessarily need to be the same length of time. For example, a time interval associated with a given photodetector may depend on the size of the field of view of the photodetector.
That is, a photodetector with a narrower field of view may be associated with a shorter time interval than a photodetector with a broader field of view. This situation may arise when photodetectors with varying sizes of field of view are employed. Additionally, in some cases, any other type of time interval may be used to determine when to turn on and/or turn off any of the photodetectors included within the LIDAR system.
[0016] In some embodiments, the bias voltage provided to an individual photodetector during a given time interval may not necessarily be fixed. That is, the bias voltage that is provided to the photodetector may vary over time based on a certain function (for example, “function” may refer to a magnitude of bias voltage applied with respect to time. That is, if a plot of the bias voltage applied over time were to be created, the function would be visualized by the plot). The function may not necessarily involve the bias voltage only either being at the threshold voltage level or at a value below the threshold voltage level (that is, if the bias voltage applied were plotted as a function, it may not necessarily look like a step function that rises to the threshold voltage level at the beginning of the time interval and drops below the threshold voltage level at the end of the time interval). The function may instead be associated with a certain degree of change in the bias voltage throughout the time interval. For example, the function may represent a Gaussian function. Using this type of function to dictate the bias voltage being provided, for example, the bias voltage may increase over a first period of time, reach a peak bias voltage, and then may decrease back down to below the threshold voltage level over a second period of time. In some cases, the peak of the example Gaussian function may be maintained throughout the entire predetermined time interval associated with the particular photodetector. In some cases, the upward slope of the Gaussian function may begin at the beginning of the time interval and the peak of the Gaussian function may be reached at a certain amount of time after the beginning of the time interval. This may be desirable if it is desired for the photodetector to be at its most sensitive to return light at a particular portion of its field of view. In some cases, the upward slope of the example Gaussian function may begin during the time interval of a previous photodetector (and likewise the downward slope may extend into a successive photodetector’s time interval). In some cases, the function may be any other function other than a Gaussian function (for example, the function may actually even be a step function in some instances). That is, the bias voltage applied to a given photodetector may vary over time in any other number of ways. Additionally, different photodetectors may be associated with different types of functions. Different photodetectors may also be associated with the same type of function, but certain parameters of the function may vary. For example, the peak of a Gaussian function used for one photodetector may be greater than the peak of a Gaussian function used for a second photodetector. The functions used may also vary for different emitted light pulses from the LIDAR system. That is, when the LIDAR system emits a first light pulse, a first type of function may be used, but when the LIDAR emits a subsequent light pulse, a second type of function may be used. The above examples of how different functions may be used to control the bias voltage applied to a photodetector may merely be exemplary, and any other type(s) of functions may be applied to any combination of photodetectors based on any number of timing considerations.
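As a hedged numerical illustration of such a Gaussian bias function (the peak voltage, center, and width below are arbitrary example parameters, not values from this disclosure), the bias applied across one detector's time interval may be sampled as follows:

```python
# Illustrative Gaussian bias profile over one detector's time interval.
import math

def gaussian_bias_v(t_s: float, peak_v: float, center_s: float, width_s: float) -> float:
    """Bias ramps up, peaks at center_s, then falls back below threshold."""
    return peak_v * math.exp(-((t_s - center_s) ** 2) / (2.0 * width_s ** 2))

# Sample a 1 us window centered at 500 ns with a 150 ns spread.
for t in (0.0, 250e-9, 500e-9, 750e-9, 1000e-9):
    bias = gaussian_bias_v(t, 200.0, 500e-9, 150e-9)
    print(f"t = {t * 1e9:6.1f} ns  bias = {bias:6.2f} V")
```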
[0017] In some embodiments, the manner in which photodetectors are turned on and/or off may also be dynamic instead of being based on fixed time intervals that are used for successive light pulse emissions from the emitter. That is, the time intervals used to determine the bias voltage to be provided to different photodetectors may not consistently iterate in the same manner forever, but may rather change over time. In some cases, the time intervals used for some or all of the photodetectors may change after each successive emitted light pulse by the emitter. In some cases, the time intervals may change after a given number of emitted light pulses are emitted by the emitter. In some cases, the time intervals may also change within a period of time during which a single emitted light pulse may currently be traversing the environment. That is, a light pulse may be emitted, a first photodetector may be turned on for a first time interval, and then a second time interval for a second photodetector may be dynamically changed to a different time interval. In some cases, these time interval changes may be based on data that is received from the environment. That is, a closed-loop feedback system may be implemented to vary the time intervals (it should be noted that this closed-loop feedback system may similarly be used to dynamically adjust the types of functions used to dictate the bias voltage provided to different photodetectors).
[0018] In some embodiments, the physical orientation of the photodetectors may be either fixed or dynamically configurable. That is, individual photodetectors may include an actuation mechanism that may allow the direction in which a photodetector is pointed to be dynamically adjusted (and consequently, the fields of view of the photodetectors may be dynamically adjustable). For example, the actuation mechanism may include microelectromechanical systems (MEMS), or any other type of actuation mechanism that may allow a photodetector to adjust the direction in which it points. This dynamic adjustment of the physical orientation of one or more of the photodetectors may be performed for any number of other reasons. As a first example, within the time period during which one particular emitted light pulse is traversing the environment, a first photodetector may be turned on and data may be captured by that first photodetector. A direction in which a second photodetector is pointing may then be adjusted based on the data captured by the first photodetector. As a second example, multiple photodetectors may be adjusted to point in the same direction. This may be desirable because this may allow for more data to be captured from a particular portion of the environment than if a single photodetector were used to capture data from that portion of the environment. This may be beneficial because one photodetector may serve as a failsafe for another photodetector (that is, one photodetector may serve to validate the data received by the other photodetector or may serve to capture data from the portion of the environment if the other photodetector is unable to do so for a given period of time). This may also be useful if the portion of the environment is determined to be an area of interest and thus it is desirable to obtain as much data from that portion of the environment as possible. However, these are merely examples of reasons for adjusting the physical orientation of one or more of the photodetectors, and such adjustments may be made for any other number of reasons as well.
[0019] In some embodiments, the circuitry (examples of which may be depicted in FIGs. 4A-4B) used to capture the data being produced by individual photodetectors within the photodetector array may involve providing individual analog to digital converters (ADCs) for each photodetector. An analog to digital converter may take an analog signal as an input and produce a corresponding digital output. The current output by a photodetector may be an analog signal, so the analog to digital converter may take this signal as an input and convert it into a digital form that may be used by a signal processing portion of the LIDAR system. In some embodiments, however, a single ADC may be used for multiple photodetectors or for all of the photodetectors. In such embodiments, the outputs of the individual photodetectors may be summed and provided as a single output to the ADC. The summing may be performed using a summer circuit, which may comprise multiple circuits that are capacitively coupled together, or may be in the form of an op-amp summer. Additionally, prior to summing the outputs of the individual photodetectors, one or more attenuators may be used to attenuate the outputs of one or more of the photodetectors. For example, if all of the detectors are turned on at all times, the attenuators may be used to attenuate outputs of certain detectors at certain times. For example, the attenuation may be performed to attenuate the outputs of all of the detectors except one detector. For example, the one detector may correspond to a field of view in which it is expected return light from an emitted light pulse would currently be located. The attenuators may thus serve to reduce the amount of detector output noise that is provided to the ADC and signal processing components 410. The attenuators may also be used in other ways as well. For example, all of the outputs of the detectors may be left unattenuated unless it is determined that it is desired to block the output of one or more detectors. In some cases, either of the aforementioned circuitry embodiments may be employed when some or all of the photodetectors remain turned on at all times, instead of being selectively turned on and off during a single light pulse emission. In some embodiments, however, the circuitry may also be employed when the photodetectors are selectively turned on and/or off as well. For example, in scenarios where a photodetector is selectively turned on as return light based on an emitted light pulse is expected to enter a field of view of the photodetector (that is, only one photodetector is turned on at a time), a single ADC may be employed. In such cases, there may not be any need for summing and/or attenuation as may be the case when multiple or all photodetectors are turned on at the same time. If the photodetector that is used has a long recovery period, then this configuration may help to prevent background noise from reducing a dynamic range.
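By way of non-limiting illustration, the following Python sketch models the single-ADC case mentioned above, in which only one photodetector is turned on at a time, so the active detector's output can drive the shared converter directly with no summing or attenuation stage. The function name, 12-bit resolution, and signal values are assumed for illustration only.

```python
# Minimal sketch of one ADC shared among detectors that are turned on
# one at a time: quantize only the currently active detector's output.
def shared_adc_sample(detector_outputs, active_index, full_scale=1.0, bits=12):
    analog = detector_outputs[active_index]
    levels = 2 ** bits
    return max(0, min(levels - 1, int(analog / full_scale * levels)))

# Three detector outputs; only detector 1 is currently turned on.
print(shared_adc_sample([0.02, 0.37, 0.01], active_index=1))  # 1515
```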
[0020] In some embodiments, the use of multiple photodetectors as described herein may also have the added benefit of mitigating range aliasing concerns that may arise in LIDAR systems. Range aliasing may be a phenomenon that may occur when two or more different light pulses emitted by an emitter of the LIDAR system are simultaneously traversing the environment. For example, the emitter of the LIDAR system may emit a light pulse at a first time, and a time at which the light pulse would be expected to return to the LIDAR system from a maximum detection range may elapse. The LIDAR system may then emit a second light pulse. A short time after this second light pulse is emitted, return light based on the first light pulse reflecting from an object outside the maximum detection range may be detected. When this is the case, the LIDAR system may incorrectly identify the return light pulse as being a short range return of the second light pulse instead of a long range return from the first light pulse. This may be problematic because light associated with multiple emitted light pulses from the LIDAR system may exist in the environment during any given time interval. This may result in the shot rate (the rate at which subsequent light pulses may be emitted by the emitter) of the LIDAR system being lowered to reduce the likelihood of numerous light pulses traversing the environment at the same time and resulting in this range aliasing problem.
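By way of non-limiting illustration, the following Python sketch works through the underlying arithmetic: the shot period T sets a maximum unambiguous range of c·T/2, and a return from beyond that range is reported at an aliased short distance when attributed to the most recent pulse. The 2 µs period and the 350 m object are example values only.

```python
# Minimal sketch of the range aliasing arithmetic for a single detector.
C = 299_792_458.0  # speed of light, m/s

def max_unambiguous_range(shot_period_s):
    return C * shot_period_s / 2.0

period = 2e-6  # example: 2 microseconds between successive pulses
print(max_unambiguous_range(period))  # ~299.8 m

# A return from a 350 m object arrives ~0.33 us after the *second* pulse,
# so attributing it to that pulse yields an aliased short-range reading.
late_delay = 2 * 350.0 / C - period
print(C * late_delay / 2.0)  # ~50.2 m instead of 350 m
```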
[0021] In some embodiments, using multiple photodetectors in the manner described herein may mitigate or eliminate range aliasing concerns because more data about the environment may be ascertained than if only one photodetector were used. This concept may be further exemplified in FIG. 3. For example, a first light pulse may be emitted at a first time. The first light pulse may traverse the environment, and successive individual photodetectors may be turned on and/or off at varying time intervals as the first light pulse traverses further away from the emitter. However, instead of turning off the last photodetector (for example, the photodetector with the detection range furthest from the LIDAR system) when the first light pulse travels beyond the field of view of the last photodetector without reflecting from an object and being detected by the last photodetector, the last photodetector may be kept turned on as the first light pulse continues to travel beyond the field of view of the last photodetector. Continuing this example, a second light pulse may then be emitted by the emitter. A short time after the second light pulse is emitted (for example, when it is within the field of view of a first photodetector with a field of view including a closest distance range to the emitter), the last photodetector may detect return light. In this case, it is known that the second light pulse would not have traveled far enough to be detected by the last photodetector as return light, so the detection by the last photodetector is more likely associated with the first light pulse. In this manner, it may be more easily ascertained which light pulse a detected light return may be associated with.
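The following Python sketch, with hypothetical names and example values, illustrates this attribution logic: a detection reported by the far (last) detector too soon after the second emission cannot be a return of the second pulse, because the second pulse could not yet have reached the far detector's distance range.

```python
# Minimal sketch of attributing a far-detector detection to a pulse.
C = 299_792_458.0  # speed of light, m/s

def attribute_return(detect_time_s, second_emit_time_s, far_range_start_m):
    elapsed = detect_time_s - second_emit_time_s
    min_round_trip = 2 * far_range_start_m / C
    # Too soon for the second pulse to appear in the far field of view,
    # so the detection is more likely a long-range return of pulse one.
    return "first_pulse" if elapsed < min_round_trip else "second_pulse"

print(attribute_return(detect_time_s=2.3e-6, second_emit_time_s=2.0e-6,
                       far_range_start_m=200.0))  # -> first_pulse
```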
[0022] As another example of how range aliasing concerns may be mitigated and/or eliminated by the use of multiple photodetectors, if the photodetectors are controlled to be turned on and/or off based on predetermined time intervals as described above, then return light that reflects from an object beyond the maximum detection range of the LIDAR system may never be detected (which may eliminate the possibility for range aliasing). The reason for this may be exemplified as follows. In this example, a first light pulse is emitted. The photodetectors then proceed through their sequence of turning on and/or off as described above until the final photodetector (the photodetector with the longest distance field of view from the LIDAR system) is turned off (when any expected return light would originate from beyond the maximum detection range). Then a second light pulse is emitted from the LIDAR system. If there were only one photodetector that was always turned on, then return light from the first light pulse may then return and be detected by the photodetector. However, if there are multiple photodetectors and each is only turned on for a given time interval, then it is more likely that return light that is detected by a given photodetector would have originated from the second light pulse. That is, unless return light based on the first light pulse were to reach a photodetector’s field of view during the time interval during which that particular photodetector is turned on. Even if this scenario does take place (the return light from the first light pulse reaches a photodetector’s field of view while that photodetector is turned on), the use of the multiple photodetectors may still allow for a determination to be made that the detected return light could potentially originate from the first light pulse. That is, if the return light from the first pulse returns and is detected by one of the turned-on photodetectors, but the second light pulse has still not reflected from an object back towards a photodetector, then return light from the second light pulse may be detected by a subsequent photodetector that is turned on. This means that return light from both pulses may be received, and the LIDAR system may thus be able to determine that at least one of the return light detections was based on range aliasing. If this is the case, the LIDAR system may simply disregard both of these two detected returns.
[0023] In some embodiments, mitigating range aliasing as described above may have the added benefit of allowing a larger number of light pulses to be emitted within a given time frame than if range aliasing were not mitigated using these systems and methods. This may be because it may not be as concerning to have more light pulses traversing the environment at the same time if it is more likely that the LIDAR system may be able to determine which return light is associated with which emitted light pulse. The ability to emit more light pulses in a given period of time may allow a larger amount of data about the environment to be collected at a faster rate.
[0024] In some embodiments, the multiple photodetector bistatic LIDAR systems described herein may also have the further benefit of mitigating or eliminating cross talk between different LIDAR systems. Cross talk may refer to a scenario that arises when an emitter from a first LIDAR system is pointed towards a second LIDAR system. If the first LIDAR system emits a light pulse, that light pulse may then travel towards the second LIDAR system and be detected by a photodetector of the second LIDAR system. Similar to range aliasing concerns, the second LIDAR system may have difficulty in discerning between its own emitted light pulses and a light pulse originating from another LIDAR system if only one photodetector is used. This cross talk scenario may be mitigated or eliminated in a similar manner in which range aliasing may be mitigated or eliminated.
[0025] In some embodiments, the multiple photodetector bistatic LIDAR systems described herein may also have further benefits even beyond mitigating parallax, range aliasing, and/or cross talk concerns. For example, the use of the multiple photodetectors may mitigate a scenario where one particular photodetector may become saturated by a bright light. When this is the case, the photodetector may enter a recovery period during which it may not be able to detect any subsequent light. If only one photodetector is used in a LIDAR system, then the LIDAR system may become blind to return light during this recovery period. However, if multiple photodetectors are used, any of the other photodetectors may be used as backups for the photodetector currently in its recovery period.
[0026] Turning to the figures, FIG. 1 may depict a high-level schematic diagram of an example LIDAR system 101 that may implement the multiple detectors as described herein. A more detailed description of an example LIDAR system may be provided with respect to FIG. 6 as well. With reference to the elements depicted in the figure, the LIDAR system 101 may include at least one or more emitter devices (for example, emitter device 102a, and/or any number of additional emitter devices) and one or more detector devices (for example, detector device 106a, detector device 106b, detector device 106c, and/or any number of additional detector devices). Hereinafter, reference may be made to elements such as “emitter device” or “detector device”; however, such references may similarly apply to multiple of such elements as well. In some embodiments, the LIDAR system 101 may be incorporated onto a vehicle 101 and may be used at least to provide range determinations for the vehicle 101. For example, the vehicle 101 may traverse an environment 108 and may use the LIDAR system 101 to determine the distance of various objects (for example, the pedestrian 107a, the stop sign 107b, and/or the second vehicle 107c) in the environment 108 relative to the vehicle 101.
[0027] Still referring to FIG. 1, the one or more detector devices may be configured such that individual detector devices are physically oriented to point in different directions. Consequently, different detectors may have different corresponding fields of view in the environment 108. For example, detector device 106c may be associated with field of view 110, detector device 106b may be associated with field of view 111, and detector device 106a may be associated with field of view 112. As depicted in the figure, a field of view of an individual detector device may cover a particular range of distances from the emitter 102a. For example, the field of view 110 of detector device 106c is shown to cover a closest range of distances to the emitter 102a, field of view 111 of detector device 106b is shown to cover an intermediate range of distances to the emitter 102a, and field of view 112 of detector device 106a is shown to cover a furthest range of distances from the emitter 102a. Thus, taken together, the detector devices may cover a total field of view comprising the field of view 110, the field of view 111, and the field of view 112. Although the figure depicts three detector devices with three associated fields of view, the total field of view may similarly be split among any number of detector devices as well. Additionally, as depicted in the figure, one field of view for one photodetector may begin at the exact location where a prior field of view for another photodetector ends, leaving no field of view blind spot between photodetectors. However, in some cases (not depicted in the figure), there may be overlap between the various fields of view of the different photodetectors as well. The different fields of view may allow the different detector devices to detect return light 120 from the environment 108 that is based on the emitted light 105 from the emitter 102a. Due to the varying ranges of distances the different fields of view cover, each detector device may be configured to detect objects at varying distances from the vehicle 101. For example, because the detector device 106c is associated with field of view 110 that covers distances closest to the vehicle 101, the detector device 106c may be configured to detect return light 120 that is reflected from objects a shorter distance away from the vehicle 101 (for example, the vehicle 107c). As is described herein, the detector devices may also be selectively turned on and/or turned off based on time estimates as to when return light 120 would be within the field of view of the individual detector devices.
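As a non-limiting illustration of the time estimates mentioned above, the following Python sketch converts a detector's distance range into the window, measured from the emission time, during which return light from that range could arrive (a round trip over distance d takes 2d/c). The three distance ranges are example values rather than values taken from the figure.

```python
# Minimal sketch: map each detector's distance range to an on/off window.
C = 299_792_458.0  # speed of light, m/s

def activation_window(d_near_m, d_far_m):
    return (2 * d_near_m / C, 2 * d_far_m / C)

# Three contiguous fields of view, nearest to furthest (example ranges).
for d_near, d_far in [(0.0, 50.0), (50.0, 150.0), (150.0, 300.0)]:
    t_on, t_off = activation_window(d_near, d_far)
    print(f"{d_near:5.1f}-{d_far:5.1f} m: on {t_on * 1e9:6.1f} ns, "
          f"off {t_off * 1e9:6.1f} ns after emission")
```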
EXAMPLE USE CASES
[0028] FIGs. 2A-2B depict an example use case 200, in accordance with one or more example embodiments of the disclosure. The use case 200 may exemplify a manner in which the one or more detectors may be selectively turned on and/or turned off (as described above) during various time intervals subsequent to a light pulse being emitted from an emitter. The use case 200 may depict an emitter 202, which may be the same as emitter 102a described with respect to FIG. 1, as well as any other emitter described herein. The use case 200 may also depict one or more detectors, including, for example, a first detector 203, a second detector 204, and a third detector 205, which may be the same as detector device 106a, detector device 106b, and/or detector device 106c, as well as any other detectors described herein. Each of the detectors may have an associated field of view. For example, the first detector 203 may be associated with field of view 206, the second detector 204 may be associated with field of view 207, and the third detector 205 may be associated with field of view 208. As with the fields of view in FIG. 1, the individual fields of view in the use case 200 (for example, field of view 206, field of view 207, and/or field of view 208) may include varying distance ranges from the emitter 202 (or, more broadly speaking, from the LIDAR system). The fields of view may allow the detectors to detect return light (for example, return light 211, return light 213, and/or return light 215) that is reflected from objects in the environment (for example, a tree 209, a vehicle 214, and/or a house 219). As depicted in use case 200, field of view 206 associated with the first detector 203 may cover a distance range that may be closest to the emitter 202 (or the LIDAR system), field of view 207 associated with the second detector 204 may cover a distance range that is intermediate from the emitter 202 (or the LIDAR system), and field of view 208 associated with the third detector 205 may cover a distance range that is furthest from the emitter 202 (or the LIDAR system). Taken together, the field of view 206, field of view 207, and field of view 208 may cover a total field of view of the LIDAR system. That is, an individual detector with its corresponding field of view may cover a portion of the total field of view of the LIDAR system, and all of the fields of view taken together may cover the desired field of view of the LIDAR system. For example, the desired total field of view may correspond to a maximum detection range of the LIDAR system, which may be predefined, or selected based on any number of factors. Although use case 200 may only depict three detectors with three corresponding fields of view, any other number of detectors and associated fields of view may be used to cover a detection range of the LIDAR system. In some cases, the emitter 202 and one or more detectors may be a part of an overall LIDAR system, such as a bistatic LIDAR system. That is, the use case 200 may depict a use case being implemented by the LIDAR system depicted in FIG. 1, for example.
[0029] Referring to FIG. 2A, the use case 200 may initiate with scene 201. Scene 201 may involve the emitter 202 of the LIDAR system emitting a light pulse 210 into the environment. Scene 201 may also depict that, subsequent to the light pulse 210 being emitted by the emitter 202, the detector with a field of view 206 that covers a distance range closest to the emitter 202 (the first detector 203) may be turned on. This first detector 203 may be turned on for a given first time interval, ΔT1. During this first time interval, the other detectors in the LIDAR system (for example, second detector 204 and third detector 205) may be turned off, which may be represented by their fields of view being depicted as dashed lines. The first time interval, ΔT1, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 206 of the first detector 203. For illustrative purposes, scene 201 may depict a tree 209 that is within the field of view 206, and that reflects return light 211. This return light 211 may then be detected by the first detector 203. It should be noted that the tree 209 (and associated return light 211) may be depicted in dashed lines as an example of what return light in the field of view 206 may look like. However, for the sake of continuing the example in subsequent scenes of this use case 200, the tree 209 may be considered to not actually exist in the environment so that the light pulse 210 may traverse the environment to further distances from the emitter 202.
[0030] Continuing with FIG. 2B, the use case 200 may proceed with scene 215. Scene 215 may involve the light pulse 210 continuing to traverse the environment beyond the location of the tree 209 as shown in scene 201. Scene 215 may take place during a second time interval, ΔT2. During the second time interval, the first detector 203 may be turned off and the second detector 204 may be turned on. That is, the second detector 204 may now be the only detector that is currently turned on. Similar to the first time interval, the second time interval, ΔT2, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 207 of the second detector 204. As depicted in the figure, the second detector 204 may include a field of view 207 including a distance range that begins at the end of the distance range covered by the field of view 206 of the first detector 203. In some cases, although not depicted in the figure, there may also be some overlap between the field of view 206 and the field of view 207. For illustrative purposes, scene 215 may thus depict a vehicle 214 that is within the field of view 207, and that reflects return light 213. This return light 213 may then be detected by the second detector 204. Again, it should be noted that the vehicle 214 (and associated return light 213) may be depicted in dashed lines as an example of what return light in the field of view 207 may look like. However, for the sake of continuing the example in subsequent scenes of this use case 200, the vehicle 214 may be considered to not actually exist in the environment so that the light pulse 210 may traverse the environment to greater distances as depicted in scene 230 described below.

[0031] Continuing with FIG. 2B, the use case may proceed to scene 230. Scene 230 may involve the light pulse 210 continuing to traverse the environment beyond the location of the vehicle 214 as shown in scene 215. Scene 230 may take place during a third time interval, ΔT3. During the third time interval, the second detector 204 may be turned off and the third detector 205 may be turned on. That is, the third detector 205 may now be the only detector that is currently turned on. Similar to the first time interval and the second time interval, the third time interval, ΔT3, may correspond to a time interval during which return light reflected from an object in the environment would be expected to be within the field of view 208 of the third detector 205. As depicted in the figure, the third detector 205 may include a field of view 208 including a distance range that begins at the end of the distance range covered by the field of view 207 of the second detector 204. In some cases, although not depicted in the figure, there may also be some overlap between the field of view 207 and the field of view 208. For illustrative purposes, scene 230 may depict a house 217 that is within the field of view 208, and that reflects return light 218. This return light 218 may then be detected by the third detector 205.
[0032] Continuing with FIGs. 2A-2B, the use case 200 may thus depict a progression of an example of how various detectors may be selectively turned on and/or turned off over time as an emitted light pulse traverses further into the environment. However, this use case 200 should not be taken as limiting, and the detectors may be operated in any other manner as may be described herein. For example, in some cases, all of the detectors may be turned on at all times (instead of individual detectors being selectively turned on and/or turned off), more than one detector may be turned on at any given time, and/or any number of detectors may be turned on in any other combination for any other lengths of time. Additionally, in scenarios where detectors are selectively turned on and/or off, as shown in the use case 200, the time intervals during which the various detectors are turned on and/or off may vary. For example, the time interval during which one detector is turned on may be shorter or longer than the time interval during which another detector is turned on. In some cases, the timing may depend on one or more types of functions used to determine the bias voltage to apply to a given photodetector over time. For example, as described herein, the bias voltage applied to the different photodetectors may be represented as a time-shifted Gaussian function. That is, within a given time interval, the bias voltage applied to the detector 203 may ramp up to a peak bias voltage value, and then ramp back down as the end of the time interval is approached. Similarly, as the end of the first time interval approaches and the beginning of the second time interval associated with the detector 204 approaches, the bias voltage may begin to ramp up for the detector 204 using a similar Gaussian function, and so on. Finally, although the fields of view (for example, field of view 206, field of view 207, and/or field of view 208) are shown as being fixed in the use case 200, any of these fields of view may also be adjustable. That is, a field of view may be broadened or narrowed, or a direction of a field of view may be altered. The field of view may also be altered in any other manner, such as by introducing optical systems that may alter the direction of the field of view. As described herein, the field of view may be altered for any number of reasons, such as to focus multiple detectors towards a similar location within the environment.
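By way of non-limiting illustration, the following Python sketch evaluates such time-shifted Gaussian bias profiles. The window centers, width, and 30 V peak are assumed example values, not parameters from this disclosure.

```python
# Minimal sketch of time-shifted Gaussian bias profiles: each detector's
# bias peaks at the center of its own window and falls off around it.
import math

def gaussian_bias(t_s, center_s, sigma_s, v_peak=30.0):
    return v_peak * math.exp(-((t_s - center_s) ** 2) / (2 * sigma_s ** 2))

centers = [100e-9, 300e-9, 500e-9]  # one window center per detector
t = 280e-9  # sample time: late in the second detector's window
print([round(gaussian_bias(t, c, sigma_s=80e-9), 1) for c in centers])
# -> [2.4, 29.1, 0.7]; only the second detector is near full bias
```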
[0033] FIG. 3 depicts an example use case 300, in accordance with one or more example embodiments of the disclosure. The use case 300 may depict one example of how range aliasing concerns may be mitigated and/or eliminated by the multi-detector systems and methods described herein. Use case 300 may depict two parallel timelines. Scenes 301 and 310 may comprise one timeline and scenes 320 and 340 may comprise a second, parallel timeline. Scenes 301 and 310 may be included to depict how range aliasing issues may arise in single detector LIDAR systems, and scenes 320 and 340 may depict how these issues may be ameliorated by using a multi-detector system as described herein. As such, scenes 301 and 310 may depict a LIDAR system that may only include one emitter 302 and one detector 303. The detector 303 may be capable of detecting return light with a field of view 304 that may cover up to a maximum detection distance 305.
[0034] Beginning with scenes 301 and 310, scene 301 may depict the emitter 302 emitting a first light pulse 306 into the environment. The first light pulse 306 is shown as traversing the environment and eventually moving past the maximum detection distance 305 of the detector 303. That is, the first light pulse 306 in scene 301 may not yet have reflected from an object in the environment as return light and been detected within the field of view 304 of the detector 303. With this being the case, a potential range aliasing problem may arise as depicted in scene 310. In scene 310, the emitter 302 is shown as emitting a second light pulse 307 into the environment. However, at some point while the second light pulse 307 is traversing the environment, the first light pulse may finally reflect from an object (for example, tree 308) and return towards the field of view 304 of the detector 303 as return light 309. The return light 309 may then be detected by the detector 303 at point 310, which may correspond to a point when the return light 309 first enters the field of view of the detector 303 (which, for exemplification purposes, may take place at a first time). However, at the first time when the return light 309 from the first light pulse 306 is detected by the detector 303 at point 310, the second light pulse may also currently be within the environment at point 311. That is, the second light pulse 307 may have only traveled a short distance from the emitter 302 by the time the return light 309 from the first light pulse 306 is detected by the detector 303. When this happens, the back-end signal processing components of the LIDAR system (not shown in the figure) may have difficulty in determining whether the return light that was detected by the detector 303 is a short range detection based on the second light pulse 307, or a long range detection based on the first light pulse 306. This may be because distance determinations based on the emitted light pulses from the emitter 302 may be made based on time of flight (ToF) determinations, for example. That is, the LIDAR system may ascertain when a light pulse is emitted, and may then compare the emission time to a time at which return light is detected by the detector. The resulting difference in time may then be used to determine the distance at which the emitted light was reflected back to the LIDAR system. Given this, the LIDAR system may not be able to discern between two light pulses at different distances within the field of view 304 of the single detector 303, since both light pulses could theoretically be the source of the return light being detected by the detector 303.
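The following Python sketch (with example times only) makes this ambiguity concrete: one detection time is consistent with two very different distances depending on which emitted pulse the return is attributed to.

```python
# Minimal sketch of the ToF ambiguity seen by a single detector.
C = 299_792_458.0  # speed of light, m/s

first_emit = 0.0
second_emit = 2.0e-6
detect = 2.3e-6  # return light detected shortly after the second pulse

short_range = C * (detect - second_emit) / 2.0  # if it was pulse two
long_range = C * (detect - first_emit) / 2.0    # if it was pulse one
print(f"short-range hypothesis: {short_range:5.1f} m")  # ~45.0 m
print(f"long-range hypothesis: {long_range:5.1f} m")    # ~344.8 m
```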
[0035] Continuing with FIG. 3, scene 320 and scene 340 may depict an example manner in which the multi-detector systems described herein may mitigate or eliminate the range aliasing concerns exemplified in scene 301 and scene 310. Scene 320 and scene 340 may thus depict a multi-detector system that may include an emitter 302 and one or more detectors (for example, first detector 321, second detector 322, and/or third detector 323, which may be the same as the first detector 203, the second detector 204, and/or the third detector 205, as well as any other detectors described herein). A detector may be associated with a field of view covering a particular distance range from the emitter 302. For example, the first detector 321 may be shown as being associated with field of view 324 that may include a distance range closest to the emitter 302. The second detector 322 may be shown as being associated with field of view 325 that may include a distance range beyond the distance range covered by the field of view 324 of the first detector 321. Finally, the third detector 323 may be shown as being associated with field of view 326 that may include a distance range beyond the distance range covered by the field of view 325 of the second detector 322. Additionally, for exemplification purposes, the combination of the field of view 324, field of view 325, and field of view 326 may cover the same maximum detection distance 305 from the emitter 302.
[0036] Continuing with FIG. 3, scene 320 may begin, similar to scene 301, with the emitter 302 emitting a first light pulse 306 into the environment. Again, the first light pulse 306 is shown as traversing the environment and eventually moving past the maximum detection distance 305 of the first detector 321, the second detector 322, and the third detector 323. That is, the first light pulse 306 may not have reflected from an object in the environment and been detected by the first detector 321, the second detector 322, or the third detector 323. Also similar to scene 310, in scene 340 the emitter 302 may then emit a second light pulse 307. However, the difference between scene 310 with the single detector 303 and the scene 340 with the multiple detectors may be that the multiple detectors may be used to provide more data about detected return light than the single detector 303 may be able to. For example, as described herein, individual detectors may be able to be selectively turned on and/or off as the light pulse traverses the environment. Thus, the first detector 321, the second detector 322, and the third detector 323 may be selectively turned on and then off as the first light pulse 306 traverses further away from the emitter 302. To mitigate range aliasing, instead of turning off the third detector 323 when it is determined that any return light originating from the first light pulse would be beyond the field of view 326 of the third detector 323, the third detector 323 may be kept on even as the first light pulse 306 travels beyond the maximum detection range 305 of the detectors.
[0037] Still continuing with FIG. 3, scene 340 shows the same second light pulse 307 being emitted from the emitter 302 before the first light pulse 306 reflects from an object and is detected by one of the detectors. However, the difference here is that the LIDAR system now has two separate detectors monitoring two different distances from the emitter 302. That is, now both the first detector 321 with a known field of view 324 closer to the emitter 302 and the third detector 323 with a known field of view 326 that is further away from the emitter 302 are turned on. Thus, as shown in scene 340, if the first light pulse 306 then reflects from an object in the environment (for example, the tree 308) and returns back into the field of view 326 of the third detector 323 as return light 309, then the third detector 323 may produce an output indicating that it detected return light at the same first time described in scenes 301 and 310. In the example depicted in scene 340, this first time may correspond to a time at which return light originating from the second light pulse 307 may be within the field of view 324 of the first detector 321 (thus, the first detector 321 is shown as being on). With this being the case, the system is then able to discern that the return light detected by the third detector 323 is associated with the first light pulse 306 instead of the second light pulse 307. Thus, with this multi-detector configuration, a LIDAR system may be able to track multiple light pulses traversing the environment simultaneously with reduced concern that range aliasing will cause difficulty in discerning between returns from the multiple emitted light pulses. In this manner, the shot rate of the LIDAR system may be doubled, or increased even further.
[0038] Continuing with FIG. 3, it should be noted that the use case 300 (specifically scenes 320 and 340 of use case 300) may depict only one example of how a multi-detector system may be used to mitigate or eliminate range aliasing concerns. As another example of how range aliasing concerns may be mitigated and/or eliminated by the use of multiple photodetectors, if the photodetectors are controlled to be turned on and/or off based on predetermined time intervals as described above, then return light that reflects from an object beyond the maximum detection range of the LIDAR system may never be detected (which may eliminate the possibility for range aliasing). The reason for this may be exemplified as follows. In this example, a first light pulse is emitted. The photodetectors then proceed through their sequence of turning on and/or off as described above until the final photodetector (the photodetector with the longest distance field of view from the LIDAR system) is turned off (when any expected return light would originate from beyond the maximum detection range). Then a second light pulse is emitted from the LIDAR system. If there were only one photodetector that was always turned on, then return light from the first light pulse may then return and be detected by the photodetector. However, if there are multiple photodetectors and each is only turned on for a given time interval, then it is more likely that return light that is detected by a given photodetector would have originated from the second light pulse. That is, unless return light based on the first light pulse were to reach a photodetector’s field of view during the time interval during which that particular photodetector is turned on. Even if this scenario does take place (the return light from the first light pulse reaches a photodetector’s field of view while that photodetector is turned on), the use of the multiple photodetectors may still allow for a determination to be made that the detected return light could potentially originate from the first light pulse. That is, if the return light from the first pulse returns and is detected by one of the turned-on photodetectors, but the second light pulse has still not reflected from an object back towards a photodetector, then return light from the second light pulse may be detected by a subsequent photodetector that is turned on. This means that return light from both pulses may be received, and the LIDAR system may thus be able to determine that at least one of the return light detections was based on range aliasing. If this is the case, the LIDAR system may simply disregard both of these two detected returns.
EXAMPLE SYSTEM ARCHITECTURE
[0039] FIGS. 4A-4B depict example circuit configurations, in accordance with one or more example embodiments of the disclosure. The circuit configurations depicted in FIGs. 4A-4B may represent a back-end circuit connected to the outputs of the detectors. This back-end circuit may be used, for example, to pre-process the outputs of the detectors for any signal processing components of the LIDAR system (for example, systems that may make computing determinations based on the data received from the detectors). In some embodiments, FIG. 4A depicts a first circuit configuration 400. In the first circuit configuration, individual detectors (for example, detector 403, detector 404, and/or detector 405) may be associated with their own individual analog to digital converters (ADCs) (for example, detector 403 may be associated with analog to digital converter 406, detector 404 may be associated with analog to digital converter 407, and detector 405 may be associated with analog to digital converter 408). An ADC may be used to convert an analog signal to a digital signal. That is, the ADC may be capable of receiving the analog output of a detector as an input and converting that analog signal into a digital signal. This digital signal may then be used by one or more signal processing components 410 of the LIDAR system. As a more specific example, a detector may be configured to receive one or more photons as an input and provide current as an output. This current output may be in analog form, and the ADC may convert the analog current into a digital current value for use by the one or more signal processing components 410.
[0040] In some embodiments, FIG. 4B depicts a second circuit configuration 420. While the first circuit configuration 400 shown in FIG. 4A included multiple ADCs (for example, one ADC for each detector), the second circuit configuration 420 may only include one ADC for all of the detectors (or alternatively may include more than one ADC, but with multiple detectors sharing a single ADC instead of each individual detector being associated with its own ADC). That is, the second circuit configuration 420 may include the outputs of the detectors being provided as inputs to a single ADC. The ADC may then provide a digital output to the one or more signal processing components 410 as was the case in the first circuit configuration 400. However, the second circuit configuration 420 may also include one or more additional components between the detectors and the ADC. For example, the second circuit configuration 420 may include a summer subcircuit 422. The summer subcircuit 422 may receive the outputs from the one or more detectors and combine them into a single output. One or more attenuators (for example, one attenuator for each detector output) may also be provided prior to the summer. An attenuator may be used to attenuate the output of a detector, which may be useful in a number of scenarios. For example, if all of the detectors are turned on at all times, the attenuators may be used to attenuate outputs of certain detectors at certain times. For example, the attenuation may be performed to attenuate the outputs of all of the detectors except one detector. For example, the one detector may correspond to a field of view in which it is expected return light from an emitted light pulse would currently be located. The attenuators may thus serve to reduce the amount of detector output noise that is provided to the ADC and signal processing components 410. The attenuators may also be used in other ways as well. For example, all of the outputs of the detectors may be left unattenuated unless it is determined that it is desired to block the output of one or more detectors.
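As a non-limiting illustration, the following NumPy sketch models the attenuate-then-sum path of the second circuit configuration 420. The attenuation factor and signal values are assumed; the point is that attenuating every output except that of the detector whose field of view should contain the return light suppresses the other detectors' noise before the shared ADC.

```python
# Minimal sketch of per-detector attenuators feeding a summer whose
# single combined output would drive the shared ADC.
import numpy as np

def summed_adc_input(detector_outputs, active_index, attenuation=0.01):
    gains = np.full(len(detector_outputs), attenuation)
    gains[active_index] = 1.0  # leave the expected detector unattenuated
    return float(np.dot(gains, detector_outputs))

outputs = np.array([0.03, 0.41, 0.02])  # analog outputs of three detectors
print(summed_adc_input(outputs, active_index=1))  # ~0.4105
```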
ILLUSTRATIVE METHODS
[0041] FIGs. 5A-5B illustrate example methods 500A and 500B in accordance with one or more example embodiments of the disclosure.
[0042] At block 502a of the method 500A in FIG. 5A, the method may include emitting, by a light emitter of a LIDAR system, a first light pulse. Block 504a of the method 500A may include activating a first light detector of the LIDAR system at a first time, the first time corresponding to a time when return light corresponding to the first light pulse would be within a first field of view of the first light detector. Block 506a of the method 500A may include activating a second light detector of the LIDAR system at a second time, the second time corresponding to a time when return light corresponding to the first light pulse would be within a second field of view of the second light detector, wherein the first light detector is configured to include the first field of view, the first field of view being associated with a first range from the light emitter, and wherein the second light detector is configured to include the second field of view, the second field of view being associated with a second range from the light emitter.
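By way of non-limiting illustration, the following Python sketch walks through the ordering of the method 500A using stub emitter and detector objects. All class and method names are hypothetical, and the busy-wait merely stands in for the dedicated timing hardware a real system would use at nanosecond scales.

```python
# Minimal sketch of method 500A: emit, then activate each detector when
# return light from its distance range could first arrive.
import time

C = 299_792_458.0  # speed of light, m/s

class StubEmitter:
    def emit_pulse(self):
        print("pulse emitted")

class StubDetector:
    def __init__(self, name):
        self.name = name
    def activate(self):
        print(f"{self.name} activated")

def run_method_500a(emitter, detectors_with_ranges):
    """detectors_with_ranges: (detector, d_near_m) pairs, nearest first."""
    t0 = time.perf_counter()
    emitter.emit_pulse()  # block 502a
    for detector, d_near in detectors_with_ranges:
        t_on = 2 * d_near / C  # earliest arrival from this field of view
        while time.perf_counter() - t0 < t_on:  # illustrative wait only
            pass
        detector.activate()  # blocks 504a, 506a

run_method_500a(StubEmitter(), [(StubDetector("first detector"), 0.0),
                                (StubDetector("second detector"), 50.0)])
```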
[0043] At block 502b of the method 500B in FIG. 5B, the method may include emitting, by a light emitter, a first light pulse at a first time. Block 504b of the method 500B may include activating a first light detector and a second light detector, wherein a field of view of the first light detector includes a range closer to the light emitter than the field of view of the second light detector. Block 506b of the method 500B may include emitting, by the light emitter, a second light pulse at a second time. Block 508b of the method 500B may include receiving return light by the second light detector at a third time. Block 510b of the method 500B may include determining, based on the return light being detected by the second light detector, that the return light is based on the first light pulse.
[0044] In some embodiments, the photodetectors may be selectively “turned on” and/or “turned off” (which may similarly be referred to as “activating” or “deactivating” a photodetector). “Turning on” a photodetector may refer to providing a bias voltage to the photodetector that satisfies a threshold voltage level. The bias voltage satisfying the threshold voltage level (for example, being at or above the threshold voltage level) may provide sufficient voltage to the photodetector to allow it to produce a level of output current based on light received by the photodetector. The threshold voltage level being used and the corresponding output current being produced may depend on the type of photodetector being used and the desired mode of operation of the photodetector. For example, if the photodetector is an APD, the threshold voltage level may be set high enough so that the photodetector is capable of avalanching upon receipt of light as described above. Similarly, if it is desired for the APD to operate in Geiger Mode, the threshold voltage level may be set even higher than if the APD were desired to operate outside of the Geiger Mode region of operation. That is, the gain of the photodetector when this higher threshold voltage level is applied may be much larger than if the photodetector were operating as a normal Avalanche Photodiode. Additionally, if the photodetector is not an APD, and operates in a linear mode of operation in which the output current produced based on a similar amount of detected light may be much lower than if the photodetector were an APD, then the threshold bias voltage may be lower. Furthermore, even if the photodetector is an APD, the threshold voltage level may be set below a threshold voltage level used to allow the APD to avalanche upon receipt of light. That is, the bias voltage applied to the APD may be set low enough to allow the APD to still produce an output current, but only in a linear mode of operation.
[0045] Likewise, in some embodiments, “turning off” a photodetector may refer to reducing the bias voltage provided to the photodetector to below the threshold voltage level. In some instances, “turning off” the photodetector may not necessarily mean that the photodetector is not able to detect return light. That is, the photodetector may still be able to detect return light while the bias voltage is below the threshold voltage level, but the output signal produced by the photodetector may be below a noise floor established for a signal processing portion of the LIDAR system. As one non-limiting example, a photodetector may be an Avalanche Photodiode. If a sufficiently high bias voltage is provided to allow the APD to avalanche upon receipt of light, then a large current output may be produced by the APD. However, if a lower bias voltage is applied, then the APD may still produce an output, but the output may be based on a linear mode of operation and the resulting output current may be much lower than if the APD were to avalanche upon receipt of a same number of photons. The signal processing portion of the LIDAR system may have a noise floor configured to correspond to an output of the APD in linear mode, so that any outputs from the APD when operating with this reduced bias voltage may effectively be disregarded by the LIDAR system. Thus, selectively turning on and/or turning off the photodetectors may entail only having some of the photodetectors capable of detecting return light at a given time.
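By way of non-limiting illustration, the following Python sketch crudely models this biasing scheme. The threshold, bias values, and gain numbers are assumed examples; the point is only that the sub-threshold (linear mode) output falls under the signal chain's noise floor and is effectively disregarded.

```python
# Minimal sketch of turning a photodetector on/off via its bias voltage.
APD_AVALANCHE_THRESHOLD_V = 25.0  # assumed example threshold

def bias_for_state(turn_on):
    if turn_on:
        return APD_AVALANCHE_THRESHOLD_V + 2.0  # avalanche-capable bias
    return APD_AVALANCHE_THRESHOLD_V - 10.0     # linear-mode bias

def output_visible(bias_v, noise_floor_gain=1.0):
    # Crude model: large gain above threshold, small linear gain below.
    gain = 100.0 if bias_v >= APD_AVALANCHE_THRESHOLD_V else 0.5
    return gain > noise_floor_gain

print(output_visible(bias_for_state(turn_on=True)))   # True: detectable
print(output_visible(bias_for_state(turn_on=False)))  # False: under floor
```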
[0046] The operations described and depicted in the illustrative process flows of FIGs. 5A-5B may be carried out or performed in any suitable order as desired in various example embodiments of the disclosure. Additionally, in certain example embodiments, at least a portion of the operations may be carried out in parallel. Furthermore, in certain example embodiments, fewer, more, or different operations than those depicted in FIGs. 5A-5B may be performed.
EXAMPLE LIDAR SYSTEM CONFIGURATION
[0047] FIG. 6 illustrates an example LIDAR system 600, in accordance with one or more embodiments of this disclosure. The LIDAR system 600 may be representative of any number of elements described herein, such as the LIDAR system 101 described with respect to FIG. 1, as well as any other LIDAR systems described herein. The LIDAR system 600 may include at least an emitter portion 601, a detector portion 605, and a computing portion 613.
[0048] In some embodiments, the emitter portion 601 may include at least one or more emitter(s) 602 (for simplicity, reference may be made hereinafter to “an emitter,” but multiple emitters could be equally as applicable) and/or one or more optical element(s) 604. An emitter 602 may be a device that is capable of emitting light into the environment. Once the light is in the environment, it may travel towards an object 612. The light may then reflect from the object and return towards the LIDAR system 600 and be detected by the detector portion 605 of the LIDAR system 600 as may be described below. For example, the emitter 602 may be a laser diode as described above. The emitter 602 may be capable of emitting light in a continuous waveform or as a series of pulses. An optical element 604 may be an element that may be used to alter the light emitted from the emitter 602 before it enters the environment. For example, the optical element 604 may be a lens, a collimator, or a waveplate. In some instances, the lens may be used to focus the emitted light. The collimator may be used to collimate the emitted light. That is, the collimator may be used to reduce the divergence of the emitted light. The waveplate may be used to alter the polarization state of the emitted light. Any number or combination of different types of optical elements 604, including optical elements not listed herein, may be used in the LIDAR system 600.
[0049] In some embodiments, the detector portion 605 may include at least one or more detector(s) 606 (for simplicity, reference may be made hereinafter to “a detector,” but multiple detectors could be equally as applicable) and/or one or more optical elements 608. The detector may be a device that is capable of detecting return light from the environment (for example, light that has been emitted by the LIDAR system 600 and reflected by an object 612). For example, the detectors may be photodiodes. The photodiodes may specifically include Avalanche Photodiodes (APDs), which in some instances may operate in Geiger Mode. However, any other type of detector may be used as well. Generally, the detectors of the array may take various forms. For example, the detectors may take the form of photodiodes, avalanche photodiodes (e.g., Geiger mode and/or linear mode avalanche photodiodes), phototransistors, cameras, active pixel sensors (APS), charge coupled devices (CCD), cryogenic detectors, and/or any other sensor of light configured to receive focused light having wavelengths in the wavelength range of the emitted light. The functionality of the detector 606 in capturing return light from the environment may serve to allow the LIDAR system 600 to ascertain information about the object 612 in the environment. That is, the LIDAR system 600 may be able to determine information such as the distance of the object from the LIDAR system 600 and the shape and/or size of the object 612, among other information. The optical element 608 may be an element that is used to alter the return light traveling towards the detector 606. For example, the optical element 608 may be a lens, a waveplate, or a filter such as a bandpass filter. In some instances, the lens may be used to focus return light on the detector 606. The waveplate may be used to alter the polarization state of the return light. The filter may be used to only allow certain wavelengths of light to reach the detector (for example, a wavelength of light emitted by the emitter 602). Any number or combination of different types of optical elements 608, including optical elements not listed herein, may be used in the LIDAR system 600.
[0050] In some embodiments, the computing portion may include one or more processor(s) 614 and memory 616. The processor 614 may execute instructions that are stored in one or more memory devices (referred to as memory 616). The instructions can be, for instance, instructions for implementing functionality described as being carried out by one or more modules and systems disclosed above or instructions for implementing one or more of the methods disclosed above. The processor(s) 614 can be embodied in, for example, a CPU, multiple CPUs, a GPU, multiple GPUs, a TPU, multiple TPUs, a multi-core processor, a combination thereof, and the like. In some embodiments, the processor(s) 614 can be arranged in a single processing device. In other embodiments, the processor(s) 614 can be distributed across two or more processing devices (for example, multiple CPUs; multiple GPUs; a combination thereof; or the like). A processor can be implemented as a combination of processing circuitry or computing processing units (such as CPUs, GPUs, or a combination of both). Therefore, for the sake of illustration, a processor can refer to a single-core processor; a single processor with software multithread execution capability; a multi-core processor; a multi-core processor with software multithread execution capability; a multi-core processor with hardware multithread technology; a parallel processing (or computing) platform; and parallel computing platforms with distributed shared memory. Additionally, or as another example, a processor can refer to an integrated circuit (IC), an ASIC, a digital signal processor (DSP), an FPGA, a PLC, a complex programmable logic device (CPLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed or otherwise configured (for example, manufactured) to perform the functions described herein.
[0051] The processor(s) 614 can access the memory 616 by means of a communication architecture (for example, a system bus). The communication architecture may be suitable for the particular arrangement (localized or distributed) and type of the processor(s) 614. In some embodiments, the communication architecture 606 can include one or many bus architectures, such as a memory bus or a memory controller; a peripheral bus; an accelerated graphics port; a processor or local bus; a combination thereof; or the like. As an illustration, such architectures can include an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, an Accelerated Graphics Port (AGP) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, a Personal Computer Memory Card International Association (PCMCIA) bus, a Universal Serial Bus (USB), and the like.
[0052] Memory components or memory devices disclosed herein can be embodied in either volatile memory or non-volatile memory or can include both volatile and non-volatile memory. In addition, the memory components or memory devices can be removable or non-removable, and/or internal or external to a computing device or component. Examples of various types of non-transitory storage media can include hard disk drives, zip drives, CD-ROMs, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, flash memory cards or other types of memory cards, cartridges, or any other non-transitory media suitable to retain the desired information and which can be accessed by a computing device.
[0053] As an illustration, non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The disclosed memory devices or memories of the operational or computational environments described herein are intended to include one or more of these and/or any other suitable types of memory. In addition to storing executable instructions, the memory 616 also can retain data.
[0054] Each computing device 600 also can include mass storage 617 that is accessible by the processor(s) 614 by means of the communication architecture 606. The mass storage 617 can include machine-accessible instructions (for example, computer-readable instructions and/or computer-executable instructions). In some embodiments, the machine-accessible instructions may be encoded in the mass storage 617 and can be arranged in components that can be built (for example, linked and compiled) and retained in computer-executable form in the mass storage 617 or in one or more other machine-accessible non-transitory storage media included in the computing device 600. Such components can embody, or can constitute, one or many of the various modules disclosed herein. Such modules are illustrated as multi-detector control modules 620.
[0055] The multi-detector control modules 620 may include computer-executable instructions, code, or the like that, responsive to execution by one or more of the processor(s) 614, may perform functions including controlling the one or more detectors as described herein, for example, turning on and/or turning off any of the detectors as described herein. Additionally, the functions may include execution of any other methods and/or processes described herein.
[0056] It should further be appreciated that the LIDAR system 600 may include alternate and/or additional hardware, software, or firmware components beyond those described or depicted without departing from the scope of the disclosure. More particularly, it should be appreciated that software, firmware, or hardware components depicted as forming part of the computing device 600 are merely illustrative and that some components may not be present or additional components may be provided in various embodiments. While various illustrative program modules have been depicted and described as software modules stored in data storage, it should be appreciated that functionality described as being supported by the program modules may be enabled by any combination of hardware, software, and/or firmware. It should further be appreciated that each of the above-mentioned modules may, in various embodiments, represent a logical partitioning of supported functionality. This logical partitioning is depicted for ease of explanation of the functionality and may not be representative of the structure of software, hardware, and/or firmware for implementing the functionality. Accordingly, it should be appreciated that functionality described as being provided by a particular module may, in various embodiments, be provided at least in part by one or more other modules. Further, one or more depicted modules may not be present in certain embodiments, while in other embodiments, additional modules not depicted may be present and may support at least a portion of the described functionality and/or additional functionality. Moreover, while certain modules may be depicted and described as sub-modules of another module, in certain embodiments, such modules may be provided as independent modules or as submodules of other modules.
[0057] Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure.
[0058] Certain aspects of the disclosure are described above with reference to block and flow diagrams of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and the flow diagrams, respectively, may be implemented by execution of computer-executable program instructions. Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all, according to some embodiments. Further, additional components and/or operations beyond those depicted in blocks of the block and/or flow diagrams may be present in certain embodiments.

[0059] Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions.
[0060] What has been described herein in the present specification and annexed drawings includes examples of systems, devices, techniques, and computer program products that, individually and in combination, permit mitigating range aliasing in multi-detector lidar systems. It is, of course, not possible to describe every conceivable combination of components and/or methods for purposes of describing the various elements of the disclosure, but it can be recognized that many further combinations and permutations of the disclosed elements are possible. Accordingly, it may be apparent that various modifications can be made to the disclosure without departing from the scope or spirit thereof. In addition, or as an alternative, other embodiments of the disclosure may be apparent from consideration of the specification and annexed drawings, and from practice of the disclosure as presented herein. It is intended that the examples put forth in the specification and annexed drawings be considered, in all respects, as illustrative and not limiting. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
[0061] As used in this application, the terms “environment,” “system,” “unit,” “module,” “architecture,” “interface,” “component,” and the like refer to a computer-related entity or an entity related to an operational apparatus with one or more defined functionalities. The terms “environment,” “system,” “module,” “component,” “architecture,” “interface,” and “unit” can be utilized interchangeably and can be generically referred to as functional elements. Such entities may be either hardware, a combination of hardware and software, software, or software in execution. As an example, a module can be embodied in a process running on a processor, a processor, an object, an executable portion of software, a thread of execution, a program, and/or a computing device. As another example, both a software application executing on a computing device and the computing device can embody a module. As yet another example, one or more modules may reside within a process and/or thread of execution. A module may be localized on one computing device or distributed between two or more computing devices. As is disclosed herein, a module can execute from various computer-readable non-transitory storage media having various data structures stored thereon. Modules can communicate via local and/or remote processes in accordance, for example, with a signal (either analog or digital) having one or more data packets (for example data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal).
[0062] As yet another example, a module can be embodied in or can include an apparatus with a defined functionality provided by mechanical parts operated by electric or electronic circuitry that is controlled by a software application or firmware application executed by a processor. Such a processor can be internal or external to the apparatus and can execute at least part of the software or firmware application. In still another example, a module can be embodied in or can include an apparatus that provides defined functionality through electronic components without mechanical parts. The electronic components can include a processor to execute software or firmware that permits or otherwise facilitates, at least in part, the functionality of the electronic components.
[0063] In some embodiments, modules can communicate via local and/or remote processes in accordance, for example, with a signal (either analog or digital) having one or more data packets (for example data from one component interacting with another component in a local system, distributed system, and/or across a network such as a wide area network with other systems via the signal). In addition, or in other embodiments, modules can communicate or otherwise be coupled via thermal, mechanical, electrical, and/or electromechanical coupling mechanisms (such as conduits, connectors, combinations thereof, or the like). An interface can include input/output (I/O) components as well as associated processors, applications, and/or other programming components.
[0064] Further, in the present specification and annexed drawings, terms such as “store,” “storage,” “data store,” “data storage,” “memory,” “repository,” and substantially any other information storage component relevant to the operation and functionality of a component of the disclosure refer to memory components, entities embodied in one or several memory devices, or components forming a memory device. It is noted that the memory components or memory devices described herein embody or include non-transitory computer storage media that can be readable or otherwise accessible by a computing device. Such media can be implemented in any methods or technology for storage of information, such as machine-accessible instructions (for example computer-readable instructions), information structures, program modules, or other information objects.
[0065] Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain implementations could include, while other implementations do not include, certain features, elements, and/or operations. Thus, such conditional language generally is not intended to imply that features, elements, and/or operations are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or operations are included or are to be performed in any particular implementation.

Claims

THAT WHICH IS CLAIMED IS:

1. A method comprising:
emitting, by a light emitter, a first light pulse at a first time;
activating a first light detector and a second light detector, wherein a field of view of the first light detector includes a range closer to the light emitter than the field of view of the second light detector;
emitting, by the light emitter, a second light pulse at a second time;
receiving return light by the second light detector at a third time; and
determining, based on the return light being detected by the second light detector, that the return light is based on the first light pulse.

2. The method of claim 1, wherein activating the first light detector further comprises providing a first bias voltage to the first light detector, and wherein activating the second light detector further comprises providing a second bias voltage to the second light detector.

3. The method of claim 2, wherein the first bias voltage is provided to the first light detector at a second time and the second bias voltage is provided to the second light detector at a third time, the second time corresponding to a time at which return light based on the first light pulse would be within the field of view of the first light detector, and the third time corresponding to a time at which return light based on the first light pulse would be within the field of view of the second light detector.

4. The method of claim 1, wherein the first light pulse and second light pulse are simultaneously traversing an environment for a period of time.

5. The method of claim 1, wherein determining that the return light is based on the first light pulse is further based on the first light detector being active at the third time.

6. The method of claim 1, further comprising:
determining that second return light is detected by the first light detector at the third time; and
determining, based on the second light detector detecting return light at the third time, that the second return light detected by the first light detector is based on the second light pulse.

7. The method of claim 1, wherein the return light is detected by the second light detector prior to a detection of return light based on the second light pulse.

8. A non-transitory computer readable medium including computer-executable instructions stored thereon, which when executed by one or more processors of a LIDAR system, cause the one or more processors to perform operations of:
causing to emit, by a light emitter, a first light pulse at a first time;
causing to activate a first light detector and a second light detector, wherein a field of view of the first light detector includes a range closer to the light emitter than the field of view of the second light detector;
causing to emit, by the light emitter, a second light pulse at a second time;
determining that return light is detected by the second light detector at a third time; and
determining, based on the return light being detected by the second light detector, that the return light is based on the first light pulse.

9. The non-transitory computer readable medium of claim 8, wherein causing to activate the first light detector further comprises causing to provide a first bias voltage to the first light detector, and wherein causing to activate the second light detector further comprises causing to provide a second bias voltage to the second light detector.
10. The non-transitory computer readable medium of claim 9, wherein the first bias voltage is provided to the first light detector at a second time and the second bias voltage is provided to the second light detector at a third time, the second time corresponding to a time at which return light based on the first light pulse would be within the field of view of the first light detector, and the third time corresponding to a time at which return light based on the first light pulse would be within the field of view of the second light detector.

11. The non-transitory computer readable medium of claim 8, wherein the first light pulse and second light pulse are simultaneously traversing an environment for a period of time.

12. The non-transitory computer readable medium of claim 8, wherein determining that the return light is based on the first light pulse is further based on a determination that the first light detector is active at the third time.

13. The non-transitory computer readable medium of claim 8, wherein the computer-executable instructions further cause the one or more processors to perform operations of:
determining that second return light is detected by the first light detector at the third time; and
determining, based on the second light detector detecting return light at the third time, that the second return light detected by the first light detector is based on the second light pulse.

14. The non-transitory computer readable medium of claim 8, wherein the return light is detected by the second light detector prior to a detection of return light based on the second light pulse.

15. A system comprising:
a light emitter configured to emit a first light pulse;
a first light detector configured to point in a first direction and having a first field of view, the first field of view associated with a first range from the light emitter;
a second light detector configured to point in a second direction and having a second field of view, the second field of view associated with a second range from the light emitter;
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, cause the processor to perform operations of:
causing to emit, by the light emitter, the first light pulse at a first time;
causing to activate the first light detector and the second light detector, wherein a field of view of the first light detector includes a range closer to the light emitter than the field of view of the second light detector;
causing to emit, by the light emitter, a second light pulse at a second time;
determining that return light is detected by the second light detector at a third time; and
determining, based on the return light being detected by the second light detector, that the return light is based on the first light pulse.

16. The system of claim 15, wherein causing to activate the first light detector further comprises causing to provide a first bias voltage to the first light detector, and wherein causing to activate the second light detector further comprises causing to provide a second bias voltage to the second light detector.
17. The system of claim 16, wherein the first bias voltage is provided to the first light detector at a second time and the second bias voltage is provided to the second light detector at a third time, the second time corresponding to a time at which return light based on the first light pulse would be within the field of view of the first light detector, and the third time corresponding to a time at which return light based on the first light pulse would be within the field of view of the second light detector.

18. The system of claim 15, wherein determining that the return light is based on the first light pulse is further based on a determination that the first light detector is active at the third time.

19. The system of claim 15, wherein the computer-executable instructions further cause the processor to perform operations of:
determining that second return light is detected by the first light detector at the third time; and
determining, based on the second light detector detecting return light at the third time, that the second return light detected by the first light detector is based on the second light pulse.

20. The system of claim 15, wherein the return light is detected by the second light detector prior to a detection of return light based on the second light pulse.

21. A LIDAR system comprising:
a light emitter configured to emit a first light pulse;
a first light detector having a first field of view, the first field of view associated with a first range from the light emitter;
a second light detector having a second field of view, the second field of view associated with a second range from the light emitter;
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, cause the processor to:
cause the light emitter to emit the first light pulse;
activate the first light detector at a first time, the first time corresponding to a time when return light corresponding to the first light pulse would be within the first field of view; and
activate the second light detector at a second time, the second time corresponding to a time when return light corresponding to the first light pulse would be within the second field of view.

22. The system of claim 21, wherein to activate the first light detector further comprises to provide a first bias voltage to the first light detector, and wherein to activate the second light detector further comprises to provide a second bias voltage to the second light detector.

23. The system of claim 22, wherein the first bias voltage and second bias voltage are the same voltage level.

24. The system of claim 22, wherein the computer-executable instructions further cause the processor to:
provide a third bias voltage to the first light detector at the second time, the third bias voltage being lower than the first bias voltage.

25. The system of claim 24, wherein the first light detector is an Avalanche Photodiode (APD), wherein the first light detector is configured to operate in a Geiger Mode at the first bias voltage, and wherein the first light detector is configured to operate in a linear mode at a first time and be inoperable at the third bias voltage at a second time.

26. The system of claim 21, wherein the computer-executable instructions further cause the processor to send an instruction to activate the first light detector based on a Gaussian function.
27. The system of claim 21, further comprising a third light detector, wherein the first light detector, second light detector, and third light detector are separated by a spacing that is logarithmic.

28. A method comprising:
emitting, by a light emitter of a LIDAR system, a first light pulse;
activating a first light detector of the LIDAR system at a first time, the first time corresponding to a time when return light corresponding to the first light pulse would be within a first field of view of the first light detector; and
activating a second light detector of the LIDAR system at a second time, the second time corresponding to a time when return light corresponding to the first light pulse would be within a second field of view of the second light detector,
wherein the first light detector is configured to include the first field of view, the first field of view being associated with a first range from the light emitter, and wherein the second light detector is configured to include the second field of view, the second field of view being associated with a second range from the light emitter.

29. The method of claim 28, wherein activating the first light detector further comprises providing a first bias voltage to the first light detector, and wherein activating the second light detector further comprises providing a second bias voltage to the second light detector.

30. The method of claim 29, wherein the first bias voltage and second bias voltage are the same voltage level.

31. The method of claim 29, further comprising:
providing a third bias voltage to the first light detector at the second time, the third bias voltage being lower than the first bias voltage.

32. The method of claim 31, wherein the first light detector is an Avalanche Photodiode (APD), wherein the first light detector is configured to operate in a Geiger Mode at the first bias voltage, and wherein the first light detector is configured to operate in a linear mode at a first time and be inoperable at the third bias voltage at a second time.

33. The method of claim 28, further comprising activating the first light detector based on a Gaussian function.

34. The method of claim 28, wherein the LIDAR system further comprises a third light detector, and wherein the first light detector, second light detector, and third light detector are separated by a spacing that is logarithmic.

35. A LIDAR system comprising:
a light emitter;
a first light detector having a first field of view, the first field of view including a first range from the light emitter;
a second light detector having a second field of view, the second field of view including a second range from the light emitter;
a processor; and
a memory storing computer-executable instructions that, when executed by the processor, cause the processor to:
monitor, at a first time, an output of the first light detector, the first time corresponding to a time when return light corresponding to a first light pulse would be within the first field of view of the first light detector; and
monitor, at a second time, an output of the second light detector, the second time corresponding to a time when return light corresponding to the first light pulse would be within the second field of view of the second light detector.

36. The system of claim 35, wherein the first light detector and second light detector are continuously active.
37. The system of claim 35, wherein the computer-executable instructions further cause the processor to:
attenuate the output of the second light detector at the first time; and
attenuate the output of the first light detector at the second time.

38. The system of claim 35, wherein the first field of view and second field of view together comprise a total field of view of the system.

39. The system of claim 35, wherein the first light detector and second light detector are Avalanche Photodiodes (APDs) operating in Geiger Mode.

40. The system of claim 35, further comprising a third light detector, wherein the first light detector, second light detector, and third light detector are separated by a spacing that is logarithmic.
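To make the disambiguation recited in claims 1 and 8 concrete, the sketch below attributes a detection on the far-range detector to whichever emitted pulse has a round-trip time that falls inside that detector's range band; a return arriving at the far-band detector shortly after a second pulse is emitted can then still be assigned to the first pulse. All function names, range bands, and timings are illustrative assumptions, not language from the claims.

```python
# Minimal sketch of range-band disambiguation across successive pulses.
# Names, bands, and timings are illustrative assumptions.
C = 299_792_458.0  # speed of light in m/s

def arrival_window(emit_time, r_near, r_far):
    """Interval of arrival times for returns from the band [r_near, r_far]."""
    return (emit_time + 2.0 * r_near / C, emit_time + 2.0 * r_far / C)

def attribute_return(detect_time, emit_times, band):
    """Match a detection on a detector covering `band` to the emitted pulse
    whose round trip to that band contains `detect_time`."""
    r_near, r_far = band
    for pulse_index, emit_time in enumerate(emit_times):
        lo, hi = arrival_window(emit_time, r_near, r_far)
        if lo <= detect_time <= hi:
            return pulse_index
    return None  # no emitted pulse explains this detection within the band

# Two pulses 1 microsecond apart; the far detector watches 200-400 m.
# A detection 2.0 microseconds after the first pulse (about a 300 m round
# trip) can only be explained by the first pulse, so aliasing is avoided.
pulse_times = [0.0, 1.0e-6]
far_band = (200.0, 400.0)
print(attribute_return(2.0e-6, pulse_times, far_band))  # prints 0
```

For the logarithmic detector spacing recited in claims 27, 34, and 40, the band edges could, under the same assumptions, be generated as r_k = r_0 * g**k for some growth factor g greater than 1, although the claims do not fix any particular formula.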
EP21881130.5A 2020-10-14 2021-10-14 Multi-detector lidar systems and methods for mitigating range aliasing Pending EP4229438A4 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US17/070,414 US20220113405A1 (en) 2020-10-14 2020-10-14 Multi-Detector Lidar Systems and Methods
US17/070,765 US11822018B2 (en) 2020-10-14 2020-10-14 Multi-detector LiDAR systems and methods for mitigating range aliasing
PCT/US2021/055083 WO2022081910A1 (en) 2020-10-14 2021-10-14 Multi-detector lidar systems and methods for mitigating range aliasing

Publications (2)

Publication Number Publication Date
EP4229438A1 2023-08-23
EP4229438A4 EP4229438A4 (en) 2024-10-23

Family

ID=81208641

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21881130.5A Pending EP4229438A4 (en) 2020-10-14 2021-10-14 Multi-detector lidar systems and methods for mitigating range aliasing

Country Status (3)

Country Link
EP (1) EP4229438A4 (en)
KR (1) KR20230085159A (en)
WO (1) WO2022081910A1 (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160205378A1 (en) * 2015-01-08 2016-07-14 Amir Nevet Multimode depth imaging
US9529079B1 (en) * 2015-03-26 2016-12-27 Google Inc. Multiplexed multichannel photodetector
EP3198350B1 (en) * 2015-03-31 2021-11-10 SZ DJI Technology Co., Ltd. System and method for mobile platform operation
JP6729864B2 (en) * 2015-11-02 2020-07-29 株式会社デンソーテン Radar device, signal processing device of radar device, and signal processing method
US10379540B2 (en) * 2016-10-17 2019-08-13 Waymo Llc Light detection and ranging (LIDAR) device having multiple receivers
US10754033B2 (en) * 2017-06-30 2020-08-25 Waymo Llc Light detection and ranging (LIDAR) device range aliasing resilience by multiple hypotheses
US10802122B1 (en) * 2020-01-15 2020-10-13 Ike Robotics, Inc. Methods and systems for calibration of multiple lidar devices with non-overlapping fields of view

Also Published As

Publication number Publication date
KR20230085159A (en) 2023-06-13
WO2022081910A1 (en) 2022-04-21
EP4229438A4 (en) 2024-10-23

Similar Documents

Publication Publication Date Title
US10605922B2 (en) High resolution, high frame rate, low power image sensor
EP3722832B1 (en) Laser radar system
CN111356934B (en) Noise adaptive solid state LIDAR system
US11573304B2 (en) LiDAR device with a dynamic spatial filter
US20230341531A1 (en) Systems and methods for intra-shot dynamic adjustment of lidar detector gain
US20240295640A1 (en) Systems and methods for pre-blinding lidar detectors
US11520050B2 (en) Three-dimensional image element and optical radar device comprising an optical conversion unit to convert scanned pulse light into fan-like pulse light
WO2020223561A1 (en) Temporal jitter in a lidar system
US11662435B2 (en) Chip scale integrated scanning LiDAR sensor
CN112068148B (en) Light detection device and electronic apparatus
US20210181308A1 (en) Optical distance measuring device
US11822018B2 (en) Multi-detector LiDAR systems and methods for mitigating range aliasing
US20220113405A1 (en) Multi-Detector Lidar Systems and Methods
US20220043156A1 (en) Configurable memory blocks for lidar measurements
EP4229438A1 (en) Multi-detector lidar systems and methods for mitigating range aliasing
US20230185308A1 (en) Robot and control method thereof
US12099145B2 (en) SPAD array with ambient light suppression for solid-state LiDAR
US20220099812A1 (en) Systems and methods for light detection in lidar systems
CN112904313A (en) Method, system, and electronic circuit for suppressing ambient light of LiDAR equipment
KR102575734B1 (en) Apparatus for estimating level of signal output from photodetector and method thereof
CN112887627B (en) Method for increasing dynamic range of LiDAR device, light detection and ranging LiDAR device, and machine-readable medium
JP2024108115A (en) Distance image capturing device, distance image capturing method, and program

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230419

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)