US20220308186A1 - Hyper Temporal Lidar with Optimized Range-Based Detection Intervals - Google Patents
Info
- Publication number
- US20220308186A1 (U.S. application Ser. No. 17/490,260)
- Authority
- US
- United States
- Prior art keywords
- shot
- laser
- mirror
- laser pulse
- energy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G01S17/931—Lidar systems specially adapted for anti-collision purposes of land vehicles
- G01S17/36—Systems determining position data of a target for measuring distance only using transmission of continuous waves, with phase comparison between the received signal and the contemporaneously transmitted signal
- B60R1/12—Mirror assemblies combined with other articles, e.g. clocks
- B60W40/02—Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
- G01J1/42—Photometry, e.g. photographic exposure meter, using electric radiation detectors
- G01J1/44—Electric circuits
- G01S17/003—Bistatic lidar systems; Multistatic lidar systems
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
- G01S17/18—Systems using transmission of interrupted, pulse-modulated waves wherein range gates are used
- G01S17/89—Lidar systems specially adapted for mapping or imaging
- G01S7/4811—Constructional features, e.g. arrangements of optical elements common to transmitter and receiver
- G01S7/4813—Housing arrangements
- G01S7/4814—Constructional features of transmitters alone
- G01S7/4816—Constructional features of receivers alone
- G01S7/4817—Constructional features relating to scanning
- G01S7/484—Transmitters (details of pulse systems)
- G01S7/4863—Detector arrays, e.g. charge-transfer gates
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
- G01S7/497—Means for monitoring or calibrating
- B60R2001/1223—Mirror assemblies with sensors or transducers
- B60W2420/408—
- G01J1/029—Multi-channel photometry
- G01J2001/4238—Pulsed light
- G01S17/86—Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
Definitions
- FIG. 3 depicts an example lidar transmitter that uses a laser energy model and a mirror motion model to schedule laser pulses.
- FIG. 10 depicts an example process flow for using an eye safety model to adjust a shot list.
- FIG. 14 depicts an example lidar system where a lidar transmitter and a lidar receiver coordinate their operations with each other.
- FIG. 22 shows an example process flow for assigning range swaths to return detections for a shot list of laser pulse shots.
- Mirror 110 can be driven in a resonant mode according to a sinusoidal signal while mirror 112 is driven in a point-to-point mode according to a step signal that varies as a function of the range points to be targeted with laser pulses 122 by the lidar transmitter 100.
- Mirror 110 can be operated as a fast-axis mirror while mirror 112 is operated as a slow-axis mirror.
- Mirror 110 scans through scan angles in a sinusoidal pattern.
- Mirror 110 can be scanned at a frequency in a range between around 100 Hz and around 20 kHz.
- While the seed energy (S) includes both the energy deposited in the fiber amplifier 116 by the pump laser 118 and the energy deposited in the fiber amplifier 116 by the seed laser 114, it should be understood that for most embodiments the energy from the seed laser 114 will be very small relative to the energy from the pump laser 118. As such, a practitioner can choose to model the seed energy solely in terms of energy produced by the pump laser 118 over time.
- The control circuit 106 can use the laser energy model 108 to determine that such high density probing can be achieved by inserting a lower density period 1606 of laser pulses during the time period immediately prior to scanning through the region of interest 1604.
- This lower density period 1606 can be a quiet period where no laser pulses are fired.
- Such timing schedules of laser pulses can be defined for different elevations of the scan pattern to permit high resolution probing of regions of interest that are detected in the field of view.
- The control circuit 106 seeks to schedule shots for as many range points as it can at a given elevation for each scan direction of the mirror 110 in view of the laser energy model 108. For any shots at the subject elevation that cannot be scheduled for a given scan direction due to energy model constraints, the control circuit 106 then seeks to schedule those range points on the reverse scan (and so on until all of the shots are scheduled).
- FIG. 24 discloses a process flow where the control circuit 106 can swap out potentially suboptimal detection intervals for more desirable detection intervals.
- Step 2202 can operate as described in connection with FIG. 22 to generate data corresponding to the detection interval solutions computed in accordance with the models of FIGS. 23A and 23B.
- The lidar receiver 1400 and the lidar transmitter 100 are deployed in the lidar system in a bistatic architecture.
- In the bistatic architecture, there is a spatial offset of the field of view for the lidar transmitter 100 relative to the field of view for the lidar receiver 1400.
- This spatial separation provides effective immunity from flashes and first surface reflections that arise when a laser pulse shot is fired.
- An activated pixel cluster of the array 1802 can be used to detect returns at the same time that the lidar transmitter 100 fires a laser pulse shot 122, because the spatial separation prevents the flash from the newly fired laser pulse shot 122 from blinding the activated pixel cluster.
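The forward/reverse scan scheduling excerpted above can be sketched as a greedy pass-based scheduler. This is an illustrative toy model, not the patent's implementation: the fixed per-pass energy budget and unit shot cost are assumptions standing in for the laser energy model constraints.

```python
# Toy sketch of the described behavior: shots that cannot be scheduled on the
# current scan pass (here, due to an assumed fixed per-pass energy budget) are
# deferred to subsequent passes until all shots are scheduled.

def schedule_passes(shots, energy_per_pass, cost_per_shot=1):
    """Greedily assign shots to successive scan passes."""
    if energy_per_pass < cost_per_shot:
        raise ValueError("per-pass budget cannot fire any shot")
    passes = []
    pending = list(shots)
    while pending:
        budget, fired, remaining = energy_per_pass, [], []
        for shot in pending:
            if budget >= cost_per_shot:
                fired.append(shot)
                budget -= cost_per_shot
            else:
                remaining.append(shot)
        passes.append(fired)
        pending = remaining
    return passes
```

With five shots and a budget of two shots per pass, the shots spill across three passes: `[[0, 1], [2, 3], [4]]`.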
Abstract
A lidar receiver that includes a photodetector circuit can be controlled so that the detection intervals used by the lidar receiver to detect returns from fired laser pulse shots are closely controlled. Such control over the detection intervals used by the lidar receiver allows for close coordination between a lidar transmitter and the lidar receiver where the lidar receiver is able to adapt to variable shot intervals of the lidar transmitter (including periods of high rate firing as well as periods of low rate firing). The lidar receiver can determine the detection intervals using a cost function that optimizes determination of the detection intervals for a plurality of the laser pulse shots from a shot list.
Description
- This patent application claims priority to U.S. provisional patent application 63/186,661, filed May 10, 2021, and entitled “Hyper Temporal Lidar with Controllable Detection Intervals”, the entire disclosure of which is incorporated herein by reference.
- This patent application also claims priority to U.S. provisional patent application 63/166,475, filed Mar. 26, 2021, and entitled “Hyper Temporal Lidar with Dynamic Laser Control”, the entire disclosure of which is incorporated herein by reference.
- This patent application is related to the following U.S. patent applications, each filed this same day, the entire disclosures of each of which are incorporated herein by reference:
- (1) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Controllable Detection Intervals” (Thompson Coburn Attorney Docket Number 56976-213637);
- (2) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Shot-Specific Detection Control” (Thompson Coburn Attorney Docket Number 56976-213638);
- (3) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Controllable Detection Intervals Based on Range Estimates” (Thompson Coburn Attorney Docket Number 56976-213639);
- (4) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Controllable Detection Intervals Based on Environmental Conditions” (Thompson Coburn Attorney Docket Number 56976-213640);
- (5) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Controllable Detection Intervals Based on Regions of Interest” (Thompson Coburn Attorney Docket Number 56976-213641);
- (6) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Controllable Detection Intervals Based on Location Information” (Thompson Coburn Attorney Docket Number 56976-213642);
- (7) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Multi-Processor Return Detection” (Thompson Coburn Attorney Docket Number 56976-213644);
- (8) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Multi-Channel Readout of Returns” (Thompson Coburn Attorney Docket Number 56976-213645);
- (9) U.S. patent application Ser. No. ______, entitled “Bistatic Lidar Architecture for Vehicle Deployments” (Thompson Coburn Attorney Docket Number 56976-213646); and
- (10) U.S. patent application Ser. No. ______, entitled “Hyper Temporal Lidar with Asynchronous Shot Intervals and Detection Intervals” (Thompson Coburn Attorney Docket Number 56976-213647).
- There is a need in the art for lidar systems that operate with low latency and rapid adaptation to environmental changes. This is particularly the case for automotive applications of lidar as well as other applications where the lidar system may be moving at a high rate of speed or where there is otherwise a need for decision-making in short time intervals. For example, when an object of interest is detected in the field of view for a lidar transmitter, it is desirable for the lidar transmitter to rapidly respond to this detection by firing high densities of laser pulses at the detected object. However, as the firing rate for the lidar transmitter increases, this places pressure on the operational capabilities of the laser source employed by the lidar transmitter because the laser source will need re-charging time.
- This issue becomes particularly acute in situations where the lidar transmitter has a variable firing rate. With a variable firing rate, the laser source's operational capabilities are not only impacted by periods of high density firing but also periods of low density firing. As charge builds up in the laser source during a period where the laser source is not fired, a need arises to ensure that the laser source does not overheat or otherwise exceed its maximum energy limits.
- The lidar transmitter may employ a laser source that uses optical amplification to support the generation of laser pulses. Such laser sources have energy characteristics that are heavily impacted by time and the firing rate of the laser source. These energy characteristics of a laser source that uses optical amplification have important operational impacts on the lidar transmitter when the lidar transmitter is designed to operate with fast scan times and laser pulses that are targeted on specific range points in the field of view.
- As a technical solution to these problems in the art, the inventors disclose that a laser energy model can be used to model the available energy in the laser source over time. The timing schedule for laser pulses fired by the lidar transmitter can then be determined using energies that are predicted for the different scheduled laser pulse shots based on the laser energy model. This permits the lidar transmitter to reliably ensure at a highly granular level that each laser pulse shot has sufficient energy to meet operational needs, including when operating during periods of high density/high resolution laser pulse firing. The laser energy model is capable of modeling the energy available for laser pulses in the laser source over very short time intervals as discussed in greater detail below. With such short interval time modeling, the laser energy modeling can be referred to as a transient laser energy model.
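As an illustration of what a transient laser energy model might look like, the sketch below tracks energy deposited by the pump laser over time and drained by each fired shot. The linear pump-deposit rate, hard energy cap, and full-drain firing are simplifying assumptions for illustration, not the patent's actual model.

```python
# Hypothetical transient laser energy model: pump energy accumulates linearly
# over time, is clamped at a maximum (so the source never exceeds its energy
# limits during quiet periods), and is drained when a shot is fired.

class LaserEnergyModel:
    def __init__(self, pump_rate_uj_per_us=1.0, max_energy_uj=10.0):
        self.pump_rate = pump_rate_uj_per_us  # pump energy deposited per microsecond
        self.max_energy = max_energy_uj       # cap to respect maximum energy limits
        self.stored = max_energy_uj           # assume a fully charged source at t = 0
        self.t = 0.0                          # current model time in microseconds

    def advance(self, t_us):
        """Accumulate pump energy up to time t_us, clamped at the maximum."""
        self.stored = min(self.max_energy, self.stored + (t_us - self.t) * self.pump_rate)
        self.t = t_us

    def predicted_energy(self, t_us):
        """Energy a shot fired at time t_us would have, without mutating state."""
        return min(self.max_energy, self.stored + (t_us - self.t) * self.pump_rate)

    def fire(self, t_us):
        """Fire a shot at time t_us and return its pulse energy (drains the store)."""
        self.advance(t_us)
        energy, self.stored = self.stored, 0.0
        return energy
```

In this toy model, back-to-back shots get little energy while shots after a quiet period get the full stored energy, which is the tradeoff the scheduling described above must manage.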
- Furthermore, the inventors also disclose that mirror motion can be modeled so that the system can also reliably predict where a scanning mirror is aimed within a field of view over time. This mirror motion model is also capable of predicting mirror motion over short time intervals as discussed in greater detail below. In this regard, the mirror motion model can also be referred to as a transient mirror motion model. The model of mirror motion over time can be linked with the model of laser energy over time to provide still more granularity in the scheduling of laser pulses that are targeted at specific range points in the field of view. Thus, a control circuit can translate a list of arbitrarily ordered range points to be targeted with laser pulses into a shot list of laser pulses to be fired at such range points using the modeled laser energy coupled with the modeled mirror motion. In this regard, the “shot list” can refer to a list of the range points to be targeted with laser pulses as combined with timing data that defines a schedule or sequence by which laser pulses will be fired toward such range points.
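A minimal sketch of how a sinusoidal mirror-motion model can translate arbitrarily ordered range points into a time-ordered shot list follows. The amplitude and scan frequency are assumed values (the frequency falls within the roughly 100 Hz to 20 kHz range noted elsewhere in the disclosure), and only rising-edge crossings of a single fast axis are modeled.

```python
import math

# Illustrative mirror motion model (assumed values, not the patent's
# implementation): theta(t) = AMPLITUDE_DEG * sin(2*pi*FREQ_HZ*t) predicts
# when the fast-axis mirror points at a target azimuth.

AMPLITUDE_DEG = 60.0  # assumed scan amplitude
FREQ_HZ = 12_000.0    # assumed scan frequency

def next_crossing_time(target_deg, t_after_s=0.0):
    """First time >= t_after_s at which the mirror passes through target_deg
    on the rising half of its sinusoidal scan."""
    phase = math.asin(target_deg / AMPLITUDE_DEG)  # rising-edge crossing phase
    t = phase / (2.0 * math.pi * FREQ_HZ)
    period = 1.0 / FREQ_HZ
    while t < t_after_s:
        t += period
    return t

# Translate an arbitrarily ordered list of range-point azimuths into a
# shot list ordered by fire time.
range_points_deg = [15.0, -30.0, 5.0]
shot_list = sorted((next_crossing_time(az), az) for az in range_points_deg)
```

Coupling each candidate fire time with a laser energy model would then let a control circuit confirm that every scheduled shot has sufficient energy, as the passage above describes.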
- Through the use of such models, the lidar system can provide hyper temporal processing where laser pulses can be scheduled and fired at high rates with high timing precision and high spatial targeting/pointing precision. This results in a lidar system that can operate at low latency, high frame rates, and intelligent range point targeting where regions of interest in the field of view can be targeted with rapidly-fired and spatially dense laser pulse shots.
- According to additional example embodiments, the inventors disclose that the detection intervals used by a lidar receiver to detect returns of the fired laser pulse shots can be closely controlled. Such control over the detection intervals used by the lidar receiver allows for close coordination between the lidar transmitter and the lidar receiver where the lidar receiver is able to adapt to variable shot intervals of the lidar transmitter (including periods of high rate firing as well as periods of low rate firing).
- Each detection interval can be associated with a different laser pulse shot from which a return is to be collected during the associated detection interval. Accordingly, each detection interval is also associated with the return for its associated laser pulse shot. The lidar receiver can control these detection intervals on a shot-specific basis so that the lidar receiver will be able to use the appropriate pixel set for detecting the return from each detection interval's associated shot. The lidar receiver includes a plurality of detector pixels arranged as a photodetector array, and different sets of detector pixels can be selected for use to detect the returns from different laser pulse shots. During a given detection interval, the lidar receiver will collect sensed signal data from the selected pixel set, and this collected signal data can be processed to detect the associated return for that detection interval. The choice of which pixel set to use for detecting a return from a given laser pulse shot can be based on the location in the field of view of the range point targeted by the given laser pulse shot. In this fashion, the lidar receiver will read out from different pixel sets during the detection intervals in a sequenced pattern that follows the sequenced spatial pattern of the laser pulse shots.
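As an illustrative sketch only (the function and parameter names below, along with the field-of-view bounds, array size, and halo width, are hypothetical assumptions and not taken from this disclosure), the selection of a pixel set from a targeted range point's location in the field of view could look like the following, assuming a simple linear mapping from {azimuth, elevation} angles to photodetector array coordinates:

```python
# Hypothetical illustration: choose the set of detector pixels expected to
# receive the return from a shot, based on the shot's {azimuth, elevation}.
# The field-of-view bounds, array size, and halo size are assumed values.

def select_pixel_set(azimuth_deg, elevation_deg,
                     fov_az=(-60.0, 60.0), fov_el=(-15.0, 15.0),
                     array_cols=128, array_rows=32, halo=1):
    """Return the (row, col) pixels to read out for one targeted range point."""
    # Normalize the shot angles to [0, 1) within the field of view.
    u = (azimuth_deg - fov_az[0]) / (fov_az[1] - fov_az[0])
    v = (elevation_deg - fov_el[0]) / (fov_el[1] - fov_el[0])
    col = min(array_cols - 1, max(0, int(u * array_cols)))
    row = min(array_rows - 1, max(0, int(v * array_rows)))
    # Include a halo of neighboring pixels to tolerate small pointing errors.
    return {(r, c)
            for r in range(max(0, row - halo), min(array_rows, row + halo + 1))
            for c in range(max(0, col - halo), min(array_cols, col + halo + 1))}

pixels = select_pixel_set(0.0, 0.0)  # boresight shot: a 3x3 block of pixels
```

The halo of neighboring pixels in this sketch is one way of reducing the chance that a modest pointing error causes a return to land on an unselected pixel.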
- The lidar receiver can use any of a number of criteria for deciding when to start and stop reading out from the different pixel sets for detecting returns. For example, the lidar receiver can determine the detection intervals using a cost function that optimizes determination of the detection intervals for a plurality of the laser pulse shots from a shot list. As another example, the lidar receiver can use estimates of potential ranges to the targeted range points to decide on when the collections should start and stop from various pixel sets. As an example, if an object at range point X is located 10 meters from the lidar system, it can be expected that the return from the laser pulse shot fired at this object will reach the photodetector array relatively quickly, while it would take relatively longer for a return to reach the photodetector array if the object at range point X is located 1,000 meters from the lidar system. To control when the collections should start and stop from the pixel sets in order to detect returns from the laser pulse shots, the system can determine pairs of minimum and maximum range values for the range points targeted by each laser pulse shot, and these minimum and maximum range values can be translated into on/off times for the pixel sets. Through intelligent control of these on (start collection) and off (stop collection) times, the risk of missing a return due to the return impacting a deactivated pixel is reduced.
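The translation from minimum and maximum range values to pixel-set on/off times can be sketched as follows, using the round-trip time of light t = 2R/c; the margin term and the names used here are illustrative assumptions rather than values from this disclosure:

```python
# Illustrative sketch: convert assumed minimum/maximum range bounds for a
# targeted range point into collection on/off times for its pixel set,
# using the round-trip time of light t = 2R / c.

C_MPS = 299_792_458.0  # speed of light in meters per second

def detection_window(shot_time_s, range_min_m, range_max_m, margin_s=10e-9):
    """Return (on_time, off_time) for collecting the return of one shot."""
    on_time = shot_time_s + 2.0 * range_min_m / C_MPS - margin_s
    off_time = shot_time_s + 2.0 * range_max_m / C_MPS + margin_s
    return on_time, off_time

near = detection_window(0.0, 10.0, 150.0)    # return arrives ~67 ns out
far = detection_window(0.0, 500.0, 1000.0)   # return arrives up to ~6.7 us out
```

With these numbers, a return from 10 meters arrives roughly 67 nanoseconds after the shot while a return from 1,000 meters takes roughly 6.7 microseconds, which is why the on/off times must track the expected range swath of each shot.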
- Moreover, the detection intervals can vary across different shots (e.g., Detection Interval A (associated with Shot A to support detection of the return from Shot A) can have a different duration than Detection Interval B (associated with Shot B to support detection of the return from Shot B)). Further still, at least some of the detection intervals can be controlled to be of different durations than the shot intervals that correspond to such detection intervals. The shot interval that corresponds to a given detection interval is the time between the shot that is associated with that detection interval and the next shot in the shot sequence. Counterintuitively, the inventors have found that it is often not desirable for a detection interval to be of the same duration as its corresponding shot interval due to factors such as the amount of processing time that is needed to detect returns within return signals. In many cases, it will be desirable for the control process to define a detection interval so that it exhibits a duration shorter than the duration of its corresponding shot interval; while in some other cases it may be desirable for the control process to define a detection interval so that it exhibits a longer duration than the duration of its corresponding shot interval. This characteristic can be referred to as a detection interval that is asynchronous relative to its corresponding shot interval duration.
- Further still, the inventors also disclose the use of multiple processors in a lidar receiver to distribute the workload of processing returns. The activation/deactivation times of the pixel sets can be used to define which samples in a return buffer will be used for processing to detect each return, and multiple processors can share the workload of processing these samples in an effort to improve the latency of return detection.
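One way such a workload split could be sketched (a toy illustration; the peak-finding "detector" and the worker-pool details are assumptions, not the disclosed architecture) is to hand each return's buffered samples to whichever worker is free:

```python
# Toy illustration of distributing return detection across workers: each
# return's buffered samples go to whichever worker thread is free. The
# peak-finding "detector" is a stand-in for the real detection processing.
from concurrent.futures import ThreadPoolExecutor

def detect_return(samples):
    """Stand-in detector: report the index of the peak sample."""
    return max(range(len(samples)), key=samples.__getitem__)

def process_returns(return_buffers, n_workers=4):
    """Detect returns for many shots concurrently, preserving shot order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(detect_return, return_buffers))

peaks = process_returns([[0, 1, 5, 2], [3, 1], [0, 0, 9]])  # [2, 0, 2]
```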
- The inventors also disclose the use of multiple readout channels within a lidar receiver that are capable of simultaneously reading out sensed signals from different pixel sets of the photodetector array. In doing so, the lidar receiver can support the use of overlapping detection intervals when collecting signal data for detecting different returns.
- Moreover, the inventors disclose a lidar system having a lidar transmitter and lidar receiver that are in a bistatic arrangement with each other. Such a bistatic lidar system can be deployed in a climate-controlled compartment of a vehicle to reduce the exposure of the lidar system to harsher elements so it can operate in more advantageous environments with regard to factors such as temperature, moisture, etc. In an example embodiment, the bistatic lidar system can be connected to or incorporated within a rear view mirror assembly of a vehicle.
- These and other features and advantages of the invention will be described in greater detail below.
-
FIG. 1 depicts an example lidar transmitter that uses a laser energy model to schedule laser pulses. -
FIG. 2A depicts an example process flow for the control circuit of FIG. 1. -
FIGS. 2B-2D depict additional examples of lidar transmitters that use a laser energy model to schedule laser pulses. -
FIG. 3 depicts an example lidar transmitter that uses a laser energy model and a mirror motion model to schedule laser pulses. -
FIGS. 4A-4D illustrate how mirror motion can be modeled for a mirror that scans in a resonant mode. -
FIG. 4E depicts an example process flow for controllably adjusting an amplitude for mirror scanning. -
FIG. 5 depicts an example process flow for the control circuit of FIG. 3. -
FIGS. 6A and 6B depict example process flows for shot scheduling using the control circuit of FIG. 3. -
FIG. 7A depicts an example process flow for simulating and evaluating different shot ordering candidates based on the laser energy model and the mirror motion model. -
FIG. 7B depicts an example of how time slots in a mirror scan can be related to the shot angles for the mirror using the mirror motion model. -
FIG. 7C depicts an example process flow for simulating different shot ordering candidates based on the laser energy model. -
FIGS. 7D-7F depict different examples of laser energy predictions produced by the laser energy model with respect to different shot order candidates. -
FIG. 8 depicts an example lidar transmitter that uses a laser energy model and a mirror motion model to schedule laser pulses, where the control circuit includes a system controller and a beam scanner controller. -
FIG. 9 depicts an example process flow for inserting marker shots into a shot list. -
FIG. 10 depicts an example process flow for using an eye safety model to adjust a shot list. -
FIG. 11 depicts an example lidar transmitter that uses a laser energy model, a mirror motion model, and an eye safety model to schedule laser pulses. -
FIG. 12 depicts an example process flow for simulating different shot ordering candidates based on the laser energy model and eye safety model. -
FIG. 13 depicts another example process for determining shot schedules using the models. -
FIG. 14 depicts an example lidar system where a lidar transmitter and a lidar receiver coordinate their operations with each other. -
FIG. 15 depicts another example process for determining shot schedules using the models. -
FIG. 16 illustrates how the lidar transmitter can change its firing rate to probe regions in a field of view with denser groupings of laser pulses. -
FIGS. 17A-17F depict example process flows for prioritized selections of elevations with respect to shot scheduling. -
FIG. 18A depicts an example lidar receiver in accordance with an example embodiment. -
FIG. 18B depicts an example process flow for use by the lidar receiver of FIG. 18A to control the activation and deactivation of detector pixels in a photodetector array. -
FIG. 18C depicts an example lidar receiver in accordance with another example embodiment. -
FIG. 18D depicts an example process flow for use by the lidar receiver of FIG. 18C to control the activation and deactivation of detector pixels in a photodetector array. -
FIGS. 19A and 19B show examples of detection timing for a lidar receiver to detect returns from laser pulse shots. -
FIG. 20 shows an example process flow for use by a signal processing circuit of a lidar receiver to detect returns from laser pulse shots. -
FIGS. 21A and 21B show examples of a multi-processor arrangement for distributing the workload of detecting returns within a lidar receiver. -
FIG. 22 shows an example process flow for assigning range swaths to return detections for a shot list of laser pulse shots. -
FIGS. 23A and 23B show examples of mathematical operations that can be used to assign range swath values to return detections. -
FIG. 24 shows another example process flow for assigning range swaths to return detections for a shot list of laser pulse shots. -
FIG. 25 shows an example where a bistatic lidar system in accordance with an example embodiment is deployed inside a climate-controlled compartment of a vehicle. -
FIG. 26 shows an example embodiment for a lidar receiver which employs multiple readout channels to enable the use of overlapping detection intervals for detecting the returns from different shots. -
FIG. 27 shows an example embodiment of a lidar receiver that includes details showing how pixel activation can be controlled in concert with selective pixel readout. -
FIG. 1 shows an example embodiment of a lidar transmitter 100 that can be employed to support hyper temporal lidar. In an example embodiment, the lidar transmitter 100 can be deployed in a vehicle such as an automobile. However, it should be understood that the lidar transmitter 100 described herein need not be deployed in a vehicle. As used herein, "lidar", which can also be referred to as "ladar", refers to and encompasses any of light detection and ranging, laser radar, and laser detection and ranging. In the example of FIG. 1, the lidar transmitter 100 includes a laser source 102, a mirror subsystem 104, and a control circuit 106. Control circuit 106 uses a laser energy model 108 to govern the firing of laser pulses 122 by the laser source 102. Laser pulses 122 transmitted by the laser source 102 are sent into the environment via mirror subsystem 104 to target various range points in a field of view for the lidar transmitter 100. These laser pulses 122 can be interchangeably referred to as laser pulse shots (or more simply, as just "shots"). The field of view will include different addressable coordinates (e.g., {azimuth, elevation} pairs) which serve as range points that can be targeted by the lidar transmitter 100 with the laser pulses 122. - In the example of
FIG. 1, laser source 102 can use optical amplification to generate the laser pulses 122 that are transmitted into the lidar transmitter's field of view via the mirror subsystem 104. In this regard, a laser source 102 that includes an optical amplifier can be referred to as an optical amplification laser source 102. In the example of FIG. 1, the optical amplification laser source 102 includes a seed laser 114, an optical amplifier 116, and a pump laser 118. In this laser architecture, the seed laser 114 provides the input (signal) that is amplified to yield the transmitted laser pulse 122, while the pump laser 118 provides the power (in the form of the energy deposited by the pump laser 118 into the optical amplifier 116). So, the optical amplifier 116 is fed by two inputs: the pump laser 118 (which deposits energy into the optical amplifier 116) and the seed laser 114 (which provides the signal that stimulates the energy in the optical amplifier 116 and induces pulse 122 to fire). - Thus, the
pump laser 118, which can take the form of an electrically-driven pump laser diode, continuously sends energy into the optical amplifier 116. The seed laser 114, which can take the form of an electrically-driven seed laser that includes a pulse formation network circuit, controls when the energy deposited by the pump laser 118 into the optical amplifier 116 is released by the optical amplifier 116 as a laser pulse 122 for transmission. The seed laser 114 can also control the shape of laser pulse 122 via the pulse formation network circuit (which can drive the pump laser diode with the desired pulse shape). The seed laser 114 also injects a small amount of (pulsed) optical energy into the optical amplifier 116. - Given that the energy deposited in the
optical amplifier 116 by the pump laser 118 and seed laser 114 serves to seed the optical amplifier 116 with energy from which the laser pulses 122 are generated, this deposited energy can be referred to as "seed energy" for the laser source 102. - The
optical amplifier 116 operates to generate laser pulse 122 from the energy deposited therein by the seed laser 114 and pump laser 118 when the optical amplifier 116 is induced to fire the laser pulse 122 in response to stimulation of the energy therein by the seed laser 114. The optical amplifier 116 can take the form of a fiber amplifier. In such an embodiment, the laser source 102 can be referred to as a pulsed fiber laser source. With a pulsed fiber laser source 102, the pump laser 118 essentially places the dopant electrons in the fiber amplifier 116 into an excited energy state. When it is time to fire laser pulse 122, the seed laser 114 stimulates these electrons, causing them to emit energy and collapse down to a lower (ground) state, which results in the emission of pulse 122. An example of a fiber amplifier that can be used for the optical amplifier 116 is a doped fiber amplifier such as an Erbium-Doped Fiber Amplifier (EDFA). - It should be understood that other types of optical amplifiers can be used for the
optical amplifier 116 if desired by a practitioner. For example, the optical amplifier 116 can take the form of a semiconductor amplifier. In contrast to a laser source that uses a fiber amplifier (where the fiber amplifier is optically pumped by pump laser 118), a laser source that uses a semiconductor amplifier can be electrically pumped. As another example, the optical amplifier 116 can take the form of a gas amplifier (e.g., a CO2 gas amplifier). Moreover, it should be understood that a practitioner may choose to include a cascade of optical amplifiers 116 in laser source 102. - In an example embodiment, the
pump laser 118 can exhibit a fixed rate of energy buildup (where a constant amount of energy is deposited in the optical amplifier 116 per unit time). However, it should be understood that a practitioner may choose to employ a pump laser 118 that exhibits a variable rate of energy buildup (where the amount of energy deposited in the optical amplifier 116 varies per unit time). - The
laser source 102 fires laser pulses 122 in response to firing commands 120 received from the control circuit 106. In an example where the laser source 102 is a pulsed fiber laser source, the firing commands 120 can cause the seed laser 114 to induce pulse emissions by the fiber amplifier 116. In an example embodiment, the lidar transmitter 100 employs non-steady state pulse transmissions, which means that there will be variable timing between the commands 120 to fire the laser source 102. In this fashion, the laser pulses 122 transmitted by the lidar transmitter 100 will be spaced in time at irregular intervals. There may be periods of relatively high densities of laser pulses 122 and periods of relatively low densities of laser pulses 122. Examples of laser vendors that provide such variable charge time control include Luminbird and ITF. As examples, lasers that have the capacity to regulate pulse timing over timescales corresponding to preferred embodiments discussed herein and which are suitable to serve as laser source 102 in these preferred embodiments are expected to exhibit laser wavelengths of 1.5 μm and available energies in a range of around hundreds of nano-Joules to around tens of micro-Joules, with timing controllable from hundreds of nanoseconds to tens of microseconds and with an average power range from around 0.25 Watts to around 4 Watts. - The
mirror subsystem 104 includes a mirror that is scannable to control where the lidar transmitter 100 is aimed. In the example embodiment of FIG. 1, the mirror subsystem 104 includes two mirrors: mirror 110 and mirror 112. Mirror 110 is positioned optically downstream from the laser source 102 and optically upstream from mirror 112. In this fashion, a laser pulse 122 generated by the laser source 102 will impact mirror 110, whereupon mirror 110 will reflect the pulse 122 onto mirror 112, whereupon mirror 112 will reflect the pulse 122 for transmission into the environment. It should be understood that the outgoing pulse 122 may pass through various transmission optics during its propagation from mirror 112 into the environment. - In the example of
FIG. 1, mirror 110 can scan through a plurality of mirror scan angles to define where the lidar transmitter 100 is targeted along a first axis. This first axis can be an X-axis so that mirror 110 scans between azimuths. Mirror 112 can scan through a plurality of mirror scan angles to define where the lidar transmitter 100 is targeted along a second axis. The second axis can be orthogonal to the first axis, in which case the second axis can be a Y-axis so that mirror 112 scans between elevations. The combination of mirror scan angles for mirror 110 and mirror 112 will define a particular {azimuth, elevation} coordinate to which the lidar transmitter 100 is targeted. These azimuth, elevation pairs can be characterized as {azimuth angles, elevation angles} and/or {rows, columns} that define range points in the field of view which can be targeted with laser pulses 122 by the lidar transmitter 100. - A practitioner may choose to control the scanning of
mirrors 110 and 112 in a desired fashion. For example, mirror 110 can be driven in a resonant mode according to a sinusoidal signal while mirror 112 is driven in a point-to-point mode according to a step signal that varies as a function of the range points to be targeted with laser pulses 122 by the lidar transmitter 100. In this fashion, mirror 110 can be operated as a fast-axis mirror while mirror 112 is operated as a slow-axis mirror. When operating in such a resonant mode, mirror 110 scans through scan angles in a sinusoidal pattern. In an example embodiment, mirror 110 can be scanned at a frequency in a range between around 100 Hz and around 20 kHz. In a preferred embodiment, mirror 110 can be scanned at a frequency in a range between around 10 kHz and around 15 kHz (e.g., around 12 kHz). As noted above, mirror 112 can be driven in a point-to-point mode according to a step signal that varies as a function of the range points to be targeted with laser pulses 122 by the lidar transmitter 100. Thus, if the lidar transmitter 100 is to fire a laser pulse 122 at a particular range point having an elevation of X, then the step signal can drive mirror 112 to scan to the elevation of X. When the lidar transmitter 100 is later to fire a laser pulse 122 at a particular range point having an elevation of Y, then the step signal can drive mirror 112 to scan to the elevation of Y. In this fashion, the mirror subsystem 104 can selectively target range points that are identified for targeting with laser pulses 122. It is expected that mirror 112 will scan to new elevations at a much slower rate than mirror 110 will scan to new azimuths. As such, mirror 110 may scan back and forth at a particular elevation (e.g., left-to-right, right-to-left, and so on) several times before mirror 112 scans to a new elevation.
Thus, while the mirror 112 is targeting a particular elevation angle, the lidar transmitter 100 may fire a number of laser pulses 122 that target different azimuths at that elevation while mirror 110 is scanning through different azimuth angles. U.S. Pat. Nos. 10,078,133 and 10,642,029, the entire disclosures of which are incorporated herein by reference, describe examples of mirror scan control using techniques and transmitter architectures such as these (and others) which can be used in connection with the example embodiments described herein. -
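For illustration, the sinusoidal motion of a resonant fast-axis mirror such as mirror 110 can be written as theta(t) = A*sin(2*pi*f*t), which can be inverted to find the two times per scan period when the mirror crosses a desired azimuth shot angle (once on the left-to-right sweep and once on the right-to-left sweep). The function name and parameter values below are illustrative assumptions, not part of this disclosure:

```python
import math

# Illustrative model of a resonant fast-axis mirror: theta(t) = A*sin(2*pi*f*t).
# Inverting it gives the two times per scan period when the mirror points at a
# desired shot angle, once per sweep direction. Parameter values are assumed.

def crossing_times(theta_deg, amplitude_deg=60.0, freq_hz=12_000.0):
    """Times within one scan period when the mirror crosses theta_deg."""
    phase = math.asin(theta_deg / amplitude_deg)   # radians, in [-pi/2, pi/2]
    period = 1.0 / freq_hz
    t_rising = phase / (2.0 * math.pi * freq_hz)   # left-to-right sweep
    t_falling = period / 2.0 - t_rising            # right-to-left sweep
    return t_rising % period, t_falling % period
```

Linking crossing times like these with a laser energy model is what would allow shots aimed at specific azimuths to be placed into concrete time slots.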
Control circuit 106 is arranged to coordinate the operation of the laser source 102 and mirror subsystem 104 so that laser pulses 122 are transmitted in a desired fashion. In this regard, the control circuit 106 coordinates the firing commands 120 provided to laser source 102 with the mirror control signal(s) 130 provided to the mirror subsystem 104. In the example of FIG. 1, where the mirror subsystem 104 includes mirror 110 and mirror 112, the mirror control signal(s) 130 can include a first control signal that drives the scanning of mirror 110 and a second control signal that drives the scanning of mirror 112. Any of the mirror scan techniques discussed above can be used to control mirrors 110 and 112. For example, mirror 110 can be driven with a sinusoidal signal to scan mirror 110 in a resonant mode, and mirror 112 can be driven with a step signal that varies as a function of the range points to be targeted with laser pulses 122 to scan mirror 112 in a point-to-point mode. - As discussed in greater detail below,
control circuit 106 can use a laser energy model 108 to determine a timing schedule for the laser pulses 122 to be transmitted from the laser source 102. This laser energy model 108 can model the available energy within the laser source 102 for producing laser pulses 122 over time in different shot schedule scenarios. By modeling laser energy in this fashion, the laser energy model 108 helps the control circuit 106 make decisions on when the laser source 102 should be triggered to fire laser pulses. Moreover, as discussed in greater detail below, the laser energy model 108 can model the available energy within the laser source 102 over short time intervals (such as over time intervals in a range from 10-100 nanoseconds), and such a short interval laser energy model 108 can be referred to as a transient laser energy model 108. -
Control circuit 106 can include a processor that provides the decision-making functionality described herein. Such a processor can take the form of a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC) which provides parallelized hardware logic for implementing such decision-making. The FPGA and/or ASIC (or other compute resource(s)) can be included as part of a system on a chip (SoC). However, it should be understood that other architectures for control circuit 106 could be used, including software-based decision-making and/or hybrid architectures which employ both software-based and hardware-based decision-making. The processing logic implemented by the control circuit 106 can be defined by machine-readable code that is resident on a non-transitory machine-readable storage medium such as memory within or available to the control circuit 106. The code can take the form of software or firmware that defines the processing operations discussed herein for the control circuit 106. This code can be downloaded onto the control circuit 106 using any of a number of techniques, such as a direct download via a wired connection as well as over-the-air downloads via wireless networks, which may include secured wireless networks. As such, it should be understood that the lidar transmitter 100 can also include a network interface that is configured to receive such over-the-air downloads and update the control circuit 106 with new software and/or firmware. This can be particularly advantageous for adjusting the lidar transmitter 100 to changing regulatory environments with respect to criteria such as laser dosage and the like. When using code provisioned for over-the-air updates, the control circuit 106 can operate with unidirectional messaging to retain functional safety. -
-
FIG. 2A shows an example process flow for the control circuit 106 with respect to using the laser energy model 108 to govern the timing schedule for laser pulses 122. At step 200, the control circuit 106 maintains the laser energy model 108. This step can include reading the parameters and expressions that define the laser energy model 108, discussed in greater detail below. Step 200 can also include updating the laser energy model 108 over time as laser pulses 122 are triggered by the laser source 102 as discussed below. - In an example embodiment where the
laser source 102 is a pulsed fiber laser source as discussed above, the laser energy model 108 can model the energy behavior of the seed laser 114, pump laser 118, and fiber amplifier 116 over time as laser pulses 122 are fired. As noted above, the fired laser pulses 122 can be referred to as "shots". For example, the laser energy model 108 can be based on the following parameters: -
- CE(t), which represents the combined amount of energy within the
fiber amplifier 116 at the moment when thelaser pulse 122 is fired at time t. - EF(t), which represents the amount of energy fired in
laser pulse 122 at time t; - EP, which represents the amount of energy deposited by the
pump laser 118 into thefiber amplifier 116 per unit of time. - S(t+δ), which represents the cumulative amount of seed energy that has been deposited by the
pump laser 118 andseed laser 114 into thefiber amplifier 116 over the time duration δ, where δ represents the amount of time between the most recent laser pulse 122 (for firing at time t) and the next laser pulse 122 (to be fired at time t+δ). - F(t+δ), which represents the amount of energy left behind in the
fiber amplifier 116 when thepulse 122 is fired at time t (and is thus available for use with thenext pulse 122 to be fired at time t+δ). - CE(t+δ), which represents the amount of combined energy within the
fiber amplifier 116 at time t+δ (which is the sum of S(t+δ) and F(t+δ)) - EF(t+δ), which represents the amount of energy fired in
laser pulse 122 fired at time t+δ - a and b, where “a” represents a proportion of energy transferred from the
fiber amplifier 116 into thelaser pulse 122 when thelaser pulse 122 is fired, where “b” represents a proportion of energy retained in thefiber amplifier 116 after thelaser pulse 122 is fired, where a+b=1.
- CE(t), which represents the combined amount of energy within the
- While the seed energy (S) includes both the energy deposited in the fiber amplifier 116 by the pump laser 118 and the energy deposited in the fiber amplifier 116 by the seed laser 114, it should be understood that for most embodiments the energy from the seed laser 114 will be very small relative to the energy from the pump laser 118. As such, a practitioner can choose to model the seed energy solely in terms of energy produced by the pump laser 118 over time. Thus, after the pulsed fiber laser source 102 fires a laser pulse at time t, the pump laser 118 will begin re-supplying the fiber amplifier 116 with energy over time (in accordance with EP) until the seed laser 114 is triggered at time t+δ to cause the fiber amplifier 116 to emit the next laser pulse 122 using the energy left over in the fiber amplifier 116 following the previous shot plus the new energy that has been deposited in the fiber amplifier 116 by pump laser 118 since the previous shot. As noted above, the parameters a and b model how much of the energy in the fiber amplifier 116 is transferred into the laser pulse 122 for transmission and how much of the energy is retained by the fiber amplifier 116 for use when generating the next laser pulse 122. -
- The energy behavior of pulsed fiber laser source 102 with respect to the energy fired in laser pulses 122 in this regard can be expressed as follows: -
EF(t)=aCE(t) -
F(t+δ)=bCE(t) -
S(t+δ)=δEP -
CE(t+δ)=S(t+δ)+F(t+δ) -
EF(t+δ)=aCE(t+δ) - With these relationships, the value for CE(t) can be re-expressed in terms of EF(t) as follows:
-
- Furthermore, the value for F(t+δ) can be re-expressed in terms of EF(t) as follows:
-
- This means that the values for CE(t+δ) and EF(t+δ) can be re-expressed as follows:
-
- And this expression for EF(t+δ) shortens to:
-
EF(t+δ)=aδE P +bEF(t) - It can be seen, therefore, that the energy to be fired in a
laser pulse 122 at time t+δ in the future can be computed as a function of how much energy was fired in theprevious laser pulse 122 at time t. Given that a, b, EP, and EF(t) are known values, and δ is a controllable variable, these expressions can be used as thelaser energy model 108 that predicts the amount of energy fired in a laser pulse at select times in the future (as well as how much energy is present in thefiber amplifier 116 at select times in the future). - While this example models the energy behavior over time for a pulsed
fiber laser source 102, it should be understood that these models could be adjusted to reflect the energy behavior over time for other types of laser sources. - Thus, the
control circuit 106 can use the laser energy model 108 to model how much energy is available in the laser source 102 over time and can be delivered in the laser pulses 122 for different time schedules of laser pulse shots. With reference to FIG. 2A, this allows the control circuit 106 to determine a timing schedule for the laser pulses 122 (step 202). For example, at step 202, the control circuit 106 can compare the laser energy model 108 with various defined energy requirements to assess how the laser pulse shots should be timed. As examples, the defined energy requirements can take any of a number of forms, including but not limited to (1) a minimum laser pulse energy, (2) a maximum laser pulse energy, (3) a desired laser pulse energy (which can be per targeted range point for a lidar transmitter 100 that selectively targets range points with laser pulses 122), (4) eye safety energy thresholds, and/or (5) camera safety energy thresholds. The control circuit 106 can then, at step 204, generate and provide firing commands 120 to the laser source 102 that trigger the laser source 102 to generate laser pulses 122 in accordance with the determined timing schedule. Thus, if the control circuit 106 determines that laser pulses should be generated at times t1, t2, t3, . . . , the firing commands 120 can trigger the laser source to generate laser pulses 122 at these times. -
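As one hedged illustration of how the model can be compared against a defined energy requirement (the function and parameter names here are assumptions, not the disclosed implementation), the relationship EF(t+δ) = a·δ·EP + b·EF(t) can be solved for the smallest inter-shot wait δ that yields a next shot meeting a minimum pulse energy:

```python
# Hedged illustration (names assumed): solve EF(t+δ) = a*δ*EP + b*EF(t) >= E_min
# for the smallest wait δ that makes the next shot meet a minimum pulse energy.

def min_wait_for_energy(ef_prev, e_min, a=0.5, b=0.5, ep=1.0, delta_floor=0.0):
    """Smallest inter-shot interval δ so the next shot has at least e_min."""
    delta = (e_min - b * ef_prev) / (a * ep)
    return max(delta, delta_floor)

# After a weaker shot, the transmitter must wait longer to recover energy.
wait_after_strong = min_wait_for_energy(1.0, 1.0)  # 1.0 time unit
wait_after_weak = min_wait_for_energy(0.5, 1.0)    # 1.5 time units
```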
control circuit 106 can evaluate when determining the timing schedule for the laser pulses is the value of δ, which controls the time interval between successive laser pulse shots. The discussion below illustrates how the choice of δ impacts the amount of energy in eachlaser pulse 122 according to thelaser energy model 108. - For example, during a period where the
laser source 102 is consistently fired every δ units of time, thelaser energy model 108 can be used to predict energy levels for the laser pulses as shown in the following toy example. -
- Toy Example 1, where EP=1 unit of energy; δ=1 unit of time; the initial amount of energy stored by the
fiber laser 116 is 1 unit of energy; a=0.5 and b=0.5:
-
Shot Number                       1      2      3      4      5
Time                            t + 1  t + 2  t + 3  t + 4  t + 5
Seed Energy from Pump Laser (S)   1      1      1      1      1
Leftover Fiber Energy (F)         1      1      1      1      1
Combined Energy (S + F)           2      2      2      2      2
Energy Fired (EF)                 1      1      1      1      1
- If the rate of firing is increased, this will impact how much energy is included in the laser pulses. For example, relative to Toy Example 1, if the firing rate is doubled (δ=0.5 units of time, while the other parameters are the same), the
laser energy model 108 will predict the energy levels per laser pulse 122 as follows below with Toy Example 2. -
- Toy Example 2, where EP=1 unit of energy; δ=0.5 units of time; the initial amount of energy stored by the
fiber laser 116 is 1 unit of energy; a=0.5 and b=0.5:
-
Shot Number                        1         2         3         4         5
Time                             t + 0.5   t + 1     t + 1.5   t + 2     t + 2.5
Seed Energy from Pump Laser (S)    0.5       0.5       0.5       0.5       0.5
Leftover Fiber Energy (F)          1         0.75      0.625     0.5625    0.53125
Combined Energy (S + F)            1.5       1.25      1.125     1.0625    1.03125
Energy Fired (EF)                  0.75      0.625     0.5625    0.53125   0.515625
- Thus, in comparing Toy Example 1 with Toy Example 2, it can be seen that increasing the firing rate of the laser will decrease the amount of energy in the
laser pulses 122. As example embodiments, the laser energy model 108 can be used to model a minimum time interval in a range from around 10 nanoseconds to around 100 nanoseconds. This timing can be affected by both the accuracy of the clock for control circuit 106 (e.g., clock skew and clock jitter) and the minimum required refresh time for the laser source 102 after firing. - If the rate of firing is decreased relative to Toy Example 1, this will increase how much energy is included in the laser pulses. For example, relative to Toy Example 1, if the firing rate is halved (δ=2 units of time, while the other parameters are the same), the
laser energy model 108 will predict the energy levels per laser pulse 122 as follows below with Toy Example 3. -
- Toy Example 3, where EP=1 unit of energy; δ=2 units of time; the initial amount of energy stored by the
fiber laser 116 is 1 unit of energy; a=0.5 and b=0.5:
-
Shot Number                        1        2        3        4        5
Time                             t + 2    t + 4    t + 6    t + 8    t + 10
Seed Energy from Pump Laser (S)    2        2        2        2        2
Leftover Fiber Energy (F)          1        1.5      1.75     1.875    1.9375
Combined Energy (S + F)            3        3.5      3.75     3.875    3.9375
Energy Fired (EF)                  1.5      1.75     1.875    1.9375   1.96875
- If a practitioner wants to maintain a consistent amount of energy per laser pulse, it can be seen that the
control circuit 106 can use thelaser energy model 108 to define a timing schedule forlaser pulses 122 that will achieve this goal (through appropriate selection of values for δ). - For practitioners that want the
lidar transmitter 100 to transmit laser pulses at varying intervals, thecontrol circuit 106 can use thelaser energy model 108 to define a timing schedule forlaser pulses 122 that will maintain a sufficient amount of energy perlaser pulse 122 in view of defined energy requirements relating to thelaser pulses 122. For example, if the practitioner wants thelidar transmitter 100 to have the ability to rapidly fire a sequence of laser pulses (for example, to interrogate a target in the field of view with high resolution) while ensuring that the laser pulses in this sequence are each at or above some defined energy minimum, thecontrol circuit 106 can define a timing schedule that permits such shot clustering by introducing a sufficiently long value for δ just before firing the clustered sequence. This long δ value will introduce a “quiet” period for thelaser source 102 that allows the energy inseed laser 114 to build up so that there is sufficient available energy in thelaser source 102 for the subsequent rapid fire sequence of laser pulses. As indicated by the decay pattern of laser pulse energy reflected by Toy Example 2, increasing the starting value for the seed energy (S) before entering the time period of rapidly-fired laser pulses will make more energy available for the laser pulses fired close in time with each other. - Toy Example 4 below shows an example shot sequence in this regard, where there is a desire to fire a sequence of 5 rapid laser pulses separated by 0.25 units of time, where each laser pulse has a minimum energy requirement of 1 unit of energy. If the laser source has just concluded a shot sequence after which time there is 1 unit of energy retained in the
fiber laser 116, the control circuit can wait 25 units of time to allow sufficient energy to build up in theseed laser 114 to achieve the desired rapid fire sequence of 5laser pulses 122, as reflected in the table below. -
- Toy Example 4, where EP=1 unit of energy; δLONG=25 units of time; δSHORT=0.25 units of time; the initial amount of energy stored by the
fiber laser 116 is 1 unit of energy; a=0.5 and b=0.5; and the minimum pulse energy requirement is 1 unit of energy:
-
Shot Number                        1          2          3          4          5
Time                             t + 25     t + 25.25  t + 25.5   t + 25.75  t + 26
Seed Energy from Pump Laser (S)   25          0.25       0.25       0.25       0.25
Leftover Fiber Energy (F)          1         13          6.625      3.4375     1.84375
Combined Energy (S + F)           26         13.25       6.875      3.6875     2.09375
Energy Fired (EF)                 13          6.625      3.4375     1.84375    1.046875
- This ability to leverage "quiet" periods to facilitate "busy" periods of laser activity means that the
control circuit 106 can provide highly agile and responsive adaptation to changing circumstances in the field of view. For example, FIG. 16 shows an example where, during a first scan 1600 across azimuths from left to right at a given elevation, the laser source 102 fires 5 laser pulses 122 that are relatively evenly spaced in time (where the laser pulses are denoted by the "X" marks on the scan 1600). If a determination is made that an object of interest is found at range point 1602, the control circuit 106 can operate to interrogate the region of interest 1604 around range point 1602 with a higher density of laser pulses on second scan 1610 across the azimuths from right to left. To facilitate this high density period of rapidly fired laser pulses within the region of interest 1604, the control circuit 106 can use the laser energy model 108 to determine that such high density probing can be achieved by inserting a lower density period 1606 of laser pulses during the time period immediately prior to scanning through the region of interest 1604. In the example of FIG. 16, this lower density period 1606 can be a quiet period where no laser pulses are fired. Such timing schedules of laser pulses can be defined for different elevations of the scan pattern to permit high resolution probing of regions of interest that are detected in the field of view. - The
control circuit 106 can also use theenergy model 108 to ensure that thelaser source 102 does not build up with too much energy. For practitioners that expect thelidar transmitter 100 to exhibit periods of relatively infrequent laser pulse firings, it may be the case that the value for δ in some instances will be sufficiently long that too much energy will build up in thefiber amplifier 116, which can cause problems for the laser source 102 (either due to equilibrium overheating of thefiber amplifier 116 or non-equilibrium overheating of thefiber amplifier 116 when theseed laser 114 induces a large amount of pulse energy to exit the fiber amplifier 116). To address this problem, thecontrol circuit 106 can insert “marker” shots that serve to bleed off energy from thelaser source 102. Thus, even though thelidar transmitter 100 may be primarily operating by transmittinglaser pulses 122 at specific, selected range points, these marker shots can be fired regardless of the selected list of range points to be targeted for the purpose of preventing damage to thelaser source 102. For example, if there is a maximum energy threshold for thelaser source 102 of 25 units of energy, thecontrol circuit 106 can consult thelaser energy model 108 to identify time periods where this maximum energy threshold would be violated. When thecontrol circuit 106 predicts that the maximum energy threshold would be violated because the laser pulses have been too infrequent, thecontrol circuit 106 can provide afiring command 120 to thelaser source 102 before the maximum energy threshold has been passed, which triggers thelaser source 102 to fire the marker shot that bleeds energy out of thelaser source 102 before the laser source's energy has gotten too high. This maximum energy threshold can be tracked and assessed in any of a number of ways depending on how thelaser energy model 108 models the various aspects of laser operation. 
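As a concrete illustration only (not part of the specification), the per-shot bookkeeping used in the toy examples above, together with the idle-time deadline that would prompt a marker shot, can be sketched in Python; the function names and the assumption that stored energy accrues linearly as EP·t between shots are illustrative:

```python
def simulate_shots(e_p, deltas, f_init=1.0, a=0.5, b=0.5):
    """Fired energy EF for each shot in a schedule of inter-shot intervals,
    using the toy-example recurrence: S = EP*delta, EF = a*(S + F),
    with F' = b*(S + F) retained in the fiber amplifier."""
    fired, f = [], f_init
    for delta in deltas:
        combined = e_p * delta + f   # seed energy S plus leftover fiber energy F
        fired.append(a * combined)   # energy transferred into the pulse
        f = b * combined             # energy left behind for the next shot
    return fired

def marker_shot_deadline(f_now, e_p, e_max):
    """Longest time the laser may sit idle before stored energy (F + EP*t)
    would cross the maximum threshold, at which point a marker shot is fired
    to bleed energy out of the laser source."""
    return (e_max - f_now) / e_p
```

Running `simulate_shots(1.0, [0.5] * 5)` reproduces the Energy Fired row of Toy Example 2, and `simulate_shots(1.0, [25.0] + [0.25] * 4)` reproduces Toy Example 4.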
For example, it can be evaluated as a maximum energy threshold for the fiber amplifier 116 if the energy model 108 tracks the energy in the fiber amplifier 116 (S+F) over time. As another example, the maximum energy threshold can be evaluated as a maximum value of the duration δ (which would be set to prevent an amount of seed energy (S) from being deposited into the fiber amplifier 116 that may cause damage, taking the values for EP and a presumed value for F into consideration). - While the toy examples above use simplified values for the model parameters (e.g., the values for EP and δ) for ease of explanation, it should be understood that practitioners can select values for the model parameters or otherwise adjust the model components to accurately reflect the characteristics and capabilities of the
laser source 102 being used. For example, the values for EP, a, and b can be empirically determined from testing of a pulsed fiber laser source (or these values can be provided by a vendor of the pulsed fiber laser source). Moreover, a minimum value for δ can also be a function of the pulsedfiber laser source 102. That is, the pulsed fiber laser sources available from different vendors may exhibit different minimum values for δ, and this minimum value for δ (which reflects a maximum achievable number of shots per second) can be included among the vendor's specifications for its pulsed fiber laser source. - Furthermore, in situations where the pulsed
fiber laser source 102 is expected or observed to exhibit nonlinear behaviors, such nonlinear behavior can be reflected in the model. As an example, it can be expected that the pulsedfiber laser source 102 will exhibit energy inefficiencies at high power levels. In such a case, the modeling of the seed energy (S) can make use of a clipped, offset (affine) model for the energy that gets delivered to thefiber amplifier 116 bypump laser 118 for pulse generation. For example, in this case, the seed energy can be modeled in thelaser energy model 108 as: -
S(t+δ)=EP max(a1δ+a0, offset) - The values for a1, a0, and offset can be empirically measured for the pulsed
fiber laser source 102 and incorporated into the modeling of S(t+δ) used within thelaser energy model 108. It can be seen that for a linear regime, the value for a1 would be 1, and the values for a0 and offset would be 0. In this case, the model for the seed energy S(t+δ) reduces to δEP as discussed in the examples above. - The
control circuit 106 can also update the laser energy model 108 based on feedback that reflects the energies within the actual laser pulses 122. In this fashion, the laser energy model 108 can improve or maintain its accuracy over time. In an example embodiment, the laser source 102 can monitor the energy within laser pulses 122 at the time of firing. This energy amount can then be reported by the laser source 102 to the control circuit 106 (see 250 in FIG. 2B) for use in updating the model 108. Thus, if the control circuit 106 detects an error between the actual laser pulse energy and the modeled pulse energy, then the control circuit 106 can introduce an offset or other adjustment into model 108 to account for this error. - For example, it may be necessary to update the values for a and b to reflect actual operational characteristics of the
laser source 102. As noted above, the values of a and b define how much energy is transferred from thefiber amplifier 116 into thelaser pulse 122 when thelaser source 102 is triggered and theseed laser 114 induces thepulse 122 to exit thefiber amplifier 116. An updated value for a can be computed from the monitored energies in transmitted pulses 122 (PE) as follows: -
a = argmina(Σk=1 . . . N |PE(tk+δk) − a·PE(tk) − (1−a)·δk·EP|2) - In this expression, the values for PE represent the actual pulse energies at the referenced times (tk or tk+δk). This is a regression problem and can be solved using commercial software tools such as those available from MATLAB, Wolfram, PTC, ANSYS, and others. In an ideal world, the respective values for PE(t) and PE(t+δ) will be the same as the modeled values of EF(t) and EF(t+δ). However, for a variety of reasons, the gain factors a and b may vary due to laser efficiency considerations (such as heat or aging whereby back reflections reduce the resonant efficiency in the laser cavity). Accordingly, a practitioner may find it useful to update the
model 108 over time to reflect the actual operational characteristics of thelaser source 102 by periodically computing updated values to use for a and b. - In scenarios where the
laser source 102 does not report its own actual laser pulse energies, a practitioner can choose to include a photodetector at or near an optical exit aperture of the lidar transmitter 100 (e.g., see photodetector 252 in FIG. 2C). The photodetector 252 can be used to measure the energy within the transmitted laser pulses 122 (while allowing laser pulses 122 to propagate into the environment toward their targets), and these measured energy levels can be used to detect potential errors with respect to the modeled energies for the laser pulses so model 108 can be adjusted as noted above. As another example for use in a scenario where the laser source 102 does not report its own actual laser pulse energies, a practitioner can derive laser pulse energy from return data 254 with respect to returns from known fiducial objects in a field of view (such as road signs, which are regulated in terms of their intensity values for light returns) (see 254 in FIG. 2D) as obtained from a point cloud 256 for the lidar system. Additional details about such energy derivations are discussed below. Thus, in such an example, the model 108 can be periodically re-calibrated using point cloud data for returns from such fiducials, whereby the control circuit 106 derives the laser pulse energy that would have produced the pulse return data found in the point cloud 256. This derived amount of laser pulse energy can then be compared with the modeled laser pulse energy for adjustment of the laser energy model 108 as noted above. - Modeling Mirror Motion Over Time:
- In a particularly powerful example embodiment, the
control circuit 106 can also model mirror motion to predict where themirror subsystem 104 will be aimed at a given point in time. This can be especially helpful forlidar transmitters 100 that selectively target specific range points in the field of view withlaser pulses 122. By coupling the modeling of laser energy with a model of mirror motion, thecontrol circuit 106 can set the order of specific laser pulse shots to be fired to targeted range points with highly granular and optimized time scales. As discussed in greater detail below, the mirror motion model can model mirror motion over short time intervals (such as over time intervals in a range from 5-50 nanoseconds). Such a short interval mirror motion model can be referred to as a transient mirror motion model. -
FIG. 3 shows anexample lidar transmitter 100 where thecontrol circuit 106 uses both alaser energy model 108 and amirror motion model 308 to govern the timing schedule forlaser pulses 122. - In an example embodiment, the
mirror subsystem 104 can operate as discussed above in connection with FIG. 1. For example, the control circuit 106 can (1) drive mirror 110 in a resonant mode using a sinusoidal signal to scan mirror 110 across different azimuth angles and (2) drive mirror 112 in a point-to-point mode using a step signal to scan mirror 112 across different elevations, where the step signal will vary as a function of the elevations of the range points to be targeted with laser pulses 122. Mirror 110 can be scanned as a fast-axis mirror, while mirror 112 is scanned as a slow-axis mirror. In such an embodiment, a practitioner can choose to use the mirror motion model 308 to model the motion of mirror 110 only, since, comparatively, mirror 112 can be characterized as effectively static for one or more scans across azimuth angles. -
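As a minimal sketch of these two drive modes (the helper names are illustrative, not from the specification), the fast axis follows a sinusoidal tilt while the slow axis follows a step schedule:

```python
import math

def fast_axis_tilt(t, amplitude, freq):
    """Resonant (sinusoidal) drive: tilt angle of the fast-axis mirror at time t,
    oscillating between +amplitude and -amplitude at scan frequency freq."""
    return amplitude * math.cos(2.0 * math.pi * freq * t)

def slow_axis_steps(target_elevations, step_time):
    """Point-to-point (step) drive: a schedule of (time, elevation) steps that
    holds each targeted elevation until the next step."""
    return [(k * step_time, elev) for k, elev in enumerate(target_elevations)]
```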
FIGS. 4A-4C illustrate how the motion ofmirror 110 can be modeled over time. In these examples, (1) the angle theta (θ) represents the tilt angle ofmirror 110, (2) the angle phi (Φ) represents the angle at which alaser pulse 122 from thelaser source 102 will be incident onmirror 110 whenmirror 110 is in a horizontal position (where θ is zero degrees—seeFIG. 4A ), and (3) the angle mu (μ) represents the angle of pulse 422 as reflected bymirror 110 relative to the horizontal position ofmirror 110. In this example, the angle μ can represent the scan angle of themirror 110, where this scan angle can also be referred to as a shot angle formirror 110 as angle μ corresponds to the angle at which reflectedlaser pulse 122′ will be directed into the field of view if fired at that time. -
FIG. 4A showsmirror 110, wheremirror 110 is at “rest” with a tilt angle θ of zero degrees, which can be characterized as the horizon ofmirror 110.Laser source 102 is oriented in a fixed position so thatlaser pulses 122 will impactmirror 110 at the angle Φ relative to the horizontal position ofmirror 110. Given the property of reflections, it should be understood that the value of the shot angle μ will be the same as the value of angle Φ when themirror 110 is horizontal (where θ=0). -
FIG. 4B showsmirror 110 when it has been tilted aboutpivot 402 to a positive non-zero value of θ. It can be seen that the tilting of mirror to angle θ will have the effect of steering the reflectedlaser pulse 122′ clockwise and to the right relative to the angle of the reflectedlaser pulse 122′ inFIG. 4A (whenmirror 110 was horizontal). -
Mirror 110 will have a maximum tilt angle that can be referred to as the amplitude A of mirror 110. Thus, it can be understood that mirror 110 will scan through its tilt angles between the values of −A (which corresponds to −θMax) and +A (which corresponds to +θMax). It can be seen that the angle of reflection for the reflected laser pulse 122′ relative to the actual position of mirror 110 is the sum of θ+Φ as shown by FIG. 4B. It then follows that the value of the shot angle μ will be equal to 2θ+Φ, as can be seen from FIG. 4B. - When driven in a resonant mode according to a sinusoidal control signal,
mirror 110 will change its tilt angle θ according to a cosine oscillation, where its rate of change is slowest at the ends of its scan (when it changes its direction of tilt) and fastest at the mid-point of its scan. In an example where themirror 110 scans between maximum tilt angles of −A to +A, the value of the angle θ as a function of time can be expressed as: -
θ=A cos(2πft) - where f represents the scan frequency of
mirror 110 and t represents time. Based on this model, it can be seen that the value for θ can vary from A (when t=0) to 0 (when t is a value corresponding to 90 degrees of phase (or 270 degrees of phase) to −A (when t is a value corresponding to 180 degrees of phase). - This means that the value of the shot angle μ can be expressed as a function of time by substituting the cosine expression for θ into the expression for the shot angle of μ=2θ+Φ as follows:
-
μ=2A cos(2πft)+Φ - From this expression, one can then solve for t to produce an expression as follows:
-
t=arccos((μ−Φ)/(2A))/(2πf)
- This expression thus identifies the time t at which the scan of
mirror 110 will target a given shot angle μ. Thus, when thecontrol circuit 106 wants to target a shot angle of μ, the time at which mirror 110 will scan to this shot angle can be readily computed given that the values for Φ, A, and f will be known. In this fashion, themirror motion model 308 can model that shot angle as a function of time and predict the time at which themirror 110 will target a particular shot angle. -
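A minimal sketch of this shot-angle/time relationship (the function names are illustrative; angles are in degrees and time in seconds):

```python
import math

def shot_angle(t, amplitude, phi, freq):
    """Shot angle mu(t) = 2*A*cos(2*pi*f*t) + phi for a resonant mirror
    of amplitude A scanned at frequency f, with laser incidence angle phi."""
    return 2.0 * amplitude * math.cos(2.0 * math.pi * freq * t) + phi

def time_of_shot_angle(mu, amplitude, phi, freq):
    """First time within a scan at which the mirror targets shot angle mu:
    t = arccos((mu - phi) / (2*A)) / (2*pi*f)."""
    return math.acos((mu - phi) / (2.0 * amplitude)) / (2.0 * math.pi * freq)
```

`time_of_shot_angle` returns the first such time in a scan period; later passes through the same shot angle follow from the periodicity of the cosine.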
FIG. 4C showsmirror 110 when it has been tilted aboutpivot 402 to a negative non-zero value of −θ. It can be seen that the tilting of mirror to angle −θ will have the effect of steering the reflectedlaser pulse 122′ counterclockwise and to the left relative to the angle of the reflectedlaser pulse 122′ inFIG. 4A (whenmirror 110 was horizontal).FIG. 4C also demonstrates a constraint for a practitioner on the selection of the value for the angle Φ.Laser source 102 will need to be positioned so that the angle Φ is greater than the value of A to avoid a situation where the underside of the tiltedmirror 110 occludes thelaser pulse 122 when mirror is tilted to a value of θ that is greater than Φ. Furthermore, the value of the angle Φ should not be 90° to avoid a situation where themirror 110 will reflect thelaser pulse 122 back into thelaser source 102. A practitioner can thus position thelaser source 102 at a suitable angle Φ accordingly. -
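These two mounting constraints on the incidence angle Φ can be captured in a small check (a sketch; the function name is illustrative):

```python
def valid_incidence_angle(phi_deg, amplitude_deg):
    """Mounting constraint on the laser's incidence angle phi (degrees):
    phi must exceed the mirror amplitude A so the tilted mirror never
    occludes the incoming pulse, and must not be 90 degrees, which would
    reflect the pulse straight back into the laser source."""
    return phi_deg > amplitude_deg and phi_deg != 90.0
```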
FIG. 4D illustrates a translation of this relationship to how the mirror 110 scans across a field of view 450. The mirror 110 will alternately scan in a left-to-right direction 452 and right-to-left direction 454 as mirror 110 tilts between its range of tilt angles (e.g., θ=−A through +A). For the example of FIG. 4A where the value for θ is zero, this means that a laser pulse fired at the untilted mirror 110 will be directed as shown by 460 in FIG. 4D, where the laser pulse is directed toward a range point at the mid-point of the scan. The shot angle μ for this "straight ahead" gaze is Φ as discussed above in connection with FIG. 4A. As the angle θ increases from θ=0, this will cause the laser pulses directed by mirror 110 to scan to the right in the field of view until the mirror 110 tilts to the angle θ=+A. When θ=+A, mirror 110 will be at the furthest extent of its rightward scan 452, and it will direct a laser pulse as shown by 462. The shot angle μ for this rightmost scan position will be the value μ=2A+Φ. From that point, the mirror 110 will begin scanning leftward in direction 454 by reducing its tilt angle θ. The mirror 110 will once again scan through the mid-point and eventually reach a tilt angle of θ=−A. When θ=−A, mirror 110 will be at the furthest extent of its leftward scan 454, and it will direct a laser pulse as shown by 464. The shot angle μ for this leftmost scan position will be the value μ=Φ−2A. From that point, the mirror 110 will begin tilting in the rightward direction 452 again, and the scan repeats. As noted above, due to the mirror motion model 308, the control circuit 106 will know the time at which the mirror 110 is targeting a shot angle of μi to direct a laser pulse as shown by 466 of FIG. 4D. - In an example embodiment, the values for +A and −A can be values in a range between +/−10 degrees and +/−20 degrees (e.g., +/−16 degrees) depending on the nature of the mirror chosen as
mirror 110. In an example where A is 16 degrees andmirror 110 scans as discussed above in connection withFIGS. 4A-4D , it can be understood that the angular extent of the scan formirror 110 would be 64 degrees (or 2A from the scan mid-point in both the right and left directions for a total of 4A). - In some example embodiments, the value for A in the
mirror motion model 308 can be a constant value. However, some practitioners may find it desirable to deploy amirror 110 that exhibits an adjustable value for A (e.g., a variable amplitude mirror such as a variable amplitude MEMS mirror can serve as mirror 110). From the relationships discussed above, it can be seen that the time required to move between two shot angles is reduced when the value for amplitude A is reduced. Thecontrol circuit 106 can leverage this relationship to determine whether it is desirable to adjust the amplitude of themirror 110 before firing a sequence oflaser pulses 122.FIG. 4E shows an example process flow in this regard. Atstep 470, thecontrol circuit 106 determines the settle time (ts) for changing the amplitude from A to A′ (where A′<A). It should be understood that changing the mirror amplitude in this fashion will introduce a time period where the mirror is relatively unstable, and time will need to be provided to allow the mirror to settle down to a stable position. This settling time can be empirically determined or tracked for themirror 110, and thecontrol circuit 106 can maintain this settle time value as a control parameter. Atstep 472, thecontrol circuit 106 determines the time it will take to collect a shot list data set in a circumstance where the amplitude of the mirror is unchanged (amplitude remains A). This time can be referenced as collection time tc. This value for tc can be computed through the use of thelaser energy model 108 andmirror motion model 308 with reference to the shots included in a subject shot list. Atstep 474, thecontrol circuit 106 determines the time it will take to collect the same shot list data set in a circumstance where the amplitude of the mirror is changed to A′. This time can be referenced as collection time tc′. 
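The comparison this process flow builds toward, namely changing the amplitude only when the reduced-amplitude collection time plus the settle time beats the current collection time, can be sketched as follows (the function and parameter names are illustrative):

```python
def should_reduce_amplitude(t_collect, t_collect_reduced, t_settle):
    """Amplitude-change decision per the FIG. 4E process flow: adjust the
    mirror amplitude from A to A' only if collecting the shot list at the
    reduced amplitude (tc'), plus the settle time (ts), is faster than
    collecting it at the current amplitude (tc)."""
    return t_collect_reduced + t_settle < t_collect
```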
This value for tc′ can be computed through the use of thelaser energy model 108 and mirror motion model 308 (as adjusted in view of the reduced amplitude of A′) with reference to the shots included in the subject shot list. Atstep 476, the control circuit compares tc with the sum of tc′ and ts. If the sum (tc′+ts) is less than tc, this means that it will be time efficient to change the mirror amplitude to A′. In this circumstance, the process flow proceeds to step 478, and thecontrol circuit 106 adjusts the amplitude ofmirror 110 to A′. If the sum (tc′+ts) is not less than tc, then thecontrol circuit 106 leaves the amplitude value unchanged (step 480). - Model-Based Shot Scheduling:
-
FIG. 5 shows an example process flow for thecontrol circuit 106 to use both thelaser energy model 108 and themirror motion model 308 to determine the timing schedule forlaser pulses 122. Step 200 can operate as described above with reference toFIG. 2A to maintain thelaser energy model 108. Atstep 500, thecontrol circuit 106 maintains themirror motion model 308. As discussed above, thismodel 308 can model the shot angle that the mirror will target as a function of time. Accordingly, themirror motion model 308 can predict the shot angle ofmirror 110 at a given time t. To maintain and update themodel 308, thecontrol circuit 108 can establish the values for A, Φ, and f to be used for themodel 308. These values can be read from memory or determined from the operating parameters for the system. - At step 502, the
control circuit 106 determines a timing schedule for laser pulses 122 using the laser energy model 108 and the mirror motion model 308. By linking the laser energy model 108 and the mirror motion model 308 in this regard, the control circuit 106 can determine how much energy is available for laser pulses targeted toward any of the range points in the scan pattern of mirror subsystem 104. For purposes of discussion, we will consider an example embodiment where mirror 110 scans in azimuth between a plurality of shot angles at a high rate while mirror 112 scans in elevation at a sufficiently slower rate, so that the discussion below will assume that the elevation is held steady while mirror 110 scans back and forth in azimuth. However, the techniques described herein can be readily extended to modeling the motion of both mirrors 110 and 112. - If there is a desire to target a range point at a Shot Angle A with a laser pulse of at least X units of energy, the
control circuit 106, at step 502, can consult thelaser energy model 108 to determine whether there is sufficient laser energy for the laser pulse when themirror 110's scan angle points at Shot Angle A. If there is sufficient energy, thelaser pulse 122 can be fired when themirror 110 scans to Shot Angle A. If there is insufficient energy, thecontrol circuit 106 can wait to take the shot until aftermirror 110 has scanned through and back to pointing at Shot Angle A (if thelaser energy model 108 indicates there is sufficient laser energy when the mirror returns to Shot Angle A). Thecontrol circuit 106 can compare the shot energy requirements for a set of shot angles to be targeted with laser pulses to determine when thelaser pulses 122 should be fired. Upon determination of the timing schedule for thelaser pulses 122, thecontrol circuit 106 can generate and provide firing commands 120 to thelaser source 102 based on this determined timing schedule (step 504). -
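A greatly simplified sketch of this fire-or-defer logic (not the patent's implementation): shots are attempted in scan order on each pass, energy is modeled as accruing linearly at rate EP since the previous shot with gain factors a and b as in the toy examples, and any shot whose modeled pulse energy falls short of its minimum is deferred to a later pass; scan-direction reversal is ignored for simplicity:

```python
def schedule_shots(shots, e_p, f_init, pass_time, a=0.5, b=0.5, max_passes=10):
    """Fire-or-defer scheduling across repeated scan passes.

    shots: list of (shot_angle, time_within_pass, min_energy), in scan order.
    Returns a list of (shot_angle, fire_time) pairs."""
    remaining, scheduled = list(shots), []
    f, t_prev = f_init, 0.0
    for pass_idx in range(max_passes):
        if not remaining:
            break
        deferred = []
        for angle, t_in_pass, e_min in remaining:
            t = pass_idx * pass_time + t_in_pass
            combined = e_p * (t - t_prev) + f   # seed energy accrued plus leftover F
            if a * combined >= e_min:
                scheduled.append((angle, t))    # enough energy: fire on this pass
                f, t_prev = b * combined, t
            else:
                deferred.append((angle, t_in_pass, e_min))  # retry on a later pass
        remaining = deferred
    return scheduled
```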
FIGS. 6A and 6B show example process flows for implementingsteps 502 and 504 ofFIG. 5 in a scenario where themirror subsystem 104 includesmirror 110 that scans through azimuth shot angles in a resonant mode (fast-axis) andmirror 112 that scans through elevation shot angles in a point-to-point mode (slow-axis).Lidar transmitter 100 in these examples seeks to firelaser pulses 122 at intelligently selected range points in the field of view. With the example ofFIG. 6A , thecontrol circuit 106 schedules shots for batches of range points at a given elevation on whichever scan direction of themirror 110 is schedulable for those range points according to thelaser energy model 108. With the example ofFIG. 6B , thecontrol circuit 106 seeks to schedule shots for as many range points as it can at a given elevation for each scan direction of themirror 110 in view of thelaser energy model 108. For any shots at the subject elevation that cannot be scheduled for a given scan direction due to energy model constraints, thecontrol circuit 106 then seeks to schedule those range points on the reverse scan (and so on until all of the shots are scheduled). - The process flow of
FIG. 6A begins withstep 600. Atstep 600, thecontrol circuit 106 receives a list of range points to be targeted with laser pulses. These range points can be expressed as (azimuth angle, elevation angle) pairs, and they may be ordered arbitrarily. - At
step 602, thecontrol circuit 106 sorts the range points by elevation to yield sets of azimuth shot angles sorted by elevation. The elevation-sorted range points can also be sorted by azimuth shot angle (e.g., where all of the shot angles at a given elevation are sorted in order of increasing azimuth angle (smallest azimuth shot angle to largest azimuth shot angle) or decreasing azimuth angle (largest azimuth shot angle to smallest azimuth shot angle). For the purposes of discussing the process flows ofFIGS. 6A and 6B , these azimuth shot angles can be referred to as the shot angles for thecontrol circuit 106. Step 602 produces apool 650 of range points to be targeted with shots (sorted by elevation and then by shot angle). - At
step 604, thecontrol circuit 106 selects a shot elevation from among the shot elevations in the sorted list of range points inpool 650. Thecontrol circuit 106 can make this selection on the basis of any of a number of criteria. The order of selection of the elevations will govern which elevations are targeted withlaser pulses 122 before others. - Accordingly, in an example embodiment, the
control circuit 106 can prioritize the selection of elevations at step 604 that are expected to encompass regions of interest in the field of view. As an example, some practitioners may find the horizon in the field of view (e.g., a road horizon) to be high priority for targeting with laser pulses 122. In such a case, step 604 can operate as shown by FIG. 17A to determine the elevation(s) which correspond to a horizon in the field of view (e.g., identify the elevations at or near the road horizon) (see step 1700) and then prioritize the selection of those elevations from pool 650 (see step 1702). Step 1700 can be performed by analyzing lidar return point cloud data and/or camera images of the field of view to identify regions in the field of view that are believed to qualify as the horizon (e.g., using contrast detection techniques, edge detection techniques, and/or other pattern processing techniques applied to lidar or image data). - As another example, the
control circuit 106 can prioritize the selection of elevations based on the range(s) to detected object(s) in the field of view. Some practitioners may find it desirable to prioritize the shooting of faraway objects in the field of view. Other practitioners may find it desirable to prioritize the shooting of nearby objects in the field of view. Thus, in an example such as that shown by FIG. 17B, the range(s) applicable to detected object(s) is determined (see step 1706). This range information will be available from the lidar return point cloud data. At step 1708, the control circuit sorts the detected object(s) by their determined range(s). Then, at step 1710, the control circuit 106 prioritizes the selection of elevations from pool 650 based on the determined range(s) for object(s) included in those elevations. With step 1710, priority can be given to larger range values over smaller range values if the practitioner wants to shoot faraway objects before nearby objects. For practitioners that want to shoot nearby objects before faraway objects, step 1710 can give priority to smaller range values over larger range values. Which objects are deemed faraway and which are deemed nearby can be controlled using any of a number of techniques. For example, a range threshold can be defined, and the control circuit 106 can make the elevation selections based on which elevations include sorted objects whose range is above (or below as the case may be) the defined range threshold. As another example, the relative ranges for the sorted objects can be used to control the selection of elevations (where the sort order of either farthest to nearest or nearest to farthest governs the selection of elevations which include those objects).
- As yet another example, the
control circuit 106 can prioritize the selection of elevations based on the velocity(ies) of detected object(s) in the field of view. Some practitioners may find it desirable to prioritize the shooting of fast-moving objects in the field of view. FIG. 17C shows an example process flow for this. At step 1714, the velocity is determined for each detected object in the field of view. This velocity information can be derived from the lidar return point cloud data. At step 1716, the control circuit 106 can sort the detected object(s) by the determined velocity(ies). The control circuit 106 can then use the determined velocities for the sorted objects as a basis for prioritizing the selection of elevations which contain those detected objects (step 1718). This prioritization at step 1718 can be carried out in any of a number of ways. For example, a velocity threshold can be defined, and step 1718 can prioritize the selection of elevations that include an object moving at or above this defined velocity threshold. As another example, the relative velocities of the sorted objects can be used, where an elevation that includes an object moving faster than another object can be selected before an elevation that includes the other (slower-moving) object.
- As yet another example, the
control circuit 106 can prioritize the selection of elevations based on the directional heading(s) of detected object(s) in the field of view. Some practitioners may find it desirable to prioritize the shooting of objects in the field of view that are moving toward the lidar transmitter 100. FIG. 17D shows an example process flow for this. At step 1720, the directional heading is determined for each detected object in the field of view. This directional heading can be derived from the lidar return point cloud data. The control circuit 106 can then prioritize the selection of elevation(s) that include object(s) that are determined to be heading toward the lidar transmitter 100 (step 1722) (within some specified degree of tolerance, where an elevation that contains an object heading toward or near the lidar transmitter 100 would be selected before an elevation that contains an object moving away from the lidar transmitter 100).
- Further still, some practitioners may find it desirable to combine the process flows of
FIGS. 17C and 17D to prioritize the selection of elevations containing fast-moving objects that are heading toward the lidar transmitter 100. An example for this is shown by FIG. 17E. With FIG. 17E, steps 1714 and 1720 can be performed as discussed above. At step 1724, the detected object(s) are sorted by their directional headings (relative to the lidar transmitter 100) and then by the determined velocities. At step 1726, the elevations which contain objects deemed to be heading toward the lidar transmitter 100 (and moving faster than other such objects) are prioritized for selection.
- In another example embodiment, the
control circuit 106 can select elevations at step 604 based on eye safety or camera safety criteria. For example, eye safety requirements may specify that the lidar transmitter 100 should not direct more than a specified amount of energy into a specified spatial area over a specified time period. To reduce the risk of firing too much energy into the specified spatial area, the control circuit 106 can select elevations in a manner that avoids successive selections of adjacent elevations (e.g., jumping from Elevation 1 to Elevation 3 rather than Elevation 2) to insert more elevation separation between laser pulses that may be fired close in time. This manner of elevation selection may optionally be implemented dynamically (e.g., where elevation skips are introduced if the control circuit 106 determines that the energy in a defined spatial area has exceeded some level that is below but approaching the eye safety thresholds). Furthermore, it should be understood that the number of elevations to skip (a skip interval) can be a value selected by a practitioner or user to define how many elevations will be skipped when progressing from elevation to elevation. As such, a practitioner may choose to set the elevation skip interval to be a value larger than 1 (e.g., a skip interval of 5, which would cause the system to progress from Elevation 3 to Elevation 9). Furthermore, similar measures can be taken to avoid hitting cameras that may be located in the field of view with too much energy. FIG. 17F depicts an example process flow for this approach. At step 1730, the control circuit 106 selects Elevation Xt (where this selected elevation is larger (or smaller) than the preceding selected elevation (Elevation Xt−1) by the defined skip interval). Then, the control circuit 106 schedules the shots for the selected elevation (step 1732), and the process flow returns to step 1730 where the next elevation (Elevation Xt+1) is selected (according to the skip interval relative to Elevation Xt). 
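The skip-interval progression described above can be expressed as a minimal sketch (assumptions: Python, a hypothetical next_elevation() helper, and the convention from the examples above that the skip interval counts the elevations that are skipped over):

```python
# Hypothetical sketch of the FIG. 17F skip-interval progression. The helper
# name and the interpretation of "skip interval" as the number of elevations
# skipped over are assumptions drawn from the examples in the text.
def next_elevation(current_elevation, skip_interval):
    # Skipping skip_interval elevations means advancing by skip_interval + 1.
    return current_elevation + skip_interval + 1

# Skip interval of 1: Elevation 1 is followed by Elevation 3 (Elevation 2 is skipped).
print(next_elevation(1, 1))  # 3
# Skip interval of 5: Elevation 3 is followed by Elevation 9.
print(next_elevation(3, 5))  # 9
```

This matches both worked examples in the text (1 to 3 with a skip interval of 1, and 3 to 9 with a skip interval of 5).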
- Thus, it should be understood that
step 604 can employ a prioritized classification system that decides the order in which elevations are to be targeted with laser pulses 122 based on the criteria of FIGS. 17A-17F or any combinations of any of these criteria.
- At
step 606, the control circuit 106 generates a mirror control signal for mirror 112 to drive mirror 112 so that it targets the angle of the selected elevation. As noted, this mirror control signal can be a step signal that steps mirror 112 up (or down) to the desired elevation angle. In this fashion, it can be understood that the control circuit 106 will be driving mirror 112 in a point-to-point mode where the mirror control signal for mirror 112 will vary as a function of the range points to be targeted with laser pulses (and more precisely, as a function of the order of range points to be targeted with laser pulses).
- At
step 608, the control circuit 106 selects a window of azimuth shot angles that are in the pool 650 at the selected elevation. The size of this window governs how many shot angles the control circuit 106 will order for a given batch of laser pulses 122 to be fired. This window size can be referred to as the search depth for the shot scheduling. A practitioner can configure the control circuit 106 to set this window size based on any of a number of criteria. While the toy examples discussed below use a window size of 3 for purposes of illustration, it should be understood that practitioners may want to use a larger (or smaller) window size in practice. For example, in an example embodiment, the size of the window may be a value in a range between 2 shots and 12 shots. However, should the control circuit 106 have larger capacities for parallel processing or should there be more lenient time constraints on latency, a practitioner may find it desirable to choose larger window sizes. Furthermore, the control circuit 106 can consider a scan direction for the mirror 110 when selecting the shot angles to include in this window. Thus, if the control circuit 106 is scheduling shots for a scan direction corresponding to increasing shot angles, the control circuit 106 can start from the smallest shot angle in the sorted pool 650 and include progressively larger shot angles in the shot angle sort order of the pool 650. Similarly, if the control circuit 106 is scheduling shots for a scan direction corresponding to decreasing shot angles, the control circuit 106 can start from the largest shot angle in the sorted pool 650 and include progressively smaller shot angles in the shot angle sort order of the pool 650.
- At step 610, the
control circuit 106 determines an order for the shot angles in the selected window using the laser energy model 108 and the mirror motion model 308. As discussed above, this ordering operation can compare candidate orderings with criteria such as energy requirements relating to the shots to find a candidate ordering that satisfies the criteria. Once a valid candidate ordering of shot angles is found, this can be used as the ordered shot angles that will define the timing schedule for the selected window of laser pulses 122. Additional details about example embodiments for implementing step 610 are discussed below.
- Once the shot angles in the selected window have been ordered at step 610, the
control circuit 106 can add these ordered shot angles to the shot list 660. As discussed in greater detail below, the shot list 660 can include an ordered listing of shot angles and a scan direction corresponding to each shot angle.
- At
step 612, the control circuit 106 determines whether there are any more shot angles in pool 650 to consider at the selected elevation. In other words, if the window size does not encompass all of the shot angles in the pool 650 at the selected elevation, then the process flow can loop back to step 608 to grab another window of shot angles from the pool 650 for the selected elevation. The process flow can then perform steps 610 and 612 for the shot angles in this next window.
- Once all of the shots have been scheduled for the shot angles at the selected elevation, the process flow can loop back from
step 612 to step 604 to select the next elevation from pool 650 for shot angle scheduling. As noted above, this selection can proceed in accordance with a defined prioritization of elevations. From there, the control circuit 106 can perform steps 606-614 for the shot angles at the newly selected elevation.
- Meanwhile, at
step 614, the control circuit 106 generates firing commands 120 for the laser source 102 in accordance with the determined order of shot angles as reflected by shot list 660. By providing these firing commands 120 to the laser source 102, the control circuit 106 triggers the laser source 102 to transmit the laser pulses 122 in synchronization with the mirrors 110 and 112 so that each laser pulse 122 targets its desired range point in the field of view. Thus, if the shot list includes Shot Angles A and C to be fired at during a left-to-right scan of the mirror 110, the control circuit 106 can use the mirror motion model 308 to identify the times at which mirror 110 will be pointing at Shot Angles A and C on a left-to-right scan and generate the firing commands 120 accordingly. The control circuit 106 can also update the pool 650 to mark the range points corresponding to the firing commands 120 as being "fired" to effectively remove those range points from the pool 650.
- In the example of
FIG. 6B, as noted above, the control circuit 106 seeks to schedule as many shots as possible on each scan direction of mirror 110. The steps shared with FIG. 6A can operate as discussed above in connection with FIG. 6A.
- At
step 620, the control circuit 106 selects a scan direction of mirror 110 to use for scheduling. A practitioner can choose whether this scheduling is to start with a left-to-right scan direction or a right-to-left scan direction. Then, step 608 can operate as discussed above in connection with FIG. 6A, but where the control circuit 106 uses the scan direction selected at step 620 to govern which shot angles are included in the selected window. Thus, if the selected scan direction corresponds to increasing shot angles, the control circuit 106 can start from the smallest shot angle in the sorted pool 650 and include progressively larger shot angles in the shot angle sort order of the pool 650. Similarly, if the selected scan direction corresponds to decreasing shot angles, the control circuit 106 can start from the largest shot angle in the sorted pool 650 and include progressively smaller shot angles in the shot angle sort order of the pool 650.
- At step 622, the
control circuit 106 determines an order for the shot angles based on the laser energy model 108 and the mirror motion model 308 as discussed above for step 610, but where the control circuit 106 will only schedule shot angles if the laser energy model 108 indicates that those shot angles are schedulable on the scan corresponding to the selected scan direction. Scheduled shot angles are added to the shot list 660. But, if the laser energy model 108 indicates that the system needs to wait until the next return scan (or later) to take a shot at a shot angle in the selected window, then the scheduling of that shot angle can be deferred until the next scan direction for mirror 110 (see step 624). This effectively returns the unscheduled shot angle to pool 650 for scheduling on the next scan direction if possible.
- At
step 626, the control circuit 106 determines if there are any more shot angles in pool 650 at the selected elevation that are to be considered for scheduling on the scan corresponding to the selected scan direction. If so, the process flow returns to step 608 to grab another window of shot angles at the selected elevation (once again taking into consideration the sort order of shot angles at the selected elevation in view of the selected scan direction).
- Once the
control circuit 106 has considered all of the shot angles at the selected elevation for scheduling on the selected scan direction, the process flow proceeds to step 628 where a determination is made as to whether there are any more unscheduled shot angles from pool 650 at the selected elevation. If so, the process flow loops back to step 620 to select the next scan direction (i.e., the reverse scan direction). From there, the process flow proceeds through the scheduling steps discussed above to add shots to the shot list 660. Once step 628 results in a determination that all of the shot angles at the selected elevation have been scheduled, the process flow can loop back to step 604 to select the next elevation from pool 650 for shot angle scheduling. As noted above, this selection can proceed in accordance with a defined prioritization of elevations, and the control circuit 106 can perform the scheduling steps for the shot angles at the newly selected elevation.
- Thus, it can be understood that the process flow of
FIG. 6B will seek to schedule all of the shot angles for a given elevation during a single scan of mirror 110 (from left-to-right or right-to-left as the case may be) if possible in view of the laser energy model 108. However, should the laser energy model 108 indicate that more time is needed to fire shots at the desired shot angles, then some of the shot angles may be scheduled for the return scan (or a subsequent scan) of mirror 110.
- It should also be understood that the
control circuit 106 will always be listening for new range points to be targeted with new laser pulses 122. As such, steps 600 and 602 can be performed while the other steps of the FIG. 6A process flow (or the FIG. 6B process flow) are being performed. Similarly, step 614 can be performed by the control circuit 106 while the other steps of the FIGS. 6A and 6B process flows are being performed. Furthermore, it should be understood that the process flows of FIGS. 6A and 6B can accommodate high priority requests for range point targeting. For example, as described in U.S. Pat. No. 10,495,757, the entire disclosure of which is incorporated herein by reference, a request may be received to target a set of range points in a high priority manner.
- Thus, the
control circuit 106 can also always be listening for such high priority requests and then cause the process flow to quickly begin scheduling the firing of laser pulses toward such range points. In a circumstance where a high priority targeting request causes the control circuit 106 to interrupt its previous shot scheduling, the control circuit 106 can effectively pause the current shot schedule, schedule the new high priority shots (using the same scheduling techniques), and then return to the previous shot schedule once laser pulses 122 have been fired at the high priority targets.
- Accordingly, as the process flows of
FIGS. 6A and 6B work their way through the list of range points in pool 650, the control circuit 106 will provide improved scheduling of laser pulses 122 fired at those range points through use of the laser energy model 108 and mirror motion model 308 as compared against defined criteria such as shot energy thresholds for those shots. Moreover, by modeling laser energy and mirror motion over short time intervals on the order of nanoseconds using transient models as discussed above, these shot scheduling capabilities of the system can be characterized as hyper temporal because highly precise shots with highly precise energy amounts can be accurately scheduled over short time intervals if necessary.
- While
FIGS. 6A and 6B show their process flows as an iterated sequence of steps, it should be understood that if the control circuit 106 has sufficient parallelized logic resources, then many of the iterations can be unrolled and performed in parallel without the need for return loops (or using fewer returns through the steps). For example, different windows of shot angles at the selected elevation can be processed in parallel with each other if the control circuit 106 has sufficient parallelized logic capacity. Similarly, the control circuit 106 can also work on scheduling for different elevations at the same time if it has sufficient parallelized logic capacity.
-
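As a loose illustration of this unrolling (a Python thread-pool sketch standing in for parallelized FPGA/ASIC logic; the window contents and the schedule_window() placeholder are assumptions, not the patent's implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_window(window):
    # Placeholder for the per-window ordering work of steps 608-612; a real
    # implementation would order the window via the laser energy model and
    # mirror motion model rather than a simple sort.
    return sorted(window)

# Three windows of azimuth shot angles at a selected elevation (illustrative).
windows = [[20.0, 5.0, 10.0], [40.0, 35.0], [55.0, 50.0, 60.0]]

# With sufficient parallel resources, the windows can be processed
# concurrently instead of iterating through them one at a time.
with ThreadPoolExecutor() as executor:
    ordered = list(executor.map(schedule_window, windows))
print(ordered)  # [[5.0, 10.0, 20.0], [35.0, 40.0], [50.0, 55.0, 60.0]]
```

The same pattern extends to scheduling several elevations concurrently when capacity allows.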
FIG. 7A shows an example process flow for carrying out step 610 of FIG. 6A. At step 700, the control circuit 106 creates shot angle order candidates from the shot angles that are within the window selected at step 608. These candidates can be created based on the mirror motion model 308.
- For example, as shown by
FIG. 7B, the times at which the mirror 110 will target the different potential shot angles can be predicted using the mirror motion model 308. Thus, each shot angle can be assigned a time slot 710 with respect to the scan of mirror 110 across azimuth angles (and back). As shown by FIG. 7B, if mirror 110 starts at Angle Zero at Time Slot 0, it will then scan to Angle A at Time Slot 1, then scan to Angle B at Time Slot 2, and so on through its full range of angles (which in the example of FIG. 7B reaches Angle J before the mirror 110 begins scanning back toward Angle Zero). The time slots for these different angles can be computed using the mirror motion model 308. Thus, if the window of shot angles identifies Angle A, Angle C, and Angle I as the shot angles, then the control circuit 106 will know which time slots of the mirror scan for mirror 110 will target those shot angles. For example, according to FIG. 7B, Time Slots 1, 3, and 9 will target Shot Angles A, C, and I respectively during the first left-to-right scan. Time Slot 11 will also target Angle I (as shown by FIG. 7B), while Time Slots 17 and 19 will target Shot Angles C and A respectively during the return (right-to-left) scan. In an example embodiment, time slots 710 can correspond to time intervals in a range between around 5 nanoseconds and around 50 nanoseconds, which would correspond to angular intervals of around 0.01 to 0.1 degrees if mirror 110 is scanning at 12 kHz over an angular extent of 64 degrees (where +/−A is +/−16 degrees).
- To create the order candidates at
step 700, the control circuit 106 can generate different permutations of time slot sequences for different orders of the shot angles in the selected window. Continuing with an example where the shot angles are A, C, and I, step 700 can produce the following set of example order candidates (where each order candidate can be represented by a time slot sequence):
-
Order Candidate | Time Slot Sequence | Comments
Candidate 1 | 1, 3, 9 | This would correspond to firing laser pulses in the shot angle order of ACI during the first scan for mirror 110 (which moves from left-to-right).
Candidate 2 | 1, 9, 17 | This would correspond to firing laser pulses in the shot angle order of AIC, where laser pulses are fired at Shot Angles A and I during the first scan for mirror 110 and where the laser pulse is fired at Shot Angle C during the second (return) scan for mirror 110 (where this second scan moves from right-to-left).
Candidate 3 | 3, 9, 19 | This would correspond to firing laser pulses in the shot angle order of CIA, where laser pulses are fired at Shot Angles C and I during the first scan for mirror 110 and where the laser pulse is fired at Shot Angle A during the second (return) scan for mirror 110.
Candidate 4 | 3, 9, 21 | This would correspond to firing laser pulses in the shot angle order of CIA, where laser pulses are fired at Shot Angles C and I during the first scan for mirror 110 and where the laser pulse is fired at Shot Angle A during the third scan for mirror 110 (which moves from left-to-right).
. . .
- It should be understood that the
control circuit 106 could create additional candidate orderings from different permutations of time slot sequences for Shot Angles A, C, and I. A practitioner can choose to control how many of such candidates will be considered by the control circuit 106.
- At step 702, the
control circuit 106 simulates the performance of the different order candidates using the laser energy model 108 and the defined shot requirements. As discussed above, these shot requirements may include requirements such as minimum energy thresholds for each laser pulse (which may be different for each shot angle), maximum energy thresholds for each laser pulse (or for the laser source), and/or desired energy levels for each laser pulse (which may be different for each shot angle).
- To reduce computational latency, this simulation and comparison with shot requirements can be performed in parallel for a plurality of the different order candidates using parallelized logic resources of the
control circuit 106. An example of such a parallelized implementation of step 702 is shown by FIG. 7C. In the example of FIG. 7C, steps 720, 722, and 724 are performed in parallel with respect to a plurality of the different time slot sequences that serve as the order candidates. Thus, steps 720 a, 722 a, and 724 a are performed for Time Slot Sequence 1; steps 720 b, 722 b, and 724 b are performed for Time Slot Sequence 2; and so on through steps 720 n, 722 n, and 724 n for Time Slot Sequence n.
- At step 720, the
control circuit 106 uses the laser energy model 108 to predict the energy characteristics of the laser source and the resultant laser pulses if laser pulse shots are fired at the time slots corresponding to the subject time slot sequence. These modeled energies can then be compared to criteria such as a maximum laser energy threshold and a minimum laser energy threshold to determine if the time slot sequence would be a valid sequence in view of the system requirements. At step 722, the control circuit 106 can label each tested time slot sequence as valid or invalid based on this comparison between the modeled energy levels and the defined energy requirements. At step 724, the control circuit 106 can compute the elapsed time that would be needed to fire all of the laser pulses for each valid time slot sequence. For example, Candidate 1 from the example above would have an elapsed time duration of 9 units of time, while Candidate 2 from the example above would have an elapsed time duration of 17 units of time.
-
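The Angle A/C/I bookkeeping above can be reproduced with a small sketch (assumptions: Python; angles Zero through J indexed 0 through 10; a left-to-right scan visits index k at slot k and the return scan revisits it at slot 20 - k, repeating every 20 slots; elapsed time is taken as the last slot of a sequence; helper names are illustrative):

```python
from itertools import permutations

# Illustrative reconstruction of the FIG. 7B time-slot layout and the
# step 700/724 bookkeeping. All names and the slot layout are assumptions
# consistent with the candidate table above.
def slots_for_angle(k, num_slots=30, extent=10):
    period = 2 * extent  # 20 slots per full left-right-left cycle
    return [t for t in range(num_slots)
            if t % period == k or period - (t % period) == k]

def earliest_sequence(order, slots):
    # Step 700 flavor: earliest strictly increasing slot sequence for an order.
    seq, last = [], -1
    for angle in order:
        nxt = next((s for s in slots[angle] if s > last), None)
        if nxt is None:
            return None
        seq.append(nxt)
        last = nxt
    return seq

slots = {a: slots_for_angle(k) for a, k in {"A": 1, "C": 3, "I": 9}.items()}
for order in permutations("ACI"):
    seq = earliest_sequence(order, slots)
    # Step 724 flavor: elapsed time is the last slot in the sequence.
    print("".join(order), seq, "elapsed:", seq[-1])
```

Under these assumptions, ACI yields (1, 3, 9), AIC yields (1, 9, 17), and CIA yields (3, 9, 19), matching Candidates 1 through 3; Candidate 4 corresponds to deliberately deferring Shot Angle A to its later slot 21 rather than taking the earliest slot.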
FIGS. 7D, 7E, and 7F show examples of such simulations of time slot sequences for our example where the shot angles to be scheduled with laser pulses are Shot Angles A, C, and I. In this scenario, we will assume that the laser energy model 108 will employ (1) the value for ES as a constant value of 1 unit of energy per unit of time and (2) the values for a and b as 0.5 each. Furthermore, we will assume that there are 3 units of energy left in the fiber laser 116 when the scan begins (and where the scan begins at Angle Zero while moving from left-to-right). Moreover, for the purposes of this example, the energy requirements for the shots can be defined as (8,3,4) for minimum shot energies with respect to shot angles A, C, and I respectively, and where the maximum laser energy for the laser source can be defined as 20 units of combined seed and stored fiber energy (which would translate to a maximum laser pulse energy of 10 units of energy).
-
FIG. 7D shows an example result for simulating the time slot sequence of laser pulses at time slots (1, 3, 9), which corresponds to Candidate 1 above. This sequence fails the defined energy requirements because the laser pulse that would be fired at Shot Angle A at time slot 1 would have insufficient energy relative to its minimum shot energy requirement.
-
FIG. 7E shows an example result for simulating the time slot sequence of laser pulses at time slots (1, 9, 17), which corresponds to Candidate 2 above. This sequence also fails because the laser pulse that would be fired at Shot Angle A at time slot 1 would have insufficient energy relative to its minimum shot energy requirement.
-
FIG. 7F shows an example result for simulating the time slot sequence of laser pulses at time slots (3, 9, 21), which corresponds to Candidate 4 above. It can also be seen from FIG. 7F that a simulation of a Time Slot Sequence of (3,9,19) also would have failed because there is insufficient energy in a laser pulse that would have been fired at Shot Angle A.
- Accordingly, the simulation of these time slot sequences would result in a determination that the time slot sequence of (3,9,21) is a valid candidate, which means that this time slot sequence can define the timing schedule for laser pulses fired toward the shot angles in the selected window. The elapsed time for this valid candidate is 21 units of time.
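The FIG. 7D-7F outcomes can be reproduced with a hedged simulation sketch (assumptions: Python; the combined seed-plus-fiber energy grows by ES = 1 unit per time slot; with a = b = 0.5, each pulse carries away half of the combined energy; 3 units are in the fiber when the scan begins; names are illustrative, not the patent's implementation):

```python
# Hedged reconstruction of the step 702 simulations. The energy bookkeeping
# is an assumption consistent with the stated numbers: combined seed + fiber
# energy grows by ES = 1 unit per time slot, a pulse removes half of the
# combined energy (a = b = 0.5), and the scan starts with 3 units at slot 0.
MINIMUMS = {"A": 8, "C": 3, "I": 4}   # minimum shot energies from the example
MAX_COMBINED = 20                     # maximum combined seed + fiber energy

def simulate(order, slot_sequence, start_energy=3.0, es=1.0, a=0.5):
    energy, last_slot = start_energy, 0
    for angle, slot in zip(order, slot_sequence):
        energy += es * (slot - last_slot)  # seed deposited since the last shot
        if energy > MAX_COMBINED:
            return False                   # exceeds the laser energy ceiling
        pulse = a * energy                 # energy leaving in the laser pulse
        if pulse < MINIMUMS[angle]:
            return False                   # below the minimum shot energy
        energy -= pulse
        last_slot = slot
    return True

print(simulate("ACI", (1, 3, 9)))    # False: the pulse at Shot Angle A is too weak
print(simulate("CIA", (3, 9, 19)))   # False: the pulse at Shot Angle A is too weak
print(simulate("CIA", (3, 9, 21)))   # True: pulses of 3, 4.5, and 8.25 units
```

Under these assumptions the sketch agrees with the figures: (1,3,9) and (3,9,19) fail at Shot Angle A, while (3,9,21) passes with an elapsed time of 21 units, and step 704 would then pick the valid candidate with the smallest final slot.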
- Returning to
FIG. 7A, at step 704, the control circuit 106 selects the valid order candidate which has the lowest elapsed time. Thus, in a scenario where the simulations at step 702 would have produced two or more valid order candidates, the control circuit 106 will select the order candidate that will complete its firing of laser pulses the soonest, which helps improve the latency of the system.
- For example embodiments, the latency with which the
control circuit 106 is able to determine the shot angle order and generate appropriate firing commands is an important operational characteristic for the lidar transmitter 100. To maintain high frame rates, it is desirable for the control circuit 106 to carry out the scheduling operations for all of the shot angles at a selected elevation in the amount of time it takes to scan mirror 110 through a full left-to-right or right-to-left scan if feasible in view of the laser energy model 108 (where this time amount is around 40 microseconds for a 12 kHz scan frequency). Moreover, it is also desirable for the control circuit 106 to be able to schedule shots, during the next return scan, for a target that is detected based on returns from shots on the current scan line (e.g., when a laser pulse 122 fired during the current scan detects something of interest that is to be interrogated with additional shots (see FIG. 16 discussed above)). In this circumstance, the detection path for a pulse return through a lidar receiver and into a lidar point cloud generator where the target of interest is detected will also need to be taken into account. This portion of the processing is expected to require around 0.4 to 10 microseconds, which leaves around 30 microseconds for the control circuit 106 to schedule the new shots at the region of interest during the next return scan if possible. For a processor of the control circuit 106 which has 2 GFLOPS of processing capacity (which is a value available from numerous FPGA and ASIC vendors), this amounts to 50 operations per update, which is sufficient for the operations described herein. For example, the control circuit 106 can maintain lookup tables (LUTs) that contain pre-computed values of shot energies for different time slots within the scan. Thus, the simulations of step 702 can be driven by looking up precomputed shot energy values for the defined shot angles/time slots. 
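The LUT idea can be sketched as follows (assumptions: Python; the table holds the seed energy deposited per count of elapsed slots with ES = 1 unit per slot; the a = 0.5 pulse fraction is carried over from the earlier worked example; names are illustrative):

```python
# Illustrative LUT sketch: precompute the seed energy deposited after n
# elapsed time slots so that step 702 simulations can look energies up
# instead of recomputing them. ES = 1 unit/slot and the 0.5 pulse fraction
# are assumptions carried over from the FIG. 7D-7F example.
ES, PULSE_FRACTION, MAX_SLOTS = 1.0, 0.5, 64

seed_lut = [ES * n for n in range(MAX_SLOTS)]  # seed energy after n slots

def pulse_energy(retained, elapsed_slots):
    # Combined seed + fiber energy, half of which leaves with the pulse.
    return PULSE_FRACTION * (retained + seed_lut[elapsed_slots])

print(pulse_energy(3.0, 3))   # 3.0 (the Shot Angle C value from the example)
print(pulse_energy(4.5, 12))  # 8.25 (the Shot Angle A value from the example)
```

A hardware implementation would hold such a table in on-chip memory so each candidate evaluation costs a handful of lookups rather than repeated arithmetic.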
The use of parallelized logic by the control circuit 106 to accelerate the simulations helps contribute to the achievement of such low latency. Furthermore, practitioners can adjust operational parameters such as the window size (search depth) in a manner to achieve desired latency targets.
-
FIG. 8 shows an example embodiment for the lidar transmitter 100 where the control circuit 106 comprises a system controller 800 and a beam scanner controller 802. System controller 800 and beam scanner controller 802 can each include a processor and memory for use in carrying out its tasks. The mirror subsystem 104 can be part of beam scanner 810 (which can also be referred to as a lidar scanner). Beam scanner controller 802 can be embedded as part of the beam scanner 810. In this example, the system controller 800 can carry out the range point intake and shot scheduling steps of FIG. 6A if the control circuit 106 employs the FIG. 6A process flow (or steps 600, 602, 604, 620, 608, 622, 624, 626, and 628 of FIG. 6B if the control circuit 106 employs the FIG. 6B process flow), while beam scanner controller 802 carries out the mirror control and firing command steps of the FIGS. 6A and 6B process flows. Accordingly, once the system controller 800 has selected the elevation and the order of shot angles, this information can be communicated from the system controller 800 to the beam scanner controller 802 as shot elevation 820 and ordered shot angles 822.
- The ordered shot
angles 822 can also include flags that indicate the scan direction for which the shot is to be taken at each shot angle. This scan direction flag will also allow the system to recognize scenarios where the energy model indicates there is a need to pass by a time slot for a shot angle without firing a shot and then firing the shot when the scan returns to that shot angle in a subsequent time slot. For example, with reference to the example above, the scan direction flag will permit the system to distinguish between Candidate 3 (for the sequence of shot angles CIA at time slots 3, 9, and 19) and Candidate 4 (for the sequence of shot angles CIA at time slots 3, 9, and 21). In this fashion, the shot elevations 820 and ordered shot angles 822 serve as portions of the shot list 660 used by the lidar transmitter 100 to target range points with laser pulses 122.
- The
beam scanner controller 802 can generate control signal 806 for mirror 112 based on the defined shot elevation 820 to drive mirror 112 to a scan angle that targets the elevation defined by 820. Meanwhile, the control signal 804 for mirror 110 will continue to be the sinusoidal signal that drives mirror 110 in a resonant mode. However, some practitioners may choose to also vary control signal 804 as a function of the ordered shot angles 822 (e.g., by varying amplitude A as discussed above).
- In the example of
FIG. 8, the mirror motion model 308 can comprise a first mirror motion model 808 a maintained and used by the beam scanner controller 802 and a second mirror motion model 808 b maintained and used by the system controller 800. With FIG. 8, the task of generating the firing commands 120 can be performed by the beam scanner controller 802. The beam scanner controller 802 can include a feedback system 850 that tracks the actual mirror tilt angles θ for mirror 110. This feedback system 850 permits the beam scanner controller 802 to closely monitor the actual tilt angles of mirror 110 over time, which then translate to the actual scan angles μ of mirror 110. This knowledge can then be used to adjust and update the mirror motion model 808 a maintained by the beam scanner controller 802. Because model 808 a will closely match the actual scan angles for mirror 110 due to the feedback from 850, we can refer to model 808 a as the "fine" mirror motion model 808 a. In this fashion, when the beam scanner controller 802 is notified of the ordered shot angles 822 to be targeted with laser pulses 122, the beam scanner controller 802 can use this "fine" mirror motion model 808 a to determine when the mirror has hit the time slots which target the ordered shot angles 822. When these time slots are hit according to the "fine" mirror motion model 808 a, the beam scanner controller 802 can generate and provide corresponding firing commands 120 to the laser source 102.
- Examples of techniques that can be used for the scan tracking
feedback system 850 are described in the above-referenced and incorporated U.S. Pat. No. 10,078,133. For example, the feedback system 850 can employ optical feedback techniques or capacitive feedback techniques to monitor and adjust the scanning (and modeling) of mirror 110. Based on information from the feedback system 850, the beam scanner controller 802 can determine how the actual mirror scan angles may differ from the modeled mirror scan angles in terms of frequency, phase, and/or maximum amplitude. Accordingly, the beam scanner controller 802 can then incorporate one or more offsets or other adjustments relating to the detected errors in frequency, phase, and/or maximum amplitude into the mirror motion model 808 a so that model 808 a more closely reflects reality. This allows the beam scanner controller 802 to generate firing commands 120 for the laser source 102 that closely match up with the actual shot angles to be targeted with the laser pulses 122. - Errors in frequency and maximum amplitude within the
mirror motion model 808 a can be readily derived from the tracked actual values for the tilt angle θ, as the maximum amplitude A should be the maximum actual value for θ, and the actual frequency is measurable by tracking the time it takes to progress from actual values of A to −A and back. - Phase locked loops (or techniques such as PID control, both available as software tools in MATLAB) can be used to track and adjust the phase of the
model 808 a as appropriate. The expression for the tilt angle θ that includes a phase component (p) can be given as: -
θ=A cos(2πft+p) - From this, we can recover the value for the phase p by the relation:
-
θ≈A cos(2πft)−A sin(2πft)p - Solving for p, this yields the expression:
-
p≈(A cos(2πft)−θ)/(A sin(2πft))
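As a quick numerical check of this small-angle phase recovery, the sketch below (with hypothetical values; `estimate_phase` is an illustrative name, not from the patent) linearizes θ = A cos(2πft + p) and solves for p:

```python
import math

def estimate_phase(A, f, t, theta):
    """Recover a small phase offset p from one tracked tilt-angle sample.

    Uses the small-angle linearization
        theta ≈ A*cos(2*pi*f*t) - A*sin(2*pi*f*t)*p,
    so the estimate is only valid while p stays small (the text assumes
    frequent updates keep it that way).
    """
    s = A * math.sin(2 * math.pi * f * t)
    if abs(s) < 1e-9:
        raise ValueError("sin term near zero; pick a different sample time t")
    return (A * math.cos(2 * math.pi * f * t) - theta) / s

# Simulated check: a resonant mirror with a small true phase offset
A, f, p_true = 1.0, 100.0, 0.01                      # amplitude, Hz, radians
t = 0.0023                                           # sample time (sin term well away from 0)
theta = A * math.cos(2 * math.pi * f * t + p_true)   # "measured" tilt angle
p_est = estimate_phase(A, f, t, theta)
print(abs(p_est - p_true) < 1e-3)
```

Because the linearization drops only O(p²) terms, the recovered phase lands within a fraction of a percent of the true value for small p.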
- Given that the tracked values for A, f, t, and θ are each known, the value for p can be readily computed. It should be understood that this expression for p assumes that the value of p is small, which will be an accurate assumption if the actual values for A, f, t, and θ are updated frequently and the phase is also updated frequently. This computed value of p can then be used by the “fine”
mirror motion model 808 a to closely track the actual shot angles for mirror 110, and identify the time slots that correspond to those shot angles according to the expression:
-
t=(cos⁻¹(θ/A)−p)/(2πf)
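Assuming the tilt-angle model θ = A cos(2πft + p) is inverted as t = (cos⁻¹(θ/A) − p)/(2πf) (a reconstruction from the surrounding expressions; note the mirror also passes through each angle a second time per period), the time-slot lookup can be sketched as:

```python
import math

def first_time_slot(A, f, p, theta_target):
    """First time t in the scan period at which the modeled tilt angle
    A*cos(2*pi*f*t + p) reaches theta_target (illustrative reconstruction;
    a full scheduler would also consider the second crossing per period)."""
    return (math.acos(theta_target / A) - p) / (2 * math.pi * f)

A, f, p = 1.0, 100.0, 0.01
theta_target = 0.5
t_slot = first_time_slot(A, f, p, theta_target)
# The model reproduces the targeted angle at that time slot
theta_check = A * math.cos(2 * math.pi * f * t_slot + p)
print(abs(theta_check - theta_target) < 1e-9)
```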
- While a practitioner will find it desirable for the
beam scanner controller 802 to rely on the highly accurate “fine” mirror motion model 808 a when deciding when the firing commands 120 are to be generated, the practitioner may also find that the shot scheduling operations can suffice with less accurate mirror motion modeling. Accordingly, the system controller 800 can maintain its own model 808 b, and this model 808 b can be less accurate than model 808 a as small inaccuracies in the model 808 b will not materially affect the energy modeling used to decide on the ordered shot angles 822. In this regard, model 808 b can be referred to as a “coarse” mirror motion model 808 b. If desired, a practitioner can further communicate feedback from the beam scanner controller 802 to the system controller 800 so the system controller 800 can also adjust its model 808 b to reflect the updates made to model 808 a. In such a circumstance, the practitioner can also decide on how frequently the system will pass these updates from model 808 a to model 808 b. - Marker Shots to Bleed Off and/or Regulate Shot Energy:
-
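As context for the marker-shot scheduling described in this section, here is a toy sketch of energy-driven marker-shot insertion. The linear pump model, function name, and numbers are all illustrative assumptions, not the patent's laser energy model 108:

```python
def schedule_with_marker_shots(shot_times, pump_rate, e_max):
    """Insert marker shots so modeled stored energy never exceeds e_max.

    Simplified stand-in for a laser energy model: energy accumulates
    linearly at pump_rate between firings, and each firing drains it to
    zero. shot_times are the (possibly irregular) times of targeted shots.
    Returns a list of (time, kind) tuples, kind in {"targeted", "marker"}.
    """
    schedule = []
    last_fire = 0.0
    for t in shot_times:
        # Fire a marker shot whenever buildup would pass the threshold
        while pump_rate * (t - last_fire) > e_max:
            last_fire += e_max / pump_rate
            schedule.append((last_fire, "marker"))
        schedule.append((t, "targeted"))
        last_fire = t
    return schedule

# Irregular shot spacing: the long 0.5 -> 3.0 gap forces two marker shots
plan = schedule_with_marker_shots([0.5, 3.0], pump_rate=1.0, e_max=1.0)
print([kind for _, kind in plan])
```

The same bookkeeping could instead target a desired per-shot energy (rather than a damage ceiling), which is the consistency use case described below.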
FIG. 9 depicts an example process flow for execution by the control circuit 106 to insert marker shots into the shot list in order to bleed off energy from the laser source 102 when needed. As discussed above, the control circuit 106 can consult the laser energy model 108 as applied to the range points to be targeted with laser pulses 122 to determine whether a laser energy threshold would be violated. If so, the control circuit 106 may insert a marker shot into the shot list to bleed energy out of the laser source 102 (step 902). In an example embodiment, this threshold can be set to define a maximum or peak laser energy threshold so as to avoid damage to the laser source 102. In another example embodiment, this threshold can be set to achieve a desired consistency, smoothness, and/or balance in the energies of the laser pulse shots. - For example, one or more marker shots can be fired to bleed off energy so that a later targeted laser pulse shot (or set of targeted shots) exhibits a desired amount of energy. As an example embodiment, the marker shots can be used to bleed off energy so that the targeted laser pulse shots exhibit consistent energy levels despite a variable rate of firing for the targeted laser pulse shots (e.g., so that the targeted laser pulse shots will exhibit X units of energy (plus or minus some tolerance) even if those targeted laser pulse shots are irregularly spaced in time). The
control circuit 106 can consult the laser energy model 108 to determine when such marker shots should be fired to regulate the targeted laser pulse shots in this manner. - Modeling Eye and Camera Safety Over Time:
-
FIG. 10 depicts an example process flow for execution by the control circuit 106 where eye safety requirements are also used to define or adjust the shot list. To support these operations, the control circuit 106 can also, at step 1000, maintain an eye safety model 1002. Eye safety requirements for a lidar transmitter 100 may be established to define a maximum amount of energy that can be delivered within a defined spatial area in the field of view over a defined time period. Since the system is able to model per pulse laser energy with respect to precisely targeted range points over highly granular time periods, this allows the control circuit 106 to also monitor whether a shot list portion would violate eye safety requirements. Thus, the eye safety model 1002 can model how much aggregated laser energy is delivered to the defined spatial area over the defined time period based on the modeling produced from the laser energy model 108 and the mirror motion model 308. At step 1010, the control circuit 106 uses the eye safety model 1002 to determine whether the modeled laser energy that would result from a simulated sequence of shots would violate the eye safety requirements. If so, the control circuit can adjust the shot list to comply with the eye safety requirements (e.g., by inserting longer delays between ordered shots delivered close in space, by re-ordering the shots, etc.)
-
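The aggregated-energy check described above can be sketched as a sliding-window sum per spatial cell. This is a minimal illustrative model (cell granularity, window, and limit are hypothetical), not the patent's eye safety model 1002:

```python
from collections import defaultdict

def violates_eye_safety(shots, e_limit, window):
    """Check a simulated shot sequence against a transient safety limit.

    shots: list of (time, cell, energy), where cell identifies the defined
    spatial area a pulse lands in. The limit caps aggregated energy per
    cell within any sliding time window of the given length.
    """
    by_cell = defaultdict(list)
    for t, cell, e in sorted(shots):
        by_cell[cell].append((t, e))
    for events in by_cell.values():
        for t0, _ in events:
            # Total energy delivered to this cell in [t0, t0 + window)
            agg = sum(e for t, e in events if t0 <= t < t0 + window)
            if agg > e_limit:
                return True
    return False

# Two shots into the same cell 10 ms apart exceed the windowed limit;
# spacing them 0.5 s apart does not.
shots = [(0.0, (3, 5), 0.6), (0.010, (3, 5), 0.6), (0.5, (3, 5), 0.6)]
print(violates_eye_safety(shots, e_limit=1.0, window=0.1))
print(violates_eye_safety(shots[::2], e_limit=1.0, window=0.1))
```

A camera safety model could reuse the same bookkeeping with limits applied only to cells covering detected camera objects.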
FIG. 11 shows an example lidar transmitter 100 that is similar in nature to the example of FIG. 8 , but where the system controller 800 also considers the eye safety model 1002 when deciding on how to order the shot angles. FIG. 12 shows how the simulation step 702 from FIG. 7A can be performed in example embodiments where the eye safety model 1002 is used. As shown by FIG. 12 , each parallel path can include steps 720, 722, and 724 as discussed above. Each parallel path can also include a step 1200 to be performed prior to step 722 where the control circuit 106 uses the eye safety model 1002 to test whether the modeled laser energy for the subject time slot sequence would violate eye safety requirements. If the subject time slot sequence complies with the criteria tested at steps 720 and 1200, then the subject time slot sequence can be labeled as valid. If the subject time slot sequence violates the criteria tested at steps 720 or 1200, then the subject time slot sequence can be labeled as invalid. - Similar to the techniques described for eye safety in connection with FIGS. 10, 11, and 12, it should be understood that a practitioner can also use the control circuit to model and evaluate whether time slot sequences would violate defined camera safety requirements. To reduce the risk of
laser pulses 122 impacting on and damaging cameras in the field of view, the control circuit can also employ a camera safety model in a similar manner and toward similar ends as the eye safety model 1002. In the camera safety scenario, the control circuit 106 can respond to detections of objects classified as cameras in the field of view by monitoring how much aggregated laser energy will impact that camera object over time. If the model indicates that the camera object would have too much laser energy incident on it in too short of a time period, the control circuit can adjust the shot list as appropriate. - Moreover, as noted above with respect to the
laser energy model 108 and the mirror motion model 308, the eye safety and camera safety models can track aggregated energy delivered to defined spatial areas over defined time periods over short time intervals, and such short interval eye safety and camera safety models can be referred to as transient eye safety and camera safety models.
-
FIG. 13 shows another example of a process flow for the control circuit 106 with respect to using the models to dynamically determine the shot list for the transmitter 100. - At
step 1300, the laser energy model 108 and mirror motion model 308 are established. This can include determining from factory or calibration the values to be used in the models for parameters such as EP, a, b, and A. Step 1300 can also include establishing the eye safety model 1002 by defining values for parameters that govern such a model (e.g., parameters indicative of limits for aggregated energy for a defined spatial area over a defined time period). At step 1302, the control law for the system is connected to the models established at step 1300. - At step 1304, the seed energy model used by the
laser energy model 108 is adjusted to account for nonlinearities. This can employ the clipped, offset (affine) model for seed energy as discussed above. - At step 1306, the
laser energy model 108 can be updated based on lidar return data and other feedback from the system. For example, as noted above in connection with FIG. 2D , the actual energies in laser pulses 122 can be derived from the pulse return data included in point cloud 256. For example, the pulse return energy can be modeled as a function of the transmitted pulse energy according to the following expression (for returns from objects that are equal to or exceed the laser spot size and assuming modest atmospheric attenuation): -
Pulse Return Energy=PE×Reflectivity×ApertureReceiver/(πR²)
- In this expression, Pulse Return Energy represents the energy of the pulse return (which is known from the point cloud 256), PE represents the unknown energy of the transmitted
laser pulse 122, ApertureReceiver represents the known aperture of the lidar receiver (see 1400 in FIG. 14 ), R represents the measured range for the return (which is known from the point cloud 256), and Reflectivity represents the percentage of reflectivity for the object from which the return was received. Therefore, one can solve for PE so long as the reflectivity is known. This will be the case for objects like road signs whose reflectivity is governed by regulatory agencies. Accordingly, by using returns from known fiducials such as road signs, the control circuit 106 can derive the actual energy of the transmitted laser pulse 122 and use this value to facilitate determinations as to whether any adjustments to the laser energy model 108 are needed (e.g., see discussions above re updating the values for a and b based on PE values which represent the actual energies of the transmitted laser pulses 122). - Also, at
step 1308, the laser health can be assessed and monitored as a background task. The information derived from the feedback received for steps 1306 and 1308 can be used to update model parameters as discussed above. For example, as noted above, the values for the seed energy model parameters as well as the values for a and b can be updated by measuring the energy produced by the laser source 102 and fitting the data to the parameters. Techniques which can be used for this process include least squares, sample matrix inversion, regression, and multiple exponential extensions. Further still, as noted above, the amount of error can be reduced by using known targets with a given reflectivity and using these to calibrate the system.
- This is helpful because the reflectivity of a fiducial is a known quantity, which allows one to explicitly extract shot energy (after backing out range dependencies and any obliquity). Examples of fiducials that may be employed include road signs and license plates.
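The fiducial-based extraction of transmitted pulse energy can be sketched as inverting the return-energy relation. The sketch below assumes the common extended-target (Lambertian) relation return = PE × reflectivity × aperture/(πR²), which matches the variables named in the text; the exact constant in a real radiometric model may differ, and all numerical values are hypothetical:

```python
import math

def transmitted_pulse_energy(return_energy, range_m, reflectivity, aperture_m2):
    """Back out transmitted pulse energy PE from a return off a known fiducial.

    Inverts: return_energy = PE * reflectivity * aperture / (pi * R**2).
    Valid only when the object's reflectivity is known (e.g., a road sign).
    """
    return return_energy * math.pi * range_m ** 2 / (reflectivity * aperture_m2)

# Round-trip check against a synthetic return from a road sign
pe_true = 4.0e-6                        # joules (hypothetical)
R, rho, A_rx = 50.0, 0.8, 1e-3          # meters, fraction, m^2 (hypothetical)
ret = pe_true * rho * A_rx / (math.pi * R ** 2)
print(abs(transmitted_pulse_energy(ret, R, rho, A_rx) - pe_true) < 1e-12)
```

Values of PE recovered this way could then feed the parameter-fitting step (least squares, etc.) described above.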
- At step 1310, the lidar return data and the coupled models can be used to ensure that the laser pulse energy does not exceed safety levels. These safety levels can include eye safety as well as camera safety as discussed above. Without step 1310, the system may need to employ a much more stringent energy requirement, using trial and error to establish laser settings that ensure safety. For example, if we only had a laser model where the shot energy is accurate to only ±3 J per shot around the predicted value, and the maximum shot energy is limited to 8 J, we could not use any shots predicted to exceed 5 J. However, the hyper temporal modeling and control that is available from the
laser energy model 108 and mirror motion model 308 as discussed herein allows us to obtain accurate predictions within a few percent error, virtually erasing the operational lidar impact of margin. - At step 1312, the coupled models are used with different orderings of shots, thereby obtaining a predicted shot energy in any chosen ordered sequence of shots drawn from the specified list of range points. Step 1312 may employ simulations to predict shot energies for different time slots of shots as discussed above.
- At step 1314, the system inserts marker shots in the timing schedule if the models predict that too much energy will build up in the
laser source 102 for a given shot sequence. This reduces the risk of too much energy being transferred into the fiber laser 116 and causing damage to the fiber laser 116. - At
step 1316, the system determines the shot energy that is needed to detect targets with each shot. These values can be specified as a minimum energy threshold for each shot. The value for such threshold(s) can be determined from radiometric modeling of the lidar, and the assumed range and reflectivity of a candidate target. In general, this step can be a combination of modeling assumptions as well as measurements. For example, we may have already detected a target, so the system may already know the range (within some tolerance). Since the energy required for detection is expected to vary as the square of the range, this knowledge would permit the system to establish the minimum pulse energy thresholds so that there will be sufficient energy in the shots to detect the targets. -
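The range-squared scaling of the detection threshold mentioned above can be sketched directly. The reference values below are hypothetical, and `min_shot_energy` is an illustrative helper, not the patent's radiometric model:

```python
def min_shot_energy(range_m, e_ref, r_ref):
    """Scale a reference detection-energy threshold by range squared,
    per the text's assumption that the energy required for detection
    varies as the square of the range. e_ref is the energy needed at
    reference range r_ref.
    """
    return e_ref * (range_m / r_ref) ** 2

# Doubling the range quadruples the minimum pulse energy
print(min_shot_energy(100.0, e_ref=1.0, r_ref=50.0))
```

A scheduler could apply this per shot, using each target's previously measured range (within tolerance) as `range_m`.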
Steps
- At step 1322, candidate orderings are formed using elevation movements on both scan directions. This allows the system to consider taking shots on both a left-to-right scan and a right-to-left scan. For example, suppose that the range point list has been completed on a certain elevation when the mirror is close to the left hand side. Then it is faster to move the elevation mirror at that point in time and begin the fresh window of range points to be scheduled beginning on this same left hand side and moving right. Conversely, if we deplete the range point list when the mirror is closer to the right hand side, it is faster to move the mirror in elevation whilst it is on the right hand side. Moreover, in choosing an order from among the order candidates, and when moving from one elevation to another, the system may move to a new elevation when
mirror 110 is at one of its scan extremes (full left or full right). However, in instances where a benefit may arise from changing elevations when mirror 110 is not at one of its scan extremes, the system may implement interline skipping as described in the above-referenced and incorporated U.S. Pat. No. 10,078,133. The mirror motion model 308 can also be adjusted to accommodate potential elevation shift during a horizontal scan. - At step 1324, if processing time allows the
control circuit 106 to implement auctioning (whereby multiple order candidates are investigated), the lowest “cost” (e.g., fastest lidar execution time) order candidate is selected by the control circuit 106 (acting as “auctioneer”). A practitioner may not want the control circuit to consider all of the possible order candidates as this may be too computationally expensive and introduce an undue amount of latency. Thus, the control circuit 106 can enforce maximums or other controls on how many order candidates are considered per batch of shots to be ordered. Greedy algorithms can be used when choosing shot orderings.
-
FIG. 14 depicts an example embodiment for a lidar transmitter 100 that shows how the system controller 800 can interact with the lidar receiver 1400 to coordinate system operations. The lidar receiver 1400 can receive and process pulse returns 1402 to compute range information for objects in the field of view impacted by the laser pulses 122. This range information can then be included in the point cloud 1404 generated by the lidar system. Examples of suitable technology for use as the lidar receiver 1400 are described in U.S. Pat. Nos. 9,933,513 and 10,754,015, the entire disclosures of which are incorporated herein by reference. In the example of FIG. 14 , the system controller 800 can use the point cloud 1404 to intelligently select range points for targeting with laser pulses, as discussed in the above-referenced and incorporated patents. For example, the point cloud data can be used to determine ranges for objects in the field of view that are to be targeted with laser pulses 122. The control circuit 106 can use this range information to determine desired energy levels for the laser pulses 122 which will target range points that are believed to correspond to those objects. In this fashion, the control circuit 106 can controllably adjust the laser pulse energy as a function of the estimated range of the object being targeted so the object is illuminated with a sufficient amount of light energy given its estimated range to facilitate adequate detection by the lidar receiver 1400. Further still, the beam scanner controller 802 can provide shot timing information 1410 to the receiver 1400 and the system controller 800 can provide shot data 1412 (such as data identifying the targeted range points) to the receiver 1400. The combination of this information informs the receiver how to control which pixels of the receiver 1400 should be activated for detecting pulse returns 1402 (including when those pixels should be activated). 
As discussed in the above-referenced and incorporated '513 and '015 patents, the receiver can select pixels for activation to detect pulse returns 1402 based on the locations of the targeted range points in the field of view. Accordingly, precise knowledge of which range points were targeted and when those range points were targeted helps improve the operations of receiver 1400. Although not shown in FIG. 14 , it should also be understood that a practitioner may choose to also include a camera that images the field of view, and this camera can be optically co-axial (co-bore sighted) with the lidar transmitter 100. Camera images can also be used to facilitate intelligent range point selection among other tasks.
-
FIG. 15 shows another example of a process flow for the control circuit 106 with respect to using the models to dynamically determine the shot list for the transmitter 100. At step 1500, the laser energy model 108 and mirror motion model 308 are established. This can operate like step 1300 discussed above. At step 1502, the model parameters are updated using pulse return statistics (which may be derived from point cloud 1404 or other information provided by the receiver 1400) and mirror scan position feedback (e.g., from feedback system 850). At step 1504, the models are coupled so that shot angles are assigned to time slots according to the mirror motion model 308 for which shot energies can be predicted according to the laser energy model 108. These coupled models can then be embedded in the shot scheduling logic used by control circuit 106. At step 1506, a list of range points to be targeted with laser pulses 122 is received. At step 1508, a selection is made for the search depth that governs how far ahead the system will schedule shots. - Based on the listed range points and the defined search depth, the order candidates for laser pulse shots are created (step 1510). The
mirror motion model 308 can assign time slots to these order candidates as discussed above. At step 1512, each candidate is tested using the laser energy model 108. This testing may also include testing based on the eye safety model 1002 and a camera safety model. This testing can evaluate the order candidates for compliance with criteria such as peak energy constraints, eye safety constraints, camera safety constraints, minimum energy thresholds, and completion times. If a valid order candidate is found, the system can fire laser pulses in accordance with the timing/sequencing defined by the fastest of the valid order candidates. Otherwise, the process flow can return to step 1510 to continue the search for a valid order candidate.
- In accordance with another example embodiment, the shot list can be used to exercise control over how the
lidar receiver 1400 detects returns from laser pulse shots 122. FIG. 18A shows an example lidar receiver 1400 for use in a lidar system. The lidar receiver 1400 comprises photodetector circuitry 1800 which includes a photodetector array 1802. The photodetector array 1802 comprises a plurality of detector pixels 1804 that sense incident light and produce a signal representative of the sensed incident light. The detector pixels 1804 can be organized in the photodetector array 1802 in any of a number of patterns. In some example embodiments, the photodetector array 1802 can be a two-dimensional (2D) array of detector pixels 1804. However, it should be understood that other example embodiments may employ a one-dimensional (1D) array of detector pixels 1804 if desired by a practitioner. - The
photodetector circuitry 1800 generates a return signal 1806 in response to a pulse return 1402 that is incident on the photodetector array 1802. The choice of which detector pixels 1804 to use for collecting a return signal 1806 corresponding to a given return 1402 can be made based on where the laser pulse shot 122 corresponding to the return 1402 was targeted. Thus, if a laser pulse shot 122 is targeting a range point located at a particular azimuth angle, elevation angle pair, then the lidar receiver can map that azimuth, elevation angle pair to a set of pixels 1804 within the array 1802 that will be used to detect the return 1402 from that laser pulse shot 122. The mapped pixel set can include one or more of the detector pixels 1804. This pixel set can then be activated and read out from to support detection of the subject return 1402 (while the pixels outside the pixel set are deactivated so as to minimize potential obscuration of the return 1402 within the return signal 1806 by ambient or interfering light that is not part of the return 1402 but would be part of the return signal 1806 if unnecessary pixels 1804 were activated when return 1402 was incident on array 1802). In this fashion, the lidar receiver 1400 will select different pixel sets of the array 1802 for readout in a sequenced pattern that follows the sequenced spatial pattern of the laser pulse shots 122. Return signals 1806 can be read out from the selected pixel sets, and these return signals 1806 can be processed to detect returns 1402 therewithin.
-
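The angle-to-pixel-set mapping described above can be sketched as follows. The linear angle-to-pixel mapping, array geometry, and field-of-view numbers are assumptions for illustration, not the patent's actual optics:

```python
def pixel_set_for_shot(az_deg, el_deg, fov_deg=30.0, n_rows=64, n_cols=64, radius=1):
    """Map a shot's (azimuth, elevation) to a set of detector pixels to
    activate, including neighbors within `radius` of the central pixel
    (since a practitioner may prefer multiple pixels per set)."""
    col = int((az_deg + fov_deg / 2) / fov_deg * (n_cols - 1))
    row = int((el_deg + fov_deg / 2) / fov_deg * (n_rows - 1))
    pixels = set()
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = row + dr, col + dc
            if 0 <= r < n_rows and 0 <= c < n_cols:  # clip at array edges
                pixels.add((r, c))
    return pixels

# A shot at the center of the field of view maps to a 3x3 pixel set
center_set = pixel_set_for_shot(0.0, 0.0)
print(len(center_set))
```

Pixels outside the returned set would be left deactivated for that shot, which is what suppresses ambient and interfering light in the collected return signal.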
FIG. 18A shows an example where one of the pixels 1804 is turned on to start collection of a sensed signal that represents incident light on that pixel (to support detection of a return 1402 within the collected signal), while the other pixels 1804 are turned off (or at least not selected for readout). While the example of FIG. 18A shows a single pixel 1804 being included in the pixel set selected for readout, it should be understood that a practitioner may prefer that multiple pixels 1804 be included in one or more of the selected pixel sets. For example, it may be desirable to include in the selected pixel set one or more pixels 1804 that are adjacent to the pixel 1804 where the return 1402 is expected to strike. - Examples of circuitry and control logic that can be used for this selective pixel set readout are described in U.S. Pat. Nos. 10,754,015 and 10,641,873, the entire disclosures of each of which are incorporated herein by reference. These incorporated patents also describe example embodiments for the
photodetector circuitry 1800, including the use of a multiplexer to selectively read out signals from desired pixel sets as well as an amplifier stage positioned between the photodetector array 1802 and the multiplexer.
-
Signal processing circuit 1820 operates on the return signal 1806 to compute return information 1822 for the targeted range points, where the return information 1822 is added to the lidar point cloud 1404. The return information 1822 may include, for example, data that represents a range to the targeted range point, an intensity corresponding to the targeted range point, an angle to the targeted range point, etc. As described in the above-referenced and incorporated '015 and '873 patents, the signal processing circuit 1820 can include an analog-to-digital converter (ADC) that converts the return signal 1806 into a plurality of digital samples. The signal processing circuit 1820 can process these digital samples to detect the returns 1402 and compute the return information 1822 corresponding to the returns 1402. In an example embodiment, the signal processing circuit 1820 can perform time of flight (TOF) measurement to compute range information for the returns 1402. However, if desired by a practitioner, the signal processing circuit 1820 could employ time-to-digital conversion (TDC) to compute the range information. Additional details about how the signal processing circuit 1820 can operate for an example embodiment are discussed below. - The
lidar receiver 1400 can also include circuitry that can serve as part of the control circuit 106 of the lidar system. This control circuitry is shown as a receiver controller 1810 in FIG. 18A . The receiver controller 1810 can process scheduled shot information 1812 to generate the control data 1814 that defines which pixel set to select (and when to use each pixel set) for detecting returns 1402. The scheduled shot information 1812 can include shot data information that identifies timing and target coordinates for the laser pulse shots 122 to be fired by the lidar transmitter 100. In an example embodiment, the scheduled shot information 1812 can also include detection range values to use for each scheduled shot to support the detection of returns 1402 from those scheduled shots. These detection range values may include minimum and maximum range values (Rmin and Rmax respectively) for each shot. In this fashion, Rmin(i) would be the minimum detection range associated with Shot(i) and Rmax(i) would be the maximum detection range associated with Shot(i). These minimum and maximum range values can be translated by the receiver controller 1810 into times for starting and stopping collections from the selected pixels 1804 of the array 1802 as discussed below. - The
receiver controller 1810 and/or signal processing circuit 1820 may include one or more processors. These one or more processors may take any of a number of forms. For example, the processor(s) may comprise one or more microprocessors. The processor(s) may also comprise one or more multi-core processors. As another example, the one or more processors can take the form of a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC) which provide parallelized hardware logic for implementing their respective operations. The FPGA and/or ASIC (or other compute resource(s)) can be included as part of a system on a chip (SoC). However, it should be understood that other architectures for such processor(s) could be used, including software-based decision-making and/or hybrid architectures which employ both software-based and hardware-based decision-making. The processing logic implemented by the receiver controller 1810 and/or signal processing circuit 1820 can be defined by machine-readable code that is resident on a non-transitory machine-readable storage medium such as memory within or available to the receiver controller 1810 and/or signal processing circuit 1820. The code can take the form of software or firmware that define the processing operations discussed herein. This code can be downloaded onto the processor using any of a number of techniques, such as a direct download via a wired connection as well as over-the-air downloads via wireless networks, which may include secured wireless networks. As such, it should be understood that the lidar receiver 1400 can also include a network interface that is configured to receive such over-the-air downloads and update the processor(s) with new software and/or firmware. This can be particularly advantageous for adjusting the lidar receiver 1400 to changing regulatory environments. 
When using code provisioned for over-the-air updates, the lidar receiver 1400 can operate with unidirectional messaging to retain functional safety.
-
FIGS. 19A and 19B illustrate time constraints involved in detecting shot returns 1402 and how these time constraints relate to the Rmin and Rmax values. - In
FIGS. 19A and 19B , the horizontal axes correspond to time. In FIG. 19A , the time at which a first laser pulse shot (Shot 1) is fired is denoted by T(1) (where the parenthetical (1) in T(1) references the shot number). TT1(1) denotes when the receiver 1400 starts the collection from the pixel set used for detecting a return from Shot 1. The time at which collection stops from this pixel set (to end the readout of signal from the pixel set for detecting a return from Shot 1) is denoted as TT2(1). It should be understood that the parentheticals in these terms T, TT1, and TT2 reference the shot number for the detection. Thus, the time duration from TT1(1) to TT2(1) represents the detection interval for detecting a return from Shot 1 because it is during this time interval that the receiver 1400 is able to collect signal from the pixel set used for detecting a return from Shot 1. - As shown by
FIG. 19A , the time duration from T(1) to TT1(1) corresponds to the minimum range that must exist to the target (relative to the receiver 1400) in order to detect a return from Shot 1. This minimum range is denoted by Rmin(1) in FIG. 19A (where the parenthetical references the shot number to which the minimum range value is applicable). If the target were located less than Rmin(1) from the receiver 1400, then the receiver 1400 would not be able to detect the return from Shot 1 because the lidar receiver would not yet have started collection from the pixel set used for detecting that return.
-
FIG. 19A also shows that the time duration from T(1) to TT2(1) corresponds to the maximum range to the target for detecting a return from Shot 1. This maximum range is denoted by Rmax(1) in FIG. 19A (where the parenthetical references the shot number to which the maximum range value is applicable). If the target were located greater than Rmax(1) from the receiver 1400, then the receiver 1400 would not be able to detect the return from Shot 1 because the lidar receiver would have already stopped collection from the pixel set used for detecting that return by the time that the return strikes that pixel set. - Thus, so long as the target for
Shot 1 is located at a range between Rmin(1) and Rmax(1), the receiver 1400 is expected to be capable of detecting the return if collection from the pixel set starts at time TT1(1) and stops at time TT2(1). The range interval encompassed by the detection interval of TT1(1) to TT2(1) can be referred to as the range swath S(1) (where the parenthetical references the shot number to which the range swath is applicable). This range swath can also be referenced as a range buffer as it represents a buffer of ranges for the target that make the target detectable by the receiver 1400. -
FIG. 19B further extends these time-range relationships by adding a second shot (Shot 2). The time at which Shot 2 is fired by the lidar transmitter 100 is denoted as T(2) in FIG. 19B. The start collection time for the pixel set used to detect the return from Shot 2 is denoted as TT1(2) in FIG. 19B, and the stop collection time for the pixel set used with respect to detecting the return from Shot 2 is denoted as TT2(2) in FIG. 19B. FIG. 19B further shows that (1) the time duration from T(2) to TT1(2) corresponds to the minimum range to the target for detecting a return from Shot 2 and (2) the time duration from T(2) to TT2(2) corresponds to the maximum range to the target for detecting a return from Shot 2. These minimum and maximum range values are denoted as Rmin(2) and Rmax(2) respectively by FIG. 19B. The range swath S(2) defines the range interval (or range buffer) between Rmin(2) and Rmax(2) for the detection interval of TT1(2) to TT2(2). - In an example embodiment, the
photodetector circuitry 1800 is capable of sensing returns from one pixel set at a time. For such an example embodiment, the detection interval for detecting a return for a given shot cannot overlap with the detection interval for detecting a return from another shot. This means that TT1(2) should be greater than or equal to TT2(1), which then serves as a constraint on the choice of start and stop collection times for the pixel clusters. - However, it should be understood that this constraint could be eliminated with other example embodiments through the use of multiple readout channels for the
lidar receiver 1400 as discussed below in connection with FIG. 26. - As noted above, each detection interval D(i), which corresponds to the interval from TT1(i) to TT2(i), will be associated with a particular laser pulse shot (Shot(i)). The system can control these shot-specific detection intervals so that they can vary across different shots. As such, the detection interval of D(j) for Shot(j) can have a different duration than the detection interval of D(k) for Shot(k).
- Moreover, as noted above, each detection interval D(i) has a corresponding shot interval SI(i), where the shot interval SI(i) corresponding to D(i) can be represented by the interval from shot time T(i) to the shot time T(i+1). Thus, consider a shot sequence of Shots 1-4 at times T(1), T(2), T(3), and T(4) respectively. For this shot sequence, detection interval D(1) for detecting the return from Shot(1) would have a corresponding shot interval SI(1) represented by the time interval from T(1) to T(2). Similarly, detection interval D(2) for detecting the return from Shot(2) would have a corresponding shot interval SI(2) represented by the time interval from T(2) to T(3); and the detection interval D(3) for detecting the return from Shot(3) would have a corresponding shot interval SI(3) represented by the time interval from T(3) to T(4). Counterintuitively, the inventors have found that it is often not desirable for a detection interval to be of the same duration as its corresponding shot interval due to factors such as the amount of processing time that is needed to detect returns within return signals (as discussed in greater detail below). In many cases, it will be desirable for the control process to define a detection interval so that it exhibits a duration shorter than the duration of its corresponding shot interval (D(i)<SI(i)). In this fashion, processing resources in the
signal processing circuit 1820 can be better utilized, as discussed below. Furthermore, in some other cases, it may be desirable for a detection interval to exhibit a duration longer than the duration of its corresponding shot interval (D(i)>SI(i)). For example, if the next shot at T(i+1) has an associated Rmin value greater than zero, and where the shot at T(i) is targeting a range point expected to be at a long range while the shot at T(i+1) is targeting a range point expected to be at medium or long range, then it may be desirable for D(i) to be greater than SI(i). - It can be appreciated that a laser pulse shot, Shot(i), fired at time T(i) will be traveling at the speed of light. On this basis, and using the minimum and maximum range values of Rmin(i) and Rmax(i) for detecting the return from Shot(i), the minimum roundtrip distance for Shot(i) and its return would be 2Rmin(i) and the minimum roundtrip time for Shot(i) and its return would be TT1(i)−T(i). The value for TT1(i) could be derived from Rmin(i) according to these relationships as follows (where the term c represents the speed of light):
-
(TT1(i)−T(i))c=2Rmin(i)
- which can be re-expressed as:
TT1(i)=T(i)+2Rmin(i)/c
- Thus, knowledge of when Shot(i) is fired and knowledge of the value for Rmin(i) allows the
receiver 1400 to define when collection should start from the pixel set to be used for detecting the return from Shot (i). - Similarly, the value for TT2(i) can be derived from Rmax(i) according to these relationships as follows (where the term c represents the speed of light):
-
(TT2(i)−T(i))c=2Rmax(i)
- which can be re-expressed as:
TT2(i)=T(i)+2Rmax(i)/c
- Thus, knowledge of when Shot(i) is fired and knowledge of the value for Rmax(i) allows the
receiver 1400 to define when collection can stop from the pixel set to be used for detecting the return from Shot(i). - A control process for the lidar system can then operate to determine suitable Rmin(i) and Rmax(i) values for detecting the returns from each Shot(i). These Rmin, Rmax pairs can then be translated into appropriate start and stop collection times (the on/off times of TT1 and TT2) for each shot. In an example embodiment, if the
lidar point cloud 1404 has range data and location data about a plurality of objects of interest in a field of view for the receiver 1400, this range data and location data can be used to define current range estimates for the objects of interest, and suitable Rmin, Rmax values for detecting returns from laser pulse shots that target range points corresponding to where these objects of interest are located can be derived from these range estimates. In another example embodiment, the control process for the lidar system can access map data based on the geographic location of the receiver 1400. From this map data, the control process can derive information about the environment of the receiver 1400, and suitable Rmin, Rmax values can be derived from this environmental information. Additional example embodiments for determining the values for the Rmin, Rmax pairs are discussed below.
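To make the timing arithmetic concrete, the translation from a shot time and an Rmin, Rmax pair into start/stop collection times can be sketched as follows. This is a minimal Python illustration of the TT1/TT2 relationships above; the rounded value of c and the function name are assumptions for illustration, not from the specification:

```python
C = 3.0e8  # approximate speed of light in m/s, rounded for illustration

def collection_window(t_shot, r_min, r_max):
    """Translate shot time T(i) and an Rmin(i)/Rmax(i) pair into the
    start/stop collection times using TT1 = T + 2*Rmin/c and
    TT2 = T + 2*Rmax/c (roundtrip at the speed of light)."""
    tt1 = t_shot + 2.0 * r_min / C
    tt2 = t_shot + 2.0 * r_max / C
    return tt1, tt2

# A shot fired at t=0 with Rmin=0 m and Rmax=150 m yields a detection
# interval of 1 microsecond.
tt1, tt2 = collection_window(0.0, 0.0, 150.0)
```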
FIG. 18B depicts an example process flow for execution by the receiver controller 1810 for the lidar receiver 1400 of FIG. 18A to control the selection of detector pixels 1804 in the photodetector array 1802 for readout. In this example, the lidar system includes a buffer 1830 in which the scheduled shot information 1812 is buffered by the control circuit 106. This scheduled shot information 1812 can include, for each scheduled laser pulse shot, data that identifies the time the shot is to be fired, data that identifies the range point to be targeted with the subject shot (e.g., the azimuth and elevation angles at which the subject shot will be fired), and data that defines the detection interval for detecting the return from the subject shot (e.g., data that identifies the Rmin and Rmax values to use for detecting the return from the subject shot). The entries 1832 in buffer 1830 thus correspond to the shot time, azimuth angle, elevation angle, Rmin, and Rmax values for each shot. Moreover, the order in which these entries are stored in the buffer 1830 can define the shot schedule. For example, the buffer can be a first in/first out (FIFO) buffer where entries 1832 are added into and read out of the buffer in accordance with the order in which the shots are to be fired. -
Steps 1850, 1852, and 1854 of FIG. 18B define a translation process flow for the receiver controller 1810. At step 1850, the receiver controller 1810 reads the entry 1832 corresponding to the first shot to determine the shot time, shot angles, and range swath information for that shot. At step 1852, the receiver controller 1810 determines which pixel set of the photodetector array 1802 to select for detecting the return from the shot defined by the shot angles of the entry 1832 read at step 1850. This can be accomplished by mapping the azimuth and elevation angles for the shot to a particular pixel set of the array 1802. Thus, step 1852 can operate to generate data that identifies one or more pixels 1804 of the array 1802 to include in the pixel set for detecting the return from the subject shot. The above-referenced and incorporated '015 and '873 patents describe how a practitioner can implement this mapping of shot locations to pixels. At step 1854, the receiver controller 1810 translates the shot time and Rmin, Rmax values into the TT1, TT2 values. This translation can use the expressions discussed above for computing TT1(i) and TT2(i) as a function of T(i), Rmin(i), and Rmax(i) for a given Shot(i). The values for the determined pixel set and the start/stop collection times for the pixel set can then be added to buffer 1840 as a new entry 1842. The process flow can then return to step 1850 to iterate through steps 1850-1854 and generate the control data for the next shot (as defined by the next entry 1832 in buffer 1830), and so on for so long as there are new entries 1832 in buffer 1830.
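A minimal sketch of this translation loop might look as follows in Python. The FIFO entry layout and the angle-to-pixel mapping shown here are illustrative placeholders, since the real mapping depends on the receiver optics described in the incorporated '015 and '873 patents:

```python
from collections import deque

C = 3.0e8  # approximate speed of light in m/s

def map_angles_to_pixels(azimuth, elevation):
    # Hypothetical stand-in for the shot-angle-to-pixel-set mapping of
    # the incorporated patents; returns the pixel set for these angles.
    return {(round(azimuth), round(elevation))}

def translate_shots(shot_buffer_1830):
    """Steps 1850-1854: read each scheduled shot entry (shot time,
    azimuth, elevation, Rmin, Rmax) and emit a readout control entry
    (pixel set, TT1, TT2) in shot order."""
    buffer_1840 = deque()
    for t_shot, az, el, r_min, r_max in shot_buffer_1830:
        pixel_set = map_angles_to_pixels(az, el)   # step 1852
        tt1 = t_shot + 2.0 * r_min / C             # step 1854
        tt2 = t_shot + 2.0 * r_max / C
        buffer_1840.append((pixel_set, tt1, tt2))
    return buffer_1840
```

Because the output deque preserves the input order, the shot schedule defined by buffer 1830 carries over into the readout control entries.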
Steps 1856, 1860, and 1862 of FIG. 18B define a readout control process flow for the receiver controller 1810. Each entry 1842 in buffer 1840 corresponds to exercising control over detecting the return from a different shot and identifies the pixel set to use for the detection as well as the start and stop collection times for that detection (TT1, TT2 values). At step 1856, the receiver controller reads the entry 1842 corresponding to the first shot to identify the pixel set, TT1, and TT2 values for detecting the return from that shot. Then, at the time corresponding to TT1, the receiver controller 1810 starts collecting the sensed signal from the identified pixel set (step 1860). In this fashion, the receiver controller 1810 can provide control data 1814 to the photodetector circuitry 1800 at the time corresponding to TT1 that instructs the photodetector circuitry 1800 to start the signal readout from the pixel(s) within the identified pixel set. Next, at the time corresponding to TT2, the receiver controller 1810 stops the collection from the identified pixel set (step 1862). To accomplish this, the receiver controller 1810 can provide control data 1814 to the photodetector circuitry 1800 at the time corresponding to TT2 that instructs the photodetector circuitry 1800 to stop the readout from the pixel(s) within the identified pixel set. From there, the process flow returns to step 1856 to continue the readout control process flow for the next entry 1842 in buffer 1840. This iteration through steps 1856-1862 continues for so long as there are new entries 1842 in buffer 1840 to process. - As discussed above in connection with
FIGS. 8, 11, and 14, a practitioner may want the lidar system to exercise highly precise control over when the laser pulse shots are fired; and this can be accomplished by having the beam scanner controller 802 provide firing commands 120 to the laser source 102 precisely when the fine mirror motion model 808a indicates the lidar transmitter 100 will be pointing at a particular shot angle (e.g., an azimuth angle, elevation angle pair). The beam scanner controller 802 can then report these precise shot times to the receiver 1400 as shot timing information 1410. Accordingly, as shown by FIG. 18C, the lidar receiver 1400 can receive the scheduled shot information 1812 in the form of (1) the shot timing information 1410 from the beam scanner controller 802 (which in this example will identify the precise shot times for each shot) and (2) the shot data 1312 from the system controller 800 (which in this example will identify the shot angles for each shot as well as the Rmin and Rmax values for each shot). -
FIG. 18D depicts an example process flow for the receiver controller 1810 with respect to the example of FIG. 18C. The process flow of FIG. 18D can operate in a similar fashion as the process flow of FIG. 18B with a couple of exceptions. For example, at step 1854 of FIG. 18D, the receiver controller 1810 can compute the start and stop collection times as time offsets relative to the fire time for the shot rather than as absolute values. That is, rather than computing values for TT1(i) and TT2(i), the receiver controller can compute values for (1) TT1(i)−T(i) (which would identify the TT1(i) offset relative to fire time T(i)) and (2) TT2(i)−T(i) (which would identify the TT2(i) offset relative to fire time T(i)) as follows:
TT1(i)−T(i)=2Rmin(i)/c

TT2(i)−T(i)=2Rmax(i)/c

- Then, after step 1856 is performed to read entry 1842 in buffer 1840, the receiver controller 1810 can also determine the fire time T(i) for the subject Shot(i) based on the shot timing information 1410 received from the beam scanner controller 802. Using this shot time as the frame of reference for the TT1 and TT2 offset values, steps 1860 and 1862 can then start and stop collection at the absolute times obtained by adding the offsets to T(i). -
FIG. 20 shows an example process flow for use by the signal processing circuit 1820 to detect returns 1402 and compute return information 1822 for the returns 1402. At step 2000, the signal processing circuit 1820 digitizes the sensed signal 1806 produced by the photodetector circuitry 1800. This sensed signal 1806 represents the incident light on the activated pixels 1804 of the array 1802 over time, and thus is expected to include signals corresponding to the returns 1402. The signal processing circuit 1820 can perform step 2000 using an ADC to produce digital samples 2004 that are added to buffer 2002. - The
signal processing circuit 1820 then needs to segment these samples 2004 into groups corresponding to the detection intervals for the returns from each shot. This aspect of the process flow is identified by the detection loop 2020 of FIG. 20. A processor 2022 within the signal processing circuit 1820 can perform this detection loop 2020. To help accomplish this, the processor can access the buffer 1840 to determine the start and stop collection times for detecting the returns from each shot. As discussed above, entries 1842 in buffer 1840 can include the start/stop collection times as either absolute or offset values for TT1 and TT2. At step 2006, the processor reads the next entry 1842 in buffer 1840 to determine the TT1 and TT2 values (which as noted can be either absolute values or offset values). These TT1 and TT2 values can then be used to find which samples 2004 in the buffer 2002 correspond to the detection interval of TT1 to TT2 for detecting the subject return (step 2008). The processor thus reads the digital samples 2004 corresponding to the subject detection interval, and then processes the digital samples 2004 in this group to detect whether the return is present (step 2010). When a return is detected, step 2010 can also compute return information 1822 based on these samples 2004. As noted above, the samples 2004 can be processed to compute a range to target for the shot. For example, according to a time-of-flight (TOF) technique, the processor can compute the range for the return based on knowledge of when the shot was fired, when the detected return was received, and the value for the speed of light. The samples 2004 can also be processed to compute an intensity for the shot return. For example, the return intensity can be computed by multiplying the return energy by the range squared, and then dividing by the transmitted shot energy and again by the effective receiver pupil aperture.
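One pass through this detection loop can be sketched in Python as shown below. The 1 GS/s sample rate, the simple peak-sample detection, and the function and parameter names are assumptions for illustration; a real implementation would use matched filtering and calibrated units:

```python
C = 3.0e8            # approximate speed of light in m/s
SAMPLE_RATE = 1.0e9  # assumed ADC rate of 1 gigasample/second

def process_return(samples, t_shot, tt1, tt2, shot_energy, aperture):
    """Steps 2006-2010: slice the sample buffer to the detection
    interval [TT1, TT2), detect the return as the peak sample, and
    compute TOF range and intensity for that return."""
    i0, i1 = int(tt1 * SAMPLE_RATE), int(tt2 * SAMPLE_RATE)
    window = samples[i0:i1]
    if not window or max(window) <= 0.0:
        return None  # no return detected in this interval
    peak = max(range(len(window)), key=lambda i: window[i])
    t_return = (i0 + peak) / SAMPLE_RATE
    rng = C * (t_return - t_shot) / 2.0  # time-of-flight range
    # Intensity: return energy times range squared, divided by the
    # transmitted shot energy and the effective receiver aperture.
    intensity = window[peak] * rng ** 2 / (shot_energy * aperture)
    return rng, intensity
```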
From step 2010, the detection loop 2020 can return to steps 2006 and 2008 to read the next entry 1842 from buffer 1840 and grab the next group of samples 2004 from buffer 2002, and then detect the return and compute its return information at step 2010 (and so on for additional returns).

Multi-Processor Return Detection:

The amount of time needed by processor 2022 to perform the detection loop 2020 is an important metric that impacts the lidar system. This amount of time can be characterized as Tproc, and it defines the rate at which processor 2022 draws samples 2004 from buffer 2002. This rate can be referenced as Rate 1. The rate at which the receiver adds samples 2004 to buffer 2002 can be referenced as Rate 2. It is highly desirable for the processor 2022 to operate in a manner where Rate 1 is greater than (or at least no less than) Rate 2 so as to avoid throughput problems and potential buffer overflows. To improve throughput for the lidar receiver 1400 in this regard, the signal processing circuit 1820 can include multiple processors 2022 that distribute the detection workload so that the multiple processors 2022 combine to make it possible for the receiver 1400 to keep up with the shot rate of the lidar transmitter 100 even if Rate 1 is less than the shot rate of the lidar transmitter 100. For example, if there are N processors 2022, then Rate 1 can be N times less than the shot rate of the lidar transmitter 100 while still keeping pace with the shots. FIG. 21A shows an example of a multi-processor architecture for the signal processing circuit 1820 in this regard. As shown by FIG. 21A, the signal processing circuit 1820 comprises two or more processors 2022 1, . . . , 2022 N. Each processor 2022 i can access buffers 1840 and 2002 and perform the detection loop of FIG. 20 for the returns corresponding to different shots. FIG. 21B shows an example of control flow for the different processors 2022 i. At step 2100, each processor 2022 i decides whether it is free to process another return. In other words, has it finished processing the previous return it was working on?
If the subject processor 2022 i decides that it is free, it proceeds to step 2102 where it performs the detection processing loop 2020 for the next return available from the buffers 1840 and 2002. In this fashion, each processor 2022 i can grab samples 2004 from buffer 2002 to work on the next return on a first come, first served basis and thereby distribute the workload of processing the returns across multiple processors to help reduce the processing latency of the signal processing circuit 1820. - The
processors 2022 i can take any of a number of forms. For example, each processor 2022 i can be a different microprocessor that shares access to the buffers 1840 and 2002, in which case the different microprocessors can operate on samples 2004 corresponding to different returns if necessary. As another example, each processor 2022 i can be a different processing core of a multi-core processor, in which case the different processing cores can operate on samples 2004 corresponding to different returns if necessary. As yet another example, each processor 2022 i can be a different set of parallelized processing logic within a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC). In this fashion, parallelized compute resources within the FPGA or ASIC can operate on samples 2004 corresponding to different returns if necessary.
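The first come, first served distribution can be sketched with a small worker pool. The helper names are hypothetical, and the per-return work function is a stand-in for the detection loop of FIG. 20 (a real system would use the hardware resources described above rather than Python threads):

```python
from concurrent.futures import ThreadPoolExecutor

def detect_return(entry):
    # Stand-in for one pass of the detection loop: return (shot number,
    # peak sample) for the entry's group of samples.
    shot_number, samples = entry
    return shot_number, max(samples)

def process_returns(entries, n_processors=2):
    """Distribute per-return detection across N workers; each free
    worker picks up the next available return, and the results are
    rejoined into the original shot order afterward."""
    with ThreadPoolExecutor(max_workers=n_processors) as pool:
        results = list(pool.map(detect_return, entries))
    return sorted(results)  # restore the original time sequence of shots
```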
processors 2022 will be sufficient to distribute the workload of processing thesamples 2004 withinbuffer 2002. With this arrangement, the twoprocessors 2022 can effectively alternate in terms of which returns they will process (e.g.,Processor 1 can work on the samples for even-numbered returns whileProcessor 2 works on the samples for the odd-numbered returns). However, this alternating pattern may not necessarily hold up if, for example, the detection interval forReturn 1 is relatively long (in whichcase Processor 1 may need to process a large number of samples 2004) while the detection intervals forReturns Processor 1 is still processing the samples fromReturn 1 whenProcessor 2 completes its processing of the samples from Return 2 (and thusProcessor 2 is free to begin processing the samples fromReturn 3 whileProcessor 1 is still working on the samples from Return 1). - Moreover, the
return information 1822 computed by each processor 2022 i can be effectively joined or shuffled together into the original time sequence of shots when adding the return information 1822 to the point cloud 1404.

Choosing Rmin, Rmax Values:
- The task of choosing suitable Rmin and Rmax values for each shot can be technically challenging and involves a number of tradeoffs. In an ideal world, the value of Rmin would be zero and the value of Rmax would be infinite; but this is not feasible for real world applications because there are a number of constraints which impact the choice of values for Rmin and Rmax. Examples of such constraints are discussed below, and these constraints introduce a number of tradeoffs that a practitioner can resolve to arrive at desirable Rmin and Rmax values for a given use case.
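As a point of reference for the constraints discussed below, the basic relationship between a detection interval's duration and the range swath it supports can be sketched as follows (a simple Python illustration using a rounded speed of light):

```python
C = 3.0e8  # approximate speed of light in m/s

def range_swath(detection_interval):
    """Range swath S = Rmax - Rmin supported by a detection interval,
    from the roundtrip relationship S = c * (TT2 - TT1) / 2."""
    return C * detection_interval / 2.0

swath = range_swath(1.0e-6)  # a 1 usec detection interval spans 150 m
```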
- For an example embodiment as discussed above where the
lidar receiver 1400 is only capable of receiving/detecting one return at a time, a first constraint is the shot timing. That is, the receiver 1400 needs to quit listening for a return from Shot 1 before it can start listening for a return from Shot 2. Accordingly, for a given fixed shot spacing, if a practitioner wants to have fixed Rmin and Rmax values, their difference must be equal to the intershot timing (after scaling by 2/c). For example, for a 1 μsec detection interval, the corresponding range buffer would be a total of 150 m. This would permit Rmin to be set at 0 m and Rmax to be set at 150 m (or Rmin=40 m, Rmax=190 m, etc.). Thus, if Rmax is increased, we can avoid adding time to Tproc by also increasing the value of Rmin by a corresponding amount so that Rmax−Rmin does not change. - A second constraint on Rmin, Rmax values is physics. For example, the
receiver 1400 can only detect up to a certain distance for a given shot energy. For example, if the energy in a laser pulse shot 122 is low, there would not be a need for a large Rmax value. Moreover, the receiver 1400 can only see objects up to a certain distance based on the elevation angle. As an example, the receiver 1400 can only see a short distance if it is looking at a steep downward elevation angle because the field of view would quickly hit the ground at steep downward elevation angles. In this regard, for a receiver 1400 at a height of 1 m and an elevation angle of −45 degrees, Rmax would be about 1.4 m. The light penetration structure of the air within the environment of the lidar system can also affect the physics of detection. For example, if the lidar receiver 1400 is operating in clear weather, at night with dark or artificial lighting, and/or in a relatively open area (e.g., on a highway), the potential value for Rmax could be very large (e.g., 1 km or more) as the lidar receiver 1400 will be capable of detecting targets at very long range. But, if the lidar receiver 1400 is operating in bad weather or during the day (with bright ambient light), the potential value for Rmax may be much shorter (e.g., around 100 m) as the lidar receiver 1400 would likely only need to be capable of detecting targets at relatively shorter ranges. - A third constraint arises from geometry and a given use case. Unlike the physics constraints in the second constraint category discussed above (which are based on features of the air surrounding the lidar system), geometry and use case can be determined a priori (e.g., based on maps and use cases such as a given traffic environment that may indicate how congested the field of view would be with other vehicles, buildings, etc.), with no need to measure attributes in the return data. For example, if the goal is to track objects on a road, and the road curves, then there is no need to set Rmax beyond the curve. Thus, if the
receiver 1400 is looking straight ahead and the road curves at a radius of curvature of 1 km, roughly 100 m for Rmax would suffice. This would be an example where accessing map data can help in the choice of suitable Rmax values. As another example, if the lidar receiver 1400 is operating in a relatively congested environment (e.g., on a city street), the potential value for Rmax may be relatively short (e.g., around 100 m) as the lidar receiver 1400 would likely only need to be capable of detecting targets at relatively short ranges. Also, for use cases where there is some a priori knowledge of what the range is to an object being targeted with a laser pulse shot, this range knowledge can influence the selection of Rmin and Rmax. This would be an example where accessing lidar point cloud data 1404 can help in the choice of suitable Rmin and Rmax values. Thus, if a given laser pulse shot is targeting an object having a known estimated range of 50 m, then this knowledge can drive the selection of Rmin, Rmax values for that shot to be values that encompass the 50 m range within a relatively tight tolerance (e.g., Rmin=25 m and Rmax=75 m). - A fourth constraint arises from the processing time needed to detect a return and compute return information (Tproc, as discussed above). If the
receiver 1400 has N processors and all are busy processing previous returns, then the receiver 1400 must wait until one of the processors is free before processing the next return. This Tproc constraint can make it undesirable to simply set the detection intervals so that they coincide with their corresponding shot intervals (e.g., TT1(i)=T(i) and TT2(i)=T(i+1), where TT1(1)=T(1), TT2(1)=T(2), and so on). For example, imagine a scenario where the receiver 1400 includes two processors for load balancing purposes and where the shot spacing has a long delay between Shots 1 and 2 (say 100 μsec), and then a quick sequence of Shots 2 and 3. Suppose Processor A is still busy processing the return from Shot 1, and Processor B would need 10 μsec to process the return from Shot 2. This means that Processor A would still be working on Shot 1 (and Processor B would still be working on Shot 2) when the return from Shot 3 reaches the receiver 1400. Accordingly, the system may want to trade off the detection interval for detecting the return from Shot 1 by using a smaller value for Rmax(1) so that there is a processor available to work on the return from Shot 3. Thus, the variable shot intervals that can be accommodated by the lidar system disclosed herein will often make it desirable to control at least some of the detection intervals so that they have durations that are different than the durations of the corresponding shot intervals, as discussed above. - Accommodating the Tproc constraint can be accomplished in different ways depending on the needs and desires of a practitioner. For example, under a first approach, the Rmax value for the processor that would be closest to finishing can be redefined to a lesser value so that processor is free exactly when the new shot is fired. In this case, the Rmin for the new shot can be set to zero. Under a second approach, we can keep Rmax the same for the last shot, and then set Rmin for the new shot to correspond to the time when the processor first frees up.
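The second approach can be sketched as follows, under the assumptions that the time at which a processor frees up is known and that a rounded speed of light is acceptable; the function name is illustrative:

```python
C = 3.0e8  # approximate speed of light in m/s

def rmin_for_processor_availability(t_processor_free, t_new_shot):
    """Second approach above: keep the previous shot's Rmax, and set
    Rmin for the new shot so that its detection interval begins when a
    processor first frees up. Returns 0 if one is already free."""
    delay = max(0.0, t_processor_free - t_new_shot)
    return C * delay / 2.0  # range corresponding to the roundtrip delay
```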
Additional aspects of this constraint will be discussed in greater detail below.
- A fifth constraint arises from the amount of time that the
pixels 1804 of the array 1802 need to warm up when activated. This can be referred to as a settle time (Tsettle) for the pixels 1804 of the array 1802. When a given pixel 1804 is activated, it will not reliably measure incident light until the settle time passes, which is typically around 1 μsec. This settle time effectively defines the average overall firing rate for a lidar system that uses example embodiments of the lidar receiver 1400 described herein. For example, if the firing rate of the lidar transmitter 100 is 5 million shots per second, the settle time would prevent the receiver 1400 from detecting returns from all of these shots because that would exceed the ability of the pixels 1804 to warm up sufficiently quickly for detecting returns from all of those shots. However, if the firing rate is only 100,000 shots per second, then the settle time would not be a limiting factor.
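This limit amounts to a one-line comparison; the sketch below assumes one pixel-set activation per shot and a 1 μsec default settle time:

```python
def settle_time_limited(shot_rate, t_settle=1.0e-6):
    """True if the per-pixel settle time prevents the receiver from
    detecting a return for every shot (one activation per shot)."""
    return shot_rate > 1.0 / t_settle
```

With a 1 μsec settle time, a 5 million shot/second transmitter exceeds the limit while a 100,000 shot/second transmitter does not, matching the examples above.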
FIG. 27 shows an example embodiment that illustrates how the circuitry of the receiver 1400 can accommodate the settle time for the pixels 1804. For ease of illustration, the array 1802 of FIG. 27 includes only four pixels 1804. However, it should be understood that a practitioner would likely choose a much larger number of pixels 1804 for the array 1802. FIG. 27 shows an example where the photodetector circuit 1800 includes an amplifier network that connects each pixel 1804 of array 1802 to a corresponding amplifier that amplifies the signals from its corresponding pixel 1804. Each amplified signal is then fed as an input line to multiplexer 2710. Thus, as shown by FIG. 27, Pixel 1 feeds into amplifier A1, which in turn feeds into multiplexer input line M1. Similarly, Pixel 2 feeds into amplifier A2, which in turn feeds into multiplexer input line M2 (and so on for Pixels 3 and 4). As discussed in the above-referenced and incorporated U.S. Pat. Nos. 9,933,513 and 10,754,015, the amplifiers in the amplifier network can be maintained in a quiescent state when their corresponding pixels 1804 are not being used to detect returns. In doing so, the amount of power consumed by the receiver 1400 during operation can be greatly reduced. When it is time for a given pixel 1804 to be used to detect a return, that pixel's corresponding amplifier is awakened by powering it up. However, as discussed above, the pixel will need to wait for the settle time before its corresponding amplifier can pass an accurate signal to the multiplexer 2710. This means that if the lidar receiver 1400 is to start collection from a pixel at time TT1, then it must activate that pixel at least Tsettle before TT1. - Multiplexer 2710 operates to read out a sensed signal from a desired
pixel 1804 in accordance with a readout control signal 2708, where the readout control signal 2708 controls which of the multiplexer input lines are passed as output. Thus, by controlling the readout control signal 2708, the receiver 1400 can control which of the pixels 1804 are selected for passing their sensed signals as the return signal 1806. - The
receiver controller 1810 includes logic 2700 that operates on the scheduled shot information 1812 to convert the scheduled shot information into data for use in controlling the photodetector circuit 1800. The scheduled shot information 1812 can include, for each shot, identifications of (1) a shot time (T(i)), (2) shot angles (e.g., an elevation angle, azimuth angle pair), (3) a minimum detection range value (Rmin(i)), and (4) a maximum detection range value (Rmax(i)). Logic 2700 converts this scheduled shot information into the following values used for controlling the photodetector circuit 1800:
- An identification of the pixel set that will be used to detect the return from the subject Shot(i). This identified pixel set is shown as P(i) by FIG. 27.
- An identification of an activation time that will be used to define the time at which the amplifier(s) corresponding to the identified pixel set P(i) will be switched to a powered up state from a quiescent state. This identified activation time is shown as Ta(i) by FIG. 27.
- An identification of the start collection time (TT1(i)) for the identified pixel set P(i).
- An identification of the stop collection time (TT2(i)) for the identified pixel set P(i).
- An identification of a deactivation time that will be used to define the time at which the amplifier(s) corresponding to the identified pixel set P(i) will be switched from the powered-up state to the quiescent state. This identified deactivation time is shown as Td(i) by FIG. 27.
- The logic 2700 can also pass the shot times T(i) as shown by FIG. 27.
- The values for P(i) can be determined from the shot angles in the scheduled
shot information 1812 based on a mapping of shot angles to pixel sets, as discussed in the above-referenced and incorporated patents. - The values for Ta(i) can be determined so that the settle time for the identified pixel set P(i) will have passed by the time TT1(i) arrives so that P(i) will be ready to have collection started from it at time TT1(i). A practitioner has some flexibility in choosing how the
logic 2700 will compute an appropriate value for Ta(i). For example, the logic 2700 can activate the next pixel set when the immediately previous shot is fired. That is, logic 2700 can set the value for Ta(i)=T(i−1), which is expected to give P(i) enough time to power up so that collection from it can begin at time TT1(i). However, in another example embodiment, the logic 2700 can set the value for Ta(i)=TT1(i)−Tsettle (or some time value between these two options). - The values for TT1(i) and TT2(i) can be computed from the Rmin(i) and Rmax(i) values as discussed above.
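The conversion performed by logic 2700 can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the field names and helper are hypothetical, TT1 and TT2 are derived from the round-trip time of flight to Rmin and Rmax, Ta uses the Ta(i)=TT1(i)−Tsettle option, and Td is set to TT2(i) for immediate power-down.

```python
C = 3.0e8        # speed of light, m/s
T_SETTLE = 1e-6  # example amplifier settle time, seconds

def shot_to_control_values(t_fire, r_min, r_max, pixel_set):
    """Sketch of the logic 2700 conversion for one shot (hypothetical
    names; Ta strategy is the Ta(i) = TT1(i) - Tsettle option)."""
    tt1 = t_fire + 2.0 * r_min / C  # round-trip time to minimum range
    tt2 = t_fire + 2.0 * r_max / C  # round-trip time to maximum range
    return {"P": pixel_set, "Ta": tt1 - T_SETTLE, "TT1": tt1,
            "TT2": tt2, "Td": tt2}

ctrl = shot_to_control_values(t_fire=0.0, r_min=0.0, r_max=150.0,
                              pixel_set={3, 4})
assert abs(ctrl["TT2"] - 1e-6) < 1e-12  # 150 m of range = 1 usec round trip
assert ctrl["Ta"] == ctrl["TT1"] - T_SETTLE
```

Note how the 150 m example reproduces the relationship used later in the text: a 1 μsec listening window corresponds to 150 m of range.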
- The values for Td(i) can be determined so that Td(i) either equals TT2(i) or falls after TT2(i), preferably sufficiently close in time to TT2(i) so as to not unduly waste power. In choosing a suitable value for Td(i), the logic 2700 can examine the upcoming shots that are close in time to see if any of the pixels in P(i) will be needed for such upcoming shots. In such a circumstance, the logic 2700 may choose to leave the corresponding amplifier powered up. But, in an example embodiment where a practitioner wants to power down the amplifier(s) for a pixel set as soon as collection from that pixel set stops, then TT2(i) can be used as the deactivation time in place of a separate Td(i) value. - In the example of
FIG. 27, the receiver controller 1810 includes pixel activation control logic 2702 and pixel readout control logic 2704. Pixel activation control logic 2702 operates to provide an activation control signal 2706 to the amplifier network that (1) activates the amplifier(s) corresponding to each pixel set P(i) at each time Ta(i) and (2) deactivates the amplifier(s) corresponding to each pixel set P(i) at each time Td(i) (or TT2(i) as the case may be). Pixel readout control logic 2704 operates to provide a readout control signal 2708 to the multiplexer 2710 that operates to (1) select each pixel set P(i) for readout at time TT1(i) (to begin collection from that pixel set) and (2) de-select each pixel set P(i) at time TT2(i) (to stop collection from that pixel set). - Accordingly,
FIG. 27 shows an example of how the receiver controller 1810 can control which pixel sets of array 1802 will pass their sensed signals as output in the return signal 1806 to be processed by the signal processing circuit 1820. To facilitate this timing control, the receiver controller 1810 can include a pipeline of time slots that are populated with flags for the different control signals as may be applicable, so that the logic 2702 and 2704 can generate the respective control signals 2706 and 2708 at the appropriate times. By activating pixel sets in advance of their start collection times in this fashion, the receiver 1400 is better able to support close range return detections. For example, if the receiver 1400 were to wait until TT1 to activate a given pixel set, this would mean that the pixel set would not be ready to begin detection until the time TT1+Tsettle. Given that a typical value of Tsettle is around 1 μsec, this would translate to a minimum detection range of 150 m. By contrast, using the pixel activation technique of FIG. 27, the receiver 1400 can support minimum detection ranges of 0 m. - With an example embodiment, the system begins with the shot list and then chooses a suitable set of Rmin and Rmax values for each shot. Of the five constraints discussed above, all but the second and third can be resolved based simply on the shot list, knowledge of Tproc, and knowledge of the number (N) of
processors 2022 used for load balancing. For example, the third constraint would need access to additional information, such as a map, to be implemented; while the second constraint would need either probing of the atmosphere or access to weather information to ascertain how air quality might impact the physics of light propagation.
-
FIG. 22 shows an example process flow for assigning Rmin and Rmax values to each shot that is scheduled by the control circuit 106. In this example, the control circuit 106 (e.g., system controller 800) can perform the FIG. 22 process flow. However, it should be understood that this need not necessarily be the case. For example, a practitioner may choose to implement the FIG. 22 process flow within the receiver controller 1810 if desired. - The
FIG. 22 process flow can operate on the shot list 2200 that is generated by the control circuit 106. This shot list 2200 defines a schedule of range points to be targeted with the laser pulse shots 122, where each range point can be defined by a particular angle pair {azimuth angle, elevation angle}. These shots can also be associated with a scheduled fire time for each shot. - As discussed above, a number of tradeoffs exist when selecting Rmin and Rmax values to use for detecting each shot. This is particularly the case when determining the detection interval in situations where there is little a priori knowledge about the target environment.
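As a concrete but purely illustrative sketch of the data involved, a shot list entry can be modeled as an angle pair plus a scheduled fire time; the field names below are hypothetical, not from the patent.

```python
# Illustrative sketch of shot list 2200 entries: each scheduled shot targets
# a range point given by an {azimuth, elevation} angle pair and carries a
# scheduled fire time (field names are hypothetical).
shot_list = [
    {"azimuth": 10.0, "elevation": -2.0, "fire_time": 12e-6},
    {"azimuth": 10.5, "elevation": -2.0, "fire_time": 40e-6},
    {"azimuth": 11.0, "elevation": 0.0, "fire_time": 70e-6},
]

def fire_times(shots):
    # Fire times should be non-decreasing for the interval math used later.
    times = [s["fire_time"] for s in shots]
    assert times == sorted(times)
    return times

assert fire_times(shot_list) == [12e-6, 40e-6, 70e-6]
```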
Step 2202 of FIG. 22 can assign Rmin and Rmax values to each shot based on an analysis that balances a number of constraints that correspond to these tradeoffs. This analysis can solve a cost function that optimizes the detection intervals based on a number of constraints, including the first constraint discussed above for an example embodiment where the receiver 1400 cannot listen to two returns from two different shots at the same time. Thus it is desirable to optimize the maximum and minimum ranges across a given shot list according to a preset cost function, where inputs to the cost function can involve information from the point cloud data 1404 of previous frames. In general, the cost function can include a multiple range function (e.g., a one-to-many or a many-to-many function) resulting in multiple criteria optimization. - The
shot list 2200 that step 2202 operates on can be defined in any of a number of ways. For example, the shot list 2200 can be a fixed list of shots that is solved as a batch to compute the Rmin, Rmax values. In another example, the shot list 2200 can be defined as a shot pattern selected from a library of shot patterns. In this regard, the lidar system may maintain a library of different shot patterns, and the control circuit 106 can select an appropriate shot pattern based on defined criteria such as the environment or operational setting of the lidar system. For example, the library may include a desired default shot pattern for when a lidar-equipped vehicle is traveling on a highway at high speed, a desired default shot pattern for when a lidar-equipped vehicle is traveling on a highway at low speed, a desired default shot pattern for when a lidar-equipped vehicle is traveling in an urban environment with significant traffic, etc. Other shot patterns may include foviation patterns where shots are clustered at higher densities near an area such as a road horizon and at lower densities elsewhere. Examples of using such shot pattern libraries are described in the above-referenced and incorporated U.S. Pat. App. Pub. 2020/0025887. Step 2202 can then operate to solve for suitable Rmin, Rmax values for each of the shots in the selected shot pattern. - With respect to step 2202, the plurality of criteria used for optimization might include, for example, minimizing the range offset from zero meters in front of the
lidar receiver 1400, or minimizing the range offset from no less than "x" meters in front of the lidar receiver 1400 (where "x" is a selected preset value). The cost function might also include minimizing the maximum number of shots in the shot list that have a range beyond a certain preset range "xx". In general, "x" and "xx" are adapted from point cloud information in a data adaptive fashion, based on detection of objects which the perception stack determines are worthy of further investigation. While the perception stack may in some cases operate at much slower time scales, the presets can be updated on a shot-by-shot basis. - The value of optimizing the range buffers (specifically, controlling when to start and stop collection of each return) to include multiple range buffers per scan row is that this allows faster frame rates by minimizing dead time (namely, the time when data is not being collected for return detection). The parameters to be optimized, within constraints, include processing latency, start time, swath (stop time minus start time), and row angle offsets. Presets can include state space for the
processor 2022, state space for the laser source 102 (dynamic model), and state space for the mirror 110.
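The shot-pattern library described above can be sketched as a simple lookup. The library keys, pattern contents, and the 25 m/s speed threshold below are hypothetical placeholders, not values from the patent.

```python
# Hypothetical sketch of selecting a shot pattern from a library keyed by
# operating context (keys, contents, and threshold are illustrative).
SHOT_PATTERN_LIBRARY = {
    "highway_high_speed": {"density": "sparse", "foviated": True},
    "highway_low_speed": {"density": "medium", "foviated": True},
    "urban_heavy_traffic": {"density": "dense", "foviated": False},
}

def select_shot_pattern(environment, speed_mps):
    if environment == "highway":
        key = "highway_high_speed" if speed_mps > 25 else "highway_low_speed"
    else:
        key = "urban_heavy_traffic"
    return SHOT_PATTERN_LIBRARY[key]

assert select_shot_pattern("highway", 30.0)["density"] == "sparse"
assert select_shot_pattern("urban", 10.0)["density"] == "dense"
```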
Step 2202 solves equations for choosing range buffers (where examples of these equations are detailed below), and then generates the range buffer (Rmin and Rmax values) for each shot return. These operations are performed before the shots are fired. - The outer bounds for Rmin and Rmax for each shot return can correspond to the pixel switching times TT1 and TT2, where TT1(k) can be set equal to TT2(k−1) and where TT2(k) can be set equal to TT1(k+1). It will often be desirable for the
lidar receiver 1400 to turn off the old pixel set at exactly the time the new pixel set is turned on. - A set of constraints used for a state space model can be described as follows, for a use case where two processors 2022 are employed to equally distribute the processing workload by handling alternating returns. - We assume that the
signal processing circuit 1820 begins processing data the moment the initial data sample is available (namely, at time TT1(k)). Processor A cannot ingest more data until the processing for Return(k) is cleared, which we can define as Tproc seconds after the previous return detection was terminated. The same goes for Processor B. For ease of conception, we will define Tproc as being one half of the realtime rate of return detection (or faster). We will take TT1(k)=T(k) (where an Rmin of zero is the starting point) to simplify the discussion, although it should be understood that this need not be the case. With the TT1 values set equal to the fire times of their corresponding shots, this means that the shot T(k+1) cannot be fired until the system stops collecting samples from the last shot. In other words, T(k+1)>TT2(k). - Collection for the shot fired at T(k+2) cannot be started until the previous shot processed by the same processor (e.g., the shot of the same even or odd parity, if we assume the two processors 2022 alternate return collections) has cleared its processor. This leads to the second of our two inequalities:
- TT2(k)+Tproc<T(k+2)
-
- These inequalities can be re-expressed using matrix notation as shown by
FIG. 23A . As noted, S is a shift operator where ST(k)=T(k+1), and S2 denotes a shift from T(k) to T(k+2). TT2 and T are expressed as vectors of dimension B, and b is a positive arbitrary 2N dimensional vector (of relaxation terms, built from shuffled versions of bk, bk′ of size twice that of TT2. In is an n-by-n identity matrix where the diagonal values are all ones and the other values are all zeros; and On is an n-by-n matrix of all zeros. Inequality constraints can be replaced with equality constraints using new entries as we have done here. B can be referred to as a relaxation variable. For example, we can replace x>0 with x+b=0 (where b>0). Note that T is known, and TT2 is our free variable. - The equation of
FIG. 23A is a state space equation since it is expressed in terms of relations between past and current values of an unknown variable. This can be solved for real-valued variables using simultaneous linear inequality solvers such as quadratic programming, which is available with software packages such as MATLAB (available from Mathworks). An example quadratic programming embodiment for the state space model of FIG. 23A is shown by FIG. 23B. In this fashion, at step 2202 of FIG. 22, the control circuit 106 can determine the detection interval data for each of a plurality of laser pulse shots according to a state space equation that is solved using multiple simultaneous inequality constraint equations. This can yield the scheduled shot information 1812 where the schedule of range points to be targeted with laser pulse shots is augmented with associated detection interval data such as pixel set activation/deactivation times and/or Rmin, Rmax values. While these discussions are expressed in terms of TT2 values, it should be understood that these solutions can also work from Rmax values using the relationships discussed above where TT2 can be expressed in terms of Rmax and the shot times.
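The shift-operator notation can be made concrete with a small sketch. This does not reproduce the exact block layout of FIG. 23A; it only illustrates, under the assumption of a standard superdiagonal shift matrix, how S and S2 act on the vector of shot times.

```python
import numpy as np

# Model the shift operator S as an n-by-n matrix with ones on the first
# superdiagonal, so that (S @ T)[k] = T[k+1]; S @ S shifts by two. This is
# an illustration of the matrix relations, not the patent's exact layout.
def shift_matrix(n, by=1):
    return np.eye(n, k=by)

T = np.array([12.0, 40.0, 70.0, 86.0, 101.0, 121.0])  # shot times (usec)
n = len(T)
S = shift_matrix(n)
S2 = S @ S

assert np.array_equal((S @ T)[:-1], T[1:])    # ST(k) = T(k+1)
assert np.array_equal((S2 @ T)[:-2], T[2:])   # S^2 T(k) = T(k+2)
```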
FIG. 23A expresses all possible detection intervals consistent with the shot list, and FIG. 23B represents one solution amongst these possibilities (where this solution is one that, loosely speaking, performs uniformly well). But this solution is not necessarily optimal. For example, at a shot elevation that is low, the ground can be expected to be close, in which case it makes little sense to set TT2 in a fashion that enables long range detection. A toy example can help illustrate this point.
- If we have two processors, each of which computes detections at 2× realtime, we might have as a solution (where we will assume in all cases that Rmin=0):
-
Processor A: shots fired at 1 μsec and 98 μsec, with range intervals Rmax = 7.3 km and Rmax = 150 m, respectively.
Processor B: shots fired at 2 μsec and 100 μsec, with range intervals Rmax = 7.3 km and Rmax = 150 m, respectively.
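The listening times implied by these Rmax values follow from the round-trip time-of-flight relation (a physics check, not code from the patent):

```python
C = 3.0e8  # speed of light, m/s

def listening_time(r_max):
    # Round-trip time of flight out to range r_max (with Rmin = 0).
    return 2.0 * r_max / C

# Rmax = 7.3 km ties up a processor for roughly 48.7 usec of collection,
# while Rmax = 150 m needs only 1 usec.
assert abs(listening_time(7300.0) - 48.67e-6) < 0.1e-6
assert abs(listening_time(150.0) - 1e-6) < 1e-12
```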
- Accordingly, the inventors also disclose an embodiment that combines mathematical optimization functions with some measure of value substitutions and updating in certain circumstances to arrive a better solution (an example of which is discussed below in connection with
FIG. 24 ). - As another example where range substitutions and optimization updates can improve the solution, suppose the shot list obtained for a particular scenario fires at the following times in units of microseconds, at elevation angle shown respectively:
-
- Shot Time (pec)={12 40 70 86 101 121}
- Elevation-Angle (degrees)={−10,−10,0,0,0,0}
- Using the inequality for TT2 above and picking the largest detection interval at each shot, the result for the first four shots is:
-
TT2(1)≤40, TT2(2)≤63, TT2(3)≤85.5, TT2(4)≤101 - This maps to detection intervals (in μsec of time) of:
-
{TT2(k)−T(k)}k=1,2,3,4={28,23,15.5,15} - The sub-optimal nature of this solution arises because it yields large detection intervals at low elevations (where a long detection interval is not needed) and a small detection intervals at the horizon (where elevation angle is zero degrees, which is where a long detection interval is more desirable).
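These numbers can be reproduced with a short sketch. Here Tproc is modeled as equal to the collection interval TT2(k)−T(k), an assumption chosen because it is consistent with the worked numbers above; under it, the two inequalities reduce to a per-shot minimum.

```python
def assign_tt2(T, k):
    # Two constraints from the text, with Tproc modeled as the collection
    # interval TT2(k) - T(k) (an assumption consistent with the worked
    # numbers):
    #   TT2(k) <= T(k+1)           (non-overlapping collections)
    #   TT2(k) + Tproc <= T(k+2)   (same-parity processor must be free)
    # The second constraint simplifies to TT2(k) <= (T(k) + T(k+2)) / 2.
    return min(T[k + 1], (T[k] + T[k + 2]) / 2.0)

T = [12, 40, 70, 86, 101, 121]  # shot times in usec
intervals = [assign_tt2(T, k) - T[k] for k in range(4)]
assert intervals == [28, 23, 15.5, 15]
```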
- As a solution to this issue,
FIG. 24 discloses a process flow where the control circuit 106 can swap out potentially suboptimal detection intervals for more desirable detection intervals. As shown by FIG. 24, step 2202 can operate as described in connection with FIG. 22 to generate data corresponding to the detection interval solutions computed in accordance with the models of FIGS. 23A and 23B. - The
control circuit 106 can also maintain a list 2402 of range points with desired detection intervals. For example, the list 2402 can identify various shot angles that will intersect with the ground within some defined distance from the lidar system (e.g., some nominally short distance). For a lidar-equipped vehicle, examples of such shot angles would be for shots where the elevation angle is low and expected to be pointing at the road within some defined distance. For these shot angles, the detection interval corresponding to Rmax need not be a large value because it will be known that the shot will hit the ground within the defined distance. Accordingly, for these low elevation angles, the list 2402 can define a desired Rmax or TT2 value that reflects the expected distance to ground. As another example, the list 2402 can identify shot angles that lie off the motion path of the lidar system. For example, for a lidar-equipped vehicle, it can be expected that azimuth angles that are large in absolute value will be looking well off to the side of the vehicle. For such azimuth angles, the system may not be concerned about potential targets that are far away because they do not represent collision threats. Accordingly, for these large absolute value azimuth angles, the list 2402 can define a desired Rmax or TT2 value that reflects the shorter range of potential targets that would be of interest. Range segmentations that can be employed by list 2402 may include (1) shot angles linked to desired Rmax or TT2 values corresponding to 0-50 m, (2) shot angles linked to desired Rmax or TT2 values corresponding to 50-150 m, and (3) shot angles linked to desired Rmax or TT2 values corresponding to 150-300 m. - Then, at
step 2404, the control circuit 106 can compare the assigned detection interval solutions produced by step 2202 with the list 2402. If there are any shots with assigned detection interval solutions that fall outside the desired detection intervals from list 2402, the control circuit 106 can then swap out the assigned detection interval for the desired detection interval from list 2402 (for each such shot). Thus, step 2404 will replace one or more of the assigned detection intervals for one or more shots with the desired detection intervals from the list 2402. - The
control circuit 106 can then proceed to step 2406 where it re-assigns detection intervals to the shots that were not altered by step 2404. That is, the shots that did not have their detection intervals swapped out at step 2404 can have their detection intervals re-computed using the models of FIGS. 23A and 23B. But, with step 2406, there will be fewer free variables because one or more of the shots will already have defined detection intervals. By re-solving the state space equation with the smaller set of shots, more optimal detection intervals for those shots can be computed because there will be more space available to assign to shots that may benefit from longer detection intervals. - For example, we can re-consider the toy example from above in the context of the
FIG. 24 process flow. In this example, step 2404 will operate to impose a 50 m value for Rmax on the shots targeting the elevation of −10 degrees. This 50 m value for Rmax translates to around 0.3 μsec. This relaxes the −10 degree cases to:
- TT2(1)=T(1)+0.3=12.3
- TT2(2)=40.3
- This means that both
processors 2022 are free when Shot 3 is taken at time 70 (where Shot 3 is the first shot at the horizon elevation, whose detection interval we wish to make long). The FIG. 24 process flow can make Shot 3 collect until time 85.5, which frees up a processor 2022 (say, Processor A) at time 101, just in time to collect on Shot 5. The next shot begins at time 86, whereupon Processor B is free, and Processor B can process 17.5 μsec of data and still free up before the shot at time 121 arrives. - This yields the following for the toy example with respect to
FIG. 24 : -
- Shot Time (μsec)={12 40 70 86 101 121}
- Elevation-Angle (degrees)={−10,−10,0,0,0,0}
- Detection Intervals: {TT2(k)−T(k)}k=1,2,3,4={0.3, 0.3, 15.5, 17.5}
- We can see that the
FIG. 24 process flow has increased the detection interval for the zero degree elevation shots at the expense of those at −10 degrees in elevation, which provides a better set of detection intervals for the shot list. - Lidar System Deployment:
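The final assignment can be sanity-checked by verifying that each of the two alternating processors is free again by the time its next shot arrives. As in the earlier sketch, Tproc is modeled as equal to the collection interval, which is an assumption consistent with the worked example rather than a rule stated in the text.

```python
def processors_free_in_time(T, TT2, n_proc=2):
    # Shots alternate between processors; a processor that collected for an
    # interval d = TT2(k) - T(k) is modeled as busy for another d seconds
    # of processing (Tproc = d, an illustrative assumption).
    for k in range(len(TT2)):
        nxt = k + n_proc  # next shot handled by the same processor
        if nxt < len(T):
            release = TT2[k] + (TT2[k] - T[k])
            if release > T[nxt]:
                return False
    return True

T = [12, 40, 70, 86, 101, 121]
intervals = [0.3, 0.3, 15.5, 17.5]
TT2 = [t + d for t, d in zip(T, intervals)]
assert processors_free_in_time(T, TT2)
```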
- The inventors further note that, in an example embodiment, the
lidar receiver 1400 and the lidar transmitter 100 are deployed in the lidar system in a bistatic architecture. With the bistatic architecture, there is a spatial offset of the field of view for the lidar transmitter 100 relative to the field of view for the lidar receiver 1400. This spatial separation provides effective immunity from flashes and first surface reflections that arise when a laser pulse shot is fired. For example, an activated pixel cluster of the array 1802 can be used to detect returns at the same time that the lidar transmitter 100 fires a laser pulse shot 122 because the spatial separation prevents the flash from the newly fired laser pulse shot 122 from blinding the activated pixel cluster. Similarly, the spatial separation also prevents the receiver 1400 from being blinded by reflections from surfaces extremely close to the lidar system, such as glass or other transparent material that might be located at or extremely close to the egress point for the fired laser pulse shot 122. An additional benefit that arises from this immunity to shot flashes and nearby first surface reflections is that it permits the bistatic lidar system to be positioned in advantageous locations. For example, in an automotive or other vehicle use case as shown by FIG. 25, the bistatic lidar system 2500 can be deployed inside a climate-controlled compartment 2502 of the vehicle 2504 (such as the passenger compartment), which reduces operational risks to the lidar system arising from extreme temperatures. For example, the air-conditioning inside the compartment 2502 can reduce the risk of the lidar system 2500 being exposed to excessive temperatures. Accordingly, FIG. 25 shows an example where the bistatic lidar system 2500 is deployed as part of or connected to a rearview mirror assembly 2510 or similar location in compartment 2502 where the lidar system 2500 can fire laser pulse shots 122 and detect returns 1402 through the vehicle's windshield 2512.
It should be understood that the components of FIG. 27 are not shown to scale.
- For another example embodiment, it should be understood that the detection timing constraint discussed above where the detection intervals are non-overlapping can be removed if a practitioner chooses to deploy multiple readout channels as part of the
photodetector circuitry 1800, where these multiple readout channels are capable of separately reading the signals sensed by different activated pixel clusters at the same time. FIG. 26 shows an example receiver 1400 that includes multiple readout channels (e.g., M readout channels). Each readout channel can include a multiplexer 2600 that reads the signals sensed by a given activated cluster of pixels 1804, in which case the lidar receiver 1400 is capable of detecting returns that impact different pixel clusters of the array 1802 at the same time. It should be understood that amplifier circuitry can be placed between the array 1802 and the multiplexers 2600 as described above with reference to FIG. 27 and as described in the above-referenced and incorporated U.S. Pat. Nos. 9,933,513 and 10,754,015. Through the use of multiple readout channels as exemplified by FIG. 26, practitioners can relax the constraint that the detection intervals for detecting returns from different shots be non-overlapping. Among other benefits, this approach can open up possibilities for longer range detections that might otherwise be missed because collections from a first pixel cluster needed to detect the long range return would have stopped so the receiver 1400 could start collection from a second pixel cluster needed to detect a shorter range return. With the approach of FIG. 26, collections from the first pixel cluster can continue for a longer time period, even when collections are occurring from the second pixel cluster through a different readout channel, thereby enabling the detection of the longer range return. - While the invention has been described above in relation to its example embodiments, various modifications may be made thereto that still fall within the invention's scope.
- For example, while the example embodiments discussed above involve a mirror subsystem architecture where the resonant mirror (mirror 110) is optically upstream from the point-to-point step mirror (mirror 112), it should be understood that a practitioner may choose to position the resonant mirror optically downstream from the point-to-point step mirror.
- As another example, while the
example mirror subsystem 104 discussed above employs mirrors 110 and 112, other arrangements of the mirror subsystem 104 may be used. As an example, mirrors 110 and 112 can scan along the same axis, which can then produce an expanded angular range for the mirror subsystem 104 along that axis and/or expand the angular rate of change for the mirror subsystem 104 along that axis. As yet another example, the mirror subsystem 104 can include only a single mirror (mirror 110) that scans along a first axis. If there is a need for the lidar transmitter 100 to also scan along a second axis, the lidar transmitter 100 could be mechanically adjusted to change its orientation (e.g., mechanically adjusting the lidar transmitter 100 as a whole to point at a new elevation while mirror 110 within the lidar transmitter 100 is scanning across azimuths). - As yet another example, a practitioner may find it desirable to drive
mirror 110 with a time-varying signal other than a sinusoidal control signal. In such a circumstance, the practitioner can adjust the mirror motion model 308 to reflect the time-varying motion of mirror 110. - As still another example, it should be understood that the techniques described herein can be used in non-automotive applications. For example, a lidar system in accordance with any of the techniques described herein can be used in vehicles such as airborne vehicles, whether manned or unmanned (e.g., airplanes, drones, etc.). Further still, a lidar system in accordance with any of the techniques described herein need not be deployed in a vehicle and can be used in any lidar application where there is a need or desire for hyper temporal control of laser pulses and associated lidar processing.
- These and other modifications to the invention will be recognizable upon review of the teachings herein.
Claims (29)
1. A lidar system comprising:
a photodetector circuit, the photodetector circuit comprising an array of pixels for sensing incident light; and
a control circuit;
wherein the control circuit (1) processes a shot list, the shot list comprising data that defines a plurality of laser pulse shots that target a plurality of range points in a field of view and (2) determines a plurality of detection intervals associated with the laser pulse shots based on the processed shot list and defined criteria, the detection intervals for detecting returns from their associated laser pulse shots, and wherein the defined criteria comprises a cost function that optimizes determination of the detection intervals for a plurality of the laser pulse shots from the shot list; and
wherein the photodetector circuit selectively starts and stops collections from a plurality of pixels of the array in accordance with the determined detection intervals to control the photodetector circuit to sense the returns from the laser pulse shots.
2. The system of claim 1 wherein the cost function comprises a state space equation that solves for the determined detection intervals using multiple simultaneous inequality constraint equations.
3. The system of claim 2 wherein the control circuit uses quadratic programming to solve for the determined detection intervals according to the state space equation and the multiple simultaneous inequality constraint equations.
4. The system of claim 1 wherein the control circuit, for each of a plurality of the laser pulse shots, identifies a pixel set of the array to use for sensing a return from that laser pulse shot, and wherein the determined detection intervals are associated with corresponding identified pixel sets; and
wherein the photodetector circuit starts and stops collections from the identified pixel sets in accordance with their associated corresponding determined detection intervals.
5. The system of claim 4 wherein the control circuit identifies the pixel sets based on the range points that are targeted by the laser pulse shots.
6. The system of claim 5 wherein the shot list identifies the targeted range points for the laser pulse shots by azimuth and elevation angles.
7. The system of claim 4 wherein each of the identified pixel sets comprises one or more of the pixels of the array.
8. The system of claim 4 wherein the identified pixel sets follow a pattern that corresponds to the range points targeted by the laser pulse shots.
9. The system of claim 4 wherein each of a plurality of the determined detection intervals comprises (1) first data that indicates when to start collection from its corresponding identified pixel set and (2) second data that indicates when to stop collection from its corresponding identified pixel set.
10. The system of claim 9 wherein, for each of a plurality of the determined detection intervals, the first and second data comprise estimates of minimum and maximum ranges for the range point targeted by the laser pulse shot associated with that determined detection interval.
11. The system of claim 10 wherein the control circuit translates the minimum and maximum range estimates into start and stop collection times for the identified pixel sets associated with the determined detection intervals.
12. The system of claim 9 wherein, for each of a plurality of the determined detection intervals, the first and second data comprise start and stop collection times for the identified pixel set associated with that determined detection interval.
13. The system of claim 1 wherein the determined detection intervals are non-overlapping.
14. The system of claim 1 wherein the control circuit activates pixels of the array to be used for detecting the returns sufficiently prior to when collections are to start from the activated pixels for a pixel settle time to have passed when the collections are to start from the activated pixels.
15. The system of claim 1 wherein the defined criteria further comprises data indicative of scheduled fire times for next laser pulse shots from the shot list.
16. The system of claim 1 further comprising:
a signal processing circuit that processes sensed signal data from the photodetector circuit to (1) detect the returns within the sensed signal data and (2) compute return data for the detected returns.
17. The system of claim 16 wherein the signal processing circuit comprises a plurality of processors that share processing of the sensed signal data.
18. The system of claim 1 further comprising:
a lidar transmitter, wherein the lidar transmitter comprises a scannable mirror, and wherein the lidar transmitter transmits the laser pulse shots toward the targeted range points via the scannable mirror.
19. The system of claim 18 wherein the lidar transmitter scans the scannable mirror in a resonant mode.
20. The system of claim 19 wherein the lidar transmitter scans the scannable mirror in the resonant mode at a scan frequency in a range between 100 Hz and 20 kHz.
21. The system of claim 19 wherein the lidar transmitter scans the scannable mirror in the resonant mode at a scan frequency in a range between 10 kHz and 15 kHz.
22. The system of claim 18 wherein the scannable mirror comprises a first scannable mirror and a second scannable mirror, wherein the lidar transmitter transmits the laser pulse shots toward the targeted range points via the first and second scannable mirrors.
23. The system of claim 22 wherein the lidar transmitter scans the second scannable mirror in a point-to-point mode according to a step function that varies as a function of the range points targeted with the laser pulse shots.
24. The system of claim 18 wherein the lidar transmitter and the photodetector circuit are in a bistatic arrangement with respect to each other.
25. The system of claim 18 further comprising a laser source that generates the laser pulse shots, and wherein the control circuit schedules the laser pulse shots in the shot list according to a laser energy model for the laser source.
26. The system of claim 25 wherein the control circuit schedules the laser pulse shots in the shot list according to the laser energy model and a mirror motion model for the scannable mirror.
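Scheduling against a laser energy model, as in claims 25 and 26, can be sketched with a toy model that is not the patent's: stored seed energy recharges at a fixed rate between shots, and each shot is scheduled no earlier than the time at which the modeled energy meets its requested per-shot energy. All names and parameter values are illustrative; a fuller model would also cap stored energy at a maximum and account for mirror motion.

```python
def schedule_shots(shot_energies_j, recharge_rate_j_per_s, start_energy_j=0.0):
    """shot_energies_j: requested per-shot energies (J) in shot-list order.
    Returns the fire time (s, from t=0) assigned to each shot under a
    constant-rate recharge model."""
    fire_times, t, energy = [], 0.0, start_energy_j
    for req_j in shot_energies_j:
        if energy < req_j:
            # wait until the modeled stored energy reaches the request
            t += (req_j - energy) / recharge_rate_j_per_s
            energy = req_j
        fire_times.append(t)
        energy -= req_j  # the shot drains its requested energy
    return fire_times

# Example: two 1 J shots, 1 J/s recharge, starting with 1 J stored:
# the first fires immediately, the second one second later.
times = schedule_shots([1.0, 1.0], recharge_rate_j_per_s=1.0, start_energy_j=1.0)
```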
27. The system of claim 1 wherein the array comprises a two-dimensional (2D) array of pixels.
28. A method for controlling a lidar receiver, wherein the lidar receiver comprises a photodetector, the photodetector comprising an array of pixels, the method comprising:
processing a shot list, the shot list comprising data that defines a plurality of laser pulse shots that target a plurality of range points in a field of view;
determining a plurality of detection intervals associated with the laser pulse shots based on the processed shot list and defined criteria, the detection intervals for detecting returns from their associated laser pulse shots, and wherein the defined criteria comprises a cost function that optimizes determination of the detection intervals for a plurality of the laser pulse shots from the shot list;
selectively starting and stopping collections from pixels of the array in accordance with the determined detection intervals to control the photodetector to detect the returns from the laser pulse shots.
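The method of claim 28 can be sketched end to end under simplifying assumptions: each shot in the list carries a fire time and a min/max range estimate, each maps to a time-of-flight detection window, and a trivial trimming rule stands in for the cost-function optimization (here it simply enforces the non-overlap property of claim 13). All names and the trimming heuristic are illustrative, not the patent's method.

```python
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def detection_intervals(shots):
    """shots: list of (fire_time_s, min_range_m, max_range_m) in fire order.
    Returns one (start, stop) collection window per shot, with each window's
    start trimmed forward so consecutive windows do not overlap."""
    intervals = []
    for fire_t, rmin, rmax in shots:
        start = fire_t + 2.0 * rmin / C_M_PER_S
        stop = fire_t + 2.0 * rmax / C_M_PER_S
        if intervals and start < intervals[-1][1]:
            start = intervals[-1][1]  # defer to the previous window's stop
        intervals.append((start, max(stop, start)))
    return intervals

# Example: two shots 1 us apart, each expecting returns out to 300 m;
# the second window is trimmed to begin where the first one ends.
windows = detection_intervals([(0.0, 0.0, 300.0), (1e-6, 0.0, 300.0)])
```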
29. An article of manufacture for controlling a lidar receiver, wherein the lidar receiver comprises a photodetector, the photodetector comprising an array of pixels, the article of manufacture comprising:
machine-readable code that is resident on a non-transitory machine-readable storage medium, wherein the code defines processing operations to be performed by a processor to cause the processor to:
process a shot list, the shot list comprising data that defines a plurality of laser pulse shots that target a plurality of range points in a field of view;
determine a plurality of detection intervals associated with the laser pulse shots based on the processed shot list and defined criteria, the detection intervals for detecting returns from their associated laser pulse shots, and wherein the defined criteria comprises a cost function that optimizes determination of the detection intervals for a plurality of the laser pulse shots from the shot list;
selectively start and stop collections from pixels of the array in accordance with the determined detection intervals to control the photodetector to detect the returns from the laser pulse shots.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/490,260 US20220308186A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Optimized Range-Based Detection Intervals |
PCT/US2022/028261 WO2022240714A1 (en) | 2021-05-10 | 2022-05-09 | Hyper temporal lidar with controllable detection intervals |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163166475P | 2021-03-26 | 2021-03-26 | |
US202163186661P | 2021-05-10 | 2021-05-10 | |
US17/490,260 US20220308186A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Optimized Range-Based Detection Intervals |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220308186A1 true US20220308186A1 (en) | 2022-09-29 |
Family
ID=83363978
Family Applications (11)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/490,204 Pending US20220308218A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Shot-Specific Detection Control |
US17/490,280 Active US11686846B2 (en) | 2021-03-26 | 2021-09-30 | Bistatic lidar architecture for vehicle deployments |
US17/490,231 Active US11686845B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with controllable detection intervals based on regions of interest |
US17/490,248 Pending US20220308220A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Location Information |
US17/490,265 Active US11480680B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with multi-processor return detection |
US17/490,273 Pending US20220308222A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Multi-Channel Readout of Returns |
US17/490,260 Pending US20220308186A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Optimized Range-Based Detection Intervals |
US17/490,289 Active US11619740B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with asynchronous shot intervals and detection intervals |
US17/490,213 Pending US20220308214A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Range Estimates |
US17/490,194 Pending US20220308184A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals |
US17/490,221 Pending US20220308219A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Environmental Conditions |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/490,204 Pending US20220308218A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Shot-Specific Detection Control |
US17/490,280 Active US11686846B2 (en) | 2021-03-26 | 2021-09-30 | Bistatic lidar architecture for vehicle deployments |
US17/490,231 Active US11686845B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with controllable detection intervals based on regions of interest |
US17/490,248 Pending US20220308220A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Location Information |
US17/490,265 Active US11480680B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with multi-processor return detection |
US17/490,273 Pending US20220308222A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Multi-Channel Readout of Returns |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/490,289 Active US11619740B2 (en) | 2021-03-26 | 2021-09-30 | Hyper temporal lidar with asynchronous shot intervals and detection intervals |
US17/490,213 Pending US20220308214A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Range Estimates |
US17/490,194 Pending US20220308184A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals |
US17/490,221 Pending US20220308219A1 (en) | 2021-03-26 | 2021-09-30 | Hyper Temporal Lidar with Controllable Detection Intervals Based on Environmental Conditions |
Country Status (1)
Country | Link |
---|---|
US (11) | US20220308218A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220308218A1 (en) | 2021-03-26 | 2022-09-29 | Aeye, Inc. | Hyper Temporal Lidar with Shot-Specific Detection Control |
US20220308187A1 (en) | 2021-03-26 | 2022-09-29 | Aeye, Inc. | Hyper Temporal Lidar Using Multiple Matched Filters to Determine Target Retro-Reflectivity |
US11460556B1 (en) | 2021-03-26 | 2022-10-04 | Aeye, Inc. | Hyper temporal lidar with shot scheduling for variable amplitude scan mirror |
US20230256988A1 (en) * | 2022-02-17 | 2023-08-17 | Gm Cruise Holdings Llc | Dynamic lidar adjustments based on av road conditions |
Family Cites Families (268)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4017146A (en) | 1976-03-04 | 1977-04-12 | The United States Of America As Represented By The Secretary Of The Navy | Transmitter for use in a laser communication and ranging system |
US6926227B1 (en) | 1977-04-27 | 2005-08-09 | Bae Systems Information And Electronics Systems Integration Inc. | Extended range, light weight laser target designator |
DE3245939C2 (en) | 1982-12-11 | 1985-12-19 | Fa. Carl Zeiss, 7920 Heidenheim | Device for generating an image of the fundus |
DE3750840D1 (en) | 1986-11-04 | 1995-01-19 | Fritz Kruesi Maschinenbau | Device for processing a workpiece made of wood, in particular wooden beams. |
US4888785A (en) | 1988-01-19 | 1989-12-19 | Bell Communications Research, Inc. | Miniature integrated optical beam splitter |
US5625644A (en) | 1991-12-20 | 1997-04-29 | Myers; David J. | DC balanced 4B/8B binary block code for digital data communications |
US5408351A (en) | 1992-10-15 | 1995-04-18 | At&T Corp. | Optical communication system |
JPH0798381A (en) | 1993-08-06 | 1995-04-11 | Omron Corp | Scanning type distance measuring device, vehicle mounted with it, and light detecting device |
JP3042278B2 (en) | 1993-09-17 | 2000-05-15 | 三菱電機株式会社 | Distance measuring device |
IL110611A (en) | 1994-08-09 | 1997-01-10 | Israel State | Apparatus and method for laser imaging |
US5557444A (en) | 1994-10-26 | 1996-09-17 | University Of Washington | Miniature optical scanner for a two axis scanning system |
US5596600A (en) | 1995-04-06 | 1997-01-21 | Mayflower Communications Company, Inc. | Standalone canceller of narrow band interference for spread spectrum receivers |
JPH09243943A (en) | 1996-03-13 | 1997-09-19 | Minolta Co Ltd | Laser beam scanning optical device |
US5831719A (en) | 1996-04-12 | 1998-11-03 | Holometrics, Inc. | Laser scanning system |
US5988862A (en) | 1996-04-24 | 1999-11-23 | Cyra Technologies, Inc. | Integrated system for quickly and accurately imaging and modeling three dimensional objects |
US5815250A (en) | 1997-05-27 | 1998-09-29 | Coherent Technologies, Inc. | Doublet pulse coherent laser radar for precision range and velocity measurements |
JPH11153664A (en) | 1997-09-30 | 1999-06-08 | Sumitomo Electric Ind Ltd | Object detector utilizing repetitively pulsed light |
US5870181A (en) | 1997-10-28 | 1999-02-09 | Alliant Defense Electronics Systems, Inc. | Acoustic optical scanning of linear detector array for laser radar |
US6339604B1 (en) | 1998-06-12 | 2002-01-15 | General Scanning, Inc. | Pulse control in laser systems |
US6205275B1 (en) | 1998-06-22 | 2001-03-20 | Brian E. Melville | Fiber optic image transfer assembly and method of using |
US6031601A (en) | 1998-07-08 | 2000-02-29 | Trimble Navigation Limited | Code-space optical electronic distance meter |
JP3832101B2 (en) | 1998-08-05 | 2006-10-11 | 株式会社デンソー | Distance measuring device |
US6245590B1 (en) | 1999-08-05 | 2001-06-12 | Microvision Inc. | Frequency tunable resonant scanner and method of making |
WO2001089165A1 (en) | 2000-05-16 | 2001-11-22 | Nortel Networks Limited | Cellular communications system receivers |
CA2311435C (en) | 2000-06-13 | 2004-04-20 | Ibm Canada Limited-Ibm Canada Limitee | Capacitor regulated high efficiency driver for light emitting diode |
DE10104022A1 (en) | 2001-01-31 | 2002-08-01 | Bosch Gmbh Robert | Radar device and method for coding a radar device |
AU2002339874A1 (en) | 2001-05-23 | 2002-12-03 | Canesta, Inc. | Enhanced dynamic range conversion in 3-d imaging |
US6683539B2 (en) | 2001-12-27 | 2004-01-27 | Koninklijke Philips Electronics N.V. | Computer vision based parking assistant |
FR2835120B1 (en) | 2002-01-21 | 2006-10-20 | Evolium Sas | METHOD AND DEVICE FOR PREPARING SIGNALS TO BE COMPARED TO ESTABLISH PRE-DISTORTION ON THE INPUT OF AN AMPLIFIER |
JP2003256820A (en) | 2002-03-05 | 2003-09-12 | Casio Comput Co Ltd | Image reading device and its sensitivity setting method |
GB0223512D0 (en) | 2002-10-10 | 2002-11-13 | Qinetiq Ltd | Bistatic laser radar apparatus |
US6836320B2 (en) | 2002-10-23 | 2004-12-28 | Bae Systems Information And Electronic Systems Integration Inc. | Method and apparatus for active boresight correction |
JP2004157044A (en) | 2002-11-07 | 2004-06-03 | Nippon Signal Co Ltd:The | Scanning type laser radar |
CN1273841C (en) | 2002-12-24 | 2006-09-06 | 中国科学院上海技术物理研究所 | Adaptive variable-speed scanning laser imager |
US6870815B2 (en) | 2003-01-30 | 2005-03-22 | Atheros Communications, Inc. | Methods for implementing a dynamic frequency selection (DFS) and a temporary channel selection feature for WLAN devices |
US7436494B1 (en) | 2003-03-28 | 2008-10-14 | Irvine Sensors Corp. | Three-dimensional ladar module with alignment reference insert circuitry |
US6704619B1 (en) | 2003-05-24 | 2004-03-09 | American Gnc Corporation | Method and system for universal guidance and control of automated machines |
JP2005043868A (en) | 2003-07-09 | 2005-02-17 | Sony Corp | Image projection device and image projection method |
US7474332B2 (en) | 2003-08-28 | 2009-01-06 | Raytheon Company | Synthetic aperture ladar system and method using real-time holography |
US7064810B2 (en) | 2003-09-15 | 2006-06-20 | Deere & Company | Optical range finder with directed attention |
JP2005233716A (en) | 2004-02-18 | 2005-09-02 | Omron Corp | Radar device |
US7643966B2 (en) | 2004-03-10 | 2010-01-05 | Leica Geosystems Ag | Identification of 3D surface points using context-based hypothesis testing |
ITRM20040249A1 (en) | 2004-05-17 | 2004-08-17 | Univ Roma | HIGH PRECISION SURVEILLANCE SYSTEM BY MULTILATERATION OF SSR SIGNALS. |
JP3962929B2 (en) | 2004-05-18 | 2007-08-22 | 防衛省技術研究本部長 | Laser distance image generating apparatus and method |
KR20050117047A (en) | 2004-06-09 | 2005-12-14 | 삼성전자주식회사 | Optical system for scanning angle enlargement and laser scanning apparatus applied the same |
US7236235B2 (en) | 2004-07-06 | 2007-06-26 | Dimsdale Engineering, Llc | System and method for determining range in 3D imaging systems |
US7038608B1 (en) | 2004-12-16 | 2006-05-02 | Valeo Raytheon Systems, Inc. | Digital to analog converter |
WO2006076474A1 (en) | 2005-01-13 | 2006-07-20 | Arete Associates | Optical system with wavefront sensor |
US7843978B2 (en) | 2005-02-04 | 2010-11-30 | Jds Uniphase Corporation | Passively Q-switched laser with adjustable pulse repetition rate |
US7375804B2 (en) | 2005-03-01 | 2008-05-20 | Lockheed Martin Corporation | Single detector receiver for multi-beam LADAR systems |
US7532311B2 (en) | 2005-04-06 | 2009-05-12 | Lockheed Martin Coherent Technologies, Inc. | Efficient lidar with flexible target interrogation pattern |
EP1712888A1 (en) | 2005-04-11 | 2006-10-18 | Agilent Technologies Inc | Time-of-flight measurement using pulse sequences |
JP2006329971A (en) | 2005-04-27 | 2006-12-07 | Sanyo Electric Co Ltd | Detector |
JP2006308482A (en) | 2005-04-28 | 2006-11-09 | Sanyo Electric Co Ltd | Detector |
US7415082B2 (en) | 2005-05-31 | 2008-08-19 | Harris Corporation | Receiver including synch pulse detection and associated method |
US7539231B1 (en) | 2005-07-15 | 2009-05-26 | Lockheed Martin Corporation | Apparatus and method for generating controlled-linewidth laser-seed-signals for high-powered fiber-laser amplifier systems |
US7907333B2 (en) | 2005-07-27 | 2011-03-15 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Optical source and apparatus for remote sensing |
US7417717B2 (en) | 2005-10-05 | 2008-08-26 | Utah State University | System and method for improving lidar data fidelity using pixel-aligned lidar/electro-optic data |
US20090242468A1 (en) | 2005-10-17 | 2009-10-01 | Tim Corben | System for Controlling the Concentration of a Detrimental Substance in a Sewer Network |
US7397019B1 (en) | 2005-10-19 | 2008-07-08 | Alliant Techsystems, Inc. | Light sensor module, focused light sensor array, and an air vehicle so equipped |
US7944548B2 (en) | 2006-03-07 | 2011-05-17 | Leica Geosystems Ag | Increasing measurement rate in time of flight measurement apparatuses |
US8027249B2 (en) * | 2006-10-18 | 2011-09-27 | Shared Spectrum Company | Methods for using a detector to monitor and detect channel occupancy |
US8958057B2 (en) | 2006-06-27 | 2015-02-17 | Arete Associates | Camera-style lidar setup |
WO2008008970A2 (en) | 2006-07-13 | 2008-01-17 | Velodyne Acoustics, Inc | High definition lidar system |
EP1901093B1 (en) | 2006-09-15 | 2018-11-14 | Triple-IN Holding AG | Capture of distance images |
US7701558B2 (en) | 2006-09-22 | 2010-04-20 | Leica Geosystems Ag | LIDAR system |
US8884763B2 (en) | 2006-10-02 | 2014-11-11 | iRobot Corporation | Threat detection sensor suite |
US8072663B2 (en) | 2006-10-30 | 2011-12-06 | Autonosys Inc. | Scanning system for lidar |
US7961906B2 (en) | 2007-01-03 | 2011-06-14 | Science Applications International Corporation | Human detection with imaging sensors |
US7878657B2 (en) | 2007-06-27 | 2011-02-01 | Prysm, Inc. | Servo feedback control based on invisible scanning servo beam in scanning beam display systems with light-emitting screens |
IL185355A (en) | 2007-08-19 | 2012-05-31 | Sason Sourani | Optical device for projection of light beams |
US7746450B2 (en) | 2007-08-28 | 2010-06-29 | Science Applications International Corporation | Full-field light detection and ranging imaging system |
JP2009130380A (en) | 2007-11-19 | 2009-06-11 | Ricoh Co Ltd | Image reading device and image forming apparatus |
TWI345381B (en) | 2007-12-31 | 2011-07-11 | E Pin Optical Industry Co Ltd | Mems scan controller with clock frequency and method of control thereof |
US8774950B2 (en) | 2008-01-22 | 2014-07-08 | Carnegie Mellon University | Apparatuses, systems, and methods for apparatus operation and remote sensing |
US7894044B1 (en) | 2008-03-11 | 2011-02-22 | Oceanit Laboratories, Inc. | Laser for coherent LIDAR |
US20090292468A1 (en) | 2008-03-25 | 2009-11-26 | Shunguang Wu | Collision avoidance method and system using stereo vision and radar sensor fusion |
US7720126B2 (en) | 2008-05-06 | 2010-05-18 | Bae Systems Information And Electronic Systems Integration Inc. | Multi-pass laser amplifier with staged gain mediums of varied absorption length |
TW200947164A (en) | 2008-05-09 | 2009-11-16 | E Pin Optical Industry Co Ltd | MEMS scan controller with inherence frequency and method of control thereof |
US9041915B2 (en) | 2008-05-09 | 2015-05-26 | Ball Aerospace & Technologies Corp. | Systems and methods of scene and action capture using imaging system incorporating 3D LIDAR |
WO2009142758A1 (en) | 2008-05-23 | 2009-11-26 | Spectral Image, Inc. | Systems and methods for hyperspectral medical imaging |
US7982861B2 (en) | 2008-07-31 | 2011-07-19 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Time delay and distance measurement |
US7924441B1 (en) | 2008-08-08 | 2011-04-12 | Mirrorcle Technologies, Inc. | Fast and high-precision 3D tracking and position measurement with MEMS micromirrors |
US20100204964A1 (en) | 2009-02-09 | 2010-08-12 | Utah State University | Lidar-assisted multi-image matching for 3-d model and sensor pose refinement |
US8120754B2 (en) | 2009-02-19 | 2012-02-21 | Northrop Grumman Systems Corporation | Light detection and ranging apparatus |
US9091754B2 (en) | 2009-09-02 | 2015-07-28 | Trimble A.B. | Distance measurement methods and apparatus |
US8081301B2 (en) | 2009-10-08 | 2011-12-20 | The United States Of America As Represented By The Secretary Of The Army | LADAR transmitting and receiving system and method |
ES2387484B1 (en) | 2009-10-15 | 2013-08-02 | Alfa Imaging S.A. | SWEEP MULTIESPECTRAL SYSTEM. |
US8635091B2 (en) | 2009-12-17 | 2014-01-21 | Hartford Fire Insurance Company | Systems and methods for linking vehicles to telematics-enabled portable devices |
US20110149268A1 (en) | 2009-12-17 | 2011-06-23 | Marchant Alan B | Dynamic 3d wind mapping system and method |
DE112011100458B4 (en) | 2010-02-05 | 2024-02-01 | Trimble Navigation Limited | Systems and methods for processing mapping and modeling data |
US20110260036A1 (en) | 2010-02-22 | 2011-10-27 | Baraniuk Richard G | Temporally- And Spatially-Resolved Single Photon Counting Using Compressive Sensing For Debug Of Integrated Circuits, Lidar And Other Applications |
JP5251902B2 (en) | 2010-03-02 | 2013-07-31 | オムロン株式会社 | Laser processing equipment |
US8605262B2 (en) | 2010-06-23 | 2013-12-10 | The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration | Time shifted PN codes for CW LiDAR, radar, and sonar |
US8570406B2 (en) | 2010-08-11 | 2013-10-29 | Inview Technology Corporation | Low-pass filtering of compressive imaging measurements to infer light level variation |
US8736818B2 (en) | 2010-08-16 | 2014-05-27 | Ball Aerospace & Technologies Corp. | Electronically steered flash LIDAR |
US8648702B2 (en) | 2010-08-20 | 2014-02-11 | Denso International America, Inc. | Combined time-of-flight and image sensor systems |
US20120236379A1 (en) | 2010-08-23 | 2012-09-20 | Lighttime, Llc | Ladar using mems scanning |
CN102023082A (en) | 2010-09-29 | 2011-04-20 | 中国科学院上海光学精密机械研究所 | Device and method for detecting dynamic properties of two-dimensional directional mirror |
CA2815393C (en) | 2010-10-22 | 2019-02-19 | Neptec Design Group Ltd. | Wide angle bistatic scanning optical ranging sensor |
EP2469301A1 (en) | 2010-12-23 | 2012-06-27 | André Borowski | Methods and devices for generating a representation of a 3D scene at very high speed |
JP2012202776A (en) | 2011-03-24 | 2012-10-22 | Toyota Central R&D Labs Inc | Distance measuring device |
JP5532003B2 (en) | 2011-03-31 | 2014-06-25 | 株式会社デンソーウェーブ | Laser radar equipment |
AT511310B1 (en) | 2011-04-07 | 2013-05-15 | Riegl Laser Measurement Sys | PROCESS FOR REMOTE MEASUREMENT |
US9482529B2 (en) | 2011-04-15 | 2016-11-01 | Faro Technologies, Inc. | Three-dimensional coordinate scanner and method of operation |
JP5702230B2 (en) | 2011-06-01 | 2015-04-15 | 日本信号株式会社 | Optical scanning device |
JP5978565B2 (en) | 2011-06-30 | 2016-08-24 | 富士通株式会社 | Monitoring device and monitoring method |
US9069061B1 (en) | 2011-07-19 | 2015-06-30 | Ball Aerospace & Technologies Corp. | LIDAR with analog memory |
US10027952B2 (en) | 2011-08-04 | 2018-07-17 | Trx Systems, Inc. | Mapping and tracking system with features in three-dimensional space |
US9916538B2 (en) | 2012-09-15 | 2018-03-13 | Z Advanced Computing, Inc. | Method and system for feature detection |
US9083148B2 (en) | 2012-01-11 | 2015-07-14 | Kongsberg Seatex As | Real time equivalent model, device and apparatus for control of master oscillator power amplifier laser |
JP2013156139A (en) | 2012-01-30 | 2013-08-15 | Ihi Corp | Moving object detecting apparatus and moving object detecting method |
JP5985661B2 (en) | 2012-02-15 | 2016-09-06 | アップル インコーポレイテッド | Scan depth engine |
US20130268862A1 (en) | 2012-03-07 | 2013-10-10 | Willow Garage Inc. | Point cloud data hierarchy |
US9915726B2 (en) | 2012-03-16 | 2018-03-13 | Continental Advanced Lidar Solutions Us, Llc | Personal LADAR sensor |
US9335220B2 (en) | 2012-03-22 | 2016-05-10 | Apple Inc. | Calibration of time-of-flight measurement using stray reflections |
US9315178B1 (en) | 2012-04-13 | 2016-04-19 | Google Inc. | Model checking for autonomous vehicles |
US9176240B2 (en) | 2012-07-18 | 2015-11-03 | Kabushiki Kaisha Toshiba | Apparatus and method for channel count reduction in solid-state-based positron emission tomography |
US9052721B1 (en) | 2012-08-28 | 2015-06-09 | Google Inc. | Method for correcting alignment of vehicle mounted laser scans with an elevation map for obstacle detection |
EP2708914A1 (en) | 2012-09-18 | 2014-03-19 | Sick Ag | Optoelectronic sensor and method for recording a depth map |
EP2708913A1 (en) | 2012-09-18 | 2014-03-19 | Sick Ag | Opto-electronic sensor and object detection method |
US9383753B1 (en) | 2012-09-26 | 2016-07-05 | Google Inc. | Wide-view LIDAR with areas of special attention |
JP6236758B2 (en) | 2012-10-09 | 2017-11-29 | 株式会社豊田中央研究所 | Optical distance measuring device |
CN103033806A (en) | 2012-12-27 | 2013-04-10 | 山东理工大学 | Method and device for airborne laser scanning flying height change real-time compensation |
US9285477B1 (en) | 2013-01-25 | 2016-03-15 | Apple Inc. | 3D depth point cloud from timing flight of 2D scanned light beam pulses |
US20140211194A1 (en) | 2013-01-27 | 2014-07-31 | Quanergy Systems, Inc. | Cost-effective lidar sensor for multi-signal detection, weak signal detection and signal disambiguation and method of using same |
US9019592B2 (en) | 2013-02-01 | 2015-04-28 | Institut National D'optique | System and method for emitting optical pulses in view of a variable external trigger signal |
ES2512965B2 (en) | 2013-02-13 | 2015-11-24 | Universitat Politècnica De Catalunya | System and method to scan a surface and computer program that implements the method |
US9128190B1 (en) | 2013-03-06 | 2015-09-08 | Google Inc. | Light steering device with an array of oscillating reflective slats |
US9110169B2 (en) | 2013-03-08 | 2015-08-18 | Advanced Scientific Concepts, Inc. | LADAR enabled impact mitigation system |
CA2907452A1 (en) | 2013-03-15 | 2014-09-18 | Peloton Technology Inc. | Vehicle platooning systems and methods |
US9251587B2 (en) | 2013-04-05 | 2016-02-02 | Caterpillar Inc. | Motion estimation utilizing range detection-enhanced visual odometry |
US9674413B1 (en) | 2013-04-17 | 2017-06-06 | Rockwell Collins, Inc. | Vision system and method having improved performance and solar mitigation |
US9085354B1 (en) | 2013-04-23 | 2015-07-21 | Google Inc. | Systems and methods for vertical takeoff and/or landing |
US9069080B2 (en) | 2013-05-24 | 2015-06-30 | Advanced Scientific Concepts, Inc. | Automotive auxiliary ladar sensor |
US20150006616A1 (en) | 2013-06-28 | 2015-01-01 | Broadcom Corporation | Host Offloading Architecture |
CN103324945B (en) | 2013-07-08 | 2016-12-28 | 南京大学 | A kind of forest point cloud classifications method based on pattern recognition |
US9261881B1 (en) | 2013-08-01 | 2016-02-16 | Google Inc. | Filtering noisy/high-intensity regions in laser-based lane marker detection |
US9280899B2 (en) | 2013-08-06 | 2016-03-08 | GM Global Technology Operations LLC | Dynamic safety shields for situation assessment and decision making in collision avoidance tasks |
US9435653B2 (en) | 2013-09-17 | 2016-09-06 | GM Global Technology Operations LLC | Sensor-aided vehicle positioning system |
KR101551667B1 (en) | 2013-11-27 | 2015-09-09 | 현대모비스(주) | LIDAR Sensor System |
WO2015100483A1 (en) | 2014-01-06 | 2015-07-09 | Geodigital International Inc. | Determining portions of a roadway model requiring updating |
US9305219B2 (en) | 2014-01-23 | 2016-04-05 | Mitsubishi Electric Research Laboratories, Inc. | Method for estimating free space using a camera system |
US9658322B2 (en) | 2014-03-13 | 2017-05-23 | Garmin Switzerland Gmbh | LIDAR optical scanner system |
US9626566B2 (en) | 2014-03-19 | 2017-04-18 | Neurala, Inc. | Methods and apparatus for autonomous robotic control |
CN103885065B (en) | 2014-03-21 | 2016-04-13 | 中国科学院上海光学精密机械研究所 | Dual wavelength dipulse without fuzzy laser ranging system |
CN106165399B (en) | 2014-04-07 | 2019-08-20 | 三星电子株式会社 | The imaging sensor of high-resolution, high frame per second, low-power |
US9360554B2 (en) * | 2014-04-11 | 2016-06-07 | Facet Technology Corp. | Methods and apparatus for object detection and identification in a multiple detector lidar array |
FR3020616B1 (en) | 2014-04-30 | 2017-10-27 | Renault Sas | DEVICE FOR SIGNALING OBJECTS TO A NAVIGATION MODULE OF A VEHICLE EQUIPPED WITH SAID DEVICE |
US20150334371A1 (en) | 2014-05-19 | 2015-11-19 | Rockwell Automation Technologies, Inc. | Optical safety monitoring with selective pixel array analysis |
FR3021938B1 (en) | 2014-06-04 | 2016-05-27 | Commissariat Energie Atomique | PARKING ASSIST DEVICE AND VEHICLE EQUIPPED WITH SUCH A DEVICE. |
JP6459239B2 (en) | 2014-06-20 | 2019-01-30 | 船井電機株式会社 | Laser range finder |
CN106461785B (en) | 2014-06-27 | 2018-01-19 | Hrl实验室有限责任公司 | Limited scanning laser radar |
US9575341B2 (en) | 2014-06-28 | 2017-02-21 | Intel Corporation | Solid state LIDAR circuit with waveguides tunable to separate phase offsets |
US10068373B2 (en) | 2014-07-01 | 2018-09-04 | Samsung Electronics Co., Ltd. | Electronic device for providing map information |
US9575184B2 (en) | 2014-07-03 | 2017-02-21 | Continental Advanced Lidar Solutions Us, Inc. | LADAR sensor for a dense environment |
US9377533B2 (en) | 2014-08-11 | 2016-06-28 | Gerard Dirk Smits | Three-dimensional triangulation and time-of-flight based tracking systems and methods |
US10386464B2 (en) | 2014-08-15 | 2019-08-20 | Aeye, Inc. | Ladar point cloud compression |
US9278689B1 (en) | 2014-11-13 | 2016-03-08 | Toyota Motor Engineering & Manufacturing North America, Inc. | Autonomous vehicle detection of and response to emergency vehicles |
US9638801B2 (en) | 2014-11-24 | 2017-05-02 | Mitsubishi Electric Research Laboratories, Inc | Depth sensing using optical pulses and fixed coded aperture |
WO2016096647A1 (en) | 2014-12-19 | 2016-06-23 | Koninklijke Philips N.V. | Laser sensor module |
US9692937B1 (en) | 2014-12-22 | 2017-06-27 | Accusoft Corporation | Methods and apparatus for identifying lines in an image and using identified lines |
US9581967B1 (en) | 2015-01-07 | 2017-02-28 | Lockheed Martin Coherent Technologies, Inc. | Motion compensated multi-wavelength digital holography |
US10274600B2 (en) | 2015-03-27 | 2019-04-30 | Sensors Unlimited, Inc. | Laser designator pulse detection |
US9698182B2 (en) | 2015-03-30 | 2017-07-04 | Hamilton Sundstrand Corporation | Digital imaging and pulse detection array |
US9880263B2 (en) | 2015-04-06 | 2018-01-30 | Waymo Llc | Long range steerable LIDAR system |
US10289940B2 (en) | 2015-06-26 | 2019-05-14 | Here Global B.V. | Method and apparatus for providing classification of quality characteristics of images |
US10527726B2 (en) | 2015-07-02 | 2020-01-07 | Texas Instruments Incorporated | Methods and apparatus for LIDAR with DMD |
US10282591B2 (en) | 2015-08-24 | 2019-05-07 | Qualcomm Incorporated | Systems and methods for depth map sampling |
CN108369274B (en) | 2015-11-05 | 2022-09-13 | 路明亮有限责任公司 | Lidar system with improved scan speed for high resolution depth mapping |
US10520602B2 (en) | 2015-11-30 | 2019-12-31 | Luminar Technologies, Inc. | Pulsed laser for lidar system |
JP6854828B2 (en) | 2015-12-18 | 2021-04-07 | ジェラルド ディルク スミッツ | Real-time position detection of an object |
EP3196593B1 (en) | 2016-01-21 | 2018-01-17 | Safran Vectronix AG | Stabilized observation with lrf function |
US10627490B2 (en) | 2016-01-31 | 2020-04-21 | Velodyne Lidar, Inc. | Multiple pulse, LIDAR based 3-D imaging |
EP3206045B1 (en) | 2016-02-15 | 2021-03-31 | Airborne Hydrography AB | Single-photon lidar scanner |
WO2017143217A1 (en) | 2016-02-18 | 2017-08-24 | Aeye, Inc. | Adaptive ladar receiver |
US10042159B2 (en) | 2016-02-18 | 2018-08-07 | Aeye, Inc. | Ladar transmitter with optical field splitter/inverter |
US20170242102A1 (en) | 2016-02-18 | 2017-08-24 | Aeye, Inc. | Ladar System with Dichroic Photodetector for Tracking the Targeting of a Scanning Ladar Transmitter |
US9933513B2 (en) | 2016-02-18 | 2018-04-03 | Aeye, Inc. | Method and apparatus for an adaptive ladar receiver |
US10908262B2 (en) | 2016-02-18 | 2021-02-02 | Aeye, Inc. | Ladar transmitter with optical field splitter/inverter for improved gaze on scan area portions |
WO2017143183A1 (en) | 2016-02-18 | 2017-08-24 | Aeye, Inc. | Ladar transmitter with improved gaze on scan area portions |
US10134280B1 (en) | 2016-02-23 | 2018-11-20 | Taehyun You | Vehicular notifications |
JP7149256B2 (en) | 2016-03-19 | 2022-10-06 | ベロダイン ライダー ユーエスエー,インコーポレイテッド | Integrated illumination and detection for LIDAR-based 3D imaging |
CN109073376A (en) | 2016-03-21 | 2018-12-21 | Velodyne Lidar, Inc. | LIDAR-based 3-D imaging with varying illumination intensity |
WO2017165316A1 (en) | 2016-03-21 | 2017-09-28 | Velodyne Lidar, Inc. | Lidar based 3-d imaging with varying pulse repetition |
EP3433634B8 (en) | 2016-03-21 | 2021-07-21 | Velodyne Lidar USA, Inc. | Lidar based 3-d imaging with varying illumination field density |
US10473784B2 (en) | 2016-05-24 | 2019-11-12 | Veoneer Us, Inc. | Direct detection LiDAR system and method with step frequency modulation (FM) pulse-burst envelope modulation transmission and quadrature demodulation |
US10797460B2 (en) | 2016-07-13 | 2020-10-06 | Waymo Llc | Systems and methods for laser power interlocking |
US10859700B2 (en) | 2016-07-21 | 2020-12-08 | Z-Senz Llc | Systems, devices, and/or methods for encoded time of flight light detection and ranging |
US20210109197A1 (en) | 2016-08-29 | 2021-04-15 | James Thomas O'Keeffe | Lidar with guard laser beam and adaptive high-intensity laser beam |
WO2018044958A1 (en) | 2016-08-29 | 2018-03-08 | Okeeffe James | Laser range finder with smart safety-conscious laser intensity |
US9928432B1 (en) | 2016-09-14 | 2018-03-27 | Nauto Global Limited | Systems and methods for near-crash determination |
IL247944B (en) | 2016-09-20 | 2018-03-29 | Grauer Yoav | Pulsed light illuminator having a configurable setup |
CN110286388B (en) | 2016-09-20 | 2020-11-03 | Innoviz Technologies Ltd. | Laser radar system, method of detecting an object using the same, and medium |
US11675078B2 (en) | 2016-10-06 | 2023-06-13 | GM Global Technology Operations LLC | LiDAR system |
EP3532863A4 (en) | 2016-10-31 | 2020-06-03 | Gerard Dirk Smits | Fast scanning lidar with dynamic voxel probing |
CN110268283B (en) | 2016-11-16 | 2023-09-01 | 应诺维思科技有限公司 | Lidar system and method |
US10942257B2 (en) | 2016-12-31 | 2021-03-09 | Innovusion Ireland Limited | 2D scanning high precision LiDAR using combination of rotating concave mirror and beam steering devices |
EP3566078A1 (en) | 2017-01-03 | 2019-11-13 | Innoviz Technologies Ltd. | Lidar systems and methods for detection and classification of objects |
US10634790B2 (en) | 2017-01-13 | 2020-04-28 | The United States Of America As Represented By The Secretary Of The Navy | Digital passband processing of wideband modulated optical signals for enhanced imaging |
EP3351168B1 (en) | 2017-01-20 | 2019-12-04 | Nokia Technologies Oy | Arterial pulse measurement |
WO2018148153A1 (en) | 2017-02-08 | 2018-08-16 | Giant Leap Holdings, Llc | Light steering and focusing by dielectrophoresis |
EP3583442B1 (en) | 2017-02-17 | 2023-10-25 | Aeye, Inc. | Method and system for ladar pulse deconfliction |
EP3589990A4 (en) | 2017-03-01 | 2021-01-20 | Ouster, Inc. | Accurate photo detector measurements for lidar |
US11105925B2 (en) * | 2017-03-01 | 2021-08-31 | Ouster, Inc. | Accurate photo detector measurements for LIDAR |
CN110537103A (en) | 2017-03-16 | 2019-12-03 | Fastree3D SA | Method and apparatus for optimizing transmitter and detector use in active remote sensing applications |
US10365351B2 (en) | 2017-03-17 | 2019-07-30 | Waymo Llc | Variable beam spacing, timing, and power for vehicle sensors |
US10007001B1 (en) | 2017-03-28 | 2018-06-26 | Luminar Technologies, Inc. | Active short-wave infrared four-dimensional camera |
US10267899B2 (en) | 2017-03-28 | 2019-04-23 | Luminar Technologies, Inc. | Pulse timing based on angle of view |
US10209359B2 (en) | 2017-03-28 | 2019-02-19 | Luminar Technologies, Inc. | Adaptive pulse rate in a lidar system |
US10545240B2 (en) | 2017-03-28 | 2020-01-28 | Luminar Technologies, Inc. | LIDAR transmitter and detector system using pulse encoding to reduce range ambiguity |
US20180284234A1 (en) | 2017-03-29 | 2018-10-04 | Luminar Technologies, Inc. | Foveated Imaging in a Lidar System |
US10241198B2 (en) | 2017-03-30 | 2019-03-26 | Luminar Technologies, Inc. | Lidar receiver calibration |
US10556585B1 (en) | 2017-04-13 | 2020-02-11 | Panosense Inc. | Surface normal determination for LIDAR range samples by detecting probe pulse stretching |
US10677897B2 (en) | 2017-04-14 | 2020-06-09 | Luminar Technologies, Inc. | Combining lidar and camera data |
US20180306927A1 (en) | 2017-04-21 | 2018-10-25 | GM Global Technology Operations LLC | Method and apparatus for pulse repetition sequence with high processing gain |
CN111751836B (en) | 2017-07-05 | 2021-05-14 | 奥斯特公司 | Solid-state optical system |
KR102596018B1 (en) | 2017-07-07 | 2023-10-30 | Aeye, Inc. | Ladar transmitter with reimager |
US20190018119A1 (en) * | 2017-07-13 | 2019-01-17 | Apple Inc. | Early-late pulse counting for light emitting depth sensors |
US10627492B2 (en) * | 2017-08-01 | 2020-04-21 | Waymo Llc | Use of extended detection periods for range aliasing detection and mitigation in a light detection and ranging (LIDAR) system |
EP3451021A1 (en) | 2017-08-30 | 2019-03-06 | Hexagon Technology Center GmbH | Measuring device with scan functionality and adjustable receiving areas of the receiver |
EP3652555B1 (en) | 2017-08-31 | 2024-03-06 | SZ DJI Technology Co., Ltd. | A solid state light detection and ranging (lidar) system and method for improving solid state light detection and ranging (lidar) resolution |
JP2019052981A (en) | 2017-09-15 | 2019-04-04 | 株式会社東芝 | Distance measuring device |
US10641900B2 (en) | 2017-09-15 | 2020-05-05 | Aeye, Inc. | Low latency intra-frame motion estimation based on clusters of ladar pulses |
US11460550B2 (en) | 2017-09-19 | 2022-10-04 | Veoneer Us, Llc | Direct detection LiDAR system and method with synthetic doppler processing |
US11567175B2 (en) | 2017-09-29 | 2023-01-31 | Infineon Technologies Ag | Apparatuses and method for light detection and ranging |
US11415675B2 (en) | 2017-10-09 | 2022-08-16 | Luminar, Llc | Lidar system with adjustable pulse period |
US10509127B2 (en) | 2017-12-13 | 2019-12-17 | Luminar Technologies, Inc. | Controlling vehicle sensors based on road configuration |
US11340339B2 (en) * | 2017-12-22 | 2022-05-24 | Waymo Llc | Systems and methods for adaptive range coverage using LIDAR |
US11137498B2 (en) | 2018-02-12 | 2021-10-05 | Microvision, Inc. | Scanning rangefinding system with variable field of view |
WO2019199775A1 (en) | 2018-04-09 | 2019-10-17 | Innovusion Ireland Limited | Lidar systems and methods for exercising precise control of a fiber laser |
US10158038B1 (en) * | 2018-05-17 | 2018-12-18 | Hi Llc | Fast-gated photodetector architectures comprising dual voltage sources with a switch configuration |
US10670719B2 (en) | 2018-05-31 | 2020-06-02 | Beijing Voyager Technology Co., Ltd. | Light detection system having multiple lens-receiver units |
US10466342B1 (en) | 2018-09-30 | 2019-11-05 | Hesai Photonics Technology Co., Ltd. | Adaptive coding for lidar systems |
US10627516B2 (en) * | 2018-07-19 | 2020-04-21 | Luminar Technologies, Inc. | Adjustable pulse characteristics for ground detection in lidar systems |
US20200041618A1 (en) | 2018-08-02 | 2020-02-06 | Infineon Technologies Ag | Matrix Light Source and Detector Device for Solid-State Lidar |
US11408983B2 (en) * | 2018-10-01 | 2022-08-09 | Infineon Technologies Ag | Lidar 2D receiver array architecture |
US10788659B2 (en) | 2018-10-24 | 2020-09-29 | Infineon Technologies AG | Monitoring of MEMS mirror properties |
US11327177B2 (en) | 2018-10-25 | 2022-05-10 | Aeye, Inc. | Adaptive control of ladar shot energy using spatial index of prior ladar return data |
US11598862B2 (en) * | 2018-11-20 | 2023-03-07 | The University Court Of The University Of Edinburgh | Methods and systems for spatially distributed strobing comprising a control circuit to provide a strobe signal to activate a first subset of the detector pixels of a detector array while leaving a second subset of the detector pixels inactive |
US11313968B2 (en) | 2018-12-14 | 2022-04-26 | Texas Instruments Incorporated | Interference signal rejection in LIDAR systems |
US11709231B2 (en) | 2018-12-21 | 2023-07-25 | Infineon Technologies Ag | Real time gating and signal routing in laser and detector arrays for LIDAR application |
US11175390B2 (en) | 2018-12-24 | 2021-11-16 | Beijing Voyager Technology Co., Ltd. | Real-time estimation of DC bias and noise power of light detection and ranging (LiDAR) |
US11585906B2 (en) | 2018-12-26 | 2023-02-21 | Ouster, Inc. | Solid-state electronic scanning laser array with high-side and low-side switches for increased channels |
US20210111533A1 (en) | 2019-03-01 | 2021-04-15 | Gan Systems Inc. | Fast pulse, high current laser drivers |
US11041944B2 (en) | 2019-03-01 | 2021-06-22 | Beijing Voyager Technology Co., Ltd. | Constant false alarm rate detection in pulsed LiDAR systems |
EP3963355A1 (en) | 2019-03-08 | 2022-03-09 | OSRAM GmbH | Component for a lidar sensor system, lidar sensor system, lidar sensor device, method for a lidar sensor system and method for a lidar sensor device |
US11513223B2 (en) | 2019-04-24 | 2022-11-29 | Aeye, Inc. | Ladar system and method with cross-receiver |
US20200341144A1 (en) | 2019-04-26 | 2020-10-29 | Ouster, Inc. | Independent per-pixel integration registers for lidar measurements |
DE102019208386A1 (en) | 2019-06-07 | 2020-12-10 | Infineon Technologies Ag | Laser scanning control system and method |
DE102019209112A1 (en) | 2019-06-24 | 2020-12-24 | Infineon Technologies Ag | A lidar system that selectively changes a size of the field of view |
US11750779B2 (en) * | 2019-08-20 | 2023-09-05 | Ricoh Company, Ltd. | Light deflector, optical scanning system, image projection device, image forming apparatus, and lidar device |
US11287530B2 (en) | 2019-09-05 | 2022-03-29 | ThorDrive Co., Ltd | Data processing system and method for fusion of multiple heterogeneous sensors |
WO2021046547A1 (en) | 2019-09-06 | 2021-03-11 | Ouster, Inc. | Processing of lidar images |
US11604259B2 (en) | 2019-10-14 | 2023-03-14 | Infineon Technologies Ag | Scanning LIDAR receiver with a silicon photomultiplier detector |
WO2021159226A1 (en) | 2020-02-10 | 2021-08-19 | Hesai Technology Co., Ltd. | Adaptive emitter and receiver for lidar systems |
US11467394B2 (en) | 2020-02-28 | 2022-10-11 | Infineon Technologies Ag | Capacitive charge based self-sensing and position observer for electrostatic MEMS mirrors |
US11327449B2 (en) | 2020-05-29 | 2022-05-10 | Mitsubishi Electric Research Laboratories, Inc. | Nonlinear optimization for stochastic predictive vehicle control |
US11621539B2 (en) * | 2020-06-02 | 2023-04-04 | Analog Devices, Inc. | Multi-phase laser driver techniques |
CA3125618C (en) | 2020-07-21 | 2023-05-23 | Leddartech Inc. | Beam-steering device particularly for lidar systems |
CA3230192A1 (en) | 2020-07-21 | 2021-10-10 | Leddartech Inc. | Systems and methods for wide-angle lidar using non-uniform magnification optics |
US11119219B1 (en) | 2020-08-10 | 2021-09-14 | Luminar, Llc | Lidar system with input optical element |
WO2022144588A1 (en) | 2020-12-30 | 2022-07-07 | Innoviz Technologies Ltd. | Lidar system with automatic pitch and yaw correction |
US20220308218A1 (en) | 2021-03-26 | 2022-09-29 | Aeye, Inc. | Hyper Temporal Lidar with Shot-Specific Detection Control |
US11460556B1 (en) | 2021-03-26 | 2022-10-04 | Aeye, Inc. | Hyper temporal lidar with shot scheduling for variable amplitude scan mirror |
US11486977B2 (en) | 2021-03-26 | 2022-11-01 | Aeye, Inc. | Hyper temporal lidar with pulse burst scheduling |
US11604264B2 (en) | 2021-03-26 | 2023-03-14 | Aeye, Inc. | Switchable multi-lens Lidar receiver |
US20220308187A1 (en) | 2021-03-26 | 2022-09-29 | Aeye, Inc. | Hyper Temporal Lidar Using Multiple Matched Filters to Determine Target Retro-Reflectivity |
- 2021-09-30 US US17/490,204 patent/US20220308218A1/en active Pending
- 2021-09-30 US US17/490,280 patent/US11686846B2/en active Active
- 2021-09-30 US US17/490,231 patent/US11686845B2/en active Active
- 2021-09-30 US US17/490,248 patent/US20220308220A1/en active Pending
- 2021-09-30 US US17/490,265 patent/US11480680B2/en active Active
- 2021-09-30 US US17/490,273 patent/US20220308222A1/en active Pending
- 2021-09-30 US US17/490,260 patent/US20220308186A1/en active Pending
- 2021-09-30 US US17/490,289 patent/US11619740B2/en active Active
- 2021-09-30 US US17/490,213 patent/US20220308214A1/en active Pending
- 2021-09-30 US US17/490,194 patent/US20220308184A1/en active Pending
- 2021-09-30 US US17/490,221 patent/US20220308219A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20220308214A1 (en) | 2022-09-29 |
US20220317255A1 (en) | 2022-10-06 |
US20220308185A1 (en) | 2022-09-29 |
US20220308221A1 (en) | 2022-09-29 |
US11686845B2 (en) | 2023-06-27 |
US20220308218A1 (en) | 2022-09-29 |
US11686846B2 (en) | 2023-06-27 |
US11619740B2 (en) | 2023-04-04 |
US20220308216A1 (en) | 2022-09-29 |
US20220308219A1 (en) | 2022-09-29 |
US20220308220A1 (en) | 2022-09-29 |
US11480680B2 (en) | 2022-10-25 |
US20220308184A1 (en) | 2022-09-29 |
US20220308222A1 (en) | 2022-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11474213B1 (en) | Hyper temporal lidar with dynamic laser control using marker shots | |
US11686846B2 (en) | Bistatic lidar architecture for vehicle deployments | |
US11467263B1 (en) | Hyper temporal lidar with controllable variable laser seed energy | |
US11500093B2 (en) | Hyper temporal lidar using multiple matched filters to determine target obliquity | |
US20230251355A1 (en) | Hyper Temporal Lidar with Dynamic Shot Scheduling Using a Laser Energy Model | |
WO2022261295A1 (en) | Hyper temporal lidar with controllable pulse bursts | |
WO2022240714A1 (en) | Hyper temporal lidar with controllable detection intervals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
AS | Assignment |
Owner name: AEYE, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:REDDY, NAVEEN;STEINHARDT, ALLAN;DUSSAN, LUIS;AND OTHERS;SIGNING DATES FROM 20220906 TO 20220914;REEL/FRAME:061107/0030 |