WO2019054917A1 - Time-of-flight scheimpflug lidar - Google Patents

Time-of-flight Scheimpflug LIDAR

Info

Publication number
WO2019054917A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
pixel
light
particle
sensor
Prior art date
Application number
PCT/SE2018/050908
Other languages
French (fr)
Inventor
Can XU
Mikkel Brydegaard
Original Assignee
Neolund Ab
Priority date
Filing date
Publication date
Application filed by Neolund Ab filed Critical Neolund Ab
Publication of WO2019054917A1 publication Critical patent/WO2019054917A1/en

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S17/003Bistatic lidar systems; Multistatic lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/95Lidar systems specially adapted for specific applications for meteorological use
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the present disclosure relates to laser projection systems and more particularly to Scheimpflug LIDAR systems and methods.
  • Short pulse (e.g. ~5 ns), high peak-power (e.g. ~100 MW) lasers are commonly used as optical pulse sources in systems and devices for light detection and ranging (LIDAR) used to obtain a distributed echo from a clear bulk transmission media (e.g. gas or liquid bulks).
  • An advantage of short pulse, high peak power lasers over continuous output lasers is that the energy output can be compressed into a very short time period, resulting in very high energy per unit time and providing a good signal to ambient light background ratio and potentially good signal to noise ratio.
  • Short pulse, high power LIDAR is used for various applications including:
  • Scheimpflug LIDAR is a LIDAR system described in "Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system", Mei L. et al. Using Scheimpflug LIDAR it is possible to image a laser beam over a large range of distances simultaneously onto a sensor; the distance is thus mapped to the spatial pixel distribution of the sensor. Unlike conventional imaging, Scheimpflug LIDAR can provide very low exposure times and a very large depth of focus simultaneously. This allows a large range of applications, in particular the ones mentioned above.
  • Figure 1 shows an example of the apparatus of a Scheimpflug LIDAR.
  • Hardware processor 10 drives light source 20 to emit light along a first axis 30.
  • Light detection arrangement comprises a lens arrangement 50 having a lens plane 60 and being configured to direct the light scattered by the scattering particle to a light sensor 70.
  • Light sensor has a pixel column aligned to an image plane 80 and configured to output a sensor signal 75 to the hardware processor.
  • the first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved.
  • a displaced image plane 82, a front focal plane 62 of the lens arrangement, and a relationship between the light source and the light detection arrangement fulfil the Hinge rule at intersection 63.
  • Hardware processor 10 processes the sensor signal to determine a pixel signal for one or more pixels of the light sensor. From the plurality of pixel signals, the hardware processor determines a distance of the scattering particle from the detection arrangement.
  • One aspect provides a method for detecting a distance of a scattering particle comprising: emitting modulated light along a first axis according to an emitter signal, generating a sensor signal using a detection arrangement comprising: a lens arrangement having a lens plane and being configured to direct modulated light scattered by the scattering particle on to a light sensor, the light sensor having at least one pixel column aligned to an image plane and configured to output a sensor signal, wherein the first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved, processing the sensor signal to determine a pixel signal for one or more pixels of the light sensor, determining, from the one or more pixel signals, a particle pixel signal indicative of a particle at a point along the first axis, determining a distance of the scattering particle from the detection arrangement in dependence on at least the particle pixel signal.
  • Another aspect provides a system comprising: a light source configured to emit modulated light along at least a first axis according to an emitter signal, a light detection arrangement comprising: a lens arrangement having a lens plane and being configured to direct modulated light scattered by the scattering particle to a light sensor, the light sensor having at least one pixel column aligned to an image plane and configured to output a sensor signal, wherein the first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved, and a hardware processor configured to: process the sensor signal to determine a pixel signal for one or more pixels of the light sensor, determine, from the plurality of pixel signals, a particle pixel signal indicative of a particle at a point along the first axis, and determine a distance of the scattering particle from the detection arrangement in dependence on at least the particle pixel signal.
  • Figure 1 shows an apparatus of a Scheimpflug LIDAR as known in the prior art.
  • Figures 2a and 2b show embodiments of an apparatus of a time-of-flight Scheimpflug LIDAR.
  • Figure 3 shows a flow diagram for a calibration method.
  • Figure 4a shows a light detection arrangement 40 according to an embodiment.
  • Figure 4b shows a light sensor 70 according to an embodiment.
  • Figure 4c shows a light detection arrangement 40 according to an alternative embodiment.
  • Figure 5 shows a signal timing diagram for components of the sensor signal.
  • Figure 6 shows an object distance from the light source 20/light detection arrangement 40.
  • Figure 7 shows a signal timing diagram for received scattered light.
  • Figure 8 shows a signal timing diagram for the received scattered light with respect to range.
  • Figure 9 shows a signal timing diagram for received scattered light when the travel time is half of the emission signal period.
  • Figure 10 shows an application of linear regression to predict an optimally anti-correlated modulation frequency.
  • Figure 2 shows an apparatus comprising a similar arrangement of features to that shown in figure 1.
  • hardware processor 10 further comprises the timing circuitry, controller hardware, and/or processor instructions to form a time-of-flight controller 12.
  • Figure 3 shows a flow chart describing a method for performing automatic calibration of a time-of-flight (ToF) Scheimpflug LIDAR system.
  • time-of-flight controller 12 controls light source 20 to emit modulated light along a first axis 30 via emitter signal 25.
  • the light source is a coherent light source, such as a semiconductor laser diode or a quantum cascade laser.
  • Another suitable light source may comprise an incoherent light source, such as a super luminescent diode.
  • the light source is a continuous wave (CW) light source that produces a continuous output beam, as opposed to a pulsed diode, q-switched, gain- switched or mode locked laser, which have a pulsed output beam.
  • time-of-flight controller 12 generates emitter signal 25 which is used to control the modulation of the emitted light, either directly by driving the light source with emitter signal 25, or indirectly, by providing a control signal to light source 20.
  • Time-of-flight controller 12 may be a PC, mobile device or other general computing device. Alternatively, time-of-flight controller 12 may be a purpose-built hardware component.
  • the emitter signal 25 is generated in dependence on a clock signal. In another embodiment, emitter signal 25 is generated in dependence on sensor signal 75.
  • Sensor signal 75 may comprise a synchronisation signal generated by light sensor 70.
  • the synchronisation signal is generated by the light sensor 70 each time sampling of the pixel values has been completed or when sampling is started, i.e. the synchronisation signal can be generated at the start of a sampling frame, the end of the sampling frame, or some intermediate time in between.
  • Sensor signal 75 is then transmitted to time-of-flight controller 12, where the synchronisation signal of sensor signal 75 is used to control the modulation of the emitted light, i.e. if the light source should be on or off.
  • the line rate controls the cycle frequency at which the light sensor 70 records images from the sensing pixels.
  • the line rate may be controlled by hardware processor 10.
  • the light sensor 70 may generate a synchronisation signal which is sent to time-of- flight controller 12 as part of sensor signal 75.
  • the synchronisation signal may be generated at any time between the start and end of the cycle.
  • the light source may be a continuous wave (CW) light source that produces a continuous output beam.
  • the present system does not require a high-quality modulation waveform and complex phase detection hardware or scheme to perform range analysis.
  • a number of different light modulation schemes may be suitable for the present application. These include:
  • Saw tooth - A saw tooth embodiment may combine the advantages of the above modulation schemes.
  • the emitted modulated light has a waveform comprising a pulse length of between 0.1 μs and 100ms, and a duty cycle of between 1% and 99% (preferably 50%, 33%, or 25%).
  • the signal period may be between 200ns and 200ms.
  • the emitted modulated light has a waveform comprising a pulse length of between 1 μs and 10ms, and a duty cycle of between 1% and 99%.
  • the signal period may be between 2 μs and 20ms.
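As an illustrative sketch of the waveforms described above (not part of the claims; the function name is hypothetical), the following Python fragment generates a square-wave emitter signal from a pulse length and duty cycle, with the signal period following from the two:

```python
import numpy as np

def emitter_waveform(pulse_length_s, duty_cycle, n_periods=3, samples_per_period=1000):
    """Generate a square-wave emitter signal.

    pulse_length_s : on-time of each pulse in seconds
    duty_cycle     : fraction of the period the source is on (0..1)
    Returns (t, y) where y is 1.0 during pulse periods and 0.0 between pulses.
    """
    period = pulse_length_s / duty_cycle  # signal period follows from the duty cycle
    t = np.linspace(0.0, n_periods * period,
                    n_periods * samples_per_period, endpoint=False)
    y = ((t % period) < pulse_length_s).astype(float)
    return t, y

# Example: 1 us pulses at 50% duty cycle give a 2 us signal period,
# which sits inside the 2 us to 20 ms range given above.
t, y = emitter_waveform(1e-6, 0.5)
```

A sawtooth or other modulation scheme would replace the comparison against `pulse_length_s` with the corresponding waveform function.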
  • lens 21 is a dioptric converging lens to provide a substantially collimated light beam along axis 30.
  • Figure 2b shows an alternative embodiment to the embodiment shown in figure 2a, wherein light source 20 comprises a catoptric mirror lens 68. Light is emitted from emitter 23 onto mirror 68. Mirror 68 reflects and substantially collimates the light to travel along axis 30.
  • An advantage of light source 20 comprising a catoptric mirror lens is the reduction of chromatic aberration that might be a side effect of a dioptrics system, such as that shown in figure 2a.
  • Light travelling along axis 30 may interact with a solid object, liquid, or gas.
  • the embodiments described here are intended to cover all of these scenarios. Consequently, the term 'scattering particle' is used to clarify that it is a particle of the solid object, liquid, or gas that causes the scattering of one or more photons travelling along axis 30 back towards light detection arrangement 40.
  • a solid object is likely to provide the best signal for the purposes of calibration.
  • Light detection arrangement 40 may comprise a number of configurations. Two configurations are provided below but it is understood that other techniques for capturing and focusing light are known to the skilled man and may be used with the present techniques.
  • Figure 4a shows a light detection arrangement 40 that comprises a lens arrangement 50 having a lens plane 60 and being configured to direct the light scattered by the scattering particle to a light sensor 70.
  • Lens 50 may be a refractive lens formed from any suitable light transmissive material known to the skilled man, such as glass or plastic.
  • light sensor 70 comprises a plurality of individual light sensing pixels 410 formed in one or more pixel columns 420 according to an image plane 80.
  • Light sensor 70 may comprise semiconductor charge-coupled devices (CCD), active pixel sensors in complementary metal-oxide-semiconductor (CMOS) or N-type metal-oxide-semiconductor (NMOS, Live MOS) technologies, Multi-Anode Photo Multiplying Tubes, Avalanche Photo Diode arrays, InGaAs, InSb, or HgCdTe arrays, or other light sensor technologies known to the skilled man.
  • a sensor signal 75 is generated by light sensor 70 and transmitted to hardware processor 10.
  • light sensor 70 is tilted at Brewster's angle.
  • the angle of tilt is measured between the axis of the incident light and the normal of a surface of light sensor 70. This advantageously allows optimised sensor sensitivity to p- polarized light.
  • the amount of light allowed to pass through a protective window layer and other optical layers above light sensor 70 before reaching an active area of the light sensor 70 is maximised and the ghosting resulting from light reflected between the sensor, the optical layers and the surrounding medium is minimized.
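For reference, Brewster's angle follows directly from the refractive indices of the two media via tan(θ_B) = n2/n1. The sketch below is illustrative only; the indices for air and a glass window layer are assumed values, not taken from the patent:

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster's angle in degrees: the incidence angle at which p-polarised
    light is fully transmitted at the interface between media with refractive
    indices n1 (incident side) and n2 (transmitting side)."""
    return math.degrees(math.atan2(n2, n1))

# Air (n ~ 1.0) to a glass window (n ~ 1.5): roughly 56.3 degrees
angle = brewster_angle_deg(1.0, 1.5)
```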
  • image plane 80 is angled relative to the lens plane 60 such that it intersects axis 30 at the same point that lens plane 60 intersects axis 30. This is achieved by mounting light sensor 70 in the light detection arrangement 40 using an angled mount.
  • a displaced image plane 82 and a front focal plane 62 of the lens arrangement both intersect at intersection point 63 on axis 30. Displaced image plane 82 is parallel to image plane 80 but displaced in order to intersect with the optical centre of lens 50.
  • Front focal plane 62 is displaced along the same vector as image plane 80 is from displaced image plane 82.
  • Figure 4c shows an alternative light detection arrangement 40 to that of figure 4a.
  • the light detection arrangement 40 of figure 4c comprises a catoptric mirror lens.
  • Mirror lens 42 directs light received by light detection arrangement 40 onto light sensor 70 mounted within light detection arrangement 40.
  • An advantage of the light detection arrangement 40 of figure 4c is the reduction of chromatic aberration that might be a side effect of a dioptric system, such as that described in figure 4a.
  • Another advantage in using a catoptric mirror lens is in the poor availability of large refractive lenses for exotic wavelengths.
  • light sensor 70 generates a sensor signal 75 in dependence on the light received onto light sensor 70.
  • light sensor 70 comprises a single column of light sensing pixels, wherein the output of each sensing pixel is transmitted to hardware processor 10 as part of sensor signal 75.
  • light sensor 70 comprises multiple columns of light sensing pixels 410.
  • the output of each sensing pixel is transmitted to hardware processor 10 as part of sensor signal 75.
  • an average value for the sensing pixels of each row is transmitted to hardware processor 10 as part of sensor signal 75. This averaging improves the signal to noise ratio in environments with large amounts of noise.
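The row-averaging scheme for a multi-column sensor can be sketched in a few lines; the array shape and noise figures here are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Toy frame from a sensor with 6 pixel rows and 4 pixel columns; averaging
# across the columns of each row before transmission improves the
# signal-to-noise ratio of the per-row value sent to the hardware processor.
rng = np.random.default_rng(0)
frame = rng.normal(loc=10.0, scale=2.0, size=(6, 4))

row_averages = frame.mean(axis=1)  # one averaged value per pixel row
```

For uncorrelated noise, averaging N columns reduces the noise standard deviation by a factor of sqrt(N).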
  • light sensor 70 is configured to operate according to a line rate.
  • the line rate controls the cycle frequency at which the light sensor 70 samples the values of the sensing pixels.
  • the line rate may be controlled by hardware processor 10. Once the light sensor 70 has recorded the pixel values for a cycle, the light sensor 70 may generate a synchronisation signal which is sent to time-of-flight controller 12 as part of sensor signal 75.
  • In step 340, hardware processor 10 processes sensor signal 75 to determine a pixel signal for one or more pixels 410 of the light sensor 70.
  • Hardware processor 10 and/or time-of-flight controller 12 may comprise signal processing components known to the skilled man for performing such a determining step, including digital and analogue components such as CPUs, data registers, ADCs, DACs, etc.
  • pixel signals are digital signals indicative of light received at the pixel over time, and are stored in a memory.
  • pixel signals are analogue signals.
  • a pixel signal for a pixel comprises two components: a pixel background signal 520 and pixel emission signal 510.
  • the pixel background signal corresponds to the light received by the pixel over periods 540 between pulses of the emitted modulated light 500.
  • the pixel emission signal corresponds to the light received by the pixel during pulse periods 530 of the modulated light of the emitted modulated light.
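A minimal sketch of splitting a pixel's samples into the pixel emission signal 510 and pixel background signal 520, assuming the pulse on/off timing is known to the processor (the function name is hypothetical):

```python
import numpy as np

def split_pixel_signal(samples, emitter_on):
    """Split a pixel's time series into its two components.

    samples    : array of pixel intensity samples over time
    emitter_on : boolean array, True during pulse periods 530 of the
                 modulated light, False during the periods 540 between pulses
    Returns (pixel_emission_signal, pixel_background_signal).
    """
    samples = np.asarray(samples, dtype=float)
    emitter_on = np.asarray(emitter_on, dtype=bool)
    return samples[emitter_on], samples[~emitter_on]

# Toy example: background level 1.0, scattered return raises the pulse
# periods to 5.0 (arbitrary intensity units)
on = np.array([True, True, False, False] * 5)
sig = np.where(on, 5.0, 1.0)
emission, background = split_pixel_signal(sig, on)
```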
  • step 350 where a particle is present along first axis 30, hardware processor 10 determines the presence of the particle.
  • pixel emission signal 510 and pixel background signal 520 for a pixel are processed together to determine the presence of the particle.
  • a differential between peak values of the pixel emission signal 510 and the pixel background signal 520 may be determined. If the differential exceeds a threshold value, the particle is determined to be present at the pixel's mapped pixel range.
  • pixel emission signal 510 may be normalised by pixel background signal 520 before comparison with a threshold signal to determine the presence of a particle.
  • Some embodiments provide determining the presence of a particle in dependence on just pixel background signal 520 or pixel emission signal 510 alone. This may be advantageous where pixel emission signal 510 and pixel background signal 520 are substantially matched.
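The differential-threshold and normalisation schemes described above might be sketched as follows; the threshold value and helper names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def particle_present(emission, background, threshold):
    """Detect a particle at a pixel by the differential between the peak of
    the pixel emission signal and the peak of the pixel background signal.
    Returns True when the differential exceeds the threshold."""
    differential = float(np.max(emission) - np.max(background))
    return differential > threshold

def normalised_emission(emission, background, eps=1e-12):
    """Alternative scheme: normalise the emission signal by the mean
    background level before comparison with a threshold signal."""
    return np.asarray(emission, dtype=float) / (float(np.mean(background)) + eps)

# A pixel seeing scattered light: emission peaks well above background,
# so this pixel would be selected as the target pixel for step 360.
present = particle_present([5.0, 5.2, 4.9], [1.0, 1.1, 0.9], threshold=2.0)
```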
  • the pixel (known as the target pixel) is used for performing the analysis described in step 360.
  • time-of-flight controller 12 determines a distance of the particle using the pixel signal in dependence on a time-of-flight of the emitted modulated light.
  • In figure 6, light travels from light source 20 to object or particle 90 and back to light detection arrangement 40.
  • the distance between the apparatus and the object is D.
  • the time taken to travel distance D may be calculated if distance D is known. Where distance D is not known, it may be determined from the travel time of a light pulse. In the present embodiment, such an analysis is performed by comparing the timing of the emitted signal to the timing of light scattered back from object 90 for the pixel identified in step 350.
  • Figure 7 is similar to figure 5 but shows the time delay between the emitted pulses 550 and the detected pulses 560 and 570 detected as part of the pixel emission signal 510 and pixel background signal 520 respectively.
  • Figure 8 shows the position of pulses 560 and 570 with respect to the range of object 90.
  • Line 590 shows the range of the object for the pixel signal shown in figure 7.
  • Line 580 shows the range of the object for the pixel signal shown in figure 9.
  • the distance of object 90 is such that the light travel time is a quarter of the signal period of 550.
  • it is possible to determine the distance D of the object, i.e. D = A · (1/f) + B, where
  • A is a system specific constant (e.g. set at factory), and
  • f is the frequency of the emitted modulated light signal
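As a worked example of the linear form D = A · (1/f) + B: in the quarter-period case, the round-trip time 2D/c equals a quarter of the signal period 1/f, so A = c/8 and B = 0 under that assumption. The sketch below evaluates this (function name hypothetical; in practice A and B are system-specific calibration constants):

```python
C = 299_792_458.0  # speed of light in m/s

def distance_from_frequency(f_hz, A=None, B=0.0):
    """Evaluate D = A * (1/f) + B.

    When the pulses match at the quarter-period condition (round-trip time
    2D/c equal to a quarter of the signal period 1/f), A = c/8 and B = 0,
    so D = c / (8 f)."""
    if A is None:
        A = C / 8.0
    return A / f_hz + B

# A 1 MHz modulation matched at the quarter-period condition: ~37.47 m
d = distance_from_frequency(1e6)
```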
  • Figure 9 shows a scenario where pulses 570 are substantially matched with pulses 560, allowing the distance of object 90 to be determined.
  • Pulses 570 and pulses 560 are determined to be 'matched' when a specific relationship exists between them.
  • pulses 570 and pulses 560 may be matched when they are substantially anti-correlated (i.e. that the pulses are approximately half a period out of phase) with respect to one another.
  • the pulses may also be matched when the pulses are substantially in-phase, or in another predefined relationship, e.g., a quarter of a period out of phase.
  • either distance D or the modulation frequency of the emitted light used may result in pulses 570 and pulses 560 being unmatched.
  • the modulation frequency of the emitted light is scanned across a range of values to determine a modulation frequency that provides optimally matched pulses 570 and pulses 560 in order to accurately calculate distance D.
  • An optimal anti-correlation between pulses 570 and pulses 560 may be determined by measuring a difference between an integral of signal 510 and 520. At a modulation frequency where the difference between signal 520 and 510 is smallest, the best anti-correlation between pulses 570 and pulses 560 is achieved. Therefore, at that modulation frequency, a determination of the distance of object 90 is likely to be most accurate.
  • the period of the waveform of the emitted modulated light is varied across a range of 100ns to 100ms.
  • One embodiment provides a method of scanning the modulation frequency of the light source 20 and/or the line rate of light sensor 70 to find a minimum difference between an integral of signal 510 and 520.
  • a range of modulation frequencies are tested and the modulation frequency having the best anti-correlation between pulses 570 and pulses 560 is used to determine distance D.
  • the system is configured to begin a scan using a low frequency emission modulation (e.g. 100 Hz) and increase the scan frequency in increments of e.g. 100 Hz up to the maximum line rate of sensor 70.
  • the scan is performed in the reverse direction, starting at the maximum line rate of sensor 70 and working downwards.
  • a type of 'divide and conquer search' is employed to determine an optimal emission modulation frequency. In this embodiment, this works by recursively breaking down the frequency search space into multiple parts and testing each one. Where one part shows a better anti-correlation than the others, the part is then divided into multiple parts and the search continues within that part.
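A sketch of such a 'divide and conquer' frequency search, using the difference between the integrals of signals 510 and 520 as the anti-correlation cost. The measurement callback is simulated and all names are hypothetical; a real system would drive the light source and sensor at each candidate frequency:

```python
import numpy as np

def anticorrelation_cost(f_hz, measure):
    """Cost at one modulation frequency: the absolute difference between the
    integrals of the pixel emission signal (510) and the pixel background
    signal (520). `measure(f)` must return that pair of integrals."""
    i_emission, i_background = measure(f_hz)
    return abs(i_emission - i_background)

def scan_frequency(measure, f_lo, f_hi, n_parts=8, depth=4):
    """Split the band into parts, keep the part with the best
    anti-correlation, and recurse into it."""
    for _ in range(depth):
        candidates = np.linspace(f_lo, f_hi, n_parts)
        costs = [anticorrelation_cost(f, measure) for f in candidates]
        best = int(np.argmin(costs))
        # narrow the search band around the best candidate
        f_lo = candidates[max(best - 1, 0)]
        f_hi = candidates[min(best + 1, n_parts - 1)]
    return float(candidates[best])

# Simulated system whose signals are best anti-correlated at 12.5 kHz
def fake_measure(f):
    return (1.0 + abs(f - 12500.0) / 1e5, 1.0)

f_best = scan_frequency(fake_measure, 100.0, 50000.0)
```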
  • scanning is performed by varying the line rate of sensor 70 from hardware processor 10.
  • light source modulation is controlled in dependence on the synchronisation signal generated by the sensor 70. Therefore, scanning by varying the line rate effectively scans the light source modulation frequency in turn.
  • the line rate of sensor 70 and the light source modulation frequency are controlled directly by hardware processor 10.
  • the frequency of the modulation is held fixed and a time delay between the periods 530 and 540 is introduced.
  • By varying this time delay within a finite range, e.g. 1 ns to 100 ms, it is possible to either directly determine when the pixel background signal and the pixel emission signal are substantially matched, or through means of regression or fitting (described below) to extrapolate and determine when the pixel background signal and the pixel emission signal are substantially matched. These conditions can then be used to compute the distance.
  • the period 540 may be extended, producing an equivalent effect.
  • linear regression is used to predict an optimally anti-correlated modulation frequency.
  • the normalized intensity of signal 510 is compared with the modulation frequency of the light source 20 (and the line rate of the light sensor 70).
  • the normalized intensity of signal 520 may be compared with the modulation frequency of the light source 20.
  • Other methods of normalisation known to the skilled man may be applied. This process is performed with a variety of modulation frequencies varied according to the methods described above.
  • Line 1000 is fitted to the data points generated from the plurality of comparisons, and may be extrapolated to estimate a crossing with the axis where the normalised intensity is zero. The line rate value of the crossing may then be used to determine D directly. In another embodiment, the line rate value of the crossing may then be used as a test modulation frequency to confirm the accuracy of the extrapolation.
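The extrapolation of line 1000 to its zero crossing can be sketched with an ordinary least-squares fit; the toy data below stand in for measured normalised intensities and are an illustrative assumption:

```python
import numpy as np

def extrapolate_zero_crossing(frequencies, normalised_intensities):
    """Fit a straight line (as line 1000 in figure 10) to normalised
    intensity versus modulation frequency, and extrapolate to the frequency
    at which the normalised intensity crosses zero."""
    slope, intercept = np.polyfit(frequencies, normalised_intensities, 1)
    return -intercept / slope

# Toy data: intensity falls linearly and would cross zero at 20 kHz; that
# crossing is then used to determine D, or tested as a candidate frequency.
f = np.array([5000.0, 8000.0, 11000.0, 14000.0])
y = 1.0 - f / 20000.0
f_zero = extrapolate_zero_crossing(f, y)
```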
  • In step 370, hardware processor 10 performs a calibration step to match the pixel to the determined distance.
  • an associated distance distribution map is used to determine a calibrated focal range for each of the pixels 410 or individual rows of pixels.
  • the associated distance distribution map stores each pixel and corresponding focal range in memory.
  • the focal range for each of the pixels is described functionally and the function variables are stored in memory.
  • this distance is used to calibrate the focal ranges associated with the pixels.
  • a memory store is used to record a calibrated focal range for each of the pixels 410.
  • a new calibrated focal range is calculated for the pixel upon which the scattered light from object 90 is focussed (i.e. the target pixel) in dependence on the distance of object 90.
  • Focal range for each of the other pixels may then be recalculated according to the calibrated focal range for the target pixel.
  • Where the focal range for each of the pixels is described functionally and the function variables are stored in memory, a new set of function variables is determined in dependence on the target pixel and the distance of object 90.
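One simple way to sketch the recalibration step is to rescale the stored focal-range map so that the target pixel agrees with the measured distance. The single-scale-factor model below is an assumption for illustration, not the patent's functional description, and the names are hypothetical:

```python
import numpy as np

def recalibrate_focal_ranges(focal_ranges, target_pixel, measured_distance):
    """Rescale the stored per-pixel focal-range map so that the target pixel
    (on which scattered light from object 90 is focused) maps to the distance
    determined by the time-of-flight measurement.

    focal_ranges      : array with the current focal range for every pixel
    target_pixel      : index of the target pixel
    measured_distance : calibrated distance for the target pixel
    Returns a new map with every pixel rescaled by the same factor."""
    focal_ranges = np.asarray(focal_ranges, dtype=float)
    scale = measured_distance / focal_ranges[target_pixel]
    return focal_ranges * scale

# Factory map corrected against a measured 110 m return on pixel 2
old_map = np.array([50.0, 75.0, 100.0, 150.0])
new_map = recalibrate_focal_ranges(old_map, target_pixel=2, measured_distance=110.0)
```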

Abstract

A system and method is provided for detecting a distance of a scattering particle. Modulated light is emitted along a first axis according to an emitter signal. A lens arrangement having a lens plane is configured to direct modulated light scattered by the scattering particle on to a light sensor, the light sensor having at least one pixel column aligned to an image plane and configured to output a sensor signal. The first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved. The sensor signal is processed to determine a pixel signal for one or more pixels of the light sensor, a particle pixel signal indicative of a particle at a point along the first axis is determined from the one or more pixel signals, and a distance of the scattering particle from the detection arrangement is determined in dependence on at least the particle pixel signal.

Description

TIME-OF-FLIGHT SCHEIMPFLUG LIDAR
Technical Field
The present disclosure relates to laser projection systems and more particularly to Scheimpflug LIDAR systems and methods.
Background Art
Short pulse (e.g. ~5 ns), high peak-power (e.g. ~100 MW) lasers are commonly used as optical pulse sources in systems and devices for light detection and ranging (LIDAR) used to obtain a distributed echo from a clear bulk transmission media (e.g. gas or liquid bulks). An advantage of short pulse, high peak power lasers over continuous output lasers is that the energy output can be compressed into a very short time period, resulting in very high energy per unit time and providing a good signal to ambient light background ratio and potentially good signal to noise ratio.
Short pulse, high power LIDAR is used for various applications including:
1) Remote aerosol monitoring, where analysis of an atmospheric backscattering echo of a short pulse, high power LIDAR provides a superior solution to analysis using electrochemical sensors or other optical sensors.
2) Combustion and process monitoring.
3) Emission / pollution monitoring, e.g., over a factory or other natural emission sources.
Problems with short pulse, high power LIDAR systems are that they are big, expensive, complex, and difficult to transport. Where a short pulse, high power LIDAR is required to be portable, it would typically need to be mounted to a vehicle or container. Furthermore, such systems require significant power to operate. Another problem is that certain applications of short pulse, high power LIDAR have specific requirements that may make the application more difficult, e.g. monitoring of combustion processes in combustion chambers requires two separate optical access windows, making the testing equipment significantly more complex. For aerosol monitoring, the long measurement times and averaging times necessary when using standard short pulse, high power LIDAR systems provide limited temporal performance.
Scheimpflug LIDAR is a LIDAR system described in "Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system", Mei L. et al. Using Scheimpflug LIDAR it is possible to image a laser beam over a large range of distances simultaneously onto a sensor; the distance is thus mapped to the spatial pixel distribution of the sensor. Unlike conventional imaging, Scheimpflug LIDAR can provide very low exposure times and a very large depth of focus simultaneously. This allows a large range of applications, in particular the ones mentioned above. Figure 1 shows an example of the apparatus of a Scheimpflug LIDAR. Hardware processor 10 drives light source 20 to emit light along a first axis 30. The light travels along axis 30 until being scattered back towards light detection arrangement 40 by a particle 90. The light detection arrangement comprises a lens arrangement 50 having a lens plane 60 and being configured to direct the light scattered by the scattering particle to a light sensor 70. The light sensor has a pixel column aligned to an image plane 80 and configured to output a sensor signal 75 to the hardware processor. The first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved. Furthermore, a displaced image plane 82, a front focal plane 62 of the lens arrangement, and a relationship between the light source and the light detection arrangement fulfil the Hinge rule at intersection 63. Hardware processor 10 processes the sensor signal to determine a pixel signal for one or more pixels of the light sensor. From the plurality of pixel signals, the hardware processor determines a distance of the scattering particle from the detection arrangement.
One disadvantage with standard Scheimpflug LIDAR systems is that, as with standard short pulse, high power LIDAR systems, range calibration needs to be done in production and periodically monitored and re-calibrated in the field. For Scheimpflug LIDAR systems, one way to calibrate the system in the field is through the use of target objects at known distances. Calibration comprises matching pixels of the array to a known set of ranges. Such calibration is inconvenient and requires prepared targets for calibration.
Therefore, what is needed is a way of eliminating the production and pre-use calibration requirements, i.e. a system capable of automatic calibration.
Summary
It is an objective of the invention to at least partly overcome one or more of the above-identified limitations of the prior art.
One or more of these objectives, as well as further objectives that may appear from the description below, are at least partly achieved by means of a method for data processing, a computer readable medium, devices for data processing, and a sensing apparatus according to the independent claims, embodiments thereof being defined by the dependent claims. One aspect provides a method for detecting a distance of a scattering particle comprising: emitting modulated light along a first axis according to an emitter signal, generating a sensor signal using a detection arrangement comprising: a lens arrangement having a lens plane and being configured to direct modulated light scattered by the scattering particle on to a light sensor, the light sensor having at least one pixel column aligned to an image plane and configured to output a sensor signal, wherein the first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved, processing the sensor signal to determine a pixel signal for one or more pixels of the light sensor, determining, from the one or more pixel signals, a particle pixel signal indicative of a particle at a point along the first axis, determining a distance of the scattering particle from the detection arrangement in dependence on at least the particle pixel signal.
Another aspect provides a system comprising: a light source configured to emit modulated light along at least a first axis according to an emitter signal, a light detection arrangement comprising: a lens arrangement having a lens plane and being configured to direct modulated light scattered by the scattering particle to a light sensor, the light sensor having at least one pixel column aligned to an image plane and configured to output a sensor signal, wherein the first axis, the lens plane, and the image plane intersect such that a Scheimpflug condition is achieved, a hardware processor configured to: process the sensor signal to determine a pixel signal for one or more pixels of the light sensor, determine, from the plurality of pixel signals, a particle pixel signal indicative of a particle at a point along the first axis, determine a distance of the scattering particle from the detection arrangement in dependence on at least the particle pixel signal.
Brief Description of Drawings
These and other aspects, features and advantages of which embodiments of the invention are capable of will be apparent and elucidated from the following description of embodiments of the present invention, reference being made to the accompanying drawings:
Figure 1 shows an apparatus of a Scheimpflug LIDAR as known in the prior art.
Figures 2a and 2b show embodiments of an apparatus of a time-of-flight Scheimpflug LIDAR.
Figure 3 shows a flow diagram for a calibration method.
Figure 4a shows a light detection arrangement 40 according to an embodiment.
Figure 4b shows a light sensor 70 according to an embodiment.
Figure 4c shows a light detection arrangement 40 according to an alternative embodiment.
Figure 5 shows a signal timing diagram for components of the sensor signal.
Figure 6 shows an object distance from the light source 20/light detection arrangement 40.
Figure 7 shows a signal timing diagram for received scattered light.
Figure 8 shows a signal timing diagram for the received scattered light with respect to range.
Figure 9 shows a signal timing diagram for received scattered light when the travel time is half of the emission signal period.
Figure 10 shows an application of linear regression to predict an optimally anti-correlated modulation frequency.
Detailed Description of Embodiments
Specific examples of the invention will now be described with reference to the accompanying drawings. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. The terminology used in the detailed description of the embodiments illustrated in the accompanying drawings is not intended to be limiting of the invention. In the drawings, like numbers refer to like elements.
In the first embodiment, a Scheimpflug LIDAR system using a time-of-flight (ToF) feature is described. Figure 2 shows an apparatus comprising a similar arrangement of features to that shown in figure 1. In the first embodiment, hardware processor 10 further comprises the timing circuitry, controller hardware, and/or processor instructions to form a time-of-flight controller 12.
Figure 3 shows a flow chart describing a method for performing automatic calibration of a time-of-flight (ToF) Scheimpflug LIDAR system.
In step 310, time-of-flight controller 12 controls light source 20 to emit modulated light along a first axis 30 via emitter signal 25. In one embodiment, the light source is a coherent light source, such as a semiconductor laser diode or a quantum cascade laser. Another suitable light source may comprise an incoherent light source, such as a super luminescent diode. Preferably, the light source is a continuous wave (CW) light source that produces a continuous output beam, as opposed to a pulsed diode, q-switched, gain-switched or mode-locked laser, which have a pulsed output beam. In an embodiment, time-of-flight controller 12 generates emitter signal 25 which is used to control the modulation of the emitted light, either directly, by driving the light source with emitter signal 25, or indirectly, by providing a control signal to light source 20. Time-of-flight controller 12 may be a PC, mobile device or other general computing device. Alternatively, time-of-flight controller 12 may be a purpose-built hardware component.
In one embodiment, the emitter signal 25 is generated in dependence on a clock signal. In another embodiment, emitter signal 25 is generated in dependence on sensor signal 75. Sensor signal 75 may comprise a synchronisation signal generated by light sensor 70. In this embodiment, the synchronisation signal is generated by the light sensor 70 each time sampling of the pixel values has been completed or when sampling is started, i.e. the synchronisation signal can be generated at the start of a sampling frame, the end of the sampling frame, or some intermediate time in between. Sensor signal 75 is then transmitted to time-of-flight controller 12, where the synchronisation signal of sensor signal 75 is used to control the modulation of the emitted light, i.e. if the light source should be on or off.
The line rate controls the cycle frequency at which the light sensor 70 records images from the sensing pixels. The line rate may be controlled by hardware processor 10. Once the light sensor 70 has recorded the pixel values for a cycle or starts a new recording cycle, the light sensor 70 may generate a synchronisation signal which is sent to time-of-flight controller 12 as part of sensor signal 75. Alternatively, the synchronisation signal may be generated at any time between the start and end of the cycle.
As described above, the light source may be a continuous wave (CW) light source that produces a continuous output beam. Unlike conventional modulated systems for range measurement requiring e.g. a well-defined modulation sine waveform, and which typically rely on measuring the phase shift between the emitted and received light, the present system does not require a high-quality modulation waveform and complex phase detection hardware or scheme to perform range analysis.
A number of different light modulation schemes may be suitable for the present application. These include:
1) Square wave - a TTL-standard square wave.
2) Triangular wave - as the signal varies more gradually over time and in a relatively linear way, more information may be encoded into the signal. Furthermore, laser emission wavelength shifting can be achieved.
3) Saw tooth - A saw tooth embodiment may combine the advantages of the above modulation schemes.
In an embodiment, the emitted modulated light has a waveform comprising a pulse length of between 0.1 μs and 100 ms, and a duty cycle of between 1% and 99% (preferably 50%, 33%, or 25%). According to such an embodiment, the signal period may be between 200 ns and 200 ms. In a preferred embodiment, the emitted modulated light has a waveform comprising a pulse length of between 1 μs and 10 ms, and a duty cycle of between 1% and 99%. According to such an embodiment, the signal period may be between 2 μs and 20 ms.
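The relationship between pulse length, duty cycle, and signal period implied by the ranges above can be illustrated with a short sketch (Python; the function name and example values are illustrative only and not part of the disclosure):

```python
def signal_period(pulse_length_s, duty_cycle):
    """Return the modulation period implied by a pulse length and duty cycle.

    duty_cycle is the fraction of the period during which the source is on,
    so period = pulse_length / duty_cycle.
    """
    if not 0.0 < duty_cycle < 1.0:
        raise ValueError("duty cycle must be between 0 and 1 (exclusive)")
    return pulse_length_s / duty_cycle

# A 1 us pulse at the preferred 50% duty cycle gives a 2 us period,
# consistent with the preferred 2 us to 20 ms period range stated above.
period = signal_period(1e-6, 0.50)
```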
Light emitted by emitter 23 of light source 20 passes through lens 21 before travelling along axis 30. In a preferred embodiment, lens 21 is a dioptric converging lens to provide a substantially collimated light beam along axis 30. Figure 2b shows an alternative embodiment to the embodiment shown in figure 2a, wherein light source 20 comprises a catoptric mirror lens 68. Light is emitted from emitter 23 onto mirror 68. Mirror 68 reflects and substantially collimates the light to travel along axis 30. An advantage of light source 20 comprising a catoptric mirror lens is the reduction of chromatic aberration that might be a side effect of a dioptric system, such as that shown in figure 2a.
Light travelling along axis 30 may interact with a solid object, liquid, or gas. The embodiments described here are intended to cover all of these scenarios. Consequently, the term 'scattering particle' is used to clarify that it is a particle of the solid object, liquid, or gas that causes the scattering of one or more photons travelling along axis 30 back towards light detection arrangement 40. For the purposes of the calibration techniques described below, it is understood that a solid object is likely to provide the best signal for the purposes of calibration.
In step 320, light scattered by the scattering particle 90 is received by light detection arrangement 40 and directed onto the light sensor 70. Light detection arrangement 40 may comprise a number of configurations. Two configurations are provided below but it is understood that other techniques for capturing and focusing light are known to the skilled man and may be used with the present techniques. Figure 4a shows a light detection arrangement 40 that comprises a lens arrangement 50 having a lens plane 60 and being configured to direct the light scattered by the scattering particle to a light sensor 70. Lens 50 may be a refractive lens formed from any suitable light transmissive material known to the skilled man, such as glass or plastic. As shown in figure 4b, light sensor 70 comprises a plurality of individual light sensing pixels 410 formed in one or more pixel columns 420 according to an image plane 80. Light sensor 70 may comprise semiconductor charge-coupled devices (CCD), active pixel sensors in complementary metal-oxide-semiconductor (CMOS), N-type metal-oxide-semiconductor (NMOS, Live MOS) technologies, Multi-Anode Photo Multiplying Tube, Avalanche Photo Diode arrays, InGaAs, InSb, HgCdTe arrays or other light sensor technologies known to the skilled man. A sensor signal 75 is generated by light sensor 70 and transmitted to hardware processor 10.
In one embodiment, light sensor 70 is tilted at the Brewster's angle. In this embodiment, the angle of tilt is measured between the axis of the incident light and the normal of a surface of light sensor 70. This advantageously allows optimised sensor sensitivity to p- polarized light. By tilting light sensor 70 at the Brewster's angle, the amount of light allowed to pass through a protective window layer and other optical layers above light sensor 70 before reaching an active area of the light sensor 70 is maximised and the ghosting resulting from light reflected between the sensor, the optical layers and the surrounding medium is minimized.
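The Brewster's angle referred to above follows from the refractive indices of the two media at the sensor surface, θ_B = arctan(n2/n1). A minimal sketch, assuming air above a glass protective window (both index values are illustrative, not taken from the disclosure):

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster's angle (degrees) for light incident from medium n1 onto n2.

    At this incidence angle, reflection of p-polarized light vanishes,
    which is why tilting the sensor to it maximises the p-polarized light
    reaching the active area and minimises ghosting reflections.
    """
    return math.degrees(math.atan2(n2, n1))

# Air (n ~ 1.0) onto a glass protective window (n ~ 1.5): about 56.3 degrees.
angle = brewster_angle_deg(1.0, 1.5)
```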
In order to use the Scheimpflug principle to allow light detection arrangement 40 to focus the received light onto light sensor 70 for a large range of distances along axis 30, several conditions are met by the above embodiment. According to a first condition, image plane 80 is angled relative to the lens plane 60 such that it intersects axis 30 at the same point that lens plane 60 intersects axis 30. This is achieved by mounting light sensor 70 in the light detection arrangement 40 using an angled mount. According to a second condition known as the Hinge rule, a displaced image plane 82 and a front focal plane 62 of the lens arrangement both intersect at intersection point 63 on axis 30. Displaced image plane 82 is parallel to image plane 80 but displaced in order to intersect with the optical centre of lens 50. Front focal plane 62 is displaced along the same vector as image plane 80 is from displaced image plane 82.
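The first condition above — that the image plane, the lens plane, and axis 30 meet at a common point — can be checked numerically by treating the planes as lines in the plane containing axis 30. The sketch below uses a purely hypothetical geometry; the chosen tilts and coordinates are illustrative, not values from the disclosure:

```python
def cross(a, b):
    # z-component of the 2D cross product
    return a[0] * b[1] - a[1] * b[0]

def intersect(p1, d1, p2, d2):
    """Intersection of lines p1 + t*d1 and p2 + s*d2, each given as a
    point and a direction vector in the plane containing axis 30."""
    denom = cross(d1, d2)
    if abs(denom) < 1e-12:
        raise ValueError("lines are parallel")
    r = (p2[0] - p1[0], p2[1] - p1[1])
    t = cross(r, d2) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# First axis 30 taken as the line y = 0.
axis_p, axis_d = (0.0, 0.0), (1.0, 0.0)
# Hypothetical tilts: lens plane and image plane both cross the axis
# at x = -2, which is the Scheimpflug condition described above.
lens_p, lens_d = (0.0, 1.0), (2.0, 1.0)
image_p, image_d = (0.0, 0.5), (4.0, 1.0)

a = intersect(axis_p, axis_d, lens_p, lens_d)
b = intersect(axis_p, axis_d, image_p, image_d)
scheimpflug_ok = abs(a[0] - b[0]) < 1e-9 and abs(a[1] - b[1]) < 1e-9
```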
Figure 4c shows an alternative light detection arrangement 40 to that of figure 4a. The light detection arrangement 40 of figure 4c comprises a catoptric mirror lens. Mirror lens 42 directs light received by light detection arrangement 40 onto light sensor 70 mounted within light detection arrangement 40. An advantage of the light detection arrangement 40 of figure 4c is the reduction of chromatic aberration that might be a side effect of a dioptric system, such as that described in figure 4a. Another advantage in using a catoptric mirror lens is in the poor availability of large refractive lenses for exotic wavelengths.
In step 330, light sensor 70 generates a sensor signal 75 in dependence on the light received onto light sensor 70. In one embodiment, light sensor 70 comprises a single column of light sensing pixels, wherein the output of each sensing pixel is transmitted to hardware processor 10 as part of sensor signal 75. In another embodiment shown in figure 4b, light sensor 70 comprises multiple columns of light sensing pixels 410. In one embodiment, the output of each sensing pixel is transmitted to hardware processor 10 as part of sensor signal 75. In an alternative embodiment, an average value for the sensing pixels of each row is transmitted to hardware processor 10 as part of sensor signal 75. This averaging improves the signal to noise ratio in environments with large amounts of noise. In one embodiment, light sensor 70 is configured to operate according to a line rate. The line rate controls the cycle frequency at which the light sensor 70 samples the values of the sensing pixels. The line rate may be controlled by hardware processor 10. Once the light sensor 70 has recorded the pixel values for a cycle, the light sensor 70 may generate a synchronisation signal which is sent to time-of-flight controller 12 as part of sensor signal 75.
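The row-averaging variant described above can be illustrated as follows, with a hypothetical 5-row, 4-column frame standing in for a readout of light sensor 70 (each row views one range bin, so averaging across a row improves the signal to noise ratio):

```python
import numpy as np

# Hypothetical frame: rows are range bins, columns are the pixel columns.
# Row 1 contains a strong scattering return; the rest is background noise.
frame = np.array([
    [10.0, 12.0, 11.0,  9.0],
    [50.0, 48.0, 52.0, 50.0],
    [10.0, 10.0, 12.0,  8.0],
    [11.0,  9.0, 10.0, 10.0],
    [12.0, 10.0,  9.0, 11.0],
])

# One averaged value per row is transmitted as part of sensor signal 75.
row_signal = frame.mean(axis=1)
```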
In step 340, hardware processor 10 processes sensor signal 75 to determine a pixel signal for one or more pixels 410 of the light sensor 70. Hardware processor 10 and/or time-of-flight controller 12 may comprise signal processing components known to the skilled man for performing such a determining step, including digital and analogue components such as CPUs, data registers, ADCs, DACs, etc. In one embodiment, pixel signals are digital signals indicative of light received at the pixel over time, and are stored in a memory. In another embodiment, pixel signals are analogue signals.
In an embodiment shown in figure 5, a pixel signal for a pixel comprises two components: a pixel background signal 520 and a pixel emission signal 510. The pixel background signal corresponds to the light received by the pixel over periods 540 between pulses of the emitted modulated light 500. The pixel emission signal corresponds to the light received by the pixel during pulse periods 530 of the emitted modulated light.

In step 350, where a particle is present along first axis 30, hardware processor 10 determines the presence of the particle. In an embodiment, pixel emission signal 510 and pixel background signal 520 for a pixel are processed together to determine the presence of the particle. Preferably, a differential between peak values of the pixel emission signal 510 and the pixel background signal 520 may be determined. If the differential exceeds a threshold value, the particle is determined to be present at the pixel's mapped pixel range. Alternatively, pixel emission signal 510 may be normalised by pixel background signal 520 before comparison with a threshold signal to determine the presence of a particle.
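The preferred differential test of step 350 can be sketched as below; all peak values and the threshold are hypothetical:

```python
def particle_present(emission_peak, background_peak, threshold):
    """Flag a particle at a pixel when the peak of the pixel emission
    signal exceeds the peak of the pixel background signal by more than
    a threshold (the differential test described above)."""
    return (emission_peak - background_peak) > threshold

# Hypothetical per-pixel peak values: only pixel 2 shows a clear excess
# of emission-period light over background-period light.
emission_peaks = [101.0, 99.5, 180.0, 102.0]
background_peaks = [100.0, 100.0, 100.0, 100.0]
flags = [particle_present(e, b, threshold=20.0)
         for e, b in zip(emission_peaks, background_peaks)]
```

Pixel 2 would then serve as the target pixel for the analysis of step 360.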
Some embodiments provide determining the presence of a particle in dependence on just pixel background signal 520 or pixel emission signal 510 alone. This may be advantageous where pixel emission signal 510 and pixel background signal 520 are substantially matched.
Once a particle is determined to be present at the pixel's mapped pixel range, the pixel (known as the target pixel) is used for performing the analysis described in step 360.
In step 360, time-of-flight controller 12 determines a distance of the particle using the pixel signal in dependence on a time-of-flight of the emitted modulated light.
As shown in figure 6, light travels from light source 20 to object or particle 90 and back to light detection arrangement 40. The distance between the apparatus and the object is D. As the speed of light is known and essentially fixed, the time taken to travel distance D may be calculated if distance D is known. Where distance D is not known, it may be determined from the travel time of a light pulse. In the present embodiment, such an analysis is performed by comparing the timing of the emitted signal to the timing of light scattered back from object 90 for the pixel identified in step 350. Figure 7 is similar to figure 5 but shows the time delay between the emitted pulses 550 and the detected pulses 560 and 570 detected as part of the pixel emission signal 510 and pixel background signal 520 respectively.
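The underlying time-of-flight relation — the light covers distance D twice, so D = c·t/2 for a round-trip travel time t — can be expressed directly:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """Light covers 2*D over the round trip, so D = c * t / 2."""
    return C * t_seconds / 2.0

# A 1 us round-trip delay corresponds to roughly 150 m.
d = distance_from_round_trip(1e-6)
```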
Figure 8 shows the position of pulses 560 and 570 with respect to the range of object 90. Line 590 shows the range of the object for the pixel signal shown in figure 7. Line 580 shows the range of the object for the pixel signal shown in figure 9.
In figure 9, a scenario is shown where the time delay between the emission pulses 550 and the detected pulses 560 and 570 is such that pulse 560 is substantially equal to pulse 570, just offset by an amount, i.e. pulses 570 are substantially anti-correlated with emission pulses 550.
In this scenario, the distance of object 90 is such that the light travel time is a quarter of the signal period of 550. In this scenario, it is possible to determine the distance D of object 90, i.e.

D = A * c / (2 * 2 * f) + B

where A is a system specific constant (e.g. set at factory), c is the speed of light, f is the frequency of the emitted modulated light signal, and B is a system specific constant. Using the above equation (with A = 1 and B = 0), D = c / (4 * f).
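Under the matched (anti-correlated) condition described above, the distance follows D = A · c / (2 · 2 · f) + B; a numerical sketch, where A, B, and the example frequency are illustrative:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_at_anti_correlation(f_hz, a=1.0, b=0.0):
    """Distance of the object when the detected pulses are anti-correlated
    with the emitted pulses: D = A * c / (2 * 2 * f) + B."""
    return a * C / (2.0 * 2.0 * f_hz) + b

# With A = 1 and B = 0, a 1 MHz modulation frequency places the
# anti-correlated condition at roughly 75 m.
d = distance_at_anti_correlation(1e6)
```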
Figure 9 shows a scenario where pulses 570 are substantially matched with pulses 560, allowing the distance of object 90 to be determined. Pulses 570 and pulses 560 are determined to be 'matched' when a specific relationship exists between them. As described above, pulses 570 and pulses 560 may be matched when they are substantially anti-correlated (i.e. the pulses are approximately half a period out of phase) with respect to one another. The pulses may also be matched when the pulses are substantially in-phase, or in another predefined relationship, e.g., a quarter of a period out of phase.
As shown in figure 7, either distance D or the modulation frequency of the emitted light used may result in pulses 570 and pulses 560 being unmatched. In one embodiment, the modulation frequency of the emitted light is scanned across a range of values to determine a modulation frequency that provides optimally matched pulses 570 and pulses 560 in order to accurately calculate distance D. An optimal anti-correlation between pulses 570 and pulses 560 may be determined by measuring a difference between an integral of signal 510 and 520. At a modulation frequency where the difference between signal 520 and 510 is smallest, the best anti-correlation between pulses 570 and pulses 560 is achieved. Therefore, at that modulation frequency, a determination of the distance of object 90 is likely to be most accurate. Preferably, the period of the waveform of the emitted modulated light is varied across a range of 100ns to 100ms.
One embodiment provides a method of scanning the modulation frequency of the light source 20 and/or the line rate of light sensor 70 to find a minimum difference between an integral of signal 510 and 520. In one embodiment, a range of modulation frequencies is tested and the modulation frequency having the best anti-correlation between pulses 570 and pulses 560 is used to determine distance D. In one embodiment, the system is configured to begin a scan using a low frequency emission modulation (e.g. 100 Hz) and increase the scan frequency in increments of e.g. 100 Hz up to the maximum line rate of sensor 70. In an alternative, the scan is performed in the reverse direction, starting at the maximum line rate of sensor 70 and working downwards. In another embodiment, a type of 'divide and conquer' search is employed to determine an optimal emission modulation frequency. This works by recursively breaking down the frequency search space into multiple parts and testing each one. Where one part shows a better anti-correlation than the others, that part is then divided into multiple parts and the search continues within it.
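The low-to-high scan described above amounts to minimising a mismatch measure over candidate frequencies. A sketch, with a toy mismatch function standing in for the measured difference between the integrals of signals 510 and 520 (all values hypothetical):

```python
def scan_for_best_frequency(mismatch, f_lo, f_hi, step):
    """Linear scan: evaluate the mismatch at each candidate modulation
    frequency from f_lo to f_hi and keep the frequency with the minimum."""
    best_f, best_m = None, float("inf")
    f = f_lo
    while f <= f_hi:
        m = mismatch(f)
        if m < best_m:
            best_f, best_m = f, m
        f += step
    return best_f

# Toy mismatch with its minimum (best anti-correlation) at 2.5 kHz.
toy_mismatch = lambda f: abs(f - 2500.0)
f_opt = scan_for_best_frequency(toy_mismatch, 100.0, 10_000.0, 100.0)
```

The divide-and-conquer variant would replace the linear loop with a recursive narrowing of `[f_lo, f_hi]` around the best-scoring sub-interval.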
In one embodiment, scanning is performed by varying the line rate of sensor 70 from hardware processor 10. In this embodiment, light source modulation is controlled in dependence on the synchronisation signal generated by sensor 70. Therefore, scanning by varying the line rate effectively scans the light source modulation frequency in turn. In another embodiment, the line rate of sensor 70 and the light source modulation frequency are controlled directly by hardware processor 10.
In another embodiment, the frequency of the modulation is held fixed and a time delay between the periods 530 and 540 is introduced. By varying this time delay within a finite range, e.g. 1 ns to 100 ms, it is possible to either directly determine when the pixel background signal and the pixel emission signal are substantially matched, or, through means of regression or fitting (described below), to extrapolate and determine when the pixel background signal and the pixel emission signal are substantially matched. These conditions can then be used to compute the distance. Similarly, the period 540 may be extended, producing an equivalent effect.

In another embodiment shown in figure 10, linear regression is used to predict an optimally anti-correlated modulation frequency. In figure 10, the normalised intensity of signal 510, normalised with respect to signal 520, is compared with the modulation frequency of the light source 20 (and the line rate of the light sensor 70). Alternatively, the normalised intensity of signal 520, normalised with respect to signal 510, may be compared with the modulation frequency of the light source 20. Other methods of normalisation known to the skilled man may be applied. This process is performed with a variety of modulation frequencies varied according to the methods described above. Line 1000 is fitted to the data points generated from the plurality of comparisons, and may be extrapolated to estimate a crossing with the axis where the normalised intensity is zero. The line rate value of the crossing may then be used to determine D directly. In another embodiment, the line rate value of the crossing may be used as a test modulation frequency to confirm the accuracy of the extrapolation.
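The linear-regression extrapolation can be sketched with hypothetical data points; the fitted line's zero crossing plays the role of the crossing estimated from line 1000:

```python
import numpy as np

# Hypothetical measurements: normalised intensity of signal 510 against
# the modulation frequency (Hz) at which each measurement was taken.
freqs = np.array([1000.0, 2000.0, 3000.0, 4000.0])
norm_intensity = np.array([0.8, 0.6, 0.4, 0.2])

# Degree-1 least-squares fit, then extrapolate to zero intensity.
slope, intercept = np.polyfit(freqs, norm_intensity, 1)
f_crossing = -intercept / slope  # frequency where the fitted line crosses zero
```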
In step 370, hardware processor 10 performs a calibration step to match the pixel to the determined distance.
In an embodiment, an associated distance distribution map is used to determine a calibrated focal range for each of the pixels 410 or individual rows of pixels. In one embodiment, the associated distance distribution map stores each pixel and its corresponding focal range in memory. Alternatively, the focal range for each of the pixels is described functionally and the function variables are stored in memory. By enabling the system to determine the focal range of an individual pixel or row of pixels, the system can determine the distance range of an object in dependence on an analysis of which pixels receive the most scattered light. E.g. where a pixel of the light sensor 70 is receiving an amount of light above a threshold, it may be determined that a scattering object is located within the focal range associated with that pixel. However, a set of focal ranges associated with the pixels of the image sensor may become inaccurate or completely invalid where environmental, mechanical, temperature or other changes occur.
In an embodiment, where the distance of the object 90 is determined according to steps 310 to 360, this distance is used to calibrate the focal ranges associated with the pixels. Where a memory store is used to record a calibrated focal range for each of the pixels 410, a new calibrated focal range is calculated for the pixel upon which the scattered light from object 90 is focussed (i.e. the target pixel) in dependence on the distance of object 90. Focal range for each of the other pixels may then be recalculated according to the calibrated focal range for the target pixel. Where the focal range for each of the pixels is described functionally and the function variables are stored in memory, a new set of function variables are determined in dependence on the target pixel and the distance of object 90.
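One way the recalculation of the other pixels' focal ranges might look is a proportional rescale anchored at the target pixel. This particular model is an assumption made for illustration only — the disclosure leaves the recalculation rule open:

```python
def recalibrate_map(focal_ranges, target_pixel, measured_distance):
    """Calibration sketch (step 370), assuming a simple proportional model:
    all stored focal ranges are rescaled so that the target pixel's entry
    equals the distance measured by the time-of-flight analysis."""
    scale = measured_distance / focal_ranges[target_pixel]
    return {px: r * scale for px, r in focal_ranges.items()}

# Hypothetical stored map (pixel -> metres); the ToF analysis finds that
# the object imaged on pixel 2 is actually at 30 m, not the stored 25 m.
stored = {0: 5.0, 1: 12.0, 2: 25.0, 3: 60.0}
calibrated = recalibrate_map(stored, target_pixel=2, measured_distance=30.0)
```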
The present invention has been described above with reference to specific embodiments. However, other embodiments than the above described are equally possible within the scope of the invention. The different features and steps of the invention may be combined in other combinations than those described. The scope of the invention is only limited by the appended patent claims.
More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings of the present invention is/are used.

Claims
1. A method for detecting a distance of a scattering particle (90) comprising:
emitting continuous wave light modulated according to an emitter signal (25) along a first axis (30),
generating a sensor signal using a detection arrangement (40) comprising:
a lens arrangement (50) having a lens plane (60) and being configured to direct modulated light scattered by the scattering particle on to a light sensor (70),
the light sensor (70) having at least one pixel column aligned to an image plane (80) and configured to output a sensor signal (75),
wherein the first axis (30), the lens plane (60), and the image plane (80) intersect such that a Scheimpflug condition is achieved,
processing the sensor signal to determine a pixel signal for one or more pixels of the light sensor (70),
determining, from the one or more pixel signals, a particle pixel signal indicative of a particle at a point along the first axis (30),
determining a distance of the scattering particle (90) from the detection arrangement (40) in dependence on at least the particle pixel signal.
2. The method of claim 1, wherein the emitted modulated light has a waveform comprising a pulse length of between 100 ns and 100 ms and a duty cycle of between 1% and 99%.
3. The method of claims 1 or 2, wherein the particle pixel signal comprises:
a pixel background signal generated by only sampling the light sensor (70) between pulses of the emitted modulated light, and
a modulated light signal indicative of pulses of the modulated light, and wherein the distance of the scattering particle (90) is determined to be a function of the pixel background signal and the modulated light signal.
4. The method of claim 3, wherein the modulated light signal is dependent on at least one of:
the emitter signal (25),
a clock signal, and
an output signal from the light sensor (70).
5. The method of claims 3 or 4, wherein the distance of the scattering particle (90) is determined to be a function of at least one property of the waveform of the emitted modulated light when the pixel background signal matches the pixel emission signal.
6. The method of claim 5, wherein the pixel background signal matches the pixel emission signal when the pixel background signal is anti-correlated with the pixel emission signal.
7. The method of claims 2 to 6, wherein the distance of the scattering particle (90) is determined to be a function of a temporal property of the waveform of the emitted modulated light and the speed of light.
8. The method of any preceding claim, wherein a sample frequency of the light sensor (70) is matched to a period of the waveform of the emitted modulated light.
9. The method of claims 5 to 8, wherein the pixel background signal does not match the pixel emission signal, the method further comprising varying the modulation of the emitted modulated light until the pixel background signal matches the pixel emission signal.
10. The method of claim 9, wherein the method comprising varying the period of the waveform of the emitted modulated light across a range of 100 ns and 100 ms.
11. The method of claim 9, wherein the method comprising modifying the length of a time delay between pulses of the emitted modulated light by up to 100 ms.
12. The method of claim 9, 10 or 11, wherein a scan scheme is employed to identify a modulation of the emitted modulated light that results in the pixel background signal matching the pixel pulse signal, wherein the scan scheme comprises at least one of:
1) High to low or low to high modulation frequency or time delay scanning
2) Divide and conquer scanning
3) Linear regression directed scanning
13. The method of claims 4-11, wherein the step of determining the particle pixel signal comprises determining that a signal value of the pixel background signal is above a threshold value.
14. The method of any preceding claim, further comprising: determining, from the one or more pixel signals, a plurality of particle pixel signals indicative of a plurality of particles (90) at a plurality of positions along the first axis (30), and
determining a distance of the plurality of particles from the detection arrangement (40) in dependence on the plurality of particle pixel signals.
15. The method of any preceding claim, wherein the light sensor (70) comprises multiple pixel columns.
16. The method of claim 14, wherein pixel values of pixel rows are averaged to form a single pixel value.
17. The method of any preceding claim, wherein the light sensor (70) has an associated distance distribution map mapping each pixel of the light sensor (70) to a distance range, the method further comprising
updating the distance distribution map to match the pixel corresponding to the particle pixel signal to the determined distance of the scattering particle (90).
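The distance distribution map of claim 17 can be modelled as a lookup from pixel index to a distance range, updated whenever a particle's distance is determined. The re-centring rule below (keeping the range width, moving its centre to the measurement) is one plausible update, not the patented one; all names are assumptions:

```python
def update_distance_map(distance_map: dict, pixel: int, distance_m: float) -> None:
    """Re-anchor a pixel's distance range to a newly determined particle
    distance. `distance_map` maps pixel index -> (near, far) in metres;
    the range is re-centred on the measurement, keeping its width."""
    near, far = distance_map[pixel]
    half_width = 0.5 * (far - near)
    distance_map[pixel] = (distance_m - half_width, distance_m + half_width)
```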
18. The method of any preceding claim, wherein the emitted modulated light is collimated coherent light.
19. The method of any preceding claim, wherein the lens arrangement comprises at least one of:
an optical lens comprising one or more light refracting components, and a mirror lens comprising a catadioptric or catoptric optical system.
20. The method of any preceding claim, wherein a displaced image plane (82), a front focal plane (62) of the lens arrangement (50), and a relationship between the light source (20) and the light detection arrangement (40) fulfil the Hinge rule intersection (63).
21. The method of any preceding claim, wherein an angle between the incident light received by the light sensor (70) from the scattering particle (90) and a normal of a surface of the light sensor (70) is Brewster's angle.
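Brewster's angle in claim 21 follows from the refractive indices on either side of the sensor surface, theta_B = arctan(n2/n1); at this incidence the p-polarised reflection vanishes, minimising reflection losses at the sensor. A one-line helper (assumed names):

```python
import math

def brewster_angle_deg(n1: float, n2: float) -> float:
    """Brewster's angle, in degrees, for light passing from a medium of
    refractive index n1 into one of index n2: theta_B = arctan(n2 / n1)."""
    return math.degrees(math.atan2(n2, n1))
```

For example, air (n1 = 1.0) to glass (n2 = 1.5) gives about 56.3 degrees.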
22. A system (100) comprising:
a light source (20) configured to emit continuous wave light modulated according to an emitter signal along at least a first axis (30), a light detection arrangement (40) comprising:
a lens arrangement (50, 52) having a lens plane (60) and being configured to direct modulated light scattered by a scattering particle (90) to a light sensor (70),
the light sensor (70) having at least one pixel column aligned to an image plane (80) and configured to output a sensor signal (75),
wherein the first axis (30), the lens plane (60), and the image plane (80) intersect such that a Scheimpflug condition is achieved,
a hardware processor (10) configured to:
process the sensor signal to determine a pixel signal for one or more pixels of the light sensor (70),
determine, from the one or more pixel signals, a particle pixel signal indicative of a particle (90) at a point along the first axis (30), and
determine a distance of the scattering particle (90) from the detection arrangement (40) in dependence on at least the particle pixel signal.
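The processing chain of claim 22 — sensor signal to pixel signals, to a particle pixel signal, to a distance — can be sketched end to end. Here a precomputed pixel-to-distance calibration array stands in for the geometric mapping implied by the Scheimpflug configuration; all names are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def locate_particle(sensor_signal, pixel_to_distance, threshold):
    """Take a 1-D pixel-column readout, find the strongest above-threshold
    pixel (the particle pixel signal), and look up its distance via a
    precomputed pixel-to-distance calibration array."""
    peak = int(np.argmax(sensor_signal))
    if sensor_signal[peak] <= threshold:
        return None  # no particle detected along the first axis
    return float(pixel_to_distance[peak])
```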
PCT/SE2018/050908 2017-09-13 2018-09-11 Time-of-flight scheimpflug lidar WO2019054917A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SE1730251 2017-09-13
SE1730251-4 2017-09-13

Publications (1)

Publication Number Publication Date
WO2019054917A1 (en) 2019-03-21

Family

ID=65722933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2018/050908 WO2019054917A1 (en) 2017-09-13 2018-09-11 Time-of-flight scheimpflug lidar

Country Status (1)

Country Link
WO (1) WO2019054917A1 (en)


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIANG MEI ET AL.: "Atmospheric aerosol monitoring by an elastic Scheimpflug lidar system", OPTICS EXPRESS, vol. 23, no. 24, 2015, XP055569153, DOI: 10.1364/OE.23.0A1613 *
MIKKEL BRYDEGAARD ET AL.: "The Scheimpflug lidar method", PROC. SPIE 10406, LIDAR REMOTE SENSING FOR ENVIRONMENTAL MONITORING 2017, 30 August 2017 (2017-08-30), pages 1040601, XP060095347 *
RYDHMER, K. ET AL.: "Applied hyperspectral LIDAR for monitoring fauna dispersal in aquatic environments", DIVISION OF COMBUSTION PHYSICS, LUND REPORTS ON COMBUSTION PHYSICS, LRCP-196, May 2016 (2016-05-01), pages 19, XP055569159, ISSN: 1102-8718 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655464A (en) * 2021-09-28 2021-11-16 Zhejiang Normal University Method for improving the spatial resolution of a Scheimpflug imaging lidar
CN113655464B (en) 2021-09-28 2023-09-29 Zhejiang Normal University Method for improving the spatial resolution of a Scheimpflug imaging lidar

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 18856091; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 18856091; Country of ref document: EP; Kind code of ref document: A1