EP1590683A1 - Procédé et système lidar proche infrarouge, infrarouge et ultraviolet - Google Patents

Procédé et système lidar proche infrarouge, infrarouge et ultraviolet

Info

Publication number
EP1590683A1
Authority
EP
European Patent Office
Prior art keywords
light
objects
receiving
streak
plural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP04702184A
Other languages
German (de)
English (en)
Inventor
Gregory J. Fetzer
David N. Sitter
Douglas Gugler
William L. Ryder
Andrew J. Griffis
David Miller
Asher Gelbart
Shannon Bybee-Driscoll
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arete Associates Inc
Original Assignee
Arete Associates Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arete Associates Inc filed Critical Arete Associates Inc
Publication of EP1590683A1 publication Critical patent/EP1590683A1/fr
Withdrawn legal-status Critical Current

Links

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 - Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 - Lidar systems specially adapted for specific applications
    • G01S17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483 - Details of pulse systems
    • G01S7/486 - Receivers
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 - Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481 - Constructional features, e.g. arrangements of optical elements
    • G01S7/4816 - Constructional features, e.g. arrangements of optical elements of receivers alone

Definitions

  • This invention relates generally to systems and methods for automatically detecting light reflected or scattered from an object, and determining distance to the object. Also found, in preferred applications of the invention, are other properties of the detected object — such as for example reflectance, velocity, and three-dimensional relationships among plural detected objects.
  • lidar or “light detection and ranging” — analogous to the better-known “radar” that uses the radio portions of the electromagnetic spectrum. Because most lidar systems use pulsed lasers as excitation, the acronym “lidar” is sometimes said to instead represent “laser illumination detection and ranging” .
  • a sharp pulse of light is projected toward an object, or field of objects, that is of interest.
  • the object or objects reflect — for turbid media a more descriptive term is "scatter" — a portion of this excitation radiation back toward the system, where the return radiation is time resolved.
  • High-resolution lidar imaging provides fully three-dimensional images of far higher resolution on one hand, and images that also have distinct advantages in comparison to common two-dimensional imaging (e.g. photographs) on the other hand.
  • some of the advantages provided by the additional range information are the ability to remove clutter, to accurately discriminate decoys from objects of real interest, and to provide additional criteria for detection and classification.
  • High-resolution three-dimensional imaging may provide volumetric pixel sizes of approximately 0.75 mrad by 0.75 mrad by 7.5 cm. Such imaging requires high-bandwidth (2 GHz) lidar receivers with small instantaneous fields of view (IFOV) and many pixels in the two-dimensional imaging directions.
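  • As a consistency check on these numbers (the arithmetic below is supplied here, not quoted from the text): the 7.5 cm range bin and the 2 GHz receiver bandwidth are related through the two-way travel time of light.

```latex
% Range resolution of a pulsed lidar:  \Delta r = c \, \Delta t / 2
% For \Delta r = 7.5 cm:
\Delta t = \frac{2\,\Delta r}{c}
         = \frac{2 \times 0.075~\mathrm{m}}{3 \times 10^{8}~\mathrm{m/s}}
         = 0.5~\mathrm{ns},
\qquad
B \approx \frac{1}{\Delta t} = 2~\mathrm{GHz}.
```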
  • the optical return is made to take the form of a substantially one-dimensional image (i.e. slit-shaped, extending in and out of the plane of Fig. 1), or is reformatted 23 as such an image.
  • a photocathode screen 24 of the streak tube 18 forms a one-dimensional electronic image 25, which is refined by electron-imaging components 26 within the streak tube.
  • position along these unidimensional optical and electronic images 22 , 25 may either represent location along a simple thin image slice of the object field, or represent position in a very complex composite, positionally encoded version of a two-dimensional scene. This will be explained shortly.
  • a very rapidly varying electrical deflection voltage 28, applied across deflection electrodes 27, sweeps 29 the one-dimensional electronic image 25 quickly down a phosphor-coated surface 31, forming a two-dimensional visible image on the phosphor screen.
  • the sweep direction 29 then represents time — and accordingly distance, to each backscattering object — while the orthogonal direction on the screen (again, extending in and out of the plane of Fig. 1) represents position along the input optical image, whether a simple image slice or an encoded scene.
  • Relative motion between the apparatus and the object field is provided, as for instance by operating the apparatus in an aircraft that makes regular advance over a volume of seawater, while laser- beam pulses are projected toward the water.
  • the pulsed laser beam is formed into the shape of a thin fan — the thin dimension of the fan-shaped beam being oriented along the "track" (direction) of this relative motion.
  • the broadly diverging wide dimension of the fan beam often called the "cross-track" dimension
  • the narrow-track dimension is at right angles to the direction of motion: this is the above- mentioned case of direct physical correspondence between the unidimensional optical or electronic image and a real slice of an object image.
  • the Gleckler patent mentioned above shows that two or more such one-dimensional images can be processed simultaneously — yielding a corresponding number of time-resolved pulse returns.
  • Each laser pulse thus generates at the receiver, after time- resolution of the return pulse, at least one two-dimensional snapshot data set representing range (time) vs . azimuth (cross-track detail) for the instantaneous relative position of the system and object field.
  • Successive pulses, projected and captured during the continuing relative motion provide many further data frames to complete a third dimension of the volumetric image.
  • the resulting three-dimensional image can be visualised simply by directly observing the streak-tube phosphor screen, or by capturing the screen display with a CCD or other camera at the frame rate (one frame per initiating laser pulse) for later viewing.
  • Another option is to analyse the captured data, e.g. in a computer, by any of myriad application-appropriate algorithms.
  • the resulting backscatter pulse, correspondingly, is all time resolved concurrently — typically requiring, at least for a streak-tube, temporary mapping of the two-dimensional return into a one-dimensional (i.e. line) image that the tube can sweep.
  • mapping is performed by a custom fiber-optic prism.
  • mapping may be done in a very great variety of ways. For example successive raster-equivalent optical-image slices can be placed end-to-end along the input photocathode, or individual pixels can be subjected to a completely arbitrary reassignment to positions along the cathode. Any mapping intermediate between these extremes is also possible.
  • the data can be remapped to recover a multiplicity of original two-dimensional image-data frames — each now having its family of ranged variants. If preferred the full three-dimensional data set can be unfolded in some other way for analysis as desired.
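  • The mapping and remapping just described amount to a fixed, invertible permutation of pixel positions. Below is a minimal sketch of the simplest case (successive raster slices placed end-to-end, then unfolded again after time resolution); the function names and array shapes are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def map_scene_to_slit(frame_2d):
    """Place successive raster lines of a 2-D scene end-to-end along the
    one-dimensional streak-tube slit direction."""
    return frame_2d.reshape(-1)          # row-major concatenation of raster lines

def remap_slit_to_scene(slit_1d, n_rows, n_cols):
    """Invert the mapping after time resolution, recovering the 2-D frame."""
    return slit_1d.reshape(n_rows, n_cols)

# Example: an 8 x 16 scene becomes a 128-element composite line image.
scene = np.arange(8 * 16).reshape(8, 16)
line = map_scene_to_slit(scene)
assert np.array_equal(remap_slit_to_scene(line, 8, 16), scene)
```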
  • Streak-tube imaging lidar is thus a proven technology, demonstrated in both pushbroom and flash configurations.1 Unfortunately, however, it has heretofore been usable only in the visible-ultraviolet portion of the electromagnetic spectrum, whereas several important applications favor operation in longer-wavelength spectral regions.
  • Eye-safety requirements apply to many operating environments.
  • the human eye is extremely sensitive to visible radiation. Severe retinal damage can occur if someone is exposed to radiation transmitted by a conventional streak-tube lidar system.
  • the near-infrared is far from the only spectral region in which lidar operation would be very advantageous.
  • the more-remote infrared portion of the electromagnetic spectrum (3 to 12 µm) overlaps strong absorption features of many molecules.
  • wavelengths in this region are particularly attractive for monitoring gaseous contaminant concentrations such as those encountered in atmospheric pollution or industrial process control .
  • CO2 lasers operating at 9 to 11 µm can produce high power and have been deployed in space for a number of applications.
  • the present invention is well suited for use with CO2-laser-based imaging lidar systems.
  • Differential or ratio measurements (for example differential absorption spectroscopy) and other analogous plural- or multispectral investigations are also of interest.
  • Heretofore this has not been practical in the lidar field, even for measurements comparing and contrasting returns as between the visible and ultraviolet.
  • United States Patent 6,349,016 of Larry Coldren is representative of advanced sophistication in a field previously related only to optical communications, optical switching and the like. To the best of the knowledge of the present inventors, that field has never previously been connected with lidar operations or any other form of three-dimensional imaging.
  • the present invention offers just such refinement.
  • the invention has major facets or aspects, some of which can be used independently — although, to optimize enjoyment of their advantages, certain of these aspects or facets are best practiced (and most-preferably practiced) in conjunction together.
  • the invention is apparatus for detecting objects and determining their distance, to form a two-dimensional or three-dimensional image.
  • the apparatus includes some means for receiving light scattered from the objects and in response forming a corresponding light of a different wavelength from the scattered light.
  • these means will be called simply the "receiving-and-forming means”.
  • the receiving-and-forming means will instead be called a "wavelength converter" (although the term "converter" may be semantically imprecise, as discussed later in this document).
  • the shorthand symbol "λC" is also used for the wavelength converter, λ (lambda) being the lower-case Greek letter that is the traditional symbol for wavelength.
  • the first aspect of the invention also includes some means for time-resolving the corresponding light to determine respective distances of the objects. Again for generality and breadth these means will be called the "resolving means".
  • inserting the receiving-and-forming means in advance of the time-resolving means can provide to the latter (e.g. a streak tube) — even if the scattered light is not visible light — substantially the same visible optical signal that would be obtained by receiving visible scattered light directly from the objects.
  • the receiving-and-forming means thereby enable the external portions of the overall system to operate in almost any wavelength region, and can free the system from wavelength limitations of the time-resolving means. In this way the heretofore-intractable problems discussed above are substantially eliminated.
  • the apparatus is further for use in determining reflectance of the objects; and the receiving-and-forming means include some means for measuring and recording gray-level information in the received and formed light.
  • the receiving-and-forming means include a first, optointermediate stage that receives the scattered light and in response forms a corresponding intermediate signal. Accordingly the receiving-and-forming means also include a second, intermedioptical stage that receives the intermediate signal and in response forms the corresponding light.
  • By an "optointermediate" stage is here meant a subsystem that receives optical signals (the lidar return beam, in particular) and generates a corresponding signal in some intermediate domain — which may be electronic (in the present day, possibly the only practical such domain), or optical, or quantum-based, or a signal formed in yet some other medium.
  • An "intermedioptical" stage analogously describes a converse subsystem that receives and operates on that intermediate signal to generate the corresponding optical output. If this basic preference of employing two stages that communicate through a common intermediate signal is observed, then two alternative subpreferences arise: preferably the intermediate signal includes either an optical signal or an electronic signal.
  • the time-resolving means include a streak-camera device; and the system further includes a light source, and some means for projecting pulses of light from the source toward the objects for scattering back toward the receiving-and-forming means.
  • the streak-camera device be incorporated into a repetitively pulsed pushbroom system, or into a flash lidar system.
  • the system also include an aircraft or other vehicle transporting the receiving- and-forming means, and the streak lidar device as well, relative to the objects.
  • the apparatus be stationary and the scene made to move. In principle the pushbroom mode simply involves relative motion between the two.
  • the streak-camera device include a multislit streak tube.
  • the intermediate signal include an electronic signal
  • the first stage include an optoelectronic stage
  • the second stage include an electrooptical stage.
  • the optoelectronic stage include light-sensitive semiconductor devices — and these devices in turn include photodiodes, e.g. PIN ("P-intrinsic") diodes, or alternatively avalanche photodiodes.
  • the electrooptical stage include vertical-cavity surface-emitting lasers, or light-emitting diodes, connected to receive the electronic signal from the PIN diodes.
  • the electrooptical stage include edge-emitting lasers, or quantum-dot lasers, or microelectromechanical systems — any of these devices being connected to receive the electronic signal from the PIN or other diodes .
  • the apparatus further include utilisation means responsive to the time-resolving means.
  • "Utilisation means” are any means that utilise the resulting output information from the time-resolving means .
  • the utilization means are one or more of:
  • a monitor that displays an image of the objects for viewing by a person at the apparatus
  • a data-processing device for analysing the objects or images of them
  • means for determination of hostile conditions, and resulting security measures including but not limited to automatically deployed area-sealing bulkheads.
  • the receiving-and-forming means include discrete arrays of light-sensing and light-producing components respectively.
  • the receiving and forming means also include a discrete array of circuitry for controlling the forming means in response to the receiving means.
  • the receiving and forming means include at least one monolithic hybrid of light-sensing and light-producing components.
  • the monolithic hybrid further include circuitry for controlling the forming means in response to the receiving means.
  • the invention is a method for detecting and ranging objects.
  • the method includes the step of receiving light scattered from the objects.
  • the method also includes the step of, in response to the scattered light, forming a corresponding light of a different wavelength from the scattered light.
  • the method includes the step of time-resolving the corresponding light to determine respective distances of the objects.
  • Although the second major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits the invention is preferably practiced in conjunction with certain additional features or characteristics.
  • the method is further for use in determining reflectance of the objects; and the receiving and forming steps both preserve at least some gray-level (i. e. relative intensity) information in the scattered light.
  • the receiving step receive return light in plural wavelength bands, and the forming step form the corresponding light in substantially one common band. If this plural-band preference is observed, it is further preferred that the bands include at least one UV wavelength; and then a still further nested preference is that they include at least one NIR wavelength. (These choices exhibit distinct abilities of the invention; in practice, spectral regions are chosen based on physics to extract unique object data.) Two other alternative basic preferences are that the receiving step include receiving the plural wavelength bands at (1) plural slits, respectively, of a plural-slit streak camera, and (2) plural times, respectively.
  • the method also include the step of, before the receiving step, transmitting light in said plural wavelength bands, substantially simultaneously, toward the objects.
  • the receiving step include transmitting the plural wavelength bands at plural times, respectively.
  • the method also include the step of deriving plural signals from the received light in the plural wavelength bands, respectively. Accordingly the method preferably also includes the step of finding differences or ratios between signals received in the plural wavelength bands .
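  • A minimal sketch of the preference just stated, deriving per-pixel difference and ratio signals from returns captured in two wavelength bands; the array names, shapes and the small epsilon guard are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def band_difference_and_ratio(return_a, return_b, eps=1e-12):
    """Given two co-registered range-vs-cross-track frames captured in two
    wavelength bands, form the per-pixel difference and ratio signals."""
    a = np.asarray(return_a, dtype=float)
    b = np.asarray(return_b, dtype=float)
    return a - b, a / (b + eps)   # eps guards against division by zero

# Example with two small synthetic frames (range x cross-track).
frame_uv  = np.random.rand(64, 12)
frame_nir = np.random.rand(64, 12)
diff, ratio = band_difference_and_ratio(frame_uv, frame_nir)
```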
  • the invention is apparatus for detecting objects and determining their distance and reflectance, to form a two-dimensional or three- dimensional image;
  • the apparatus includes a light source; and means for projecting pulses of light from the source toward the objects for scattering back toward the receiving-and-forming means; means for receiving light scattered from the objects and in response forming a corresponding light of a different wavelength from the scattered light, preserving gray-level information in the received and corresponding light; and means, including a streak camera, for time-resolving the cor- responding light to determine respective distances and reflectances of the objects;
  • the receiving-and-forming means include: a first, optoelectronic stage, including an array of light-sensitive PIN diodes, that receives the scattered light and in response forms a corresponding electronic signal; and a second, electrooptical stage, including an array of vertical-cavity surface-emitting lasers connected to receive the electronic signal from the PIN diodes, that receives the electronic signal and in response forms the corresponding light.
  • this facet of the invention may represent a description or definition of the third aspect or facet of the invention in a broad and general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. In particular, though not wholly independent of the first aspect presented earlier, this facet of the invention aggregates several preferences that may be particularly synergistic. Without in the least denigrating the individual aspects and preferences discussed above, the aggregated system of the third aspect is believed to be especially advantageous in short-term manufacturability and overall practicality.
  • the streak lidar device is incorporated into a repetitively pulsed pushbroom system.
  • Fig. 1 is a block diagram of a streak tube in operation, shown together with a CCD camera and an output-data connection — symbolizing processing and utilization means — that all together form a streak-tube imaging lidar ("STIL") camera;
  • Fig. 2 is a schematic diagram of a multipixel wavelength converter ("λC");
  • Fig. 3 is a typical light-output vs. drive-current ("L-I”) characteristic of a VCSEL;
  • Fig. 4 is a single-channel λC;
  • Fig. 5 is a λC block diagram used for the purpose of estimating the conversion efficiency of the system shown in Fig. 4;
  • Fig. 6 is an optical-bench layout used to validate the performance of the λC;
  • Fig. 7 is a group of oscilloscope traces corresponding to various signals in the λC;
  • Fig. 8 is a plot of the receiver and VCSEL output-waveform pulse widths as a function of the drive pulse width (the symbols representing the data points collected during the experiment, and the solid lines representing least-squares-error linear fits to the data) ;
  • Fig. 9 is a lidar image of pulse return from mirror 1 (Fig. 6), with time on the vertical axis — increasing upward;
  • Fig. 10 is a like image of pulse return from mirror 2;
  • Fig. 11 is an oscilloscope screen capture showing the laser drive pulse (top) with the corresponding pulse returns from mirror 1 (left pulse, below) and mirror 2 (right) ;
  • Fig. 12 is a like oscilloscope screen capture showing the noise created by the signal generator — producing the second, smaller pulse seen in the lidar imagery, the bottom line being pulse return from mirror 1, and the central line that from mirror 2;
  • Fig. 13 is a diagram, very highly schematic, showing one of many prospective uses of preferred embodiments of the invention —particularly including an aircraft containing and translating the apparatus in the so-called "pushbroom” pulsed mode, over objects to be imaged in eye-safe mode;
  • Fig. 14 is a like diagram for the so-called "flash" mode;
  • Fig. 15 is a spectral-response curve for InGaAs;
  • Fig. 16 is a diagram of a multichannel test setup for imaging box in front of a wall
  • Fig. 17 is a multichannel streak image of the Fig. 16 wall alone, i. e. without the box, and showing multiple returns from a twelve-pixel prototype system
  • Fig. 18 is a like image captured with the box present, and two feet from the wall (higher reflectivity of the cardboard is indicated here by increased brightness of the return pulses) ;
  • Fig. 19 is a like image with the box four feet from the wall;
  • Fig. 20 is an image very generally like Fig. 18 but with a translucent object (window screen) in front of the wall, substituted for the box — and with the background electronically subtracted;
  • Fig. 21 is a mesh plot of streak return from a screen in front of a wall — showing both the strong return from the solid wall and the weaker signal from the screen
  • Fig. 22 is a graph of predicted AC signal-to-noise ratio for two different types of detectors, namely P-intrinsic diodes and avalanche photodiodes;
  • Fig. 23 is a graph showing predicted signal-to-noise ratio for an overall STIL receiver according to the invention, incorporating the performance of the λC (here D is the receiver collection aperture diameter, T_rx is the receiver transmission efficiency, and the remaining parameter is the atmospheric attenuation coefficient);
  • Fig. 24 is a conceptual block diagram showing the λC used in a time-sharing plural-wavelength-band lidar system;
  • Fig. 25 is a like diagram for a spatial-sharing (plural-slit) plural-wavelength-band lidar system that uses filters to separate wavebands;
  • Fig. 26 is a like diagram of another spatial-sharing system that instead uses a diffraction grating.
  • the invention provides a low-cost alternative to visible-light lidar.
  • NIR radiation pulses can be pro- jected toward objects, and the returned NIR pulses 8 (Fig. 2) converted, at the receiver, into pulses 22' at a visible wavelength.
  • This visible radiation 22' is then directed into a streak tube 18, effectively emulating the visible light 22 (Fig. 1) entering a streak-tube system conventionally.
  • the remainder of the operation is closely analogous to generally conventional operation of the streak tube, excepting only possible effects of positional quantization along the slit direction — and the result is a streak-tube lidar receiver operable for NIR applications .
  • a number of techniques could be used to accomplish the wavelength conversion. It is possible to use nonlinear optical techniques such as Raman scattering, stimulated Raman scattering, and harmonic frequency generation to achieve wavelength conversion.
  • Each of these techniques requires relatively complex optical schemes; and generally the conversion efficiency is strongly dependent on the intensity of the light at the converter.
  • In particular, light 8 of one wavelength drives an intermediate opto-electro-optical stage 13-17 (Fig. 2) that generates corresponding light 22' of another wavelength.
  • This approach uses detectors 13, amplifiers 14, 15, and emitters 16 already developed for other applications — particularly optical telecommunication or optical switching.
  • These established technologies, mentioned in subsection c) of the "BACKGROUND" section in this document, include development and marketing of discrete components 4, 7-9 — as well as monolithic (common-epitaxy) systems introduced in the Coldren patent. They appear to have never before been associated with lidar or other three-dimensional measurements.
  • Near-infrared light 8 from the object field actuates the ⁇ C, which responds by passing visible light 22' to the streak-tube 18: (1) the visible line image 22' of the backscattered light is formed, as in the conventional system, on a slit in front of the streak-tube photocathode 24 (Fig. 1) , bringing about a corresponding line image 25 of photoelectrons within the tube that is accelerated toward the anode end 31 where the phosphor lies .
  • the photoelectrons e⁻ are electrostatically deflected 29 across or down the phosphor, at right angles to the linear dimension, forming a two-dimensional image on the phosphor — which responds by generating a visible image that is very nearly identical geometrically.
  • These electronic and visible images have spatial (line-image axis) and temporal (deflection/sweep axis) dimensions.
  • a CCD camera 19 captures 34 the visible two-dimensional image formed, or a human operator directly views the phosphor screen.
  • the third dimension is captured as described earlier — i.e., either in pushbroom mode (by repetitively pulsing the laser, while providing relative motion between the scene and the sensor platform 5, 6) or in flash mode (by premapping a full two-dimensional scene into a composite line image, and time-resolving that composite image).
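  • In pushbroom mode the third dimension is simply the pulse (frame) index, so a volume can be assembled by stacking successive range-versus-cross-track frames. Below is a sketch under assumed array conventions (range along rows, cross-track along columns); the shapes are illustrative, not taken from the patent.

```python
import numpy as np

def assemble_pushbroom_volume(frames):
    """Stack successive streak-camera frames (range x cross-track) along the
    along-track axis to form a 3-D volume: (along_track, range, cross_track)."""
    return np.stack(frames, axis=0)

# Example: 100 pulses, 512 range bins, 256 cross-track pixels.
frames = [np.zeros((512, 256)) for _ in range(100)]
volume = assemble_pushbroom_volume(frames)   # shape (100, 512, 256)
```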
  • the far-reaching objective of this invention is to provide a compact, imaging lidar receiver that operates in the near-IR region of the spectrum and provides high-resolution three-dimensional imagery.
  • the receiver combines a patented streak-tube imaging lidar ("STIL”) receiver from Arete Associates, of Sherman Oaks, California, with a complementary receiver front end that accomplishes the figurative conversion of near-infrared (NIR) light to a visible wavelength.
  • the result is a lidar receiver that can operate at wavelengths outside the range of the streak-tube photocathode sensitivity, yet provide imagery that is similar to that currently available with the visible-wavelength STIL systems.
  • the photodetector current is amplified and converted to a voltage signal by a transimpedance amplifier 14.
  • the output of the amplifier drives a vertical cavity surface-emitting laser (VCSEL) 16 that emits in the visible region of the spectrum.
  • the VCSEL radiation 22' is incident on the photocathode of the streak tube, and the operation of the streak tube is as described in the earlier "BACKGROUND" section of this document.
  • Embodiments of this invention allow input operation at other wavelengths, merely by replacement of the InGaAs modules with detectors 13 sensitive to the wavelengths of interest — for example InSb or PbSe for two to four microns, or HgCdTe for five to fifteen microns. Also, for UV operation, Si detectors are appropriate.
  • the output generators 16 can be LEDs rather than VCSELs. Such a substitution is expected to inflict no more than a loss in sharpness due to optical crosstalk at the output, and at most some degradation of temporal response (i. e. , it is possible that there will be no temporal degradation at all) .
  • InGaAs photodetectors 13 used for telecommunications provide high quantum efficiencies and sufficient bandwidth to serve in this application.
  • the technology is quite mature, and large arrays of detectors are commercially available.
  • the one-dimensional photodiode array is input to an array of transimpedance amplifiers (TIAs) 14 that drives an array of amplifiers 15 (Figs. 4 through 6).
  • the signal is then transmitted to the vertical cavity surface emitting laser (VCSEL) array 16 and captured by a conventional streak-tube/CCD camera 18, 19 (Figs. 1, 2 and 6).
  • VCSELs have been selected for the output stage because of their bandwidth and because they are inherently fabricated in array formats .
  • VCSELs are unique, in comparison to other diode lasers, in that they emit from the surface of the structure rather than from the edge. Consequently, they are by nature grown in arrays, and microlens optical arrays 17 can be integrated directly onto the devices — facilitating collimation of the output. A like attachment process may be available for light-emitting diodes.
  • Complex electrical contacts required to support a large array of VCSELs can be formed through so-called "flip-chip bump bonding".
  • This is detailed e.g. by Amkor Technology, Inc. (at www.amkor.com/enablingtechnologies/FlipChip/index.cfm) generally as follows. It is a method of electrically connecting a die to a package carrier. The package carrier, either substrate or leadframe, then provides the connection from the die to the exterior of the package. In "standard" packaging, interconnection between a die and a carrier is made using wire. The die is attached to the carrier, face-up; next a wire is bonded first to the die, then looped and bonded to the carrier. Wires are typically 1 to 5 mm in length, and 25 to 35 µm in diameter.
  • the interconnection between the die and carrier is instead made through a conductive so-called “bump” that is formed directly on the die surface.
  • the bumped die is then inverted ("flipped over", in packaging parlance) and placed face-down, with the bumps connecting to the carrier directly.
  • a bump is typically 70 to 100 ⁇ m tall, and 100 to 125 ⁇ m in diameter.
  • the flip-chip connection is generally formed with one of two attaching media: solder or conductive adhesive.
  • solder in either eutectic (63% Sn, 37% Pb) or high-lead (97% Pb, 3% Sn) compositions; and solder interconnect is used in the initial flip-chip products that Amkor has brought to market.
  • solder-bumped die is attached to a substrate by a solder-reflow process, very similar to the ball-grid array (BGA) process in which solder balls are attached to a package exterior.
  • the remaining voids between the die and the substrate — surrounding the solder bumps — are filled with a specially engineered epoxy called "underfill".
  • That material is particularly designed to control stress in the solder joints caused by the difference in thermal expansion between the silicon die and the carrier. Once cured, the underfill absorbs that stress, reducing the strain on the solder bumps and thereby greatly increasing the life of the finished package.
  • the chip-attachment and underfill steps are the elements of flip-chip interconnection. Beyond this, as the Amkor presentation concludes, the remainder of package construction surrounding the die can take many forms and can generally utilise existing manufacturing processes and package formats.
  • d) Leveraging technologies
  • VCSELs: Physical characteristics of VCSELs are well suited to solving the λC/STIL problem.
  • individual VCSEL structures are small (about 3 to 10 ⁇ m) although they typically have a high beam divergence unless the output is coupled into a microlens .
  • Addition of a lens array 17 results in a structure with pitch between 100 and 250 ⁇ m.
  • a VCSEL array with 200-µm pitch can be coupled into the streak tube through a three-to-one fiber taper (also at 17), providing two hundred fifty-six cross-track pixels on a standard-size (12.3 mm) CCD chip — assuming a streak-tube magnification of 0.7, which is common.
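  • The pixel count quoted above follows from simple geometry; the intermediate figures below are computed here as a check rather than quoted from the text.

```latex
% 200-um VCSEL pitch, 3:1 fiber taper, streak-tube magnification 0.7:
p_{\mathrm{CCD}} = 200~\mu\mathrm{m} \times \tfrac{1}{3} \times 0.7 \approx 46.7~\mu\mathrm{m},
\qquad
N \approx \frac{12.3~\mathrm{mm}}{46.7~\mu\mathrm{m}} \approx 263
\;\;(\text{i.e. roughly } 256 \text{ cross-track pixels}).
```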
  • VCSEL emission wavelengths can be tailored to match the response peak of the streak-tube photocathode.
  • VCSEL output wavelengths between 600 and 850 nm are easily achievable — with AlGaAs/GaAs or InGaAs/GaAs materials and standard molecular-beam epitaxy techniques .
  • Arrays of up to a thousand elements have been manufactured 7, and several companies offer commercially available custom arrays 8, 9.
  • ⁇ CS ⁇ Ls have a distinct lasing threshold 41 (Fig. 3) that must be overcome to obtain significant light output.
  • a bias-current circuit 61 (Fig. 4) supplies the VCSEL with electrical current 64 held just below that threshold.
  • drive electronics 14 provide an amplified photocurrent 65 which is added to the quiescent-state current 64.
  • the light/current relationship 42 is very linear from the turn-on point and up toward the region of saturation 43 — accordingly providing nearly linear response to intensity of the lidar return. This characteristic is important where contrast or intensity information in the lidar imagery may provide significant discrimination capabilities. Particularly good examples are polarimetric lidar applications, in which maintaining contrast information is critical.
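  • A minimal numerical sketch of the drive scheme just described: hold the VCSEL just below threshold with a bias current and add the amplified photocurrent on top, so the optical output tracks the lidar return linearly between threshold and saturation. The threshold, slope-efficiency and saturation values below are illustrative assumptions chosen to be roughly consistent with the prototype VCSEL figures given later, not manufacturer data.

```python
def vcsel_output_mw(i_photo_ma, i_bias_ma=1.9, i_threshold_ma=2.0,
                    slope_eff_mw_per_ma=0.01, i_sat_ma=100.0):
    """Model the VCSEL light output for a given photocurrent-derived drive.

    The bias holds the device just below threshold; the amplified photocurrent
    is added on top, and output is linear between threshold and saturation.
    """
    i_total = min(i_bias_ma + i_photo_ma, i_sat_ma)   # clamp at saturation
    return max(0.0, (i_total - i_threshold_ma) * slope_eff_mw_per_ma)

# With no return signal the VCSEL stays dark; a 10 mA signal yields ~0.1 mW.
print(vcsel_output_mw(0.0), vcsel_output_mw(10.0))
```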
  • Our prototype incorporates VCSEL drive circuitry 14 (Fig. 4) that provides ample bandwidth and gain to allow operation of a single-pixel lidar system.
  • the electronics required to drive the VCSEL elements are quite simple.
  • InGaAs photodiodes have an extremely high quantum efficiency in comparison with a streak-tube photocathode. Therefore noise characteristics at the input end 13 of our λC are in our favor; and any gain that can be applied before the newly generated visible light 22' reaches the photocathode 24, without adding significant noise, is advantageous.
  • the invention does involve some tradeoffs .
  • An advisable production configuration will have two hundred fifty-six channels, in a device of suitable size to couple with a streak tube; this configuration places many operational amplifiers 14, 15 in a small space. Accordingly power consumption and physical space must be balanced against the gain-bandwidth product.
  • the solution here is a simple configuration that provides arbitrary but significant gain, and that is readily reduced to an integrated-circuit implementation.
  • Noise sources include background radiation (P_b), dark current (I_D), detector shot noise, and the respective amplifier noise terms for the transimpedance and transconductance amplifiers (σ_T², σ_G²).
  • Equation 2 approximates the SNR for a receiver with bandwidth B.
  • Using Equation 2, the SNR at the output of the λC has been computed and plotted (Fig. 22) as a function of bandwidth for InGaAs PIN and avalanche photodiodes (APDs), and incident energy on the photodetector of 4 fJ.
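  • The patent's Equation 2 is not reproduced in this text. A standard shot-noise-plus-amplifier-noise expression of the general form described above is given here only as an assumed stand-in, with R the detector responsivity, P_s and P_b the signal and background optical powers, q the electron charge, I_D the dark current, B the bandwidth, and σ_T², σ_G² the transimpedance- and transconductance-amplifier noise terms:

```latex
\mathrm{SNR} \;\approx\;
\frac{\left(R\,P_{s}\right)^{2}}
     {\,2\,q\,\bigl(R\,(P_{s}+P_{b}) + I_{D}\bigr)\,B
      \;+\; \sigma_{T}^{2} \;+\; \sigma_{G}^{2}\,}
```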
  • the laser pulse width was varied inversely to the bandwidth of the λC.
  • the lowest detectable laser energy for typical STIL receivers is approximately 1 fJ/pixel.
  • use of InGaAs APDs in the λC will allow SNR performance at 1.5 µm nearly identical to that of STIL receivers operating at 532 nm.
  • the dominant noise factor in the λC is the transimpedance amplifier.
  • the amplifier considered here is a commercial, off-the-shelf ("COTS") item whose design can be improved.
  • the high transimpedance resistance and the inherent noise in the amplifier can be traded off to some extent.
  • a ⁇ C with a large number of pixels is instead a highly specialized ensemble of integrated circuitry, most-preferably packaged as a hybrid multichip module.
  • the VCSEL in our prototype is a Honeywell SV3644-001 discrete element.
  • Technical specifications of interest for this VCSEL are: 673 nm output, 2 V threshold voltage and 2 mA threshold current. It can be driven above threshold in short-pulse low-duty-cycle mode from 2 to 100 mA, leading to a 0.01 to 1 mW peak output power range.
  • the receiver module is an InGaAs PIN from Fermionics model number FRL 1500.
  • the VCSEL drive circuitry used (Fig. 4) is discussed in subsection e) above.
  • the prototype took the form of an optical-bench setup using primarily COTS components, including a 1.55 µm laser diode 9 (Fig. 6).
  • the optical bench setup (Fig. 6) is assembled so that a laser pulse 6 traveling from the 1.55 µm laser diode 9 through a beam splitter 5 reflects from one of two plane mirrors 1, 2 mounted on the bench. A portion of the NIR reflected return light 4, redirected by the splitter 5, is incident on the input detector 13 of the λC 10. The resulting VCSEL output is projected through a short fiber-optic coupler 17 onto the faceplate of the streak tube 18.
  • h) Bandwidth
  • the pulse generator is not capable of producing pulses shorter than 2.6 nsec, but these observations nevertheless demonstrate directly that the invention achieved a bandwidth of approximately 400 MHz very easily — and, by visual interpolation of the screen waveforms, also accomplished a bandwidth extension into the gigahertz regime.
  • the ⁇ C is an excellent match to the already demonstrated high temporal resolution of the streak-tube lidar receiver.
  • the resulting lidar images include a bright flash 81 (Fig. 9) corresponding to reflection from mirror 1, and another such flash 83 (Fig. 10) corresponding to that from mirror 2.
  • the flash 81 from the near mirror 1 is much closer to the origin of time coordinates (the bottom of the image) than the flash 83 from the far mirror 2. This relationship makes clear that the system is able to detect a range difference from the two signals.
  • any undesirable ringing in the drive circuitry causes the VCSEL current to rise, only instantaneously, above the threshold — releasing a small pulse of light.
  • This small pulse is detectable by the streak-tube camera and appears in the lidar images as a smaller, dimmer pulse 82, 84 (Figs. 9 and 10).
  • a multichannel ⁇ C that we built and tested consists of the original single-channel circuit replicated twelve times.
  • the sin- gle-element InGaAs photodetector has been replaced with a twelve- channel InGaAs photodiode array (Fermionics P/N FD80DA-12) .
  • the array has a 250 ⁇ m pitch between detector elements; otherwise the element size, spectral response and sensitivity are all identical to the original InGaAs diode. Care was taken during the board layout to ensure line lengths were kept uniform from channel to channel to avoid a potential phase mismatch due to signal-propagation delays .
  • the multichannel unit was tested using a doubled Nd:YAG laser with an 8 ns pulse generating approximately 1.80 W of average power at 532 nm and 1.20 W at 1064 nm. Pulse repetition frequency was 200 Hz.
  • the responsivity 232 (Fig. 15) of the InGaAs sensors at 1064 nm is about 3 dB reduced from the responsivity at 1500 nm.
  • the response (off-scale, 231) of the InGaAs detectors to 532 nm visible light is about 8.5 dB lower still.
  • the λC board physically blocked the visible light from reaching the streak tube, so images were from the 1064 nm light only.
  • the beam from a source laser 309 (Fig. 16) was projected through a Fresnel lens (not shown) to produce a fan beam 303 parallel to the focal plane of the detector array 13 (Fig. 2) in the λC (Fig. 16).
  • the λC collection optics consist of a 12.0 mm f/1.3 lens 311 positioned approximately 1 cm in front of the array element. The 12.5 mm FOV of the lens roughly approximates the horizontal expansion of the fan beam 303.
  • Our lidar test objects included a wall 241, at a distance 312 of about 5 m (sixteen feet) from the λC, and also a cardboard box 242 at an adjustable distance 313 in front of the wall.
  • Lidar images: With the cardboard box 242 (Fig. 16) positioned at a distance 313 of roughly 0.6 m (two feet) in front of the wall (roughly 4.3 m or fourteen feet from the wavelength converter), the resulting lidar images (Fig. 18) immediately show very different responses. Clearly the system is indicating a closer object across part of the cross-track field. In our test images (Figs. 17 through 19), range is presented from bottom to top: i.e. lower in these images is closer to the source.
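  • For the geometry just described, the expected temporal separation of the box and wall returns (computed here as a check, not quoted from the text) is:

```latex
\Delta t = \frac{2\,\Delta r}{c}
         = \frac{2 \times 0.6~\mathrm{m}}{3 \times 10^{8}~\mathrm{m/s}}
         = 4~\mathrm{ns}.
```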
  • the twelve-pixel ⁇ C should be built using commercially available VCSEL and detector arrays . In act the array dimension of twelve is based upon commercial availability.
  • VCSEL arrays The primary commercial application for VCSEL arrays is in short-haul communications. These already existing structures can abbreviate development time and reduce the cost of testing the intermediate design.
  • Custom electronics but well within the state of the art, are to be designed — including the transimpedance and transconductance amplifiers .
  • The circuit design discussed earlier can be replicated to drive the twelve VCSELs.
  • All components are best made surface-mount types , with strict attention to transmission-line lengths and control of stray capacitance and inductance.
  • PSPICE ⁇ circuit emulation available as software from Cadence Design Systems, Inc. of San Jose, California, is a recommended design support tool prior to board fabrication. We found that its us® minimised errors in layout and operation.
  • Optical subsystem The optics must focus backscattered radiation onto the detector array and deliver the output of the VCSEL array to a streak-tube receiver. Optics to deliver the backseat- tered light are ideally in the form of a simple telescopic lens system that has high throughput near 1.5 ⁇ m.
  • the twelve-element array will be quite short (3 mm) and have few pixels, it is possible to butt-couple (i. e. abut) VCSEL outputs directly to the fiber taper input of the streak-tube receiver .
  • the pixel pitch of the VCSELs should be adequate to minimize channel crosstalk after that coupling is achieved efficiently.
  • One ideal laser for this system is an Nd:YAG unit coupled to an optical parametric oscillator to provide output at 1.5 ⁇ m.
  • An optical pulse slicer is recommended to enable tailoring of pulse widths in the range from 1 nsec up to the normal laser pulse width of 10 nsec.
  • a suitable streak-tube lidar receiver is a Hamamatsu C4187 system coupled to a DALSA 1M60 CCD camera. We have used such a system in multiple STIL programs and find that it provides a solid foundation on which to build experience with the ⁇ C.
  • a useful data-acquisition system for this purpose is based on a commercially available frame grabber and a PC configured to capture and store the images. To minimize peripheral development time and cost it is advisable to obtain access to a suitable software library .
  • Design of a two-hundred-fifty-six-pixel system is significantly more complex than that of a twelve-pixel system. With the expertise developed in the dozen-pixel prototype, a design team can proceed much more confidently and with fewer detours.
  • This effort should encompass design of a λC with imaging capabilities that can meet real-world objectives.
  • the accompanying table contains lidar system specifications to drive such a design effort.
  • the detector array will be a two-hundred-fifty-six-element PIN InGaAs device. Three possible vendors of such arrays are known to us: Sensors Unlimited, Hamamatsu, and ST.
  • the array is advisably flip-chip bonded, as outlined earlier, to preserve the bandwidth of the detector and minimize the physical extent of the connection.
  • a set of transmission lines should interconnect the detector array and the amplifier array.
  • Transimpedance amplifier array Here too it is recommended to work closely with IC design and process specialists to identify the best process for the custom chip or chips to achieve the required combination of a low noise transimpedance amplifier, adequate bandwidth, a suitable gain stage, and output buffering to drive the VCSEL array.
  • the design from COTS components used on a circuit card should then be converted into devices that are readily fabricated in IC form with the process chosen.
  • a PSPICE® model should be developed in advance to show that the design, when implemented in this fashion, provides the performance required. It is preferable to consider the system aspects of the design, including packaging and thermal modeling. A final system layout for fabrication at a foundry should be reserved for a later stage of development, but the two-hundred-fifty-six-pixel effort will confirm that the work is on track and provide a clear path to the foundry later.
  • VCSEL array: VCSEL arrays are currently being produced in a large number of various formats.
  • the VCSEL array will be two hundred fifty-six elements with a device pitch of ⁇ 250 ⁇ m.
  • the invention is not limited to using VCSELs.
  • the emitters may instead comprise edge-emitting lasers, or quantum diodes or dots, or MEMS devices.
  • a secondary effect of the discrete detectors is an effective reduction in the fill factor of the receiver. This problem can significantly degrade the performance of the system with respect to the more-traditional mode of operation.
  • Such a limitation can be overcome through the use of a microlens array that can be attached or integrated directly onto the detector array. Such a practice is common in CCD and CMOS imaging devices as well as in detector arrays designed for communications and spectroscopic applications.
  • one of myriad uses may involve an aircraft 101 (Fig. 13), serving as part of the inventive apparatus, that translates 104 the STIL system 100 in the so-called "pushbroom" pulsed mode over or next to objects in a scene 105 to be imaged.
  • While in motion 104, the system forms both the downward- or sideward-transmitted near-infrared pulses 103 and the reflected or back-scattered near-infrared pulses 8 within a thin fan-shaped beam envelope 102. (It will be understood that the return pulses actually are scattered in essentially all directions.
  • the receiver optics confine the collection geometry to the fan shape 102.)
  • the aircraft 101 may, further as an example, be searching for a vehicle 109 that has gone off the road in snowy and foggy mountains 108.
  • a person ⁇ 07 in the mountains may be looking 106 directly at the aircraft and into the transmitted STIL beam pulses , but is not injured by the beam because it is near-IR rather than visible.
  • the interpretive portions 91-94 of the apparatus may also in- elude a monitor 99 that displays an image 98 of the scene 105 for viewing by a person 97 within the aircraft — even though the scene 105 itself might be entirely invisible to direct human view, obscured by fog or clouds (not shown) . Viewing may instead, or in addition, be at a base station (not shown) that receives the results of the data-processing system by telemetry 95.
  • the primary data processing 91 , 92 advantageously produces an image 98 for such viewing — preferably a volume-equivalent series of two-dimensional images as taught in the pushbroom art, including the earlier-mentioned previous patent documents of Arete Associates .
  • the system preferably includes automatically operated interpretive modules 94 that determine whether particular conditions are met (here for example the image-enhanced detection of the vehicle sought) , and operate automatic physical apparatus 95, 96 in response. For example, in some preferred embodiments detection of the desired object (vehicle 109) actuates a broadcast announcement 96.
  • These interpretive and automatically responsive modules 91-96, 99 are only exemplary of many different forms of what may be called "utilisation means" that comprise automatic equipment actuated when particular optically detected conditions are met.
  • Others include enabling or denying access to secure facilities through operation of doors and gates, or access to computer systems or to financial services such as credit or banking. Determination of hostile conditions, and resulting security measures such as automatically deployed area-sealing bulkheads, is also within the scope of the invention — as for instance in the case of safety screening at airports, stadiums, meeting halls, prisons, laboratories, office buildings and many other sensitive facilities.
  • Because the NIR beam is eye-safe, the entire system can be operated at close range to people and in fact can be used harmlessly to image people, including their faces, as well as other parts of living bodies, e.g. for medical evaluations, as also taught in the earlier patent documents mentioned above.
  • the elements of the environment 105, 107-109 and of automatically operated response 94-96 that are shown shall be regarded as illustrations of all such other kinds of scenes for imaging, and the corresponding appropriate responses, respectively.
  • the invention is not limited to pushbroom operation, but rather can be embodied in e.g. flash systems. It will be understood, however, that the pushbroom mode makes the most — in terms of resolution or image sharpness — of comparatively modest resources.
  • what is projected 203 (Fig. 14) and returned 208 can be a single rectangular-cross-section beam 202, rather than a succession of fan beams 102 (Fig. 13) .
  • the aircraft 101 may hover, rather than necessarily moving forward at some pace related to frame acquisition, and may be a lighter-than-air craft if desired.
  • the wavelengths of transmitted and recovered pulses 203, 208 are not in the visible part of the spectrum; for many applications they are in the near-IR, but as noted earlier they can be in the infrared or ultraviolet as appropriate to the application. All the illustrations in this document are expressly to be seen as representative of all such different wavelength embodiments.
  • a mapper 212 that rearranges elements (e.g. pixels) of the image captured by the λC 10.
  • the mapper 212 may take the form of a fiber-optic prism that is sliced, as described in the earlier-mentioned Knight or Alfano patents, to place successive raster lines of the image 22' end-to-end and thereby form a single common slit-shaped image 213.
  • mapping may instead be accomplished within the λC, by rerouting electrical connections at some point between the individual detectors 13 and the individual VCSELs (or other emitters) 16.
  • Such an arrangement poses a major challenge to maintaining minimum reactances throughout the system — and especially uniform reactances as between the multiple channels.
  • the flash-mode output image 214 may be regarded as garbled due to the mapper 212 and therefore requiring use of a remapper 215 to restore ordinary image properties of adjacency.
  • This remapping can be accomplished in various ways. The most straightforward is ordinarily a computerized resorting of pixels in the output image 214, to unscramble the effects of the mapper 212.
  • the applicability of the invention is not at all limited to the near-infrared.
  • One important area of use is the more-remote infrared, also a relatively difficult region for development of streak-tube photocathodes because of the even lower photon energy here than in the near-IR.
  • the infrared portion of the electromagnetic spectrum (3 to 12 µm) overlaps strong absorption features of many molecules.
  • wavelengths in this region are particularly attractive for monitoring gaseous contaminant concentrations such as those encountered in atmospheric pollution or industrial process control.
  • CO2 lasers operating at 9 to 11 µm can produce large amounts of power and have been deployed in space for a number of applications.
  • the wavelength converter ("λC") is well suited for use with CO2-laser-based imaging lidar systems.
  • Even though photon energy in the ultraviolet is ample for development of streak-tube photocathode materials — and in fact such materials do exist — nevertheless the UV too offers fertile ground for applications of the present invention.
  • the particular appeal of the present invention lies in the potential for imaging returns from wholly different spectral regions within a single, common streak tube; and if desired even at the same time.
  • two lasers 409a, 409b (Fig. 24) producing respective pulses 403a, 403b in different wavebands — or if preferred a single laser capable of emission in different bands — can be operated in alternation.
  • the returns 408 from an object field 441 are directed to a single, common λC 10, which relays the optical signals to a streak tube 18, camera 19 and interpretive stages 34 just as before. This type of operation yields a time-shared system.
  • the converter 10 may have sufficiently uniform response in the two wavebands to enable operation of the camera system 18, 19, 34 for processing of both sets of returns 408.
  • the streak tube 18, or the back-end stage 19 — or combinations of these — can be synchronously adjusted in sensitivity, electronically.
  • An alternative, acceptable in some applications involving relatively stationary object fields, is to collect a complete image or large portion of an image in one of the wavebands based on pulses 403a from one laser 409a; and then change over to collection of a comparable image or portion in the other waveband based on pulses 403b from the other laser 409b.
  • yet another alternative is shifting 411 of two or more converters 10, 410 into position in front of the streak tube 18 — or, if preferred, retaining a single converter 10 in position while swapping optical filters (not shown) in front of that single converter 10.
  • a spatially-shared system can be used instead.
  • the system advantageously uses a single laser 509 (Fig. 25) that can emit pulses 503 containing light in plural bands, or in particular plural spectral lines .
  • the return 508 from the object field 541 is likewise in plural optical bands, or at least lines .
  • the streak tube 518 in this case advantageously has a plural-slit photocathode as described in the previously mentioned Gleckler patent document.
  • one wavelength filter 501 is inserted in front of just one part of the λC — while a second, different-wavelength filter 502 lies in front of another part.
  • the two filters 501, 502 can be respectively inserted in front of the two ends of the converter array 10, which correspondingly feed optical signals into the two slits.
  • the two ends (or more generally plural parts) of a single streak-tube slit can be driven in this way and the lidar images separately interpreted downstream.
  • Yet another option is to use two different λC sections (not shown), with different wavelength sensitivities, in lieu of a single converter 10 — and generally without optical filters.
  • a more specific and more sophisticated implementation that better conserves optical-signal power uses a diffraction grating 503 (Fig. 26) instead of filters, to separate the wavebands of interest.
  • By capturing images in a single streak tube concurrently, using any of the systems under discussion (Figs. 24 through 26), the invention enables the interpretive parts 34 of the system to develop difference signals, or ratio signals, as between the plural spectral regions. In this way the invention becomes a system capable of, for example, differential-intensity or differential-absorbance lidar spectroscopy as between, e.g., the far-IR and the UV — or other such combinations of spectral regions (an illustrative band-differencing sketch appears after this list).
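The computerized remapping associated with the mapper 212 and remapper 215 above can be illustrated with a short sketch. It is not part of the patent disclosure: the function names (raster_to_slit, slit_to_raster, make_remapper) and the choice of a simple row-major raster-to-slit ordering are assumptions made only for illustration. The underlying point is that any fixed mapper is a permutation of pixel indices, so the remapper is just the inverse permutation.

    import numpy as np

    def raster_to_slit(image):
        # Mimic the mapper: lay successive raster lines of a 2-D image
        # end-to-end, forming one slit-shaped 1-D record (row-major order).
        return image.reshape(-1)

    def slit_to_raster(record, n_rows, n_cols):
        # Computerized remapper for the simple ordering above: resort the
        # slit-ordered samples back into a 2-D image with normal adjacency.
        return record.reshape(n_rows, n_cols)

    def make_remapper(mapper_index):
        # General case: a fixed mapper is a permutation of pixel indices,
        # so the remapper applies the inverse permutation.
        inverse = np.argsort(mapper_index)
        return lambda scrambled: scrambled[inverse]

    # Round-trip checks on a synthetic 8-row by 32-column frame.
    frame = np.arange(8 * 32).reshape(8, 32)
    assert np.array_equal(slit_to_raster(raster_to_slit(frame), 8, 32), frame)

    perm = np.random.permutation(frame.size)
    remap = make_remapper(perm)
    assert np.array_equal(remap(frame.reshape(-1)[perm]), frame.reshape(-1))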
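The time-shared operation with the two lasers 409a, 409b can likewise be sketched in outline. The callables fire_pulse and read_streak_frame, the band labels, the pulse count, and the per-band gain values are placeholders assumed for illustration; they stand in for the laser triggering, the streak-tube and camera readout, and the synchronous sensitivity adjustment described above.

    import numpy as np

    BAND_GAIN = {"band_a": 1.0, "band_b": 2.5}  # assumed per-band sensitivity trim

    def acquire_time_shared(fire_pulse, read_streak_frame, n_pulses):
        # Alternate the two wavebands pulse by pulse; one receiver chain
        # serves both bands, its gain trimmed synchronously with the band.
        stacks = {"band_a": [], "band_b": []}
        for i in range(n_pulses):
            band = "band_a" if i % 2 == 0 else "band_b"
            fire_pulse(band)                               # laser 409a or 409b
            frame = read_streak_frame() * BAND_GAIN[band]  # synchronous gain trim
            stacks[band].append(frame)
        return {band: np.stack(frames) for band, frames in stacks.items()}

    # Dummy usage with stand-in hardware callables.
    result = acquire_time_shared(lambda band: None,
                                 lambda: np.zeros((64, 64)),
                                 n_pulses=10)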
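Finally, the difference and ratio signals formed by the interpretive parts 34 might be computed per pixel roughly as below. The small epsilon guard and the log-ratio (differential-absorbance) form are illustrative choices rather than anything prescribed by the patent, and the last lines show one assumed way of splitting a dual-slit (spatially shared) streak frame into its two band images before differencing.

    import numpy as np

    def band_difference(img_a, img_b):
        # Differential-intensity signal between two spectral regions.
        return img_a - img_b

    def band_ratio(img_a, img_b, eps=1e-9):
        # Ratio signal; eps guards against division by zero in dark pixels.
        return img_a / (img_b + eps)

    def differential_absorbance(img_on, img_off, eps=1e-9):
        # DIAL-style log-ratio, useful when one band sits on an absorption
        # feature of a target gas and the other sits just off it.
        return -np.log((img_on + eps) / (img_off + eps))

    # Assumed layout: the two slit regions occupy the two halves of one frame.
    streak_frame = np.random.rand(512, 512)
    img_a, img_b = np.array_split(streak_frame, 2, axis=1)
    contrast = band_difference(img_a, img_b)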

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

Flash- and push-broom-sensor lidar operation outside the visible spectrum, ideally in the near infrared but also in the IR and UV, is made possible by inserting, upstream of the front end of an otherwise conventional lidar receiver, a device that receives light scattered from objects and in response forms corresponding light at a wavelength different from that of the scattered light. Implementations using arrays of discrete COTS components, ideally PIN diodes and VCSELs with semi-custom amplifiers between them, are presented, as is use of a known monolithic converter. Differential and ratioing multispectral measurements, notably of UV data, are enabled by spatial sharing (e.g. multiple slits) or by time sharing.
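Purely as an illustration of the per-channel signal path summarized in the abstract (PIN diode, intermediate amplifier, VCSEL emitter), one channel of a discrete wavelength converter can be modeled as a chain of three gains. Every parameter name and numerical value below is an assumption chosen only to make the sketch runnable; none is taken from the patent.

    def channel_output_power(p_in_watts,
                             responsivity_a_per_w=0.9,   # assumed PIN responsivity at the received band
                             transimpedance_ohms=5e3,    # assumed intermediate-amplifier gain
                             drive_a_per_v=1e-2,         # assumed VCSEL-driver transconductance
                             vcsel_threshold_a=1e-3,     # assumed emitter threshold current
                             vcsel_slope_w_per_a=0.3):   # assumed emitter slope efficiency
        # Detector -> amplifier -> emitter: light received in one waveband is
        # detected, amplified, and re-emitted at the emitter's own (different)
        # wavelength, which a conventional streak-tube photocathode accepts.
        i_photo = responsivity_a_per_w * p_in_watts
        v_amp = transimpedance_ohms * i_photo
        i_drive = drive_a_per_v * v_amp + vcsel_threshold_a  # biased at threshold
        return vcsel_slope_w_per_a * max(i_drive - vcsel_threshold_a, 0.0)

    # Example: roughly one microwatt of scattered light on a single channel.
    print(channel_output_power(1e-6))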
EP04702184A 2003-01-15 2004-01-14 Procede et systeme lidar proche infrarouge, inrarouge et ultraviolet Withdrawn EP1590683A1 (fr)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US44030303P 2003-01-15 2003-01-15
US440303P 2003-01-15
PCT/US2004/000949 WO2004065984A1 (fr) 2003-01-15 2004-01-14 Procede et systeme lidar proche infrarouge, inrarouge et ultraviolet

Publications (1)

Publication Number Publication Date
EP1590683A1 true EP1590683A1 (fr) 2005-11-02

Family

ID=32771802

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04702184A Withdrawn EP1590683A1 (fr) 2003-01-15 2004-01-14 Procede et systeme lidar proche infrarouge, inrarouge et ultraviolet

Country Status (4)

Country Link
EP (1) EP1590683A1 (fr)
AU (1) AU2004206520A1 (fr)
CA (1) CA2546612A1 (fr)
WO (1) WO2004065984A1 (fr)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102373926B1 (ko) * 2016-02-05 2022-03-14 Samsung Electronics Co., Ltd. Moving object and method for recognizing position of moving object
US10274599B2 (en) 2016-06-01 2019-04-30 Toyota Motor Engineering & Manufacturing North America, Inc. LIDAR systems with expanded fields of view on a planar substrate
CN110506220B (zh) 2016-12-30 2023-09-15 Innovusion Intelligence USA Co., Ltd. Multi-wavelength lidar design
CN111279220A (zh) * 2017-08-22 2020-06-12 Li Ping Dual-axis resonant beam steering mirror system and method for lidar
WO2019164961A1 (fr) 2018-02-21 2019-08-29 Innovusion Ireland Limited Systèmes lidar à couplage de fibres optiques
CN111366946B (zh) * 2018-12-26 2023-10-13 Baoding Tianhe Electronic Technology Co., Ltd. Prison passage protection method and device
CN112697711B (zh) * 2020-12-14 2023-09-19 Hefei Institutes of Physical Science, Chinese Academy of Sciences Snapshot-type remote measurement system for mobile-source exhaust gas
JP7178059B1 (ja) * 2021-05-31 2022-11-25 Nippon Paint Corporate Solutions Co., Ltd. Coating composition and coating film

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5467122A (en) * 1991-10-21 1995-11-14 Arete Associates Underwater imaging in real time, using substantially direct depth-to-display-height lidar streak mapping
FR2740227B1 (fr) * 1995-10-20 1997-11-07 Thomson Csf Laser tomoscopic detection device
GB2380344B (en) * 2000-04-26 2005-04-06 Arete Associates Very fast time resolved imaging in multiparameter measurement space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2004065984A1 *

Also Published As

Publication number Publication date
WO2004065984A1 (fr) 2004-08-05
AU2004206520A1 (en) 2004-08-05
CA2546612A1 (fr) 2004-08-05

Similar Documents

Publication Publication Date Title
US7652752B2 (en) Ultraviolet, infrared, and near-infrared lidar system and method
Williams Jr Optimization of eyesafe avalanche photodiode lidar for automobile safety and autonomous navigation systems
US11112503B2 (en) Methods and apparatus for three-dimensional (3D) imaging
US7830442B2 (en) Compact economical lidar system
McManamon et al. Comparison of flash lidar detector options
Kostamovaara et al. On laser ranging based on high-speed/energy laser diode pulses and single-photon detection techniques
US6882409B1 (en) Multi-spectral LADAR
CN110178044B (zh) 检测装置、检测系统及检测装置的制作方法
CN111954827B (zh) 利用波长转换的lidar测量系统
US11791604B2 (en) Detector system having type of laser discrimination
Pasquinelli et al. Single-photon detectors modeling and selection criteria for high-background LiDAR
Jiang et al. InGaAsP/InP geiger-mode APD-based LiDAR
Hao et al. Development of pulsed‐laser three‐dimensional imaging flash lidar using APD arrays
WO2004065984A1 (fr) Procede et systeme lidar proche infrarouge, inrarouge et ultraviolet
EP3797317B1 (fr) Lidar infrarouge à courte longueur d'onde
Halmos et al. 3D flash ladar at Raytheon
US11885646B2 (en) Programmable active pixel test injection
US11802945B2 (en) Photonic ROIC having safety features
Richmond et al. Laser radar focal plane array for three-dimensional imaging
US11418006B1 (en) Integrated device for optical time-of-flight measurement
Browder et al. Three-dimensional imaging sensors program
US11460551B2 (en) Virtual array method for 3D robotic vision
US11600654B2 (en) Detector array yield recovery
Turner A Wide Area Bipolar Cascade Resonant Cavity Light Emitting Diode for a Hybrid Range-Intensity

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20050815

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL LT LV MK

RIN1 Information on inventor provided before grant (corrected)

Inventor name: BYBEE-DRISCOLL, SHANNON

Inventor name: GELBART, ASHER

Inventor name: MILLER, DAVID

Inventor name: GRIFFIS, ANDREW, J.

Inventor name: RYDER, WILLIAM, L.

Inventor name: GUGLER, DOUGLAS

Inventor name: SITTER, DAVID, N.

Inventor name: FETZER, GREGORY, J.

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20090309