AU2004206520A1 - Ultraviolet, infrared, and near-infrared lidar system and method - Google Patents

Ultraviolet, infrared, and near-infrared lidar system and method

Info

Publication number
AU2004206520A1
AU2004206520A1, AU2004206520A
Authority
AU
Australia
Prior art keywords
light
objects
receiving
streak
plural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2004206520A
Inventor
Shannon Bybee-Driscoll
Gregory J. Fetzer
Asher Gelbart
Andrew J. Griffis
Douglas Gugler
David Miller
William L. Ryder
David N. Sitter
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arete Associates Inc
Original Assignee
Arete Associates Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arete Associates Inc filed Critical Arete Associates Inc
Publication of AU2004206520A1 publication Critical patent/AU2004206520A1/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/483Details of pulse systems
    • G01S7/486Receivers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481Constructional features, e.g. arrangements of optical elements
    • G01S7/4816Constructional features, e.g. arrangements of optical elements of receivers alone

Description

WO2004/065984 PCT/US2004/000949 Ultraviolet, infrared, and near-infrared lidar system and method 5 Wholly incorporated by reference herein is the present in ventors' coowned U. S. provisional patent application 60/440,303, whose priority benefit is hereby asserted. 10 RELATED DOCUMENTS: Closely related documents are other, coowned U. S. utility-pat ent documents - also incorporated by reference in their entirety. Those documents are: Bo.vker et al., patents 6,400,396 (medical 15 scale) and 5,467,122 (ocean scale), and serial 09/125,259 (wide range of scales); McLean at al., serial 09/390,48 (shallow angle); Gleckler et al., serial 10/258,917 (plural slit); and Griffis at al., serial 10/426,907 (without streak tube). Other patents and publications of interest are introduced below. 20 FIELD OF THE INVENTION: This invention relates generally to systems and methods for 25 automatically detecting light reflected or scattered from an object, and determining distance to the object. Also found, in preferred applications of the invention, are other properties of the detected object - such as for example reflectance, velocity, and three dimensional relationships among plural detected objects. 30 BACKGROUND: a) Three-dimensional imaging 35 Some systems and methods for accomplishing these goals are con ventional in the field of so-called "lidar", or "light detection and WO 2004/065984 PCT/US2004/000949 ranging" - analogous to the better-known "radar" that uses the ra dio portions of the electromagnetic spectrum. Because most lidar systems use pulsed lasers as excitation, the acronym "lidar" is sometimes said to instead represent "laser illumination detection 5 and ranging". In a lidar system, a sharp pulse of light is projected toward an object, or field of objects, that is of interest. The object or objects reflect - for turbid media a more descriptive term is "scatter" - a portion of this excitation radiation back toward the 10 system, where the return radiation is time resolved. As in radar, round-trip propagation times for the radiation form a measure of the distances, or ranges, from the apparatus to the respective objects. Radar, however, simply due to the much lon ger wavelengths it employs, cannot provide the resolution available 15 with lidar. High-resolution lidar imaging provides fully three-dimensional images of far higher resolution, on one hand, and that also have distinct advantages in comparison to common two-dimensional imaging ( . photographs) on the other hand. As compared with such ordi 20 nary two-dimensional images, some of the advantages provided by the additional range information are the ability to remove clutter, to accurately discriminate decoys from objects of real interest, and to provide additional criteria for detection and classification. High-resolution three-dimensional imaging may provide volumet 25 ric pixel sizes of approximately 0.75 mrad by 0.75 mrad by 7.5 cm. Such imaging requires high bandwidth (2 GHz) lidar receivers with small instantaneous fields of view (IFOV) and many pixels in the tw^o-dimensional imaging directions. iey to these capabilities is effective and very fine time-res 30 olution of the return optical signals - ordinarily by a streak tube, although modernly very f-st electronics can be substituted in relatively less-demanding applications. Such applications particu larly include measurements at the scale of ocean volumes, in which temporal resolution may be in meters rather than centimeters. 
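As a rough consistency check on the figures quoted above (a sketch based on the standard time-of-flight relation, with rounded constants that are not taken from this document), the 7.5 cm range dimension of the voxel corresponds to a round-trip timing resolution of about half a nanosecond, which is what drives the roughly 2 GHz receiver-bandwidth requirement:

\[
\Delta R = \frac{c\,\Delta t}{2}
\;\Rightarrow\;
\Delta t = \frac{2\,\Delta R}{c} = \frac{2 \times 0.075\ \mathrm{m}}{3\times 10^{8}\ \mathrm{m/s}} = 0.5\ \mathrm{ns},
\qquad
B \approx \frac{1}{\Delta t} \approx 2\ \mathrm{GHz}.
\]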
35 Finer work, especially including laboratory-scale measurement or ultimately medical ranging with resolution finer than a millime ter, appears to exceed current-day speed and resolution capabilities 2 WO 2004/065984 PCT/US2004/000949 of electronics and accordingly calls for a streak tube. To use such a device for three-dimensional imaging, the laser pulses must be visible or shorter-wavelength light - so that the optical return pulse 21 (Fig. 1) from the object or objects is likewise visible or 5 ultraviolet light 22. (While visible lidar excitation is hazardous because it damages the retina, shorter-wavelength excitation too is hazardous due to damage to the lens of the eye.) In either event, the optical return is made to take the form of a substantially one dimensional image (i. e. slit-shaped, extending in and out of the 10 plane of Fig. 1), or is reformatted 23 as such an image. In response to that unidimensional optical input 22, in the form of visible or UV light, a photocathode screen 24 of the streak tube 18 forms a one-dimensional electronic image 25, which is re fined by electron-imaging components 26 within the streak tube. (It 15 will be understood that some very special streak-tube photocathodes have been developed to handle wavelengths other than visible; how ever, these are not at all commercial materials, and the use of some such photocathode technologies introduces synchronization problems and other drawbacks.) 20 Depending on any image reformatting that may be performed upstream of the streak tube 18, position along these unidimensional optical and electronic images 22, 25 may either represent location along a simple thin image slice of the object field, or represent position in a very complex composite, positionally encoded version 25 of a two-dimensional scene. This will be explained shortly. Within the streak tube, a very rapidly varying electrical de flection voltage 28, applied across deflection electrodes 27, sweeps 29 the one-dimensional electronic image 25 quickly down a phosphor coated surface 31, forming a two-diimensional risible image on the 30 phosphor screen. The sweep direction 29 then represents time - and accordingly distance, to each bachscattering object - while the or thogonal direction on the screen (again, extending in and out of the plane of Fig. 1).represents position along the input optical image, whether a simple image slice or an encoded scene. 35 The patents mentioned above introduce considerable detail as to behavior and use of a streak tube. They also may represent the 3 WO 2004/065984 PCT/US2004/000949 highest development of a form of lidar imaging familiarly known as "pushbroom" - because data are accumulated a little at a time, in thin strips transverse to a direction of motion. Relative motion between the apparatus and the object field is 5 provided, as for instance by operating the apparatus in an aircraft that makes regular advance over a volume of seawater, while laser beam pulses are projected toward the water. The pulsed laser beam is formed into the shape of a thin fan - the thin dimension of the fan-shaped beam being oriented along the "track" (direction) of this 1o relative motion. In some laboratory-scale systems it is more convenient to instead scan an object or object field past a stationary lidar transceiver. 
Hence in either instance the broadly diverging wide dimension of the fan beam, often called the "cross-track" dimension, 15 is at right angles to the direction of motion: this is the above mentioned case of direct physical correspondence between the unidi mensional optical or electronic image and a real slice of an object image. The Gleckler patent mentioned above, however, shows that two or more such one-dimensional images can be processed simultaneously 20 -- yielding a corresponding number of time-resolved pulse returns. Each laser pulse thus generates at the receiver, after time resolution of the return pulse, at least one two-dimensional snap shot data set representing range (time) vs. azimuth (cross-track detail) for the instantaneous relative position of the system and 25 object field. Successive pulses, projected and captured during the continuing relative motion, provide many further data frames to com plete a third dimension of the volumetric image. The resulting three-dimensional image can be visualized simply by directly observing the streak-tube phosphor screen, or by captur 30 ing the screen display with a CCD or other camera at the frame rate (one frame per initiating laser pulse) for later viewing. Another option is to analyse the captured data, . _. in a computer, by any of myriad application-appropriate algorithms. 35 Alternative to pushbroom imaging is so-called "flash" lidar, represented by patents Re. 33,865 and 5,412,372 of Knight and Alfano respectively. Here the excitation pulse is ideally formed into a 4 WO2004/065984 PCT/US2004/000949 substantially rectangular beam to illuminate the entire object or object field at once. The resulting backscatter pulse, correspondingly, is all time resolved concurrently - typically requiring, at least for a streak 5 tube, temporary mapping of the two-dimensional return into a one-di mensional (i. e. line) image that the tube can sweep. Such mapping, in the cited patents, is performed by a custom fiber-optic prism., This sort of mapping may be done in a very great variety of ways. For example successive raster-equivalent optical-image slices io can be placed end-to-end along the input photocathode, or individual pixels can be subjected to a completely arbitrary reassignment to positions along the cathode. Any mapping intermediate between these extremes is also possible. After time-resolution if desired the data can be remapped to s15 recover a multiplicity of original two-dimensional image-data frames - each now having its family of ranged variants. If preferred the full three-dimensional data set can be unfolded in some other way for analysis as desired. 20 b) The wavelength limitation Streak-tube imaging lidar is thus a proven technology, demon strated in both pushbroom and flash configurations." 2 Unfortunate 25 ly, however, it is heretofore usable only in the visible-ultraviolet portion of the electromagnetic spectrum, whereas several important applications favor operation in longer-wavelength spectral regions. A critical group of applications relates to so-called "'eye safe" requirements for many operating environments. The human eye so is extremely sensitive to visible radiation. Severe retinal damage can occur if someone is exposed to radiation transmitted by a con ventional streak-tube lidar system. In the near-infrared (NIR), by comparison, there is far less human sensitivity and likewise less risk. 
Maximum permissible exposure for NIR radiation at a wavelength of 1.54 μm is typically three orders of magnitude greater than at 532 nm. The main reason is that the lens of the eye does not focus NIR radiation onto the retina. Consequently, in applications where humans might be exposed to the transmitted light, it is desirable to operate the lidar at the longer wavelength. In addition, radiation at 1.54 μm is invisible to the human eye, yielding the advantage of inconspicuous operation - which is desirable in many applications.

Limitation to the visible/UV is in a sense somewhat artificial, arising as it does merely from lack of a commercial streak tube with a photocathode sensitive to nonvisible radiation - even though NIR-sensitive photocathode materials exist.
The vendor neither produces streak tubes nor will provide the photocathode materials to streak-tube vendors. No streak-tube vendor is currently offering high-quantum-efficiency NIR streak tubes.

The near-infrared, however, is far from the only spectral region in which lidar operation would be very advantageous. The more remote infrared portion of the electromagnetic spectrum (3 to 12 μm) overlaps strong absorption features of many molecules. As a result, wavelengths in this region are particularly attractive for monitoring gaseous contaminant concentrations such as those encountered in atmospheric pollution or industrial process control. CO2 lasers operating at 9 to 11 μm can produce high power and have been deployed in space for a number of applications. As will appear from a later section of this document, the present invention is well suited for use with CO2-laser-based imaging lidar systems.

Moreover, in other fields of optical measurement and analysis it is possible to make differential or ratio measurements - for example, differential absorption spectroscopy, and other analogous plural- or multispectral investigations. Heretofore this has not been practical in the lidar field, even for measurements comparing and contrasting the visible and ultraviolet.

c) Other technology not heretofore associated with lidar

United States Patent 6,349,016 of Larry Coldren is representative of advanced sophistication in a field previously related only to optical communications, optical switching and the like. To the best of the knowledge of the present inventors, that field has never previously been connected with lidar operations or any other form of three-dimensional imaging. Tabulated below is other related work of Costello et al.
and Francis et al., as well as related commercial product literature.
8 '9 These materials too are essentially representative of modern advan ces in optical switching and communications, unconnected with lidar. 10 d) Conclusion As can now be seen, the related art fails to resolve the pre viously described problems of lidar unavailability for operation outside the visible wavelength region. The efforts outlined above, 15 although praiseworthy, leave room for considerable refinement. SUMMARY OF THE DISCLOSURE: 20 The present invention offers just such refinement. The inven tion has major facets or aspects, some of which can be used inde pendently - although, to optimize enjoyment of their advantages, certain of these aspects or facets are best practiced (and most 25 preferably practiced) in conjunction together. In preferred embodiments of its first major independent facet or aspect, the invention is apparatus for detecting objects and de termining their distance, to form a two-dimensional or three-dimen sional image. The apparatus includes some means for receiving light 30 scattered from the objects and in response forming a corresponding light of a different wavelength from the scattered light. For pur poses of breadth and generality in discussing the invention, these means will be called simply the "receiving-and-forming means". In less-formal portions of this document, the receiving-and 35 forming means will instead be called a "wavelength converter" (al though the term "converter" may be semantically imprecise, as dis cussed later in this document). Hereinafter this phrase will be 7 WO 2004/065984 PCT/US2004/000949 abbreviated "kC", using the lower-case Greek letter ). (lambda) that is the traditional symbol for wavelength. The first aspect of the invention also includes some means for time-resolving the corresponding light to determine respective dis 5 tances of the objects. Again for generality and breadth these means will be called the "resolving means". The foregoing may represent a description or definition of the first aspect or facet of the invention in its broadest or most gen 10 eral form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. In particular, inserting the receiving-and-forming means in ad vance of the time-resolving means can provide to the latter (a. q. a streak tube) - even if the scattered light is not visible light 15 substantially the same visible optical signal that would be obtained by receiving visible scattered light directly from the objects. The receiving-and-forming means thereby enable the external portions of the overall system to operate in almost any wavelength region; and can free the system from wavelength limitations of the time-resolv 20 ing means. In this way the heretofore-intractable problems dis cussed above are substantially eliminated. Although the first major aspect of the invention thus signifi cantly advances the art, nevertheless to optimize enjoyment of its 25 benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the apparatus is further for use in determining reflec tance of the objects; and the receiving-and-forming means include some means for measuring and recording gray-level information in the 30so received and formed light. Another basic preference is that the receiving-and-forming means include a first, optointermediate stage that receives the scattered light and in response forms a corresponding intermediate signal. 
Accordingly the receiving-and-forming means also include a 35 second, intermedioptical stage that receives the intermediate signal and in response forms the corresponding light. 8 WO 2004/065984 PCT/US2004/000949 By the coined phrase "optointermediate stage" is here meant a subsystem that receives optical signals (the lidar return beam, in particular) and generates a corresponding signal in some intermedi ate domain -- which may be electronic (in the present day, possibly 5 the only practical such domain), or optical, or quantum-based, or a signal formed in yet some other medium. The phrase "intermediopti cal stage" analogously describes a converse subsystem that receives and operates on that intermediate signal to generate the correspond ing optical output. 10 If this basic preference of employing two stages that communi cate through a common intermediate signal is observed, then two al ternative subpreferences arise: preferably the intermediate signal includes either an optical signal or an electronic signal. Other subpreferences are that the time-resolving means include a streak 15 camera device; and that the system further include a light source, and some means for projecting pulses of light from the source toward the objects for scattering back toward the receiving-and-forming means. If the system complies with the latter subpreference (inclusion 20 of a source, with projecting means), then two alternative preferen ces are that the streak-camera device be incorporated into a repeti tively pulsed pushbroom system, or into a flash lidar system. In the pushbroom case it is still further preferred that the system al so include an aircraft or other vehicle transporting the receiving 25 and-forming means, and the streak lidar device as well, relative to the objects. (An alternative preference is the converse-- i. e., that the apparatus be stationary and the scene made to move. In principle the pushbroom mode simply involves relative motion between the twoo.) Rnother preference is that the streah-camesa device in 30 clude a multislit streak tube. Also in the case of the basic two-staEge preference, it is fur ther preferred that the intermediate signal include an electronic signal, the first stage include an optoelectronic stage, and the second stage include an electrooptical stage. In this event it is 35 also preferred that the optoelectronic stage include light-sensitive semiconductor devices - and these devices in turn include photodi 9 WO 2004/065984 PCT/US2004/000949 odes, e _ PIN ("P-intrinsic") diodes, or alternatively avalanche photodiodes. If this is so, then yet another nested preference is that the electrooptical stage include vertical-cavity surface-emitting la 5 sers, or light-emitting diodes, connected to receive the electronic signal from the PIN diodes. An alternative is that the electroopti cal stage include edge-emitting lasers, or quantum-dot lasers, or microelectromechanical systems - any of these devices being connec ted to receive the electronic signal from the PIN or other diodes. 10 Although these output-stage preferences have been presented as nes ted subpreferences to the use of PIN or avalanche diodes, they are also preferred even if the input stage uses some other kind of pho tosensitive device. Another basic preference is that the apparatus further include 15 utilization means responsive to the time-resolving means. "Utiliza tion means" are any means that utilize the resulting output informa tion from the time-resolving means. 
Preferably the utilization means are one or more of: 20 W interpretive means for characterizing the objects based on the time-resolved light; " a monitor that displays an image of the objects for viewing by a person at the apparatus; 25 " a monitor at a base station for reviewing the objects or rela ted data received from the resolving by means by telemetry; a data-processing dea-ice for analyzing the objects or images of 30 them; automatically operated interpretive modules that determine whether particular conditions are met; 35 announcement-broadcasting means or other automatic physical apparatus connected to operate in response to the time-resolv ing means; 10 WO 2004/065984 PCT/US2004/000949 " means for enabling or denying access to secure facilities through operation of doors and gates, or access to computer systems or to financial services including but not limited to credit or banking; and 5 " means for determination of hostile conditions, and resulting security measures including but not limited to automatically deployed area-sealing bulkheads. 10 Another basic preference is that the receiving-and-forming means include discrete arrays of light-sensing and light-producing components respectively. In this event it is further preferred that the receiving and forming means also include a discrete array of circuitry for controlling the forming means in response to the re 15 ceiving means. An alternative to these last-recited preferences is that the receiving and forming means include at least one monolithic hybrid of light-sensing and light-producing components. Here it is corre spondingly preferred that the monolithic hybrid further include cir 20 cuitry for controlling the forming means in response to the receiv ing means. In preferred embodiments of its second major independent facet 25 or aspect, the invention is a method for detecting and ranging ob jects. The method includes the step of receiving light scattered from the objects. The method also includes the step of, in response to the scat tered light, forming a corresponding light of a different wsaYelength 30 from the scattered light. In addition the method includes the step of time-resolving the corresponding light to detentmine respective distances of the objects. The foregoing may represent a description or definition of the second aspect or facet of the invention in its broadest or most gen 35 eral form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. 11 WO2004/065984 PCT/US2004/000949 In particular this second, method facet of the invention close ly parallels the first, apparatus aspect discussed above. It con fers the same advantages over prior art, essentially transcending the wavelength limitations of current commercial streak tubes and 5 thereby enabling lidar measurements to be made in the eye-safe near infrared, for applications involving the likelihood of bystanders; or in the infrared or ultraviolet for the various other applications mentioned earlier. Although the second major aspect of the invention thus signifi 10 cantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the method is further for use in determining reflectance of the objects; and the receiving and forming steps both preserve at s15 least some gray-level (i. e. relative intensity) information in the scattered light. 
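To make the two-stage "receiving-and-forming" idea concrete, the following minimal Python sketch models one channel: scattered-light power is converted to a photocurrent, amplified into an intermediate electronic signal, and then re-emitted as visible light whose intensity tracks the input, so that gray-level information is preserved. All function names and component values here are illustrative assumptions, not figures specified in this document.

```python
# Minimal one-channel model of the "receiving-and-forming" means (wavelength converter).
# Component names and values are illustrative assumptions, not figures from this document.

def receive_and_form(p_in_watts,
                     responsivity_a_per_w=0.9,    # detector responsivity (A/W), assumed
                     transimpedance_v_per_a=1e4,  # intermediate electronic gain (V/A), assumed
                     emitter_w_per_v=1e-3):       # electro-optic slope of the emitter (W/V), assumed
    """Map scattered light at one wavelength to corresponding light at another.

    Stage 1 (optointermediate): light -> photocurrent -> intermediate voltage.
    Stage 2 (intermedioptical): voltage -> re-emitted light at the new wavelength.
    The chain is linear, so relative intensity (gray level) is preserved.
    """
    photocurrent_a = responsivity_a_per_w * p_in_watts
    intermediate_v = transimpedance_v_per_a * photocurrent_a
    return emitter_w_per_v * intermediate_v


# Two returns differing by 10x in scattered power keep the same 10x ratio after
# conversion, which is the gray-level preservation discussed above.
weak_out = receive_and_form(1e-9)
strong_out = receive_and_form(1e-8)
assert abs(strong_out / weak_out - 10.0) < 1e-9
```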
Most of the preferences introduced above with regard to the first aspect of the invention are equally applicable to the second aspect now under discussion. Another basic preference (also applying to the first two facets 20 of the invention) is that the receiving step receive return light in plural wavelength bands, and the forming step form the corresponding light in substantially one common band. If this plural-band prefer ence is observed, it is further preferred that the bands include at least one UV wavelength; and then a still further nested preference 25 is that they include at least one NIR wavelength. (These choices exhibit distinct abilities of the invention; in practice, spectral regions are chosen based on physics to extract unique object data.) Two other alternative basic preferences are that the receiving step include receiving the plural wavelength bands at (1) plural o30 slits, respectively, of a plural-slit streak camera, and (2) plural times, respectively. In the first of these cases it is further pre ferred that the method also include the step of, before the receiv ing step, transmitting light in said plural wavelength bands, sub stantially simultaneously, toward the objects. In the second of the 35 just-stated two cases it is instead further preferred that the re ceiving step include transmitting the plural wavelength bands at plural times, respectively. 12 WO2004/065984 PCT/US2004/000949 Yet another basic preference is that the method also include the step of deriving plural signals from the received light in the plural wavelength bands, respectively. Accordingly the method pref erably also includes the step of finding differences or ratios be 5 tween signals received in the plural wavelength bands. In preferred embodiments of its third major facet or aspect, the invention is apparatus for detecting objects and determining O10 their distance and reflectance, to form a two-dimensional or three dimensional image; the apparatus includes a light source; and means for projecting pulses of light from the source toward the objects for scattering back toward the receiving-and-forming means; 15 means for receiving light scattered from the objects and in response forming a corresponding light of a different wavelength from the scattered light, preserving gray-level information in the received and corresponding light; and means, including a streak camera, for time-resolving the cor 20 responding light to determine respective distances and reflectances of the objects; wherein the receiving-and-forming means include: a first, optoelectronic stage, including an array of light sensitive PIN diodes, that receives the scattered light and in re 25 sponse forms a corresponding electronic signal; a second, electrooptical stage, including an array of vertical cavity surface-emitting lasers connected to receive the electronic signal from the PIN diodes, that receives the electronic signal and in response forms the corresponding light; and 30 an electronic circuit array connecting the electronic signal from the first stage to the second stage, and modifying the signal to operate the second stage. The foregoing may represent a description or definition of the 35 third aspect or facet of the invention in a broad and general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. 
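The plural-band preference described above lends itself to simple post-processing: once two co-registered, time-resolved returns have been captured in different wavelength bands, a difference or ratio image can be formed pixel by pixel. The sketch below assumes two arrays of identical shape (range bin by cross-track pixel); the array names, synthetic values, and the small regularizing constant are illustrative assumptions rather than anything prescribed here.

```python
import numpy as np

def band_difference_and_ratio(band_a, band_b, eps=1e-12):
    """Per-pixel difference and ratio of two co-registered, time-resolved returns
    captured in two wavelength bands.

    band_a, band_b: arrays of shape (n_range_bins, n_cross_track_pixels).
    eps: small constant to avoid division by zero (illustrative choice).
    """
    band_a = np.asarray(band_a, dtype=float)
    band_b = np.asarray(band_b, dtype=float)
    difference = band_a - band_b
    ratio = band_a / (band_b + eps)
    return difference, ratio

# Example: synthetic 4-bin x 3-pixel returns in a "UV" band and an "NIR" band,
# where each cross-track pixel has a different spectral contrast.
uv  = np.array([[1.0, 2.0, 0.5],
                [0.9, 1.8, 0.4],
                [0.1, 0.2, 0.1],
                [0.0, 0.1, 0.0]])
nir = uv * np.array([0.5, 1.0, 2.0])
diff, rat = band_difference_and_ratio(uv, nir)
```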
13 WO 2004/065984 PCT/US2004/000949 In particular, though not wholly independent of the first aspect presented earlier, this facet of the invention aggregates several preferences that may be particularly synergistic. Without in the least denigrating the individual aspects and preferences dis 5 cussed above, the aggregated system of the third aspect is believed to be especially advantageous in short-term manufacturability and overall practicality. Although the third major aspect of the invention thus signifi Io cantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the streak lidar device is incorporated into a repeti tively pulsed pushbroom system. In this case it is further prefera 15 ble to include in the apparatus an aircraft or other vehicle trans porting the receiving-and-forming means and the streak lidar device relative to the objects - and also to include utilization means responsive to the time-resolving means. 20 It is to be understood that the foregoing enumerations of pref erences for the three aspects of the invention are intended to be representative, not exhaustive. Accordingly many preferred forms of the invention set forth in the following detailed description or 25 claims are within the scope of the present invention though not introduced above. All of the foregoing operational principles and advantages of the invention will be more fully appreciated upon consideration of the following detailed description, with reference to the appended 30 drawings, of which: BRIEF DESCRIPTION OF THE DRAWINGS 35 Fig. 1 is a block diagram of a streak tube in operation, shown together with a CCD camera and an output-data connection - symbo 14 WO2004/065984 PCT/US2004/000949 lizing processing and utilization means - that all together form a streak-tube imaging lidar ("STIL") camera; Fig. 2 is a schematic diagram of a multipixel wavelength con verter (")C"); s Fig. 3 is a typical light-output vs. drive-current ("L-I") characteristic of a VCSEL; Fig. 4 is a single-channel kC; Fig. 5 is a XC block diagram used for the purpose of estimating the conversion efficiency of the system shown in Fig. 4; 1o Fig. 6 is an optical-bench layout used to validate the perfor mance of the XC; Fig. 7 is a group of oscilloscope traces corresponding to vari ous signals in the AC; Fig. 8 is a plot of the receiver and VCSEL output-waveform s15 pulse widths as a function of the drive pulse width (the symbols representing the data points collected during the experiment, and the solid lines representing least-squares-error linear fits to the data); Fig. 9 is a lidar image of pulse return from mirror 1 (Fig. 6), 20 with time on the vertical axis - increasing upward; Fig. 10 is a like image of pulse return from mirror 2; Fig. 11 is an oscilloscope screen capture showing the laser drive pulse (top) with the corresponding pulse returns from mirror I (left pulse, below) and mirror 2 (right); 25 Fig. 12 is a like oscilloscope screen capture showing the nois( created by the signal generator - producing the second, smaller pulse seen in the lidar imagery, the bottom line being pulse return from mirror 1, and the central line that from mirror 2; Fig. 
13 is a diagram, very highly schematic, showing one of 30 many prospective uses of preferred embodiments of the invention particularly including an aircraft containing and translating the apparatus in the so-called "pushbroom" pulsed mode, over objects to be imaged in eye-safe mode; Fig. 14 is a like diagram for the so-called "flash" mode; 35 Fig. 15 is a spectral-response curve for InGaAs; Fig. 16 is a diagram of a multichannel test setup for imaging box in front of a wall; 15 WO 2004/065984 PCT/US2004/000949 Fig. 17 is a multichannel streak image of the Fig. 16 wall alone, i. e. without the box, and showing multiple returns from a twelve-pixel prototype system; Fig. 18 is a like image captured with the box present, and two 5 feet from the wall (higher reflectivity of the cardboard is indi cated here by increased brightness of the return pulses); Fig. 19 is a like image with the box four feet from the wall; Fig. 20 is an image very generally like Fig. 18 but with a translucent object (window screen) in front of the wall, substituted 10 for the box - and with the background electronically subtracted; Fig. 21 is a mesh plot of streak return from a screen in front of a wall - showing both the strong return from the solid wall and the weaker signal from the screen Fig. 22 is a graph of predicted AC signal-to-noise ratio for 15 two different types of detectors, namely P-intrinsic diodes and avalanche photodiodes; Fig. 23 is a graph showing predicted signal-to-noise ratio for an overall STIL receiver according to the invention, incorporating the performance of the AC (here D is the receiver collection aper 20 ture diameter, !c is the receiver transmission efficiency, and a is the atmospheric attenuation coefficient; Fig. 24 is a conceptual block diagram showing the AC used in a time-sharing plural-wavelength-band lidar system; Fig. 25 is a like diagram for a spatial-sharing (plural-slit) 25 plural-wavelength-band lidar system that uses filters to separate wavebands; and Fig. 26 is a like diagram of another spatial-sharing system that instead uses a diffraction grating. 30 DETAILED DESCRIPTION OF THE PREFERRED EMODIEMNTS 35 In preferred embodiments, the invention provides a low-cost al ternative to visible-light lidar. NIR radiation pulses can be pro 16 WO 2004/065984 PCT/US2004/000949 jected toward objects, and the returned NIR pulses 8 (Fig. 2) con verted, at the receiver, into pulses 22' at a visible wavelength. This visible radiation 22' is then directed into a streak tube 18, effectively emulating the visible light 22 (Fig. 1) entering a 5 streak-tube system conventionally. The remainder of the operation is closely analogous to generally conventional operation of the streak tube, excepting only possible effects of positional quanti zation along the slit direction - and the result is a streak-tube lidar receiver operable for NIR applications. o10 In principle a number of techniques could be used to accomplish the wavelength conversion. It is possible to use nonlinear optical techniques such as Raman scattering, stimulated Raman scattering, and harmonic frequency generation to achieve wavelength conversion. Each of these techniques, however, requires relatively complex op 15 tical schemes; and generally the conversion efficiency is strongly dependent on the intensity of the light at the converter. Such dependence is usually a prohibitive condition at a lidar receiver, where the return signals are ordinarily small (on the order of picowatts). 
In addition, it is difficult to obtain large wavelength translations, particularly in the direction of increasing energy per output photon. In some instances it is possible to provide optical gain at the receiver to improve the efficiency of the wavelength conversion. Such techniques, however, greatly increase the complexity and cost of the receiver.
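A rough order-of-magnitude illustration (an assumed worked example, not a figure from this document) of why intensity-dependent nonlinear conversion is so unattractive here: at a return power of 1 pW and a wavelength of 1.5 μm, the photon arrival rate is only

\[
E_{\text{photon}} = \frac{hc}{\lambda}
= \frac{(6.63\times10^{-34}\ \mathrm{J\,s})(3\times10^{8}\ \mathrm{m/s})}{1.5\times10^{-6}\ \mathrm{m}}
\approx 1.3\times10^{-19}\ \mathrm{J},
\qquad
\frac{10^{-12}\ \mathrm{W}}{1.3\times10^{-19}\ \mathrm{J}}
\approx 7.5\times10^{6}\ \text{photons/s},
\]

far too weak an optical field for efficient nonlinear frequency conversion without added gain.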
4 It is possible to instead accomplish a kind of wavelength "con version" electronically. Here the term conversion is somewhat more figurative than in, for eample, nonlinear optical techniques - for 30 the present technique does not change ewvlength of particular light to another wavelength of virtually the same light. Rather, in preferred embodiments particular light 8 of one wnavelength drives an intermediate optoelectrooptical stage 13-17 (Fig. 2) that generates corresponding light 22' of another wave 35 length. This approach uses detectors 13, amplifiers 14, 15, and emitters 16 already developed for other applications - particularly optical telecommunication or optical switching. 17 WO 2004/065984 PCT/US2004/000949 These established technologies, mentioned in subsection c) of the "BACKGROUND" section in this document, include development and marketing of discrete components' , - as well as monolithic (com mon-epitaxy) systems introduced in the Coldren patent. They appear 5 to have never before been associated with lidar or other three-di mensional measurements. Nevertheless they are well suited to developing an electronic wavelength "converter" (herein abbreviated "IC" as noted earlier). We have built and demonstrated just such a converter, integrated o10 into a high-bandwidth, high-quantum-efficiency streak-tube lidar receiver. The bandwidth of the system excluding the converter is into the teraherts range, or over 1 GHz considering the response limitations of the converter itself. 15 Near-infrared light 8 from the object field actuates the kC, which responds by passing visible light 22' to the streak-tube 18: (1) the visible line image 22' of the backscattered light is formed, as in the conventional system, on a slit in front of the streak-tube photocathode 24 (Fig. 1), bringing about a corresponding line image 20 25 of photoelectrons within the tube that is accelerated toward the anode end 31 where the phosphor lies. (2) The photoelectrons e- are electrostatically deflected 29 across or down the phosphor, at right angles to the linear dimen sion, forming a two-dimensional image on the phosphor - which re 25 sponds by generating a visible image that is very nearly identical geometrically. These electronic and visible images have spatial (line-image axis) and temporal (deflection/sweep axis) dimensions. Finally (3) a CCD camera 19 captures 34 the visible two-dimen sional image formed, or a human operator directly views the phosphor 30 screen. Typically, the third dimension is captured as described earlier - i. e., either in pushbroom mode (by repetitively pulsing the laser, while providing relative motion between the scene and the sensor platform' c) or in flash mode (by premapping a full two-dimen sional scene into a composite line image, and time-resolving that 35 composite image). Thus the far-reaching objective of this invention is to provide a compact, imaging lidar receiver that operates in the near-IR re 18 WO 2004/065984 PCT/US2004/000949 gion of the spectrum and provides high-resolution three-dimensional imagery. The receiver combines a patented streak-tube imaging lidar ("STIL") receiver from Areth Associates, of Sherman Oaks, Califor nia, with a complementary receiver front end that accomplishes the 5 figurative conversion of near-infrared (NIR) light to a visible wavelength. 
The result is a lidar receiver that can operate at wavelengths outside the range of the streak-tube photocathode sensitivity, yet provide imagery that is similar to that currently available with the 10 visible-wavelength STIL systems. a) An electronic IC 15 Conventionally the backscattered laser return is focused through a slit on the streak-tube faceplate prior to imaging on the photocathode 24. Placing the IC at this position and converting near-infrared light 8 to visible 22' enables the streak tube 18 to be used for NIR applications. 20 A linear array (Fig. 2) of high-bandwidth photodetectors 13 (e. q. PIN InGaAs) is placed at the image plane, i. e. slit entrance to the streak tube 18. For each photodetector element the photodetector current is amplified and converted to a voltage signal by a transimpedance am 25 plifier 14. The output of the amplifier drives a vertical cavity surface-emitting laser (VCSEL) 16 that emits in the visible region of the spectrum. The VCSEL radiation 22' is incident on the photo cathode of the streak tube, and the operation of the streak tube is as described in the earlier "BACKGROUND" section of this documient- 30 in subsection "a) of that section. Other ermbodiments of this invention allow input operation at other a-velengths, merely by replacent of the InGsAs modules with detectors 13 sensitive to the wavelengths of interest - for example InSb or PbSe for two to four microns, or HgCdTe for five to fifteen 35 microns. Also for UV operation, Si detectors are appropriate. At the output (streak-tube input) end of the XC subsystem, for economy the output generators 16 can be LEDs rather than VCSELs. 19 WO 2004/065984 PCT/US2004/000949 Such a substitution is expected to inflict no more than a loss in sharpness due to optical crosstalk at the output, and at most some degradation of temporal response (i. e., it is possible that there will be no temporal degradation at all). 5 b) Component Selection InGaAs photodetectors 13 used for telecommunications provide io high quantum efficiencies and sufficient bandwidth to serve in this application. The technology is quite mature, and large arrays of detectors are commercially available. It is possible to obtain arrays of receivers, which include the detector 13 and individually addressable transimpedance amplifiers 15 14. Therefore provision of this component is not limiting. The one-dimensional photodiode array is input to an array of transimpedance amplifiers (TIAs) 14 that drives an array of amplifi ers 15 (Figs. 4 through 6). The signal is then transmitted to the vertical cavity surface emitting laser (VCSEL) array 16 and captured 20 by a conventional streak-tube/CCD camera 18, 19 (Figs. 1, 2 and 6). Vertical-cavity surface-emitting lasers (VCSELs) have been se lected for the output stage because of their bandwidth and because they are inherently fabricated in array formats. VCSELs are unique, in comparison to other diode lasers, in that they emit from the 25 surface of the structure rather than from the edge. Consequently, they are by nature grown in arrays, and microlens optical arrays 17 can be integrated directly onto the devices - facilitating collima tion of the output. A like attachment process may be available for light-emitting diodes. 30 c) Component assembly Complex electrical contacts required to support a large array 35 of VCSELs can be formed through so-called "flip-chip bump bonding". This is detailed e._~_ by Amkor Technology, Inc. 
(at www.amkor.com/ enablingtechnologies/FlipChip/index.cfm) generally as follows. 20 WO 2004/065984 PCT/US2004/000949 It is a method of electrically connecting a die to a package carrier. The package carrier, either substrate or leadframe, then provides the connection from the die to the exterior of the package. In "standard" packaging, interconnection between a die and a 5 carrier is made using wire. The die is attached to the carrier, face-up; next a wire is bonded first to the die, then looped and bonded to the carrier. Wires are typically 1 to 5 mm in length, and 25 to 35 pm in diameter. In flip-chip packaging, the interconnection between the die and 10 carrier is instead made through a conductive so-called "bump" that is formed directly on the die surface. The bumped die is then inver ted ("flipped over", in packaging parlance) and placed face-down, with the bumps connecting to the carrier directly. A bump is typi cally 70 to 100 pm tall, and 100 to 125 pm in diameter. 15 The flip-chip connection is generally formed with one of twuo attaching media: solder or conductive adhesive. By far the more common material is the solder, in either eutectic (63% Sn, 37% Pb) or high-lead (97% Pb, 3% Sn) compositions; and solder interconnect is used in the initial flip-chip products that Amkor has brought to 20 market. The solder-bumped die is attached to a substrate by a solder reflow process, very similar to the ball-grid array (BGA) process in which solder balls are attached to a package exterior. After the die is soldered, the remaining voids between the die and the sub 25 strate - surrounding the solder bumps - are filled with a special ly engineered epoxy called "underfill". That material is particularly designed to control stress in the solder joints caused by the difference in thermal ezapansion bet-7een the silicon die and the carrier. Once cured, the underfill absorbs 30 that stress, reducing the strain on the solder bumps and thereby greatly increasing the life of the finished package. The chip-attachment and underfill steps are the elements of flip-chip interconnection. Beyond this, as the AmnTor presentation concludes, the remainder of package construction surrounding the die 35 can take many forms and can generally utilize existing manufacturing processes and package formats. 21 WO 2004/065984 PCT/US2004/000949 d) Leveraging technologies Analogous features between lidar operation and free-space com munication allow technical developments in the latter potentially 5 large market to benefit the far smaller but important lidar remote sensing market. VCSELs form a key element of today's free-space communications thrust. Physical characteristics of VCSELs are well suited to solving the XC/STIL problem. First, individual VCSEL structures are small 10 (about 3 to 10 pm) although they typically have a high beam diver gence unless the output is coupled into a microlens. Addition of a lens array 17 (Fig. 2) results in a structure with pitch between 100 and 250 Mm. A VCSEL array with 200-pm pitch can be coupled into the streak tube through a three-to-one fiber 15 taper (also at 17), providing two hundred fifty-six: cross-track pixels on a standard-size (12.3 mm) CCD chip - assuming a streak tube magnification of 0.7, which is common. Secondly, VCSEL emission wavelengths can be tailored to match the response peak of the streak-tube photocathode. 
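A quick arithmetic check of the coupling budget described above (a sketch only, with approximate rounding): two hundred fifty-six channels at the stated pitch, demagnified by the fiber taper and then by the streak tube, just fit the quoted CCD format,

\[
256 \times 200\ \mu\mathrm{m} = 51.2\ \mathrm{mm}
\;\xrightarrow{\ 3:1\ \text{taper}\ }\;
17.1\ \mathrm{mm}
\;\xrightarrow{\ \times 0.7\ \text{magnification}\ }\;
\approx 12.0\ \mathrm{mm} \;\lesssim\; 12.3\ \mathrm{mm}.
\]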
VCSEL output wavelengths between 600 and 850 nm are easily achievable - with AlGaAs/GaAs or InGaAs/GaAs materials and standard molecular-beam epitaxy techniques. Arrays of up to a thousand elements have been manufactured, and several companies offer commercially available custom arrays.

e) Design considerations and drive circuitry for VCSELs

From a plot of typical VCSEL output power as a function of input current, it is seen that VCSELs have a distinct lasing threshold 41 (Fig. 3) that must be overcome to obtain significant light output. In our λC, when a VCSEL should be quiescent a bias-current circuit 61 (Fig. 4) supplies the VCSEL with electrical current 64 held just below that threshold.

When the lidar return strikes the associated receiver element 13, drive electronics 14 provide an amplified photocurrent 65 which is added to the quiescent-state current 64. The sum, i.e. the VCSEL total drive current, then exceeds the threshold 41 (Fig. 3).

The light/current relationship 42 is very linear from the turn-on point and up toward the region of saturation 43 - accordingly providing nearly linear response to intensity of the lidar return. This characteristic is important where contrast or intensity information in the lidar imagery may provide significant discrimination capabilities. Particularly good examples are polarimetric lidar applications, in which maintaining contrast information is critical.

Our prototype incorporates VCSEL drive circuitry 14 (Fig. 4) that provides ample bandwidth and gain to allow operation of a single-pixel lidar system. The electronics required to drive the VCSEL elements are quite simple. We have built one configuration (Fig. 4) and confirmed that it achieves desired operation for twelve pixels when linearly replicated. In a production configuration this becomes in essence one unit of a custom large-scale integrated circuit that provides throughput for the two hundred fifty-six channels. Unlike near-IR streak-tube photocathode material, this technology is readily available.

f) Estimate of converter efficiency

A theoretical analysis (see below) of the converter efficiency for a particular form of our apparatus provides a foundation for use of the invention more generally. Conceptually, a minimum operational value is roughly 0.36 just to conserve the energy of the input photons.

InGaAs photodiodes have an extremely high quantum efficiency in comparison with a streak-tube photocathode. Therefore noise characteristics at the input end 13 of our λC are in our favor; and any gain that can be applied before the newly generated visible light 22' reaches the photocathode 24, without adding significant noise, is advantageous.

The invention does involve some tradeoffs. An advisable production configuration will have two hundred fifty-six channels, in a device of suitable size to couple with a streak tube; this configuration places many operational amplifiers 14, 15 in a small space. Accordingly power consumption and physical space must be balanced against the gain-bandwidth product. The solution here is a simple configuration that provides arbitrary but significant gain, and that is readily reduced to an integrated-circuit implementation.

We have modeled λC performance as described above. For purposes of such modeling, commercially available components were considered: an InGaAs PIN photodiode 13 (Fig.
5), a transimpedance ampli fier 14 and transconductance amplifier 15, and finally a VCSEL 16 emitting 22' at 630 na. The )C device is capable of high gain. Equation 1 shows that the number of photons emitted per incoming photon is high. (Fig. 5 identifies the variables in this equation.) 15 NO= RRG7, = 465 (1) NV, Al The high conversion efficiency more than compensates for the inherent energy deficit in the transition from NIR to visible wave lengths. The large transimpedance resistance dominates the conver sion efficiency. 20 While conversion efficiency is an important factor, signal-to noise ratio (SNR) is also critical. A model has been developed to compute the SNR of the wavelength converter. Noise sources include background radiation (P,), dark current (ID), detector shot noise, and the respective amplifier noise terms 25 for the transimpedance and transconductance amplifiers (oT 2 , a,). For practical purposes, Equation 2 approximates the SYR for a receiver with bandwidth B. SAR Pit R (2) 2eB(ID R(+Ph))+ iBR +cB 30 Gigaherts-bandwidth operation of the %C is imperative if the system is to be used in lidar systems with resolution requirements on the order of 25 cm or less. Using Equation 2, the SNR at the output of the AC has been computed and plotted (Fig. 22) as a func tion of bandwidth for an InGaAs PIN and avalanche photodiodes (APD), 35 and incident energy on the photodetector of 4 fJ. In the case of 24 WO 2004/065984 PCT/US2004/000949 the APD, Equation 2 was modified to reflect the gain of the device as well as the excess noise. The laser pulse width was varied inversely to the bandwidth of the XC. For comparison, the lower detectable laser energy for typi 5 cal STIL receivers is approximately 1 fJ/pixel. Thus, use of InGaAs APD's in the XC will allow SNR performance at 1.5 pm nearly identi cal to that of STIL receivers operating at 532 nm. Results of the simulation suggest that the XC will provide IO adequate SNR to be used in conjunction with the streak tube. The PIN photodiode is adequate for all but the most demanding applica tions, and the APD can be used to achieve improved SNR at high band widths or under low-return-energy conditions. The dominant noise factor in the IC is the transimpedance am 15 plifier. Note that the amplifier considered here is a commercial, off-the-shelf ("COTS") item whose design can be improved. The high transimpedance resistance and the inherent noise in the amplifier can be traded off to some extent. A simulation of the IC incorporated into a STIL receiver was 20 completed and the results plotted (Fig. 23). In this case, the peak power transmitted was held constant and the laser pulse width was varied to determine the effect on SNR. This simulation establishes that the invention can meet demanding range-resolution requirements for detection and identification. 25 The simulation assumes a transmitter at 1.5 pm and PIN photodi odes as the detectors in the XC. Various other simulation parame ters are tabulated (Fig. 23, in 7hich D is the receiver collection aperture diameter, T the receiver transmission efficiency, and a the atmospheric attenuation coefficient) - but it is important to 30 note that the bandwidth of the C was varied inverrsely with the la ser pulse width, and the streak rate of the electron beam in the tube was maintained at three CCD pixels per laser pulse. The data shown are parametrized by the TIA noise factor; reducing the power spectral density of the TIA yields significant dividends. 
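The noise budget outlined in Equation 2 can be explored numerically. The following Python sketch implements a generic PIN-plus-transimpedance-amplifier SNR model of the kind described above (shot noise from signal, background, and dark current, plus amplifier noise scaling with bandwidth); the component values, the exact placement of the gain terms, and the helper name are assumptions for illustration, not the patent's figures.

```python
import math

def converter_snr(p_signal_w,                 # optical signal power on the detector (W)
                  bandwidth_hz,               # receiver bandwidth B (Hz)
                  responsivity=0.95,          # InGaAs PIN responsivity (A/W), assumed
                  p_background_w=1e-9,        # background optical power P_b, assumed
                  dark_current_a=1e-9,        # detector dark current I_D, assumed
                  i_amp_noise_a_rthz=2e-12):  # input-referred amplifier noise (A/sqrt(Hz)), assumed
    """Generic shot-noise plus amplifier-noise SNR estimate for a PIN/TIA front end.

    An illustrative model in the spirit of Equation 2; not the patent's exact
    expression or parameter set.
    """
    q = 1.602e-19                                        # electron charge (C)
    i_signal = responsivity * p_signal_w                 # signal photocurrent
    i_total = i_signal + responsivity * p_background_w + dark_current_a
    shot_var = 2.0 * q * i_total * bandwidth_hz          # shot-noise current variance
    amp_var = (i_amp_noise_a_rthz ** 2) * bandwidth_hz   # amplifier-noise current variance
    return i_signal / math.sqrt(shot_var + amp_var)

# 4 fJ of pulse energy in a pulse whose width is matched to ~1/B gives a peak
# power of roughly (4 fJ) * B on the detector, as in the scaling described above.
for b in (1e8, 5e8, 1e9, 2e9):
    p_peak = 4e-15 * b
    print(f"B = {b:.0e} Hz  ->  SNR ~ {converter_snr(p_peak, b):.1f}")
```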
35 The simulation predicts that reducing the TIA noise power spec tral density by a factor of ten will increase the SNR by approxi 25 WO 2004/065984 PCT/US2004/000949 mately a factor of three. Again the simulation indicates that opti mization of the TIA is a key component of future work on the XC. 5 g) Single-element XC prototype preparation and operation To characterize and understand the key performance issues for the XC, we built and operated a single-pixel prototype, for one pix el in the receiver focal plane. A XC with a large number of pixels 10 is instead a highly specialized ensemble of integrated circuitry, most-preferably packaged as a hybrid multichip module. The VCSEL in our prototype is a Honeywell SV3644-001 discrete element. Technical specifications of interest for this VCSEL are: 673 nm output, 2 V threshold voltage and 2 mA threshold current. 15 It can be driven above threshold in short-pulse low-duty-cycle mode from 2 to 100 mA, leading to a 0.01 to 1 mW peah output power range. The receiver module is an InGaAs PIN from Fermionics model number FRL 1500. The VCSEL drive circuitry used (Fig. 4) is dis cussed in subsection e) above. 20 The prototype took the form of an optical-bench setup using primarily COTS components, including a 1.55 pmn laser diode 9 (Fig. 6) to generate excitation pulses 6, a signal generator 11 and diode driver 12 for powering those laser pulses, and our high-bandwidth XC 10. The test investigated the capabilities and limitations of the 25 XC, and also used that module in conjunction with a streak tube 18 and camera 19 to demonstrate relative range measurements. The optical bench setup (Fig. 6) is assembled so that a laser pulse 6 traveling from the 1.55 pm laser diode 9 through a beam splitter 5 reflects from one of two plane mirrors 1, 2 mounted on 30so the bench. A portion of the NIR reflected return light , redi rected by the splitter 5, is incident on the input detector 13 of the C 10. The resulting VCSEL output is projected through a short fiber-optic coupler 17 onto the faceplate of the streak tube 18. 35 26 WO 2004/065984 PCT/US2004/000949 h) Bandwidth Our first determination, using the apparatus, was the bandwidth of the XC itself. In this measurement the width of the pulse C3 5 (Fig. 7) from the signal-generator 12 was varied from 16 to 2.6 nsec while observing the relative pulse shapes of the output current waveform Cl from the receiver 13, 14 and the current drive waveform C2 into the VCSEL 16, using a digital oscilloscope (not shown). That set of three oscilloscope traces was recorded with the 10 signal-generator pulse width set to 2.6 nsec. The three waveforms have a similar shape, and evidence no significant temporal disper sion as the signal passes through the various stages of the C. The pulse generator is not capable of producing pulses shorter than 2.6 nsec, but these observations nevertheless demonstrate di 15 rectly that the invention achieved a bandwidth of ~400 1N0z very easily - and, by visual interpolation of the screen waveforms, also accomplished a bandwidth extension into the gigahertz regime. Measurements of the same waveform pulse widths taken during this demonstration, over the above-stated range of signal-generator 20 pulse widths, were tabulated and plotted. The pulse widths 71 (Fig. 8) at the receiver 13, 14, and also the pulse widths 72 at the VCSEL output - i. e., both of the IC test points - linearly track the width of the drive pulse, indi cating that the bandwidth of the converter is not a limiting factor. 
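To relate the pulse-width observations above to a bandwidth figure, the usual rule of thumb (an approximation, not a statement from this document) ties bandwidth to the 10-90% rise time:

\[
B \;\approx\; \frac{0.35}{t_r},
\qquad
t_r \approx 1\ \mathrm{ns} \;\Rightarrow\; B \approx 350\ \mathrm{MHz},
\]

so sub-nanosecond rise times on the observed 2.6 ns pulses are consistent with bandwidths of several hundred megahertz and above.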
Thus the XC is an excellent match to the already demonstrated high temporal resolution of the streak-tube lidar receiver.

i) Relative range measurement

With the infrared signal converted to visible light, the output was next used to actually create streak-tube lidar imagery. Our apparatus reliably and reproducibly measured relative ranges established by manipulation of the mirrors 1, 2 on the test bench.

Using the same experimental arrangement discussed above (Fig. 6), a set of streak-tube images was captured and recorded by the CCD camera at the back of the streak tube. During the first demonstration, light from a single laser shot was allowed to reflect 4 from a near mirror 1 (Fig. 6) and pass through the XC and on to the streak tube/CCD system. During the second capture, mirror 1 was removed and the light was instead reflected at a far mirror 2 (positioned ΔL = 71 cm behind the near mirror 1).

The resulting lidar images include a bright flash 81 (Fig. 9) corresponding to reflection from mirror 1, and another such flash 83 (Fig. 10) corresponding to that from mirror 2. The flash 81 from the near mirror 1 is much closer to the origin of time coordinates (the bottom of the image) than the flash 83 from the far mirror 2. This relationship makes clear that the system is able to detect a range difference from the two signals.

The same information is revealed by displaying both pulse returns 81, 83 (Figs. 9 and 10) from the near and far mirrors 1 and 2 together in an oscilloscope-screen trace (Fig. 11). The time difference between the two pulses is measured at 4.7 nsec, precisely the time it takes light to travel the 2ΔL = 142 cm round-trip differential for the 71-cm mirror separation.

Since the VCSEL is operated at its threshold limit, and with the signal generator working close to its operating limitations, any undesirable ringing in the drive circuitry causes the VCSEL current to rise, only instantaneously, above the threshold - releasing a small pulse of light. This small pulse is detectable by the streak-tube camera and appears in the lidar images as a smaller, dimmer pulse 82, 84 (Figs. 9 and 10).

The same is seen also in the screen capture, with the 'scope set to offset the main traces 81, 83 (Fig. 12) vertically - and also to shift one of those traces to roughly align temporally (horizontally) with the other. The trace 83 due to reflection at mirror 2 was set with its VCSEL threshold level 85 just at the oscilloscope horizontal centerline. The spurious pulses appear as smaller, shallow peaks 82, 84 trailing the primary pulses 81, 83 respectively. Minor trimming refinements to the XC suppress the resonances responsible for this undesired effect.

j) Conceptual notes on the twelve-pixel implementation and testing

A multichannel XC that we built and tested consists of the original single-channel circuit replicated twelve times. The single-element InGaAs photodetector has been replaced with a twelve-channel InGaAs photodiode array (Fermionics P/N FD80DA-12). The array has a 250 µm pitch between detector elements; otherwise the element size, spectral response and sensitivity are all identical to the original InGaAs diode. Care was taken during the board layout to ensure line lengths were kept uniform from channel to channel to avoid a potential phase mismatch due to signal-propagation delays.

The same type of Thor Labs telecommunication VCSELs was used in the multichannel as in the single-channel XC. This presented an obstacle to emulation of a very nearly production-style version of a multichannel prototype, as the large size of the "TO" cans housing the VCSELs limited minimum spacing between VCSELs to 0.200 inch. This meant that even though there was only 250 µm spacing between elements in the InGaAs detector, there was a significant dead space between VCSEL emitter elements. The 0.2-inch spacing also limited the number of channels visible on the streak tube to eight.

People skilled in this field, however, will understand that the difference in spacing is in the main only cosmetic, provided that interchannel crosstalk at the VCSEL output is fully assessed at some other stage of the development. Moreover, the wide spacing between VCSEL beams also had a positive implication, offering an opportunity to watch parallel performances of the multiple individual VCSELs in isolation.

The multichannel unit was tested using a doubled Nd:YAG laser with an 8 ns pulse generating approximately 1.80 W of average power at λ 532 nm and 1.20 W at λ 1064 nm. Pulse repetition frequency was 200 Hz. The responsivity 232 (Fig. 15) of the InGaAs sensors at 1064 nm is about 3 dB reduced from the responsivity at 1500 nm. The response (off-scale, 231) of the InGaAs detectors to λ 532 nm visible light is about 8.5 dB lower still. In addition, the XC board physically blocked the visible light from reaching the streak tube, so images were from the λ 1064 nm light only.
The beam from a source laser 309 (Fig. 16) was projected through a Fresnel lens (not shown) to produce a fan beam 303 parallel to the focal plane of the detector array 13 (Fig. 2) in the XC (Fig. 16). The XC collection optics consist of a 12.0 mm f/1.3 lens 311 positioned approximately 1 cm in front of the array element. The 12.5 mm FOV of the lens roughly approximates the horizontal expansion of the fan beam 303.

Our lidar test objects included a wall 241, at a distance 312 of about 5 m (sixteen feet) from the XC, and also a cardboard box 242 at an adjustable distance 313 in front of the wall. Resulting streak images (Fig. 17) of the wall alone clearly show the individual channels of the wavelength converter.

As noted earlier, even though the spacing of the detector elements is 250 µm, the physical dimensions of the VCSEL "TO" cans cause the optical emitters to be separated by about 0.2 inch per channel. This separation on the streak faceplate results in the return being segregated into distinct rows, and the dimensions of the streak faceplate limit viewing to only eight channels.

With the cardboard box 242 (Fig. 16) positioned at a distance 313 of roughly 0.6 m (two feet) in front of the wall (roughly 4.4 m or fourteen feet from the wavelength converter), resulting lidar images (Fig. 18) immediately show very different responses. Clearly the system is indicating a closer object across part of the cross-track field. In our test images (Figs. 17 through 19), range is presented from bottom to top: i.e. lower in these images is closer to the source.

In addition to vertical displacement, the images correctly indicate a higher reflectivity of the cardboard surface 242 (Fig. 16) relative to that of the wall 241. This higher reflectivity is plain from the greater brightness (Figs. 18 and 19) of the return pulses.

With the cardboard box moved forward to about 1.3 m (four feet) from the wall and thus closer to the lidar unit (roughly 4 m or twelve feet from the XC), the resulting images (Fig. 19) clearly follow the shift of the box. A significant increase in intensity of the return from the now-closer cardboard box is actually sending the current drivers into saturation and inducing a ringing in the output. This ringing results in a greater pulse length (height) for this return.

In another test we substituted a translucent object (window screen) for the box, in the imaging path between the lidar unit and the wall. This allowed us to capture returns from the partial reflection by the screen (fainter, lower pulses, Fig. 20) while still seeing the wall behind it. These accomplishments are shown more explicitly by a mesh plot (Fig. 21) of streak return from the screen in front of the wall, as well as the wall itself.

k) Twelve-pixel system design

Electronics subsystem - The twelve-pixel XC should be built using commercially available VCSEL and detector arrays. In fact the array dimension of twelve is based upon commercial availability. The primary commercial application for VCSEL arrays is in short-haul communications. These already existing structures can abbreviate development time and reduce the cost of testing the intermediate design.

It is recommended to use an InGaAs detector array that is commercially available. Custom electronics, but well within the state of the art, are to be designed - including the transimpedance and transconductance amplifiers. The circuit design discussed earlier (Fig. 4) can be replicated to drive the twelve VCSELs.
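Before turning to layout details, it may help to make explicit how streak images such as Figs. 17 through 19 translate into range. The sketch below assumes a constant sweep rate expressed in CCD rows per nanosecond; the calibration numbers are invented for illustration and are not measured values from these tests.

    C = 2.998e8  # speed of light [m/s]

    def row_to_relative_range(row, row_zero, rows_per_ns):
        """Convert a CCD row index in a streak image to one-way relative range.
        Lower rows are closer (range runs bottom to top in the test images)."""
        dt_ns = (row - row_zero) / rows_per_ns   # round-trip delay [ns]
        return 0.5 * C * dt_ns * 1e-9            # one-way range [m]

    # Invented calibration: row 40 = zero delay, sweep of 3 rows per nanosecond.
    print(row_to_relative_range(140, 40, 3.0))   # ~5.0 m -- a wall-like return
    print(row_to_relative_range(128, 40, 3.0))   # ~4.4 m -- an object 0.6 m nearer

With any such calibration, an object 0.6 m in front of the wall lands a fixed number of rows below the wall return, which is the kind of displacement described for Figs. 18 and 19.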
To assure high bandwidth, all components are best made surface-mount types, with strict attention to transmission-line lengths and control of stray capacitance and inductance. PSPICE® circuit emulation, available as software from Cadence Design Systems, Inc. of San Jose, California, is a recommended design support tool prior to board fabrication. We found that its use minimized errors in layout and operation.

Optical subsystem - The optics must focus backscattered radiation onto the detector array and deliver the output of the VCSEL array to a streak-tube receiver. Optics to deliver the backscattered light are ideally in the form of a simple telescopic lens system that has high throughput near 1.5 µm.

Because the twelve-element array will be quite short (3 mm) and have few pixels, it is possible to butt-couple (i.e. abut) VCSEL outputs directly to the fiber taper input of the streak-tube receiver. The pixel pitch of the VCSELs should be adequate to minimize channel crosstalk after that coupling is achieved efficiently. An alternative approach is to use a fiber-coupled VCSEL array and abut the fibers directly to the streak-tube input. The freedom to move each fiber independently will enable complete elimination of crosstalk.

One ideal laser for this system is an Nd:YAG unit coupled to an optical parametric oscillator to provide output at 1.5 µm. An optical pulse slicer is recommended to enable tailoring of pulse widths in the range from 1 nsec up to the normal laser pulse width of 10 nsec. A suitable streak-tube lidar receiver is a Hamamatsu C4187 system coupled to a DALSA 1M60 CCD camera. We have used such a system in multiple STIL programs and find that it provides a solid foundation on which to build experience with the XC.

Data acquisition - A useful data-acquisition system for this purpose is based on a commercially available frame grabber and a PC configured to capture and store the images. To minimize peripheral development time and cost it is advisable to obtain access to a suitable software library.

Laboratory measurements / subsystem test - Each individual pixel of the array should be independently tested for functionality, ascertaining the sensitivity and bandwidth of each channel using tests similar to those described above for the single-pixel system. The XC should then be coupled to the streak tube, and alignment and calibration completed.

Calibration should include measuring the uniformity of the various channels so that these variations can be taken out of subsequent lidar images. In addition, the dynamic range of the system should be assessed using a calibrated set of neutral-density filters. Our prototype efforts included both these components.

Twelve-pixel imaging-lidar lab measurements - Several tests should be run to characterize the performance of the XC in a STIL system. Initial tests ideally should involve simple flat-field images using objects of varying contrast spaced apart from each other in range. Such testing is important to quantify the contrast and range measurement capabilities.

A second test should determine the range resolution of the system. In this test the two objects should be moved closer and closer together until the respective returns from the two can no longer be discriminated. This procedure should be repeated for a variety of laser pulse widths and streak-camera settings.
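This "move the objects together until the returns merge" procedure can be previewed numerically before any hardware test. The following toy model is only a sketch: the 1 ns pulse width, Gaussian pulse shape and the 0.9 dip criterion are arbitrary assumptions, not recommended test parameters.

    import numpy as np

    def resolvable(dt, sigma, dip=0.9):
        """Crude criterion: two identical Gaussian returns separated by dt are
        'discriminated' if the valley between them drops below dip * peak."""
        t = np.linspace(-10*sigma, dt + 10*sigma, 4001)
        y = np.exp(-0.5*(t/sigma)**2) + np.exp(-0.5*((t - dt)/sigma)**2)
        valley = y[np.argmin(np.abs(t - dt/2.0))]
        return valley < dip * y.max()

    C = 2.998e8
    sigma = 1.0e-9 / 2.355                 # Gaussian sigma for an assumed 1 ns FWHM pulse
    for dR in (0.50, 0.25, 0.10, 0.05):    # candidate object separations [m]
        dt = 2.0 * dR / C                  # round-trip time separation
        print(f"dR = {dR:4.2f} m  dt = {dt*1e9:4.2f} ns  resolvable: {resolvable(dt, sigma)}")

Repeating such a sweep for each candidate laser pulse width and streak-camera setting gives an expectation against which the measured resolution limit can be compared.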
The resolution with which the distance between two partially transmissive objects can be measured should also be determined. Ordinary window screen serves admirably as test objects in this test.

Finally, larger-scale (more than twelve-pixel) imagery should be generated by simultaneously scanning the transmitted beam and the receiver field of view with a large mirror to allow full three-dimensional imagery to be collected when the pushbroom sensor is stationary.

The tests discussed above will provide a design team with experience and knowledge of the parts of the system that are sensitive to component tolerances and the like. It is particularly essential to take this opportunity to identify the most qualified and cost-effective available vendors for the much more difficult stage of development that follows.

l) Two-hundred-fifty-six-pixel system design

Specifications to guide a design effort - Design of a two-hundred-fifty-six-pixel system is significantly more complex than that of a twelve-pixel system. With the expertise developed in the dozen-pixel prototype, a design team can proceed much more confidently and with fewer detours. This effort should encompass design of an XC with imaging capabilities that can meet real-world objectives. The accompanying table contains lidar system specifications to drive such a design effort.

parameter | value | comment
XC receiver bandwidth | 1.5 GHz |
XC crosstalk | < -30 dB | channel to channel
XC power consumption | < 10 W |
transmitted wavelength | 1.54 µm |
VCSEL emission wavelength | 600-850 nm |
angular resolution (IFOV) | 250 µrad cross-track | 7.5 cm at 300 m standoff
FOV | 64 mrad cross-track |
range resolution, ΔR | variable: 200 m maximum, 7.5 cm minimum |
range extent, RE | 100 x ΔR |
absolute range precision | ±2 ΔR | defined as the required measurement accuracy of the distance from the sensor to the first return of a single shot
pulse repetition frequency | 200 Hz maximum |
operating temperature | -10 to 45 °C |

Detector array - The detector array will be a two-hundred-fifty-six-element PIN InGaAs device. Three possible vendors of such arrays are known to us: Sensors Unlimited, Hamamatsu, and AXT. It is advisable to collaborate with vendors to determine the best possible configuration for the XC and finalize the design. The array is advisably flip-chip bonded, as outlined earlier, to preserve the bandwidth of the detector and minimize the physical extent of the connection. It is also advisable to bump-bond the array to a submount assembly that will support both the VCSEL and the detector array. A set of transmission lines should interconnect the detector array and the amplifier array.

Transimpedance amplifier array - Here too it is recommended to work closely with IC design and process specialists to identify the best process for the custom chip or chips to achieve the required combination of a low-noise transimpedance amplifier, adequate bandwidth, a suitable gain stage, and output buffering to drive the VCSEL array. The design from COTS components used on a circuit card should then be converted into devices that are readily fabricated in IC form with the process chosen. As noted above, a PSPICE® model should be developed in advance to show that the design, when implemented in this fashion, provides the performance required. It is preferable to consider the system aspects of the design including packaging and thermal modeling.
A final system layout for fabrication at a foundry should be reserved for a later stage of development, but the two-hundred-fifty-six-pixel effort will confirm that the work is on track and provide a clear path to the foundry later.

VCSEL array - VCSEL arrays are currently being produced in a large number of various formats. The VCSEL array will be two hundred fifty-six elements with a device pitch of ~250 µm. Four commercial vendors are among known entities capable of producing the VCSEL arrays that are necessary: Honeywell, Emcore, ULM Photonics and RT. All have significant expertise in delivering custom VCSEL arrays.

It is also advisable, however, to consider collaboration in addition, or instead, with university-based academic specialists in this area. Such coventurers are likely to provide greater flexibility in accommodating the special requirements of the XC/STIL approach without overregard for the high production volumes that drive more-conventional commercial applications.

The invention is not limited to using VCSELs. The emitters may instead comprise edge-emitting lasers, or quantum diodes or dots, or MEMS devices.

Cross-track sampling - Our use of discrete detectors, in front of the slit of the streak tube, in effect samples the image plane of the receiver - as compared with conventional STIL apparatus and operation, in which the cross-track image at the photocathode is substantially continuous. One must assure that the sampling has ample resolution to reconstruct the images desired in the lidar receiver. This effect is not overly complex, and a closely analogous phenomenon conventionally occurs anyway at the output end of the streak tube. There a multiple-discrete-element CCD array, used to capture the output range-azimuth image that appears on the phosphor, necessarily imposes a quantising or discretising effect.

A secondary effect of the discrete detectors is an effective reduction in the fill factor of the receiver. This problem can significantly degrade the performance of the system with respect to the more-traditional mode of operation. Such a limitation can be overcome through the use of a microlens array that can be attached or integrated directly onto the detector array. Such a practice is common in CCD and CMOS imaging devices as well as in detector arrays designed for communications and spectroscopic applications.

m) Representative systems

Merely by way of example, one of myriad uses may involve an aircraft 101 (Fig. 13), serving as part of the inventive apparatus, that translates 104 the STIL system 100 in the so-called "pushbroom" pulsed mode over or next to objects in a scene 105 to be imaged. While in motion 104, the system forms both the downward- or sideward-transmitted near-infrared pulses 103 and the reflected or back-scattered near-infrared pulses 8 within a thin fan-shaped beam envelope 102. (It will be understood that the return pulses actually are scattered in essentially all directions. The receiver optics, however, confine the collection geometry to the fan shape 102.)

The aircraft 101 may, further as an example, be searching for a vehicle 109 that has gone off the road in snowy and foggy mountains 108. A person 107 in the mountains may be looking 106 directly at the aircraft and into the transmitted STIL beam pulses, but is not injured by the beam because it is near-IR rather than visible.
The interpretive portions 91-94 of the apparatus may also include a monitor 99 that displays an image 98 of the scene 105 for viewing by a person 97 within the aircraft - even though the scene 105 itself might be entirely invisible to direct human view, obscured by fog or clouds (not shown). Viewing may instead, or in addition, be at a base station (not shown) that receives the results of the data-processing system by telemetry 95. The primary data processing 91, 92 advantageously produces an image 98 for such viewing - preferably a volume-equivalent series of two-dimensional images as taught in the pushbroom art, including the earlier-mentioned previous patent documents of Arete Associates.

In addition the system preferably includes automatically operated interpretive modules 94 that determine whether particular conditions are met (here for example the image-enhanced detection of the vehicle sought), and operate automatic physical apparatus 95, 96 in response. For example, in some preferred embodiments detection of the desired object (vehicle 109) actuates a broadcast announcement 96.

These interpretive and automatically responsive modules 91-96, 99, however, are only exemplary of many different forms of what may be called "utilization means" that comprise automatic equipment actuated when particular optically detected conditions are met. Others include enabling or denying access to secure facilities through operation of doors and gates, or access to computer systems or to financial services such as credit or banking. Determination of hostile conditions, and resulting security measures such as automatically deployed area-sealing bulkheads, is also within the scope of the invention - as for instance in the case of safety screening at airports, stadiums, meeting halls, prisons, laboratories, office buildings and many other sensitive facilities.

Because the NIR beam is eye-safe, the entire system can be operated at close range to people and in fact can be used harmlessly to image people, including their faces, as well as other parts of living bodies - e. g. for medical evaluations, as also taught in the earlier patent documents mentioned above. The elements of the environment 105, 107-109 and of automatically operated response 94-96 that are shown shall be regarded as illustrations of all such other kinds of scenes for imaging, and the corresponding appropriate responses, respectively.

The invention is not limited to pushbroom operation, but rather can be embodied in flash systems. It will be understood, however, that the pushbroom mode makes the most - in terms of resolution or image sharpness - of comparatively modest resources. In particular, relatively fixed available imaging length and area are available at any streak-tube photocathode and phosphor screen, respectively. It follows that if an entire scene is remapped into a single slit image for streaking, then necessarily only a far tinier sampling of each part (e. g. raster line) of the scene can be taken.

In a flash system what is projected 203 (Fig. 14) and returned 208 can be a single rectangular-cross-section beam 202, rather than a succession of fan beams 102 (Fig. 13). The aircraft 101 may hover, rather than necessarily moving forward at some pace related to frame acquisition, and may be a lighter-than-air craft if desired.
As in the pushbroom system, however, the wavelengths of transmitted and recovered pulses 203, 208 (Fig. 14) are not in the visible part of the spectrum; for many applications they are in the near-IR, but as noted earlier they can be in the infrared or ultraviolet as appropriate to the application. All the illustrations in this document are expressly to be seen as representative of all such different wavelength embodiments.

Following the XC 10 in a flash system is a mapper 212 that rearranges elements (e. g. pixels) of the image captured by the XC 10. The mapper 212 may take the form of a fiber-optic prism that is sliced, as described in the earlier-mentioned Knight or Alfano patents, to place successive raster lines of the image 22' end-to-end and thereby form a single common slit-shaped image 213.

For purposes of the present invention, in purest principle the mapping may instead be accomplished within the XC, by rerouting electrical connections at some point between the individual detectors 13 and the individual VCSELs (or other emitters) 16. Such an arrangement poses a major challenge to maintaining minimum reactances throughout the system - and especially uniform reactances as between the multiple channels. People skilled in this field will recognize that such an effort is at odds with the advantageous properties of flip-chip bump bonding, and perhaps even more with the common-epitaxy principles of the Coldren patent. Ingenuity in geometrical arrangements, however, may overcome these obstacles.

Following the streak tube 18, the flash-mode output image 214 may be regarded as garbled due to the mapper 212 and therefore requiring use of a remapper 215 to restore ordinary image properties of adjacency. This remapping can be accomplished in various ways. The most straightforward is ordinarily a computerized resorting of pixels in the output image 214, to unscramble the effects of the mapper 212.

n) Alternate wavelength applications

As noted earlier, the applicability of the invention is not at all limited to the near-infrared. One important area of use is the more-remote infrared, also a relatively difficult region for development of streak-tube photocathodes because of the even lower photon energy here than in the near-IR. The infrared portion of the electromagnetic spectrum (3 to 12 µm) overlaps strong absorption features of many molecules. As a result wavelengths in this region are particularly attractive for monitoring gaseous contaminant concentrations such as those encountered in atmospheric pollution or industrial process control.
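The gas-monitoring application mentioned above, and the difference and ratio signals discussed at the end of this section, rest on the standard differential-absorption (DIAL) relation, which is textbook material rather than something derived in this disclosure. The sketch below applies it to returns at an "on" wavelength (absorbed by the gas) and an "off" wavelength (not absorbed); all numerical values are invented illustrations.

    import math

    def dial_number_density(P_on_near, P_on_far, P_off_near, P_off_far,
                            delta_sigma_m2, delta_range_m):
        """Mean molecular number density in the range cell between 'near' and 'far',
        from the ratio of on-line and off-line returns (standard DIAL retrieval)."""
        ratio = (P_off_far * P_on_near) / (P_on_far * P_off_near)
        return math.log(ratio) / (2.0 * delta_sigma_m2 * delta_range_m)

    # Invented illustration values only:
    n = dial_number_density(P_on_near=1.00, P_on_far=0.60,
                            P_off_near=1.00, P_off_far=0.80,
                            delta_sigma_m2=1.0e-22,   # differential absorption cross-section [m^2]
                            delta_range_m=100.0)      # range-cell depth [m]
    print(f"mean number density ~ {n:.2e} molecules per m^3")

A range-resolved imaging receiver of the kind described here supplies exactly the wavelength-separated, time-resolved returns that such a retrieval needs.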
CO2 lasers operating at 9 to 11 µm can produce large amounts of power and have been deployed in space for a number of applications. The wavelength converter ("XC") is well suited for use with CO2-laser-based imaging lidar systems.

Even though photon energy in the ultraviolet is ample for development of streak-tube photocathode materials - and in fact such materials do exist - nevertheless the UV too offers fertile ground for applications of the present invention. Here the particular appeal of the present invention lies in the potential for imaging returns from wholly different spectral regions within a single, common streak tube; and if desired even at the same time.

For example two lasers 409a, 409b (Fig. 24) producing respective pulses 403a, 403b in different wavebands - or if preferred a single laser capable of emission in different bands - can be operated in alternation. The returns 408 from an object field 441 are directed to a single, common XC 10, which relays the optical signals to a streak tube 18, camera 19 and interpretive stages 34 just as before. This type of operation yields a time-shared system.

Here the converter 10 may have sufficiently uniform response in the two wavebands to enable operation of the camera system 18, 19, 34 for processing of both sets of returns 408. To enhance such capability the XC, the streak tube 18, or the back-end stage 19 - or combinations of these - can be synchronously adjusted in sensitivity, electronically.

An alternative, acceptable in some applications involving relatively stationary object fields, is to collect a complete image or large portion of an image in one of the wavebands based on pulses 403a from one laser 409a; and then change over to collection of a comparable image or portion in the other waveband based on pulses 403b from the other laser 409b. In this case yet another alternative is shifting 11 of two or more converters 10, 10' into position in front of the streak tube 18 - or, if preferred, retaining a single converter 10 in position while swapping optical filters (not shown) in front of that single converter 10.

Where time sharing is not acceptable or desirable, a spatially shared system can be used instead. For this case the system advantageously uses a single laser 509 (Fig. 25) that can emit pulses 503 containing light in plural bands, or in particular plural spectral lines. Here the return 508 from the object field 541 is likewise in plural optical bands, or at least lines. The streak tube 518 in this case advantageously has a plural-slit photocathode as described in the previously mentioned Gleckler patent document. Here e. g. one wavelength filter 501 is inserted in front of only just one part of the XC - while a second, different-wavelength filter 502 lies in front of another part.

For instance if just two wavebands or lines are in use, the two filters 501, 502 can be respectively inserted in front of the two ends of the converter array 10, which correspondingly feed optical signals into the two slits. If preferred, the two ends (or more generally plural parts) of a single streak-tube slit can be driven in this way and the lidar images separately interpreted downstream. Yet another option is to use two different XC sections (not shown), with different wavelength sensitivities, in lieu of a single converter 10 - and generally without optical filters.
A more-specific and more sophisticated implementation that better conserves optical-signal power uses a diffraction grating 503 (Fig. 26) instead of filters, to separate the wavebands of interest. These plural separated wavelength bands λ1, λ2, . . . advantageously proceed to respective separate detector stages 513-1, 513-2, . . . which are the front-end stages of respective separate wavelength converters 10-1, 10-2, . . . . These in turn respectively provide optical signals 522'-1, 522'-2, . . . to plural slits (Slit 1, Slit 2, . . .) at the photocathode 524' of the streak tube 518. As will now be appreciated, many mix-and-match options are possible with respect to the specific components and modalities shown in the plural-waveband configurations (Figs. 24 through 26) discussed here.

By capturing images in a single streak tube concurrently, using any of the systems under discussion (Figs. 24 through 26), the invention enables the interpretive parts 34 of the system to develop difference signals, or ratio signals, as between the plural spectral regions. In this way the invention becomes a system capable of, for example, differential-intensity, or differential-absorbance, lidar spectroscopy as between, e. g., the far-IR and the UV - or other such combinations of spectral regions.

o) Claiming notes

In accompanying apparatus claims generally the term "such" is used (instead of "said" or "the") in the bodies of the claims, when reciting elements of the claimed invention, for referring back to features which are introduced in preamble as part of the context or environment of the claimed invention. The purpose of this convention is to aid in more particularly and emphatically pointing out which features are elements of the claimed invention, and which are parts of its context - and thereby to more distinctly claim the invention.

The foregoing disclosure is intended to be merely exemplary, and not to limit the scope of the invention - which is to be determined by reference to the appended claims.

p) References:

1. Gleckler, Anthony D., and A. Gelbart, "Three-dimensional imaging polarimetry," Laser Radar Technology and Applications VI, Proceedings of SPIE Vol. 4377, Aerosense (Florida 2001)
2. Gelbart, Asher, "Flash lidar based on multiple-slit streak tube imaging lidar," Laser Radar Technology and Applications VII, Proceedings of SPIE Vol. 4723, Aerosense (Florida 2002)
3. Costello, Kenneth A., Verle W. Aebi, Gary A. Davis, Ross A. LaRue, and Robert E. Weiss, "Transferred electron photocathode with greater than 20% quantum efficiency beyond 1 micron," Photodetectors and Power Meters II at 177-88, editors Kathleen Muray and Kenneth J. Kaufmann (San Diego July 9-14, 1995)
4. Calmes, Lonnie K., James T. Murray, William L. Austin, and Richard C. Powell, "Solid-State Raman Image Amplification," Proceedings of SPIE Vol. 3382 (1998)
5. Bowker, Kent, and Stephen C. Lubard, "Displaced-beam confocal reflection streak lidar apparatus with strip-shaped photocathode, for imaging very small volumes and objects therein," United States Patent 6,400,396 (2002)
6. McLean, J. W., and J. T. Murray, "Streak tube lidar allows 3-D surveillance," Laser Focus World at 171-76 (January 1998)
7. Francis, D., H. L. Chen, W. Yuen, G. Li, and C. Chang-Hasnain, "Monolithic 2D-VCSEL array with >2W CW and >5W pulsed output power," Electronics Letters Vol. 34, 2132 (1998)
8. Fuji Xerox online product literature, http://www.fujixerox.co.jp/eng/product/vcsl/overview.html (January 2003)
9. Honeywell online product literature, http://content.honeywell.com/vcsel/capabilities/monolithic.stm (January 2003)

Claims (36)

1. Apparatus for detecting objects and determining their distance, to form a two-dimensional or three-dimensional image; said apparatus comprising: means for receiving light scattered from such objects and in response forming a corresponding light of a different wavelength from the scattered light; and means for time-resolving the corresponding light to determine respective distances of such objects.
2. The apparatus of claim 1, further for use in determining reflectance of the objects; and wherein the receiving-and-forming means comprise: means for measuring and recording gray-level information in the received and formed light.
3. The apparatus of claim 1, wherein the receiving-and-forming means comprise: a first, optointermediate stage that receives the scattered light and in response forms a corresponding intermediate signal; and a second, intermedioptical stage that receives the intermediate signal and in response forms the corresponding light.
4. The apparatus of claim 3, wherein: the intermediate signal comprises an optical signal.
5. The apparatus of claim 3, wherein: the time-resolving means comprise a streak lidar device.
6. The apparatus of claim 3, further comprising: a light source; and means for projecting pulses of light from the source toward such objects for scattering back toward the receiving-and-forming means.
7. The apparatus of claim 5, wherein: the streak lidar device is incorporated into a repetitively pulsed pushbroom system.
8. The apparatus of claim 7, further comprising: an aircraft or other vehicle transporting the receiving-and-forming means and the streak lidar device relative to such objects.
9. The apparatus of claim 5, wherein: the streak lidar device comprises a multislit streak tube.
10. The apparatus of claim 3, wherein: the time-resolving means comprise a flash lidar system.
11. The apparatus of claim 3, wherein: the intermediate signal comprises an electronic signal; the first stage comprises an optoelectronic stage; and the second stage comprises an electrooptical stage.
12. The apparatus of claim 10, wherein: the optoelectronic stage comprises light-sensitive semiconductor devices.
13. The apparatus of claim 11, wherein: the semiconductor devices comprise PIN diodes.
14. The apparatus of claim 11, wherein: the semiconductor devices comprise avalanche photodiodes.
15. The apparatus of claim 12, wherein: the electrooptical stage comprises vertical-cavity surface-emitting lasers connected to receive the electronic signal from the PIN diodes.
16. The apparatus of claim 12, wherein: the electrooptical stage comprises devices selected from the group consisting of: edge-emitting lasers, quantum diodes, quantum-dot lasers, and microelectromechanical systems; said devices being connected to receive the electronic signal from the PIN diodes.
17. The apparatus of claim 10, wherein: the electrooptical stage comprises vertical-cavity surface-emitting lasers.
18. The apparatus of claim 10, wherein: the electrooptical stage comprises light-emitting diodes.
19. The apparatus of claim 1, further comprising: utilization means responsive to the time-resolving means.
20. The apparatus of claim 19, wherein the utilization means are selected from the group consisting of: interpretive means for characterizing such objects based on the time-resolved light; a monitor that displays an image of such objects for viewing by a person at the apparatus; a monitor at a base station for reviewing such objects or related data received from the resolving means by telemetry; a data-processing device for analysing such objects or images of them; automatically operated interpretive modules that determine whether particular conditions are met; announcement-broadcasting means or other automatic physical apparatus connected to operate in response to the time-resolving means; means for enabling or denying access to secure facilities through operation of doors and gates, or access to computer systems or to financial services including but not limited to credit or banking; means for determination of hostile conditions, and resulting security measures including but not limited to automatically deployed area-sealing bulkheads.
21. The apparatus of claim 1, wherein: the receiving and forming means comprise discrete arrays of light-sensing and light-producing components respectively.
22. The apparatus of claim 21, wherein: the receiving and forming means further comprise a discrete array of circuitry for controlling the forming means in response to the receiving means.
23. The apparatus of claim 1, wherein: the receiving and forming means comprise at least one monolithic hybrid of light-sensing and light-producing components.
24. The apparatus of claim 23, wherein: the monolithic hybrid further comprises circuitry for controlling the forming means in response to the receiving means.
25. A method for detecting and ranging objects, said method comprising the steps of: receiving light scattered from the objects; in response to the scattered light, forming a corresponding light of a different wavelength from the scattered light; and time-resolving the corresponding light to determine respective distances of such objects.
26. The method of claim 25, further for use in determining reflectance of the objects; and wherein: the receiving step preserves at least some gray-level information in the scattered light; and the forming step also preserves at least some of the gray-level information.
27. The method of claim 25, wherein: the receiving step receives the scattered light in plural wavelength bands; and the forming step forms the corresponding light in substantially a single, common wavelength band.
28. The method of claim 27, wherein: the plural wavelength bands include at least one ultraviolet wavelength.
29. The method of claim 28, wherein: the plural wavelength bands include at least one near-infrared wavelength.
30. The method of claim 27, wherein: the receiving step includes receiving the plural wavelength bands at plural slits, respectively, of a plural-slit streak camera.
31. The method of claim 30, further comprising the step of: before the receiving step, transmitting light in said plural wavelength bands, substantially simultaneously, toward the objects.
32. The method of claim 27, wherein: the receiving step includes receiving the plural wavelength bands at plural times, respectively.
33. The method of claim 32, further comprising the step of: before the receiving step, transmitting light in said plural wavelength bands, at respective plural times, toward the objects.
34. The method of claim 27, further comprising the steps of: deriving plural signals from the received light in the plural wavelength bands, respectively; and finding differences or ratios between signals received in the plural wavelength bands.
35. Apparatus for detecting objects and determining their distance and reflectance, to form a two-dimensional or three-dimensional image; said apparatus comprising: a light source; and means for projecting pulses of light from the source toward such objects for scattering back toward the receiving-and-forming means; means for receiving light scattered from such objects and in response forming a corresponding light of a different wavelength from the scattered light, preserving gray-level information in said received and corresponding light; and means, comprising a streak camera, for time-resolving the corresponding light to determine respective distances and reflectances of such objects; wherein the receiving-and-forming means comprise: a first, optoelectronic stage, comprising an array of light-sensitive PIN diodes, that receives the scattered light and in response forms a corresponding electronic signal; a second, electrooptical stage, comprising an array of vertical-cavity surface-emitting lasers connected to receive the electronic signal from the PIN diodes, that receives the electronic signal and in response forms the corresponding light; and an electronic circuit array connecting the electronic signal from the first stage to the second stage, and modifying the signal to operate the second stage.
36. The apparatus of claim 35, wherein: the streak lidar device is incorporated into a repetitively pulsed pushbroom system.
37. The apparatus of claim 36, further comprising: an aircraft or other vehicle transporting the receiving-and-forming means and the streak lidar device relative to such objects.
38. The apparatus of claim 37, further comprising: utilization means responsive to the time-resolving means.
AU2004206520A 2003-01-15 2004-01-14 Ultraviolet, infrared, and near-infrared lidar system and method Abandoned AU2004206520A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US44030303P 2003-01-15 2003-01-15
US60/440,303 2003-01-15
PCT/US2004/000949 WO2004065984A1 (en) 2003-01-15 2004-01-14 Ultraviolet, infrared, and near-infrared lidar system and method

Publications (1)

Publication Number Publication Date
AU2004206520A1 true AU2004206520A1 (en) 2004-08-05

Family

ID=32771802

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2004206520A Abandoned AU2004206520A1 (en) 2003-01-15 2004-01-14 Ultraviolet, infrared, and near-infrared lidar system and method

Country Status (4)

Country Link
EP (1) EP1590683A1 (en)
AU (1) AU2004206520A1 (en)
CA (1) CA2546612A1 (en)
WO (1) WO2004065984A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102373926B1 (en) * 2016-02-05 2022-03-14 삼성전자주식회사 Vehicle and recognizing method of vehicle's position based on map
US10274599B2 (en) 2016-06-01 2019-04-30 Toyota Motor Engineering & Manufacturing North America, Inc. LIDAR systems with expanded fields of view on a planar substrate
JP7088937B2 (en) * 2016-12-30 2022-06-21 イノビュージョン インコーポレイテッド Multi-wavelength rider design
EP3673296A4 (en) * 2017-08-22 2021-09-01 Ping Li Dual-axis resonate light beam steering mirror system and method for use in lidar
WO2019164961A1 (en) 2018-02-21 2019-08-29 Innovusion Ireland Limited Lidar systems with fiber optic coupling
CN111366946B (en) * 2018-12-26 2023-10-13 保定市天河电子技术有限公司 Prison post channel protection method and device
CN112697711B (en) * 2020-12-14 2023-09-19 中国科学院合肥物质科学研究院 Mobile source waste gas snapshot type telemetry system
JP7178059B1 (en) * 2021-05-31 2022-11-25 日本ペイントコーポレートソリューションズ株式会社 Coating composition and coating film

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5467122A (en) * 1991-10-21 1995-11-14 Arete Associates Underwater imaging in real time, using substantially direct depth-to-display-height lidar streak mapping
FR2740227B1 (en) * 1995-10-20 1997-11-07 Thomson Csf LASER TOMOSCOPIC DETECTION DEVICE
WO2001081949A2 (en) * 2000-04-26 2001-11-01 Arete Associates Very fast time resolved imaging in multiparameter measurement space

Also Published As

Publication number Publication date
CA2546612A1 (en) 2004-08-05
EP1590683A1 (en) 2005-11-02
WO2004065984A1 (en) 2004-08-05

Similar Documents

Publication Publication Date Title
US7652752B2 (en) Ultraviolet, infrared, and near-infrared lidar system and method
Williams Jr Optimization of eyesafe avalanche photodiode lidar for automobile safety and autonomous navigation systems
US11112503B2 (en) Methods and apparatus for three-dimensional (3D) imaging
US7830442B2 (en) Compact economical lidar system
Albota et al. Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays
McCarthy et al. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting
US11435446B2 (en) LIDAR signal acquisition
US11009592B2 (en) LiDAR system and method
CN110187357B (en) Laser active imaging system for three-dimensional image reconstruction
CN109791205A (en) For the method from the exposure value of the pixel unit in imaging array subduction bias light and for the pixel unit of this method
Pasquinelli et al. Single-photon detectors modeling and selection criteria for high-background LiDAR
US11791604B2 (en) Detector system having type of laser discrimination
CN110780312B (en) Adjustable distance measuring system and method
Jiang et al. InGaAsP/InP geiger-mode APD-based LiDAR
Hao et al. Development of pulsed‐laser three‐dimensional imaging flash lidar using APD arrays
KR20230003089A (en) LiDAR system with fog detection and adaptive response
Huikari et al. Compact laser radar based on a subnanosecond laser diode transmitter and a two-dimensional CMOS single-photon receiver
AU2004206520A1 (en) Ultraviolet, infrared, and near-infrared lidar system and method
Richmond et al. Laser radar focal plane array for three-dimensional imaging
US11802945B2 (en) Photonic ROIC having safety features
US20230051974A1 (en) Programmable active pixel test injection
Lange et al. Seeing distances–a fast time‐of‐flight 3D camera
GB2403614A (en) Streak tube imaging lidar
US11460551B2 (en) Virtual array method for 3D robotic vision
Browder et al. Three-dimensional imaging sensors program

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application