GB2403614A - Streak tube imaging lidar - Google Patents

Streak tube imaging lidar

Info

Publication number
GB2403614A
GB2403614A (application GB0421638A)
Authority
GB
Grant status
Application
Patent type
Prior art keywords
system
beam
image
plural
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0421638A
Other versions
GB0421638D0 (en)
GB2403614B (en)
Inventor
Anthony D Gleckler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arete Associates
Original Assignee
Arete Associates
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • G01J3/447 Polarisation spectrometry
    • G01J11/00 Measuring the characteristics of individual optical pulses or of optical pulse trains
    • G01J3/2823 Imaging spectrometer
    • G01J3/2889 Rapid scan spectrometers; Time resolved spectrometry
    • G01J9/00 Measuring optical phase difference; Determining degree of coherence; Measuring optical wavelength
    • G01N21/21 Polarisation-affecting properties
    • G01N21/6408 Fluorescence; Phosphorescence with measurement of decay time, time resolved fluorescence
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G04F13/02 Apparatus for measuring unknown time intervals by means not provided for in groups G04F5/00 - G04F10/00, using optical means
    • G01J3/021 Optical elements using plane or convex mirrors, parallel phase plates, or particular reflectors
    • G01J3/0229 Optical elements using masks, aperture plates, spatial light modulators or spatial filters
    • G01N2021/1793 Remote sensing

Abstract

The lidar (laser radar) incorporates a streak camera which uses a microelectromechanical mirror to perform streaking. This mirror comprises an array of independently actuable sub-mirrors. Light entering the tube via one or more slits is reflected by the mirror onto an image sensor, which may be near the slit(s).

Description

VERY FAST TIME RESOLVED IMAGING

IN MULTIPARAMETER MEASUREMENT SPACE

PRIORITY

This application claims the priority benefit of U.S. provisional patent application 60/199,975, filed April 26, 2000.

BACKGROUND

1. FIELD OF THE INVENTION

This invention relates generally to time-resolved recording of three or more optical parameters simultaneously; and more particularly to novel methods and apparatus for making such measurements on an extremely short time scale, using lidar or a streak tube - or related technologies such as lenslet arrays - or combinations of these. Certain forms of the invention enable provision of a compact single-laser-pulse scannerless multidimensional imaging system using plural-slit streak-tube imaging lidar or "PS-STIL". The system is also capable of making plural-wavelength-band spectrally discriminating recordings of objects or phenomena, as well as plural-polarization-state recordings, and also combinations of such novel measurements.

2. RELATED ART (a) Conventional streak lidar - The term "lidar" (in English pronounced "LIE-dahr"), by analogy to "radar", means "light detection and ranging". The use of lidar is greatly enhanced by incorporating a streak tube - an electrooptical system for time resolving lidar returns to a remarkably fine degree.

Several advanced forms of the streak-tube imaging lidar or "STIL" technology are presented in other patent documents. See, for example, U. S. 5,467,122 and PCT publication WO 98/10372.

BENEFITS: Many strengths of a conventional STIL system appear in Table 1, and most of these are discussed in more detail in this section. This technology has demonstrated the capability to collect range (or other time-related) information with dynamic range and bandwidth that cannot be achieved using earlier conventional signal-digitization electronics.

Feature: 12-bit linear dynamic range
    • Allows robust operations in high-contrast scenes.
    • User need not spend time keeping the system "centered" in the dynamic range.
    • No electronic digitization can achieve this without significant compression (use of a logarithmic amplifier) that introduces artifacts in the data.

Feature: Controllable range resolution; operation from d.c. to multi-GHz
    • Can change digitization rate "on the fly", which allows the operator to start out with coarse range resolution and "zoom" in on areas of interest.
    • 1 cm range resolution has been demonstrated at short ranges; 15 cm, from an aircraft.
    • Conventional high-speed digitization electronics are designed for only one speed.

Feature: Fast data collection
    • STIL operates on a single short-pulse (<10 nsec) time-of-flight measurement; thus no long integration or multiple pulsing is necessary.
    • No target distortion/blur from moving source or target.

Feature: Compact ruggedized package
    • The volume of commercial streak-tube electronics packaging has been reduced by a factor of twenty.
    • Ruggedized hardware for the helicopter environment has been fabricated.
    • System can be placed on a variety of platforms.

Feature: High gain
    • The streak tube has a noiseless gain (noise factor is 1), due to each accelerated photoelectron generating approximately three hundred photons on the internal phosphor screen.
    • Higher gain (>10^4) available using a microchannel plate (MCP) if necessary.
    • Raises small signals above the amplifier noise.

Feature: Simultaneous range and contrast images
    • No need for multiple sensors.
    • Allows significant improvements to ATR algorithms for shape matching and for clutter reduction.
    • No registration/scale issues between range and contrast data.

Feature: High frame rates shown
    • Transmitters and receivers have been demonstrated at 400 Hz frame rates - ideal for large-area searching.
    • Ideal for very fast moving targets or sensors.

Feature: Rapid processing
    • All processing within a single image frame is conventionally performed with multiple DSP computers.
    • Allows rapid real-time display for an operator.
    • Reduces volume and electrical power requirements for the computer.

Table 1. Benefits of the conventional streak-tube imaging lidar (STIL) approach.

For example, linear twelve-bit dynamic range has been shown at controllable bandwidths up to and beyond 3 GHz. A fundamental advantage of a streak-tube-based lidar system is that it can provide hundreds or even thousands of channels of sampling at more than 3 GHz, with true twelve-bit dynamic range.

Such a receiver system provides range-sample "bins" (i.e., discrete range-sampling intervals) that can be as small as five centimeters (two inches) long, and provides 4096 levels of gray-scale imagery, both of which are important for robust operations.

The small range bins provide optimal ranging capability, and the large dynamic range reduces the effort of trying to keep the scene illumination in the middle of the response curve of a limited-dynamic-range receiver. Such performance has been demonstrated in a laser radar configuration (J. McLean, "High Resolution 3-D Underwater Imaging", Proc. SPIE 3761, 1999).
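The figures quoted above can be cross-checked with simple arithmetic. The sketch below is illustrative only; the round-trip factor of two and the seawater refractive index of about 1.33 are our assumptions, not values stated in the text.

```python
# Illustrative arithmetic for the STIL receiver figures quoted above.
# Assumptions (not from the patent text): round-trip ranging, and a
# seawater refractive index of ~1.33 for the underwater example.

C_VACUUM = 2.998e8          # speed of light in vacuum, m/s
N_WATER = 1.33              # assumed refractive index of seawater

def gray_levels(bits):
    """Number of distinguishable levels for a linear N-bit receiver."""
    return 2 ** bits

def bin_dwell_time(bin_length_m, refractive_index=1.0):
    """Round-trip time corresponding to one range bin, in seconds."""
    c_medium = C_VACUUM / refractive_index
    return 2.0 * bin_length_m / c_medium

# A linear 12-bit receiver resolves 4096 gray levels.
levels = gray_levels(12)    # -> 4096

# A 5 cm range bin in water corresponds to roughly 0.44 ns of
# round-trip time, i.e. a sampling rate above 2 GHz -- consistent
# with the multi-GHz bandwidth claimed for the streak tube.
dwell = bin_dwell_time(0.05, N_WATER)
rate_ghz = 1e-9 / dwell
```

This is why the five-centimeter bins and the "more than 3 GHz" sampling claim are two views of the same specification.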

BASIC TUBE ARCHITECTURE AND OPERATION: A streak tube (Fig. 1) as conventionally built nowadays is very similar to a standard image-intensifier tube, in that it is an evacuated tube with a photocathode, producing electrons which are accelerated by very high voltages to a phosphor screen. In operation of a typical system, each such electron ejects roughly three hundred photons from the phosphor; this light is then collected by an image-recording device such as a CCD.

A major difference is that a streak tube has an extra pair of plates that deflect the electron beam, somewhat as do the deflection plates in an ordinary cathode-ray tube (CRT) used in most oscilloscopes, television sets and computer monitors. In conventional STIL operation, input photons are limited to a single slit-formed image - causing the electron beam within the tube to be slit-shaped.

A fast ramp voltage is applied to the deflection plates, very rapidly and continuously displacing or "streaking" the slit-shaped electron beam, parallel to its narrow dimension, from the top (as oriented in Fig. 1) of the phosphor screen to the bottom - effectively creating a series of line images formed at different times during the sweep. Thereby time information is impressed upon the screen image in the streak direction (here vertical), while spatial information is arrayed along the slit length.
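The sweep just described can be modeled as a linear map from arrival time to screen row. The toy sketch below assumes an ideal linear ramp; the parameter names and values are illustrative, not from the patent.

```python
# Toy model of the streaking sweep described above: a linear ramp on
# the deflection plates maps each photon's arrival time to a row on
# the phosphor screen, while position along the slit maps to a column.
# All names and values here are illustrative, not from the patent.

def streak_row(arrival_time_s, sweep_start_s, sweep_time_s, n_rows):
    """Screen row struck by light arriving at `arrival_time_s`.

    Returns None if the light arrives outside the sweep (range gate).
    """
    frac = (arrival_time_s - sweep_start_s) / sweep_time_s
    if not (0.0 <= frac < 1.0):
        return None             # outside the range gate
    return int(frac * n_rows)

# Example: a 100 ns sweep over 512 CCD rows starting at t = 0.
# Light arriving halfway through the sweep lands mid-screen.
row = streak_row(50e-9, 0.0, 100e-9, 512)   # -> 256
```

Each column of the resulting image then carries spatial (slit) information, and each row a distinct instant of the sweep, exactly as the text describes.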

The array of internal electronic line images in turn constitutes a latent areal image - which can be picked up ("developed") by the phosphor on the screen. Most typically a charge-coupled-device (CCD) camera is attached to the streak tube to collect the image from the phosphor screen.

In this way the image is reconverted to an external electronic image by a CCD. The CCD output is digitized, interpreted, and if desired saved or displayed by receiving electronics.

One of the two dimensions of each two-dimensional image acquired in this way is azimuth (taking the dimension parallel to the long dimension of the slit as extending left and right), just as with a common photographic or video camera. The other of the two dimensions, however, is unlike what an ordinary camera captures.

More specifically, the STIL images represent azimuth vs. range from the apparatus, not vs. the commonplace orthogonally visible dimension as with a common camera. Thus for example if a two-dimensional image of an ocean volume is acquired by an instrument pointed vertically downward into the sea, the two dimensions are azimuth and ocean depth.

The operation described here should not be confused with that of a so-called "framing camera", whose tube internal geometry is commonly identical but which usually lacks an optical input slit, and whose deflection system is differently energized - so that more ordinary two-dimensional images of a scene are formed at the phosphor screen. Often such images are sized to fit on just a fraction of the screen area, and the deflection plates quickly step the two-dimensional-image position (in some instruments during a blanking interval), rather than displacing it continuously as in streaking.

BASIC SYSTEM OPERATION: In a typical conventional streak-tube lidar configuration, a short-duration high-energy laser pulse is emitted. The emitted beam is spread out into a single thin, fan-shaped beam or line, which is directed toward a landscape, ocean volume, or other region of interest - and the receiver optics image the line back onto the slit input to the streak tube. (A later portion of this document discusses the phrase "thin, fan-shaped beam" in further detail.) In such a standard STIL system, coverage of the region of interest in the dimension perpendicular to the line illumination (Fig. 2) is generally accomplished through motion of a vehicle carrying the emitter and sensor of the beam.

Formation of a complete volumetric image therefore requires a series of pulses, each yielding a respective individual range-vs.- azimuth image.

Taking the laser projection direction as horizontal in Fig. 2(a), the vehicle direction should be vertical - as for instance in a vertically moving helicopter. In this case, each areal screen image represents a horizontal map, at a respective altitude, with the measuring instrument located above the top edge of the map and the remote horizon along the bottom edge.

Alternatively, reverting to the earlier example of a downward-looking instrument over the ocean, vehicle motion should be horizontal. In this case each areal screen image represents a vertical slice of the ocean below the vehicle, at a respective position along the vehicle's horizontal path.

This is sometimes familiarly called a "pushbroom" system. A demonstrated alternative to vehicle-based data acquisition is a one-dimensional scanner system used from a fixed platform.

The deflection system of the streak tube is set to streak the electron beam completely down the phosphor screen in some specific time, called the "sweep time" of the tube. This also corresponds to the total range gate time (i. e., the total amount of time during which the system digitizes range data).

Ordinarily the sweep time is adjusted to fully display some interval of interest for exploring a particular region, as for instance some specific ocean depth from which useful beam return can be obtained - taking into account turbidity of the water.

The starting point of the range gate is controlled by the trigger signal used to begin the sweep.

Computer control of both the sweep time and the sweep-start trigger provides the operator a flexible lidar system that can very rapidly change its range-gate size, its range-digitization starting point, and also its range-sampling resolution. This enables the system to search large areas of range with coarse range resolution, and then "zoom in" to obtain a high-resolution image around a discovered region of particular interest. For example, in one pulse the system could capture a range out to 7 km at low resolution, and then on the next laser pulse zoom in around 6 km and thereby image an object of prospective interest at the highest resolution.

Each column of CCD pixels corresponds to one channel of digitized range data, such as could be collected from a single time-resolved detector, for instance a photomultiplier tube (PMT) or an avalanche photodiode (APD). Each row is the slit image at a different time.

The size (in units of time) of the previously mentioned range bins is simply the sweep time divided by the number of pixels in the CCD columns. Such values are readily converted into distance units through multiplication by the speed of light in the relevant medium or media.
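The bin computation just stated can be sketched directly. Note the division by two for the round trip is our addition for an echo-ranging geometry; the text itself mentions only multiplication by the speed of light.

```python
# The range-bin computation described above: sweep time divided by
# the number of pixels in a CCD column gives the bin duration, and
# multiplying by the speed of light in the medium gives a distance.
# The factor of 1/2 for the round trip is our addition, not stated
# in the text.

C_VACUUM = 2.998e8   # speed of light in vacuum, m/s

def range_bin(sweep_time_s, n_column_pixels, refractive_index=1.0,
              round_trip=True):
    """Return (bin duration in seconds, bin length in metres)."""
    bin_t = sweep_time_s / n_column_pixels
    bin_len = bin_t * C_VACUUM / refractive_index
    if round_trip:
        bin_len /= 2.0   # the echo travels out and back
    return bin_t, bin_len

# Example (illustrative numbers): a 500 ns sweep over 1000 column
# pixels gives 0.5 ns bins, i.e. about 7.5 cm per bin in air.
bin_t, bin_len = range_bin(500e-9, 1000)
```

Shortening the sweep time or using a taller CCD column shrinks the bins, which is exactly the "zoom" behavior described earlier.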

MODERN-DAY ENHANCEMENTS: Considerable practical advancement is now available in state-of-the-art streak-tube technology. Such advances include a compact ruggedized package suitable for the helicopter environment.

Such a unit (Fig. 3) is only about 15 cm (6 inches) wide, 47 cm (19 inches) long, and 37 cm (15 inches) in diameter. This kind of device has complete computer control of all streak-tube parameters, including high-voltage supplies.

Available as well are continuously variable linear sweep speeds from 50 nsec to 2 μsec. High-speed tube gating without a microchannel plate (MCP) is also offered, for enhanced signal-to-noise ratio.

ADVANCED COMMERCIAL FORM: The assignee of this patent document, Arete Associates (of Sherman Oaks, California, and Tucson, Arizona), has developed an airborne STIL system for bathymetry and terrestrial mapping. This device contains a diode-pumped solid-state Nd:YAG laser that is frequency-doubled to 532 nm. This wavelength was chosen for maximum water penetration for the bathymetry task, and for proximity to the peak of the streak-tube photocathode responsivity curve.

A raw image frame taken by STIL during airborne terrestrial-mapping data collection (Fig. 4[b]) and a volume reconstruction from numerous such frames (Fig. 4[c]) compare interestingly with a conventional photograph (Fig. 4[a]). A like comparison is also shown (Fig. 5) for an object roughly 1 m (39 inches) in diameter and imaged through 6 m (20 feet) of seawater.

In these views, naturally the conventional photo gives a clearer and sharper image. One goal of the STIL imaging, however, is to obtain images and reconstructions under circumstances that preclude effective use of ordinary photos.

Of particular interest in view (c) is the dark spot in the upper-left part of the imaged object: this is one of the two 5 cm (two-inch) holes in the object that appear in view (a). Here the STIL system is actually ranging down through that hole to the bottom of the object. (The other hole was covered by a weight used to keep the object on the ocean bottom.)

As the scattering and attenuation of water are significantly greater (and propagation velocity significantly smaller) than in air, Arete has developed and tested the algorithms and software to account for such problems. These algorithms are directly translatable to long-range air paths, and propagation through fog, haze, smoke etc.

(b) Safety limitations of conventional lidar - Modern STIL innovations were developed for underwater applications that require blue-green light for optimal water penetration. Human beings too are particularly adapted for sensitivity to light in these wavelengths.

By the same token, however, such light when projected at very high powers can pose a hazard to people - and possibly to other creatures as well - who may be positioned to look directly at the source. The possible hazard is compounded by a like sensitivity to viewing specular reflections of the beam from the source.

As will be understood, the STIL system has many useful applications in which this type of potential hazard poses no significant concern. A thrust of the present document, however, is development of a new generation of STIL systems and applications that are industrial and even commercial, and accordingly introduce a much greater need for compatibility with the population at large.

Therefore the possibility of injury to eyes is an important obstacle to a new array of STIL devices. It may be in part due to this problem that widespread commercial and industrial adaptations of the STIL principle have failed to appear in the marketplace.

(c) Conventional lidar streak-unit limitations - As the preceding introductory sections suggest, conventional modern streak tubes are relatively sizable vacuum tubes that use high voltages to streak the electron beam generated from the photocathode. Plainly this type of hardware is subject to several drawbacks.

Such devices are very expensive to make, maintain and use.

For field use, ruggedization is a necessary added expense (and a still-imperfect solution) since large vacuum tubes are inherently somewhat fragile. Their external high-tension connections are not optimal for routine use in aircraft.

A well-known alternative is optical streaking - in which a beam of incoming photons is rapidly displaced across a detector, along the range axis, entirely avoiding the need for a vacuum tube. This in fact was the earliest form of the streak camera - using a fast scan mirror, in particular a large spinning polygon (Fig. 11[a]).

These devices too, unfortunately, are problematic - and even more so than the electronic form. The drawback of this approach as conventionally implemented is the requirement for the large high-speed rotating mirror, which is both bulky and relatively delicate. (One relatively modern example of such an installation is described by Ching C. Lai in "A New Tubeless Nanosecond Streak Camera Based on Optical Deflection and Direct CCD Imaging", Proc. SPIE vol. 1801, 1992, pp. 454-69.) What makes these drawbacks of the optical streaking technique particularly unfortunate is that streaking with an optical device would otherwise greatly expand the choices in commonly available detectors. It would allow the use of common detectors for the wavelengths of interest - e.g. silicon CCD and CMOS detectors for the visible and near IR, and HgCdTe, PtSi, or InSb arrays for the longer IR, out to 11-micron wavelengths if desired.

Longer-wavelength operation would be advantageous for various special applications. These include better penetration of fog, clouds and some types of smoke; and also enhanced discrimination of object types by their different reflectivities at corresponding different wavelengths.

Regrettably the common detectors just mentioned are not suited for use as photocathode materials, to generate electrons that can then be streaked inside a streak tube. On the other hand it would accomplish nothing to place them following the conventional photocathode - e.g. at the streak-tube anode - since conversion from the optical to the electronic domain has already been accomplished at the cathode.

Use of a standard IR imaging detector instead of a CCD would be advantageous, to provide high-quantum-efficiency images. For some wavelength ranges this technique would be ideal - but the prior art has avoided these potential solutions because of the recognized problems presented by spinning mirrors.

(d) Conventional lidar imaging limitations - As a general observation, conceptually a STIL system is far in advance of competing technologies in terms of resolution capability in three dimensions, and in terms of signal-to-noise ratio as well. In its ability to fully exploit these advantages, however, a conventional STIL is severely impaired by an overriding problem in streak lidar systems heretofore: inflexibility of pixel allocation.

This limitation may be appreciated from three different perspectives, although in a sense they are only different aspects of a common phenomenon:

    • the STIL cannot record in three dimensions without mechanical movement of the measuring instrument relative to the region to be inspected;
    • the only practical way to make optimally efficient use of the very expensive detector area in a conventional STIL system is to build a fiber-optic remapper; and
    • even when such a device has been built, a conventional STIL system fails to make fully economic use of that investment.

These problems will be taken up in turn below - but first we begin with a demonstration of the preliminary observation that 3-D resolution is superior in a STIL apparatus.

THREE-DIMENSIONAL RESOLUTION: Different existing lidar systems sample the water volume differently (Fig. 16). The water surface is represented by the irregular line shown on the two visible faces of the whole cage, in each view.

Range-gated systems have excellent transverse spatial resolution, but have only one range pixel per camera - which results in poor range resolution, as suggested by the relatively tall volume elements (Fig. 16[a]) in the shaded zone that is of interest.

Merely by way of example, one system well-known in this field as "Magic Lantern" is forced to use six separate cameras to cover multiple depths, resulting in a large and expensive system.

In addition, since a range-gated system thus collects large vertical sections of the water column, contrast of any object images is significantly reduced. That is, the contrast, which is directly proportional to the signal-to-noise ratio (SNR) in the region, is a function of the amount of water backscatter that is collected.

A system with range samples of 30 cm (one foot) has ten times the SNR of a system that has 3 m (ten foot) range samples.
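The tenfold figure above follows from a first-order model in which collected backscatter, and hence noise, scales with the range-sample length. The sketch below is our reading of that claim, not a radiometric derivation.

```python
# The contrast claim above: collected water backscatter grows with
# the range-sample length, so relative SNR scales (to first order)
# as the inverse ratio of sample lengths.  This linear model is our
# reading of the text, not a full radiometric derivation.

def relative_snr(bin_a_m, bin_b_m):
    """SNR advantage of system A over system B, assuming backscatter
    noise proportional to range-sample length."""
    return bin_b_m / bin_a_m

# 30 cm bins vs 3 m bins -> a tenfold SNR advantage, as stated.
advantage = relative_snr(0.30, 3.0)
```

The same model explains why the tall voxels of a range-gated receiver wash out object contrast: a longer sample integrates more of the water column around the target.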

In addition, as the diagram also suggests, the range-gated device must avoid the surface of the water.

Time-resolved systems, such as the one used in the advanced receiver in ATD-111 (a photomultiplier-tube-based, nonstreaking time-resolved lidar system), suffer from a similar contrast-reduction problem. In this case the cause is poor transverse spatial resolution, as suggested by the relatively broad volume elements (Fig. 16[b]) in the shaded zone of interest.

This system cannot isolate an object signal, and moreover also collects a large area of water backscatter around an object.

To have the same transverse spatial resolution as the range-gated system, this time-resolved apparatus would require a separate laser pulse for every pixel, resulting in a pulse repetition frequency (PRF) exceeding 100 kHz.

Unfortunately a frequency-doubled Q-switched Nd:YAG laser (the primary laser used in ocean lidar systems) operates efficiently only up to about 5 kHz. Inability to reach the needed PRF, in turn, results in larger spatial pixels to achieve the same area coverage.

To avoid these sampling problems, the two systems discussed above use both a time-resolved receiver and a range-gated module.

Although this approach represents significant additional system complexity, it still does not resolve the significant degradation of detection SNR.

Because a STIL system collects 500 to 1000 spatial pixels per laser pulse, the PRF can be in the hundreds of hertz, which is well within the performance envelope of Nd:YAG lasers. In this way a STIL device can achieve pixel sizes smaller than an object of interest; therefore, it has higher SNR for the same laser power.
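The PRF comparison above reduces to one ratio: for a fixed pixel coverage rate, the required pulse rate scales inversely with the number of spatial pixels per pulse. The coverage figure below is an assumed example chosen to mirror the numbers in the text.

```python
# The PRF comparison above: for the same pixel coverage rate, the
# required pulse-repetition frequency scales inversely with the
# number of spatial pixels captured per laser pulse.  The coverage
# rate below is an assumption, chosen to mirror the text's numbers.

def required_prf(pixels_per_second, pixels_per_pulse):
    """Laser PRF (Hz) needed to sustain a given pixel coverage rate."""
    return pixels_per_second / pixels_per_pulse

COVERAGE = 100_000   # spatial pixels per second (assumed example)

# One pixel per pulse: 100 kHz, far beyond the ~5 kHz practical for
# a frequency-doubled Q-switched Nd:YAG laser.
single_pixel_prf = required_prf(COVERAGE, 1)

# 500 pixels per pulse (a STIL line image): 200 Hz, comfortably in
# the "hundreds of hertz" regime the text describes.
stil_prf = required_prf(COVERAGE, 500)
```

This is the quantitative core of the STIL advantage: the slit amortizes one pulse over hundreds of spatial channels.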

Thus, a streak-tube-based system (Fig. 16[c]) can provide much higher SNR for the same amount of laser power, or can achieve equal performance with a significantly smaller laser system.

Streak-tube-based systems can provide good resolution in all dimensions.

Unfortunately this powerful benefit of the STIL principle has not heretofore been broadly available without mechanical movement of the detector, and without costly and awkward remapping devices - and even then carries only very limited amounts of image information. These three problems are discussed in the paragraphs below.

THE MECHANICAL-MOVEMENT REQUIREMENT: As described earlier, the conventional streak-tube system is a pushbroom system, which means that it depends on the motion of the vehicle to sample the dimension along the track. This requirement prevents the conventional STIL from serving as what may be called a "staring" system - i.e., a stationary system that can acquire a stationary image of an area.

Just such a capability, however, is quite desirable for a number of useful applications. Inability of a conventional STIL instrument to fill this role is a major limitation in industrial and commercial uses.

FIBER-OPTIC REMAPPERS - CHARACTER AND COST: It is well known to use a variety of kinds of fiber-optic units to reconfigure a time-varying area image as a line image, and thereby enable time resolution of the changing content in the area image. Such technologies are seen in representative patents of Alfano (for example see U.S. 5,142,372), and of Knight (for example U.S. Re. 33,865); and in their technical papers as well.

An original concept for an area-image streak-tube system was demonstrated by Knight, who mapped a 16x16-unit areal image onto a conventional streak-tube slit with fiber optics. (F. R. Knight, et al., "Three dimensional imaging using a single laser pulse", Proc. SPIE vol. 1103, 1989, pp. 174-89.) This technique was severely limited in overall number of spatial pixels because of the relatively small number of pixels that can be mapped onto a slit.

Low-resolution fiber image redistribution (a 16x16 focal plane to a single 256-pixel line) has also been performed for streak tubes by MIT Lincoln Labs. Many fiber-array manufacturers are in operation and ready to prepare units suited for STIL work: one of the largest firms is INCOM; another that makes individual fiber arrays is Polymicro Technologies - which has previously prepared arrays with 3000 fibers.

At best, however, all such approaches are hampered by the costly custom fabrication required, and by the need to manufacture a special unit for each desired mapping respectively.

FIBER-OPTIC REMAPPERS - INADEQUATE EXPLOITATION: What makes matters worse, as to fiber-optic remapping, is that a conventional STIL system nowadays continues to face the same basic obstacle seen in the Knight paper noted above. Only so many original image pixels can be meaningfully rearranged onto a slit.

This means that even after the limitation of expensive custom fabrication has been confronted and in a sense overcome by a decision to expend the necessary funds, and even after the requirement for making a separate special unit for each of several particular mappings has also been faced and in a sense overcome by a decision to invest even that multiple - yet nevertheless the technology continues to be not only uneconomic but also technically unsatisfactory, because the resulting images carry inadequate, frustratingly small amounts of image information. This obstacle has heretofore remained a persistent problem, and will be further discussed shortly in subsections (f) and (g).

(e) WFS limitations of conventional lidar - The field of wavefront sensors (WFS) is an important one for laser diagnostics.

High-power short-pulse lasers are essential components of several different applications (e. g., laser trackers and imaging laser radar); however, such lasers are notoriously unreliable.

It is difficult for vendors to manufacture them to desired specifications, and the devices seldom survive to their projected lifetime (at least at rated output power). One of the most difficult aspects of the manufacture of such devices is the lack of diagnostic equipment for the total characterization of the laser output.

Typical laboratory equipment for the characterization of high-power pulsed lasers consists of three instruments: (1) a power meter for measuring average power, (2) a single fast detector with an oscilloscope for measuring the pulse width temporally, and (3) a laser characterization imager that provides a spatial display of the beam intensity. Each of these instruments has significant limitations in the data that it produces.

The single fast detector averages over the spatial components of the beam, and the laser characterization imager averages over the temporal component of the beam. That is to say, the integration time of the camera in the laser characterization setup is typically orders of magnitude longer than the laser pulse width.

The power detector, furthermore, averages over both the spatial component and the temporal component. Thus no one instrument provides information in time and space and phase with high resolution in all dimensions. Yet this is precisely the information that the laser designers use in their modeling and simulations. Using commercial laser modeling software such as GLAD, laser designers set up models to simulate propagation of the beam in the laser cavity at very high spatial and temporal resolution.

The phase and intensity of the light, expressed as electric fields, are used in this simulated propagation process. After going through all of that analysis, however, laser designers have no way to compare the actually resulting, operating product with that preliminary analysis.

None of the above-discussed three instruments measures the wavefront of the light - i. e., maps the phase of the outgoing light as a function of position in the beam. This is the role of another type of instrument, the WFS, which does exist to perform this task - but like the laser characterization imager it averages over time.

The most common such unit in use today is the Hartmann-Shack WFS. In this apparatus, incoming light (Fig. 25) is split up onto multiple subapertures, each with its own lenslet. The lenslet focuses the light onto a detector.

When a flat wavefront (i. e., a plane wave) is incident on the device, each of the lenslets forms a spot image on-axis on the detector. When a distorted wavefront arrives, however, as illustrated, the average slope of the wavefront at the lenslet for each subaperture displaces the spot away from the on-axis position.

Although the illustration is essentially one-dimensional, the lenslets are in a two-dimensional array; and the spot-position measurements too are accordingly made in two dimensions. Design and fabrication of this kind of device is a highly specialized endeavor, available from various vendors such as Wavefront Sciences, Inc. of Albuquerque, New Mexico.

Wavefront Sciences develops lenslet arrays for a number of applications. Typical cost for initial design and fabrication of one lenslet array is $20,000.

The displacement of the spot is measured, in both the x and y directions. Average local slope of the wavefront at the measurement point is next calculated as linearly proportional to this displacement. The total wavefront is then reconstructed using algorithms that assemble such local tilts into a whole wavefront.

This process is referred to as "wavefront reconstruction".

It is a common and well-documented algorithm, currently used in astronomical and many other instruments.
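The tilt-assembly step described above can be illustrated with a small least-squares ("zonal") reconstructor. The grid size, the slope convention (each slope taken as the finite difference between neighboring phase samples) and the use of NumPy's least-squares solver below are illustrative assumptions, not details taken from this disclosure.

```python
import numpy as np

def reconstruct_wavefront(sx, sy):
    """Least-squares ("zonal") reconstruction of a wavefront from local
    x- and y-slopes, as measured by a Hartmann-Shack sensor.
    sx, sy: (n, n) arrays of average local slopes per subaperture,
    modeled here as forward differences between neighboring samples.
    Returns an (n, n) phase map, defined up to an overall piston."""
    n = sx.shape[0]
    rows, cols, vals, b = [], [], [], []
    eq = 0
    idx = lambda i, j: i * n + j
    # Each forward difference between neighbors is one linear equation:
    # w[i, j+1] - w[i, j] = sx[i, j]   and   w[i+1, j] - w[i, j] = sy[i, j].
    for i in range(n):
        for j in range(n - 1):
            rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]
            vals += [1.0, -1.0]; b.append(sx[i, j]); eq += 1
    for i in range(n - 1):
        for j in range(n):
            rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]
            vals += [1.0, -1.0]; b.append(sy[i, j]); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    w, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    w = w.reshape(n, n)
    return w - w.mean()   # remove the arbitrary piston term
```

For a pure-tilt wavefront the reconstruction is exact; for noisy slope data the least-squares solution gives the best-fit wavefront in the usual sense.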

In addition to the wavefront, which corresponds to the phase of the electric field, the intensity of the light is measured for each subaperture. This allows generation of an intensity map, as well as a phase map, of the incoming beam.

In most Hartmann-Shack WFS units, the detector behind the optics is a CCD camera or an array of quad cells. A quad cell (Fig. 26) measures the two tilts and the intensity. These detector systems are relatively slow (30 Hz to 10 kHz); while sufficient for assessing atmospheric corrections, such detection naturally is inadequate for applications that require subnanosecond sample rates.
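The two tilts and the intensity that a quad cell reports can be estimated from its four quadrant signals with simple sum-and-difference arithmetic; the quadrant labeling below is an assumed convention, not one specified in this disclosure.

```python
def quad_cell(a, b, c, d):
    """Spot-position estimate from a quad cell.
    Quadrant labels (an assumed convention): a = upper-left,
    b = upper-right, c = lower-left, d = lower-right.
    Returns (x, y) displacement estimates, normalized by the total
    signal, plus the total intensity."""
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # right half minus left half
    y = ((a + b) - (c + d)) / total   # top half minus bottom half
    return x, y, total
```

A centered spot illuminates all four quadrants equally and yields (0, 0); a spot displaced to the right raises b and d, driving x positive.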

From the foregoing it will be clear that laser laboratory devices, and in particular WFS systems when used for laser evaluation, fail to satisfy the needs of laser developers. This failure is a major problem, impeding progress in the design and refinement of more stable, reliable and long-lived lasers.

(f) Data-speed and package limitations - Signal processing in conventional STIL systems is performed using multiple digital signal processors (DSPs). These in turn impose requirements of weight, volume, power, and heat-loading which in effect demand vehicle-mounting of these sensors.

Even carried on a vehicle of modest size, practical forms of the system have relatively low data throughput and may therefore require several measurement passes to acquire adequate data for a region of interest. These limitations represent additional problems because many applications would be better served by a system that a single person could carry, or that could survey and map a region in a single pass - or ideally both.

(g) Limited uses of conventional streak lidar - No STIL packages fully suited for commercial or industrial surveillance and mapping are known to be on the market. It appears that this may be due to a combination of factors including the visual hazards mentioned earlier (with the legal liability that would be associated with operations in populated areas), and also the limited data speed and resulting packaging obstacles outlined just above.

Potentially, a primary commercial application is airborne three-dimensional terrain mapping. Terrestrial mapping is one function that can be performed using lidar, but this opportunity has not been exploited commercially. It is believed that this market may represent potential income exceeding tens of millions of dollars annually.

In California, for example, there is a need to perform complete surveys of the Los Angeles basin (2,400 square miles) every year. This task is currently performed using photogrammetry techniques. Other metropolitan areas have similar requirements, which in the aggregate thus can provide a sustained business in airborne surveying.

An entree to this terrestrial mapping application can be obtained by contacting any large commercial and industrial surveying company. A very roughly equal amount of business can be generated through on-demand surveying for particular construction jobs - particularly three-dimensional imaging.

Conventional STIL equipment, however, has not been set up (or at least not set up in a convenient format) for three-dimensional imaging. Likewise it is not available with any kind of viewing redundancy, to surmount problems of temporary or local barriers to viewing.

On land such barriers include for example landscaping or natural forestation, as well as coverings deliberately placed over some objects. At sea they include image-distorting effects of ocean waves (Fig. 19), which may completely obscure some features and actually exchange the apparent positions of others.

In purest principle it is known that foliage and other kinds of cover can be neutralized through use of spectral signatures, polarization signatures or fluorescence signatures. Analysis that incorporates spectral, polarization, spectropolarization, and fluorescence discriminations is also known to be useful for other forms of optical monitoring for which streak lidar would be extremely well suited.

Significant analysis of three-dimensional polarization analysis with lidar systems, using a "Mueller matrix" approach, is in the technical literature. See, for example, A. D. Gleckler, A. Gelbart, J. M. Bowden, "Multispectral and hyperspectral 3D imaging lidar based upon the multiple slit streak tube imaging lidar", Proc. SPIE vol. 4377, April 2001; A. D. Gleckler, A. Gelbart, "Three-dimensional imaging polarimetry", Proc. SPIE vol. 4377, April 2001; A. D. Gleckler, "Multiple-Slit Streak Tube Imaging Lidar (MS-STIL) Applications", Proc. SPIE vol. 4035, pp. 266-78, 2000; R. M. A. Azzam and N. M. Bashara, Ellipsometry and Polarized Light, North-Holland, Amsterdam (1977); R. A. Chipman, E. A. Sornsin, and J. L. Pezzaniti, "Mueller matrix imaging polarimetry: An overview", in Polarization Analysis and Applications to Device Technology, Proc. SPIE vol. 2873, June 1996; R. M. A. Azzam, "Mueller matrix ellipsometry: a review", Proc. SPIE vol. 3121, August 1997; P. Elies, et al., "Surface rugosity and polarimetric analysis", Proc. SPIE vol. 2782, September 1996; Shih-Yau Lu and R. A. Chipman, "Interpretation of Mueller matrices based on polar decomposition", JOSA A, vol. 13, No. 5, May 1996; and S. Breugnot and P. Clemenceau, "Modeling and performance of a polarization active imager at 806 nm", Proc. SPIE vol. 3707, 1999.

Thus spectral, fluorescence and polarization analyses are in theory susceptible to beneficial commercial and industrial streak-lidar exploitation. Examples are detection and measurement of atmospheric particulates, atmospheric constituents, waterborne particulates, and certain hard-body object returns (with propagation paths in either air or water).
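As a minimal illustration of the Mueller-matrix formalism invoked in the papers cited above, the sketch below propagates a Stokes vector through the textbook Mueller matrix of an ideal horizontal linear polarizer. The example is generic polarization algebra, not a lidar configuration described in this disclosure.

```python
import numpy as np

# Mueller matrix of an ideal horizontal linear polarizer (standard result).
M_pol = 0.5 * np.array([[1.0, 1.0, 0.0, 0.0],
                        [1.0, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0],
                        [0.0, 0.0, 0.0, 0.0]])

# Stokes vector for unpolarized light of unit intensity: (I, Q, U, V).
s_in = np.array([1.0, 0.0, 0.0, 0.0])

# The output Stokes vector is simply the matrix-vector product.
s_out = M_pol @ s_in   # half the intensity emerges, fully H-polarized
```

Chaining such matrices (one per optical element, or one measured per scene pixel, as in the cited Mueller-matrix lidar work) characterizes the complete polarization response of the system.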

Heretofore, however, the necessary equipment adaptations for introducing fluorescence, polarization and spectral analyses into streak lidar work - at least on a broad, general-use basis - have been unavailable. The prior art in this field thus fails to teach how to go about making such refinements in any straightforward, practical way. This gap represents a major problem, as it has left these kinds of mapping infeasible or even impossible - and accordingly several practical mapping needs unsolved.

(h) Now-unrelated technology: "eye safe" - This discussion will next turn to modern developments that have not heretofore been pragmatically associated with lidar, or particularly with streak imaging lidar. The first of these relates to population exposure.

Studies have shown that light beams of different wavelengths have respective ocular destructive powers - for any given beam power - that differ by many orders of magnitude. For all wavelengths, such studies have established respective maximum power/pulse-energy levels that are considered safe, at least for humans.

In particular the 1.5-micron region is considered to have the least ocular destructive power (by several orders of magnitude) of any wavelength from x-rays to the far infrared. More specifically, light in the visible, near-UV and near-IR regions damages the retina, while light in the far-UV and far-IR damages the cornea; but light at about 1.5 microns tends to dissipate harmlessly in the intraocular fluid between the cornea and the retina. This region is therefore commonly designated "eye safe".

Accordingly, for mapping or detecting systems that are to irradiate large areas of land in which people or other higher organisms may be present, it is important to operate in that eye-safe wavelength region as much as possible. Of course there are many reasons to avoid incidental exposure of people above the damage threshold.

Traditional photocathode materials are well suited at visible wavelengths; however, efficient photocathode detector materials do not exist for wavelengths much over one micron. Nevertheless operation at eye-safe wavelengths is feasible with commercially available fast phosphorescent materials, which respond to infrared photons by producing proportional quantities of higher frequency (visible) photons, and are thus loosely described as performing wavelength "conversion".

These materials thus in effect "convert" light at 1.5 microns to roughly 0.65 micron, with quantum efficiencies potentially as high as sixty-six percent. This process, sometimes called "upconversion" to visible light, occurs at the front of an imaging tube - in advance of the photocathode - and enables use of conventional photocathode materials that respond well to visible light.
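The energy bookkeeping behind this "upconversion" can be checked with simple photon-energy arithmetic, E = hc/λ. The wavelengths below are the nominal 1.5 μm and 0.65 μm values quoted above, and the constant is the standard hc ≈ 1239.84 eV·nm; the positive energy deficit is what the stored charging energy in the phosphor must supply per emitted photon.

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV * nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

e_ir = photon_energy_ev(1500.0)   # roughly 0.83 eV at 1.5 microns
e_vis = photon_energy_ev(650.0)   # roughly 1.91 eV at 0.65 micron
deficit = e_vis - e_ir            # extra energy per photon, from the charge
```

Because each emitted visible photon carries more energy than the absorbed infrared photon, the process cannot be a simple one-photon conversion; the stored excitation (the blue "charging" described below for ETIR phosphors) makes up the difference.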

TRANSMITTER: Transmitters operating at 1.5 microns are now commonly available from several vendors (including Big Sky Laser, LiteCycles, and GEC-Marconi) using diode-pumped solid state (DPSS) Q-switched Nd:YAG lasers. The lasers are coupled with either an optical parametric oscillator (OPO) or a stimulated Raman scattering (SRS) cell.

To achieve optimal range resolution, it is desired to keep the laser pulse length between 4 and 10 nsec. This is the range of typical DPSS Q-switched pulse lengths; accordingly transmitter conversion is straightforward within the state of the art.
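The connection between pulse length and range resolution follows from the two-way light travel time, ΔR = cτ/2. A minimal sketch (the function name is ours, not from this disclosure):

```python
C = 2.998e8  # speed of light, m/s

def range_resolution_m(pulse_ns):
    """Two-way range resolution for a lidar pulse of the given length.
    The factor of 2 accounts for the out-and-back travel of the light."""
    return C * pulse_ns * 1e-9 / 2.0
```

For the pulse lengths quoted above, 4 nsec corresponds to about 0.6 m of range resolution and 10 nsec to about 1.5 m.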

RECEIVER: Streak-camera receiver operation at 1.5 microns is currently available using the standard S1 photocathode material.

Unfortunately, S1 has very poor quantum efficiency, which reduces the applicability for real-world imaging.

Due to the low efficiency of the S1 material, a 1.5-micron streak-lidar product based upon it would similarly operate inefficiently. Pragmatically speaking, such a product would not be economic or viable.

Exploration of other materials has been reported. Those materials which are relevant to vacuum streak-tube operation include: a TE photocathode, an InGaAs photocathode, ETIR upconversion, and nonlinear upconversion.

These will be discussed in turn below, in this subsection of the present document. Another approach to eye-safe technology, but one that excludes vacuum streak-tube operation entirely, will be discussed in a later subsection.

TE Photocathode: Intevac Corp. has fabricated a transfer electron (TE) photocathode with quantum efficiencies exceeding ten percent at 1.5-micron wavelengths. (See K. Costello, V. Abbe, et al., "Transferred electron photocathode with greater than 20% quantum efficiency beyond 1 micron", Proc. SPIE vol. 2550, pp. 177-88, 1995.) This photocathode has been demonstrated in image-intensified CCDs - but not in streak tubes - at 1.5 microns.

Very interestingly, applicability of the TE photocathode for streak tubes has been shown too, but not at that eye-safe wavelength. (Please refer to V. W. Abbe, R. Costello, G. Davis, R. Weiss, "Photocathode development for a 1300 nm streak tube", Proc. SPIE vol. 2022, 1993.) Intevac used an early version of this photocathode in a streak tube that operated out to 1.3 microns - but, again, not to 1.5. Whether actually due to perceived lack of customer base or due to some failure in reduction to practice, no streak tube able to operate efficiently at 1.5 microns is currently available.

InGaAs Photocathode: Hamamatsu Corporation of Japan has an InGaAs photocathode used for near-IR photomultiplier tubes (PMTs). The Hamamatsu photocathode has poorer quantum efficiency (QE) at 1.5 microns - on the order of one percent - than the TE discussed above.

Although Hamamatsu suggests that this photocathode is compatible with its streak-tube line, no such development has appeared, at least commercially or in the literature. Again, pragmatically no successful report of testing is known.

Improvements in QE may be possible if "slower" photocathodes that have longer response time - 1 nsec vs. tens of picoseconds - are acceptable. Nanosecond response time at the photocathode would have little adverse impact on system performance.

ETIR Phosphor Upconversion: Phosphor upconversion has been performed by simply placing a layer of phosphor in front of a photocathode, in image intensifiers and other photoresponsive devices. No testing with a streak-tube photocathode, however, has been reported.

In known applications of the phosphor-upconversion technique, the incoming IR interacts with the phosphor, which has been "charged" with blue light from an LED, and in response produces light between 600 and 700 nm. This is well within the high-performance range of conventional photocathode materials. The blue charging LED can be shut off during the brief data-collection period to avoid saturating the photocathode.

This technique is particularly effective using a class of phosphors, called "electron-trapping infrared" (ETIR) upconversion phosphors, which receive incident infrared photons and in response emit corresponding quantities of visible photons.

The response band of typical ETIR phosphors is about 0.8 to 1.6 μm. The most accepted model for the operation of ETIR phosphors is as follows.

(1) The phosphors are doped such that there are two doping levels between the valence and conduction bands, with the lower doping level called the "trapping level" and the upper doping level called the "communication level".

(2) Visible photons (typically blue to green) excite electrons from the ground state to levels higher than the trapping level.

(3) In the combination process, most electrons then fall to the trapping level, where they can remain for very protracted time periods (years), in the absence of infrared photons with energies corresponding to the gap between the trapping and communication levels.

(4) Incident infrared photons excite the electrons in the trapping level to the communication level, where they radiatively decay to the ground state by combining with holes in the ground state - releasing visible photons (typically orange to red).
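The four-step model above can be paraphrased as a toy state machine for a single electron. The level names follow the text; the event encoding and the collapse of steps (2) and (3) into one "charge" transition are illustrative assumptions.

```python
from enum import Enum

class Level(Enum):
    GROUND = 0
    TRAP = 1
    COMMUNICATION = 2

def step(level, photon):
    """One electron's transitions in the simplified ETIR model.
    photon is 'blue' (charging), 'ir' (readout) or None.
    Returns (new_level, emitted), where emitted is 'visible' or None."""
    if level is Level.GROUND and photon == 'blue':
        # Steps (2)+(3): excitation above the trap, then relaxation into it.
        return Level.TRAP, None
    if level is Level.TRAP and photon == 'ir':
        # Step (4), first half: IR promotes trap -> communication level.
        return Level.COMMUNICATION, None
    if level is Level.COMMUNICATION:
        # Step (4), second half: radiative decay emits a visible photon.
        return Level.GROUND, 'visible'
    # Absent the right photon, trapped electrons persist (for years).
    return level, None
```

Running an electron through 'blue' then 'ir' events reproduces the charge-store-readout cycle that lets the blue LED be switched off during data collection.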

Another class of infrared upconversion phosphors is anti-Stokes (AS) phosphors. Since these phosphors operate via a multiphoton process, they have higher thresholds and lower conversion efficiencies than do the ETIR phosphors. AS phosphors, however, do not require visible pump photons for operation as do the ETIR phosphors. The need for a visible pump is not a major drawback for the ETIR phosphors, since the pump need not be coherent - and hence LEDs can be used as the pump source.

Nonlinear optical processes competing with ETIR phosphors include second harmonic generation (SHG), stimulated Raman scattering (SRS) anti-Stokes (AS) lines, and sum-frequency generation.

The ETIR phosphors have an advantage over all these competitors, namely that there are no phase-matching or coherency requirements, so that the ETIR process can operate over the wide incidence angles required for imaging.

Commercially available ETIR films from Lumitek International, Inc. (formerly Quantex) have been reported with 2 nsec pulse response width and twenty-two percent quantum efficiency (in reflective mode), from 1.06 μm to visible, for the company's Q-11-R film. (Ping, Gong and Hou Xun, "A New Material Applicable in the Infrared Streak Camera," Chinese Journal of Infrared and Millimeter Waves, vol. 14, No. 2, 1996, pp. 181-82.) Quantex has developed a near-infrared image intensifier (model I2) using an ETIR phosphor screen. In this project the company measured the minimum sensitivity - at several wavelengths - of a thick-film phosphor screen mated with the image intensifier in transmissive mode. (Lindmayer, Joseph and David McGuire, "An Extended Range Near-Infrared Image Intensifier," Electron Tubes and Image Intensifiers, [ed.] Illes P. Csorba, Proc. SPIE vol. 1243, 1990, pp. 107-13.) Measured minimum sensitivity of this phosphor-I2 sensor at 1.55 μm was 670 nW/cm2 (ibid. at 108). Quantex had also vapor-deposited thin films of the ETIR material onto an image-intensifier fiber-optic faceplate to improve the imaging resolution. The resolution obtained with this device was 36 line-pairs/mm, corresponding to the resolution of the 15 μm fiber pitch of the faceplate.

Researchers at the Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, have reported the development of faster, more efficient ETIR phosphors. (Ping, Gong and Hou Xun, loc. cit.) They reported a red-emitting and a blue-emitting ETIR phosphor - the red phosphor having a 1.3 nsec response width and a 66% quantum efficiency in transmissive mode, and the blue phosphor having a 1.4 nsec response width and a 47% quantum efficiency in transmissive mode.

Nonlinear upconversion: Because conversion efficiency is a function of optical power, nonlinear upconversion techniques such as SHG and SRS are not practical for low-level signals. Consequently, these techniques are typically used with the transmitter rather than the receiver.

Also, because of phase-matching requirements these techniques are typically only efficient over limited fields of view.

There is a technique in which the signal can be amplified optically in a Raman crystal, to allow for efficient upconversion or to directly overcome the poor quantum efficiency of an S1 photocathode (Calmer, Lonnie C., et al., "Marine Raman Image Amplification", Proc. SPIE vol. 3761, 1999). At present a drawback of this technique, with respect to long-range streak-tube operations, is that the Raman amplifier can be pulsed on for only 10 to nsec at a time.

As to operation at longer wavelengths (1.6 to 10 microns), advantageous for various specialized applications as mentioned earlier, Quantex has also reported ETIR phosphors for upconverting medium-wavelength infrared (3.1 to 4.5 μm) to 633 nm light.

(Soltani, Peter K., Gregory Pierce, George M. Storti, and Charles Y. Wrigley, "New Medium Wave Infrared Stimulable Phosphor for Image Intensifier Applications," [ed.] Illes P. Csorba, Proc. SPIE vol. 1243, 1990.) Unlike the eye-safe wavelength phosphors, these require cryogenic hardware for the phosphor upconversion plane.

RECEIVER CHOICE: As can now be seen, a great variety of technology is available for receiving eye-safe radiation and causing phosphor-responsive tube devices to respond. No demonstration, however, of pragmatically efficient operation in a lidar streak tube has been reported. Equipment adaptations accordingly have not been developed.

The foregoing discussions have noted the availability and use of very inefficient S1 detector material at 1.5 microns, and Hamamatsu's views as to its own low-efficiency detector material at that wavelength - both these materials being inadequate for industrial-quality instrumentation in the present state of the art - and also the Abbe experiments with TE material at 1.3 microns, and the suggestion by Ping of using ETIR material in streak cameras. In the absence of dispositive testing, none of these appears to represent an enabling disclosure of a commercially feasible eye-safe STIL system.

(i) Now-unrelated technologies: modern optical deflectors - Another area of technological advances that are known but have not heretofore been connected with streak lidar is microelectromechanical systems (MEMS). These devices are very small, and enable use of a simplified optical path (Fig. fib).

A prominent example is a Texas Instruments product denominated a "Digital Micromirror Device" (DMD). TI makes its DMD units for the commercial projection display market; accordingly they are readily available.

Key factors for efficient use of a DMD component include the mirror fill factor, scanning speed, uniformity of mirror motions, and quantification of diffraction effects. The DMD product has a fill factor higher than ninety percent, and can scan forty degrees in two microseconds (see Larry J. Hornbeck, "Digital Light Processing for high-brightness high-resolution applications", a presentation for Electronic Imaging EI '97, Projection Displays, Feb. 1997).

They are compatible with operation at virtually any wavelength of interest for imaging and detection - including, in particular, the eye-safe technology discussed in the preceding subsection. Again, although well established these devices have not been associated with lidar instrumentation heretofore.

As can now be seen, the related art remains subject to significant problems, and the efforts outlined above - although praiseworthy - have left room for considerable refinement.

SUMMARY OF THE DISCLOSURE

The present invention introduces such refinement. It has several main aspects or facets that are in general capable of use independently; however, using two or more of these primary aspects in combination with one another provides particularly important benefits as will be seen.

In preferred embodiments of its first major independent facet or aspect, the invention is a streak lidar imaging system. It is for measurements of a medium with any objects therein.

For purposes of the present document, this phrase "measurements of a medium with any objects therein" is hereby defined to mean that the system is for measurements of either a medium, whether or not it has any objects in it; or of objects, if any may happen to be in the medium - or of both the medium and any objects that may be in it. Thus it is not intended to suggest that necessarily objects are in the medium, or that measurements of objects will necessarily be performed if objects are in fact present, or that measurements of the medium will necessarily be performed when what is of interest is objects within the medium.

The system includes a light source for emitting at least one beam into the medium. In certain of the appended claims, this will be expressed by the language "into such medium". In the accompanying apparatus claims, generally the term "such" is used (instead of "said" or "the") in the bodies of the claims, when reciting elements of the claimed invention, for referring back to features which are introduced in the preamble as part of the context or environment of the claimed invention. The purpose of this convention is to aid in more distinctly and emphatically pointing out which features are elements of the claimed invention, and which are parts of its context - and thereby to more particularly claim the invention. The system also includes an imaging device for receiving light reflected from the medium and forming plural images, arrayed along a streak direction, of the reflected light. The imaging device includes plural slits for selecting particular bands of the plural images respectively. In addition the system includes a device for displacing all the plural images along the streak direction.

For purposes of this document, the terms "image" and "imaging" refer to any formation of an image, whether an optical or electronic image - or an image in some other domain - and also whether an image of spatial relationships as such, or for instance an image of a spectrum or other spatially distributed parameter.

Merely by way of example, for present purposes "image" and "imaging" may refer to images of optical-wavefront direction, or of fluorescence delay, or of polarization angle, as e. g. distributed over a beam cross-section.

The word "streak", in the present document, means substantially continuous spatial displacement of a beam - particularly in such a way that its points of impingement on a receiving screen or the like are shifted. This displacement most typically occurs during an established measurement interval, and ordinarily has the purpose of creating a substantially continuous relationship between position on such a screen and time during that measurement interval. (In classical lidar environments, but not all forms of the present invention under discussion here, a further substantially continuous relationship is thereby established between the screen position and "range" - i. e. the distance of some reflective element from the measuring apparatus.) The word "substantially" is used here so that the appended claims encompass systems and methods in which the displacement, rather than being continuous, is stepwise - but only to an inconsequential degree - for instance, merely perfunctory stepping whose primary purpose may in fact be an attempt to escape the scope of the claims.

The foregoing may represent a description or definition of the first aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, the plural slits and images enable a single streak tube to sense and record in, effectively, a space of three independent parameters rather than only two - thus curing a pervasive limitation of streak devices as mentioned earlier. The added dimension, or parameter, is extrinsic to the conventional native space of range (or time) vs. azimuth for a streak device (or indeed certain other time-resolving devices), and accordingly in this document will be called the "extrinsic dimension". The extrinsic dimension of a streak (or other time-resolving) device is not to be confused either with the intrinsic azimuth dimension - which can be mapped by optical fibers or otherwise to represent various spatial or other parameters - or with the intrinsic range/time dimension. Rather, the extrinsic dimension is a truly independent and thus novel parametric enhancement.

Although the first major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the imaging device includes an optical device, the plural images are optical images, and the displacing device includes a module for displacing the plural optical images.

Closely related to this preference are several subsidiary ones: preferably the displacing device includes an electromechanical device, and this in turn preferably includes at least one scanning microelectromechanical mirror - still more preferably an array of such mirrors. Alternatively it is preferable that the displacing device include an electrooptical device. In one alternative to the optical-device preference, preferably the imaging device includes an electronic device, the plural images are electronic images, and the displacing device includes a module for displacing the plural electronic images. When this preference is observed, it is further preferable that the displacing device include electronic deflection plates. Two related electronic-implementation preferences, usually alternative to each other, are that the imaging device include (1) an optical front end that forms a single optical image of the reflected light, and an electronic stage receiving the single optical image and forming therefrom the plural electronic images; or (2) an optical front end that forms plural optical images of the reflected light, and an electronic stage receiving the plural optical images and forming therefrom the plural electronic images.

Another preference, as to the basic first main aspect of the invention, is that the displacing device form from each of the plural images a respective streak image - so that the displacing device forms, from the plural images considered in the aggregate, a corresponding array of plural streak images. In this case preferably the system further includes a device for receiving the array of plural streak images and in response forming a corresponding composite signal.

In still another basic pair of alternative preferences, the plural slits operate on the images in either optical or electronic form. Yet a further preferred way of configuring the system is to include in the imaging device a module for forming substantially a continuum of images of the reflected beam, and arrange each of the plural slits to select a particular image band from the continuum.

There are several other basic preferences related to the above-introduced first main facet of the invention. In one of these, the light source includes an optical module for emitting at least one thin, fan-shaped beam into such medium, and the imaging device includes an optical module for receiving at least one thin, fan-shaped beam reflected from such medium; at least one of these optical modules includes an optical unit for orienting a thin dimension of the reflected beam along the streak direction.

For purposes of the present document, the phrase "thin, fan-shaped beam" means a beam which - as evaluated at impingement on or exit from a medium, or at some point within the medium - is thin in one cross-sectional dimension but is fanned out in another (generally orthogonal) cross-sectional dimension. In other words, with a thin, fan-shaped beam it is possible to find some point along the beam propagation path where the cross-section of the beam, perpendicular to the propagation path, is much broader in one direction than in the other.

Very generally speaking a "thin, fan-shaped beam" has, at some point along the propagation path, an aspect ratio of perhaps 10:1 to 500:1 and even much higher. In special applications of the invention, however, as will be clear to people skilled in this field, the ratio may be only 5:1 or even 3:1 and remain within the reasonable scope of the appended claims. The cross-section may be rectangular, elliptical, oval or irregular, provided only that the aspect ratio is at least 3:1 or 5:1 as indicated above.

This is not a requirement that the beam have such a cross-sectional relationship at the point of transmission from the apparatus or the point of receipt into the apparatus - since in fact very commonly the beam at these particular points has a very low aspect ratio and indeed may be nearly circular. The term "aspect ratio", in turn, is used in this document in a very general sense that is common in the optics field, namely the ratio of widest to thinnest dimensions of an optical-beam cross-section (without regard to the orientations of such dimensions relative to the horizon or any other reference frame).

The point is that the aspect ratio of a thin, fan-shaped beam for purposes of this field varies greatly from the point of emission (or receipt) along the optical path - for example, rising as a transmitted beam traverses a relatively clear medium and then, within a turbid medium, changing in complicated ways as the beam continues to expand in the broader dimension but also becomes more diffuse in the thinner dimension. Therefore the aspect ratio, for purposes of this definition of a "thin, fan-shaped beam", is to be evaluated at some point where it assumes a value reasonably close to its maximum value. Thus the concept that is intended by the phrasing "thin, fan-shaped" is a mental picture of a classical old-fashioned handheld fan used to cool a person's face, and does not encompass an optical beam that merely is thin at the outset and expands to a circular or slightly oval shape.
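The aspect-ratio test just defined can be made concrete numerically. In this hypothetical Python sketch (the function names, threshold default and sample values are ours, not the patent's), a beam qualifies as "thin, fan-shaped" if its cross-sectional aspect ratio, evaluated where it is near its maximum along the propagation path, meets the 3:1-to-5:1 floor indicated above:

```python
def aspect_ratio(width, height):
    # Ratio of widest to thinnest cross-sectional dimension,
    # irrespective of orientation, per the definition above.
    wide, thin = max(width, height), min(width, height)
    return wide / thin

def is_thin_fan_shaped(profiles, threshold=5.0):
    # `profiles`: (width, height) pairs sampled along the path.
    # Evaluate the ratio near its maximum, as the text requires.
    return max(aspect_ratio(w, h) for w, h in profiles) >= threshold

# A beam that leaves the aperture nearly circular (ratio ~1:1) but
# fans out to 100:1 within the medium still qualifies:
samples = [(1.0, 1.1), (1.0, 10.0), (1.2, 120.0)]
print(is_thin_fan_shaped(samples))  # True
```

Note that, consistent with the definition, a beam that is near-circular at every sampled point would fail the test.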

In three other basic preferences, the imaging device includes an optical module for forming the plural images as images of the at least one reflected beam at - respectively - discrete optical wavelengths, or different polarization states, or different angular sectors, of the at least one beam. In this last case the imaging device preferably further includes an optical device for rearranging image elements in each angular sector to form a single line image for that sector; and this device in turn preferably includes remapping optics - which still more preferably include a fiber-optic or laminar-optic module, ideally a lenslet array.
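The rearranging of an area image into single line images, as remapping optics would do it, can be sketched as follows (a purely illustrative Python model under our own assumptions; the grouping of consecutive rows into each line is our simplification):

```python
def remap_to_lines(image, n_lines):
    # Split the 2-D area image into `n_lines` groups of rows and
    # concatenate each group end-to-end into one long line image,
    # suitable for presentation along a slit.
    rows_per_line = len(image) // n_lines
    lines = []
    for i in range(n_lines):
        line = []
        for row in image[i * rows_per_line:(i + 1) * rows_per_line]:
            line.extend(row)
        lines.append(line)
    return lines

area = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 image
print(remap_to_lines(area, 2)[0])  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Physical remapping optics perform the analogous rearrangement on the optical image itself, without any electronic intermediary.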

Another basic preference is that the light source include an emitter for emitting light in a wavelength region at or near 10 microns. In this case preferably the imaging device includes an upconverter for generating light at or near the visible wavelength region in response to the light at or near 10 microns; and this upconverter in turn preferably includes phosphorescent or fluorescent material - ideally ETIR material.

Another group of basic preferences relates to the character of the medium into which the light source emits. In particular the source preferably includes some means for emitting the at least one beam into a generally clear fluid above a generally hard surface; or into a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions; or into a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.

In addition, as noted earlier it is further preferred that the first primary aspect or facet of the invention also be employed in conjunction with other main aspects introduced below.

Many of the preferences just discussed are analogously applicable to the following main facets of the invention.

In preferred embodiments of its second major independent facet or aspect, the invention is a lidar imaging system for optical measurements of a medium with any objects therein; the system includes a light source for emitting at least one light pulse into the medium. It also includes some means for receiving the at least one light pulse reflected from the medium and for forming from each reflected pulse a set of plural substantially simultaneous output images, each image representing reflected energy in two dimensions.

For purposes of generality and breadth in discussing the invention, these last-mentioned means will be called simply the "receiving and forming means" - or sometimes just "receiving means".

The "substantially simultaneous" imaging character of the receiving and forming means does not imply that the images are formed instantaneously, but rather only that formation of substantially all images in a particular set occurs during a common time interval, namely the interval during which the reflected pulse is received.

The foregoing may represent a description or definition of the second aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, as outlined above, this aspect of the invention as broadly conceived is directed to pulsed systems, without regard to whether pulses are time resolved by streaking subsystems or by other means; such other means may encompass for instance extremely fast electronics, or instead optical circuits, processors and memories that are nowadays being devised to replace electronics. This second facet of the invention thus provides, in pulsed systems generally, a triple-parameter capability that enables range resolution (or equivalently time resolution) of two independent characteristics modulating the optical pulses - not just one such characteristic as in the past.

The term "range" as used in this document, if not otherwise specified or clear from the context, ordinarily means distance from the apparatus. As suggested earlier, this understanding is implicit in the acronym "lidar".
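This round-trip convention can be illustrated with a short sketch (our own illustration, not part of the patent): range is half the echo delay multiplied by the speed of light in the medium.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_time(round_trip_s, refractive_index=1.0):
    # The pulse travels out and back, so halve the round trip;
    # the refractive index accounts for slower propagation in
    # water or other media.
    return (C / refractive_index) * round_trip_s / 2.0

# A 1-microsecond round trip in air (index ~1) corresponds to ~150 m:
print(round(range_from_time(1e-6)))  # 150
```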

Although the second major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the light source includes means for emitting not just one but a series of light pulses into the medium, each of the pulses in the series generating a corresponding such image set (in this way the receiving means generate a sequence of plural corresponding image sets); and further includes some means for storing the sequence of corresponding image sets.

Another basic preference is that the receiving means include some means for allocating image elements, in each image of the set, as among (1) azimuth, (2) range or time, and (3) an extrinsic measurement dimension. As noted earlier, the azimuth dimension may be mapped to another physical quantity as desired. In this case it is further preferred that the extrinsic measurement dimension be, selectively, wavelength, or polarization state, or a spatial selection. (The latter typically is a different spatial choice than any that is represented by azimuth, in the invention as actually used.) Two additional basic preferences are that the receiving means include some means for causing the images in the set to be substantially contiguous; and that the receiving means include some means for receiving the reflected light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1. In this latter case it is further preferable that the light source include some means for emitting the at least one light pulse as a beam with a cross-section that, analogously, has an aspect ratio on the order of 1:1.
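The three-way allocation of image elements can be pictured as a data cube (a sketch under our own assumptions; the sizes and axis order here are purely illustrative and not taken from the patent):

```python
# Hypothetical sizes: 512 range bins, 128 azimuth pixels, and an
# extrinsic dimension of 4 (e.g. four wavelength bands, polarization
# states, or spatial selections - one per output image).
N_RANGE, N_AZIMUTH, N_EXTRINSIC = 512, 128, 4

def make_cube():
    # Each output image is a 2-D (range x azimuth) frame; one frame
    # per extrinsic value supplies the third, extrinsic parameter.
    return [[[0.0] * N_EXTRINSIC for _ in range(N_AZIMUTH)]
            for _ in range(N_RANGE)]

cube = make_cube()
print(len(cube), len(cube[0]), len(cube[0][0]))  # 512 128 4
```

A conventional streak device records only the first two axes of such a cube; the extrinsic axis is what the plural-slit arrangement adds.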

Yet another basic preference is that the receiving means include some means for forming the images in such a way that the two dimensions are range/time and output-image azimuth, for a particular extrinsic dimension that corresponds to each output image respectively. (In other words, when a user looks - whether visually or using viewing apparatus - at any one of the output images, what the viewer sees is an intensity plot in time or range vs. output azimuth.) As noted earlier, this second main facet of the invention is preferably used in conjunction with certain of the other major aspects and their preferences. Thus for instance here preferably the light source includes some means for emitting the at least one beam into each of certain specific kinds of media, enumerated in the last above-stated preference for the first aspect of the invention.

In preferred embodiments of its third major independent facet or aspect, the invention is an optical system. It includes a first lenslet array for performing a first optical transformation on an optical beam; and a second lenslet array, in series with the first array, for receiving a transformed beam from the first array and performing a second optical transformation on the transformed beam.

The foregoing may represent a description or definition of the third aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, whereas earlier uses of lenslet arrays have been limited to applications that analyze essentially static or low-frequency phenomena (i.e. only events having no significant frequency content above roughly 10 kHz), this third main aspect of the invention enables reconfiguration of optical beams into formats that are useful in time resolution of extremely fast phenomena (roughly 1 GHz and above).

Furthermore this facet of the invention performs such reconfigurations with minimal loss of certain optical characteristics, such as - depending on the particular layout - optical phase, or wavefront orientation. This aspect of the invention thereby facilitates an advance in the art of time-resolving complicated optical signals, by five orders of magnitude.

Although the third major aspect of the invention thus advances the art to an extent that is all but astonishing, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably one of the arrays includes image-plane-defining lenslets to define image elements of the beam; and the other array includes deflecting lenslets to selectively deflect beam elements to reconfigure an image transmitted in the beam. In this case, preferably the one of the arrays that defines the image elements is the first array.

Another preference for this image-defining/deflecting case is that the system further include some means defining an image carried by the beam, and that the first array be positioned substantially at a focal plane of the image. In this case it is further preferable that the image-defining means include a lidar source emitting an excitation beam to a region of interest; and collection optics receiving a reflection of the excitation beam from the region and focusing the reflection at the focal plane.

This preferable form of the invention is still further preferably implemented in such a way that the two transformations, considered together, include selectively imaging particular components of the beam onto plural slits following the second array; and also incorporating some means for streaking images from both slits for reimaging at a detector.

Yet another preference for the image-defining/deflecting case under discussion is that the first array also relay the image from the focal plane to the second array. In this case preferably the second array is substantially in a plane, and that plane is disposed substantially at the relayed image.

One other preference, as to the third main facet of the invention, will be mentioned here. The two transformations, considered together, include selectively imaging particular components of the beam onto plural slits following the second array.

In preferred embodiments of its fourth major independent facet or aspect, the invention is a streak lidar imaging system for making measurements of a medium with any objects in the medium. The system includes a light source for emitting into the medium a beam in a substantially eye-safe wavelength range.

It also includes an imaging device for receiving light reflected from such medium and forming an image of the reflected light. In addition the system includes an upconverter for generating light at or near the visible wavelength region in response to the reflected light (i.e. to the returning light that is in a substantially eye-safe wavelength range); and a device for displacing the image along a streak direction.

The foregoing may represent a description or definition of the fourth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.


In particular, by means of this fourth aspect of the invention the fine range- or time-resolving capabilities of streak imaging lidar are made available for many kinds of measurements that otherwise would be precluded by proximity to unprotected people.

In some relatively small-scale applications, such people might be simply passersby in a laboratory, or in an industrial or like environment; and in applications at a larger scale such people might be members of a general population. It has not previously been suggested that the temporal resolving power of streak lidar systems could be exploited for such applications.

Although the fourth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, in alternative preferences the upconverter may be positioned in the system either after or before the displacing device.

Preferably the upconverter includes phosphorescent or fluorescent material, and most preferably STIR material. The light source preferably emits the beam in a wavelength range at substantially 1 micron.

In preferred embodiments of its fifth major independent facet or aspect, the invention is a streak lidar imaging system. It includes a light source for emitting a beam, and an imaging device for receiving light originating from the source and for forming an image of the received light.

The system also includes at least one microelectromechanical mirror for displacing the image along a streak direction. In addition it includes an image-responsive element for receiving and responding to the displaced image.

The foregoing may represent a description or definition of the fifth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, this fifth main aspect of the invention enables enjoyment of streak-lidar capabilities without the inordinate expense and fragility of an evacuated streak tube with its associated high voltages and relatively temperamental drive electronics - and also without the previously discussed cumbersomeness and very limited operating properties of a macroscopic scanning mirror, such as a relatively large spinning polygon.

Although the fifth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one mirror includes an array of multiple microelectromechanical mirrors.

In case the system is for use with an optical medium, another preference is that the light source include some means for emitting the beam into the medium and that the imaging device include some means for receiving light reflected from the medium and forming an image of the reflected light.

Another basic preference, as to this fifth main facet of the invention, is that the light source include a resonant device and the imaging device include some means for causing imperfections in resonance of the resonant device to modulate the image. In this case, particularly if the resonant device includes a laser, it is also preferred that the imaging device include some means for causing imperfections in optical wavefronts from the laser to modulate the image - and further preferably these causing means include some means for deflecting elements of the beam in response to local imperfections in the coherence. These deflecting means, in turn, preferably include at least one lenslet array.

In preferred embodiments of its sixth major independent facet or aspect, the invention is a spatial mapping system for mapping a region. The system includes a light source for emitting at least one thin, fan-shaped beam from a moving emission location toward the region; a thin dimension of the beam is oriented generally parallel to a direction of motion of the emission location.

The system also includes an imaging device for receiving light reflected from such region and forming an image of the reflected light. The system also includes some means for separating the reflected light to form plural reflected beam images - representing different aspects of the region, respectively.

Here the term "aspects" means characteristics or properties of the region. It is to be interpreted broadly to include different values of any parameter that is susceptible to probing by an optical spatial-mapping system. Merely by way of example, such a parameter may be spatial, dynamic, optical, chemical, acoustic, biological or even sociological.

Also included is an image-responsive element for receiving and responding to the plural beam images. The foregoing may represent a description or definition of the sixth aspect or facet of the invention in its broadest or most general form.

Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. In particular, a thin fan beam or plural such beams can be used to yield a representation of two or more values of any optical parameters - expanding the usefulness of a pushbroom mapping system into a three-dimensional regime, with the selected parameter functioning as the third dimension.

Thus with this aspect of the invention it is not necessary to be limited to mapping spatially at just one wavelength, or in just one polarization state or even at only one visual angle, one pair of subtended angular widths, or one focal condition.

Subject to laser-technology limitations, a single emitter may be made capable of emissions at more than one coherence length; and this would enable application of the invention with coherence length as the extrinsic parameter.

Although the sixth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, there are several preferences regarding capability of the separating means to discriminate between different aspects of the probed region: spatially different aspects, or aspects that are carried in portions of the beam received at different angles, or in portions of the beam received at different angular ranges, or in different polarization states of the beam, or in different spectral components.

Another basic preference is that the separating means include means for discriminating between combinations of two or more different aspects of the region that are carried in different characteristics of the beam - at least one of which characteristics is selected from among spatially different aspects, different polarization states, and different spectral components of the beam.

In this case preferably at least two of the characteristics are selected from among the three just-stated characteristics. Another preference is that the emission location be a spacecraft, another type of vehicle, or another type of moving platform. In the alternative, preferably the emission location is a fixed source cooperating with a scanning system to provide a moving image of the light source. Also applicable here are the previously enumerated preferences as to the medium into which the beam is emitted.

In preferred embodiments of its seventh major independent facet or aspect, the invention is a spatial mapping system for mapping a region. The system includes a light source for emitting a beam, whose cross-section has an aspect ratio on the order of 1:1, from a moving emission location toward the region.

It also includes an imaging device for receiving light reflected from such region and forming an image of the reflected light; and some means for separating the reflected light to form plural reflected beam images representing different aspects of the region, respectively. Also included is an image-responsive element for receiving and responding to the plural beam images. The foregoing may represent a description or definition of the seventh aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. This form of the invention extends added-parameter measurement to the spatial-mapping extension introduced above for thin fan-beam work. As will be seen, the benefits of this extension are felt in the ability to obtain much more sophisticated image interpretations, in a variety of applications.

Although the seventh major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the imaging device includes some means for also receiving the reflected light from the region as a reflected beam whose cross-section has an aspect ratio on the order of 1:1.

In preferred embodiments of its eighth major independent facet or aspect, the invention is a spectrometric analytical system for analyzing a medium with any objects therein. The system includes a light source for emitting substantially at least one pencil beam toward the medium, and an imaging device for receiving light reflected from such medium and forming an image of the reflected light.

Also included are some means for separating the reflected light along one dimension to form plural reflected beam images arrayed along that "one dimension" and representing different aspects of the medium, respectively. The system further includes optical-dispersing means for forming a spectrum from at least one of the plural images, by dispersion of the at least one image along a dimension generally perpendicular to the one dimension - and an image-responsive element for receiving and responding to the plural beam images.

The foregoing may represent a description or definition of the eighth facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, this eighth main facet of the invention provides the benefits of an added extrinsic parameter, in the context of hyperspectral measurements. In particular through use of this eighth facet of the invention it is possible to obtain spectra for different aspects of the medium or objects, i.e. for different values of the optical properties listed above in discussion of the first preferences for the sixth facet of the invention.

Although the eighth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the dispersing means include means for forming a spectrum from each of the plural images, respectively.

Other preferences are that the separating means include some means for separating the reflected light to form plural images representing aspects of the beam that, respectively, are spatially different - or represent different polarization states or different spectral constituents.

In preferred embodiments of its ninth major independent facet or aspect, the invention is a wavefront sensor, for evaluating a light beam from an optical source. The sensor includes optical components for receiving the beam from the source.

It also includes optical components for subdividing small portions of such beam to form indicator subbeams that reveal a direction of substantially each of the small portions; and optical components for steering the indicator subbeams to fall along at least one slit. The sensor also includes some means for streaking light that passes through the at least one slit; and some means for capturing the streaked light during a streaking duration.

The foregoing may represent a description or definition of the ninth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, by steering the subbeams to fall along a slit, the sensor provides an output that enables the aggregate of those subbeams to be streaked - and thereby makes it possible to time resolve the directional or other behavior of the subbeams. Because many subbeam sets can be arrayed along even just a single slit, wavefront directions can be time resolved for many points in the beam cross-section.

Although the ninth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one slit comprises plural slits.

(This preference raises very greatly the number of points in a beam cross-section that can be time resolved.) In another preference, particularly for use with a resonant optical source, the receiving and subdividing components include means for causing imperfections in optical wavefronts from the resonant source to modify the light that passes through the at least one slit. In this case preferably the receiving, subdividing and steering components include at least one lenslet array - and more preferably at least two lenslet arrays in optical series.

In this latter arrangement preferably the lenslet arrays include one array that defines image elements at or near a focal plane of the beam, and another array that receives the image elements relayed from the first array, and that steers light from the image elements to the at least one slit. An alternative basic preference is that the receiving, subdividing and steering components comprise at least one lenslet array in optical series with at least one fiber-optic remapping device - in other words, that the steering function be performed by fiber-optic remapping rather than the other array just mentioned.
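The way a lenslet reveals the local direction of a small beam portion can be sketched with the usual small-angle relation (our own illustration, in the general spirit of lenslet-array wavefront sensing; the numbers are hypothetical and not taken from the patent):

```python
def local_tilt_rad(spot_offset_um, lenslet_focal_mm):
    # A local wavefront tilt displaces the lenslet's focused spot
    # from its on-axis position; for small angles,
    # tilt (rad) ~ spot displacement / lenslet focal length.
    return (spot_offset_um * 1e-6) / (lenslet_focal_mm * 1e-3)

# A 5-micron spot shift behind a 5-mm focal-length lenslet implies
# a local tilt of about one milliradian:
print(round(local_tilt_rad(5.0, 5.0), 6))  # 0.001
```

Streaking the slit-aligned spots then time-resolves these local tilts, which is the directional behavior discussed above.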

In preferred embodiments of its tenth major independent facet or aspect, the invention is a spectrometric analytical system for analyzing a medium with any objects therein. The system includes a light source for emitting substantially at least one pencil beam toward the medium, and an imaging device for receiving light reflected from such medium and forming an image of the reflected light.

It also includes optical or electronic means for streaking the plural images, and an image-responsive element for receiving and responding to the plural beam images. Also included is a computer for extracting fluorescence-lifetime information from a signal produced by the image-responsive element.
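One simple way such a computer could extract a fluorescence lifetime from a streaked time profile is a log-linear least-squares fit to a single-exponential decay (a minimal sketch under our own assumptions; the patent does not specify this algorithm):

```python
import math

def lifetime_from_decay(times, intensities):
    # For I(t) = I0 * exp(-t / tau), log I is linear in t with
    # slope -1/tau; fit that line by least squares and invert.
    logs = [math.log(i) for i in intensities]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -1.0 / slope

# Synthetic decay with tau = 2.0 ns, sampled at 1-ns steps:
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [math.exp(-t / 2.0) for t in ts]
print(round(lifetime_from_decay(ts, ys), 3))  # 2.0
```

Real streak data would of course be noisy and possibly multi-exponential, in which case a weighted or nonlinear fit would be more appropriate.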

The foregoing may represent a description or definition of the tenth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.

In particular, this hybrid form of the invention uniquely combines capabilities of earlier-discussed facets. It does so in such a way as to provide, from a single apparatus, information about biological materials or volume materials - such as clouds - that heretofore would strain the capabilities of two or more different instruments.

Although the tenth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one beam includes at least one pencil beam.

Also preferably the imaging device includes a hyperspectral optical system. In this case it is further preferred that the imaging device include a plural-wavelength optical system, in which each of plural wavelength bands is arrayed along a length dimension of a respective slit-shaped image.

All of the foregoing operational principles and advantages of the present invention will be more fully appreciated upon consideration of the following detailed description, with reference to the appended drawings, of which:

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is an elevation, highly schematic, of conventional streak-tube architecture;

Fig. 2 is a pair of simplified diagrams showing (a) in perspective or isometric view, typical STIL data collection in a plane that extends away from the instrument through and beyond an object of interest; and (b) an elevation of a resulting CCD image for the same measurement setup;

Fig. 3 is a pair of perspective views showing (a) a ruggedized streak-tube assembly and (b) a three-dimensional model of a two-receiver lidar system fabricated to fit into a very small volume of an unmanned underwater vehicle;

Fig. 4 is a set of illustrations relating to conventional STIL terrestrial mapping data, including (a) an aerial photo of buildings being surveyed, (b) a single laser shot showing raw data for one line image (outlined by a rectangular white line in [a]), and (c) a three-dimensional rendered range image of a corresponding area (outlined in [a]) generated by reconstruction from the individual line images (brighter is taller);

Fig. 5 is a set of five images relating to data acquired by conventional STIL imaging of an object submerged in shallow water, including (a) a photo of the bottom object used in the experiment, (b) a contrast/reflectivity image of the object at a depth of 6 m (20 feet), (c) a range image of the same object, in which brighter rendering represents greater proximity to the instrument, (d) a three-dimensionally rendered surface, with contrast data mapped onto the surface, and (e) a one-dimensional cut through the range image, with actual object profile, showing an excellent match between data and object;

Fig. 6 is a set of three diagrams showing general characteristics applicable to several different forms of the invention: (a) system block diagram, (b) wavelength dispersion methods, and (c) data-processing algorithms and hardware implementations;

Fig. 7 is an elevational diagram, somewhat schematic, of streak-tube receiver architecture for plural-slit (in this case, as in Fig. 6, three-slit) operation;

Fig. 8 is a set of four diagrams - three plan views and one isometric - showing how a simple area image can be remapped onto plural lines suitable for input into the plural-slit streak tube: (a) the original area image, (b) the area image rearranged into plural lines (here four) by remapping optics, (c) the relationship between the first two views, and (d) a deformed, sliced, and reassembled fiber-optic device performing the transformation shown in the first three views;

Fig. 9 is an isometric diagram, highly schematic, of lenslet-array remapping optics for a plural-slit streak-tube configuration;

Fig. 10 is a pair of phosphor-screen images (in both, range is vertical and a spatial dimension horizontal) for comparison: (a) conventional single-slit operation, and (b) plural-slit operation according to the invention, in particular with four slits;

Fig. 11 is a set of two elevational diagrams, both highly schematic, and an artist's perspective or isometric rendering, showing respectively (a) conventional optical streaking by means of a large rapidly rotating mirror, (b) innovative optical streaking with MEMS mirrors and no other moving parts, and (c) a more-specific novel design with DMDs;

Fig. 12 is a diagram like Fig. 7, but showing a further modification for plural-slit spectroscopy - with a wavelength dispersion device arranged to array the available spectrum along the height of the photocathode plane, and at the photocathode a preferably programmable slit-mask device (e.g. spatial light modulator) for selecting particular plural spectrally narrow wavebands specified for a desired spectral analysis; as well as streaked forms of those wavebands appearing on the anode at right;

Fig. 12(a) is a like diagram but using an electronic, rather than optical, form of slit masking - with the selective plural slit mask now inside the tube, following the photocathode;

Fig. 12(b) is a diagram like Fig. 12 but showing an optical, rather than electronic, form of streaking - which obviates the need for an evacuated tube and also represents one way of facilitating use of wavelengths outside the visible range;

Fig. 13 is a set of three diagrams showing, for the Fig. 12 system, respective phenomena related to the image as it progresses through the front-end components: (a) the vertically dispersed spectrum at the entrance to the slit-mask device, (b) the programmable mask itself, here set for three slits at specified heights, and (c) the resulting selected image on the photocathode - consisting of three isolated line images in respective shallow wavebands;

Fig. 14 is a diagram, highly schematic, of a fan beam passing through a cloud of atmospheric constituents as for analysis in the system of Figs. 12 and 13, with the horizontal axis of the three-waveband image in the spatial dimension and the vertical axis (within each spectral band respectively) in the time dimension - in this case also range;

Fig. 15 is a single-frame data image, exemplary of the type of data that can be collected by the system of Figs. 12 through 14 - and at right an associated spectral profile, from which the character and quantity of the atmospheric constituents (or contaminants) can be determined;

Fig. 16 is a set of three measurement-volume diagrams showing sampling regimes of different conventional lidar systems: (a) range-gated cameras with poor range resolution, that must avoid the ocean surface, (b) time-resolved detectors with poor spatial resolution, and (c) STIL systems that provide good resolution in all dimensions;

Fig. 17 is a set of three like diagrams, but representing the present invention: (a) increased areal coverage rate due to more lines per shot, (b) finer spatial resolution due to more pixels per line (shown as a magnified portion of one of the volume cubes), and (c) resolution-enhanced broader areal coverage, demonstrating both high spatial and high range resolution in a single shot;

Fig. 18 is an isometric diagram of fore-and-aft viewing using two fan beams to simultaneously acquire plural (here only two) different views of every object;

Fig. 19 is a set of three diagrams illustrating image-distorting effects of water waves: (a) ray deviations, for rays to and from certain submerged objects, shown for a given surface, (b) images and associated SNRs of the same objects but showing various distorting effects for rays incident normally, and (c) images of the same objects for rays coming in at 30 degrees off normal - showing different object shapes, locations and SNRs;

Fig. 20 is a pair of single-shot range-azimuth images through leaves: (a) light foliage on a relatively small tree, and (b) heavy foliage on a much larger tree;

Fig. 21 is a set of three diagrams representing an example of measuring a covered hard object with a polarization-sensitive two-slit STIL: (a) a vertically projected fan beam intersecting the object on the ground, (b) streak camera images from a single laser shot showing at left the respective images in two generally orthogonal polarization states - and at right the corresponding cross-sectional views - and (c) reconstructed images from multiple shots;

Fig. 22 is a diagram, highly schematic, of transmitter optics for a preferred embodiment of a polarization form of the invention - wherein the half-wave ("λ/2") plate allows the linear polarization to be rotated arbitrarily and the eighth-wave (λ/8) plate creates the desired elliptical polarization state;

Fig. 23 is a like diagram of complementary receiver optics for the same embodiment - the λ/8 plate and Wollaston prism being polarization-state analyzers that produce the two measurements necessary to determine the degree of polarization of the return beam;

Fig. 24 is a pair of illustrations relating to data from a preferred embodiment of a hyperspectral form of the invention: (a) a graphic showing how the spectral dimension is arrayed along the length of the slit - rather than across the width dimension - and (b) to essentially the same spectral scale, a reproduction of actual excitation and fluorescence image data acquired in a representative measurement;

Fig. 25 is an elevational diagram, highly schematic, of a conventional Hartmann-Shack wavefront sensor (WFS) that directs subbeams to a conventional quad cell to indicate wavefront angle;

Fig. 26 is a conventional quad cell as used in a Hartmann-Shack WFS to measure spot position (which corresponds to the three-dimensional tilt of the wavefront) and intensity;

Fig. 27 is an optical quad cell, according to the present invention, that performs the function of the conventional cell of Fig. 26 and also redirects the indicator subbeams onto a slit for the streak tube;

Fig. 28 is a set of three images relating to time-resolved laser pulse and fluorescence return (a) from a highly fluorescent plastic cable tie, with the two 3-D views of the data in (b) and (c) showing the elastic return from the laser pulse, followed by the longer fluorescence return: the gray lines locate the wavelength of maximum return as a function of time (vertical plot) and the time of maximum return as a function of wavelength (horizontal plot); and

Fig. 29 is a set of three illustrations relating to a combined STIL polarimeter and fluorescence sensor: (a) a diagram comparable with Figs. 12 etc. showing the front-end optics for an electron-tube system, (b) an enlarged detail view of beam-splitter optics at a fiber-optic faceplate that transfers the long-wavelength image to the actual image plane, and (c) a diagram of the combined sensor data as imaged on the photocathode slits.

DETAILED DESCRIPTION

OF THE PREFERRED EMBODIMENTS

1. PLURAL-SLIT SYSTEM FOR SINGLE-PULSE SCANLESS 3-D IMAGING

Conventional underlying streak-tube concepts have been introduced above - in subsection 2 of the earlier "BACKGROUND" section. In those conventional approaches, a system transmits and receives a single narrow fan beam.

The present invention proceeds to a new technique, plural-slit streak-tube imaging lidar (PS-STIL) - with associated plural line images, and other resulting plural parameters. This innovation provides a much more general solution to a number of lidar applications than is possible with a single line image per laser shot.

As usual, "plural" means "two or more". This term thus encompasses "multiple" - as in three, four etc. slits, as well as associated multiple images, multiple wavelengths and other corresponding parameters.

Basically, the PS-STIL approach to streak-tube imaging provides plural, contiguous range-azimuth images (Figs. 6[a] and 6[b]) per laser pulse. Each shot thereby yields a full three-dimensional image.

All three of these dimensions can be, but are not necessarily, spatial dimensions. The third dimension - the one other than range (or time) and azimuth - may instead be virtually any parameter that has an optical manifestation.

Such a parameter may be for instance focal length, or wavelength, or coherence length, or polarization angle, or the subtended angle or other property of a beam or subbeam. (It is not intended to suggest that these particular examples are particularly preferred embodiments of the extrinsic dimension, or even that they are particularly useful choices, but rather only that the available range of choices for that dimension may be extremely broad.) The plural-slit STIL technique of the present invention, however, does enable collection of two-spatial-dimension area images rather than line images. It also thereby enables formation of true single-pulse three-spatial-dimension images.

Each slit forms its own image zone on the phosphor anode (or other receiving surface such as will be considered below). The plural-slit system requires no modification to the streak tube or CCD (although such modification can be provided for further enhancement if desired); the system can be implemented through use of external front-end remapping optics that convert an image into plural separated line images.

The plural-slit technique takes advantage of the fact that many of the pixels ordinarily dedicated to range are unused in the conventional single-slit configuration. After careful study of such relationships, a system designer - or advanced operator - can therefore reassign pixels to provide additional spatial (or other) information instead.

By parceling out image regions into plural lines on the streak-tube photocathode, the invention can trade off range pixels against spatial (or other) pixels. In this way the invention provides an additional degree of freedom, that can be used in any of several ways to better optimize the system for a given application.
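To make the trade concrete, here is a minimal numeric sketch. The 1024×1024 detector size is a hypothetical example (not a figure from the text): dividing the streak (range) direction among n slits cuts the range samples available to each slit by n, while multiplying the spatial pixels gathered per shot by the same n.

```python
def pixel_tradeoff(ccd_rows, ccd_cols, n_slits):
    """Divide the CCD's streak (row) dimension among n_slits line images.

    Returns (range_samples_per_slit, total_spatial_pixels_per_shot).
    """
    range_samples = ccd_rows // n_slits   # rows available to each streaked slit
    spatial_pixels = ccd_cols * n_slits   # one row of spatial pixels per slit
    return range_samples, spatial_pixels

# Conventional single-slit configuration: every row devoted to range.
print(pixel_tradeoff(1024, 1024, 1))   # (1024, 1024)
# Four-slit PS-STIL configuration: range rows traded for spatial coverage.
print(pixel_tradeoff(1024, 1024, 4))   # (256, 4096)
```

The total pixel count is conserved; only its allocation between range and spatial sampling changes, which is the degree of freedom described above.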

One preferred way to implement the plural-image feature of the invention is simply to form corresponding plural optical slits - i.e. masks or baffles (Fig. 7) - on the photocathode. As the drawing suggests, this technique does not require changes to the streak tube itself (although optimizing refinements, as mentioned above, are encompassed within the scope of the invention), but can be a change in front-end optics only. Several other ways of forming and streaking plural images are introduced below (particularly in subsection 4).

Further exploiting this novel arrangement, an area image can now be remapped by fiber-optic devices (Fig. 8), by lenslet arrays, or in other ways into plural line images - not just one as in the previous fiber-optic systems of Alfano or of Knight. Such plural line images can then be directed for input into the plural-slit streak tube.

Many other ways to make use of this newly added degree of freedom are within the scope of the present invention. Several such innovations are detailed below in subsections 4 through 7.

The invention is adaptable to most uses of the now-standard STIL technology, including airborne bathymetry, airborne detection of fish schools and various other utilizations mentioned in related patent documents of Arete Associates and its personnel.

2. EYE-SAFE OPERATION, AND RELATED INNOVATIONS

Converting STIL technology to eye-safe wavelengths entails conversion of both the transmitter and the receiver.

(a) Using electronic tube with selected detector materials - It appears to the present inventor that phosphor upconversion promises to be the simplest and most straightforward technique for moving to the eye-safe regime. It also appears to the inventor that in particular the class of phosphors called "electron-trapping infrared" (ETIR) up-conversion phosphors is the best of the candidates, based on the simplicity of its application to photocathode surfaces - although testing in STIL cameras has not been reported.

Collaboration with the Lumitek firm mentioned earlier, or an alternative source, is advisable for implementation of ETIR phosphor films. Samples of the Q-32-R phosphor film are a good starting point and have been evaluated by the inventor for sensitivity, resolution, and temporal pulse spreading. For STIL testing the ETIR film should be deposited on the fiber-optic faceplate of the streak tube, and actual lidar data taken in both the lab and the field.

Also desirable is provision of a 1.5 micron source - as e.g. by conversion of an existing Nd:YAG laser, through addition of an optical parametric oscillator (OPO). LiteCycles, a manufacturer of YAG lasers also mentioned earlier in this document, can perform such work - particularly for that firm's own products.

(b) Using optical streaking - A new variant according to the present invention is to use a MEMS Digital Micromirror Device (DMD) product that scans the beam without the use of large moving mirrors, thus allowing the system to be compact and rugged.

As will be recalled, such units are available as an off-the-shelf component from Texas Instruments, though such usage in a streak imaging lidar system has not been suggested earlier. This technique allows for longer-wavelength operation - thereby promoting the previously noted specialized applications that call for better penetration and discrimination - and also provides a substitute methodology for the eye-safe regime, for situations in which the ETIR phosphor may prove inadequate.

A MEMS-based PS-STIL sensor uses the motion of the MEMS elements to provide the optical streaking. The beams enter the MEMS sensor (Fig. 12[b]) through a series of slits, closely analogous to the arrangement when a streak tube is used, and the beams are reimaged by a lens to form a new image on the detector.

Putting a MEMS mirror (labeled "DMD" in the figure) behind the lens as shown, and bouncing the light back through the lens, has an important result: the MEMS mirror can then be used near a pupil plane, where the scanning motion of the MEMS elements is translated into motion in the image plane (at the detector).

Attempting to place the MEMS element at a focal plane and then reimaging that focal plane onto the detector would result in no streaking.

The rest of the system is closely similar to the streak-tube-based systems. Software and control-electronics modifications are needed only as appropriate to accommodate the specific detailed relationships between the beam, at the detector, and given mirror-control commands.

It appears that a MEMS optically streaked camera can be very compact. In fact the invention contemplates that size limitations are determined by the IR detector array, rather than the streak-tube assembly. Thus the main details are diffraction effects and mirror-response uniformity during scanning.

Nonuniformity of motion generates a blur in the range direction, and diffraction from the small mirrors can cause blurring in both the range and spatial directions. Based on the discussion here and in the papers mentioned, these aspects of DMD performance are straightforwardly calculated and optimum configurations then found accordingly.

Available DMD scanning speeds allow for 1 GHz range sampling on a 25 mm (1 inch) detector that is spaced 70 mm from the DMD scanner. DMD units are compatible with operation at any wavelength for which a standard area imaging sensor is available (e.g. 300 nm to 5 microns).
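The quoted figures can be turned into rough scanner requirements. The sketch below is back-of-envelope only: the 1000-bin range window is an assumed value, and the factor of two reflects the usual doubling of beam deflection with mirror tilt.

```python
C = 3.0e8  # speed of light, m/s

def dmd_streak_requirements(f_sample_hz, detector_m, throw_m, n_range_bins):
    """Back-of-envelope requirements for optical streaking with a scanning
    micromirror: sweep velocity at the detector and mirror angular rate.

    A mirror tilt of theta deflects the reflected beam by 2*theta, so a spot
    at distance throw_m moves at 2 * omega * throw_m.
    """
    sweep_time = n_range_bins / f_sample_hz      # time to cross the detector
    spot_velocity = detector_m / sweep_time      # m/s at the detector
    omega = spot_velocity / (2.0 * throw_m)      # mirror angular rate, rad/s
    range_resolution = C / (2.0 * f_sample_hz)   # two-way range bin size
    return spot_velocity, omega, range_resolution

v, omega, dr = dmd_streak_requirements(1e9, 25e-3, 70e-3, 1000)
print(f"range bin: {dr * 100:.0f} cm")   # 1 GHz sampling -> 15 cm bins
print(f"spot velocity: {v:.0f} m/s, mirror rate: {omega:.0f} rad/s")
```

Even with these assumed numbers, the arithmetic shows why scan-rate uniformity matters: any fractional wobble in the angular rate maps directly into a fractional blur in the range direction.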

IR detectors and optical streaking, for present purposes, must be implemented with care to avoid disrupting the plural-slit functionality of the present invention - which is also accomplished optically.

(c) Longer-wavelength operation - A microelectromechanical DMD device, mentioned above, can be procured from Texas Instruments and straightforwardly integrated into a streak-camera configuration. The resulting instrument is particularly effective for special applications that exploit longer wavelengths as pointed out previously.

Uniformity of mirror motion and diffraction effects should be measured, and for relative safety and simplicity of operation most or all of this preliminary laboratory phase can be carried out with the system operating in the visible. Of course care must still be taken to avoid eye injury.

Those skilled in the field will understand that actual diffraction is of course different when the unit is put into service in the infrared, but also that the behavior in the two wavelength regions is related in simple and very predictable ways. For infrared operation it is also advisable to determine the emissivity of the DMD unit itself, in the anticipated actual operating region of 1 to 5 microns - and its utility in that region.

For this purpose, collaboration with infrared astronomers may be found particularly helpful; for example, the present inventor has made arrangements with such scientists at Steward Observatory, a facility of the University of Arizona. A 3-to-5-micron streak camera system is a good initial implementation for practical testing, familiarizing personnel with operation and use as a platform for possible redesign to satisfy specific requirements of the intended application.

3. SIGNAL-PROCESSING REFINEMENTS

As mentioned earlier, conventional real-time range processing typically uses multiple SHARC digital signal processors (DSPs), each capable of 120 Mflop/sec (million floating-point operations per second). Through porting of existing and proven algorithms to run on a single field-programmable gate array (FPGA), throughput can be enormously increased.

Such units are available from the previously mentioned firm Nallatech Limited, of Glasgow. Nallatech has demonstrated image processing at 100 Gflop/sec (billion floating-point operations per second).

More specifically Nallatech's demonstration entailed a matched-filter convolution of a 13x13-pixel kernel on a 1024x1024 image at 1000 Hz. This increase by three orders of magnitude was accomplished in part by configuring the FPGA hardware specifically for the image-processing task.
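As a sanity check, the demonstration's arithmetic rate can be computed directly from those figures. The counting convention here (one multiply and one add per kernel tap per output pixel) is an assumption; counting only multiply-accumulates halves the result, but either way the total lands in the 100+ Gflop/sec regime the text describes.

```python
def convolution_rate(kernel_size, image_size, frame_hz):
    """Arithmetic throughput of a 2-D matched-filter convolution,
    counting one multiply and one add per kernel tap per output pixel."""
    taps = kernel_size * kernel_size
    ops_per_frame = 2 * taps * image_size * image_size
    return ops_per_frame * frame_hz  # floating-point operations per second

rate = convolution_rate(13, 1024, 1000)
print(f"{rate / 1e9:.0f} Gflop/s")  # a few hundred Gflop/s by this count
```

Against the 120 Mflop/sec of a single SHARC DSP, this is indeed roughly a three-orders-of-magnitude increase.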

An FPGA-based range-processing scheme can save considerable volume and power over a real-time DSP solution. The associated reduction in computer size makes practical a short-range system that can be carried by a person - in this way resolving a previously discussed major problem of the prior art, as well as reducing weight, volume, power, and heat loading in vehicle-mounted sensors.

In addition the very compact systems enabled by the FPGA approach can devote all possible power to the laser transmitter rather than the processing hardware. Power allocated to the transmitter directly improves system performance, thus optimally actualizing the plural-slit technique of the invention with maximum detection SNR in a single receiver.

Collaboration with Nallatech is advisable before specifying the interfaces and algorithms to be implemented. Nallatech can develop a first-effort processor using the firm's hardware known as "DIME" (DSP and Image-processing Module for Enhanced FPGAs).

This is a plug-and-play PCI board, with two FPGAs, that works in a standard PC. Timing and performance of the selected algorithm should be evaluated and compared with the performance of a standard algorithm run on DSPs.

4. SPECTRAL-RESOLUTION FORMS OF THE INVENTION

The plural-image technique allows tremendous gains in ability to simultaneously and independently resolve spectral, temporal and spatial portions of a lidar signal, all within the same excitation pulse. These forms of the invention represent extension of pixel-remapping concepts into the spectral domain.

This approach can provide a simplified and more robust mode of fluorescence imaging in detecting and measuring atmospheric particulates and constituents, waterborne particulates, and hard objects (with propagation paths in either air or water). Simultaneous measurement of all pertinent signal parameters, within a common laser pulse, removes many hardware requirements and noise terms associated with use of multiple pulses to gather all of these data.

For example the laser-pulse power, pulse shape and pulse timing are all the same for an entire data frame; therefore any artifacts that would be caused by differences in these quantities are absent. In addition, absolute calibration requirements that do remain are significantly reduced.

Timing within an area is all internally consistent. The only persisting problem of this sort must arise from jitter between the start of the laser pulse and the start of the electronic receiving system.

The invention facilitates high levels of digitization and sampling, achieved by spreading out temporal data spatially on a streak-tube screen - and so enables extremely fine resolution in critical dimensions. The system can trade off resolution between wavelength, time and space in order to optimize performance for a given application.

Digitization to twelve bits and better, with up to 1000 channels sampled simultaneously at over 100 GHz, is far beyond anything that can be done with conventional analog-to-digital conversion electronics. For instance 1000 channels of twelve-bit digitization at only 100 MHz would require approximately a hundred VME-size boards (computer-bus type, chassis/backplane units).
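The scale of that brute-force alternative is easy to verify. A minimal sketch of the aggregate load for the quoted parallel-ADC configuration:

```python
def digitizer_load(channels, bits_per_sample, sample_hz):
    """Aggregate load of a brute-force parallel-ADC approach:
    total samples per second and total bits per second."""
    samples_per_s = channels * sample_hz
    bits_per_s = samples_per_s * bits_per_sample
    return samples_per_s, bits_per_s

# 1000 channels of twelve-bit digitization at 100 MHz, as in the text.
samples, bits = digitizer_load(1000, 12, 100e6)
print(f"{samples:.1e} samples/s, {bits / 8 / 1e9:.0f} GB/s")
```

The result, on the order of 10^11 samples per second and roughly 150 gigabytes per second of raw data, illustrates why an electronic-channel approach needs racks of boards, whereas the streak tube distributes the same temporal information spatially onto a single CCD frame.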

As explained above, the plural-slit technique employs two or more slits, stacked in the streak direction, to provide an additional dimension to the data set. Here the additional dimension is made to correspond to wavelength. The streaking electronics require additional controls to avoid having the streaked image from one wavelength band overlap the streaked image from another.

In one form of hardware suitable for performing plural-slit spectroscopy, a wavelength dispersion device (Fig. 12) distributes the wavelength spectrum along the photocathode plane, but just ahead of the photocathode a slit-mask device (a spatial light modulator, for example) forms a programmable set of slits for selecting desired wavelength bands from the spectrum.

The spectrum image (Fig. 13) is shown as it reaches the slit-mask device, then (to the same scale) the programmable mask itself, and finally the resulting image on the photocathode. The latter is a set of discrete line images at respective different wavelengths or, more precisely, narrow wavebands.

Alternative hardware applies the entire dispersed image (Fig. 12[a]) to the photocathode - producing a corresponding unitary electronic image, within the evacuated tube, representing the full spectral image. Masking or electronic selection within the tube then performs the selection process.

The streak-tube electronics subsystem then streaks these images to create a set of wavelength regions on the phosphor anode that have both time and space data at each wavelength. Thus each wavelength region has its own time- and space-resolved image.

An alternative to the electronic streaking just described is optical streaking. In accordance with the present invention this can be accomplished using microelectromechanical devices - developed commercially for quite different purposes - that make the streak lidar apparatus (Fig. 12[b]) far more compact and robust than possible with the spinning polygon mirrors historically employed.

This section introduces various applications of such a device. It also discusses in more detail the data in each of the individual wavelength regions.

A number of trade-offs between the various dimensions of resolution are available. The total number of sampled points is, at most, equal to the total number of pixels in the CCD. To first order, the product of the number nλ of discrete wavelength bands, the number ns of spatial points, and the number nt of time points must be equal to or less than the number np of pixels.
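This first-order budget reduces to a one-line check. The 1024×1024 CCD size below is a hypothetical example, not a figure from the text:

```python
def allocation_fits(n_wavelength, n_spatial, n_time, ccd_pixels):
    """First-order pixel budget: the product of wavelength, space and
    time samples cannot exceed the CCD pixel count."""
    return n_wavelength * n_spatial * n_time <= ccd_pixels

CCD_PIXELS = 1024 * 1024  # hypothetical 1024x1024 detector

print(allocation_fits(1, 1024, 1024, CCD_PIXELS))  # single slit, all range: True
print(allocation_fits(3, 1024, 341, CCD_PIXELS))   # 3 wavebands, 341 time bins: True
print(allocation_fits(3, 1024, 512, CCD_PIXELS))   # over budget: False
```

The middle case corresponds to the three-waveband arrangements of Figs. 12 and 13: splitting the streak direction three ways leaves roughly a third of the rows for each band's time history.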

For schemes that are more complicated (compared with those shown in Figs. 12 and 13), there are a number of options involving wavelength dispersion elements and the imaging optics - to allow for nearly any desired mapping of time, space and wavelength on the streak-tube screen and CCD. Discussed below are a few simple examples showing how a plural-image device can be useful.

Although many other examples could be given, these are reasonably representative. In all these applications the streak-tube receiver is assumed to be coupled with a pulsed laser system that provides the excitation sources for phenomena to be observed (e.g. fluorescence, Raman shifts etc.).

(a) Atmospheric constituents and contaminants - The invention passes a fan beam (Fig. 14) through a cloud of substances in the atmosphere. In the example a three-waveband system is set up, in which the fluorescence for the red and blue wavelengths is strong but the green fluorescence is weak. The horizontal axis of the three-color-band image is the spatial dimension, while the vertical axis (within each spectral band respectively) is the time dimension - which in this case also corresponds to range.

An alternative way of practicing this form of the invention is to use a plural-wavelength laser source and construct a plural-wavelength differential-imaging absorption lidar (DIAL) system.

That type of system directly measures the return of the individual wavelengths, rather than looking at the fluorescence signature.

(Related polarization, spectral-polarization and hyperspectral forms of the invention are discussed below in subsection 6.)

(b) Hard objects (airborne terrestrial mapping) - This type of application is related to the metropolitan-region and construction-project surveys discussed elsewhere in this document, but with important added benefits from the injection of spectral discrimination into the apparatus and procedures. A down-looking airborne system developing an image of the ground can perform several tasks simultaneously.

Data that can be collected include, as an example (Fig. 15), fluorescence imaging or other DIAL-type data. The bright line in each of the wavelength regions corresponds to the ground; thus, such a system provides accurate range maps of terrain under the aircraft.

If the system is only single-wavelength, a monochromatic reflectivity map of the surface can be generated through simple consideration of the brightness of the line. In the illustration, the hump at the right of the image has a different reflectivity than the rest of the image - indicative of a different substance.

Through consideration of the different wavelengths, a spectral-spatial profile of the return can be determined. The response at different frequencies in turn can be used to determine the type of material.

In essence the system acts as an active plural- or hyperspectral system. This mode has two major benefits: (1) being active, the system is not dependent on ambient lighting and therefore can operate in day or night; and (2) the system also provides an accurate range map to the surface.

This ranging allows the user to also pick out three-dimensional shapes (i.e. including heights) of objects as well as their spectral signatures. Addition of shape information makes automatic object-recognition algorithms much easier to manage, as compared with contrast-only systems.

(c) Water-based applications - Water-based measurement environments are essentially the same as the air-based systems except that the water has a very high spectral attenuation coefficient, which changes the return signature dramatically compared with air. In addition, the water has a very high backscatter coefficient compared with air; therefore the system picks up more reflected fundamental frequencies than does an air-based system.
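The dominant role of attenuation in water can be illustrated with a single-scattering Beer-Lambert sketch. The diffuse attenuation coefficients below are illustrative assumptions, not values from the text:

```python
import math

def two_way_transmission(k_per_m, depth_m):
    """Fraction of a lidar signal surviving the round trip to depth z,
    using a simple single-scattering Beer-Lambert model: exp(-2 K z)."""
    return math.exp(-2.0 * k_per_m * depth_m)

# Hypothetical diffuse attenuation coefficients (per metre):
K_CLEAR = 0.05    # very clear open-ocean water
K_COASTAL = 0.3   # turbid coastal water

for depth in (2.0, 6.0):
    print(f"depth {depth:.0f} m: clear {two_way_transmission(K_CLEAR, depth):.2f}, "
          f"coastal {two_way_transmission(K_COASTAL, depth):.4f}")
```

Even under these rough assumptions the exponential round-trip loss, absent over comparable ranges in air, is what reshapes the return signature so dramatically.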

5. PLURAL-BEAM PIXEL ALLOCATIONS

The foregoing discussions relate primarily to resolution of a single returning beam into different wavelength bands, or rearrangement and reassembly of a single returning beam into a differently formatted image. This section instead introduces expanded capabilities that arise from transmission and recovery of more than one beam at a time.

Advanced airborne lidar concepts according to this invention are applicable to a number of methodologies for detecting and classifying marine objects that are moored, floating and resting on a shallow bottom. Such objects present a hazard to shipping and recreation, as well as having some significance to police interests and the like.

Each of the ideas presented uses the patented streak-tube imaging lidar (STIL) concept as a base, but represents a significant advancement. These forms of the present invention have important advantages over other lidar-related systems, including those familiarly known as ALMDS, Magic Lantern (Adaptation), and RAMICS.

As noted elsewhere in this document, plural-image technique can be used either for generating very high resolution three-dimensional images, or for significantly increasing the areal coverage rate of an airborne lidar system. The technique can also be used to provide area images with a single laser pulse (compared to the line images normally produced with a streak-tube system).

This capability is useful for a number of applications, including a RAMICS refinement that provides the exact depth for every pixel in the image. The conventional two-sensor design instead gives depth data for only the image center.

The preferred embodiments discussed below are not wholly independent. Rather, the embodiments are to a large extent overlapping, and many of the features taken up in one or another subsection are applicable in others as well.

(a) Fore-and-aft (or "progressive") viewing - This technique in its simplest form simply makes two or more line images on the streak tube (Fig. 18). Plural fan beams - for instance two that are pointed roughly 15 degrees forward and 15 degrees backward from the aircraft, as shown - generate the corresponding plural line images.
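The geometry of the two looks is simple to quantify. In the sketch below, the altitude and ground speed are hypothetical values; only the roughly ±15-degree pointing comes from the example above.

```python
import math

def look_separation(altitude_m, half_angle_deg):
    """Along-track ground separation between the forward and aft fan beams."""
    return 2.0 * altitude_m * math.tan(math.radians(half_angle_deg))

def time_between_looks(altitude_m, half_angle_deg, speed_m_s):
    """Delay between the two independent views of the same ground point."""
    return look_separation(altitude_m, half_angle_deg) / speed_m_s

# Hypothetical platform: 300 m altitude, beams +/-15 degrees, 60 m/s ground speed.
sep = look_separation(300.0, 15.0)
dt = time_between_looks(300.0, 15.0, 60.0)
print(f"looks separated by {sep:.0f} m on the ground, {dt:.1f} s apart")
```

A delay of a few seconds between looks is long compared with typical surface-wave periods, which is why the two views of a submerged object pass through significantly different wave structure.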

Fore-and-aft viewing offers a notable improvement for an ALMDS-type system. In fore-and-aft viewing the system takes two or many looks at each object, through significantly different wave structure, on a single pass and with a single sensor - thereby enabling the system to remove wave-noise errors that can seriously degrade, or completely eliminate, the signal.

Because it is extremely unlikely that the target has low SNR or highly distorted images as seen from all observing positions (Fig. 19), multiple passes over the same area are not required.

Area coverage rate is therefore made very high without adding significant cost to the sensor.

While the near-surface targets (#1 and #2) suffer little change for the surface through which imaging is assumed, this does not matter because the rays have little chance to deviate significantly. The more distorted, deeper targets get significantly different looks.

Progressive viewing can also be used over dry land, for example in aid of imaging through cover - discussed elsewhere in this document - and through patchy fog and clouds.

(b) Greater operating efficiency or resolution - With a multibeam, multislit, multiimage system, many more spatial pixels can be placed in the swath of the sensor than is possible with a single line. This allows a STIL-carrying aircraft to fly faster, or obtain greater resolution, or both.

Areal coverage can be increased by collecting more lines (Fig. 17[a]) with a single laser shot. Since aircraft speed is limited by the size of the sampled area on the surface in the direction along the track, speed can be increased - proportionally with the number of lines.

Alternatively, the additional spatial pixels can be used to provide higher-spatial-resolution images (Fig. 17[b]). This is accomplished by inserting the additional pixels into the same area as the original line image.

(c) Staring systems - The plural-image innovation may instead be used to provide three beams, or many, for surveying over ground. One way to exploit this advance is in stationary area imaging. Conventional streak-tube surveying requires a pushbroom system - depending on aircraft motion to sample the dimension along the track - but multiimage viewing can provide the desirable "staring" type of system mentioned earlier.

In the earlier "background" section of this document it was shown that STIL systems have powerful sampling and SNR advantages (Fig. 16) over other lidar systems. It was shown also, however, that STIL systems have been restrictively limited in the amount of data collected in each laser pulse, hampered by the requirement for continuous translation of the instrument in a "pushbroom" mode, and also inadequately exploited with regard to several important commercial and industrial applications.

Thus multiimage viewing, instead of being used to provide viewing redundancy through fore-and-aft viewing as described above, can instead be used to multiply the amount of definite information acquired in each pulse. This advantage in turn can be exploited to help avoid (Fig. 17[c]) the requirement for mechanical movement of the detector or a scanner, and also to mitigate the requirement for remapping.

(d) Lenslet arrays for remapping - For performance of area imaging and other plural-image embodiments of the invention, typically a streak-tube module need not itself be changed. Representatively the only changes are in the front-end optics, called the "remapping optics", and in the software that reassembles and interprets the CCD output.

As explained earlier, remapping optics convert an area image into a series of line images that can be fed into the streak tube.

Fig. 9 shows a module for remapping an area image into lines using two lenslet arrays.

The first lenslet array is in the focal plane of the receiver (i.e., this is the location of the area image that is to be remapped - the size of the lenslets determines the pixel size of the receiver). These lenslets relay the pupil of the receiver onto the second set of lenslets.

It is the second set of lenslets that performs the actual remapping task, through the use of beam-steering elements. The steering elements, shown as prisms, can redirect the beam so that all of the light falls onto a selected one - or selected ones - of the slits.

The second lenslet array and its beam-steering elements are advantageously fabricated in one piece, i.e., as a single off-axis lens element. These lenslets can be made by photolithography, which is completely computer-controlled.

Both arrays are very straightforwardly manufacturable. They do have some restrictions requiring some design work, the most important being a maximum achievable ray deviation. This may dictate some compromises in a particular system design (e.g., reduced apertures of input optics, or reduced numbers of spatial pixels that can be accommodated). Very close collaboration with the optics fabricator is advisable, to define the lenslet design trade-offs in such a way as to optimize system performance.

The remapping process is otherwise completely under control of the designer. A number of system trade-offs should be strategized before settling on a final design.

The most significant of these is the trade-off between the density of spatial pixels and range pixels. In principle, however, several different interchangeable sets of remapping optics - each with its own associated software - can be prepared for a single streak tube, to accommodate various data-collection environments.

Once in existence, these optics/software sets can be interchanged as readily as the lenses on an ordinary SLR, video or cinema camera.

Throughput of a lenslet array may be a concern, due to fabrication details that limit the curvature of the lenses. The impact of these limitations should be examined in the detailed design of the optics.

Discussions with lenslet array vendors (such as Wavefront Sciences, Inc.) should verify in advance that reasonable throughput for the intended application is possible with existing technology, and that already-occurring improvements in the state of the art are likely to eliminate any serious limitations. Also, the two lenslet arrays have to be well aligned to each other in order to work.

An alignment mechanism (either alignment fiducials or actual alignment structures, such as a post) can be built onto the array as a part of the fabrication process. Such fixtures or other provisions further distance practice of the invention from potential alignment problems.

Although as mentioned earlier a typical cost for initial lenslet design and fabrication is $20,000 per array, this amount is mostly nonrecoverable expense for setup processes. Thus subsequent copies of the lenslet array can be procured for significantly less.

Remapping for most embodiments of the present invention requires two lenslet arrays; therefore the initial procurement is on the order of $40,000. Due to the unique requirements of this lenslet design (e.g., using two lenslet arrays in conjunction, remapping thousands of points, etc.), two separate fabrication runs should be planned to work out details and optimize system performance. Because arrays can be made of various materials, operations at longer wavelengths are feasible.

The Fig. 9 optics must provide good throughput (so that the system SNR is not degraded), and they must demonstrate the ability to survive in, e.g., the vibratory environment of a typical helicopter. Several different sets of optics should be prepared, so that the operator can resort at will to all of the methodologies discussed above - namely, high-resolution or greater area coverage, area imaging, and progressive viewing.

Each optics set should be integrated with STIL-system hardware in a modular manner for easy interchangeability as noted earlier. Lab testing should be followed by airborne ocean testing for location and characterization of realistic objects.

Once fabricated, a lenslet array is advisably mounted into an existing conventional streak-tube system in the laboratory to confirm operational area imaging. The STIL data-analysis package must be modified to accommodate the remapping matrix (i.e., the matrix of area-image pixel positions in the plural streak-tube slits) and also to account for the plural image zones on the phosphor screen (i.e., one image zone per slit as shown in Fig. 7).
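The remapping matrix just described can be illustrated with a toy index map. The following is a minimal sketch only: the function names and the row-major fill rule are assumptions for illustration, not the patent's actual optical layout.

```python
def build_remap(n: int, n_slits: int):
    """Hypothetical remapping matrix for an n x n area image: each pixel
    (r, c) is assigned a (slit, position) address, filling each slit in
    row-major order."""
    assert (n * n) % n_slits == 0, "pixels must divide evenly among slits"
    slit_len = n * n // n_slits
    return {(r, c): divmod(r * n + c, slit_len)
            for r in range(n) for c in range(n)}

def reassemble(slits, remap, n):
    """Invert the mapping: rebuild the n x n area image from slit data."""
    image = [[0] * n for _ in range(n)]
    for (r, c), (s, p) in remap.items():
        image[r][c] = slits[s][p]
    return image

# An 8x8 area image remapped into 4 slits of 16 pixels each,
# then reassembled - the software round trip the text calls for.
n, n_slits = 8, 4
remap = build_remap(n, n_slits)
image = [[r * n + c for c in range(n)] for r in range(n)]
slits = [[0] * (n * n // n_slits) for _ in range(n_slits)]
for (r, c), (s, p) in remap.items():
    slits[s][p] = image[r][c]
assert reassemble(slits, remap, n) == image
```

The same dictionary serves both directions: the optics designer reads it as "where each pixel's light must be steered," and the data-analysis software reads it in reverse to reassemble the area image from the streaked slit records.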

When these tasks are performed, single-laser-pulse three-dimensional images are generated.

Conventional STIL systems, however, are set up for wide-transverse-angle fan beams; therefore ideal practice of the invention calls for substitution of new transmitter and receiver optics better adapted for a field of view suited to three-dimensional imaging. The range-processing software that generates the three-dimensional image must be modified to work with the plural-slit configuration.

As noted earlier, remapping optics - whether as lenslet arrays (Fig. 9) or as fiber-optic modules (next subsection) - are not only expensive and somewhat cumbersome, but even when those drawbacks are endured are also able to collect only a very limited amount of image information. With the present invention, after an area image is remapped to a line image a three-dimensional region can be imaged - again, without requiring sensor motion or a scanner.

This is an ideal sensor for a range-gated system in which the exact range of the target can be determined, as well as the spatial position, for every pixel. This technique is also useful for a large number of applications in which true three-dimensional imaging is desired.

(e) Fiber optics for remapping - Although fiber-optic remappers as such are not at all new in STIL technology, the present invention makes fiber-optic remapping far more interesting than ever before. By virtue of the greater number of slit lengths that can be productively used with multislit imaging, as set forth in the preceding subsections, a great deal more can now be accomplished through remapping.

A large fiber redistribution network may prove difficult to build without first preparing an automated setup of some kind. A method for providing plural line images by stacking slices of large fiber tapers, however, has the potential to provide some of the desired capability without significant fixture fabrication.

This has been verified in discussions with the previously noted large manufacturer INCOM.

Another approach is modular, but requires custom fixturing and may require addressing some focal-plane gaps. This approach has been confirmed in talks with Polymicro Technologies, also mentioned earlier.

In any event it is advisable to work with manufacturers to determine the best way to fabricate the devices. It is also best to have smaller, less expensive test pieces built for evaluation before ordering any final pieces.

While the earlier work of Alfano and Knight was stringently hampered as to overall number of spatial pixels, the present invention very greatly mitigates that obstacle. Given a CCD camera with 1024x1024 pixels, a user can split up the image in a number of ways, with only the constraint that the product of spatial pixels and range pixels be equal to the total number of pixels (i.e., 1024x1024, or roughly 1 million pixels).

As a practical matter, unused "buffer" pixels may be desired between the slit image zones; yet in most cases this consideration only slightly reduces the total number of spatial or range pixels available. Such buffer pixels ordinarily will occupy less than ten percent of the CCD.

Table 2 shows some examples of range vs. spatial pixel trade-offs for various CCD sizes. For larger CCDs, as the table suggests, electron optics of the streak tube may dominate imaging performance.

                spatial-image size for indicated number of range bins
  CCD size        64 bins    128 bins   256 bins   512 bins
  1024x1024       128x128     90x 90     64x 64     45x 45
  2048x2048       256x256    181x181    128x128     90x 90
  4096x4096       512x512    362x362    256x256    181x181

Table 2. Trade-off between the number of spatial and range pixels for given CCD size.

In addition to square image areas, a designer has the freedom to arrange any image area that has the same number of pixels.

Given a 1024x1024 CCD and 256 desired range pixels, for example, the designer can choose a 64x64 square, a 4x1024-pixel rectangle (Fig. 10), a 4,096-pixel-by-one-line image, or virtually any other desired allocation of the available pixels.
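The trade-off tabulated in Table 2, and the allocation freedom just described, amount to simple integer arithmetic; a brief sketch (the function name is illustrative, not from the patent):

```python
from math import isqrt

def spatial_side(ccd_side: int, range_bins: int) -> int:
    """Largest square spatial image that fits on a ccd_side x ccd_side
    CCD when each spatial pixel needs range_bins pixels along the
    streak (time) axis."""
    return isqrt(ccd_side * ccd_side // range_bins)

# Reproduce representative Table 2 entries.
assert spatial_side(1024, 64) == 128
assert spatial_side(1024, 128) == 90      # 90x90
assert spatial_side(1024, 512) == 45      # 45x45
assert spatial_side(4096, 128) == 362     # 362x362

# Non-square allocations: a 1024x1024 CCD with 256 range bins leaves
# 1024*1024 // 256 = 4096 spatial pixels, which may be arranged as a
# 64x64 square, a 4x1024 rectangle, or a single 4096-pixel line.
spatial_pixels = 1024 * 1024 // 256
assert spatial_pixels == 4096
assert 64 * 64 == spatial_pixels and 4 * 1024 == spatial_pixels
```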

The very great proportion of unused range pixels that is characteristic of many conventional STIL images (top and bottom in Fig. 10[a]) can be eliminated (Fig. 10[b]) through use of the invention. The PS-STIL remapping-optics concept allows significant system-optimization options that are not readily available to designers using all-electronic sensors: with those devices, the designer typically cannot change the number and geometric arrangement of detector pixels.

(f) Terrestrial mapping - The potential importance of STIL application to municipal and industrial surveys has been discussed earlier. The present invention offers a key to unlocking this potential.

Cost-analysis studies suggest that the invention can cover roughly five square miles per hour at a total cost of roughly $5,000 per square mile, which is considered to be very cost-competitive. Thus in principle the Los Angeles area alone could generate income on the order of $10 million each year - and the effort should require only about 500 hours.

By the nature of the apparatus of the invention, this work can be done at all hours. Hence if desired the entire project, working three shifts around the clock, can be completed in only twenty days, leaving ample time for other assignments to more fully utilize a single equipment set.

Alternatively, working a single shift on weekdays only, the Los Angeles mapping could be completed in sixty working days or twelve weeks - an annual duty cycle of only twenty-four percent, still allowing another three equivalent such efforts each year, and also still assuming only a single equipment set. Other metropolitan areas have similar requirements, which in the aggregate thus can provide a sustained business in airborne surveying.

The importance of using an eye-safe system bears repetition here. A significant business advantage is reduced risk and liability for eye damage, actual or claimed: physically speaking, there is essentially no possibility of such injury from an eye-safe system, though of course claims can always be made.

Another benefit is that the system can be operated at greater power. This could allow for higher-altitude flight, which would result in greater area coverage for each hour of flight.

6. POLARIZATION AND SPECTRAL-POLARIZATION EMBODIMENTS

A single lidar sensor that can provide several different sets of information (e.g., contrast/reflectivity images, 3-D images, and polarimetry images) is described. Because all of these different data sets are collected simultaneously, and because they are all collected from the same sensor, there are no image-registration issues (i.e., difficulty in aligning data from different sensors in time or space).

Also, because this is an active system, it is not dependent on ambient lighting and can provide operation during either day or night. Finally, operation is sound at any wavelength; therefore, as previously described the system is adaptable to the eye-safe wavelength regime for deliverable systems.

Polarization forms of the invention are useful in police work for interdiction of various clandestine activities - as in fugitive pursuit and detection of drug-running or smuggling - and also for general-purpose private-sector surveying under natural but adverse viewing conditions. Such applications are discussed next. In addition to law-enforcement applications, the technology is also useful for commercial water-based applications such as bathymetry and oil-slick detection.

(a) Imaging through diffuse cover - In the case of single-shot STIL images through trees (Fig. 20), with the system of the invention the foliage is clearly defined but the ground is also visible. Thus the 3-D STIL polarimeter can detect objects under such cover.

One of the benefits of a monostatic lidar system is that the system transmits and receives through the same opening in a screening/covering medium. Therefore it can register an object under a forest canopy, whereas a simple imaging system fails - at least intermittently - in such a situation because the image is fragmented by the screening medium.

(b) Covered hard objects - Range and polarimetry data can provide a clear distinction between an object and the ground, the first by height and the second by polarization effects. While complete reconstruction of the object may be impeded by blocking effects of the cover, general size and overall polarization signature can still be estimated. Any artificial covering such as a tarpaulin or netting potentially has polarization signatures significantly different from those of the background, due to the significantly different materials. A 3-D imaging polarimeter has the potential to provide an excellent countermeasure to most forms of covering - and especially netting - in that it can both detect that covering and separate its signature from that of an object beneath.

As an example, consider detection of a domed hard object, well covered but intersected (Fig. 21[a]) by a STIL fan beam. The object is assumed to be only slightly different in reflectivity, but has a significant polarization effect on the return light.

Here the horizontal axis of the dual-slit polarimetry image (Fig. 21[b]) is the spatial dimension, while the vertical axis - within each polarization-state-analyzer (PSA) band - is the time dimension, which also corresponds to range. Three maps (Fig. 21[c]) are produced by the sensor, illustrating how difficult it would be for a criminal to effectively hide the object from this system.

Although the cover is very effective in concealing the object in a simple reflectivity image (similar to what would appear in an ordinary photo), the different polarization returns clearly reveal an object of differing material - and the conspicuous hump caused by the object in the range direction adds emphasis to the polarization data.

Reconstructed images from multiple shots illustrate the advantage of acquiring both polarization and range data in addition to reflectivity. The PS-STIL 3-D polarimeter, described in the next section, captures all these data in a single laser pulse.

Analysis of the data is performed using Mueller-matrix techniques documented earlier. These analyses are performed as described in the technical literature - but they all come down to creating at least two independent measurements (i.e., two equations in two unknowns).
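As one concrete instance of "two equations in two unknowns" - a simplified reduction, not the full Mueller-matrix treatment the text cites - the two orthogonal analyzer channels recover the Stokes parameters I and Q, from which a per-pixel degree-of-linear-polarization estimate follows:

```python
def degree_of_polarization(i_par: float, i_perp: float) -> float:
    """Per-pixel DOP estimate from two orthogonal analyzer channels.
    The two measurements are i_par = (I + Q)/2 and i_perp = (I - Q)/2:
    two equations in the two unknowns I and Q."""
    i_total = i_par + i_perp          # recovered Stokes I
    q = i_par - i_perp                # recovered Stokes Q
    return q / i_total if i_total else 0.0

# A strongly polarizing artificial surface vs. a natural background:
assert abs(degree_of_polarization(0.9, 0.1) - 0.8) < 1e-12
assert abs(degree_of_polarization(0.55, 0.45) - 0.1) < 1e-12
```

Applied pixel-by-pixel to the two slit images, this yields the degree-of-polarization map discussed below; the channel names and example intensities are illustrative.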

(c) PS-STIL polarimeter design - As can be seen from a preferred implementation of the instrument for the example given above (Figs. 22 and 23), the simplicity of the device (i.e., few optics and no moving parts) is one of the most appealing aspects of the PS-STIL approach. As the receiver polarization-state analyzer splits the incoming beam into two separate polarization states, the polarization-state generator in the transmitter has to produce only one state.

Because the two polarization states in the receiver are typically produced by a polarizer at two orthogonal settings (e.g., the two polarizer settings, -72.385° and -162.385°, are 90° apart), the rotating polarizer can be replaced with a fixed Wollaston prism. This prism, which is a commonly available polarization component, splits the two polarization states by adding an angular displacement to one polarization as compared to the other.

The angular displacement is translated into a spatial displacement by the imaging lens, which forms two separate images on the streak-tube photocathode that go into two separate slits.

Other polarization elements (e.g., beam-splitters, Brewster windows, etc.) can achieve a similar polarization split, but the Wollaston is preferred for this example.

This instrument uses a fan-beam illumination pattern. It produces three maps of the area under investigation: (1) a contrast (or reflectivity) map at the wavelength of the sensor; (2) a degree-of-polarization (DP) map that shows how the target and surroundings affected the polarization of the transmitted light; and (3) a range map that has the 3-D shape of a detected object and its surroundings.

Feature: sensor provides data from three different physical mechanisms for interacting with targets and clutter.
Benefits: significantly improves detection and clutter reduction; segmentation, the method for determining which pixels are object and which are background, is significantly enhanced by looking for correlation in all three images; algorithm development, analysis, and visualization are simpler than for hyperspectral systems.

Feature: range data.
Benefits: provides 3-D shape of target; allows separation of screening/camouflage from target; allows imaging through forest canopy.

Feature: contrast data.
Benefit: provides standard imagery, easy to evaluate and process.

Feature: polarimetry data.
Benefit: difficult to provide camouflage - paint typically has distinct polarization characteristics.

Feature: one receiver and one transmitter.
Benefits: sensor is compact; sensor is less complex; sensor is more reliable.

Feature: no moving parts.
Benefit: sensor is more reliable.

Feature: simultaneous collection of all three data sets in one receiver.
Benefit: spatial and temporal registration of the data sets is not an issue.

Table 3. Features and benefits of the 3-D imaging polarimetry system.

These three maps interact with the object and the surroundings in fundamentally different ways; therefore the clutter signatures are different (and often uncorrelated). In addition, it is difficult to conceal an object from all three of these detection methods at once; consequently countermeasures are difficult. Table 3 summarizes benefits of the three-dimensional-imaging polarimetry system.

(d) Spectral polarimetry - A plural-spectral version of the instrument is discussed above. Here a hyperspectral version is presented, combined with a polarimeter to provide single-laser-pulse spectral-polarimetry data.

The sort of data collected from the hyperspectral form of the instrument appears in Fig. 24. The receiver is identical in construction to the plural-spectral form except that the dispersive element is rotated 90 degrees, so that wavelengths are spread along rather than perpendicular to the slit.

In this case the transmitter emits a pencil beam, rather than a fan beam. A plural-slit hyperspectral version of the instrument can be implemented by transmitting plural pencil beams at the same or different wavelengths (one per slit).

The spectral-polarimetry (SP) form of the instrument is a combination of the hyperspectral and polarization forms described above. The laser transmitter is identical to that used for pola- rimetry, and the receiver has both the polarization optics and the wavelength dispersion optics in front of the lens.

This form of the instrument has the potential to allow truly unique data capture for a variety of applications. For atmospheric constituents the combination of fluorescence spectral strength with polarization data provides potentially great discrimination capability against a variety of chemical and biological species.

Biological materials are known to have variations in fluorescence lifetime ranging from 100 psec to 10 nsec. Temporal resolution of the elastic backscatter and the fluorescence from nanosecond laser pulses provides the capability of discriminating objects by measuring their fluorescence lifetimes.

In the hyperspectral mode, operator-controlled timing parameters for the streak tube determine the resolution of the range information and the distance in space over which the range information is collected. When the streak-tube sweep extends less than a meter in front of and beyond the target, the streak tube time-resolves the return laser pulse and the fluorescence return.

In such circumstances, the lifetime of the induced fluorescence can be measured. Fluorescence-lifetime measurements are an additional discriminant that may be beneficial for identification of biological agents.
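The lifetime extraction can be sketched as a single-exponential fit to the time-resolved fluorescence tail. This is a minimal illustration under the assumption of a clean exp(-t/tau) decay; real returns would require background subtraction and deconvolution of the excitation pulse, which are not shown.

```python
import math

def fit_lifetime(times_ns, counts):
    """Estimate a single-exponential fluorescence lifetime (in ns) by a
    least-squares line fit to log(counts) versus time: for a decay
    exp(-t/tau), log(counts) is linear with slope -1/tau."""
    logs = [math.log(c) for c in counts]
    n = len(times_ns)
    t_mean = sum(times_ns) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean)
                 for t, l in zip(times_ns, logs))
             / sum((t - t_mean) ** 2 for t in times_ns))
    return -1.0 / slope

# Synthetic decay with tau = 2 ns, sampled every 0.5 ns over 10 ns -
# well inside the 100 psec to 10 nsec range quoted in the text.
tau = 2.0
ts = [0.5 * k for k in range(20)]
ys = [math.exp(-t / tau) for t in ts]
assert abs(fit_lifetime(ts, ys) - tau) < 1e-9
```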

With such a measurement arrangement, different views of data (Figs. 28) collected from a single laser pulse contain the elastic backscatter and induced fluorescence return as a function of wavelength and time. In the illustrated example, the tall, narrow peak is the time-resolved elastic backscatter from the 9 ns excitation pulse at 532 nm.

The broadly sloping region is the time-resolved induced fluorescence. The induced fluorescence is delayed in time with respect to the excitation pulse and lasts more than twice as long as the excitation pulse.

Yet another extremely powerful hybrid instrument (Figs. 16) enables simultaneous acquisition of polarimetric data together with fluorescence information. Here polarization data are collected in two slits, and all other wavelengths through another pair of slits.

Such a system detects whether fluorescence occurred, but provides no further spectral-discrimination capability. Other configurations can provide significantly more spectral data.

For example, the fan-beam transmitter can be replaced with a pencil beam, and the spectrum analyzed as for the hyperspectral configuration - analogously to the fluorescence-lifetimes system discussed just above. Multiple slits can then be used for different-wavelength transmitter beams, or for multiple polarization states.

7. A GIGAHERTZ WAVEFRONT SENSOR

This system, although based upon an array of subapertures and resulting indicator-beam spot positions analogous to those used in the conventional Hartmann-Shack sensor discussed earlier, is a major advancement over that Hartmann-Shack system. In order to take advantage of the speed of the streak tube, the light from each subaperture has to be split up in such a way that it is possible to both measure the spot position and also direct the light onto at least one slit - thereby enabling streaking, so that the indicator-beam positions can be time resolved.

(a) Principles of the system - Precisely this capability is achieved by adding a second set of lenslets (Fig. 27) that serves as an array of optical quad cells. Each of these cells collects the light from a corresponding respective group of four lenslets in the first array - and then redistributes the light onto a slit.

This redistribution is invoked by proper selection of the lenslet focal length and - in effect - the strength of an associated prism that is used to "steer" the light onto the slit. In practice, however, there is no separate prism as such; rather, the lenslet and prism are an integrated single-element optic (i.e., an off-axis lens element).
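The quad-cell measurement that this optic implements reduces, on the processing side, to the standard difference-over-sum position estimate. A brief sketch, with an assumed 2x2 cell layout (the layout and function name are illustrative):

```python
def quad_cell_position(a: float, b: float, c: float, d: float):
    """Normalized spot position from the four quad-cell intensities,
    with the cells laid out  a | b   (top row)
                             c | d   (bottom row)."""
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total    # right half minus left half
    y = ((a + b) - (c + d)) / total    # top half minus bottom half
    return x, y

# Spot centered on the cell: all four lenslets collect equal light.
assert quad_cell_position(1, 1, 1, 1) == (0.0, 0.0)
# Spot displaced toward the upper right: cell b collects the most.
x, y = quad_cell_position(0.1, 0.6, 0.1, 0.2)
assert x > 0 and y > 0
```

Each (x, y) pair encodes the local wavefront tilt of one subaperture; streaking the slit then time-resolves these positions at the gigahertz rates discussed below.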

The light reaching the slit is then streaked, to provide time resolution of the linear array of indicator beams mentioned above - and thus of the three-dimensional tilts of all the subapertures whose indicators are directed to the slit. It is this stage, in particular, which achieves the previously noted improvement in time resolution by five orders of magnitude. This system, using a single slit, is believed to be novel and is within the scope of certain of the appended claims.

(b) Plural-slit enhancement - A single-slit embodiment, however, is not the end of the matter - for the number of individual wavefront elements that can be sensed and time resolved in this way is somewhat limited. To obtain a truly excellent result, the preferred system also incorporates the plural-slit feature described above - and thereby at least doubles the spatial resolution across the wavefront to be sensed.

Ideally multiple slits are used, and a corresponding multiple thereby imposed upon the number of wavefront regions whose orientations can be independently measured. As a result this gigahertz wavefront sensor (GHz WFS) is able to measure both the intensity and the phase of laser light with extraordinarily high resolution in both time and space.

Furthermore this instrument collects all the information in a single laser pulse (i.e., it does not require multiple pulses to assemble information). In this way the system eliminates any pulse-to-pulse variations that would compromise the data.

With this instrument, designers, modelers, and users of high-power short-pulse lasers now have access to data that can be directly compared to information from their design models and simulations. This allows much greater confidence in the design, and allows the models and designs to be experimentally validated.

In particular, transient "hot spot" events (intensity peaks in single pulses which reach the damage threshold of the optics) - which are often blamed for the degradation of laser performance - can now be fully captured. Conventional laser instrumentation would be able to do no more than determine that such an event has occurred. For instance, an oscilloscope trace might show a narrow bright peak, or a laser-characterization instrument might show a hot spot in the intensity image - but they fail to provide phase and intensity information at high spatial and temporal resolution, as the present invention does, and therefore cannot enable a full causal assessment.

Laser manufacturers can use these several types of new information to provide far better control over their design and fabrication processes - and consequently can greatly improve laser reliability and performance by virtue of the enormously superior diagnostic capabilities that this sensor provides. All applications using high-power short-pulse lasers will in turn benefit from better lasers resulting from use of the GHz WFS.

(c) Pixel allocation - The number of subapertures that can be sampled (and thus the spatial resolution) is limited by several parameters: the dimensions of the CCD camera on the back of the streak tube, the pulse length, and the desired time resolution of the samples. If the GHz WFS were implemented with conventional streak-tube imaging, as noted above the user could only get the number of pixels that would fit across one slit.

Plural-slit streak-tube imaging allows a great deal more flexibility in the manner in which pixels are allocated in space and time. An example of the way in which pixels can be assigned to slits will now be presented; it is very interesting in that it serves as an example for many plural-slit systems other than the WFS. In particular, the discussion below also represents a methodology for determining system limits and organizing the process of pixel-to-slit allocation for essentially all the PS-STIL embodiments introduced in this document.

Consider a camera with 1024x1024 pixels on the back of the streak tube, together with a laser having a 5 ns FWHM average pulse length. The spatial and temporal resolution limits of the system are now clear - they are given by:

    4 · P_S · P_T ≤ P_CCD

where P_S is the number of spatial subapertures (the factor of 4 is due to the fact that each spatial subaperture requires the four quad-cell measurements), P_T is the number of samples in time per pulse, and P_CCD is the total number of pixels in the CCD (i.e., roughly 1,000,000).

P_T is the total sample length divided by the sample period. Most of the energy is ordinarily in a time corresponding to 6 FWHM (full width at half maximum) of the pulse; therefore the relation becomes

    4 · P_S · (6 · t_FWHM / T_S) ≤ P_CCD

where T_S is the amount of time per sample. Putting in the numbers gives

    P_S ≤ 8333 · T_S

where T_S is in nanoseconds. Thus, if 1-nanosecond samples are desired, the GHz WFS can have 8333 subapertures (i.e., approximately a 90x90 spatial sampling of the laser beam).

Since there are 1024 pixels in the camera, the optics can be set up to provide 8 slits that are 1024 pixels wide. A conventional streak-tube imaging system, which can only use one slit, would provide only 1024 spatial samples - forcing all of the other pixels to be in the time dimension. Plural slits thus provide much more flexibility and capability.

This description is based upon theoretical maximum resolution. In actual usage, the numerical result may be reduced by ten to twenty percent, to provide some visual buffer zones between pixels.

Performance of all forms of the present invention is unusually sensitive to any individual component deficiencies. For this reason all components should be carefully tested individually, to determine whether they meet their respective specifications, before attempting to assemble and operate the system.

Also highly advisable is initial system testing in a laboratory setting. Radiometry, resolution, and noise tests and analyses should establish whether the system meets specifications before essaying field operation. Field testing, when it is appropriate, should be performed both in aircraft and in a ground-based vehicle.

For an effective assessment it is advisable to independently model the expected system performance (radiometry, resolution, and noise). Comparison of a model with measurements taken in the lab and in the field, as prescribed above, is very informative. Modifications to the model, and validation against the data, may then be necessary. It is also essential to modify any existing STIL data-analysis software straightforwardly to work with the PS-STIL or GHz WFS data.

It will be understood that the foregoing disclosure is intended to be merely exemplary, and not to limit the scope of the invention, which is to be determined by reference to the appended claims.

Claims (1)

1. A streak lidar imaging system comprising: a light source for emitting a beam; an imaging device for receiving light originating from the source and for forming an image of the received light; at least one microelectromechanical mirror for displacing the image along a streak direction; and an image-responsive element for receiving and responding to the displaced image.
2. The system of claim 1, wherein: the at least one mirror comprises an array of multiple microelectromechanical mirrors.
3. The system of claim 1, for use with an optical medium and wherein: the light source comprises means for emitting the beam into such medium; and the imaging device comprises means for receiving light reflected from such medium and forming an image of the reflected light.
4. The system of claim 1, wherein: the light source comprises a resonant device; and the imaging device comprises means for causing imperfections in resonance of the resonant device to modulate the image.
5. The system of claim 4, particularly for use with a resonant device that comprises a laser; and wherein: the imaging device comprises means for causing imperfections in optical wavefronts from the laser to modulate the image.
6. The system of claim 5, wherein: the causing means comprise means for deflecting elements of said beam in response to local imperfections in said coherence.
7. The system of claim 6, wherein: the deflecting means comprise at least one lenslet array.
8. A streak lidar imaging system for measurements of a medium with any objects therein; said system comprising: a light source for emitting at least one beam into such medium; an imaging device for receiving light reflected from such medium and forming plural images, arrayed along a streak direction, of the reflected light; wherein the imaging device comprises plural slits for selecting particular bands of the plural images respectively; and a device for displacing all the plural images along the streak direction.
9. The system of claim 8, wherein: the imaging device comprises an optical device; the plural images are optical images; and the displacing device comprises a module for displacing the plural optical images.
10. The system of claim 9, wherein: the displacing device comprises an electromechanical device.
11. The system of claim 10, wherein: the electromechanical device comprises at least one scanning microelectromechanical mirror.
12. The system of claim 10, wherein: the electromechanical device comprises an array of scanning microelectromechanical mirrors.
13. The system of claim 9, wherein: the displacing device comprises an electrooptical device.
14. The system of claim 8, wherein: the imaging device comprises an electronic device; the plural images are electronic images; and the displacing device comprises a module for displacing the plural electronic images.
15. The system of claim 14, wherein: the displacing device comprises electronic deflection plates.
16. The system of claim 14, wherein the imaging device comprises: an optical front end that forms a single optical image of the reflected light; and an electronic stage receiving the single optical image and forming therefrom the plural electronic images.
17. The system of claim 14, wherein the imaging device comprises: an optical front end that forms plural optical images of the reflected light; and an electronic stage receiving the plural optical images and forming therefrom the plural electronic images.
18. The system of claim 8: wherein the displacing device forms from each of the plural images a respective streak image; whereby the displacing device forms, from the plural images considered in the aggregate, a corresponding array of plural streak images; and further comprising a device for receiving the array of plural streak images and in response forming a corresponding composite signal.
19. The system of claim 8, wherein: the plural slits operate on the images in optical form.
20. The system of claim 8, wherein: the plural slits operate on the images in electronic form.
21. The system of claim 8, wherein: the imaging device comprises a module for forming substantially a continuum of images of the reflected beam; and each of the plural slits selects a particular image band from the continuum.
22. The system of claim 8, wherein: the light source comprises an optical module for emitting at least one thin, fan-shaped beam into such medium; the imaging device comprises an optical module for receiving at least one thin, fan-shaped beam reflected from such medium; and at least one of the optical modules comprises an optical unit for orienting a thin dimension of the reflected beam along the streak direction.
23. The system of claim 8, wherein: the imaging device comprises an optical module for forming the plural images as images of the at least one reflected beam at discrete optical wavelengths, respectively.
24. The system of claim 8, wherein: the imaging device comprises an optical module for forming the plural images as images of the at least one reflected beam in different polarization states, respectively.
25. The system of claim 8, wherein: the imaging device comprises an optical device for forming the plural images from different angular sectors, respectively, of the at least one reflected beam.
26. The system of claim 25, wherein: the imaging device further comprises an optical device for rearranging image elements in each angular sector to form a single line image for that sector.
27. The system of claim 26, wherein: the optical device comprises remapping optics.
28. The system of claim 27, wherein: the remapping optics comprise a fiber-optic or laminar-optic module.
29. The system of claim 27, wherein: the remapping optics comprise a lenslet array.
30. The system of claim 8, wherein: the light source comprises means for emitting plural beams into such medium; and the imaging device comprises: means for receiving plural beams of the reflected light from such medium, and an optical device for forming the plural images from, respectively, the plural reflected beams.
31. The system of claim 8, wherein: the light source comprises an emitter for emitting light in a wavelength region at or near 1 microns.
32. The system of claim 31, wherein: the imaging device comprises an upconverter for generating light at or near the visible wavelength region in response to the light at or near 1 microns.
33. The system of claim 32, wherein: the upconverter comprises ETIR material.
34. The system of claim 8, wherein: the light source comprises means for emitting the at least one beam into such medium that is selected from the group consisting of: a generally clear fluid above a generally hard surface; a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions; a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
35. A lidar imaging system for optical measurements of a medium with any objects therein; said system comprising: a light source for emitting at least one light pulse into such medium; and means for receiving the at least one light pulse reflected from such medium and for forming from each reflected pulse a set of plural substantially simultaneous output images, each image representing reflected energy in two dimensions.
36. The system of claim 35: wherein the light source comprises means for emitting a series of light pulses into such medium, each of the pulses in the series generating a corresponding such image set; whereby the receiving means generate a sequence of plural corresponding image sets; and further comprising means for storing the sequence of corresponding image sets.
37. The system of claim 35, wherein: the receiving means comprise means for allocating image elements, in each image of the set, as among (1) azimuth, (2) range or time, and (3) an extrinsic measurement dimension.
38. The system of claim 37, wherein: the extrinsic measurement dimension is wavelength.
39. The system of claim 37, wherein: the extrinsic measurement dimension is polarization state.
40. The system of claim 37, wherein: the extrinsic measurement dimension is a spatial selection.
41. The system of claim 35, wherein: the receiving means comprise means for causing the images in the set to be substantially contiguous.
42. The system of claim 35, wherein: the receiving means comprise means for receiving the reflected light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1.
43. The system of claim 42, wherein: the light source comprises means for emitting the at least one light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1.
44. The system of claim 35, wherein: the receiving means comprise means for forming the images in such a way that the two dimensions are range/time and output-image azimuth, for a particular extrinsic dimension that corresponds to each output image respectively.
45. The system of claim 35, wherein: the light source comprises means for emitting the at least one beam into such medium that is a generally clear fluid above a generally hard surface.
46. The system of claim 35, wherein: the light source comprises means for emitting the at least one beam into such medium that is a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions.
47. The system of claim 35, wherein: the light source comprises means for emitting the at least one beam into such medium that is a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
48. An optical system comprising: a first lenslet array for performing a first optical transformation on an optical beam; and a second lenslet array, in series with the first array, for receiving a transformed beam from the first array and performing a second optical transformation on the transformed beam.
49. The system of claim 48, wherein: one of the arrays comprises image-plane-defining lenslets to define image elements of the beam; and the other array comprises deflecting lenslets to selectively deflect beam elements to reconfigure an image transmitted in the beam.
50. The system of claim 49, wherein: the one of the arrays that defines the image elements is the first array.
51. The system of claim 49, further comprising: means defining an image carried by the beam; and wherein the first array is positioned substantially at a focal plane of the image.
52. The system of claim 51, wherein the image-defining means comprise: a lidar source emitting an excitation beam to a region of interest; and collection optics receiving a reflection of the excitation beam from the region and focusing the reflection at the focal plane.
53. The system of claim 52: wherein the two transformations, considered together, comprise selectively imaging particular components of the beam onto plural slits following the second array; and further comprising means for streaking images from both slits for reimaging at a detector.
54. The system of claim 51, wherein: the first array also relays the image from the focal plane to the second array.
55. The system of claim 54, wherein: the second array is substantially in a plane, said plane being disposed substantially at the relayed image.
56. The system of claim 48, wherein: the two transformations, considered together, comprise selectively imaging particular components of the beam onto plural slits following the second array.
57. A streak lidar imaging system for making measurements of a medium with any objects therein, said system comprising: a light source for emitting into such medium a beam in a substantially eye-safe wavelength range; an imaging device for receiving light reflected from such medium and forming an image of the reflected light; an upconverter for generating light at or near the visible wavelength region in response to the reflected light in a substantially eye-safe wavelength range; and a device for displacing the image along a streak direction.
58. The system of claim 57, wherein: the upconverter is positioned in the system after the displacing device.
59. The system of claim 57, wherein: the upconverter comprises ETIR material.
60. The system of claim 57, wherein: the displacing device is positioned in the system after the upconverter.
61. The system of claim 57, wherein: the light source emits said beam in a wavelength range at substantially 10 microns.
62. A spatial mapping system for mapping a region; said system comprising: a light source for emitting at least one thin, fan-shaped beam from a moving emission location toward such region, a thin dimension of the beam being oriented generally parallel to a direction of motion of the emission location; an imaging device for receiving light reflected from such region and forming an image of the reflected light; means for separating the reflected light to form plural reflected beam images representing different aspects of such region, respectively; and an image-responsive element for receiving and responding to the plural beam images.
63. The system of claim 62, wherein: the separating means comprise means for discriminating between spatially different aspects of such region.
64. The system of claim 62, wherein: the separating means comprise means for discriminating between aspects of such region that are carried in portions of the beam received at different angles.
65. The system of claim 64, wherein: the discriminating means comprise means for forming discrete plural reflected beam images from portions of the beam received at different angular ranges, respectively.
66. The system of claim 62, wherein: the separating means comprise means for discriminating between aspects of such region that are carried in different polarization states of the beam.
67. The system of claim 62, wherein: the separating means comprise means for discriminating between aspects of such region that are carried in different spectral components of the beam.
68. The system of claim 62, wherein: the separating means comprise means for discriminating between combinations of two or more different aspects of such region that are carried in different characteristics of the beam, at least one of which characteristics is selected from among: spatially different aspects of the beam, different polarization states of the beam, and different spectral components of the beam.
69. The system of claim 68, wherein: at least two of said characteristics are selected from said spatial, polarization and spectral characteristics.
70. The system of claim 62, wherein the emission location is selected from the group consisting of: a spacecraft; an aircraft; another type of vehicle; and another type of moving platform.
71. The system of claim 62, wherein: the emission location is a fixed light source cooperating with a scanning system to provide a moving image of the light source.
72. The system of claim 62, wherein: the light source comprises means for emitting the at least one beam into such medium that is a generally clear fluid above a generally hard surface.
73. The system of claim 62, wherein: the light source comprises means for emitting the at least one beam into such medium that is a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions.
74. The system of claim 62, wherein: the light source comprises means for emitting the at least one beam into such medium that is a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
75. A spatial mapping system for mapping a region; said system comprising: a light source for emitting a beam whose cross-section has an aspect ratio on the order of 1:1, from a moving emission location toward such region; an imaging device for receiving light reflected from such region and forming an image of the reflected light; means for separating the reflected light to form plural reflected beam images representing different aspects of such region, respectively; and an image-responsive element for receiving and responding to the plural beam images.
76. The system of claim 75, wherein: the imaging device comprises means for receiving the reflected light from such region as a reflected beam whose cross-section has an aspect ratio on the order of 1:1.
77. A spectrometric analytical system for analyzing a medium with any objects therein; said system comprising: a light source for emitting substantially at least one pencil beam toward such medium; an imaging device for receiving light reflected from such medium and forming an image of the reflected light; means for separating the reflected light along one dimension to form plural reflected beam images arrayed along said dimension and representing different aspects of the medium, respectively; optical dispersing means for forming a spectrum from at least one of the plural images, by dispersion of the at least one image along a dimension generally orthogonal to the said dimension; and an image-responsive element for receiving and responding to the plural beam images.
78. The system of claim 77, wherein: the dispersing means comprise means for forming a spectrum from each of the plural images, respectively.
79. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing spatially different aspects of the beam, respectively.
80. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing different polarization states of the beam, respectively.
81. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing different spectral constituents of the beam, respectively.
82. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing combinations of two or more different aspects of such medium that are carried in different characteristics of the beam, at least one of which characteristics is selected from among: spatially different aspects of the beam, different polarization states of the beam, and different spectral components of the beam.
83. A wavefront sensor, for evaluating a light beam from an optical source; said sensor comprising: optical components for receiving such beam from such source; optical components for subdividing small portions of such beam to form indicator subbeams that reveal a direction of substantially each of said small portions; optical components for steering the indicator subbeams to fall along at least one slit; means for streaking light that passes through the at least one slit; and means for capturing the streaked light during a streaking duration.
84. The system of claim 83, wherein: the at least one slit comprises plural slits.
85. The system of claim 83, particularly for use with a resonant optical source; and wherein: the receiving and subdividing components comprise means for causing imperfections in optical wavefronts from the resonant source to modify the light that passes through the at least one slit.
86. The system of claim 85, wherein: the receiving, subdividing and steering components comprise at least one lenslet array.
87. The system of claim 86, wherein: the receiving, subdividing and steering components comprise at least two lenslet arrays in optical series.
88. The system of claim 87, wherein the lenslet arrays comprise: one array that defines image elements at or near a focal plane of such beam; and another array that receives the image elements relayed from the first array, and that steers light from the image elements to the at least one slit.
89. The system of claim 86, wherein: the receiving, subdividing and steering components comprise at least one lenslet array in optical series with at least one fiber-optic remapping device.
90. A spectrometric analytical system for analyzing a medium with any objects therein; said system comprising: a light source for emitting substantially at least one beam toward such medium; an imaging device for receiving light reflected from such medium and forming plural images of the reflected light; optical or electronic means for streaking the plural images; an image-responsive element for receiving and responding to the plural beam images; and a computer for extracting fluorescence-lifetime information from a signal produced by the image-responsive element.
91. The system of claim 90, wherein the at least one beam comprises: at least one pencil beam.
92. The system of claim 90, wherein the imaging device comprises: a hyperspectral optical system.
93. The system of claim 92, wherein the imaging device further comprises: a plural-wavelength optical system wherein each of plural wavelength bands is arrayed along a length dimension of a respective slit-shaped image.
GB0421638A 2000-04-26 2001-04-26 Streak lidar imaging system Expired - Fee Related GB2403614B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US19991500 true 2000-04-26 2000-04-26
GB0227440A GB2380344B (en) 2000-04-26 2001-04-26 Very fast time resolved imaging in multiparameter measurement space

Publications (3)

Publication Number Publication Date
GB0421638D0 GB0421638D0 (en) 2004-10-27
GB2403614A true true GB2403614A (en) 2005-01-05
GB2403614B GB2403614B (en) 2005-02-23

Family

ID=33542656

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0421647A Expired - Fee Related GB2403615B (en) 2000-04-26 2001-04-26 Streak lidar imaging system
GB0421638A Expired - Fee Related GB2403614B (en) 2000-04-26 2001-04-26 Streak lidar imaging system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GB0421647A Expired - Fee Related GB2403615B (en) 2000-04-26 2001-04-26 Streak lidar imaging system

Country Status (1)

Country Link
GB (2) GB2403615B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0809252D0 (en) * 2008-05-21 2008-06-25 Ntnu Technology Transfer As Underwater hyperspectral imaging

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997018487A1 (en) * 1993-04-12 1997-05-22 Areté Associates Inc. Imaging lidar system with strip-shaped photocathode and confocal-reflection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9309750D0 (en) * 1993-05-12 1993-07-21 Pilkington Perkin Elmer Ltd Method of monitoring coalignment of a sighting or surveilance sensor suite
WO1998013909A2 (en) * 1996-09-03 1998-04-02 Stanger, Leo Energy transmission by laser radiation


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2457375A (en) * 2008-02-13 2009-08-19 Boeing Co Lidar arrangement for mapping the environment around a vehicle
US7710545B2 (en) 2008-02-13 2010-05-04 The Boeing Company Scanned laser detection and ranging apparatus
GB2457375B (en) * 2008-02-13 2010-08-04 Boeing Co Scanned laser detection and ranging apparatus
US8599374B1 (en) 2012-11-15 2013-12-03 Corning Incorporated Hyperspectral imaging systems and methods for imaging a remote object
US9200958B2 (en) 2012-11-15 2015-12-01 Corning Incorporated Hyperspectral imaging systems and methods for imaging a remote object
US9267843B2 (en) 2012-11-15 2016-02-23 Corning Incorporated Hyperspectral imaging systems and methods for imaging a remote object
US9341514B2 (en) 2012-11-15 2016-05-17 Corning Incorporated Hyperspectral imaging systems and methods for imaging a remote object
WO2016075342A1 (en) * 2014-11-11 2016-05-19 Universitat De València System, method and computer program for measuring and analysing temporal light signals

Also Published As

Publication number Publication date Type
GB2403615B (en) 2005-02-23 grant
GB0421638D0 (en) 2004-10-27 grant
GB2403614B (en) 2005-02-23 grant
GB0421647D0 (en) 2004-10-27 grant
GB2403615A (en) 2005-01-05 application

Similar Documents

Publication Publication Date Title
Albota et al. Three-dimensional imaging laser radar with a photon-counting avalanche photodiode array and microchip laser
McCarthy et al. Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting
US5013917A (en) Imaging lidar system using non-visible light
De Naurois et al. Measurement of the Crab flux above 60 GeV with the Celeste Cerenkov telescope
US6057909A (en) Optical ranging camera
US6856355B1 (en) Method and apparatus for a color scannerless range image system
US7060957B2 (en) Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
Huang et al. The Hawaii K-Band Galaxy Survey. II. Bright K-Band Imaging
US7834985B2 (en) Surface profile measurement
US20100128109A1 (en) Systems And Methods Of High Resolution Three-Dimensional Imaging
US7301608B1 (en) Photon-counting, non-imaging, direct-detect LADAR
Jones et al. The NASA/NSO Spectromagnetograph
US7012738B1 (en) Method and device for detecting and processing signal waves
Davis et al. Ocean PHILLS hyperspectral imager: design, characterization, and calibration
US6690472B2 (en) Pulsed laser linescanner for a backscatter absorption gas imaging system
US5717487A (en) Compact fast imaging spectrometer
US6323941B1 (en) Sensor assembly for imaging passive infrared and active LADAR and method for same
US6181472B1 (en) Method and system for imaging an object with a plurality of optical beams
US5870180A (en) Time measurement device and method useful in a laser range camera
US6459493B1 (en) Apparatus for measuring surface form
US4226529A (en) Viewing systems
US3446555A (en) Optical ranging and detection system for submerged objects
Breugnot et al. Modeling and performances of a polarization active imager at λ= 806nm
US3527533A (en) Method and apparatus for deriving and processing topographical information
US5198657A (en) Integrated imaging and ranging lidar receiver

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20090426