GB2403615A - Eye-safe streak tube imaging lidar - Google Patents
- Publication number: GB2403615A (application GB0421647A)
- Authority: GB (United Kingdom)
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01J 3/447 — Polarisation spectrometry
- G01J 3/2823 — Imaging spectrometer
- G01J 3/2889 — Rapid scan spectrometers; time-resolved spectrometry
- G01J 3/021 — Optical elements using plane or convex mirrors, parallel phase plates, or particular reflectors
- G01J 3/0229 — Optical elements using masks, aperture plates, spatial light modulators or spatial filters
- G01J 9/00 — Measuring optical phase difference; determining degree of coherence; measuring optical wavelength
- G01J 11/00 — Measuring the characteristics of individual optical pulses or of optical pulse trains
- G01N 21/21 — Polarisation-affecting properties
- G01N 21/6408 — Fluorescence; phosphorescence with measurement of decay time, time-resolved fluorescence
- G01N 2021/1793 — Remote sensing
- G01S 17/89 — Lidar systems specially adapted for mapping or imaging
- G04F 13/02 — Measuring unknown time intervals using optical means
Abstract
A STIL (streak tube imaging lidar) uses radiation of an eye-safe infra-red wavelength such as 1500 nm. This is converted, e.g. by a frequency-doubling ETIR (energy-trapping infra-red) phosphor film, into visible light which is input to the streak camera. The infra-red light may be generated by a Nd:YAG laser fitted with an OPO (optical parametric oscillator).
Description
PCT/US01/13489
VERY FAST TIME RESOLVED IMAGING
This application claims the priority benefit of U.S. provisional patent application 50/133,315, filed April 26, 2000.
BACKGROUND
FIELD OF THE INVENTION
This invention relates generally to time-resolved recording of three or more optical parameters simultaneously; and more particularly to novel methods and apparatus for making such measurements on an extremely short time scale, using lidar or a streak tube - or related technologies such as lenslet arrays - or combinations of these. Certain forms of the invention enable provision of a compact single-laser-pulse scannerless multidimensional imaging system using plural-slit streak-tube imaging lidar or "PS-STIL". The system is also capable of making plural-wavelength-band spectrally discriminating recordings of objects or phenomena, as well as plural-polarization-state recordings, and also combinations of such novel measurements.
2. RELATED ART: (a) The term "lidar" (in English pronounced "LIE-dahr"), by analogy to "radar", means "light detection and ranging". The use of lidar is greatly enhanced by incorporating a streak tube - an electrooptical system for time-resolving lidar returns to a remarkably fine degree.
Several advanced forms of the streak-tube imaging lidar or "STIL" technology are presented in other patent documents. See, for example, U.S. 5,467,122 and PCT publication WO 98/10372.
BENEFITS: Many strengths of a conventional STIL system appear in Table 1, and most of these are discussed in more detail in this section. This technology has demonstrated the capability to collect range (or other time-related) information with dynamic range and bandwidth that cannot be achieved using earlier conventional signal-digitization electronics.
Linear 12-bit dynamic range:
- User need not spend time keeping the system "centered" in the dynamic range.
- No electronic digitization can achieve this without significant compression (use of a logarithmic amplifier) that introduces artifacts in the data.

Controllable range resolution:
- Can change digitization rate "on the fly", which allows the operator to start out with coarse range resolution and "zoom" in on areas of interest.
- Operation from d.c. to multi-GHz; conventional high-speed digitization electronics are designed for only one speed.
- 1 cm range resolution has been demonstrated at short ranges; 15 cm, from an aircraft.

Fast data collection:
- STIL operates on a single short-pulse (<10 nsec) time-of-flight measurement; thus no long integration or multiple pulsing is necessary.
- No target distortion/blur from a moving source or target.

Compact ruggedized package:
- The volume of commercial streak-tube electronics packaging has been reduced by a factor of twenty.
- Ruggedized hardware for the helicopter environment has been fabricated.
- The system can be placed on a variety of platforms.

High gain:
- The streak tube has a noiseless gain (noise factor is 1) due to each accelerated photoelectron generating approximately three hundred photons on the internal phosphor screen.
- Higher gain (>10^4) is available using a microchannel plate (MCP) if necessary.
- Raises small signals above the amplifier noise.

Simultaneous range and contrast images:
- No need for multiple sensors.
- Allows significant improvements to ATR algorithms, for shape matching and for clutter reduction.
- No registration scale issues between range and contrast data.

High frame rates shown:
- Transmitters and receivers have been demonstrated at 400 Hz frame rates - ideal for large-area searching.
- Ideal for very fast moving targets or sensors.

Rapid processing:
- All processing within a single image frame is conventionally performed with multiple DSP computers.
- Allows rapid real-time display for an operator.
- Reduces volume and electrical power requirements for the computer.

Table 1. Benefits of the conventional streak-tube imaging lidar (STIL) approach.
For example, linear twelve-bit dynamic range has been shown at controllable bandwidths up to and beyond 3 GHz. A fundamental advantage of a streak-tube-based lidar system is that it can provide hundreds or even thousands of channels of sampling at more than 3 GHz, with true twelve-bit dynamic range.
Such a receiver system provides range sample "bins" (i.e., discrete range-sampling intervals) that can be as small as five centimeters (two inches) long, and provides 4096 levels of gray-scale imagery, both of which are important for robust operations. The small range bins provide optimal ranging capability, and the large dynamic range reduces the effort of trying to keep the scene illumination in the middle of the response curve of a limited-dynamic-range receiver. Such performance has been demonstrated in a laser radar configuration (J. McLean, "High Resolution 3-D Underwater Imaging", Proc. SPIE 3761, 1999).
BASIC TUBE ARCHITECTURE AND OPERATION: A streak tube (Fig. 1) as conventionally built nowadays is very similar to a standard image-intensifier tube, in that it is an evacuated tube with a photocathode producing electrons, which are accelerated by very high voltages to a phosphor screen. In operation of a typical system, each such electron ejects roughly three hundred photons from the phosphor; these are then collected by an image-recording device such as a CCD.
A major difference is that a streak tube has an extra pair of plates that deflect an electron beam, somewhat as do the deflection plates in an ordinary cathode ray tube (CRT) used in most oscilloscopes, television sets and computer monitors. In conventional STIL operation, input photons are limited to a single slit-formed image, causing the electron beam within the tube to be slit-shaped.
A fast ramp voltage is applied to the deflection plates, very rapidly and continuously displacing or "streaking" the slit-shaped electron beam, parallel to its narrow dimension, from the top (as oriented in Fig. 1) of the phosphor screen to the bottom - effectively creating a series of line images formed at different times during the sweep. Thereby time information is impressed upon the screen image in the streak direction (here vertical), while spatial information is arrayed along the slit length.
The array of internal electronic line images in turn constitutes a latent areal image - which is rendered visible ("developed") by the phosphor on the screen. Most typically a charge-coupled-device (CCD) camera is attached to the streak tube to collect the image from the phosphor screen.
In this way the image is reconverted to an external electronic image by the CCD. The CCD output is digitized, interpreted, and if desired saved or displayed by the receiving electronics.
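The time-to-row mapping just described can be sketched numerically. The following is an illustrative model only, not part of the patent: the function name, array sizes, and sweep time are all hypothetical, and each photon event is binned into a (streak-time, slit-position) image the way the phosphor/CCD chain accumulates it.

```python
# Illustrative sketch of streak-tube image formation: the streak (time) axis
# becomes the image row, and position along the slit becomes the column.
# All names and numbers here are hypothetical, chosen for the example.
import numpy as np

def streak_image(arrival_times_ns, slit_positions, sweep_time_ns=100.0,
                 n_rows=512, n_cols=512):
    """Accumulate photon events into an (n_rows x n_cols) streak image.

    arrival_times_ns : photon arrival times within the sweep, in ns
    slit_positions   : normalized positions along the slit, in [0, 1)
    """
    img = np.zeros((n_rows, n_cols))
    for t, x in zip(arrival_times_ns, slit_positions):
        row = int(t / sweep_time_ns * n_rows)   # streak direction encodes time
        col = int(x * n_cols)                   # slit length encodes space
        if 0 <= row < n_rows and 0 <= col < n_cols:
            img[row, col] += 1
    return img

# Two returns from the same slit position, 10 ns apart, land in different rows:
img = streak_image([20.0, 30.0], [0.5, 0.5])
```

The key property, matching the description above, is that two returns from the same azimuth but different ranges separate along the row (time) axis rather than overlapping.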
One of the two dimensions of each two-dimensional image acquired in this way is azimuth (taking the dimension parallel to the long dimension of the slit as extending left and right) - just as with a common photographic or video camera. The other of the two dimensions, however, is unlike what an ordinary camera captures.
More specifically, the STIL images represent azimuth vs. range from the apparatus - not vs. the commonplace orthogonally visible dimension as with a common camera. Thus, for example, if a two-dimensional image of an ocean volume is acquired by an instrument pointed vertically downward into the sea, the two dimensions are azimuth and ocean depth.
The operation described here should not be confused with that of a so-called "framing camera", whose internal tube geometry is commonly identical but which usually lacks an optical input slit, and whose deflection system is differently energized - so that more ordinary two-dimensional images of a scene are formed at the phosphor screen. Often such images are sized to fit on just a fraction of the screen area, and the deflection plates quickly step the two-dimensional-image position (in some instruments during a blanking interval), rather than displacing it continuously as in streaking.
BASIC SYSTEM OPERATION: In a typical conventional streak-tube lidar configuration, a short-duration high-energy laser pulse is emitted. The emitted beam is spread out into a single thin, fan-shaped beam or line, which is directed toward a landscape, ocean volume, or other region of interest - and the receiver optics image the line back onto the slit input to the streak tube. (A later portion of this document discusses the phrase "thin, fan-shaped beam" in further detail.) In such a standard STIL system, coverage of the region of interest in the dimension perpendicular to the line illumination (Fig. 2) is generally accomplished through motion of a vehicle carrying the emitter and sensor of the beam.
Formation of a complete volumetric image therefore requires a series of pulses, each yielding a respective individual range-vs.-azimuth image.
Taking the laser projection direction as horizontal in Fig. 2(a), the vehicle direction should be vertical - as for instance in a vertically moving helicopter. In this case, each areal screen image represents a horizontal map at a respective altitude, with the measuring instrument located above the top edge of the map and the remote horizon along the bottom edge.
Alternatively, reverting to the earlier example of a downward-looking instrument over the ocean, vehicle motion should be horizontal. In this case each areal screen image represents a vertical slice of the ocean below the vehicle, at a respective position along the vehicle's horizontal path.
This is sometimes familiarly called a "pushbroom" system. A demonstrated alternative to vehicle-based data acquisition is a one-dimensional scanner system used from a fixed platform.
The deflection system of the streak tube is set to streak the electron beam completely down the phosphor screen in some specific time, called the "sweep time" of the tube. This also corresponds to the total range gate time (i.e., the total amount of time during which the system digitizes range data).
Ordinarily the sweep time is adjusted to fully display some interval of interest for exploring a particular region, as for instance some specific ocean depth from which useful beam return can be obtained - taking into account turbidity of the water.
The starting point of the range gate is controlled by the trigger signal used to begin the sweep.
Computer control of both the sweep time and the sweep-start trigger provides the operator a flexible lidar system that can very rapidly change its range-gate size, its range-digitization starting point, and also its range-sampling resolution. This enables the system to search large areas of range with coarse range resolution, and then "zoom in" to obtain a high-resolution image around a discovered region of particular interest. For example, in one pulse the system could capture a range from km to 7 km at low resolution, and then on the next laser pulse zoom in to 6 km + m and thereby image an object of prospective interest at the highest resolution.
Each column of CCD pixels corresponds to one channel of digitized range data, such as would be collected from a single time-resolved detector - for instance a photomultiplier tube (PMT) or an avalanche photodiode (APD). Each row is the slit image at a different time.
The size (in units of time) of the previously mentioned range bins is simply the sweep time divided by the number of pixels in the CCD columns. Such values are readily converted into distance units through multiplication by the speed of light in the relevant medium or media.
MODERN-DAY ENHANCEMENTS: Considerable practical advancement is now available in state-of-the-art streak-tube technology. Such advances include a compact ruggedized package suitable for the helicopter environment.
Such a unit (Fig. 3) is only about 15 cm (6 inches) wide, 47 cm (19 inches) long, and 37 cm (15 inches) in diameter. This kind of device has complete computer control of all streak-tube parameters, including high-voltage supplies.
Available as well are continuously variable linear sweep speeds from 50 nsec to 2 μsec. High-speed tube gating without a microchannel plate (MCP) is also offered, for enhanced signal-to-noise ratio.
ADVANCED COMMERCIAL FORAY: The assignee of this patent document, Arete Associates (of Sherman Oaks, California, and Tucson, Arizona), has developed an airborne STIL system for bathymetry and terrestrial mapping. This device contains a diode-pumped solid-state Nd:YAG laser that is frequency-doubled to 532 nm. This wavelength was chosen for maximum water penetration for the bathymetry task and for proximity to the peak of the streak-tube photocathode responsivity curve.
A raw image frame taken by STIL during airborne terrestrial-mapping data collection (Fig. 4[b]) and a volume reconstruction from numerous such frames (Fig. 4[c]) compare interestingly with a conventional photograph (Fig. 4[a]). A like comparison is also shown (Fig. 5) for an object roughly 1 m (39 inches) in diameter and imaged through 6 m (20 feet) of seawater.
In these views, naturally the conventional photo gives a clearer and sharper image. One goal of the STIL imaging, however, is to obtain images and reconstructions under circumstances that preclude effective use of ordinary photos.
Of particular interest in view (c) is the dark spot in the upper-left part of the imaged object: this is one of the two 5 cm (two-inch) holes in the object that appear in view (a). Here the STIL system is actually ranging down through that hole to the bottom of the object. (The other hole was covered by a weight used to keep the object on the ocean bottom.) As the scattering and attenuation of water are significantly greater (and propagation velocity significantly smaller) than in air, Arete has developed and tested the algorithms and software to account for such problems. These algorithms are directly translatable to long-range air paths, and propagation through fog, haze, smoke etc.
(b) Safety limitations of conventional lidar - Modern STIL innovations were developed for underwater applications that require blue-green light for optimal water penetration. Human beings too are particularly adapted for sensitivity to light in these wavelengths.
By the same token, however, such light when projected at very high powers can pose a hazard to people - and possibly to other creatures as well who may be positioned to look directly at the source. The possible hazard is compounded by a like sensitivity to viewing specular reflections of the beam from the source. As will be understood, the STIL system has many useful applications in which this type of potential hazard poses no significant concern. A thrust of the present document, however, is development of a new generation of STIL systems and applications that are industrial and even commercial, and accordingly introduce a much greater need for compatibility with the population at large.
Therefore the possibility of injury to eyes is an important obstacle to a new array of STIL devices. It may be in part due to this problem that widespread commercial and industrial adaptations of the STIL principle have failed to appear in the marketplace.
(c) Conventional lidar streak-unit limitations - As the preceding introductory sections suggest, conventional modern streak tubes are relatively sizable vacuum tubes that use high voltages to streak the electron beam generated from the photocathode. Plainly this type of hardware is subject to several drawbacks.
Such devices are very expensive to make, maintain and use.
For field use, ruggedization is a necessary added expense (and a still-imperfect solution), since large vacuum tubes are inherently somewhat fragile. Their external high-tension connections are not optimal for routine use in aircraft.
A well-known alternative is optical streaking - in which a beam of incoming photons is rapidly displaced across a detector, along the range axis, entirely avoiding the need for a vacuum tube. This in fact was the earliest form of the streak camera - using a fast scan mirror, in particular a large spinning polygon (Fig. 11[a]).
These devices too, unfortunately, are problematic - and even more so than the electronic form. The drawback of this approach as conventionally implemented is the requirement for the large high-speed rotating mirror, which is both bulky and relatively delicate. (One relatively modern example of such an installation is described by Ching C. Lai in "A New Tubeless Nanosecond Streak Camera Based on Optical Deflection and Direct CCD Imaging", Proc. SPIE vol. 1801, 1992, pp. 454-69.) What makes these drawbacks of the optical streaking technique particularly unfortunate is that streaking with an optical device would otherwise greatly expand the choices in commonly available detectors. It would allow the use of common detectors for the wavelengths of interest - e.g. silicon CCD and CMOS detectors for the visible and near IR, and HgCdTe, PtSi, or InSb arrays for the longer IR, out to 11-micron wavelengths if desired.
Longer-wavelength operation would be advantageous for various special applications. These include better penetration of fog, clouds and some types of smoke; and also enhanced discrimination of object types by their different reflectivities at corresponding different wavelengths.
Regrettably the common detectors just mentioned are not suited for use as photocathode materials, to generate electrons that can then be streaked inside a streak tube. On the other hand it would accomplish nothing to place them following the conventional photocathode - e.g. at the streak-tube anode - since conversion from the optical to the electronic domain has already been accomplished at the cathode.
Use of a standard IR imaging detector instead of a CCD would be advantageous to provide high-quantum-efficiency images. For some wavelength ranges this technique would be ideal - but the prior art has avoided these potential solutions because of the recognized problems presented by spinning mirrors.
(d) Conventional lidar imaging limitations - As a general observation, conceptually a STIL system is far in advance of competing technologies in terms of resolution capability in three dimensions, and in terms of signal-to-noise ratio as well. In its ability to fully exploit these advantages, however, a conventional STIL is severely impaired by an overriding problem in streak lidar systems heretofore: inflexibility of pixel allocation.
This limitation may be appreciated from three different perspectives, although in a sense they are only different aspects of the common phenomenon:
- the STIL cannot record in three dimensions without mechanical movement of the measuring instrument relative to the region to be inspected;
- the only practical way to make optimally efficient use of the very expensive detector area in a conventional STIL system is to build a fiber-optic remapper; and
- even when such a device has been built, a conventional STIL system fails to make fully economic use of that investment.
These problems will be taken up in turn below, beginning with a demonstration of the preliminary observation that 3-D resolution is superior in a STIL apparatus.
THREE-DIMENSIONAL RESOLUTION: Different existing lidar systems sample a water volume differently (Fig. 16). The water surface is represented by the irregular line shown on the two visible faces of the volume cube, in each view.
Range-gated systems have excellent transverse spatial resolution, but have only one range pixel per camera - which results in poor range resolution, as suggested by the relatively tall volume elements (Fig. 16[a]) in the shaded zone that is of interest.
Merely by way of example, one system well-known in this field as "Magic Lantern" is forced to use six separate cameras to cover multiple depths, resulting in a large and expensive system.
In addition, since a range-gated system thus collects large vertical sections of the water column, contrast of any object images is significantly reduced. That is, the contrast, which is directly proportional to the signal-to-noise ratio (SNR) in the region, is a function of the amount of water backscatter that is collected.
A system with range samples of 30 cm (one foot) has ten times the SNR of a system that has 3 m (ten foot) range samples.
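The ten-to-one figure follows from a one-line relation. Treating SNR as inversely proportional to the sampled range depth is the backscatter-limited assumption the passage relies on; the 0.3 m reference sample is the text's own example.

```python
# Backscatter-limited SNR relative to a 0.3 m (one-foot) range sample.
# Assumes collected water backscatter - and hence noise - grows linearly
# with the depth of water integrated into each range sample.
def relative_snr(range_sample_m: float, reference_sample_m: float = 0.3) -> float:
    return reference_sample_m / range_sample_m

print(relative_snr(3.0))   # a 3 m (ten-foot) sample: one tenth the SNR
```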
In addition, as the diagram also suggests, the range-gated device must avoid the surface of the water.
Time-resolved systems, such as the one used in the advanced receiver in ATD-111 (a photomultiplier-tube-based, nonstreaking time-resolved lidar system), suffer from a similar contrast-reduction problem. In this case the cause is poor transverse spatial resolution, as suggested by the relatively broad volume elements (Fig. 16[b]) in the shaded zone of interest.
This system cannot isolate an object signal, and moreover also collects a large area of water backscatter around an object.
To have the same transverse spatial resolution as the range-gated system, this time-resolved apparatus would require a separate laser pulse for every pixel, resulting in a pulse repetition frequency (PRF) exceeding 100 kHz.
Unfortunately a frequency-doubled Q-switched Nd:YAG laser (the primary laser used in ocean lidar systems) operates efficiently only up to about 5 kHz. Inability to reach the needed PRF, in turn, results in larger spatial pixels to achieve the same area coverage.
To avoid these sampling problems, the two systems discussed above use both a time-resolved receiver and a range-gated module.
Although this approach represents significant additional system complexity, it still does not resolve the significant degradation of detection SNR.
Because a STIL system collects 500 to 1000 spatial pixels per laser pulse, the PRF can be in the hundreds of hertz, which is well within the performance envelope of the Nd:YAG lasers. In this way a STIL device can achieve pixel sizes smaller than an object of interest; therefore, it has higher SNR for the same laser power.
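The PRF contrast drawn above can be made concrete. The pixel-rate figure below is an assumption, chosen only to reproduce the contrast the text describes: a single-pixel time-resolved receiver needing a PRF beyond 100 kHz, versus a STIL line of roughly a thousand pixels needing only hundreds of hertz.

```python
# Illustrative PRF comparison; the coverage requirement is assumed,
# not taken from this document.
def required_prf(pixels_per_second: float, pixels_per_pulse: int) -> float:
    """Laser pulse-repetition frequency needed to sustain a given pixel rate."""
    return pixels_per_second / pixels_per_pulse

PIXEL_RATE = 2.0e5  # assumed survey requirement: 200,000 pixels per second

print(required_prf(PIXEL_RATE, 1))     # single-pixel time-resolved receiver
print(required_prf(PIXEL_RATE, 1000))  # STIL collecting a 1000-pixel line per pulse
```

The single-pixel case lands well beyond the ~5 kHz limit of an efficient frequency-doubled Nd:YAG, while the STIL case sits comfortably inside it.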
Thus, a streak-tube-based system (Fig. 16[c]) can provide much higher SNR for the same amount of laser power, or can achieve equal performance with a significantly smaller laser system.
Streak-tube-based systems can provide good resolution in all dimensions.
Unfortunately this powerful benefit of the STIL principle is not heretofore broadly available without mechanical movement of the detector, and without costly and awkward remapping devices - and even then carries only very limited amounts of image information. These three problems are discussed in the paragraphs below.
THE MECHANICAL-MOVEMENT REQUIREMENT: As described earlier, the conventional streak-tube system is a pushbroom system, which means that it depends on the motion of the vehicle to sample the dimension along the track. This requirement prevents the conventional STIL from serving as what may be called a "staring" system - i.e., a stationary system that can acquire a stationary image of an area.
Just such a capability, however, is quite desirable for a number of useful applications. Inability of a conventional STIL instrument to fill this role is a major limitation in industrial and commercial uses.
FIBER-OPTIC REMAPPERS - GREATER COST: It is well known to use a variety of kinds of fiber-optic units to reconfigure a time-varying area image as a line image, and thereby enable time resolution of the changing content in the area image. Such technologies are seen in representative patents of Alfano (for example see U.S. 5,142,372), and of Knight (for example U.S. Re. 33,865); and in their technical papers as well.
An original concept for an area-image streak-tube system was demonstrated by Knight, who mapped a 16x16-unit areal image onto a conventional streak-tube slit, with fiber optics. (F. K. Knight, et al., "Three-dimensional imaging using a single laser pulse", Proc. SPIE vol. 1103, 1989, pp. 174-89.) This technique was severely limited in overall number of spatial pixels because of the relatively small number of pixels that can be mapped onto a slit.
Low-resolution fiber image redistribution (a 16x16 focal plane to a single 256-pixel line) has also been performed for streak tubes by MIT Lincoln Labs. Many fiber-array manufacturers are in operation and ready to prepare units suited for STIL work: one of the largest firms is INCOM; another that makes individual fiber arrays is Polymicro Technologies - which has previously prepared arrays with 3000 fibers.
At best, however, all such approaches are hampered by the costly custom fabrication required, and the need to manufacture a special unit for each desired mapping respectively.
FIBER-OPTIC REMAPPERS - INADEQUATE EXPLOITATION: What makes matters worse, as to fiber-optic remapping, is that a conventional STIL system nowadays continues to face the same basic obstacle seen in the Knight paper noted above. Only so many original image pixels can be meaningfully rearranged onto a slit.
This means that even after the limitation of expensive custom fabrication has been confronted - and in a sense overcome, by a decision to expend the necessary funds - and even after the requirement for making a separate special unit for each of several particular mappings has also been faced and in a sense overcome, by a decision to invest even that multiple, the technology nevertheless continues to be not only uneconomic but also technically unsatisfactory: the resulting images carry inadequate, frustratingly small amounts of image information. This obstacle has heretofore remained a persistent problem, and will be further discussed shortly in subsections (f) and (g).
(e) WFS limitations of conventional lidar - The field of wavefront sensors (WFS) is an important one for laser diagnostics.
High-power short-pulse lasers are essential components of several different applications (e.g., laser trackers and imaging laser radar); however, such lasers are notoriously unreliable.
It is difficult for vendors to manufacture them to desired specifications, and the devices seldom survive to their projected lifetime (at least at rated output power). One of the most difficult aspects of the manufacture of such devices is the lack of diagnostic equipment for the total characterization of the laser output.
Typical laboratory equipment for the characterization of high-power pulsed lasers consists of three instruments: (1) a power meter for measuring average power, (2) a single fast detector with an oscilloscope for measuring the pulse width temporally, and (3) a laser characterization imager that provides a spatial display of the beam intensity. Each of these instruments has significant limitations in the data that it produces.
The single fast detector averages over the spatial components of the beam, and the laser characterization imager averages over the temporal component of the beam. That is to say, the integration time of the camera in the laser characterization setup is typically orders of magnitude longer than the laser pulse width.
The power detector, furthermore, averages over both the spatial component and the temporal component. Thus no one instrument provides information in time and space and phase with high resolution in all dimensions.
Yet this is precisely the information that the laser designers use in their modeling and simulations. Using commercial laser-modeling software such as GLAD, laser designers set up models to simulate propagation of the beam in the laser cavity at very high spatial and temporal resolution.
The phase and intensity of the light, expressed as electric fields, are used in this simulated propagation process. After going through all of that analysis, however, laser designers have no way to compare the actually resulting, operating product with that preliminary analysis. None of the above-discussed three instruments measures the wavefront of the light - i.e., maps the phase of the outgoing light as a function of position in the beam. This is the role of another type of instrument, the WFS, which does exist to perform this task - but, like the laser characterization imager, it averages over time.
The most common such unit in use today is the Hartmann-Shack WFS. In this apparatus, incoming light (Fig. 25) is split up into multiple subapertures, each with its own lenslet. The lenslet focuses the light onto a detector.
When a flat wavefront (i.e., a plane wave) is incident on the device, each of the lenslets forms a spot image on-axis on the detector. When a distorted wavefront arrives, however, as illustrated, the average slope of the wavefront at the lenslet for each subaperture displaces the spot away from the on-axis position.
Although the illustration is essentially one-dimensional, the lenslets are in a two-dimensional array; and the spot-position measurements too are accordingly made in two dimensions. Design and fabrication of this kind of device is a highly specialized endeavor, available from various vendors such as Wavefront Sciences, Inc. of Albuquerque, New Mexico.
Wavefront Sciences develops lenslet arrays for a number of applications. Typical cost for initial design and fabrication of one lenslet array is $20,000.
The displacement of the spot is measured, in both the x and y directions. Average local slope of the wavefront at the measurement point is next calculated as linearly proportional to this displacement. The total wavefront is then reconstructed using algorithms that assemble such local tilts into a whole wavefront.
This process is referred to as "wavefront reconstruction".
It is a common and well-documented algorithm, currently used in astronomical and many other instruments.
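The slope-and-reconstruct procedure just outlined can be sketched in one dimension. The lenslet focal length, subaperture pitch, and spot displacements below are invented for illustration; production sensors work in two dimensions and typically use least-squares reconstructors rather than this simple running integration.

```python
# A minimal one-dimensional sketch of Hartmann-Shack reconstruction.
# All numeric parameters are assumed values, not from this document.

def spot_to_slope(displacement_m: float, focal_length_m: float) -> float:
    """Average local wavefront slope implied by a spot displacement:
    slope = displacement / lenslet focal length (small-angle approximation)."""
    return displacement_m / focal_length_m

def reconstruct_1d(slopes, pitch_m: float):
    """Zonal reconstruction: integrate local slopes into wavefront heights,
    w[0] = 0 and w[i+1] = w[i] + slope[i] * subaperture pitch."""
    w = [0.0]
    for s in slopes:
        w.append(w[-1] + s * pitch_m)
    return w

# Example: spots all displaced by 10 um behind f = 5 mm lenslets on a
# 200 um pitch, i.e. a uniform 2 mrad tilt across the pupil.
slopes = [spot_to_slope(10e-6, 5e-3)] * 4
print(reconstruct_1d(slopes, 200e-6))   # monotonically increasing heights
```

A uniform tilt reconstructs as a linear ramp; a real two-dimensional reconstructor assembles the x- and y-slope maps the same way, usually by a least-squares fit.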
In addition to the wavefront, which corresponds to the phase of the electric field, the intensity of the light is measured for each subaperture. This allows generation of an intensity map, as well as a phase map, of the incoming beam.
In most Hartmann-Shack WFS units, the detector behind the optics is a CCD camera or an array of quad cells. A quad cell (Fig. 26) measures the two tilts and the intensity. These detector systems are relatively slow (30 Hz to 10 kHz); while sufficient for assessing atmospheric corrections, such detection naturally is inadequate for applications that require subnanosecond sample rates.
From the foregoing it will be clear that laser laboratory devices, and in particular WFS systems when used for laser evaluation, fail to satisfy the needs of laser developers. This failure is a major problem, impeding progress in the design and refinement of more stable, reliable and long-lived lasers.
(f) Data-speed and package limitations - Signal processing in conventional STIL systems is performed using multiple digital signal processors (DSPs). These in turn impose requirements of weight, volume, power, and heat loading which in effect demand vehicle-mounting of these sensors.
Even carried on a vehicle of modest size, practical forms of the system have relatively low data throughput and may therefore require several measurement passes to acquire adequate data for a region of interest. These limitations represent additional problems, because many applications would be better served by a system that a single person could carry, or that could survey and map a region in a single pass - or ideally both.
(g) Limited uses of conventional streak lidar - No STIL packages fully suited for commercial or industrial surveillance and mapping are known to be on the market. It appears that this may be due to a combination of factors, including the visual hazards mentioned earlier (with the legal liability that would be associated with operations in populated areas), and also the limited data speed and resulting packaging obstacles outlined just above.
Potentially, a primary commercial application is airborne three-dimensional terrain mapping. Terrestrial mapping is one function that can be performed using lidar, but this opportunity has not been exploited commercially. It is believed that this market may represent potential income exceeding tens of millions of dollars annually.
In California, for example, there is a need to perform complete surveys of the Los Angeles basin (2,400 square miles) every year. This task is currently performed using photogrammetry techniques. Other metropolitan areas have similar requirements, which in the aggregate thus can provide a sustained business in airborne surveying.
An entree to this terrestrial-mapping application can be obtained by contacting any large commercial and industrial surveying company. A very roughly equal amount of business can be generated through "on demand" surveying for particular construction jobs, particularly three-dimensional imaging.
Conventional STIL equipment, however, has not been set up (or at least not set up in a convenient format) for three-dimensional imaging. Likewise it is not available with any kind of viewing redundancy, to surmount problems of temporary or local barriers to viewing.
On land such barriers include for example landscaping or natural forestation, as well as coverings deliberately placed over some objects. At sea they include image-distorting effects of ocean waves (Fig. 19), which may completely obscure some features and actually exchange the apparent positions of others.
In purest principle it is known that foliage and other kinds of cover can be neutralized through use of spectral signatures, polarization signatures or fluorescence signatures. Analysis that incorporates spectral, polarization, spectropolarization, and fluorescence discriminations is also known to be useful for other forms of optical monitoring for which streak lidar would be extremely well suited.
Significant analysis of three-dimensional polarization analysis with lidar systems, using a "Mueller matrix" approach, is in the technical literature. See, for example, A. D. Gleckler, A. Gelbart, J. M. Bowden, "Multispectral and hyperspectral 3D imaging lidar based upon the multiple slit streak tube imaging lidar", Proc. SPIE vol. 4377, April 2001; A. D. Gleckler, A. Gelbart, "Three-dimensional imaging polarimetry", Proc. SPIE vol. 4377, April 2001; A. D. Gleckler, "Multiple-Slit Streak Tube Imaging Lidar (MS-STIL) Applications", Proc. SPIE vol. 4035, pp. 266-278, 2000; R. M. A. Azzam and N. M. Bashara, Ellipsometry and Polarized Light, North Holland, Amsterdam (1977); R. A. Chipman, E. A. Sornsin, and J. L. Pezzaniti, "Mueller matrix imaging polarimetry: An overview", in Polarization Analysis and Applications to Device Technology, Proc. SPIE vol. 2873, June 1996; R. M. A. Azzam, "Mueller matrix ellipsometry: a review", Proc. SPIE vol. 3121, August 1997; P. Elies, et al., "Surface rugosity and polarimetric analysis", Proc. SPIE vol. 2782, September 1996; Shih-Yau Lu and R. A. Chipman, "Interpretation of Mueller matrices based on polar decomposition", JOSA A, vol. 13, no. 5, May 1996; and S. Breugnot and P. Clemenceau, "Modeling and performance of a polarization active imager at λ = 806 nm", Proc. SPIE vol. 3707, April 1999.
Thus spectral, fluorescence and polarization analyses are in theory susceptible to beneficial commercial and industrial streak-lidar exploitation. Examples are detection and measurement of atmospheric particulates, atmospheric constituents, waterborne particulates, and certain hard-body object returns (with propagation paths in either air or water).
Heretofore, however, necessary equipment adaptations for introducing fluorescence, polarization and spectral analyses into streak lidar work - at least on a broad, general-use basis - have been unavailable. The prior art in this field thus fails to teach how to go about making such refinements in any straightforward, practical way. This gap represents a major problem, as it has left these kinds of mapping infeasible or even impossible - and accordingly several practical mapping needs unsolved.
(h) Now-unrelated technology: "eye safe" - This discussion will next turn to modern developments that have not heretofore been pragmatically associated with lidar, or particularly with streak imaging lidar. The first of these relates to population exposure.
Studies have shown that light beams of different wavelengths have respective ocular destructive powers - for any given beam power - that differ by many orders of magnitude. For all wavelengths, such studies have established respective maximum power/pulse-energy levels that are considered safe, at least for humans.
In particular the 1.5-micron region is considered to have the least ocular destructive power (by several orders of magnitude) of any wavelength from x-rays to the far infrared. More specifically, light in the visible, near-UV, and near-IR regions damages the retina, while light in the far-UV and far-IR damages the cornea; but light at about 1.5 microns tends to dissipate harmlessly in the intraocular fluid between the cornea and the retina. This region is therefore commonly designated "eye safe".
Accordingly, for mapping or detecting systems that are to irradiate large areas of land in which people or other higher organisms may be present, it is important to operate in that eye-safe wavelength region as much as possible. Of course there are many reasons to avoid incidental exposure of people above the damage threshold.
Traditional photocathode materials are well suited at visible wavelengths; however, efficient photocathode detector materials do not exist for wavelengths much over one micron. Nevertheless operation at eye-safe wavelengths is feasible with commercially available fast phosphorescent materials, which respond to infrared photons by producing proportional quantities of higher-frequency (visible) photons, and are thus loosely described as performing wavelength "conversion".
These materials thus in effect "convert" light at 1.5 microns to roughly 0.65 micron, with quantum efficiencies potentially as high as sixty-six percent. This process, sometimes called "upconversion" to visible light, occurs at the front of an imaging tube - in advance of the photocathode - and enables use of conventional photocathode materials that respond well to visible light.
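A quick photon-budget check clarifies what the sixty-six-percent figure means. It is a photon-count efficiency: each emitted 0.65-micron photon carries about 2.3 times the energy of an incident 1.5-micron photon, the surplus being supplied by the phosphor's stored charge (the blue "charging" light discussed under ETIR phosphors later in this section). The photon count below is an assumed value.

```python
# Photon-budget sketch for 1.5 um -> 0.65 um phosphor "conversion".
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_j(wavelength_m: float) -> float:
    return H * C / wavelength_m

QE = 0.66            # photon-count quantum efficiency quoted in the text
n_ir = 1_000_000     # incident 1.5-um photons (assumed count)
n_vis = QE * n_ir    # visible photons emitted at 0.65 um

# Each visible photon is more energetic than each IR photon by the
# inverse wavelength ratio; the difference comes from the stored pump energy.
energy_ratio = photon_energy_j(0.65e-6) / photon_energy_j(1.5e-6)
print(n_vis, energy_ratio)
```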
TRANSMITTER: Transmitters operating at 1.5 microns are now commonly available from several vendors (including Big Sky Laser, LiteCycles, and GEC-Marconi) using diode-pumped solid-state (DPSS) Q-switched Nd:YAG lasers. The lasers are coupled with either an optical parametric oscillator (OPO) or a stimulated Raman scattering (SRS) cell.
To achieve optimal range resolution, it is desired to keep the laser pulse length between 4 and 10 nsec. This is the range of typical DPSS Q-switched pulse lengths; accordingly transmitter conversion is straightforward within the state of the art.
RECEIVER: Streak-camera receiver operation at 1.5 microns is currently available using the standard S1 photocathode material.
Unfortunately, S1 has very poor quantum efficiency, which reduces
the applicability for real-world imaging.
Due to the low efficiency of the S1 material, a 1.5-micron streak-lidar product based upon it would similarly operate inefficiently. Pragmatically speaking, such a product would not be economic or viable.
Exploration of other materials has been reported. Those materials which are relevant to vacuum streak-tube operation include: TE photocathode, InGaAs photocathode, ETIR upconversion, and nonlinear upconversion.
These will be discussed in turn below, in this subsection of the present document. Another approach to eye-safe technology, but one that excludes vacuum streak-tube operation entirely, will be discussed in a later subsection.
TE Photocathode: Intevac Corp. has fabricated a transfer-electron (TE) photocathode with quantum efficiencies exceeding ten percent at 1.5-micron wavelengths. (See K. Costello, V. Aebi, et al., "Transferred electron photocathode with greater than 20% quantum efficiency beyond 1 micron", Proc. SPIE vol. 2550, pp. 177-88, 1995.) This photocathode has been demonstrated in image-intensified CCDs - but not in streak tubes - at 1.5 microns.
Very interestingly, applicability of the TE photocathode for streak tubes has been shown too, but not at that eye-safe wavelength. (Please refer to V. W. Aebi, K. Costello, G. Davis, R. Weiss, "Photocathode development for a 1300 nm streak tube", Proc. SPIE vol. 2022, 1993.) Intevac used an early version of this photocathode in a streak tube that operated out to 1.3 microns - but, again, not to 1.5. Whether actually due to perceived lack of customer base or due to some failure in reduction to practice, no streak tube able to operate efficiently at 1.5 microns is currently available.
InGaAs Photocathode: Hamamatsu Corporation of Japan has an InGaAs photocathode used for near-IR photomultiplier tubes (PMTs).
The Hamamatsu photocathode has poorer quantum efficiency (QE) at 1.5 microns - on the order of one percent - than the TE photocathode discussed above.
Although Hamamatsu suggests that this photocathode is compatible with its streak-tube line, no such development has appeared, at least commercially or in the literature. Again, pragmatically, no successful report of testing is known.
Improvements in QE may be possible if "slower" photocathodes that have a longer response time - nanoseconds, vs. tens of picoseconds - are acceptable. Nanosecond response time at the photocathode would have little adverse impact on system performance.
ETIR Phosphor Upconversion: Phosphor upconversion has been performed by simply placing a layer of phosphor in front of a photocathode, in image intensifiers and other photoresponsive devices. No testing with a streak-tube photocathode, however, has been reported.
In known applications of the phosphor-upconversion tech- nique, the incoming IR interacts with the phosphor, which has been "charged" with blue light from an LED, and in response produces light between 600 and 700 nm. This is well within the high-per- formance range of conventional photocathode materials. The blue charging LED can be shut off during the brief data-collection period to avoid saturating the photocathode.
This technique is particularly effective using a class of phosphors, called "electron-trapping infrared" (ETIR) upconversion phosphors, which receive incident infrared photons and in response emit corresponding quantities of visible photons.
The response band of typical ETIR phosphors is about 0.8 to 1.6 µm. The most accepted model for the operation of ETIR phosphors is as follows.
(1) The phosphors are doped such that there are two doping levels between the valence and conduction bands, with the lower doping level called the "trapping level" and the upper doping level called the "communication level".
(2) Visible photons (typically blue to green) excite electrons from the ground state to levels higher than the trapping level. (3) In the recombination process, most electrons then fall to the trapping level, where they can remain for very protracted time periods (years) in the absence of infrared photons with energies corresponding to the gap between the trapping and communication levels.
(4) Incident infrared photons excite the electrons in the trapping level to the communication level, where they radiatively decay to the ground state by combining with holes in the ground state - releasing visible photons (typically orange to red).
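Merely by way of illustration, the four-step model above may be sketched numerically as a pair of rate equations for the trapping- and communication-level populations. All rate constants, populations, and time steps below are illustrative assumptions, not measured phosphor parameters.

```python
# Minimal rate-equation sketch of the four-step ETIR model above.
# All rate constants and populations are illustrative assumptions,
# not measured phosphor parameters.

def etir_step(n_trap, n_comm, ir_flux, dt, k_ir=1e-3, k_decay=1e8):
    """Advance trap/communication-level populations by one time step.

    n_trap  - electrons stored in the trapping level ("charged" state)
    n_comm  - electrons in the communication level
    ir_flux - incident IR photon flux (arbitrary units)
    k_ir    - IR-stimulated promotion rate per unit flux (assumed)
    k_decay - radiative decay rate, communication -> ground (assumed)
    Returns (n_trap, n_comm, visible_photons_emitted).
    """
    promoted = k_ir * ir_flux * n_trap * dt   # step (4): IR promotes trapped electrons
    emitted = k_decay * n_comm * dt           # radiative decay releases a visible photon
    promoted = min(promoted, n_trap)          # cannot promote more than are trapped
    emitted = min(emitted, n_comm + promoted)
    n_trap -= promoted
    n_comm += promoted - emitted
    return n_trap, n_comm, emitted

# "Charged" phosphor per steps (1)-(3): electrons parked in the trapping level.
n_trap, n_comm, total_vis = 1e6, 0.0, 0.0
for _ in range(1000):                         # 1000 steps of 1 ns
    n_trap, n_comm, vis = etir_step(n_trap, n_comm, ir_flux=1.0, dt=1e-9)
    total_vis += vis
# With IR present, stored charge converts to visible emission; with
# ir_flux = 0 the trapped population would persist (years, per the text).
```

With the IR flux set to zero, `n_trap` is never depleted, mirroring the protracted storage described in step (3).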
Another class of infrared upconversion phosphors is anti-Stokes (AS) phosphors. Since these phosphors operate via a multiphoton process, they have higher thresholds and lower conversion efficiencies than do the ETIR phosphors. AS phosphors, however, do not require visible pump photons for operation as do the ETIR phosphors. The need for a visible pump is not a major drawback for the ETIR phosphors, since the pump need not be coherent - and hence LEDs can be used as the pump source.
Nonlinear optical processes competing with ETIR phosphors include second harmonic generation (SHG), stimulated Raman scattering (SRS) anti-Stokes (AS) lines, and sum-frequency generation.
The ETIR phosphors have an advantage over all these competitors, namely that there are no phase-matching or coherency requirements, so that the ETIR process can operate over the wide incidence angles required for imaging.
Commercially available ETIR films from Lumitek International, Inc. (formerly Quantex) have been reported with 2 nsec pulse response width and twenty-two percent quantum efficiency (in reflective mode), from 1.06 µm to visible, for the company's Q-11-R film. (Ping, Gong and Hou Xun, "A New Material Applicable in the Infrared Streak Camera," Chinese Journal of Infrared and Millimeter Waves, vol. 14, no. 2, 1996, pp. 181-82.)

Quantex has developed a near-infrared image intensifier (model I2) using an ETIR phosphor screen. In this project the company measured the minimum sensitivity - at several wavelengths - of a thick-film phosphor screen mated with the image intensifier in transmissive mode. (Lindmayer, Joseph and David McGuire, "An Extended Range Near-Infrared Image Intensifier," Electron Tubes and Image Intensifiers, [ed.] Illes P. Csorba, Proc. SPIE vol. 1243, 1990, pp. 107-13.)

Measured minimum sensitivity of this phosphor-I2 sensor at 1.55 µm was 670 nW/cm2 (ibid. at 108). Quantex had also vapor-deposited thin films of the ETIR material onto an image-intensifier fiber-optic faceplate to improve the imaging resolution. The resolution obtained with this device was 33 line-pairs/mm, corresponding to the resolution of the 15 µm fiber pitch of the faceplate.
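Merely by way of example, the faceplate-resolution and sensitivity figures discussed above follow from simple arithmetic: the limiting resolution of a fiber-optic faceplate is conventionally taken as one line pair per two adjacent fibers, and an irradiance at a given wavelength can be restated as a photon arrival rate. The Nyquist-style one-line-pair-per-two-fibers assumption is the author's illustrative choice here.

```python
# Worked arithmetic for the faceplate and sensitivity figures above,
# assuming the usual Nyquist-style limit of one line pair per two fibers.

h = 6.626e-34          # Planck constant, J*s
c = 2.998e8            # speed of light, m/s

# Fiber-optic faceplate: 15 um pitch -> limiting resolution in lp/mm.
pitch_mm = 15e-3                       # 15 um expressed in mm
resolution_lp_mm = 1.0 / (2.0 * pitch_mm)
print(round(resolution_lp_mm, 1))      # 33.3 line-pairs/mm

# Minimum sensitivity of 670 nW/cm^2 at 1.55 um, as a photon arrival rate.
wavelength = 1.55e-6                   # m
photon_energy = h * c / wavelength     # ~1.28e-19 J per photon
flux = 670e-9 / photon_energy          # photons per cm^2 per second
print(f"{flux:.2e}")                   # ~5.2e12 photons/cm^2/s
```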
Researchers at Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, have reported the development of faster, more efficient ETIR phosphors. (Ping, Gong and Hou Xun, loc. cit.) They reported a red-emitting and a blue-emitting ETIR phosphor - the red phosphor having a 1.3 nsec response width and a 66% quantum efficiency in transmissive mode, and the blue phosphor having a 1.4 nsec response width and a 47% quantum efficiency in transmissive mode.
Because conversion efficiency is a function of optical power, nonlinear upconversion techniques such as SHG and SRS are not practical for low-level signals. Consequently, these techniques are typically used with the transmitter rather than the receiver.
Also, because of phase-matching requirements these techniques are typically only efficient over limited fields of view.
There is a technique in which the signal can be amplified optically in a Raman crystal, to allow for efficient upconversion or to directly overcome the poor quantum efficiency of an S1 photocathode (Calmer, Lonnie C., et al., "Marine Raman Image Amplification", Proc. SPIE vol. 3761, 1999). At present a drawback of this technique, with respect to long-range streak-tube operations, is that the Raman amplifier can be pulsed on for only about 10 nsec at a time.
As to operation at longer wavelengths (1.6 to 10 microns), advantageous for various specialized applications as mentioned earlier, Quantex has also reported ETIR phosphors for upconverting medium-wavelength infrared (3.1 to 4.5 µm) to 633 nm light.
(Soltani, Peter R., Gregory Pierce, George M. Storti, and Charles Y. Wrigley, "New Medium Wave Infrared Stimulable Phosphor for Image Intensifier Applications," [ed.] Illes P. Csorba, Proc. SPIE vol. 1243, 1990.) Unlike the eye-safe wavelength phosphors, these require cryogenic hardware for the phosphor upconversion plane.
RECEIVER CHOICE: As can now be seen, a great variety of technology is available for receiving eye-safe radiation and causing photoresponsive tube devices to respond. No demonstration, however, of pragmatically efficient operation in a lidar streak tube has been reported. Equipment adaptations accordingly have not been developed.
The foregoing discussions have noted the availability and use of very inefficient S1 detector material at 1.5 microns, and Hamamatsu's views as to its own low-efficiency detector material at that wavelength - both these materials being inadequate for industrial-quality instrumentation in the present state of the art - and also the Abbe experiments with TE material at 1.3 microns, and the suggestion by Ping of using ETIR material in streak cameras. In the absence of dispositive testing, none of these appears to represent an enabling disclosure of a commercially feasible eye-safe STIL system.
(i) Now-unrelated technologies: modern optical deflectors - Another area of technological advances that are known but have not heretofore been connected with streak lidar is microelectromechanical systems (MEMS). These devices are very small, and enable use of a simplified optical path (Fig. 11).
A prominent example is a Texas Instruments product denominated a "Digital Micromirror Device" (DMD™). TI makes its DMD units for the commercial projection display market; accordingly they are readily available.
Key factors for efficient use of a DMD component include the mirror fill factor, scanning speed, uniformity of mirror motions, and quantification of diffraction effects. The DMD product has a fill factor higher than ninety percent, and can scan forty degrees in two microseconds (see Larry J. Hornbeck, "Digital Light Processing for high-brightness, high-resolution applications," presentation for Electronic Imaging EI '97, Projection Displays, Feb. 1997).
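The scan figures just quoted imply a very high angular rate; merely by way of example, the arithmetic can be confirmed as follows (the figures are those quoted from the Hornbeck presentation; the derived quantities are simple consequences).

```python
# Arithmetic for the DMD scan figures quoted above:
# forty degrees in two microseconds.
scan_deg = 40.0
scan_time_s = 2e-6
rate_deg_per_s = scan_deg / scan_time_s
print(f"{rate_deg_per_s:.0e}")        # 2e+07 degrees per second

# At that rate, a full 360-degree equivalent sweep takes only microseconds.
full_circle_us = 360.0 / rate_deg_per_s * 1e6
print(round(full_circle_us, 3))       # 18.0 microseconds
```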
They are compatible with operation at virtually any wavelength of interest for imaging and detection - including, in particular, the eye-safe technology discussed in the preceding subsection. Again, although well established these devices have not been associated with lidar instrumentation heretofore.
As can now be seen, the related art remains subject to significant problems, and the efforts outlined above - although praiseworthy - have left room for considerable refinement.
SUMMARY OF THE DISCLOSURE
The present invention introduces such refinement. It has several main aspects or facets that are in general capable of use independently; however, using two or more of these primary aspects in combination with one another provides particularly important benefits as will be seen.
In preferred embodiments of its first major independent facet or aspect, the invention is a streak lidar imaging system. It is for measurements of a medium with any objects therein.
For purposes of the present document, this phrase "measurements of a medium with any objects therein" is hereby defined to mean that the system is for measurements of either a medium, whether or not it has any objects in it; or of objects, if any may happen to be in the medium - or of both the medium and any objects that may be in it. Thus it is not intended to suggest that necessarily objects are in the medium, or that measurements of objects will necessarily be performed if objects are in fact present, or that measurements of the medium will necessarily be performed when what is of interest is objects within the medium.
The system includes a light source for emitting at least one beam into the medium. In certain of the appended claims, this will be expressed by the language "into such medium". In the accompanying apparatus claims, generally the term "such" is used (instead of "said" or "the") in the bodies of the claims, when reciting elements of the claimed invention, for referring back to features which are introduced in the preamble as part of the context or environment of the claimed invention. The purpose of this convention is to aid in more distinctly and emphatically pointing out which features are elements of the claimed invention, and which are parts of its context - and thereby to more particularly claim the invention.
The system also includes an imaging device for receiving light reflected from the medium and forming plural images, arrayed along a streak direction, of the reflected light. The imaging device includes plural slits for selecting particular bands of the plural images respectively. In addition the system includes a device for displacing all the plural images along the streak direction.
For purposes of this document, the terms "image" and "imaging" refer to any formation of an image, whether an optical or electronic image or an image in some other domain - and also whether an image of spatial relationships as such or for instance an image of a spectrum or other spatially distributed parameter. Merely by way of example, for present purposes "image" and "imaging" may refer to images of optical-wavefront direction, or of fluorescence delay, or of polarization angle, as e.g. distributed over a beam cross-section.
The word "streak", in the present document, means substantially continuous spatial displacement of a beam - particularly in such a way that its points of impingement on a receiving screen or the like are shifted. This displacement most typically occurs during an established measurement interval, and ordinarily has the purpose of creating a substantially continuous relationship between position on such a screen and time during that measurement interval. (In classical lidar environments, but not all forms of the present invention under discussion here, a further substantially continuous relationship is thereby established between the screen position and "range" - i.e. the distance of some reflective element from the measuring apparatus.) The word "substantially" is used here so that the appended claims encompass systems and methods in which the displacement, rather than being continuous, is stepwise but only to an inconsequential degree - for instance, merely perfunctory stepping whose primary purpose may in fact be an attempt to escape the scope of the claims.
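Merely by way of example, the classical position-to-time and position-to-range relationships described here can be pictured as a simple linear mapping. The sweep rate and screen extent in the sketch below are illustrative assumptions, not parameters of any particular streak tube.

```python
# Sketch of the classical streak relationship described above:
# screen position -> time within the measurement interval -> range.
# The sweep rate and streak length are illustrative assumptions.

C = 2.998e8  # speed of light, m/s

def streak_to_range(y_mm, sweep_ns_per_mm, t0_ns=0.0):
    """Map a streak-screen position (mm along the streak direction)
    to a round-trip time (ns) and a one-way range (m).
    Range uses the usual lidar factor of 2 for the round trip."""
    t_ns = t0_ns + y_mm * sweep_ns_per_mm
    range_m = C * (t_ns * 1e-9) / 2.0
    return t_ns, range_m

# Example: a 10 mm streak at 10 ns/mm spans 100 ns, i.e. ~15 m of range.
t, r = streak_to_range(10.0, sweep_ns_per_mm=10.0)
print(t, round(r, 2))   # 100.0 ns -> 14.99 m
```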
The foregoing may represent a description or definition of the first aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, the plural slits and images enable a single streak tube to sense and record in, effectively, a space of three independent parameters rather than only two - thus curing a pervasive limitation of streak devices as mentioned earlier. The added dimension, or parameter, is extrinsic to the conventional native space of range (or time) vs. azimuth for a streak device (or indeed certain other time-resolving devices), and accordingly in this document will be called the "extrinsic dimension". The extrinsic dimension of a streak (or other time-resolving) device is not to be confused either with the intrinsic azimuth dimension - which can be mapped by optical fibers or otherwise to represent various spatial or other parameters - or with the intrinsic range/time dimension. Rather, the extrinsic dimension is a truly independent and thus novel parametric enhancement.
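Merely by way of example, the three-parameter space just described can be pictured as a data cube indexed by azimuth, range/time, and the extrinsic dimension. The array sizes below are arbitrary illustrative choices, not parameters of any actual device.

```python
# Sketch of the three-parameter space enabled by the plural slits:
# azimuth x range/time x extrinsic dimension (e.g. one wavelength band
# or polarization state per slit). Sizes are illustrative assumptions.

N_AZIMUTH = 128        # intrinsic azimuth pixels
N_RANGE = 512          # intrinsic range/time bins from the streak
N_EXTRINSIC = 4        # one band per slit: the added, extrinsic dimension

# One streak record: each slit band yields its own azimuth-vs-range image.
cube = [[[0.0] * N_EXTRINSIC for _ in range(N_RANGE)]
        for _ in range(N_AZIMUTH)]

# A conventional single-slit streak tube records only the 2-D face
# cube[azimuth][range]; the extrinsic index is the new third parameter.
cube[10][200][2] = 1.0   # e.g. a return recorded in slit band 2
n_values = N_AZIMUTH * N_RANGE * N_EXTRINSIC
print(n_values)          # 262144 independent samples per streak record
```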
Although the first major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the imaging device includes an optical device, the plural images are optical images, and the displacing device includes a module for displacing the plural optical images.
Closely related to this preference are several subsidiary ones: preferably the displacing device includes an electromechanical device, and this in turn preferably includes at least one scanning microelectromechanical mirror - still more preferably an array of such mirrors. Alternatively it is preferable that the displacing device include an electrooptical device.
In one alternative to the optical-device preference, preferably the imaging device includes an electronic device, the plural images are electronic images, and the displacing device includes a module for displacing the plural electronic images. When this preference is observed, it is further preferable that the displacing device include electronic deflection plates. Two related electronic-implementation preferences, usually alternative to each other, are that the imaging device include (1) an optical front end that forms a single optical image of the reflected light, and an electronic stage receiving the single optical image and forming therefrom the plural electronic images; or (2) an optical front end that forms plural optical images of the reflected light, and an electronic stage receiving the plural optical images and forming therefrom the plural electronic images.
Another preference, as to the basic first main aspect of the invention, is that the displacing device form from each of the plural images a respective streak image - so that the displacing device forms, from the plural images considered in the aggregate, a corresponding array of plural streak images. In this case preferably the system further includes a device for receiving the array of plural streak images and in response forming a corresponding composite signal.
In still another basic pair of alternative preferences, the plural slits operate on the images in either optical or electronic form. Yet a further preferred way of configuring the system is to include in the imaging device a module for forming substantially a continuum of images of the reflected beam, and to arrange each of the plural slits to select a particular image band from the continuum.
There are several other basic preferences related to the above-introduced first main facet of the invention. In one of these, the light source includes an optical module for emitting at least one thin, fan-shaped beam into such medium, and the imaging device includes an optical module for receiving at least one thin, fan-shaped beam reflected from such medium; at least one of these optical modules includes an optical unit for orienting a thin dimension of the reflected beam along the streak direction.
For purposes of the present document, the phrase "thin, fan-shaped beam" means a beam which - as evaluated at impingement on or exit from a medium, or at some point within the medium - is thin in one cross-sectional dimension but is fanned out in another (generally orthogonal) cross-sectional dimension. In other words, with a thin, fan-shaped beam it is possible to find some point along the beam propagation path where the aspect ratio of the beam, perpendicular to the propagation path, is much broader in one direction than in the other.
Very generally speaking a "thin, fan-shaped beam" has, at some point along the propagation path, an aspect ratio of perhaps 10:1 to 500:1 and even much higher. In special applications of the invention, however, as will be clear to people skilled in this field, the ratio may be only 5:1 or even 3:1 and remain within the reasonable scope of the appended claims. The cross-section may be rectangular, elliptical, oval or irregular, provided only that the aspect ratio is at least 3:1 or 5:1 as indicated above.
This is not a requirement that the beam have such a cross-sectional relationship at the point of transmission from the apparatus or the point of receipt into the apparatus - since in fact very commonly the beam at these particular points has a very low aspect ratio and indeed may be nearly circular. The term "aspect ratio", in turn, is used in this document in a very general sense that is common in the optics field, namely the ratio of widest to thinnest dimensions of an optical-beam cross-section (without regard to the orientations of such dimensions relative to the horizon or any other reference frame).
The point is that the aspect ratio of a thin, fan-shaped beam for purposes of this field varies greatly from the point of emission (or receipt) along the optical path - for example, rising as a transmitted beam traverses a relatively clear medium and then, within a turbid medium, changing in complicated ways as the beam continues to expand in the broader dimension but also becomes more diffuse in the thinner dimension. Therefore the aspect ratio, for purposes of this definition of a "thin, fan-shaped beam", is to be evaluated at some point where it assumes a value reasonably close to its maximum value. Thus the concept that is intended by the phrasing "thin, fan-shaped" is a mental picture of a classical old-fashioned handheld fan used to cool a person's face, and does not encompass an optical beam that merely is thin at the outset and expands to a circular or slightly oval shape.
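Merely by way of example, the orientation-independent definition of "aspect ratio" given above can be stated as a one-line computation. The beam dimensions used below are illustrative values only.

```python
# Helper matching the definition above: aspect ratio is the widest
# cross-sectional dimension divided by the thinnest, without regard
# to orientation. Dimension values below are illustrative only.

def aspect_ratio(width, height):
    """Aspect ratio of a beam cross-section per the text's definition."""
    big, small = max(width, height), min(width, height)
    return big / small

# Near the apparatus the beam may be nearly circular...
near = aspect_ratio(1.0, 1.1)        # ~1.1, i.e. close to 1:1
# ...while at some point within the medium the fanned dimension dominates.
far = aspect_ratio(50.0, 0.5)        # 100.0, a "thin, fan-shaped" beam
print(near, far)
```

Note that the definition is orientation-free: `aspect_ratio(0.5, 50.0)` and `aspect_ratio(50.0, 0.5)` give the same value.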
In three other basic preferences, the imaging device includes an optical module for forming the plural images as images of the at least one reflected beam at respectively discrete optical wavelengths, or different polarization states, or different angular sectors, of the at least one beam. In this last case the imaging device preferably further includes an optical device for rearranging image elements in each angular sector to form a single line image for that sector; and this device in turn preferably includes remapping optics, which still more preferably include a fiber-optic or laminar-optic module, ideally a lenslet array.
Another basic preference is that the light source include an emitter for emitting light in a wavelength region at or near 1.5 microns. In this case preferably the imaging device includes an upconverter for generating light at or near the visible wavelength region in response to the light at or near 1.5 microns; and this upconverter in turn preferably includes phosphorescent or fluorescent material - ideally ETIR material.
Another group of basic preferences relates to the character of the medium into which the light source emits. In particular the source preferably includes some means for emitting the at least one beam into a generally clear fluid above a generally hard surface; or into a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions; or into a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
In addition, as noted earlier it is further preferred that the first primary aspect or facet of the invention also be employed in conjunction with other main aspects introduced below.
Many of the preferences just discussed are analogously applicable to the following main facets of the invention.
In preferred embodiments of its second major independent facet or aspect, the invention is a lidar imaging system for optical measurements of a medium with any objects therein; the system includes a light source for emitting at least one light pulse into the medium. It also includes some means for receiving the at least one light pulse reflected from the medium and for forming from each reflected pulse a set of plural substantially simultaneous output images, each image representing reflected energy in two dimensions. For purposes of generality and breadth in discussing the invention, these last-mentioned means will be called simply the "receiving and forming means" - or sometimes just "receiving means".
The "substantially simultaneous" imaging character of the receiving and forming means does not imply that the images are formed instantaneously, but rather only that formation of substantially all images in a particular set occurs during a common time interval, namely the interval during which the reflected pulse is received.
The foregoing may represent a description or definition of the second aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, as outlined above, this aspect of the invention as broadly conceived is directed to pulsed systems, without regard to whether pulses are time resolved by streaking subsystems or by other means; such other means may encompass for instance extremely fast electronics, or instead optical circuits, processors and memories that are nowadays being devised to replace electronics. This second facet of the invention thus provides, in pulsed systems generally, a triple-parameter capability that enables range resolution (or equivalently time resolution) of two independent characteristics modulating the optical pulses - not just one such characteristic as in the past.
The term "range" as used in this document, if not otherwise specified or clear from the context, ordinarily means distance from the apparatus. As suggested earlier, this understanding is implicit in the acronym "lidar".
Although the second major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the light source includes means for emitting not just one but a series of light pulses into the medium, each of the pulses in the series generating a corresponding such image set (in this way the receiving means generate a sequence of plural corresponding image sets); and further includes some means for storing the sequence of corresponding image sets.
Another basic preference is that the receiving means include some means for allocating image elements, in each image of the set, as among (1) azimuth, (2) range or time, and (3) an extrinsic measurement dimension. As noted earlier, the azimuth dimension may be mapped to another physical quantity as desired. In this case it is further preferred that the extrinsic measurement dimension be, selectively, wavelength, or polarization state, or a spatial selection. (The latter typically is a different spatial choice than any that is represented by azimuth, in the invention as actually used.) Two additional basic preferences are that the receiving means include some means for causing the images in the set to be substantially contiguous; and that the receiving means include some means for receiving the reflected light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1. In this latter case it is further preferable that the light source include some means for emitting the at least one light pulse as a beam with a cross-section that, analogously, has an aspect ratio on the order of 1:1.
Yet another basic preference is that the receiving means include some means for forming the images in such a way that the two dimensions are range/time and output-image azimuth, for a particular extrinsic dimension that corresponds to each output image respectively. (In other words, when a user looks - whether visually or using viewing apparatus - at any one of the output images, what the viewer sees is an intensity plot in time or range vs. output azimuth.) As noted earlier, this second main facet of the invention is preferably used in conjunction with certain of the other major aspects and their preferences. Thus for instance here preferably the light source includes some means for emitting the at least one beam into each of certain specific kinds of media, enumerated in the last above-stated preference for the first aspect of the invention.
In preferred embodiments of its third major independent facet or aspect, the invention is an optical system. It includes a first lenslet array for performing a first optical transformation on an optical beam; and a second lenslet array, in series with the first array, for receiving a transformed beam from the first array and performing a second optical transformation on the transformed beam.
The foregoing may represent a description or definition of the third aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, whereas earlier uses of lenslet arrays have been limited to applications that analyze essentially static or low-frequency phenomena (e.g. only events having no significant frequency content above roughly 10 kHz), this third main aspect of the invention enables reconfiguration of optical beams into formats that are useful in time resolution of extremely fast phenomena (e.g. 1 GHz and above).
Furthermore this facet of the invention performs such reconfigurations with minimal loss of certain optical characteristics, such as - depending on the particular layout - optical phase, or wavefront orientation. This aspect of the invention thereby facilitates an advance in the art of time-resolving complicated optical signals, by five orders of magnitude.
Although the third major aspect of the invention thus advances the art to an extent that is all but astonishing, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably one of the arrays includes image-plane-defining lenslets to define image elements of the beam; and the other array includes deflecting lenslets to selectively deflect beam elements to reconfigure an image transmitted in the beam. In this case, preferably the one of the arrays that defines the image elements is the first array.
Another preference for this image-defining/deflecting case is that the system further include some means defining an image carried by the beam, and that the first array be positioned substantially at a focal plane of the image. In this case it is further preferable that the image-defining means include a lidar source emitting an excitation beam to a region of interest; and collection optics receiving a reflection of the excitation beam from the region and focusing the reflection at the focal plane.
This preferable form of the invention is still further preferably implemented in such a way that the two transformations, considered together, include selectively imaging particular components of the beam onto plural slits following the second array; and also incorporating some means for streaking images from both slits for reimaging at a detector.
Yet another preference for the image-defining/deflecting case under discussion is that the first array also relay the image from the focal plane to the second array. In this case preferably the second array is substantially in a plane, and that plane is disposed substantially at the relayed image.
One other preference, as to the third main facet of the invention, will be mentioned here. The two transformations, considered together, include selectively imaging particular components of the beam onto plural slits following the second array.
In preferred embodiments of its fourth major independent facet or aspect, the invention is a streak lidar imaging system for making measurements of a medium with any objects in the medium. The system includes a light source for emitting into the medium a beam in a substantially eye-safe wavelength range.
It also includes an imaging device for receiving light reflected from such medium and forming an image of the reflected light. In addition the system includes an upconverter for generating light at or near the visible wavelength region in response to the reflected light (i.e. to the returning light that is in a substantially eye-safe wavelength range); and a device for displacing the image along a streak direction.
The foregoing may represent a description or definition of the fourth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, by means of this fourth aspect of the invention the fine range- or time-resolving capabilities of streak imaging lidar are made available for many kinds of measurements that otherwise would be precluded by proximity to unprotected people.
In some relatively small-scale applications, such people might be simply passersby in a laboratory, or in an industrial or like environment - and in applications at a larger scale such people might be members of a general population. It has not previously been suggested that the temporal resolving power of streak lidar systems could be exploited for such applications.
Although the fourth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, in alternative preferences the upconverter may be positioned in the system either after or before the displacing device. Preferably the upconverter includes phosphorescent or fluorescent material, and most preferably ETIR material. The light source preferably emits the beam in a wavelength range at substantially 1.5 microns.
In preferred embodiments of its fifth major independent facet or aspect, the invention is a streak lidar imaging system. It includes a light source for emitting a beam, and an imaging device for receiving light originating from the source and for forming an image of the received light.
The system also includes at least one microelectromechanical mirror for displacing the image along a streak direction. In addition it includes an image-responsive element for receiving and responding to the displaced image.

The foregoing may represent a description or definition of the fifth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, this fifth main aspect of the invention enables enjoyment of streak-lidar capabilities without the inordinate expense and fragility of an evacuated streak tube with its associated high voltages and relatively temperamental drive electronics - and also without the previously discussed cumbersomeness and very limited operating properties of a macroscopic scanning mirror, such as a relatively large spinning polygon.
Although the fifth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one mirror includes an array of multiple microelectromechanical mirrors.
In case the system is for use with an optical medium, another preference is that the light source include some means for emitting the beam into the medium - and that the imaging device include some means for receiving light reflected from the medium and forming an image of the reflected light.
Another basic preference, as to this fifth main facet of the invention, is that the light source include a resonant device and the imaging device include some means for causing imperfections in resonance of the resonant device to modulate the image. In this case, particularly if the resonant device includes a laser, it is also preferred that the imaging device include some means for causing imperfections in optical wavefronts from the laser to modulate the image - and further preferably these causing means include some means for deflecting elements of the beam in response to local imperfections in the coherence. These deflecting means, in turn, preferably include at least one lenslet array.
In preferred embodiments of its sixth major independent facet or aspect, the invention is a spatial mapping system for mapping a region. The system includes a light source for emitting at least one thin, fan-shaped beam from a moving emission location toward the region; a thin dimension of the beam is oriented generally parallel to a direction of motion of the emission location.
The system also includes an imaging device for receiving light reflected from such region and forming an image of the reflected light. The system also includes some means for separating the reflected light to form plural reflected beam images - representing different aspects of the region, respectively.
Here the term "aspects" means characteristics or properties of the region. It is to be interpreted broadly to include different values of any parameter that is susceptible to probing by an optical spatial-mapping system. Merely by way of example, such a parameter may be spatial, dynamic, optical, chemical, acoustic, biological or even sociological.
Also included is an image-responsive element for receiving and responding to the plural beam images. The foregoing may represent a description or definition of the sixth aspect or facet of the invention in its broadest or most general form.
Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art. In particular, a thin fan beam or plural such beams can be used to yield a representation of two or more values of any optical parameters - expanding the usefulness of a pushbroom mapping system into a three-dimensional regime, with the selected parameter functioning as the third dimension.
Thus with this aspect of the invention it is not necessary to be limited to mapping spatially at just one wavelength, or in just one polarization state - or even at only one visual angle, one pair of subtended angular widths, or one focal condition.
Subject to laser-technology limitations, a single emitter may be made capable of emissions at more than one coherence length; and this would enable application of the invention with coherence length as the extrinsic parameter.
Although the sixth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, there are several preferences regarding capability of the separating means to discriminate between different aspects of the probed region: spatially different aspects, or aspects that are carried in portions of the beam received at different angles, or in portions of the beam received at different angular ranges, or in different polarization states of the beam, or in different spectral components.
Another basic preference is that the separating means include means for discriminating between combinations of two or more different aspects of the region that are carried in different characteristics of the beam, at least one of which characteristics is selected from among spatially different aspects, different polarization states, and different spectral components of the beam. In this case preferably at least two of the characteristics are selected from among those three enumerated characteristics.
Yet another basic preference is that the emission location be a spacecraft, an aircraft, another type of vehicle, or another type of moving platform. As an alternative, preferably the emission location is a fixed light source cooperating with a scanning system to provide a moving image of the light source. Also applicable here are the previously enumerated preferences as to type of medium into which the beam is emitted.
In preferred embodiments of its seventh major independent facet or aspect, the invention is a spatial mapping system for mapping a region. The system includes a light source for emitting a beam whose cross-section has an aspect ratio on the order of 1:1, from a moving emission location toward the region. It also includes an imaging device for receiving light reflected from such region and forming an image of the reflected light; and some means for separating the reflected light to form plural reflected beam images representing different aspects of the region, respectively. Also included is an image-responsive element for receiving and responding to the plural beam images.
The foregoing may represent a description or definition of the seventh aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, this form of the invention extends broad-beam imaging to three-parameter measurement space, analogously to the extension introduced above for thin fan-beam work. As will be seen, the benefits of this extension are felt in ability to obtain much more sophisticated image interpretations, in a variety of applications.
Although the seventh major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the imaging device includes some means for also receiving the reflected light from the region as a reflected beam whose cross-section has an aspect ratio on the order of 1:1.
In preferred embodiments of its eighth major independent facet or aspect, the invention is a spectrometric analytical system for analyzing a medium with any objects therein. The system includes a light source for emitting substantially at least one pencil beam toward the medium, and an imaging device for receiving light reflected from such medium and forming an image of the reflected light.
Also included are some means for separating the reflected light along one dimension to form plural reflected beam images arrayed along that "one dimension" and representing different aspects of the medium, respectively. The system further includes optical-dispersing means for forming a spectrum from at least one of the plural images, by dispersion of the at least one image along a dimension generally orthogonal to the one dimension - and an image-responsive element for receiving and responding to the plural beam images.
The foregoing may represent a description or definition of the eighth facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, this eighth main facet of the invention provides the benefits of an added extrinsic parameter, in the context of hyperspectral measurements. In particular through use of this eighth facet of the invention it is possible to obtain spectra for different aspects of the medium or objects - as for different values of the optical properties listed above in discussion of the first preferences for the sixth facet of the invention.
Although the eighth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the dispersing means include means for forming a spectrum from each of the plural images, respectively.
Other preferences are that the separating means include some means for separating the reflected light to form plural images representing aspects of the beam that, respectively, are spatially different - or represent different polarization states or different spectral constituents.
In preferred embodiments of its ninth major independent facet or aspect, the invention is a wavefront sensor, for evaluating a light beam from an optical source. The sensor includes optical components for receiving the beam from the source.
It also includes optical components for subdividing small portions of such beam to form indicator subbeams that reveal a direction of substantially each of the small portions; and optical components for steering the indicator subbeams to fall along at least one slit. The sensor also includes some means for streaking light that passes through the at least one slit; and some means for capturing the streaked light during a streaking duration.
The foregoing may represent a description or definition of the ninth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, by steering the subbeams to fall along a slit, the sensor provides an output that enables the aggregate of those subbeams to be streaked - and thereby makes it possible to time resolve the directional or other behavior of the subbeams. Because many subbeam sets can be arrayed along even just a single slit, wavefront directions can be time resolved for many points in the beam cross-section.
Although the ninth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one slit comprises plural slits.
(This preference raises very greatly the number of points in a beam cross-section that can be time resolved.) In another preference, particularly for use with a resonant optical source, the receiving and subdividing components include means for causing imperfections in optical wavefronts from the resonant source to modify the light that passes through the at least one slit. In this case preferably the receiving, subdividing and steering components include at least one lenslet array - and more preferably at least two lenslet arrays in optical series.
In this latter arrangement preferably the lenslet arrays include one array that defines image elements at or near a focal plane of the beam, and another array that receives the image elements relayed from the first array, and that steers light from the image elements to the at least one slit. An alternative basic preference is that the receiving, subdividing and steering components comprise at least one lenslet array in optical series with at least one fiber-optic remapping device - in other words, that the steering function be performed by fiber-optic remapping rather than the other array just mentioned.
In preferred embodiments of its tenth major independent facet or aspect, the invention is a spectrometric analytical system for analyzing a medium with any objects therein. The system includes a light source for emitting substantially at least one pencil beam toward the medium, and an imaging device for receiving light reflected from such medium and forming an image of the reflected light.
It also includes optical or electronic means for streaking the plural images, and an image-responsive element for receiving and responding to the plural beam images. Also included is a computer for extracting fluorescence-lifetime information from a signal produced by the image-responsive element.
The foregoing may represent a description or definition of the tenth aspect or facet of the invention in its broadest or most general form. Even as couched in these broad terms, however, it can be seen that this facet of the invention importantly advances the art.
In particular, this hybrid form of the invention uniquely combines capabilities of earlier-discussed facets. It does so in such a way as to provide information about biological materials in volume materials - such as clouds - that heretofore would strain the capabilities of two or more different instruments.
Although the tenth major aspect of the invention thus significantly advances the art, nevertheless to optimize enjoyment of its benefits preferably the invention is practiced in conjunction with certain additional features or characteristics. In particular, preferably the at least one beam includes at least one pencil beam. Also preferably the imaging device includes a hyperspectral optical system. In this case it is further preferred that the imaging device include a plural-wavelength optical system, in which each of plural wavelength bands is arrayed along a length dimension of a respective slit-shaped image.
All of the foregoing operational principles and advantages of the present invention will be more fully appreciated upon consideration of the following detailed description, with reference to the appended drawings, of which:
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is an elevation, highly schematic, of conventional streak-tube architecture; Fig. 2 is a pair of simplified diagrams showing (a) in perspective or isometric view, typical STIL data collection in a plane that extends away from the instrument through and beyond an object of interest; and (b) an elevation of a resulting CCD image for the same measurement setup; Fig. 3 is a pair of perspective views showing (a) a ruggedized streak-tube assembly and (b) a three-dimensional model of a two-receiver lidar system fabricated to fit into a very small volume, e.g. of an unmanned underwater vehicle; Fig. 4 is a set of illustrations relating to conventional STIL terrestrial mapping data, including (a) an aerial photo of buildings being surveyed, (b) a single laser shot showing raw data for one line image (outlined by a rectangular white line in [a]), and (c) a three-dimensional rendered range image of a corresponding area (outlined in [a]) generated by reconstruction from the individual line images (brighter is taller); Fig. 5 is a set of five images relating to data acquired by conventional STIL imaging of an object submerged in shallow water, including (a) a photo of the bottom object used in the experiment, (b) a contrast/reflectivity image of the object at a depth of 6 m (20 feet), (c) a range image of the same object, in which brighter rendering represents greater proximity to the instrument, (d) a three-dimensionally-rendered surface, with contrast data mapped onto the surface, and (e) a one-dimensional cut through the range image, with actual object profile, showing an excellent match between data and object; Fig. 6 is a set of three diagrams showing general characteristics applicable to several different forms of the invention: (a) system block diagram, (b) wavelength dispersion methods, and (c) data-processing algorithms and hardware implementations; Fig. 
7 is an elevational diagram, somewhat schematic, of streak-tube receiver architecture for plural-slit (in this case as in Fig. 6, three-slit) operation; Fig. 8 is a set of four diagrams - three plan views and one isometric - showing how a simple area image can be remapped onto plural lines suitable for input into the plural-slit streak tube: (a) the original area image, (b) the area image rearranged into plural lines (here four) by remapping optics, (c) the relationship between the first two views, and (d) a deformed, sliced, and reassembled fiber-optic device performing the transformation shown in the first three views; Fig. 9 is an isometric diagram, highly schematic, of lenslet-array remapping optics for a plural-slit streak-tube configuration; Fig. 10 is a pair of phosphor-screen images (in both sets range is vertical and a spatial dimension horizontal) for comparison: (a) conventional single-slit operation, and (b) plural-slit operation according to the invention, in particular with four slits; Fig. 11 is a set of two elevational diagrams, both highly schematic, and an artist's perspective or isometric rendering showing respectively (a) conventional optical streaking by means of a large rapidly rotating mirror, (b) innovative optical streaking with MEMS mirrors and no other moving parts, and (c) a more-specific novel design with DMDs; Fig. 12 is a diagram like Fig. 7, but showing a further modification for plural-slit spectroscopy - with a wavelength dispersion device arranged to array the available spectrum along the height of the photocathode plane, and at the photocathode a preferably programmable slit-mask device (e.g. spatial light modulator) for selecting particular plural spectrally narrow wavebands specified for a desired spectral analysis; as well as streaked forms of those wavebands appearing on the anode at right; Fig. 
12(a) is a like diagram but using an electronic, rather than optical, form of slit masking - with the selective plural slit mask now inside the tube, following the photocathode; Fig. 12(b) is a diagram like Fig. 12 but showing an optical, rather than electronic, form of streaking - which obviates the need for an evacuated tube and also represents one way of facilitating use of wavelengths outside the visible range; Fig. 13 is a set of three diagrams showing, for the Fig. 12 system, respective phenomena related to the image as it progresses through the front-end components: (a) the vertically dispersed spectrum at the entrance to the slit-mask device, (b) the programmable mask itself, here set for three slits at specified heights, and (c) the resulting selected image on the photocathode - consisting of three isolated line images in respective shallow wavebands; Fig. 14 is a diagram, highly schematic, of a fan beam passing through a cloud of atmospheric constituents as for analysis in the system of Figs. 12 and 13, with the horizontal axis of the three-waveband image in the spatial dimension and the vertical axis (within each spectral band respectively) in the time dimension - in this case also range; Fig. 15 is a single-frame data image, exemplary of the type of data that can be collected by the system of Figs. 12 through 14, and at right an associated spectral profile, from which the character and quantity of the atmospheric constituents (or contaminants) can be determined; Fig. 16 is a set of three measurement-volume diagrams showing sampling regimes of different conventional lidar systems: (a) range-gated cameras with poor range resolution, that must avoid the ocean surface, (b) time-resolved detectors with poor spatial resolution, and (c) STIL systems that provide good resolution in all dimensions; Fig. 
17 is a set of three like diagrams, but representing the present invention: (a) increased areal coverage rate due to more lines per shot, (b) finer spatial resolution due to more pixels per line (shown as a magnified portion of one of the volume cubes), and (c) resolution-enhanced broader areal coverage, demonstrating both high spatial and high range resolution in a single shot; Fig. 18 is an isometric diagram of fore-and-aft viewing using two fan beams to simultaneously acquire plural (here only two) different views of every object; Fig. 19 is a set of three diagrams illustrating image-distorting effects of water waves: (a) ray deviations, for rays to and from certain submerged objects, shown for a given surface, (b) images and associated SNRs of the same objects but showing various distorting effects for rays incident normally, and (c) images of the same objects for rays coming in at 30 degrees off normal - showing different object shapes, locations and SNRs; Fig. 20 is a pair of single-shot range-azimuth images through leaves: (a) light foliage on a relatively small tree, and (b) heavy foliage on a much larger tree; Fig. 21 is a set of three diagrams representing an example of measuring a covered hard object with a polarization-sensitive two-slit STIL: (a) a vertically projected fan beam intersecting the object on the ground, (b) streak camera images from a single laser shot showing at left the respective images in two generally orthogonal polarization states - and at right the corresponding cross-sectional views - and (c) reconstructed images from multiple shots; Fig. 22 is a diagram, highly schematic, of transmitter optics for a preferred embodiment of a polarization form of the invention wherein the half-wave ("λ/2") plate allows the linear polarization to be rotated arbitrarily and the eighth-wave ("λ/8") plate creates the desired elliptical polarization state; Fig. 
23 is a like diagram of complementary receiver optics for the same embodiment - the λ/8 plate and Wollaston prism being polarization-state analyzers that produce the two measurements necessary to determine the degree of polarization of the return beam; Fig. 24 is a pair of illustrations relating to data from a preferred embodiment of a hyperspectral form of the invention: (a) a graphic showing how the spectral dimension is arrayed along the length of the slit - rather than across the width dimension - and (b) to essentially the same spectral scale, a reproduction of actual excitation and fluorescence image data acquired in a representative measurement; Fig. 25 is an elevational diagram, highly schematic, of a conventional Hartmann-Shack wavefront sensor (WFS) that directs subbeams to a conventional quad cell to indicate wavefront angle; Fig. 26 is a conventional quad cell as used in a Hartmann-Shack WFS to measure spot position (which corresponds to the three-dimensional tilt of the wavefront) and intensity; Fig. 27 is an optical quad cell, according to the present invention, that performs the function of the conventional cell of Fig. 26 and also redirects the indicator subbeams onto a slit for the streak tube; Fig. 28 is a set of three images relating to time-resolved laser pulse and fluorescence return (a) from a highly fluorescent plastic cable tie, with the two 3-D views of the data in (b) and (c) showing the elastic return from the laser pulse, followed by the longer fluorescence return: the gray lines locate the wavelength of maximum return as a function of time (vertical plot) and the time of maximum return as a function of wavelength (horizontal plot); and Fig. 29 is a set of three illustrations relating to a combined STIL polarimeter and fluorescence sensor: (a) is a diagram comparable with Figs. 12 etc. 
showing the front-end optics for an electron-tube system, (b) an enlarged detail view of beam-splitter optics at a fiber-optic faceplate that transfers the long-wavelength image to the actual image plane, and (c) a diagram of the combined sensor data as imaged on the photocathode slits.
DETAILED DESCRIPTION
OF THE PREFERRED EMBODIMENTS

1. PLURAL-SLIT SYSTEM FOR SINGLE-PULSE SCANLESS 3-D IMAGING

Conventional underlying streak-tube concepts have been introduced above - in subsection 2 of the earlier "BACKGROUND" section. In those conventional approaches, a system transmits and receives a single narrow fan beam.
The present invention proceeds to a new technique, plural-slit streak-tube imaging lidar (PS-STIL) - with associated plural line images, and other resulting plural parameters. This innovation provides a much more general solution to a number of lidar applications than possible with a single line image per laser shot. As usual, "plural" means "two or more". This term thus encompasses "multiple" - i.e. three, four etc. slits, as well as associated multiple images, multiple wavelengths and other corresponding parameters.
Basically, the PS-STIL approach to streak-tube imaging provides plural contiguous range-azimuth images (Figs. 6[a] and 6[b]) per laser pulse. Each shot thereby yields a full three-dimensional image.
All three of these dimensions can be, but are not necessarily, spatial dimensions. The third dimension - the one other than range (or time) and azimuth - may instead be virtually any parameter that has an optical manifestation.
Such a parameter may be for instance focal length, or wavelength, or coherence length, or polarization angle, or the subtended angle or other property of a beam or subbeam. (It is not intended to suggest that these particular examples are particularly preferred embodiments of the extrinsic dimension, or even that they are particularly useful choices, but rather only that the available range of choices for that dimension may be extremely broad.) The plural-slit STIL technique of the present invention, however, does enable collection of two-spatial-dimension area images rather than line images. It also thereby enables formation of true single-pulse three-spatial-dimensional images.
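The per-shot data structure just described can be illustrated with a short sketch. This is only a hedged illustration, not part of the patent: the array shapes are hypothetical, and the stacking function is an arbitrary name chosen for the example. The point is simply that plural line images from one pulse stack into a three-dimensional image whose third axis carries whatever parameter the slits encode.

```python
import numpy as np

# Hypothetical sketch of a PS-STIL per-shot data cube: each slit's
# streak zone yields one range x azimuth line image, and stacking the
# line images forms a three-dimensional image per laser pulse. The
# third axis carries whichever parameter the slits encode (a second
# spatial axis, wavelength, polarization state, ...).
def assemble_cube(line_images):
    """line_images: list of 2-D arrays (range x azimuth), one per slit."""
    return np.stack(line_images, axis=-1)  # shape: range x azimuth x slit

# Four slits, each 128 range bins by 512 azimuth pixels (made-up sizes):
shot = [np.zeros((128, 512)) for _ in range(4)]
print(assemble_cube(shot).shape)  # -> (128, 512, 4)
```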
Each slit forms its own image zone on the phosphor anode (or other receiving surface, such as will be considered below). The plural-slit system requires no modification to the streak tube or CCD (although such modification can be provided for further enhancement if desired); the system can be implemented through use of external front-end remapping optics that convert an image into plural separated line images.
The plural-slit technique takes advantage of the fact that many of the pixels ordinarily dedicated to range are unused in the conventional single-slit configuration. After careful study of such relationships, a system designer - or advanced operator - can therefore reassign pixels to provide additional spatial (or other) information instead.
By parceling out image regions into plural lines on the streak-tube photocathode, the invention can trade off range pixels against spatial (or other) pixels. In this way the invention provides an additional degree of freedom, that can be used in any of several ways to better optimize the system for a given application. One preferred way to implement the plural-image feature of the invention is simply to form corresponding plural optical slits - i.e. masks or baffles (Fig. 7) - on the photocathode. As the drawing suggests, this technique does not require changes to the streak tube itself (although optimizing refinements, as mentioned above, are encompassed within the scope of the invention), but can be a change in front-end optics only. Several other ways of forming and streaking plural images are introduced below (particularly in subsection 4).
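The range-versus-spatial pixel trade-off can be made concrete with simple bookkeeping. In the sketch below, the detector size and slit count are purely illustrative values (not specifications from this document), and `pixel_budget` is a name invented for the example; the idea is only that a fixed detector budget is repartitioned among plural streak zones.

```python
# Illustrative bookkeeping for the range-versus-spatial pixel trade-off.
# All sizes here are hypothetical assumptions.
def pixel_budget(ccd_rows, ccd_cols, n_slits):
    """Return (range_bins_per_slit, line_images, spatial_pixels_per_line)."""
    if ccd_rows % n_slits:
        raise ValueError("streak zones must divide the detector rows evenly")
    return ccd_rows // n_slits, n_slits, ccd_cols

# Single slit: all 512 rows serve as range bins for one line image.
print(pixel_budget(512, 512, 1))  # -> (512, 1, 512)
# Four slits: each line image keeps 128 range bins, but the shot now
# carries four lines of spatial (or other) information.
print(pixel_budget(512, 512, 4))  # -> (128, 4, 512)
```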
Further exploiting this novel arrangement, an area image can now be remapped by fiber-optic devices (Fig. 8), by lenslet arrays, or in other ways into plural line images - not just one as in the previous fiber-optic systems of Alfano or of Knight. Such plural line images can then be directed for input into the plural-slit streak tube.
Many other ways to make use of this newly added degree of freedom are within the scope of the present invention. Several such innovations are detailed below in subsections 4 through 7.
The invention is adaptable to most uses of the now-standard STIL technology, including airborne bathymetry, airborne detection of fish schools and various other utilizations mentioned in related patent documents of Arete Associates and its personnel.
2. EYE-SAFE OPERATION, AND RELATED INNOVATIONS

Converting STIL technology to eye-safe wavelengths entails conversion of both the transmitter and the receiver.
(a) Using electronic tube with selected detector materials - It appears to the present inventor that phosphor upconversion promises to be the simplest and most straightforward technique for moving to the eye-safe regime. It also appears to the inventor that in particular the class of phosphors called "electron-trapping infrared" (ETIR) up-conversion phosphors is the best of the candidates, based on the simplicity of its application to photocathode surfaces - although testing in STIL cameras has not been reported.
Collaboration with the Lumitek firm mentioned earlier, or an alternative source, is advisable for implementation of ETIR phosphor films. Samples of the Q-32-R phosphor film are a good starting point and have been evaluated by the inventor for sensitivity, resolution, and temporal pulse spreading. For STIL testing the ETIR film should be deposited on the fiber-optic faceplate of the streak tube, and actual lidar data taken in both the lab and the
field.
Also desirable is provision of a 1.5 micron source - as e.g. by conversion of an existing Nd:YAG laser, through addition of an optical parametric oscillator (OPO). LiteCycles, a manufacturer of YAG lasers also mentioned earlier in this document, can perform such work particularly for that firm's own products.
(b) Using optical streaking - A new variant according to the present invention is to use a MEMS Digital Micromirror Device (DMD) product that scans the beam without the use of large moving mirrors, thus allowing the system to be compact and rugged.
As will be recalled, such units are available as an off-the-shelf component from Texas Instruments, though such usage in a streak imaging lidar system has not been suggested earlier. This technique allows for longer-wavelength operation - thereby promoting the previously noted specialized applications that call for better penetration and discrimination - and also provides a substitute methodology for the eye-safe regime, for situations in which the ETIR phosphor may prove inadequate.
A MEMS-based PS-STIL sensor uses the motion of the MEMS elements to provide the optical streaking. The beams enter the MEMS sensor (Fig. 12[b]) through a series of slits, closely analogous to the arrangement when a streak tube is used, and the beams are reimaged by a lens to form a new image on the detector.
Putting a MEMS mirror (labeled "DMD" in the figure) behind the lens as shown, and bouncing the light back through the lens, has an important result: the MEMS mirror can then be used near a pupil plane, where the scanning motion of the MEMS elements is translated into motion in the image plane (at the detector).
Attempting to place the MEMS element at a focal plane and then reimaging that focal plane onto the detector would result in no streaking.
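The pupil-plane geometry can be checked with elementary optics. The sketch below is only an illustrative calculation with assumed values (the 70 mm focal length echoes the DMD-to-detector spacing quoted later, and the 12 degree tilt is a typical micromirror excursion, not a figure from this document): a mirror tilt of θ deflects the reflected beam by 2θ, which a lens of focal length f converts into an image displacement of roughly f·tan(2θ).

```python
import math

# Hedged numeric sketch of why the MEMS mirror must sit near a pupil
# plane: a mirror tilt of theta deflects the reflected beam by
# 2*theta, and a lens of focal length f converts that angular scan
# into an image-plane displacement of about f*tan(2*theta). Both
# input values below are assumptions chosen for illustration.
def image_shift_mm(focal_length_mm, mirror_tilt_deg):
    theta = math.radians(mirror_tilt_deg)
    return focal_length_mm * math.tan(2.0 * theta)

# A +/-12 degree tilt behind a 70 mm lens sweeps the image by roughly
# +/-31 mm -- ample to streak across a 25 mm detector.
print(round(image_shift_mm(70.0, 12.0), 1))  # -> 31.2
```

By the same relation, a mirror at a focal plane only redirects rays within each image point, which is why reimaging that plane produces no streak.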
The rest of the system is closely similar to the streak-tube-based systems. Software and control-electronics modifications are needed only as appropriate to accommodate the specific detailed relationships between the beam, at the detector, and given mirror-control commands.
It appears that a MEMS optically streaked camera can be very compact. In fact the invention contemplates that size limitations are determined by the IR detector array, rather than the streak-tube assembly. Thus the main design considerations are diffraction effects and mirror-response uniformity during scanning.
Nonuniformity of motion generates a blur in the range direction, and diffraction from the small mirrors can cause blurring in both the range and spatial directions. Based on the discussion here and in the literature, these aspects of DMD performance are straightforwardly calculated, and optimum configurations then found accordingly.
Available DMD scanning speeds allow for 1 GHz range sampling on a 25 mm (1 inch) detector that is spaced 70 mm from the DMD scanner. DMD units are compatible with operation at any wavelength for which a standard area imaging sensor is available (e.g. 300 nm to 5 microns).
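As a sanity check on the 1 GHz figure, the corresponding range bin follows from standard two-way lidar arithmetic; this is routine physics, not an additional claim from this document.

```python
# Two-way lidar range bin for a given sampling rate: the light must
# travel out and back, so each sample spans c / (2 * f_s) in range.
C = 299_792_458.0  # speed of light in vacuum, m/s

def range_bin_m(sample_rate_hz):
    return C / (2.0 * sample_rate_hz)

# At 1 GHz sampling, each sample corresponds to about 15 cm of range.
print(round(range_bin_m(1e9), 3))  # -> 0.15
```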
IR detectors and optical streaking, for present purposes, must be implemented with care to avoid disrupting the plural-slit functionality of the present invention - which is also accomplished optically.
(c) Longer-wavelength operation - A microelectromechanical DMD device, mentioned above, can be procured from Texas Instruments and straightforwardly integrated into a streak-camera configuration. The resulting instrument is particularly effective for special applications that exploit longer wavelengths, as pointed out previously.
Uniformity of mirror motion and diffraction effects should be measured, and for relative safety and simplicity of operation most or all of this preliminary laboratory phase can be carried out with the system operating in the visible. Of course care must still be taken to avoid eye injury.
Those skilled in the field will understand that actual diffraction is of course different when the unit is put into service in the infrared, but also that the behavior in the two wavelength regions is related in simple and very predictable ways. For infrared operation it is also advisable to determine the emissivity of the DMD unit itself, in the anticipated actual operating region of 1 to 5 microns - and its utility in that region.
For this purpose, collaboration with infrared astronomers may be found particularly helpful; for example, the present inventor has made arrangements with such scientists at Steward Observatory, a facility of the University of Arizona. A 3-to-5-micron streak camera system is a good initial implementation for practical testing, familiarizing personnel with operation, and use as a platform for possible redesign to satisfy specific requirements of the intended application.
3. SIGNAL-PROCESSING FINDINGS
As mentioned earlier, conventional realtime range processing typically uses multiple SHARC digital signal processors (DSPs), each capable of 120 Mflop/sec (million floating-point operations per second). Through porting of existing and proven algorithms to run on a single field-programmable gate array (FPGA), throughput can be enormously increased.
Such units are available from the previously mentioned firm Nallatech Limited, of Glasgow. Nallatech has demonstrated image processing at 100 Gflop/sec (billion floating-point operations per second).
More specifically, Nallatech's demonstration entailed a matched-filter convolution of a 13x13-pixel kernel on a 1024x1024 image at 1000 Hz. This increase by three orders of magnitude was accomplished in part by configuring the FPGA hardware specifically for the image-processing task.
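As an order-of-magnitude check on that workload, one can count one multiply and one add per kernel tap per output pixel; the result lands in the hundreds of Gflop/s, consistent with the hundred-Gflop class of throughput quoted (the exact figure depends on how operations are counted and on boundary handling).

```python
# Arithmetic load of a 13x13 matched-filter convolution on a 1024x1024
# image at 1000 frames per second, counting a multiply and an add per tap.
kernel_taps = 13 * 13
pixels = 1024 * 1024
frame_rate = 1000  # Hz

ops_per_second = 2 * kernel_taps * pixels * frame_rate
print(f"{ops_per_second / 1e9:.0f} Gflop/s")
```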
An FPGA-based range-processing scheme can save considerable volume and power over a real-time DSP solution. The associated reduction in computer size makes practical a short-range system that can be carried by a person - in this way resolving a previously discussed major problem of the prior art, as well as reducing weight, volume, power, and heat-loading in vehicle-mounted sensors.
In addition the very compact systems enabled by the FPGA approach can devote all possible power to the laser transmitter rather than the processing hardware. Power allocated to the transmitter directly improves system performance, thus optimally actualizing the plural-slit technique of the invention with maximum detection SNR in a single receiver.
Collaboration with Nallatech is advisable before specifying the interfaces and algorithms to be implemented. Nallatech can develop a first-effort processor using the firm's hardware known as "DIME" (DSP and Image-processing Module for Enhanced FPGAs).
This is a plug-and-play PCI board, with two FPGAs, that works in a standard PC. Timing and performance of the selected algorithm should be evaluated and compared with the performance of a standard algorithm run on DSPs.
4. SPECTRAL-RESOLUTION FORMS OF THE INVENTION
The plural-image technique allows tremendous gains in ability to simultaneously and independently resolve spectral, temporal and spatial portions of a lidar signal, all within the same excitation pulse. These forms of the invention represent extension of pixel-remapping concepts into the spectral domain.
This approach can provide a simplified and more robust mode of fluorescence imaging in detecting and measuring atmospheric particulates and constituents, waterborne particulates, and hard objects (with propagation paths in either air or water). Simultaneous measurement of all pertinent signal parameters, within a common laser pulse, removes many hardware requirements and noise terms associated with use of multiple pulses to gather all of these data.
For example the laser-pulse power, pulse shape and pulse timing are all the same for an entire data frame; therefore any artifacts that would be caused by differences in these quantities are absent. In addition, absolute calibration requirements that do remain are significantly reduced.
Timing within an area is all internally consistent. The only persisting problem of this sort must arise from jitter between the start of the laser pulse and the start of the electronic receiving system.
The invention facilitates high levels of digitization and sampling, achieved by spreading out temporal data spatially on a streak-tube screen - and so enables extremely fine resolution in critical dimensions. The system can trade off resolution between wavelength, time and space in order to optimize performance for a given application.
Digitization to twelve bits and better, with up to 1000 channels sampled simultaneously at over 100 GHz, is far beyond anything that can be done with conventional analog-to-digital conversion electronics. For instance 1000 channels of twelve-bit digitization at only 100 MHz would require approximately a hundred VME-size boards (computer-bus type, chassis/backplane units).
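The scale of the conventional-electronics comparison is easy to verify: even the modest 100 MHz case quoted above implies an aggregate data rate in the terabit-per-second range.

```python
# Aggregate throughput of the 100 MHz comparison case from the text:
# 1000 channels of twelve-bit digitization.
channels = 1000
bits_per_sample = 12
rate_per_channel = 100e6  # Hz

aggregate_samples = channels * rate_per_channel          # samples/s
aggregate_bits = aggregate_samples * bits_per_sample     # bit/s
print(f"{aggregate_samples:.0e} samples/s, {aggregate_bits / 1e12:.1f} Tbit/s")
```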
As explained above, the plural-slit technique employs two or more slits, stacked in the streak direction, to provide an additional dimension to the data set. Here the additional dimension is made to correspond to wavelength. The streaking electronics require additional controls to avoid having the streaked image from one wavelength band overlap the streaked image from another.
In one form of hardware suitable for performing plural-slit spectroscopy, a wavelength dispersion device (Fig. 12) distributes the wavelength spectrum along the photocathode plane, but just ahead of the photocathode a slit-mask device (a spatial light modulator, for example) forms a programmable set of slits for selecting desired wavelength bands from the spectrum.
The spectrum image (Fig. 13) is shown reaching the cathode, then (to the same scale) the programmable mask, and finally the resulting image on the photocathode. The latter is a set of discrete line images at respective different wavelengths or, more precisely, narrow wavebands.
Alternative hardware applies the entire dispersed image (Fig. 12[a]) to the photocathode - producing a corresponding unitary electronic image, within the evacuated tube, representing the full spectral image. Masking or electronic selection within the tube then performs the selection process.
The streak-tube electronics subsystem then streaks these images to create a set of wavelength regions on the phosphor anode that have both time and space data at each wavelength. Thus each wavelength region has its own time- and space-resolved image.
An alternative to the electronic streaking just described is optical streaking. In accordance with the present invention this can be accomplished using microelectromechanical devices - developed commercially for quite different purposes - that make the streak lidar apparatus (Fig. 12[b]) far more compact and robust than is possible with the spinning polygon mirrors historically employed.
This section introduces various applications of such a device. It also discusses in more detail the data in each of the individual wavelength regions.
A number of trade-offs between the various dimensions of resolution are available. The total number of sampled points is, at most, equal to the total number of pixels in the CCD. To first order, the product of the number nλ of discrete wavelength bands, the number ns of spatial points, and the number nt of time points must be equal to or less than the number np of pixels.
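The first-order pixel budget just stated can be expressed directly; the example numbers below (three wavebands, 341 spatial points, 1024 time points) are illustrative choices, not values from the text.

```python
def pixel_budget_ok(n_wavelength, n_spatial, n_time, ccd_pixels):
    """First-order constraint: the product of wavelength bands, spatial
    points and time points cannot exceed the CCD pixel count."""
    return n_wavelength * n_spatial * n_time <= ccd_pixels

ccd = 1024 * 1024
# Three wavebands, 341 spatial points, 1024 time points just fit:
print(pixel_budget_ok(3, 341, 1024, ccd))   # True  (3*341*1024 = 1,047,552)
# Raising the spatial count to 512 overruns the budget:
print(pixel_budget_ok(3, 512, 1024, ccd))   # False
```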
For schemes that are more complicated (compared with those shown in Figs. 12 and 13), there are a number of options involving wavelength dispersion elements and the imaging optics - to allow for nearly any desired mapping of time, space and wavelength on the streak-tube screen and CCD. Discussed below are a few simple examples showing how a plural-image device can be useful.
Although many other examples could be given, these are reasonably representative. In all these applications the streak-tube receiver is assumed to be coupled with a pulsed laser system that provides the excitation sources for phenomena to be observed (e.g. fluorescence, Raman shifts, etc.).
(a) Atmospheric constituents and contaminants - The invention passes a fan beam (Fig. 14) through a cloud of substances in the atmosphere. In the example a three-waveband system is set up, in which the fluorescence for the red and blue wavelengths is strong but the green fluorescence is weak. The horizontal axis of the three-color-band image is the spatial dimension, while the vertical axis (within each spectral band respectively) is the time dimension - which in this case also corresponds to range.
An alternative way of practicing this form of the invention is to use a plural-wavelength laser source and construct a plural-wavelength differential-absorption lidar (DIAL) system.
That type of system directly measures the return of the individual wavelengths, rather than looking at the fluorescence signature.
(Related polarization, spectral-polarization and hyperspectral forms of the invention are discussed below in subsection 6.)
(b) Hard objects (airborne terrestrial mapping) - This type of application is related to the metropolitan-region and construction-project surveys discussed elsewhere in this document, but with important added benefits from the injection of spectral discrimination into the apparatus and procedures. A down-looking airborne system developing an image of the ground can perform several tasks simultaneously.
Data that can be collected include, as an example (Fig. 15), fluorescence imaging or other DIAL-type data. The bright line in each of the wavelength regions corresponds to the ground; thus such a system provides accurate range maps of terrain under the aircraft.
If the system is only single-wavelength, a monochromatic reflectivity map of the surface can be generated through simple consideration of the brightness of the line. In the illustration, the hump at the right of the image has a different reflectivity than the rest of the image - indicative of a different substance.
Through consideration of the different wavelengths, a spectral-spatial profile of the return can be determined. The response at different frequencies in turn can be used to determine the type of material.
In essence the system acts as an active plural- or hyperspectral system. This mode has two major benefits: (1) being active, the system is not dependent on ambient lighting and therefore can operate in day or night; and (2) the system also provides an accurate range map to the surface.
This ranging allows the user to also pick out three-dimensional shapes (i.e. including heights) of objects as well as their spectral signatures. Addition of shape information makes automatic object-recognition algorithms much easier to manage, as compared with contrast-only systems.
(c) Water-based applications - Water-based measurement environments are essentially the same as the air-based systems except that the water has a very high spectral attenuation coefficient, which changes the return signature dramatically compared with air. In addition, the water has a very high backscatter coefficient compared with air; therefore the system picks up more reflected fundamental frequencies than does an air-based system.
5. PLURAL-BEAM PIXEL REALLOCATIONS
The foregoing discussions relate primarily to resolution of a single returning beam into different wavelength bands, or rearrangement and reassembly of a single returning beam into a differently formatted image. This section instead introduces expanded capabilities that arise from transmission and recovery of more than one beam at a time.
Advanced airborne lidar concepts according to this invention are applicable to a number of methodologies for detecting and classifying marine objects that are moored, floating and resting on a shallow bottom. Such objects present a hazard to shipping and recreation, as well as having some significance to police interests and the like. Each of the ideas presented uses the patented streak-tube imaging lidar (STIL) concept as a base, but represents a significant advancement. These forms of the present invention have important advantages over other lidar-related systems, including those familiarly known as ALMDS, Magic Lantern (Adaptation), and RAMICS.
As noted elsewhere in this document, the plural-image technique can be used either for generating very high resolution three-dimensional images, or for significantly increasing the areal coverage rate of an airborne lidar system. The technique can also be used to provide area images with a single laser pulse (compared to the line images normally produced with a streak-tube system).
This capability is useful for a number of applications, including a RAMICS refinement that provides the exact depth for every pixel in the image. The conventional two-sensor design instead gives depth data for only the image center.
The preferred embodiments discussed below are not wholly independent. Rather, the embodiments are to a large extent overlapping, and many of the features taken up in one or another subsection are applicable in others as well.
(a) Fore-and-aft (or "progressive") viewing - This technique in its simplest form simply makes two or more line images on the streak tube (Fig. 18). Plural fan beams - for instance two that are pointed roughly 15 degrees forward and 15 degrees backward from the aircraft, as shown - generate the corresponding plural line images.
Fore-and-aft viewing offers a notable improvement for an ALMDS-type system. In fore-and-aft viewing the system takes two or more looks at each object, through significantly different wave structure, on a single pass and with a single sensor - thereby enabling the system to remove wave-noise errors that can seriously degrade, or completely eliminate, the signal.
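The geometry of the two looks can be sketched numerically. The 15-degree fore and aft tilts are from the text; the altitude and ground speed below are hypothetical values chosen only to show that the two looks at a given point are separated by seconds, over which the surface wave structure changes substantially.

```python
import math

# 15-degree fore and aft fan beams (from the text); altitude and
# ground speed are assumed values for illustration.
altitude = 300.0    # m (assumed)
speed = 75.0        # m/s aircraft ground speed (assumed)
tilt = math.radians(15)

# Ground separation between the forward and aft beam footprints,
# and hence the revisit time for any point on the track.
ground_separation = 2 * altitude * math.tan(tilt)   # m
revisit_time = ground_separation / speed            # s

print(f"footprint separation: {ground_separation:.0f} m")
print(f"time between looks  : {revisit_time:.1f} s")
```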
Because it is extremely unlikely that the target has low SNR or highly distorted images as seen from all observing positions (Fig. 19), multiple passes over the same area are not required.
Area coverage rate is therefore made very high without adding significant cost to the sensor.
The near-surface targets (#1 and #2) suffer little change; for these, the state of the surface through which imaging is assumed does not matter, because the rays have little chance to deviate significantly. The more distorted deeper targets get significantly different looks.
Progressive viewing can also be used over dry land, for example in aid of imaging through cover - discussed elsewhere in this document - and through patchy fog and clouds.
(b) Greater operating efficiency or resolution - With a multibeam, multislit, multiimage system, many more spatial pixels can be placed in the swath of the sensor than is possible with a single line. This allows a STIL-carrying aircraft to fly faster, or obtain greater resolution, or both.
Areal coverage can be increased by collecting more lines (Fig. 17[a]) with a single laser shot. Since aircraft speed is limited by the size of the sampled area on the surface in the direction along the track, speed can be increased - proportionally with the number of lines.
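The proportionality can be stated as a simple speed limit. The footprint length and pulse rate below are hypothetical numbers, chosen only to show the scaling with the number of lines per shot.

```python
def max_ground_speed(sample_length_m, pulse_rate_hz, n_lines):
    """Along-track speed limit: each laser shot samples n_lines strips of
    the given along-track length, and consecutive shots must tile the
    ground without gaps."""
    return sample_length_m * n_lines * pulse_rate_hz

# Assumed numbers: 0.5 m along-track footprint per line, 100 Hz laser.
print(max_ground_speed(0.5, 100, 1))   # 50.0  m/s with a single line
print(max_ground_speed(0.5, 100, 4))   # 200.0 m/s with four lines per shot
```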
Alternatively, the additional spatial pixels can be used to provide higher-spatial-resolution images (Fig. 17[b]). This is accomplished by inserting the additional pixels into the same area as the original line image.
(c) Staring systems - The plural-image innovation may instead be used to provide three beams, or many, for surveying over ground. One way to exploit this advance is in stationary area imaging. Conventional streak-tube surveying requires a pushbroom system depending on aircraft motion to sample the dimension along the track - but multiimage viewing can provide the desirable "staring" type of system mentioned earlier.
In the earlier "background" section of this document it was shown that STIL systems have powerful sampling and SNR advantages (Fig. 16) over other lidar systems. It was shown also, however, that STIL systems have been restrictively limited in the amount of data collected in each laser pulse, and hampered by the requirement for continuous translation of the instrument in a "pushbroom" mode, and also have been inadequately exploited with regard to several important commercial and industrial applications.
Thus multiimage viewing, instead of being used to provide viewing redundancy through fore-and-aft viewing as described above, can instead be used to multiply the amount of definite information acquired in each pulse. This advantage in turn can be exploited to help avoid (Fig. 17[c]) the requirement for mechanical movement of the detector, or a scanner - and also to mitigate the requirement for remapping.
(d) Lenslet arrays for remapping - For performance of area imaging and other plural-image embodiments of the invention, typically a streak-tube module need not itself be changed. Representatively the only changes are in the front-end optics, called the "remapping optics", and in the software that reassembles and interprets the CCD output.
As explained earlier, remapping optics convert an area image into a series of line images that can be fed into the streak tube.
Fig. 9 shows a module for remapping an area image into lines using two lenslet arrays.
The first lenslet array is in the focal plane of the receiver (i.e., this is the location of the area image that is to be remapped - the size of the lenslets determines the pixel size of the receiver). These lenslets relay the pupil of the receiver onto the second set of lenslets.
It is the second set of lenslets that performs the actual remapping task, through the use of beam-steering elements. The steering elements, shown as prisms, can redirect the beam so that all of the light falls onto a selected one - or selected ones - of the slits.
The second lenslet array and its beam-steering elements are advantageously fabricated in one piece, i.e. as a single off-axis lens element. These lenslets can be made by photolithography, which is completely computer-controlled.
Such arrays are very straightforwardly manufacturable. They do have some restrictions requiring some design work, the most important being a maximum achievable ray deviation. This may dictate some compromises in a particular system design (e.g., reduced apertures of input optics, or reduced numbers of spatial pixels that can be accommodated). Very close collaboration with the optics fabricator is advisable, to define the lenslet design trade-offs in such a way as to optimize system performance.
The remapping process is otherwise completely under control of the designer. A number of system trade-offs should be strategized before settling on a final design.
The most significant of these is deciding between the density of spatial pixels and that of range pixels. In principle, however, several different interchangeable sets of remapping optics - each with its own associated software - can be prepared for a single streak tube, to accommodate various data-collection environments.
Once in existence, these optics/software sets can be interchanged as readily as the lenses on an ordinary SLR, video or cinema camera.
Throughput of a lenslet array may be a concern, due to fabrication details that limit the curvature of the lenses. The impact of these limitations should be examined in the detailed design of the optics.
Discussions with lenslet array vendors (such as Wavefront Sciences, Inc.) should verify in advance that reasonable throughput for the intended application is possible with existing technology, and that already-occurring improvements in the state of the art are likely to eliminate any serious limitations. Also, the two lenslet arrays have to be well aligned to each other in order to work.
An alignment mechanism (either alignment fiducials or actual alignment structures, such as a post) can be built onto the array as a part of the fabrication process. Such fixtures or other provisions further distance practice of the invention from potential alignment problems. Although, as mentioned earlier, a typical cost for initial lenslet design and fabrication is $20,000 per array, this amount is mostly nonrecurring expense for setup processes; subsequent copies of the lenslet array can be procured for significantly less. Most embodiments of the invention require two lenslet arrays; therefore the initial procurement is on the order of $40,000. In view of the unique requirements of this design (lenslet arrays in conjunction with remapping of thousands of pixels, etc.), two separate fabrication runs should be planned, to settle details and optimize the performance. Because arrays can be made of various materials, operations at longer wavelengths are feasible. The Fig. 9 optics must provide good throughput (so that the system SNR is not degraded) and must demonstrate the ability to survive in the vibratory environment of a helicopter. Several different sets of optics should be prepared, so that the operator can switch at will to any of the methods described - area imaging, high-resolution imaging, and progressive viewing - integrated with the STIL receiver for easy interchange. This should be followed by imaging characterization of each lenslet array in conjunction with the streak tube. The STIL software must accommodate the remapping: pixel positions in the processing must account for the plural image zones on the phosphor screen (one zone per slit, as shown in the drawings). Single-laser-pulse area images are thereby generated; conventional systems, however, use fan beams; therefore substitution of new transmitter optics providing a suited field of view is required for three-dimensional imaging. The range-processing software that generates the three-dimensional image must be modified to work with the plural-slit configuration.
As noted earlier, remapping optics - whether as lenslet arrays (Fig. 9) or as fiber-optic modules (next subsection) - are not only expensive and somewhat cumbersome, but even when those drawbacks are endured are also able to collect only a very limited amount of image information. With the present invention, after an area image is remapped to a line image, a three-dimensional region can be imaged - again, without requiring sensor motion or a scanner.
This is an ideal sensor for a range-gated system in which the exact range of the target can be determined, as well as the spatial position, for every pixel. This technique is also useful for a large number of applications in which true three-dimensional imaging is desired.
(e) Fiber optics for remapping - Although fiber-optic remappers as such are not at all new in STIL technology, the present invention makes fiber-optic remapping far more interesting than ever before. By virtue of the greater number of slit lengths that can be productively used with multislit imaging, as set forth in the preceding subsections, a great deal more can now be accomplished through remapping.
A large fiber redistribution network may prove difficult to build without first preparing an automated setup of some kind. A method for providing plural line images by stacking slices of large fiber tapers, however, has the potential to provide some of the desired capability without significant fixture fabrication.
This has been verified in discussions with the previously noted large manufacturer INCOM.
Another approach is modular, but requires custom fixturing and may require addressing some focal-plane gaps. This approach has been confirmed in talks with Polymicro Technologies, also mentioned earlier.
In any event it is advisable to work with manufacturers to determine the best way to fabricate the devices. It is also best to have smaller, less expensive test pieces built for evaluation before ordering any final pieces.
While the earlier work of Alfano and Knight was stringently hampered as to overall number of spatial pixels, the present invention very greatly mitigates that obstacle. Given a CCD camera with 1024x1024 pixels, a user can split up the image in a number of ways, with only the constraint that the product of spatial pixels and range pixels be equal to the total number of pixels (i.e., 1024x1024, or about 1 million pixels).
As a practical matter, unused "buffer" pixels may be desired between the slit image zones; yet in most cases this consideration only slightly reduces the total number of spatial or range pixels available. Such buffer pixels ordinarily will occupy less than ten percent of the CCD.
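The size of the buffer overhead is easy to bound. The zone count and buffer width below are illustrative assumptions; with plausible values the loss stays well under the ten-percent figure mentioned above.

```python
def buffer_fraction(ccd_rows, n_slit_zones, buffer_rows):
    """Fraction of CCD rows given up to unused guard rows between
    slit-image zones (n_zones - 1 gaps between n_zones zones)."""
    return (n_slit_zones - 1) * buffer_rows / ccd_rows

# Assumed example: four slit zones separated by 16-row buffers
# on a 1024-row CCD.
print(f"{buffer_fraction(1024, 4, 16):.1%}")   # 4.7%
```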
Table 2 shows some examples of range vs. spatial pixel trade-offs for various CCD sizes. For larger CCDs, as the table suggests, electron optics of the streak tube may dominate imaging performance.
CCD size    |        spatial-image size for indicated number of range bins
            |   64 bins   |  128 bins   |  256 bins   |  512 bins
1024x1024   |   128x128   |   90x90     |   64x64     |   45x45
2048x2048   |   256x256   |  181x181    |  128x128    |   90x90
4096x4096   |   512x512   |  362x362    |  256x256    |  181x181
Table 2. Trade-off between the number of spatial and range pixels for given CCD size.
In addition to square image areas, a designer has the freedom to arrange any image area that has the same number of pixels.
Given a 1024x1024 CCD and 256 desired range pixels, for example, the designer can choose a 64x64 square, a 4x1024-pixel rectangle (Fig. 10), a 4096-pixel-by-one-line image, or virtually any other desired allocation of the available pixels.
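The available allocations for that example can be enumerated directly: reserving 256 range bins on a 1024x1024 CCD leaves 4096 spatial pixels, which can be factored into any width-by-height arrangement.

```python
def spatial_shapes(ccd_pixels, range_bins):
    """All width x height allocations of the spatial pixels that remain
    after reserving `range_bins` range samples per spatial pixel."""
    spatial = ccd_pixels // range_bins
    shapes = [(w, spatial // w) for w in range(1, spatial + 1)
              if spatial % w == 0]
    return spatial, shapes

spatial, shapes = spatial_shapes(1024 * 1024, 256)
print(spatial)              # 4096 spatial pixels remain
print((64, 64) in shapes)   # True - the 64x64 square
print((4, 1024) in shapes)  # True - the 4x1024 rectangle of Fig. 10
print((4096, 1) in shapes)  # True - the one-line image
```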
The very great proportion of unused range pixels that is characteristic of many conventional STIL images (top and bottom in Fig. 10[a]) can be eliminated (Fig. 10[b]) through use of the invention. The PS-STIL remapping-optics concept allows significant system-optimization options that are not readily available to designers using all-electronic sensors: with those devices, the designer typically cannot change the number and geometric arrangement of detector pixels.
(f) Terrestrial mapping - The potential importance of STIL application to municipal and industrial surveys has been discussed earlier. The present invention offers a key to unlocking this potential.
Cost-analysis studies suggest that the invention can cover roughly five square miles per hour at a total cost of roughly $5,000 per square mile, which is considered to be very cost-competitive. Thus in principle the Los Angeles area alone could generate income on the order of $10 million each year - and the effort should require only about 500 hours.
By the nature of the apparatus of the invention, this work can be done at all hours. Hence if desired the entire project, working three shifts around the clock, can be completed in only twenty days, leaving ample time for other assignments to more fully utilize a single equipment set.
Alternatively, working a single shift of weekdays only, the Los Angeles mapping could be completed in sixty working days or twelve weeks - an annual duty cycle of only twenty-four percent, still allowing another three equivalent such efforts each year, and also still assuming only a single equipment set. Other metropolitan areas have similar requirements, which in the aggregate thus can provide a sustained business in airborne surveying.
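The arithmetic behind these survey figures can be checked directly. The coverage rate and price are from the text; the total survey area (about 2,500 square miles) is an assumption chosen for consistency with the 500-hour and $10-million-order figures quoted.

```python
# Survey economics: coverage rate and price from the text;
# the metropolitan area is an assumed value.
coverage_rate = 5.0     # square miles per hour
price = 5000.0          # dollars per square mile
area = 2500.0           # square miles (assumed)

hours = area / coverage_rate            # ~500 flight hours
revenue = area * price                  # order of $10 million
days_three_shifts = hours / 24          # round-the-clock operation
days_one_shift = hours / 8              # single 8-hour shift

print(f"{hours:.0f} h, ${revenue / 1e6:.1f}M")
print(f"{days_three_shifts:.1f} days (3 shifts), "
      f"{days_one_shift:.1f} working days (1 shift)")
```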
The importance of using an eye-safe system bears repetition here. A significant business advantage is reduced risk and liability for eye damage, actual or claimed: physically speaking, there is essentially no possibility of such injury from an eye-safe system - though of course claims can always be made.
Another benefit is that the system can be operated at greater power. This could allow for higher-altitude flight, which would result in greater area coverage for each hour of flight.
6. POLARIZATION AND SPECTRAL-POLARIZATION EMBODIMENTS
A single lidar sensor that can provide several different sets of information (e.g., contrast/reflectivity images, 3-D images, and polarimetry images) is described. Because all of these different data sets are collected simultaneously, and because they are all collected from the same sensor, there are no image-registration issues (i.e., difficulty in aligning data from different sensors in time or space).
Also, because this is an active system, it is not dependent on ambient lighting and can provide operation during either day or night. Finally, operation is sound at any wavelength; therefore, as previously described the system is adaptable to the eye-safe wavelength regime for deliverable systems.
Polarization forms of the invention are useful in police work for interdiction of various clandestine activities - as in fugitive pursuit and detection of drug-running or smuggling - and also for general-purpose private-sector surveying under natural but adverse viewing conditions. Such applications are discussed next. In addition to law-enforcement applications, the technology is also useful for commercial water-based applications such as bathymetry and oil-slick detection.
(a) Imaging through diffuse cover - In the case of single-shot STIL images through trees (Fig. 20), with the system of the invention the foliage is clearly defined but the ground is also visible. Thus the 3-D STIL polarimeter can detect objects under such cover.
One of the benefits of a monostatic lidar system is that the system transmits and receives through the same opening in a screening/covering medium. Therefore it can register an object under a forest canopy, whereas a simple imaging system fails - at least intermittently - in such a situation, because the image is fragmented by the screening media.
(b) Covered hard objects - Range and polarimetry data can provide a clear distinction between an object and the ground, the first by height and the second by polarization effects. While complete reconstruction of the object may be impeded by blocking effects of the cover, general size and overall polarization signature can still be estimated.
Any artificial covering such as a tarpaulin or netting potentially has polarization signatures significantly different from those of the background, due to the significantly different materials. A 3-D imaging polarimeter has the potential to provide an excellent countermeasure to most forms of covering - and especially netting - in that it can both detect that covering and separate its signature from that of an object beneath.
As an example, consider detection of a domed hard object, well covered but intersected (Fig. 21[a]) by a STIL fan beam. The object is assumed to be only slightly different in reflectivity, but has a significant polarization effect on the return light.
Here the horizontal axis of the dual-slit polarimetry image is the spatial dimension (Fig. 21[b]), while the vertical axis - within each polarization state analyzer (PSA) band - is the time dimension, which also corresponds to range. Three maps (Fig. 21[c]) are produced by the sensor, illustrating how difficult it would be for a criminal to effectively hide the object from this system.
Although the cover is very effective in concealing the object in a simple reflectivity image (similar to what would appear in an ordinary photo), the different polarization returns clearly reveal an object of differing material - and the conspicuous hump caused by the object in the range direction adds emphasis to the polarization data.
Reconstructed images from multiple shots illustrate the advantage of acquiring both polarization and range data in addition to reflectivity. The PS-STIL 3-D polarimeter, described in the next section, captures all these data in a single laser pulse.
Analysis of the data is performed using Mueller matrix tech niques documented earlier. These analyses are performed as de- scribed in the technical literature - but they all come down to creating at least two independent measurements (i. e., two equa- tions in two unknowns).
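The "two equations in two unknowns" idea can be illustrated numerically: with intensities recorded behind the two orthogonal analyzer settings, the reflectivity and degree-of-polarization maps follow directly. A minimal sketch (function and variable names are illustrative, not from this disclosure):

```python
import numpy as np

def polarimetry_maps(i_par, i_perp):
    """Form the two standard maps from two orthogonal analyzer images.

    i_par, i_perp: 2-D intensity arrays behind the two orthogonal
    polarization-state-analyzer settings - the "two equations in
    two unknowns" of the text.
    """
    total = i_par + i_perp  # reflectivity (contrast) map
    with np.errstate(divide="ignore", invalid="ignore"):
        # degree-of-polarization map; zero where there is no signal
        dop = np.where(total > 0, (i_par - i_perp) / total, 0.0)
    return total, dop

# toy example: unpolarized background pixel, strongly polarized object pixel
i_par = np.array([[0.5, 0.9]])
i_perp = np.array([[0.5, 0.1]])
contrast, dop = polarimetry_maps(i_par, i_perp)
# contrast is 1.0 at both pixels; dop is 0.0 for the background
# pixel and 0.8 for the polarized object pixel
```

The object is invisible in the contrast map yet obvious in the DP map, which is exactly the covered-object scenario described above.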
(c) PS-STIL polarimeter design - As can be seen from a preferred implementation of the instrument for the example given above (Figs. 22 and 23), the simplicity of the device (i.e., few optics and no moving parts) is one of the most appealing aspects of the PS-STIL approach. As the receiver polarization-state analyzer splits the incoming beam into two separate polarization states, the polarization-state generator in the transmitter has to produce only one state.
Because the two polarization states in the receiver are typically produced by a polarizer at two orthogonal settings (e.g., the two polarizer settings, -72.385° and -162.385°, are 90° apart), the rotating polarizer can be replaced with a fixed Wollaston prism. This prism, which is a commonly available polarization component, splits the two polarization states by adding an angular displacement to one polarization as compared to the other.
The angular displacement is translated into a spatial displacement by the imaging lens, which forms two separate images on the streak-tube photocathode that go into two separate slits.
Other polarization elements (e.g., beam-splitters, Brewster windows, etc.) can achieve a similar polarization split, but the Wollaston is preferred for this example.
This instrument uses a fan-beam illumination pattern. It produces three maps of the area under investigation: (1) a contrast (or reflectivity) map at the wavelength of the sensor; (2) a degree of polarization (DP) map that shows how the target and surroundings affected the polarization of the transmitted light; and (3) a range map that has the 3-D shape of a detected object and its surroundings.
Table 3. Features and benefits of the 3-D imaging polarimetry system.
Feature: sensor provides data from three different physical mechanisms for interacting with targets and clutter.
Benefits: significantly improves detection and clutter reduction; segmentation, the method for determining which pixels are object and which are background, is significantly enhanced by looking for correlation in all three images; algorithm development, analysis, and visualization are simpler than for hyperspectral systems.
Feature: range data.
Benefits: provides 3-D shape of target; allows separation of screening/camouflage from target; allows imaging through forest canopy.
Feature: contrast data.
Benefit: provides standard imagery, easy to evaluate and process.
Feature: polarimetry data.
Benefit: difficult to provide camouflage, as paint typically has distinct polarization characteristics.
Feature: one receiver and one transmitter.
Benefits: sensor is compact; sensor is less complex; sensor is more reliable.
Feature: no moving parts.
Benefit: sensor is more reliable.
Feature: simultaneous collection of all three data sets in one receiver.
Benefit: spatial and temporal registration of the data sets is not an issue.
These three maps interact with the object and the surroundings in fundamentally different ways; therefore the clutter signatures are different (and often uncorrelated). In addition, it is difficult to conceal an object from all three of these detection methods at once; consequently countermeasures are difficult. Table 3 summarizes benefits of the three-dimensional-imaging polarimetry system.
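The clutter-suppression argument can be sketched in a few lines: because the maps respond to different physical mechanisms, requiring agreement between independent cues rejects clutter that triggers only one of them. A hedged illustration (the threshold values and names are arbitrary, not prescribed by this disclosure):

```python
import numpy as np

def segment_object(dop, height, dop_thresh=0.3, height_thresh=0.5):
    """Flag pixels as 'object' only where independent cues agree.

    dop: degree-of-polarization map; height: range-derived height map.
    Clutter rarely triggers both cues at once, so requiring agreement
    between the polarization and range cues suppresses false alarms.
    A contrast-map cue could be AND-ed in the same way.
    """
    polar_cue = dop > dop_thresh        # material differs from background
    range_cue = height > height_thresh  # hump above the surrounding ground
    return polar_cue & range_cue

# toy 1x3 scene: background, polarized flat clutter, covered raised object
dop = np.array([[0.05, 0.6, 0.6]])
height = np.array([[0.0, 0.1, 1.2]])
mask = segment_object(dop, height)
# only the third pixel - where both cues are present - is flagged
```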
(d) Spectral polarimetry - A plural-spectral version of the instrument is discussed above. Here a hyperspectral version is presented, combined with a polarimeter to provide single-laser-pulse spectral-polarimetry data.
The sort of data collected from the hyperspectral form of the instrument appears in Fig. 24. The receiver is identical in construction to the plural-spectral form except that the dispersive element is rotated 90 degrees, so that wavelengths are spread along, rather than perpendicular to, the slit.
In this case the transmitter emits a pencil beam, rather than a fan beam. A plural-slit hyperspectral version of the instrument can be implemented by transmitting plural pencil beams at the same or different wavelengths (one per slit).
The spectral-polarimetry (SP) form of the instrument is a combination of the hyperspectral and polarization forms described above. The laser transmitter is identical to that used for polarimetry, and the receiver has both the polarization optics and the wavelength dispersion optics in front of the lens.
This form of the instrument has the potential to allow truly unique data capture for a variety of applications. For atmospheric constituents the combination of fluorescence spectral strength with polarization data provides potentially great discrimination capability against a variety of chemical and biological species.
Biological materials are known to have variations in fluorescence lifetime ranging from 100 psec to 10 nsec. Temporal resolution of the elastic backscatter and the fluorescence from nanosecond laser pulses provides the capability of discriminating objects by measuring their fluorescence lifetimes.
In the hyperspectral mode, operator-controlled timing parameters for the streak tube determine the resolution of the range information and the distance in space over which the range information is collected. When the streak tube sweep extends less than a meter in front of and beyond the target, the streak tube time-resolves the return laser pulse and the fluorescence return.
In such circumstances, the lifetime of the induced fluorescence can be measured. Fluorescence lifetime measurements are an additional discriminant that may be beneficial for identification of biological agents.
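As a hedged illustration of how such a lifetime might be extracted from the time-resolved return - assuming a simple single-exponential decay model and a log-linear least-squares fit, neither of which is prescribed by this disclosure:

```python
import numpy as np

def fluorescence_lifetime(t_ns, signal):
    """Estimate a single-exponential fluorescence lifetime (in ns).

    Fits log(signal) = log(A) - t/tau by least squares over the
    decaying tail that the streak tube has time-resolved.
    """
    mask = signal > 0                    # keep only positive samples
    slope, _ = np.polyfit(t_ns[mask], np.log(signal[mask]), 1)
    return -1.0 / slope                  # tau in nanoseconds

# synthetic decay tail: tau = 2 ns, sampled every 0.1 ns
t = np.arange(0.0, 10.0, 0.1)
s = np.exp(-t / 2.0)
tau = fluorescence_lifetime(t, s)
# for this noiseless tail the fit recovers tau of 2.0 ns
```

Real returns would first need the elastic-backscatter peak excluded; the 100 psec to 10 nsec lifetime range cited above falls comfortably within the streak tube's temporal resolution.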
With such a measurement arrangement, different views of data (Fig. 28) collected from a single laser pulse contain the elastic backscatter and induced fluorescence return as a function of wavelength and time. In the illustrated example, the tall, narrow peak is the time-resolved elastic backscatter from the 9 ns excitation pulse at 532 nm.
The broadly sloping region is the time-resolved induced fluorescence. The induced fluorescence is delayed in time with respect to the excitation pulse and lasts more than twice as long as the excitation pulse.
Yet another extremely powerful hybrid instrument (Fig. 16) enables simultaneous acquisition of polarimetric data together with fluorescence information. Here polarization data are collected in two slits, and all other wavelengths pass through another pair of slits.
Such a system detects whether fluorescence occurred, but provides no further spectral-discrimination capability. Other configurations can provide significantly more spectral data.
For example, the fan-beam transmitter can be replaced with a pencil beam, and the spectrum analyzed as for the hyperspectral configuration - analogously to the fluorescence-lifetimes system discussed just above. Multiple slits can then be used for different-wavelength transmitter beams, or for multiple polarization states.
PCT/US01/13489
7. A GIGAHERTZ WAVEFRONT SENSOR
This system, although based upon an array of subapertures and resulting indicator-beam spot positions analogous to those used in the conventional Hartmann-Shack sensor discussed earlier, is a major advancement over that Hartmann-Shack system. In order to take advantage of the speed of the streak tube, the light from each subaperture has to be split up in such a way that it is possible both to measure the spot position and also to direct the light onto at least one slit - thereby enabling streaking, so that the indicator-beam positions can be time resolved.
(a) Principles of the system - Precisely this capability is achieved by adding a second set of lenslets (Fig. 27) that serves as an array of optical quad cells. Each of these cells collects the light from a corresponding respective group of four lenslets in the first array - and then redistributes the light onto a slit.
This redistribution is invoked by proper selection of the lenslet focal length and - in effect - the strength of an associated prism that is used to "steer" the light onto the slit. In practice, however, there is no separate prism as such; rather, the lenslet and prism are an integrated single-element optic (i.e., an off-axis lens element).
The light reaching the slit is then streaked, to provide time resolution of the linear array of indicator beams mentioned above - and thus of the three-dimensional tilts of all the subapertures whose indicators are directed to the slit. It is this stage, in particular, which achieves the previously noted improvement in time resolution by five orders of magnitude. This system, using a single slit, is believed to be novel and is within the scope of certain of the appended claims.
(b) Plural-slit enhancement - A single-slit embodiment, however, is not the end of the matter - for the number of individual wavefront elements that can be sensed and time resolved in this way is somewhat limited. To obtain a truly excellent result, the preferred system also incorporates the plural-slit feature described above and thereby at least doubles the spatial resolution across the wavefront to be sensed.
Ideally multiple slits are used, and a corresponding multiple is thereby imposed upon the number of wavefront regions whose orientations can be independently measured. As a result this gigahertz wavefront sensor (GHz WFS) is able to measure both the intensity and the phase of laser light with extraordinarily high resolution in both time and space.
Furthermore this instrument collects all the information in a single laser pulse (i. e., it does not require multiple pulses to assemble information). In this way the system eliminates any pulse-to-pulse variations that would compromise the data.
With this instrument, designers, modelers, and users of high-power short-pulse lasers now have access to data that can be directly compared to information from their design models and simulations. This allows much greater confidence in the design, and allows the models and designs to be experimentally validated.
In particular, transient "hot spot" events (intensity peaks in single pulses which reach the damage threshold of the optics) - which are often blamed for the degradation of laser performance - can now be fully captured. Conventional laser instrumentation would be able to do no more than determine that such an event has occurred. For instance, an oscilloscope trace might show a narrow bright peak, or a laser-characterization instrument might show a hot spot in the intensity image - but they fail to provide phase and intensity information at high spatial and temporal resolution, as the present invention does, and therefore cannot enable a full causal assessment.
Laser manufacturers can use these several types of new information to provide far better control over their design and fabrication processes - and consequently can greatly improve laser reliability and performance by virtue of the enormously superior diagnostic capabilities that this sensor provides. All applications using high-power short-pulse lasers will in turn benefit from better lasers resulting from use of the GHz WFS.
(c) Pixel allocation - The number of subapertures that can be sampled (and thus the spatial resolution) is limited by several parameters: the dimensions of the CCD camera on the back of the streak tube, the pulse length, and the desired time resolution of the samples. If the GHz WFS were implemented with conventional streak-tube imaging, as noted above the user could only get the number of pixels that would fit across one slit.
Plural-slit streak-tube imaging allows a great deal more flexibility in the manner in which pixels are allocated in space and time. An example of the way in which pixels can be assigned to slits will now be presented; it is very interesting in that it serves as an example for many plural-slit systems other than the WFS. In particular, the discussion below also represents a methodology for determining system limits and organizing the process of pixel-to-slit allocation for essentially all the PS-STIL embodiments introduced in this document.
Consider a camera with 1024x1024 pixels on the back of the streak tube, together with a laser having a 5 ns FWHM average pulse length. The spatial and temporal resolution limits of the system are now clear - they are given by 4 PS PT ≤ PCCD, where PS is the number of spatial subapertures (the factor of 4 is due to the fact that each spatial subaperture requires the four quad-cell measurements), PT is the number of samples in time per pulse, and PCCD is the total number of pixels in the CCD (i.e., roughly 1,000,000).
PT is the total sample length divided by the sample period.
Most of the energy is ordinarily in a time corresponding to 6 FWHM (full width at half maximum) of the pulse; therefore the equation becomes 4 PS (6 FWHM / TS) ≤ PCCD, where TS is the amount of time per sample. Putting in the numbers gives PS ≤ 8333 TS, where TS is in nanoseconds. Thus, if 1-nanosecond samples are desired, the GHz WFS can have 8333 subapertures (i.e., approximately a 90x90 spatial sampling of the laser beam).
Since there are 1024 pixels across the camera, the optics can be set up to provide 8 slits that are 1024 pixels wide. A conventional streak-tube imaging system, which can only use one slit, would provide only 1024 spatial samples - forcing all of the other pixels to be in the time dimension. Plural slits thus provide much more flexibility and capability.
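The pixel budget worked through above can be captured in a short sketch (the constraint 4 PS PT ≤ PCCD and the 6x FWHM sampling window come from the text; the function name is an assumption):

```python
def max_subapertures(ccd_pixels, pulse_fwhm_ns, sample_ns):
    """Upper bound on the number of wavefront-sensor subapertures PS.

    Constraint from the text: 4 * PS * PT <= PCCD, where the time
    samples PT cover a window of 6 * FWHM, i.e. PT = 6 * FWHM / TS.
    """
    p_t = 6.0 * pulse_fwhm_ns / sample_ns  # samples in time per pulse
    return int(ccd_pixels // (4 * p_t))    # largest integer PS allowed

# the example in the text: 1024x1024 CCD, 5 ns FWHM pulse, 1 ns samples
ps = max_subapertures(1024 * 1024, 5.0, 1.0)
# ps is 8738 with the exact 1024*1024 pixel count; the text rounds
# PCCD to roughly 1,000,000, which gives the quoted 8333 subapertures
# (about a 90x90 spatial sampling of the beam)
```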
This description is based upon theoretical maximum resolution. In actual usage, the numerical result may be reduced by ten to twenty percent, to provide some visual buffer zones between pixels.
Performance of all forms of the present invention is unusually and extremely sensitive to any individual component deficiencies. For this reason all components should be carefully tested individually, to determine whether they meet their respective specifications, before attempting to assemble and operate the system.
Also highly advisable is initial system testing in a laboratory setting. Radiometry, resolution, and noise tests and analyses should establish whether the system meets specifications before essaying field operation. Field testing, when it is appropriate, should be performed both in aircraft and in a ground-based vehicle.
For an effective assessment it is advisable to independently model the expected system performance (radiometry, resolution, and noise). Comparison of a model with measurements taken in the lab and in the field, as prescribed above, is very informative. Modifications to the model, and validation against the data, may then be necessary. It is also essential to modify any existing STIL data-analysis software straightforwardly to work with the PS-STIL or GHz WFS data.
It will be understood that the foregoing disclosure is intended to be merely exemplary, and not to limit the scope of the invention - which is to be determined by reference to the appended claims.
Claims
1. A streak lidar imaging system for making measurements of a medium with any objects therein; said system comprising: a light source for emitting into such medium a beam in a substantially eye-safe wavelength range; an imaging device for receiving light reflected from such medium and forming an image of the reflected light; an upconverter for generating light at or near the visible wavelength region in response to the reflected light in a substantially eye-safe wavelength range; and a device for displacing the image along a streak direction.
2. The system of claim 1, wherein: the upconverter is positioned in the system after the displacing device.
3. The system of claim 1, wherein: the upconverter comprises ETIR material.
4. The system of claim 1, wherein: the displacing device is positioned in the system after the upconverter.
5. The system of claim 1, wherein: the light source emits said beam in a wavelength range at substantially 1 microns.
6. A streak lidar imaging system for measurements of a medium with any objects therein; said system comprising: a light source for emitting at least one beam into such medium; an imaging device for receiving light reflected from such medium and forming plural images, arrayed along a streak direction, of the reflected light; wherein the imaging device comprises plural slits for selecting particular bands of the plural images respectively; and a device for displacing all the plural images along the streak direction.
7. The system of claim 6, wherein: the imaging device comprises an optical device; the plural images are optical images; and the displacing device comprises a module for displacing the plural optical images.
8. The system of claim 7, wherein: the displacing device comprises an electromechanical device.
9. The system of claim 8, wherein: the electromechanical device comprises at least one scanning microelectromechanical mirror.
10. The system of claim 8, wherein: the electromechanical device comprises an array of scanning microelectromechanical mirrors.
11. The system of claim 7, wherein: the displacing device comprises an electrooptical device.
12. The system of claim 6, wherein: the imaging device comprises an electronic device; the plural images are electronic images; and the displacing device comprises a module for displacing the plural electronic images.
13. The system of claim 12, wherein: the displacing device comprises electronic deflection plates.
14. The system of claim 12, wherein the imaging device comprises: an optical front end that forms a single optical image of the reflected light; and an electronic stage receiving the single optical image and forming therefrom the plural electronic images.
15. The system of claim 12, wherein the imaging device comprises: an optical front end that forms plural optical images of the reflected light; and an electronic stage receiving the plural optical images and forming therefrom the plural electronic images.
16. The system of claim 6: wherein the displacing device forms from each of the plural images a respective streak image; whereby the displacing device forms, from the plural images considered in the aggregate, a corresponding array of plural streak images; and further comprising a device for receiving the array of plural streak images and in response forming a corresponding composite signal.
17. The system of claim 6, wherein: the plural slits operate on the images in optical form.
18. The system of claim 6, wherein: the plural slits operate on the images in electronic form.
19. The system of claim 6, wherein: the imaging device comprises a module for forming substantially a continuum of images of the reflected beam; and each of the plural slits selects a particular image band from the continuum.
20. The system of claim 6, wherein: the light source comprises an optical module for emitting at least one thin, fan-shaped beam into such medium; the imaging device comprises an optical module for receiving at least one thin, fan-shaped beam reflected from such medium; and at least one of the optical modules comprises an optical unit for orienting a thin dimension of the reflected beam along the streak direction.
21. The system of claim 6, wherein: the imaging device comprises an optical module for forming the plural images as images of the at least one reflected beam at discrete optical wavelengths, respectively.
22. The system of claim 6, wherein: the imaging device comprises an optical module for forming the plural images as images of the at least one reflected beam in different polarization states, respectively.
23. The system of claim 6, wherein: the imaging device comprises an optical device for forming the plural images from different angular sectors, respectively, of the at least one reflected beam.
24. The system of claim 23, wherein: the imaging device further comprises an optical device for rearranging image elements in each angular sector to form a single line image for that sector.
25. The system of claim 24, wherein: the optical device comprises remapping optics.
26. The system of claim 25, wherein: the remapping optics comprise a fiber-optic or laminar-optic module.
27. The system of claim 25, wherein: the remapping optics comprise a lenslet array.
28. The system of claim 6, wherein: the light source comprises means for emitting plural beams into such medium; and the imaging device comprises: means for receiving plural beams of the reflected light from such medium, and an optical device for forming the plural images from, respectively, the plural reflected beams.
29. The system of claim 6, wherein: the light source comprises an emitter for emitting light in a wavelength region at or near 1 microns.
30. The system of claim 29, wherein: the imaging device comprises an upconverter for generating light at or near the visible wavelength region in response to the light at or near 1 microns.
31. The system of claim 30, wherein: the upconverter comprises STIR material.
32. The system of claim 6, wherein: the light source comprises means for emitting the at least one beam into such medium that is selected from the group consisting of: a generally clear fluid above a generally hard surface; a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions; and a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
33. A lidar imaging system for optical measurements of a medium with any objects therein; said system comprising: a light source for emitting at least one light pulse into such medium; and means for receiving the at least one light pulse reflected from such medium and for forming from each reflected pulse a set of plural substantially simultaneous output images, each image representing reflected energy in two dimensions.
34. The system of claim 33: wherein the light source comprises means for emitting a series of light pulses into such medium, each of the pulses in the series generating a corresponding such image set; whereby the receiving means generate a sequence of plural corresponding image sets; and further comprising means for storing the sequence of corresponding image sets.
35. The system of claim 33, wherein: the receiving means comprise means for allocating image elements, in each image of the set, as among (1) azimuth, (2) range or time, and (3) an extrinsic measurement dimension.
36. The system of claim 35, wherein: the extrinsic measurement dimension is wavelength.
37. The system of claim 35, wherein: the extrinsic measurement dimension is polarization state.
38. The system of claim 35, wherein: the extrinsic measurement dimension is a spatial selection.
39. The system of claim 33, wherein: the receiving means comprise means for causing the images in the set to be substantially contiguous.
40. The system of claim 33, wherein: the receiving means comprise means for receiving the reflected light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1.
41. The system of claim 40, wherein: the light source comprises means for emitting the at least one light pulse as a beam with a cross-section that has an aspect ratio on the order of 1:1.
42. The system of claim 33, wherein: the receiving means comprise means for forming the images in such a way that the two dimensions are range/time and output-image azimuth, for a particular extrinsic dimension that corresponds to each output image respectively.
43. The system of claim 33, wherein: the light source comprises means for emitting the at least one beam into such medium that is a generally clear fluid above a generally hard surface.
44. The system of claim 33, wherein: the light source comprises means for emitting the at least one beam into such medium that is a turbid medium, including but not limited to ocean water, wastewater, fog, clouds, smoke or other particulate suspensions.
45. The system of claim 33, wherein: the light source comprises means for emitting the at least one beam into such medium that is a diffuse medium, including but not limited to foliage at least partially obscuring a landscape.
46. An optical system comprising: a first lenslet array for performing a first optical transformation on an optical beam; and a second lenslet array, in series with the first array, for receiving a transformed beam from the first array and performing a second optical transformation on the transformed beam.
47. The system of claim 46, wherein: one of the arrays comprises image-plane-defining lenslets to define image elements of the beam; and the other array comprises deflecting lenslets to selectively deflect beam elements to reconfigure an image transmitted in the beam.
48. The system of claim 47, wherein: the one of the arrays that defines the image elements is the first array.
49. The system of claim 47, further comprising: means defining an image carried by the beam; and wherein the first array is positioned substantially at a focal plane of the image.
50. The system of claim 49, wherein the image-defining means comprise: a lidar source emitting an excitation beam to a region of interest; and collection optics receiving a reflection of the excitation beam from the region and focusing the reflection at the focal plane.
51. The system of claim 50: wherein the two transformations, considered together, comprise selectively imaging particular components of the beam onto plural slits following the second array; and further comprising means for streaking images from both slits for reimaging at a detector.
52. The system of claim 49, wherein: the first array also relays the image from the focal plane to the second array.
53. The system of claim 52, wherein: the second array is substantially in a plane, said plane being disposed substantially at the relayed image.
54. The system of claim 46, wherein: the two transformations, considered together, comprise selectively imaging particular components of the beam onto plural slits following the second array.
55. A streak lidar imaging system comprising: a light source for emitting a beam; an imaging device for receiving light originating from the source and for forming an image of the received light; at least one microelectromechanical mirror for displacing the image along a streak direction; and an image-responsive element for receiving and responding to the displaced image.
56. The system of claim 55, wherein: the at least one mirror comprises an array of multiple microelectromechanical mirrors.
57. The system of claim 55, for use with an optical medium and wherein: the light source comprises means for emitting the beam into such medium; and the imaging device comprises means for receiving light reflected from such medium and forming an image of the reflected light.
58. The system of claim 55, wherein: the light source comprises a resonant device, and the imaging device comprises means for causing imperfections in resonance of the resonant device to modulate the image.
59. The system of claim 58, particularly for use with a resonant device that comprises a laser; and wherein: the imaging device comprises means for causing imperfections in optical wavefronts from the laser to modulate the image.
60. The system of claim 59, wherein: the causing means comprise means for deflecting elements of said beam in response to local imperfections in said coherence.
61. The system of claim 60, wherein: the deflecting means comprise at least one lenslet array.
1 62. A spatial mapping system for mapping a region; said system 2 comprising: 3 a light source for emitting at least one thin, fan-shaped 4 beam from a moving emission location toward such region, a thin dimension of the beam being oriented generally parallel to a di 6 rection of motion of the emission location; 7 an imaging device for receiving light reflected from such region and forming an image of the reflected light; 9 means for separating the reflected light to form plural re fleeted beam images representing different aspects of such region, 11 respectively; and 12 an image-responsive element for receiving and responding to 13 the plural beam images.
1 63. The system of claim 62, wherein: 2 the separating means comprise means for discriminating be- 3 tween spatially different aspects of such region.
1 64. The system of claim 62, wherein: 2 the separating means comprise means for discriminating be- 3 tween aspects of such region that are carried in portions of the 4 beam received at different angles.
1 65. The system of claim 64, wherein: z the discriminating means comprise means for forming discrete 3 plural reflected beam images from portions of the beam received at 4 different angular ranges, respectively.
1 66. The system Of claim 62, wherein: 2 the separating means comprise means for discrimina'r.g 3 between aspects of such region that are carried in different - 4 polarization states of the beam.
1 67. The system of claim 62, wherein: 2 the separating means comprise means for discriminating 3 between aspects of such region that are carried in different spectral components of the beam.
1 68. The system of claim 62, wherein: 2 the separating means comprise means for discriminating be 3 tween combinations of two or more different aspects of such region 4 that are carried in different characteristics of the beam, at least one of which characteristics is selected from among: 7 spatially different aspects of the beam,
B
g different polarization states of the beam, and 11 different spectral components of the beam.
1 fig. The system of claim 68, wherein: 2 at least two of said characteristics are selected from said 3 spatial, polarization and spectral characteristics.
1 70. The system of claim 62, wherein the emission location is 2 selected from the group consisting of: 3; 4 a spacecraft; 6 an aircraft; & another type of vehicle; and another type of moving platform.
1 71. The system of claim 62, wherein: 2 the emission location is a fixed light source cooperating 3 with a scanning system to provide a moving image of the light 4 source.
1 72. The system of claim 62, wherein: 2 the light source comprises means for emitting the at least 3 one beam into such medium that is a generally clear fluid above a 4 generally hard surface.
1 73. The system of claim 62, wherein: 2 the light source comprises means for emitting the at least 3 one beam into such medium that is a turbid medium, including but 4 not limited to ocean water, w=stewater, fog, clouds, smoke or other particulate suspensions.
1 74. The system of claim 62, wherein: 2 the light source comprises means for emitting the at least 3 one beam into such medium that is a diffuse medium, including but 4 not limited to foliage at least partially obscuring a landscape.
1 75. A spatial mapping system for mapping a region; said system z comprising: 3 a light source for emitting a beam whose cross-section has an 4 aspect ratio on the order of 1:1, from a moving emission location toward such region; 6 an imaging device for receiving light reflected from such 7 region and forming an image of the reflected light; - 8 means for separating the reflected light to form plural re- 9 fleeted beam images representing different aspects of such region, 1O respectively; and 11 an image-responsive element for receiving and responding to 2 the plural beam images.
76. The system of claim 75, wherein: the imaging device comprises means for receiving the reflected light from such region as a reflected beam whose cross-section has an aspect ratio on the order of 1:1.
77. A spectrometric analytical system for analyzing a medium with any objects therein; said system comprising: a light source for emitting substantially at least one pencil beam toward such medium; an imaging device for receiving light reflected from such medium and forming an image of the reflected light; means for separating the reflected light along one dimension to form plural reflected beam images arrayed along said dimension and representing different aspects of the medium, respectively; optical dispersing means for forming a spectrum from at least one of the plural images, by dispersion of the at least one image along a dimension generally orthogonal to the said dimension; and an image-responsive element for receiving and responding to the plural beam images.
78. The system of claim 77, wherein: the dispersing means comprise means for forming a spectrum from each of the plural images, respectively.
79. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing spatially different aspects of the beam, respectively.
80. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing different polarization states of the beam, respectively.
81. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing different spectral constituents of the beam, respectively.
82. The system of claim 77, wherein: the separating means comprise means for separating the reflected light to form plural images representing combinations of two or more different aspects of such medium that are carried in different characteristics of the beam, at least one of which characteristics is selected from among: spatially different aspects of the beam, different polarization states of the beam, and different spectral components of the beam.
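Claims 77–82 describe plural separated beam images arrayed along one detector dimension, with a spectrum dispersed along the orthogonal dimension. The patent does not specify a read-out algorithm; the sketch below is an illustrative assumption of how such a frame could be reduced, with each separated image occupying a band of rows and wavelength running along the columns.

```python
import numpy as np

def extract_spectra(frame: np.ndarray, n_images: int) -> np.ndarray:
    """Split the separation axis (rows) into n_images equal bands and
    integrate each band over rows to recover one spectrum per image."""
    bands = np.array_split(frame, n_images, axis=0)
    return np.stack([band.sum(axis=0) for band in bands])

# Toy detector frame: two separated images, eight wavelength bins each.
frame = np.zeros((4, 8))
frame[0:2, 3] = 5.0   # first image peaks at wavelength bin 3
frame[2:4, 6] = 2.0   # second image peaks at wavelength bin 6
spectra = extract_spectra(frame, n_images=2)
```

The band boundaries and equal-split assumption are hypothetical; a real instrument would calibrate where each separated image falls on the detector.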
83. A wavefront sensor, for evaluating a light beam from an optical source; said sensor comprising: optical components for receiving such beam from such source; optical components for subdividing small portions of such beam to form indicator subbeams that reveal a direction of substantially each of said small portions; optical components for steering the indicator subbeams to fall along at least one slit; means for streaking light that passes through the at least one slit; and means for capturing the streaked light during a streaking duration.
84. The system of claim 83, wherein: the at least one slit comprises plural slits.
85. The system of claim 83, particularly for use with a resonant optical source; and wherein: the receiving and subdividing components comprise means for causing imperfections in optical wavefronts from the resonant source to modify the light that passes through the at least one slit.
86. The system of claim 85, wherein: the receiving, subdividing and steering components comprise at least one lenslet array.
87. The system of claim 86, wherein: the receiving, subdividing and steering components comprise at least two lenslet arrays in optical series.
88. The system of claim 87, wherein the lenslet arrays comprise: one array that defines image elements at or near a focal plane of such beams and another array that receives the image elements relayed from the first array, and that steers light from the image elements to the at least one slit.
89. The system of claim 86, wherein: the receiving, subdividing and steering components comprise at least one lenslet array in optical series with at least one fiber-optic remapping device.
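In claims 83–89 each lenslet's indicator subbeam reveals the local direction of its portion of the wavefront, as in a Shack–Hartmann sensor: the displacement of the focused spot from the subaperture centre is proportional to the local slope divided into the focal length. The patent does not give a computation; the following is a minimal sketch of that centroid-to-tilt step under small-angle assumptions, with all parameter values hypothetical.

```python
import numpy as np

def spot_centroid(sub: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid (row, col) of one lenslet subaperture image."""
    total = sub.sum()
    rows, cols = np.indices(sub.shape)
    return (rows * sub).sum() / total, (cols * sub).sum() / total

def local_tilt(sub: np.ndarray, pixel_pitch: float, focal_length: float) -> tuple[float, float]:
    """Local wavefront slope (radians) from the spot's offset relative to the
    subaperture centre; for small angles, offset = slope * focal_length."""
    r, c = spot_centroid(sub)
    r0 = (sub.shape[0] - 1) / 2
    c0 = (sub.shape[1] - 1) / 2
    return (r - r0) * pixel_pitch / focal_length, (c - c0) * pixel_pitch / focal_length

# Toy 5x5 subaperture: spot displaced one pixel to the right of centre.
sub = np.zeros((5, 5))
sub[2, 3] = 1.0
tilt = local_tilt(sub, pixel_pitch=10e-6, focal_length=5e-3)  # (0.0, 0.002 rad)
```

In the claimed sensor the steered subbeams fall on a slit and are streaked, so such tilts would be read out as positions along the slit rather than on a 2-D spot camera.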
90. A spectrometric analytical system for analyzing a medium with any objects therein; said system comprising: a light source for emitting substantially at least one beam toward such medium; an imaging device for receiving light reflected from such medium and forming plural images of the reflected light; optical or electronic means for streaking the plural images; an image-responsive element for receiving and responding to the plural beam images; and a computer for extracting fluorescence-lifetime information from a signal produced by the image-responsive element.
91. The system of claim 90, wherein the at least one beam comprises: at least one pencil beam.
92. The system of claim 90, wherein the imaging device comprises: a hyperspectral optical system.
93. The system of claim 92, wherein the imaging device further comprises: a plural-wavelength optical system wherein each of plural wavelength bands is arrayed along a length dimension of a respective slit-shaped image.
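Claim 90's computer extracts fluorescence-lifetime information from the streaked signal: the streak axis is a time axis, so the decay of a fluorescent return along that axis encodes the lifetime. The patent leaves the extraction method open; a common and minimal approach, sketched here as an assumption, is a least-squares line fit to the log of a noiseless single-exponential decay.

```python
import numpy as np

def fit_lifetime(t: np.ndarray, intensity: np.ndarray) -> float:
    """Estimate the lifetime tau of I(t) = I0 * exp(-t / tau) by fitting a
    straight line to log-intensity; the slope is -1/tau."""
    slope, _intercept = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope

# Toy streak record: a 2 ns lifetime sampled in 0.1 ns time bins.
t = np.arange(0.0, 10.0, 0.1)       # ns, along the streak (time) axis
trace = 100.0 * np.exp(-t / 2.0)    # noiseless exponential decay
tau = fit_lifetime(t, trace)        # ~2.0 ns
```

With real, noisy streak data a weighted or nonlinear fit (and background subtraction) would replace this log-linear shortcut, since the logarithm amplifies noise in the dim tail.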
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US19991500P | 2000-04-26 | 2000-04-26 | |
GB0227440A GB2380344B (en) | 2000-04-26 | 2001-04-26 | Very fast time resolved imaging in multiparameter measurement space |
Publications (3)
Publication Number | Publication Date |
---|---|
GB0421647D0 GB0421647D0 (en) | 2004-10-27 |
GB2403615A true GB2403615A (en) | 2005-01-05 |
GB2403615B GB2403615B (en) | 2005-02-23 |
Family
ID=33542656
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0421638A Expired - Fee Related GB2403614B (en) | 2000-04-26 | 2001-04-26 | Streak lidar imaging system |
GB0421647A Expired - Fee Related GB2403615B (en) | 2000-04-26 | 2001-04-26 | Streak lidar imaging system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB0421638A Expired - Fee Related GB2403614B (en) | 2000-04-26 | 2001-04-26 | Streak lidar imaging system |
Country Status (1)
Country | Link |
---|---|
GB (2) | GB2403614B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009141622A1 (en) * | 2008-05-21 | 2009-11-26 | Ntnu Technology Transfer As | Underwater hyperspectral imaging |
US8599374B1 (en) | 2012-11-15 | 2013-12-03 | Corning Incorporated | Hyperspectral imaging systems and methods for imaging a remote object |
GB2629029A (en) * | 2023-04-15 | 2024-10-16 | Thursby Jonathan | Underwater hyperspectral imaging |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7710545B2 (en) | 2008-02-13 | 2010-05-04 | The Boeing Company | Scanned laser detection and ranging apparatus |
ES2573955B2 (en) * | 2014-11-11 | 2017-03-16 | Universitat De València | Computer system, method and program for the measurement and analysis of temporary light signals |
CN109991620A (en) * | 2019-04-02 | 2019-07-09 | 哈尔滨工业大学(威海) | The imaging method of streak tube laser imaging radar system based on cathode gating |
US12060148B2 (en) | 2022-08-16 | 2024-08-13 | Honeywell International Inc. | Ground resonance detection and warning system and method |
CN116482677B (en) * | 2023-06-25 | 2023-08-29 | 成都远望科技有限责任公司 | Multi-radar cooperative control scanning scheduling method based on sea fog observation |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998013909A2 (en) * | 1996-09-03 | 1998-04-02 | Stanger, Leo | Energy transmission by laser radiation |
US5786889A (en) * | 1993-05-12 | 1998-07-28 | Pilkington P E Limited | Method of monitoring coalignment of a sighting or surveillance sensor suite |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5467122A (en) * | 1991-10-21 | 1995-11-14 | Arete Associates | Underwater imaging in real time, using substantially direct depth-to-display-height lidar streak mapping |
2001
- 2001-04-26 GB GB0421638A patent/GB2403614B/en not_active Expired - Fee Related
- 2001-04-26 GB GB0421647A patent/GB2403615B/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5786889A (en) * | 1993-05-12 | 1998-07-28 | Pilkington P E Limited | Method of monitoring coalignment of a sighting or surveillance sensor suite |
WO1998013909A2 (en) * | 1996-09-03 | 1998-04-02 | Stanger, Leo | Energy transmission by laser radiation |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009141622A1 (en) * | 2008-05-21 | 2009-11-26 | Ntnu Technology Transfer As | Underwater hyperspectral imaging |
US8502974B2 (en) | 2008-05-21 | 2013-08-06 | Ecotone As | Underwater hyperspectral imaging |
AU2009248487B2 (en) * | 2008-05-21 | 2014-04-10 | Ecotone As | Underwater hyperspectral imaging |
US8767205B2 (en) | 2008-05-21 | 2014-07-01 | Ecotone As | Underwater hyperspectral imaging |
US8599374B1 (en) | 2012-11-15 | 2013-12-03 | Corning Incorporated | Hyperspectral imaging systems and methods for imaging a remote object |
US9200958B2 (en) | 2012-11-15 | 2015-12-01 | Corning Incorporated | Hyperspectral imaging systems and methods for imaging a remote object |
US9267843B2 (en) | 2012-11-15 | 2016-02-23 | Corning Incorporated | Hyperspectral imaging systems and methods for imaging a remote object |
US9341514B2 (en) | 2012-11-15 | 2016-05-17 | Corning Incorporated | Hyperspectral imaging systems and methods for imaging a remote object |
GB2629029A (en) * | 2023-04-15 | 2024-10-16 | Thursby Jonathan | Underwater hyperspectral imaging |
Also Published As
Publication number | Publication date |
---|---|
GB2403615B (en) | 2005-02-23 |
GB0421638D0 (en) | 2004-10-27 |
GB2403614A (en) | 2005-01-05 |
GB2403614B (en) | 2005-02-23 |
GB0421647D0 (en) | 2004-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7227116B2 (en) | Very fast time resolved imaging in multiparameter measurement space | |
US20230305147A1 (en) | Three-dimensional hyperspectral imaging systems and methods using a light detection and ranging (lidar) focal plane array | |
Albota et al. | Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays | |
US7652752B2 (en) | Ultraviolet, infrared, and near-infrared lidar system and method | |
CN102947726B (en) | Scanning 3 D imaging instrument | |
CN101871815B (en) | Programmable polarization hyperspectral imager based on aperture segmentation and acousto-optic tunable filter | |
US10887532B2 (en) | Full field visual-mid-infrared imaging system | |
US11231323B2 (en) | Time-resolved hyper-spectral single-pixel imaging | |
Gleckler | Multiple-slit streak tube imaging lidar (MS-STIL) applications | |
EP3460518B1 (en) | Hybrid lidar-imaging device for aerial surveying | |
CN109425869A (en) | The measuring device for setting range of receiving with scanning function and receiver | |
Grossman et al. | Active millimeter-wave imaging for concealed weapons detection | |
GB2403615A (en) | Eye-safe streak tube imaging lidar | |
Mackay et al. | High-resolution imaging in the visible from the ground without adaptive optics: new techniques and results | |
Jonsson et al. | Experimental evaluation of penetration capabilities of a Geiger-mode APD array laser radar system | |
RU2544305C1 (en) | Laser location system | |
Pierrottet et al. | Characterization of 3-D imaging lidar for hazard avoidance and autonomous landing on the Moon | |
AU2004206520A1 (en) | Ultraviolet, infrared, and near-infrared lidar system and method | |
Liu et al. | Research on a flash imaging lidar based on a multiple-streak tube | |
Reipurth et al. | Hα Emission-Line Stars in Molecular Clouds. I. The NGC 2264 Region | |
AU2007200003A1 (en) | Very fast time resolved imaging in multiparameter measurement space | |
Amiaux et al. | Euclid imaging channels: from science to system requirements | |
RU2489804C2 (en) | Optical-electronic system for remote aerial radiological survey | |
Powell | The development of the CALIPSO lidar simulator | |
Kinder et al. | Ranging-imaging spectrometer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PCNP | Patent ceased through non-payment of renewal fee |
Effective date: 20090426 |